
ChatGPT API examples & templates

Use these vals as a playground to view and fork ChatGPT API examples and templates on Val Town. Run any example below, or find templates that can be used as pre-built solutions.
tarekr93
chatgptPromptWebsite
HTTP
@jsxImportSource https://esm.sh/react@18.2.0
tmcw
chatgptchess
HTTP
ChatGPT Chess

Inspired by all this hubbub about chess weirdness, this val lets you play chess against ChatGPT 4. Expect some "too many requests" hiccups along the way. ChatGPT gets pretty bad at making valid moves after the first 10 or so exchanges. This lets it retry up to 5 times to make a valid move, but if it can't, it can't.
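A minimal sketch of that retry loop, assuming the std/openai wrapper, chess.js for move validation, and SAN-encoded moves (the actual val's prompts, model choice, and parsing may differ):

```ts
// Sketch: ask the model for a move and retry until chess.js accepts it.
// Assumes std/openai and npm:chess.js; the real val's prompt and move encoding may differ.
import { OpenAI } from "https://esm.town/v/std/openai";
import { Chess } from "npm:chess.js";

export async function nextAIMove(game: Chess, maxRetries = 5): Promise<string | null> {
  const openai = new OpenAI();
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const completion = await openai.chat.completions.create({
      model: "gpt-4o", // assumed model
      max_tokens: 10,
      messages: [
        { role: "system", content: "You are playing chess. Reply with a single legal move in SAN, nothing else." },
        { role: "user", content: `Position (FEN): ${game.fen()}. Your move:` },
      ],
    });
    const candidate = completion.choices[0].message.content?.trim() ?? "";
    try {
      // chess.js throws on an illegal move (older versions return null instead)
      if (game.move(candidate)) return candidate;
    } catch {
      // illegal or unparsable move; loop and ask again
    }
  }
  return null; // gave up after maxRetries invalid moves
}
```

A caller would apply the player's move to a Chess instance first, then call nextAIMove(game) and treat a null result as the model giving up.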
willthereader
ChatGPTTextDefinitionUserscript
Script
// @name Improved ChatGPT Text Definition with Follow-up Questions
fadi
promptChatGPT
Script
An interactive, runnable TypeScript val by fadi
gtrufitt
handleChatGPTRequest
Script
An interactive, runnable TypeScript val by gtrufitt
stevekrouse
openai
Script
OpenAI ChatGPT helper function

This val uses your OpenAI token if you have one, and @std/openai if not, so it provides limited OpenAI usage for free.

```ts
import { chat } from "https://esm.town/v/stevekrouse/openai";

const { content } = await chat("Hello, GPT!");
console.log(content);
```

```ts
import { chat } from "https://esm.town/v/stevekrouse/openai";

const { content } = await chat(
  [
    { role: "system", content: "You are Alan Kay" },
    { role: "user", content: "What is the real computer revolution?" },
  ],
  { max_tokens: 50, model: "gpt-4o" },
);
console.log(content);
```
maxdrake
chatGPTExample
Script
An interactive, runnable TypeScript val by maxdrake
maxm
selfEditingWebsiteGPT
HTTP
Forked from maxm/selfEditingWebsite
priyam
Priyam28Gpt
HTTP
@jsxImportSource https://esm.sh/react@18.2.0
ktodaz
getGPTResponse
Script
Get a response from GPT for the player.
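The val's source isn't shown on this listing; as a rough, hypothetical sketch of the pattern such a helper usually follows (the function signature, prompt, and model are all assumptions, built on the std/openai wrapper documented further down this page):

```ts
// Hypothetical sketch only: the real getGPTResponse val's prompt and options are not shown here.
import { OpenAI } from "https://esm.town/v/std/openai";

export async function getGPTResponse(playerMessage: string): Promise<string> {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // assumed; the free std/openai tier routes here anyway
    max_tokens: 150,
    messages: [
      { role: "system", content: "You are the game master. Respond to the player in a short paragraph." },
      { role: "user", content: playerMessage },
    ],
  });
  return completion.choices[0].message.content ?? "";
}
```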
bingo16
newChatGPT35
Script
An interactive, runnable TypeScript val by bingo16
stevekrouse
cerebrasTemplate
HTTP
Cerebras Inference template

This val shows you how you can deploy an app using Cerebras Inference on Val Town in seconds.

What is Cerebras?

Cerebras is an American chip manufacturer that produces large wafer chips that deliver mind-blowing LLM inference speeds. As of this writing on Jan 17, 2025, Cerebras Inference provides Llama 3.1 8b, 3.1 70b, and 3.3 70b at a jaw-dropping 2k tokens per second – that's 50x faster than what the frontier labs produce. Llama 3.3 70b at 2k tokens per second is particularly noteworthy because it is a GPT-4-class model. This level of intelligence at that level of speed will unlock whole new classes of applications.

Quick start

Set up Cerebras:
1. Sign up for Cerebras
2. Get a Cerebras API key
3. Save it in a Val Town environment variable called CEREBRAS_API_KEY

Once Cerebras is set up in your Val Town account, there are two ways to get started:
1. Fork this app and customize it (or ask Townie AI to customize it)
2. Start a new chat with Townie AI and copy & paste the following instructions:

Use Cerebras for AI on the backend like so:

```ts
const { OpenAI } = await import("https://esm.sh/openai");

const client = new OpenAI({
  apiKey: Deno.env.get("CEREBRAS_API_KEY"),
  baseURL: "https://api.cerebras.ai/v1",
});

const response = await client.chat.completions.create({
  model: "llama-3.3-70b",
  messages: [],
});

const generatedText = response.choices[0].message.content;
```

For example, the val in this template was created by asking Townie AI to "Make a chatgpt clone", hitting shift-enter twice, pasting in the Cerebras instructions from above, and hitting enter. Townie built this app on its first try, in about 20 seconds.

Sample apps:
- Cerebras Searcher - a Perplexity clone that uses the SerpAPI to do RAG and summaries with Cerebras (requires a SerpAPI key)
- Cerebras Coder - an app that generates websites in a second with Cerebras
- Cerebras Debater - an app that truly shows Cerebras's speed: it's Cerebras talking to Cerebras in a debate
chatgpt
chat
Script
Forked from webup/chat
bingo16
getChatgpt
Script
An interactive, runnable TypeScript val by bingo16
std
openai
Script
OpenAI - Docs

Use OpenAI's chat completion API with std/openai. This integration enables access to OpenAI's language models without needing to acquire API keys. For free Val Town users, all calls are sent to gpt-4o-mini.

Basic usage

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4",
  max_tokens: 30,
});

console.log(completion.choices[0].message.content);
```

Images

To send an image to ChatGPT, the easiest way is by converting it to a data URL, which is easiest to do with @stevekrouse/fileToDataURL.

```ts
import { chat } from "https://esm.town/v/stevekrouse/openai";
import { fileToDataURL } from "https://esm.town/v/stevekrouse/fileToDataURL";

// `file` is assumed to be a File or Blob, e.g. from a form upload
const dataURL = await fileToDataURL(file);
const response = await chat([
  {
    role: "system",
    content: `You are a nutritionist. Estimate the calories.
We only need a VERY ROUGH estimate.
Respond ONLY in a JSON array with values conforming to: {ingredient: string, calories: number}`,
  },
  {
    role: "user",
    content: [{
      type: "image_url",
      image_url: { url: dataURL },
    }],
  },
], {
  model: "gpt-4o",
  max_tokens: 200,
});
```

Limits

While our wrapper simplifies the integration of OpenAI, there are a few limitations to keep in mind:

- Usage quota: we limit each user to 10 requests per minute.
- Features: chat completions is the only endpoint available.

If these limits are too low, let us know! You can also get around the limitation by using your own keys:

1. Create your own API key on OpenAI's website
2. Create an environment variable named OPENAI_API_KEY
3. Use the OpenAI client from npm:openai:

```ts
import { OpenAI } from "npm:openai";

const openai = new OpenAI();
```
stevekrouse
chatGPT
HTTP
Forked from maxm/valTownChatGPT