# OpenAI

Use OpenAI's chat completion API with `std/openai`. This integration enables access to OpenAI's language models without needing to acquire API keys. For free Val Town users, all calls are sent to `gpt-4o-mini`.

## Basic Usage

```ts
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4",
  max_tokens: 30,
});

console.log(completion.choices[0].message.content);
```
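Because the client mirrors the official OpenAI chat completions interface, you can pass a longer message history the same way. Here is a minimal sketch; the system prompt and conversation below are illustrative, not part of the standard library:

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

// A multi-turn conversation: earlier messages give the model context
// for the final user question. All content here is illustrative.
const completion = await openai.chat.completions.create({
  messages: [
    { role: "system", content: "You are a terse assistant. Answer in one sentence." },
    { role: "user", content: "What is Val Town?" },
    { role: "assistant", content: "Val Town is a platform for writing and running small TypeScript functions." },
    { role: "user", content: "How do I call OpenAI from it?" },
  ],
  model: "gpt-4o-mini",
  max_tokens: 60,
});

console.log(completion.choices[0].message.content);
```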
## Images

To send an image to ChatGPT, the easiest approach is to convert it to a data URL, which you can do with @stevekrouse/fileToDataURL.

```ts
import { fileToDataURL } from "https://esm.town/v/stevekrouse/fileToDataURL";
// `file` is assumed to be a File or Blob, e.g. from a form upload
const dataURL = await fileToDataURL(file);

// `chat` here refers to a helper (not shown) that wraps chat.completions.create;
// you can also call the std/openai client directly with the same messages
const response = await chat([
  {
    role: "system",
    content: `You are a nutritionist.
Estimate the calories.
We only need a VERY ROUGH estimate.
Respond ONLY in a JSON array with values conforming to: {ingredient: string, calories: number}
`,
  },
  {
    role: "user",
    content: [{
      type: "image_url",
      image_url: {
        url: dataURL,
      },
    }],
  },
], {
  model: "gpt-4o",
  max_tokens: 200,
});
```
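If you prefer not to rely on a separate `chat` helper, you can make the same request with the `std/openai` client directly. This is a sketch that assumes the wrapper accepts the same multimodal message format as the official client; `dataURL` is the data URL produced above, and the shortened prompt is illustrative:

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

// `dataURL` is the data URL produced by fileToDataURL above
const completion = await openai.chat.completions.create({
  model: "gpt-4o",
  max_tokens: 200,
  messages: [
    {
      role: "system",
      content: "Estimate the calories in this dish. Respond ONLY as a JSON array of {ingredient: string, calories: number}.",
    },
    {
      role: "user",
      content: [
        { type: "image_url", image_url: { url: dataURL } },
      ],
    },
  ],
});

console.log(completion.choices[0].message.content);
```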
## Limits

While our wrapper simplifies the integration of OpenAI, there are a few limitations to keep in mind:

- **Usage Quota**: We limit each user to 10 requests per minute.
- **Features**: Chat completions is the only endpoint available.

If these limits are too low, let us know! You can also get around the limitation by using your own keys:

1. Create your own API key on OpenAI's website
2. Create an environment variable named `OPENAI_API_KEY`
3. Use the OpenAI client from `npm:openai`:

```ts
import { OpenAI } from "npm:openai";
const openai = new OpenAI();
```
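With your own key, `new OpenAI()` picks up `OPENAI_API_KEY` from the environment automatically. If you want to pass it explicitly, a sketch like this should work (Val Town exposes environment variables through `Deno.env`):

```ts
import { OpenAI } from "npm:openai";

// Equivalent to the snippet above, but passing the key explicitly.
// Deno.env.get reads the environment variable created in step 2.
const openai = new OpenAI({
  apiKey: Deno.env.get("OPENAI_API_KEY"),
});

const completion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Say hello" }],
  model: "gpt-4o-mini",
  max_tokens: 30,
});

console.log(completion.choices[0].message.content);
```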