Search

Results include substring matches and semantically similar vals.

gpt3
@patrickjm
OpenAI text completion (https://platform.openai.com/docs/api-reference/completions). Val Town has generously provided a free daily quota; until the quota is met, there is no need to provide an API key. To see whether the quota has been met, you can run @patrickjm.openAiFreeQuotaExceeded(). For full REST API access, see @patrickjm.openAiTextCompletion.
RPC (deprecated)
import { trackOpenAiFreeUsage } from "https://esm.town/v/patrickjm/trackOpenAiFreeUsage";
import { openAiTextCompletion } from "https://esm.town/v/patrickjm/openAiTextCompletion";
import { openAiModeration } from "https://esm.town/v/patrickjm/openAiModeration";
import { openAiFreeQuotaExceeded } from "https://esm.town/v/patrickjm/openAiFreeQuotaExceeded";
import { openAiFreeUsageConfig } from "https://esm.town/v/patrickjm/openAiFreeUsageConfig";
/**
 * OpenAI text completion. https://platform.openai.com/docs/api-reference/completions
 * To see if the quota has been met, you can run @patrickjm.openAiFreeQuotaExceeded()
 * For full REST API access, see @patrickjm.openAiTextCompletion
 */
export let gpt3 = async (params: { // function wrapper reconstructed; its exact shape is assumed
  // ...
  openAiKey?: string,
}) => {
  // Use the caller's key if given, otherwise fall back to the shared free-quota key
  const apiKey = params.openAiKey ?? openAiFreeUsageConfig.key;
  // ...
};
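
A minimal usage sketch (the import path follows the val's esm.town URL; the prompt parameter name and the string return value are assumptions, since only the openAiKey parameter is shown above):

import { gpt3 } from "https://esm.town/v/patrickjm/gpt3";
import { openAiFreeQuotaExceeded } from "https://esm.town/v/patrickjm/openAiFreeQuotaExceeded";

// Check whether the shared free daily quota has already been used up
if (await openAiFreeQuotaExceeded()) {
  console.log("Free quota exceeded; pass your own openAiKey.");
}

// While the free quota lasts, no openAiKey is needed (prompt field name assumed)
const text = await gpt3({ prompt: "Write a haiku about Val Town" });
console.log(text);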

gpt4TurboExample
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import { OpenAI } from "npm:openai";
const openai = new OpenAI();
// model name assumed from the val's title (gpt4TurboExample)
const chatCompletion = await openai.chat.completions.create({
  model: "gpt-4-turbo",
  messages: [{ role: "user", content: "Teach me a word I don't know" }],
});
console.log(chatCompletion.choices[0].message.content);

model
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
import { ChatOpenAI } from "langchain/chat_models/openai";

const model = new ChatOpenAI({
  temperature: 0.9,
  openAIApiKey: @me.secrets.OPENAI_API_KEY, // legacy Val Town secrets syntax
});

return model.invoke("What is your name?");

aigreeting
@victorli
An interactive, runnable TypeScript val by victorli
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user", content: "Greet the visitor warmly." }], // prompt and model are illustrative
    model: "gpt-4o-mini",
  });
  return new Response(completion.choices[0].message.content);
}

tanDinosaur
@stevekrouse
// let's ask openai's new gpt-4o model to tell us a joke
Script
// let's ask openai's new gpt-4o model to tell us a joke
import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat("Tell me a joke", { max_tokens: 50, model: "gpt-4o" });
console.log(content);

tomatoMinnow
@nicoalbanese
Vercel AI SDK with Val Town! Use the Vercel AI SDK in your Vals. Note: you must add your OpenAI key to your Val Town Env variables (https://www.val.town/settings/environment-variables) under OPENAI_API_KEY. If you would like to specify a different name for your API key, you can create a custom OpenAI provider with the createOpenAI function. Prefer another AI provider? Use any supported provider (https://sdk.vercel.ai/providers/ai-sdk-providers) by changing just two lines of code!
HTTP
import { openai } from "npm:@ai-sdk/openai";
import { StreamingTextResponse, streamText } from "npm:ai";
export default async function(req: Request): Promise<Response> {
  const result = await streamText({
    model: openai("gpt-4o"),
    prompt: "Generate a fast recipe for Lasagna.",
  });
  return new StreamingTextResponse(result.toAIStream());
}
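
If your key is stored under a different name, a custom provider can be created instead; a minimal sketch, assuming a hypothetical MY_OPENAI_KEY env variable (only the provider setup changes, the streamText call stays the same):

import { createOpenAI } from "npm:@ai-sdk/openai";
import { streamText } from "npm:ai";

// Custom provider reading the key from an env variable of your choosing
const openai = createOpenAI({ apiKey: Deno.env.get("MY_OPENAI_KEY") });

const result = await streamText({
  model: openai("gpt-4o"),
  prompt: "Generate a fast recipe for Lasagna.",
});

Switching to another supported provider works the same way: swap the provider import and the model() line, and the rest of the code stays unchanged.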

chat
@webup
An interactive, runnable TypeScript val by webup
Script
// The exported signature and the prompt-to-messages wrapping below are assumptions
export const chat = async (prompt: string | any[], options = {}) => {
  // Initialize OpenAI API stub
  const { Configuration, OpenAIApi } = await import(
    "https://esm.sh/openai@3.3.0"
  );
  const configuration = new Configuration({
    apiKey: process.env.OPENAI,
  });
  const openai = new OpenAIApi(configuration);
  // Request chat completion
  const messages = typeof prompt === "string"
    ? [{ role: "user", content: prompt }]
    : prompt;
  const { data } = await openai.createChatCompletion({
    model: "gpt-3.5-turbo-0613",
    messages,
    ...options,
  });
  return data.choices[0].message?.content;
};
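
A one-line usage sketch, assuming the exported chat(prompt, options) signature reconstructed above:

import { chat } from "https://esm.town/v/webup/chat";

// A plain string is wrapped into a single user message; options merge into the API call
const reply = await chat("Name three uses for a paperclip.", { temperature: 0.7 });
console.log(reply);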

spotlessMagentaSilverfish
@Pushpam
An interactive, runnable TypeScript val by Pushpam
Script
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [

aiImageExample
@yawnxyz
// Function to handle image and text input using OpenAI's GPT-4-turbo
Script
import { ModelProvider, modelProvider } from "https://esm.town/v/yawnxyz/ai";
import { z } from "npm:zod";

// Function to handle image and text input using OpenAI's GPT-4-turbo
async function handleImageChat() {
  const initialMessages = [
    // ...
  ];
  const response = await modelProvider.gen({
    model: "gpt-4-turbo",
    provider: "openai",
    messages: [
      ...initialMessages,
      // ...
    ],
  });
}
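
For reference, an image-plus-text user message in the OpenAI chat format looks like the sketch below; whether modelProvider.gen expects exactly this shape is an assumption here:

const imageMessage = {
  role: "user",
  content: [
    { type: "text", text: "What is in this image?" },
    { type: "image_url", image_url: { url: "https://example.com/photo.jpg" } },
  ],
};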

openAIStreamingExample
@maxm
An interactive, runnable TypeScript val by maxm
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
stream: true,
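
A minimal sketch of how a val like this can forward the completion stream to the client, assuming the std/openai wrapper yields the same async-iterable chunks as the official SDK (prompt and model are illustrative):

import { OpenAI } from "https://esm.town/v/std/openai";

export default async function(req: Request): Promise<Response> {
  const openai = new OpenAI();
  const stream = await openai.chat.completions.create({
    stream: true,
    messages: [{ role: "user", content: "Write a haiku about streaming." }],
    model: "gpt-4o-mini",
  });
  // Re-emit each delta as it arrives so the client sees tokens incrementally
  const body = new ReadableStream({
    async start(controller) {
      const encoder = new TextEncoder();
      for await (const chunk of stream) {
        controller.enqueue(encoder.encode(chunk.choices[0]?.delta?.content ?? ""));
      }
      controller.close();
    },
  });
  return new Response(body, { headers: { "Content-Type": "text/plain; charset=utf-8" } });
}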

tealBadger
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
stream: true,

solicitousAmethystMeadowlark
@youtubefree
An interactive, runnable TypeScript val by youtubefree
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
status: 204,
const openai = new OpenAI();
try {
model: "gpt-4-turbo",
const stream = await openai.chat.completions.create(body);
if (!body.stream) {
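
The excerpted lines above suggest a proxy-style val: answer CORS preflight with 204, forward the client's JSON body to chat.completions.create, and return plain JSON when streaming is not requested. A sketch of that pattern (the default model placement and the error handling are assumptions):

import { OpenAI } from "https://esm.town/v/std/openai";

export default async function(req: Request): Promise<Response> {
  // Answer CORS preflight requests with an empty 204
  if (req.method === "OPTIONS") {
    return new Response(null, { status: 204, headers: { "Access-Control-Allow-Origin": "*" } });
  }
  const openai = new OpenAI();
  try {
    const body = await req.json();
    body.model ??= "gpt-4-turbo"; // default model as in the excerpt; exact placement assumed
    const stream = await openai.chat.completions.create(body);
    if (!body.stream) {
      // Non-streaming request: return the whole completion as JSON
      return Response.json(stream);
    }
    // For streaming requests, forward the chunks as in the streaming example above
    return new Response("streaming omitted in this sketch", { status: 501 });
  } catch (e) {
    return new Response(String(e), { status: 500 });
  }
}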

deftRedRoadrunner
@jierui
An interactive, runnable TypeScript val by jierui
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
status: 204,
const openai = new OpenAI();
try {
model: "gpt-4-turbo",
const stream = await openai.chat.completions.create(body);
if (!body.stream) {

openaiUploadFile
@yawnxyz
An interactive, runnable TypeScript val by yawnxyz
Script
import { fetch } from "https://esm.town/v/std/fetch";
export async function openaiUploadFile({ key, data, filename = "data.json", purpose = "assistants" }: {
  key: string;
  data: unknown; filename?: string; purpose?: string; // types for the omitted fields are assumed
}) {
  const file = new File([typeof data === "string" ? data : JSON.stringify(data)], filename);
  const formData = new FormData();
  formData.append("purpose", purpose);
  formData.append("file", file, filename);
  let result = await fetch("https://api.openai.com/v1/files", {
    method: "POST",
    headers: { Authorization: `Bearer ${key}` },
    body: formData,
  }).then((r) => r.json());
  if (result.error)
    throw new Error("OpenAI Upload Error: " + result.error.message);
  else
    return result;
}
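
A usage sketch, assuming the function resolves to the parsed Files API response (the env variable name and payload are illustrative):

import { openaiUploadFile } from "https://esm.town/v/yawnxyz/openaiUploadFile";

// Upload a small JSON payload for use with the Assistants API
const uploaded = await openaiUploadFile({
  key: Deno.env.get("OPENAI_API_KEY")!,
  data: { hello: "world" },
  filename: "hello.json",
  purpose: "assistants",
});
console.log(uploaded.id); // OpenAI file id, e.g. "file-..."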

worthyChocolateDog
@llqhyk777
An interactive, runnable TypeScript val by llqhyk777
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
status: 204,
const openai = new OpenAI();
try {
model: "gpt-4-turbo",
const stream = await openai.chat.completions.create(body);
if (!body.stream) {