Search

Results include substring matches and semantically similar vals.
comfortableOrangeTyrannosaurus
@arthrod
An interactive, runnable TypeScript val by arthrod
Script
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
"messages": [
openaiUploadFile
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import { fetch } from "https://esm.town/v/std/fetch";
export async function openaiUploadFile({ key, data, purpose = "assistants" }: {
key: string;
formData.append("file", file, "data.json");
let result = await fetch("https://api.openai.com/v1/files", {
method: "POST",
if (result.error)
throw new Error("OpenAI Upload Error: " + result.error.message);
else
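Pieced together from the fragments, a self-contained sketch of this upload helper might read as follows. The response parsing and error shape are assumptions based on OpenAI's Files API returning `{ error: { message } }` on failure, and the global `fetch` stands in for `std/fetch`:

```typescript
// Sketch of the truncated helper: upload JSON data to OpenAI's Files API as
// multipart form data.
export async function openaiUploadFile(
  { key, data, purpose = "assistants" }: { key: string; data: unknown; purpose?: string },
) {
  const file = new Blob([JSON.stringify(data)], { type: "application/json" });
  const formData = new FormData();
  formData.append("purpose", purpose);
  formData.append("file", file, "data.json");
  const result = await fetch("https://api.openai.com/v1/files", {
    method: "POST",
    headers: { Authorization: `Bearer ${key}` },
    body: formData,
  }).then((r) => r.json());
  if (result.error) {
    throw new Error("OpenAI Upload Error: " + result.error.message);
  }
  return result; // the created file object, e.g. { id: "file-...", purpose, ... }
}
```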
langchainEx
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
export const langchainEx = (async () => {
const { OpenAI } = await import("https://esm.sh/langchain/llms/openai");
const { PromptTemplate } = await import("https://esm.sh/langchain/prompts");
const { LLMChain } = await import("https://esm.sh/langchain/chains");
const model = new OpenAI({
temperature: 0.9,
openAIApiKey: process.env.openai,
maxTokens: 100,
chatGPT
@stevekrouse
HTTP
Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events.
**⚠️ Note: Requires your own OpenAI API key to get this to run in a fork**
import OpenAI from "npm:openai";
document.getElementById("input").addEventListener("submit", function(event) {
eventSource.onmessage = function(event) {
eventSource.onerror = function() {
const openai = new OpenAI();
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
await openai.beta.threads.messages.create(
const run = openai.beta.threads.runs.stream(threadId, {
superiorHarlequinUrial
@junkerman2004
An interactive, runnable TypeScript val by junkerman2004
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
if (req.method === "OPTIONS") {
status: 204,
const openai = new OpenAI();
try {
model: "gpt-4-turbo",
const stream = await openai.chat.completions.create(body);
if (!body.stream) {
fondGrayRoadrunner
@tsuchi_ya
An interactive, runnable TypeScript val by tsuchi_ya
Script
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
chat
@weaverwhale
Script
# OpenAI ChatGPT helper function
This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free.

Usage with a plain string prompt:

import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat("Hello, GPT!");
console.log(content);

Usage with a message array and options:

import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat(
  [
    { role: "system", content: "You are Alan Kay" },
    { role: "user", content: "What is the real computer revolution?" },
  ],
  { max_tokens: 50, model: "gpt-4" },
);
console.log(content);
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";
async function getOpenAI() {
if (Deno.env.get("OPENAI_API_KEY") === undefined) {
const { OpenAI } = await import("https://esm.town/v/std/openai");
return new OpenAI();
const { OpenAI } = await import("npm:openai");
return new OpenAI();
* Initiates a chat conversation with OpenAI's GPT model and retrieves the content of the first response.
* This function can handle both single string inputs and arrays of message objects.
export async function chat(
chat
@bluemsn
An interactive, runnable TypeScript val by bluemsn
Script
options = {},
// Initialize OpenAI API stub
const { OpenAI } = await import(
"https://esm.sh/openai"
const openai = new OpenAI();
const messages = typeof prompt === "string"
: prompt;
const completion = await openai.chat.completions.create({
messages: messages,
PersonalizationGPT
@mjweaver01
HTTP
Use GPT to return JIT personalization for client side applications.
If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
import { Hono } from "npm:hono";
import { OpenAI } from "npm:openai";
const defaultUser = {
const personalizationGPT = async (user: UserObject) => {
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [
chatgptchess
@tmcw
ChatGPT Chess. Inspired by all this hubbub about chess weirdness, this val lets you play chess against ChatGPT-4. Expect some "too many requests" hiccups along the way. ChatGPT gets pretty bad at making valid moves after the first 10 or so exchanges. This val lets it retry up to 5 times to make a valid move, but if it can't, it can't.
HTTP
import { OpenAI } from "https://esm.town/v/std/openai?v=5"
import { sqlite } from "https://esm.town/v/std/sqlite?v=6"
let game = new Chess(position)
function onDragStart(source, piece, position, orientation) {
// do not pick up pieces if the game is over
return false
function onDrop(source: string, target: string) {
// see if the move is legal
// for castling, en passant, pawn promotion
function onSnapEnd() {
board.position(game.fen())
function updateStatus() {
var status = ""
<div class='p-4'>
<h2 class='font-bold'>OpenAI Chess</h2>
<p class='pb-4'>Play chess against ChatGPT-4</p>
chess.move(san)
const openai = new OpenAI()
let messages = []
args: [c.req.param().id, `Requesting response to ${san}`],
const completion = await openai.chat.completions.create({
messages: [
valTownChatGPT
@maxm
HTTP
Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events.
import OpenAI from "npm:openai";
// you fork this. We'll be adding support to the std/openai lib soon!
const openai = new OpenAI();
document.getElementById("input").addEventListener("submit", async function(event) {
// Setup the SSE connection and stream back the response. OpenAI handles determining
eventSource.onmessage = function(event) {
eventSource.onerror = function() {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
await openai.beta.threads.messages.create(
oliveButterfly
@yawnxyz
An interactive, runnable TypeScript val by yawnxyz
Script
import { fetch } from "https://esm.town/v/std/fetch";
export async function openaiUploadFile({ key, data, purpose = "assistants" }: {
key: string;
formData.append("file", file, "data.json");
let result = await fetch("https://api.openai.com/v1/files", {
method: "POST",
if (result.error)
throw new Error("OpenAI Upload Error: " + result.error.message);
else
GDI_AIChatCompletionService
@rozek
This val is part of a series of examples to introduce "val.town" in my computer science course at Stuttgart University of Applied Sciences. The idea is to motivate even first-semester students not to wait but to put their ideas into practice from the very beginning and implement web apps with frontend and backend. It contains a simple HTTP endpoint which expects a POST request with a JSON structure containing the properties "SystemMessage" and "UserMessage". These messages are then used to run an OpenAI chat completion and produce an "assistant message" which is sent back to the client as plain text. This val is the companion of https://rozek-gdi_aichatcompletion.web.val.run/ which contains the browser part (aka "frontend") for this example. The code was created using Townie - with only a few small manual corrections. This val is licensed under the MIT License.
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function (req: Request): Promise<Response> {
if (req.method !== "POST") {
return new Response("Bad Request: Invalid input", { status: 400 });
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
model: "gpt-4o-mini",
weatherGPT
@stevekrouse
Cron
import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";
let location = "brooklyn ny";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
console.log(text);
export async function weatherGPT() {
await email({ subject: "Weather Today", text });
chatSampleFunctionSingle
@webup
An interactive, runnable TypeScript val by webup
Script
export const chatSampleFunctionSingle = (async () => {
// Example dummy function hard coded to return the same weather
// Step 1: send the conversation and available functions to GPT
const functions = [schemasWeather[0]];
functions,
function_call: "auto", // auto is default, but we'll be explicit
// Step 2: Check if GPT wanted to call a function
// Step 3: Call the function
if (!functions)
// Step 4: Send the info on the function call and function response to GPT