Search
legalAssistant
@websrai
/** @jsxImportSource https://esm.sh/react@18.2.0 */
HTTP
if (request.method === "POST") {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const { query } = await request.json();
Query: ${query}`;
const completion = await openai.chat.completions.create({
messages: [{ role: "user", content: legalPrompt }],
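Based on the fragments above, a minimal sketch of what this handler might look like end to end; the prompt wording, model, and response shape are assumptions, not the original code:

```ts
export default async function handler(request: Request): Promise<Response> {
  if (request.method === "POST") {
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    const openai = new OpenAI();
    const { query } = await request.json();
    // Hypothetical prompt; only the trailing `Query: ${query}` line appears in the original.
    const legalPrompt = `You are a legal research assistant. Answer concisely.
Query: ${query}`;
    const completion = await openai.chat.completions.create({
      messages: [{ role: "user", content: legalPrompt }],
      model: "gpt-4o-mini", // model assumed
    });
    return Response.json({ answer: completion.choices[0].message.content });
  }
  return new Response("Method not allowed", { status: 405 });
}
```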
kindness
@nate
An interactive, runnable TypeScript val by nate
Script
import { gpt3 } from "https://esm.town/v/patrickjm/gpt3"; // helper import assumed from context

export let kindness = async () => {
  return await gpt3({
    openAiKey: process.env.OPENAI_API_KEY,
    prompt:
      "Speaking as universal consciousness, say something short, true, uplifting, loving, and kind.",
  });
};
generateImageSummary
@jackd
An interactive, runnable TypeScript val by jackd
Script
import { OpenAI } from "https://esm.town/v/std/openai";
import { fileToDataURL } from "https://esm.town/v/stevekrouse/fileToDataURL";
export async function generateImageSummary(image: File): Promise<string[]> {
const openai = new OpenAI();
const dataURL = await fileToDataURL(image);
const response = await openai.chat.completions.create({
model: "gpt-4-vision-preview",
weatherGPT
@varun_balani
Cron
If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "npm:openai";
let location = "manipal";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
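Piecing the fragments together, the val appears to fetch a weather forecast and have OpenAI write it up. A sketch under those assumptions (the weather API, prompt, and model are guesses):

```ts
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "npm:openai";

let location = "manipal";
// wttr.in is an assumption; the original's weather source is not shown.
const weather = await fetch(
  `https://wttr.in/${encodeURIComponent(location)}?format=j1`,
).then(r => r.json());

const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
  messages: [{
    role: "user",
    content: `Write a short, friendly weather report for ${location} from this data: ${
      JSON.stringify(weather.current_condition)
    }`,
  }],
  model: "gpt-3.5-turbo", // model assumed
});
console.log(chatCompletion.choices[0].message.content);
```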
debugValEmbeddings
@janpaul123
An interactive, runnable TypeScript val by janpaul123
Script
import _ from "npm:lodash";
import OpenAI from "npm:openai";
const openai = new OpenAI();
const queryEmbedding = (await openai.embeddings.create({
model: "text-embedding-3-small",
console.log(queryEmbedding.slice(0, 4));
const embedding = await openai.embeddings.create({
model: "text-embedding-3-small",
console.log("Hash is the same, no email sent.", { dynamiclandWebsiteHash });
const queryEmbeddingVal = (await openai.embeddings.create({
model: "text-embedding-3-small",
chat
@kirineko
An interactive, runnable TypeScript val by kirineko
Script
options = {},
// Initialize OpenAI API stub
const { Configuration, OpenAIApi } = await import(
  "https://esm.sh/openai@3.3.0"
);
console.log(process.env);
// const configuration = new Configuration({
// apiKey: process.env.OPENAI,
// const openai = new OpenAIApi(configuration);
// // Request chat completion
// : prompt;
// const { data } = await openai.createChatCompletion({
// model: "gpt-3.5-turbo-0613",
apricotTurkey
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import { OpenAI } from "https://esm.town/v/std/openai?v=2";
const openai = new OpenAI();
const functionExpression = await openai.chat.completions.create({
"messages": [
WriterOptions
@nbbaier
An interactive, runnable TypeScript val by nbbaier
Script
import { type ClientOptions } from "npm:openai";

export interface WriterOptions extends ClientOptions {
  model?: string;
}
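Hypothetical usage, assuming the interface is passed to a writer that wraps the OpenAI client:

```ts
const options: WriterOptions = {
  apiKey: Deno.env.get("OPENAI_API_KEY"), // standard ClientOptions field
  model: "gpt-4o", // the extra field added by WriterOptions; value illustrative
};
```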
semanticSearchBlobs
@janpaul123
Script
*Part of [Val Town Semantic Search](https://www.val.town/v/janpaul123/valtownsemanticsearch).*

Uses Val Town's blob storage to search embeddings of all vals, by downloading them all and iterating through all of them to compute distance. Slow and terrible, but it works!

- Get metadata from blob storage: `allValsBlob${dimensions}EmbeddingsMeta` (currently `allValsBlob1536EmbeddingsMeta`), which has a list of all indexed vals and where their embedding is stored (`batchDataIndex` points to the blob, and `valIndex` represents the offset within the blob). The blobs have been generated by `janpaul123/indexValsBlobs`, which is not run automatically.
- Get all blobs with embeddings pointed to by the metadata, e.g. `allValsBlob1536EmbeddingsData_0` for `batchDataIndex` 0.
- Call OpenAI to generate an embedding for the search query.
- Go through all embeddings and compute cosine similarity with the embedding for the search query.
- Return a list sorted by similarity.
import _ from "npm:lodash";
import OpenAI from "npm:openai";
const dimensions = 1536;
await Promise.all(allBatchDataIndexesPromises);
const openai = new OpenAI();
const queryEmbedding = (await openai.embeddings.create({
model: "text-embedding-3-small",
OpenAIUsage
@std
Script
# OpenAI Proxy Metrics
We write OpenAI usage data to an `openai_usage` sqlite table. This script val is imported into the OpenAI proxy. Use this val to run administrative scripts: https://www.val.town/v/std/OpenAIUsageScript
FROM
openai_usage,
params
model: string;
export class OpenAIUsage {
constructor() {}
async migrate() {
await sqlite.batch([`CREATE TABLE IF NOT EXISTS openai_usage (
id INTEGER PRIMARY KEY,
async drop() {
await sqlite.batch([`DROP TABLE IF EXISTS openai_usage`]);
async writeUsage(ur: UsageRow) {
sqlite.execute({
sql: "INSERT INTO openai_usage (user_id, handle, tier, tokens, model) VALUES (?, ?, ?, ?, ?)",
args: [ur.userId, ur.handle, ur.tier, ur.tokens, ur.model],
sql: `SELECT count(*)
FROM openai_usage
WHERE (
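A hedged sketch of the kind of per-user query the truncated SQL above might run; the `created_at` column and time window are assumptions:

```ts
import { sqlite } from "https://esm.town/v/std/sqlite";

const usage = await sqlite.execute({
  sql: `SELECT count(*) AS requests, sum(tokens) AS total_tokens
        FROM openai_usage
        WHERE user_id = ? AND created_at > datetime('now', '-1 day')`, // columns assumed
  args: [userId], // hypothetical variable
});
```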
weatherGPT
@ellenchisa
Cron
If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "npm:openai";
let location = "san francisco ca";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
streamingTest
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
export const streamingTest = (async () => {
  const { OpenAI } = await import("https://esm.sh/langchain/llms/openai");
  // To enable streaming, we pass in `streaming: true` to the LLM constructor.
  // Additionally, we pass in a handler for the `handleLLMNewToken` event.
  const chat = new OpenAI({
    maxTokens: 25,
    streaming: true,
    openAIApiKey: process.env.OPENAI_API_KEY,
  });
  const response = await chat.call("Tell me a joke.", undefined, [
    {
      handleLLMNewToken(token: string) {
        console.log({ token });
      },
    },
  ]);
  console.log({ response });
})();
chatgptchess
@tmcw
ChatGPT Chess: Inspired by all this hubbub about chess weirdness, this val lets you play chess against ChatGPT-4. Expect some "too many requests" hiccups along the way. ChatGPT gets pretty bad at making valid moves after the first 10 or so exchanges, so this val lets it retry up to 5 times to make a valid move; if it still can't, it can't.
HTTP
import { OpenAI } from "https://esm.town/v/std/openai?v=5"
import { sqlite } from "https://esm.town/v/std/sqlite?v=6"
<div class='p-4'>
<h2 class='font-bold'>OpenAI Chess</h2>
<p class='pb-4'>Play chess against ChatGPT-4</p>
chess.move(san)
const openai = new OpenAI()
let messages = []
args: [c.req.param().id, `Requesting response to ${san}`],
const completion = await openai.chat.completions.create({
messages: [
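A minimal sketch of the retry loop the description mentions, assuming `chess.js` for validation; the prompt and helper name are illustrative, not the original:

```ts
import { Chess } from "npm:chess.js";
import { OpenAI } from "https://esm.town/v/std/openai?v=5";

async function getValidMove(chess: Chess, openai: OpenAI): Promise<string | null> {
  for (let attempt = 0; attempt < 5; attempt++) {
    const completion = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [
        { role: "system", content: "You are playing chess. Reply with a single move in SAN." },
        { role: "user", content: `Position (FEN): ${chess.fen()}. Your move:` },
      ],
    });
    const san = completion.choices[0].message.content?.trim() ?? "";
    try {
      chess.move(san); // throws on an invalid move
      return san;
    } catch {
      // Invalid move: ask again, up to 5 times.
    }
  }
  return null; // "if it can't, it can't"
}
```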
valTownChatGPT
@stevekrouse
HTTP
# ChatGPT Implemented in Val Town
Demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const openai = new OpenAI();
import { Hono } from "npm:hono@3";
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
const message = c.req.query("message");
await openai.beta.threads.messages.create(
threadId,
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,
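A hedged sketch of how the pieces above might fit together to stream an assistant run as Server-Sent Events; the route shape and wiring are assumptions:

```ts
app.get("/chat", async (c) => {
  const threadId = c.req.query("threadId")!;
  const assistantId = c.req.query("assistantId")!;
  const message = c.req.query("message")!;
  await openai.beta.threads.messages.create(threadId, { role: "user", content: message });

  const body = new ReadableStream({
    start(controller) {
      const run = openai.beta.threads.runs.stream(threadId, { assistant_id: assistantId });
      // Forward each text delta to the client as an SSE "data:" line.
      run.on("textDelta", (delta) => {
        controller.enqueue(new TextEncoder().encode("data: " + JSON.stringify(delta.value) + "\n\n"));
      });
      run.on("end", () => controller.close());
    },
  });
  return new Response(body, { headers: { "Content-Type": "text/event-stream" } });
});
```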
indexValsNeon
@janpaul123
Cron
*Part of [Val Town Semantic Search](https://www.val.town/v/janpaul123/valtownsemanticsearch).*
Generates OpenAI embeddings for all public vals, and stores them in [Neon](https://neon.tech/), using the [pg_vector](https://neon.tech/docs/extensions/pgvector) extension.
- Create the `vals_embeddings` table in Neon if it doesn't already exist.
- Get all val names from the database of public vals, made by Achille Lacoin.
- Get all val names from the `vals_embeddings` table and compute the difference (which ones are missing).
- Iterate through all missing vals, get their code, get embeddings from OpenAI, and store the result in Neon.
- Can now be searched using [janpaul123/semanticSearchNeon](https://www.val.town/v/janpaul123/semanticSearchNeon).
import { blob } from "https://esm.town/v/std/blob";
import OpenAI from "npm:openai";
import { truncateMessage } from "npm:openai-tokens";
// CREATE TABLE vals_embeddings (id TEXT PRIMARY KEY, embedding VECTOR(1536));
newValsBatches.push(currentBatch);
const openai = new OpenAI();
for (const newValsBatch of newValsBatches) {
const code = getValCode(val);
const embedding = await openai.embeddings.create({
model: "text-embedding-3-small",