efficientAmberUnicorn
@CoachCompanion
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
if (!YOUTUBE_API_KEY) {
console.warn("YouTube API key (YOUTUBE_API_KEY) is not set. Falling back to search URL method.");
const { OpenAI } = await import("https://esm.town/v/std/openai");
const regenerateKey = request.headers.get("Regenerate-Key") || "0";
const openai = new OpenAI();
const { sport, skillLevel, ageGroup, groupSize, selectedTopics, sessionDuration } = await request.json();
// ... (previous code remains unchanged)
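The warning above implies a fallback path when no YouTube API key is configured: instead of calling the Data API, link straight to a YouTube search results page. A minimal sketch of that fallback (the helper name `youtubeSearchUrl` is hypothetical, not part of the val):

```typescript
// Hypothetical fallback helper: when YOUTUBE_API_KEY is unset, skip the
// YouTube Data API and send users to a plain search-results URL instead.
function youtubeSearchUrl(query: string): string {
  return `https://www.youtube.com/results?search_query=${encodeURIComponent(query)}`;
}
```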
add_to_notion_w_ai
@nerdymomocat
Uses Instructor and OpenAI (with gpt-4-turbo) to process any content into a Notion database entry. Use `addToNotion` with any database id and content:

await addToNotion(
  "DB_ID_GOES_HERE",
  "CONTENT_GOES_HERE" // for example: "$43.28 ordered malai kofta and kadhi (doordash) [me and mom] jan 3 2024"
);

Prompts are created based on your database name, database description, property name, property type, property description, and, if applicable, property options (and their descriptions).
- Supports: checkbox, date, multi_select, number, rich_text, select, status, title, url, email
- Uses `NOTION_API_KEY`, `OPENAI_API_KEY` stored in env variables and uses [Valtown blob storage](https://esm.town/v/std/blob) to store information about the database.
- Use `get_notion_db_info` to use the stored blob if it exists or create one; use `get_and_save_notion_db_info` to create a new blob (replacing an existing one if it exists).
Script
import { Client } from "npm:@notionhq/client";
import OpenAI from "npm:openai";
import { z } from "npm:zod";
"email": "string_email",
const oai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY ?? undefined,
const client = Instructor({
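The `"email": "string_email"` line above suggests each supported Notion property type is mapped to a type hint the model is asked to emit. A sketch of such a mapping covering the supported types from the readme (only `email` → `string_email` comes from the snippet; the other hint strings are assumptions):

```typescript
// Assumed mapping from supported Notion property types to the type hints
// the model is prompted to emit; only "email" -> "string_email" is taken
// from the snippet above, the rest are illustrative.
const PROPERTY_JSON_HINTS: Record<string, string> = {
  checkbox: "boolean",
  date: "string_iso8601_date",
  multi_select: "array_of_option_names",
  number: "number",
  rich_text: "string",
  select: "option_name",
  status: "option_name",
  title: "string",
  url: "string_url",
  email: "string_email",
};
```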
memorySampleSummary
@toowired
// Initialize the database
Script
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";
import { OpenAI } from "https://esm.town/v/std/openai";
const KEY = new URL(import.meta.url).pathname.split("/").at(-1);
const SCHEMA_VERSION = 1;
const openai = new OpenAI();
// Initialize the database
async function generateEmbedding(text: string): Promise<number[]> {
const response = await openai.embeddings.create({
model: "text-embedding-ada-002",
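Once `generateEmbedding` returns a vector, retrieving similar memories is typically a nearest-neighbour search over stored embeddings. A minimal sketch using cosine similarity (the helper is illustrative, not the val's actual retrieval code):

```typescript
// Cosine similarity between two equal-length embedding vectors:
// 1.0 for identical directions, 0.0 for orthogonal ones.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```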
ask_ai
@pomdtr
Ask GPT to update a val on your behalf.

Usage:

import { askAI } from "https://esm.town/v/pomdtr/ask_ai";
await askAI(`Add jsdoc comments for each exported function of the val @pomdtr/askAi`);
Script
import { api } from "https://esm.town/v/pomdtr/api";
import { email } from "https://esm.town/v/std/email?v=12";
import { OpenAI } from "https://esm.town/v/std/openai";
async function getValByAlias({ author, name }: { author: string; name: string }) {
const { id, code, readme } = await api(`/v1/alias/${author}/${name}`, {
type: "string",
export function askAI(content: string) {
const client = new OpenAI();
const runner = client.beta.chat.completions.runTools({
model: "gpt-3.5-turbo",
emojiGuessr
@jdan
Calorie Count via Photo. Uploads your photo to ChatGPT's new vision model to automatically categorize the food and estimate the calories.
HTTP
import { fileToDataURL } from "https://esm.town/v/stevekrouse/fileToDataURL";
import { modifyImage } from "https://esm.town/v/stevekrouse/modifyImage";
import { chat } from "https://esm.town/v/stevekrouse/openai";
import { Hono } from "npm:hono@3";
function esmTown(url) {
rateArticleRelevance
@iakovos
An interactive, runnable TypeScript val by iakovos
Script
export const rateArticleRelevance = async (interests: string, article: any) => {
const { default: OpenAI } = await import("npm:openai");
const openai = new OpenAI({ apiKey: process.env.OPENAI_KEY });
try {
Give a score from 0 to 10. Why did you give this score? Respond with the score only.
const response = await openai.chat.completions.create({
messages: [
VALLE
@davitchanturia
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it.
* Fork this val to your own profile.
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-variables).
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environment-variables).
* Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP
readmeGPT
@nbbaier
Val Town AI Readme Writer

This val provides a class `ReadmeWriter` for generating readmes for vals with OpenAI. It can both draft readmes and update them directly. PRs welcome! See Todos below for some ideas I have.

Usage

To draft a readme for a given code, use the `draftReadme` method:

import { ReadmeWriter } from "https://esm.town/v/nbbaier/readmeGPT";
const readmeWriter = new ReadmeWriter({});
const val = "https://www.val.town/v/:username/:valname";
const generatedReadme = await readmeWriter.draftReadme(val);

To write and update a readme for a given code, use the `writeReadme` method:

import { ReadmeWriter } from "https://esm.town/v/nbbaier/readmeGPT";
const readmeWriter = new ReadmeWriter({});
const val = "https://www.val.town/v/:username/:valname";
const successMessage = await readmeWriter.writeReadme(val);

API Reference

Class: `ReadmeWriter`

The `ReadmeWriter` class represents a utility for generating and updating README files.

Constructor

Creates an instance of the `ReadmeWriter` class. Parameters:
- `model` (optional): The model to be used for generating the readme. Defaults to "gpt-3.5-turbo".
- `apiKey` (optional): An OpenAI API key. Defaults to `Deno.env.get("OPENAI_API_KEY")`.

Methods

- `draftReadme(val: string): Promise<string>`: Generates a readme for the given val. Parameters: `val`: URL of the code repository. Returns: A promise that resolves to the generated readme.
- `writeReadme(val: string): Promise<string>`: Generates and updates a readme for the given val. Parameters: `val`: URL of the code repository. Returns: A promise that resolves to a success message if the update is successful.

Todos

- [ ] Additional options to pass to the OpenAI model
- [ ] Ability to pass more instructions to the prompt to modify how the readme is constructed
Script
import OpenAI, { type ClientOptions } from "npm:openai";
openai: OpenAI;
const { model, ...openaiOptions } = options;
this.openai = new OpenAI(openaiOptions);
private async performOpenAICall(prompt: string) {
const response = await this.openai.chat.completions.create({
throw new Error("No response from OpenAI");
throw new Error("No readme returned by OpenAI. Try again.");
const readme = await this.performOpenAICall(prompt);
getChatgpt
@bingo16
An interactive, runnable TypeScript val by bingo16
Script
"Content-Type": "application/json",
// Update your token in https://val.town/settings/secrets
Authorization: `Bearer ${token || process.env.openaiKey}`,
const getCompletions = async (data) => {
const response = await fetch("https://api.openai.com/v1/completions", {
method: "POST",
headers: {
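Pieced together, the request this snippet is assembling looks roughly like the following sketch (the model name and `max_tokens` value are assumptions, since the original values are not shown):

```typescript
// Build the fetch arguments for the legacy /v1/completions endpoint.
// Model and max_tokens are illustrative placeholders.
function buildCompletionRequest(prompt: string, token: string) {
  return {
    url: "https://api.openai.com/v1/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ model: "gpt-3.5-turbo-instruct", prompt, max_tokens: 256 }),
    },
  };
}
```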
aiSarcasticMotivationalMessage
@patrickjm
An interactive, runnable TypeScript val by patrickjm
Script
"Emphasize a morbid sense of humor.",
].join("\n"),
openAiKey: process.env.openai_key,
poembuilder
@stevekrouse
@jsxImportSource npm:hono@3/jsx
HTTP
/** @jsxImportSource npm:hono@3/jsx */
import { OpenAI } from "https://esm.town/v/std/openai?v=2";
import { sqlite } from "https://esm.town/v/std/sqlite?v=5";
import { Hono } from "npm:hono@3";
emeraldRaccoon
@willthereader
ChatGPT Implemented in Val Town. Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events.
HTTP
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const openai = new OpenAI();
import { Hono } from "npm:hono@3";
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
const message = c.req.query("message");
await openai.beta.threads.messages.create(
threadId,
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,
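The streaming side of this val relies on the Server-Sent Events wire format, which is just text frames: a `data:` line followed by a blank line, exactly what the `"data: " + JSON.stringify(str) + "\n\n"` line above constructs. As a tiny standalone sketch:

```typescript
// Encode one SSE frame: a JSON payload on a `data:` line, terminated by a
// blank line so the browser's EventSource fires a message event.
function sseFrame(payload: unknown): string {
  return "data: " + JSON.stringify(payload) + "\n\n";
}
```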
weatherGPT
@lisazz
Cron
import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";
let location = "Asia, Taipei";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
exampleTopHackerNewsDailyEmail
@charlypoly
An interactive, runnable TypeScript val by charlypoly
Cron
import { email } from "https://esm.town/v/std/email?v=12";
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
import { z } from "npm:zod";
.describe("Top 5 stories on Hacker News"),
// we create an OpenAI tool that takes our schema as its argument
const extractContentTool: any = {
parameters: zodToJsonSchema(schema),
const openai = new OpenAI();
// We ask OpenAI to extract the content from the given web page.
// The model will reach out to our `extract_content` tool and return
// arguments that satisfy the requirements of `extract_content`'s schema.
const completion = await openai.chat.completions.create({
model: "gpt-4-turbo",
tool_choice: "auto",
// we retrieve the serialized arguments generated by OpenAI
const result = completion.choices[0].message.tool_calls![0].function.arguments;
const parsed = schema.parse(JSON.parse(result));
const completion2 = await openai.chat.completions.create({
model: "gpt-4-turbo",
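The `completion.choices[0].message.tool_calls![0].function.arguments` access above is the crux of tool-based extraction: the model returns its structured output as a JSON string inside a tool call. A defensive sketch of that step (the helper name is illustrative):

```typescript
// Pull the serialized arguments out of the first tool call and parse them;
// throws if the model answered in prose instead of calling the tool.
function toolCallArguments(completion: {
  choices: { message: { tool_calls?: { function: { arguments: string } }[] } }[];
}): unknown {
  const call = completion.choices[0]?.message.tool_calls?.[0];
  if (!call) throw new Error("model did not call the extraction tool");
  return JSON.parse(call.function.arguments);
}
```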
emojiSearchBot
@stevekrouse
Emoji search bot. Replies to mentions on Twitter with the emojis your photo evokes. Inspired by Devon Zuegel.
Cron
import process from "node:process";
import OpenAI from "npm:openai";
const openai = new OpenAI({ apiKey: process.env.openai });
export async function emojiSearchBot({ lastRunAt }: Interval) {
if (attachment.type !== "photo") return;
const response = await openai.chat.completions.create({
model: "gpt-4-vision-preview",
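The `gpt-4-vision-preview` model accepts images as `image_url` content parts alongside text. A sketch of the message the bot would send for one tweet photo (the prompt text is a placeholder):

```typescript
// Build a single chat message combining a text prompt with one image URL,
// in the content-part format the vision models expect.
function visionMessage(prompt: string, photoUrl: string) {
  return {
    role: "user" as const,
    content: [
      { type: "text", text: prompt },
      { type: "image_url", image_url: { url: photoUrl } },
    ],
  };
}
```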