Search

Results include substring matches and semantically similar vals.
VALLE
@stevekrouse
Fork it and authenticate with your Val Town API token as the password. It needs an OPENAI_API_KEY env var to be set; also change the variables under "Set these to your own". https://x.com/JanPaul123/status/1812957150559211918
HTTP (deprecated)
motionlessPurpleBat
@carts
An interactive, runnable TypeScript val by carts
HTTP (deprecated)
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
const prompt = "Tell me a dad joke. Format the response as JSON with 'setup' and 'punchline' keys.";
export default async function dailyDadJoke(req: Request): Promise<Response> {
const openai = new OpenAI();
const resp = await openai.chat.completions.create({
messages: [
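
The search excerpt cuts off mid-call; below is a minimal sketch of how the full handler might look. The model name, the JSON parsing, and the response shape are assumptions, not the val's actual code.

import { OpenAI } from "https://esm.town/v/std/openai?v=4";

const prompt = "Tell me a dad joke. Format the response as JSON with 'setup' and 'punchline' keys.";

export default async function dailyDadJoke(req: Request): Promise<Response> {
  const openai = new OpenAI();
  const resp = await openai.chat.completions.create({
    messages: [{ role: "user", content: prompt }],
    model: "gpt-4o-mini", // assumed model
  });
  // The model is asked for JSON, so parse the first choice's content.
  const joke = JSON.parse(resp.choices[0].message.content ?? "{}");
  return Response.json(joke);
}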
add_to_notion_w_ai
@eyeseethru
Uses Instructor and OpenAI (with gpt-4-turbo) to process any content into a Notion database entry. Use addToNotion with any database ID and content: await addToNotion("DB_ID_GOES_HERE", "CONTENT_GOES_HERE"), where the content can be something like "$43.28 ordered malai kofta and kadhi (doordash) [me and mom] jan 3 2024". Prompts are created based on your database name, database description, property name, property type, property description, and, if applicable, property options (and their descriptions). Supports: checkbox, date, multi_select, number, rich_text, select, status, title, url, email. Uses NOTION_API_KEY and OPENAI_API_KEY stored in env variables, and uses Val Town blob storage to store information about the database. Use get_notion_db_info to use the stored blob if it exists or create one; use get_and_save_notion_db_info to create a new blob (and replace an existing one if it exists).
Script
import { Client } from "npm:@notionhq/client";
import OpenAI from "npm:openai";
import { z } from "npm:zod";
"email": "string_email",
const oai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY ?? undefined,
const client = Instructor({
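
The excerpt above is fragmentary; here is a minimal sketch of how Instructor, Zod, and the OpenAI client typically fit together for structured extraction. The schema, model, and message content are illustrative assumptions, not the val's actual code.

import Instructor from "npm:@instructor-ai/instructor";
import OpenAI from "npm:openai";
import { z } from "npm:zod";

// Hypothetical schema standing in for the properties derived from a Notion database.
const EntrySchema = z.object({
  title: z.string(),
  date: z.string().optional(),
  number: z.number().optional(),
});

const oai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY ?? undefined,
});

// Wrap the OpenAI client so completions are validated against the Zod schema.
const client = Instructor({ client: oai, mode: "TOOLS" });

const entry = await client.chat.completions.create({
  model: "gpt-4-turbo",
  messages: [{ role: "user", content: "$43.28 ordered malai kofta and kadhi (doordash) jan 3 2024" }],
  response_model: { schema: EntrySchema, name: "Entry" },
});
console.log(entry);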
GistGPT
@weaverwhale
GistGPT: a helpful assistant who provides the gist of a gist. How to use: / and /gist - the default response is to explain this file (I believe this is effectively real-time recursion?). /gist?url={URL} - provide a RAW file URL from GitHub, BitBucket, GitLab, Val Town, etc., and GistGPT will provide you the gist of the code. /about - "Tell me a little bit about yourself"
HTTP (deprecated)
import { Hono } from "npm:hono";
import { OpenAI } from "npm:openai";
const gistGPT = async (input: string, about?: boolean) => {
const chatInput = about ? input : await (await fetch(input)).text();
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [
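
A minimal sketch of how the routes described above might be wired up with Hono. The system prompt, model, and the use of import.meta.url for the "explain this file" case are assumptions; gistGPT mirrors the excerpt's helper.

import { Hono } from "npm:hono";
import { OpenAI } from "npm:openai";

// Summarize either the raw file at a URL or a piece of "about" text.
const gistGPT = async (input: string, about?: boolean) => {
  const chatInput = about ? input : await (await fetch(input)).text();
  const openai = new OpenAI();
  const chatCompletion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // assumed model
    messages: [
      { role: "system", content: "You provide the gist of the code or text you are given." },
      { role: "user", content: chatInput },
    ],
  });
  return chatCompletion.choices[0].message.content;
};

const app = new Hono();
// "/" and "/gist" with no URL explain this file itself.
app.get("/", async (c) => c.text(await gistGPT(import.meta.url) ?? ""));
app.get("/gist", async (c) => {
  const url = c.req.query("url");
  return c.text(await gistGPT(url ?? import.meta.url) ?? "");
});
app.get("/about", async (c) => c.text(await gistGPT("Tell me a little bit about yourself", true) ?? ""));
export default app.fetch;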
VALLE
@trantion
VALL-E LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId. If you want to use OpenAI models you need to set the OPENAI_API_KEY env var. If you want to use Anthropic models you need to set the ANTHROPIC_API_KEY env var. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
coralCarp
@willthereader
ChatGPT implemented in Val Town. Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events.
HTTP (deprecated)
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const openai = new OpenAI();
import { Hono } from "npm:hono@3";
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
const message = c.req.query("message");
await openai.beta.threads.messages.create(
threadId,
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,
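
A minimal sketch of the streaming piece the excerpt shows, assuming an assistant and thread already exist. The SSE wiring via ReadableStream is an assumption rather than the val's exact code.

import OpenAI from "npm:openai";

const openai = new OpenAI();

// Stream an assistant run for a thread back to the browser as Server-Sent Events.
async function streamRun(threadId: string, assistantId: string): Promise<Response> {
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    start(controller) {
      const run = openai.beta.threads.runs.stream(threadId, { assistant_id: assistantId });
      // Forward each text delta as an SSE "data:" line.
      run.on("textDelta", (delta) => {
        controller.enqueue(encoder.encode("data: " + JSON.stringify(delta.value ?? "") + "\n\n"));
      });
      run.on("end", () => controller.close());
      run.on("error", (err) => controller.error(err));
    },
  });
  return new Response(body, {
    headers: { "Content-Type": "text/event-stream", "Cache-Control": "no-cache" },
  });
}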
DailyDaughterNotes
@heathergliffin
This app generates cute daily notes for a daughter using OpenAI's GPT model. It stores the generated notes in SQLite for persistence and displays them on a simple web interface. The app uses React for the frontend and Deno's runtime environment in Val Town for the backend.
HTTP
async function server(request: Request): Promise<Response> {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const { sqlite } = await import("https://esm.town/v/stevekrouse/sqlite");
} else if (path === "/generate-note" && request.method === "POST") {
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
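
A minimal sketch of what the /generate-note branch might look like. The table name and columns, the prompt, the model, and the libsql-style execute call are assumptions rather than the val's actual code.

async function server(request: Request): Promise<Response> {
  const { OpenAI } = await import("https://esm.town/v/std/openai");
  const { sqlite } = await import("https://esm.town/v/stevekrouse/sqlite");
  const path = new URL(request.url).pathname;

  if (path === "/generate-note" && request.method === "POST") {
    const openai = new OpenAI();
    const completion = await openai.chat.completions.create({
      messages: [{ role: "user", content: "Write a short, cute daily note for my daughter." }],
      model: "gpt-4o-mini", // assumed model
    });
    const note = completion.choices[0].message.content ?? "";
    // Persist the note; assumes a `notes(text TEXT, created_at TEXT)` table already exists.
    await sqlite.execute({
      sql: "INSERT INTO notes (text, created_at) VALUES (?, ?)",
      args: [note, new Date().toISOString()],
    });
    return Response.json({ note });
  }
  return new Response("Not found", { status: 404 });
}
export default server;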
free_open_router
@taras
curl 'https://taras-free_open_router.web.val.run/api/v1/chat/completions' \
  -H 'accept: application/json' \
  -H 'authorization: Bearer THIS_IS_OVERRIDEN_ON_SERVER' \
  -H 'content-type: application/json' \
  --data-raw '{
    "model": "auto",
    "temperature": 0,
    "messages": [
      { "role": "system", "content": "stuff" },
      { "role": "user", "content": "hello" }
    ],
    "stream": true
  }'
HTTP (deprecated)
url: "https://openrouter.ai/api/v1/models",
token: Deno.env.get("OPEN_ROUTER_API_KEY"),
url: "https://api.groq.com/openai/v1/models",
token: Deno.env.get("GROQ_API_KEY"),
// Create fetch promises for each API endpoint
if (provider === "groq") {
url.host = "api.groq.com";
url.pathname = url.pathname.replace("/api/v1", "/openai/v1");
url.port = "443";
url.protocol = "https";
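
A minimal sketch of the request-rewriting step the excerpt hints at: incoming OpenAI-style paths are pointed at the chosen provider's host. The provider-selection logic and the OpenRouter fallback are assumptions.

// Rewrite an incoming OpenAI-style request URL to the chosen provider's endpoint.
function rewriteForProvider(incoming: URL, provider: string): URL {
  const url = new URL(incoming);
  if (provider === "groq") {
    url.host = "api.groq.com";
    // Groq exposes an OpenAI-compatible API under /openai/v1.
    url.pathname = url.pathname.replace("/api/v1", "/openai/v1");
    url.port = "443";
    url.protocol = "https:";
  } else {
    // Assumed fallback: forward everything else to OpenRouter.
    url.host = "openrouter.ai";
    url.port = "443";
    url.protocol = "https:";
  }
  return url;
}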
add_to_notion_w_ai
@junhoca
Uses Instructor and OpenAI (with gpt-4-turbo) to process any content into a Notion database entry. Use addToNotion with any database ID and content: await addToNotion("DB_ID_GOES_HERE", "CONTENT_GOES_HERE"), where the content can be something like "$43.28 ordered malai kofta and kadhi (doordash) [me and mom] jan 3 2024". Prompts are created based on your database name, database description, property name, property type, property description, and, if applicable, property options (and their descriptions). Supports: checkbox, date, multi_select, number, rich_text, select, status, title, url, email. Uses NOTION_API_KEY and OPENAI_API_KEY stored in env variables, and uses Val Town blob storage to store information about the database. Use get_notion_db_info to use the stored blob if it exists or create one; use get_and_save_notion_db_info to create a new blob (and replace an existing one if it exists).
Script
import { Client } from "npm:@notionhq/client";
import OpenAI from "npm:openai";
import { z } from "npm:zod";
"email": "string_email",
const oai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY ?? undefined,
const client = Instructor({
modelSampleChatCall
@bluemsn
An interactive, runnable TypeScript val by bluemsn
Script
const builder = await getModelBuilder({
type: "chat",
provider: "openai",
const model = await builder();
const { SystemMessage, HumanMessage } = await import("npm:langchain/schema");
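
getModelBuilder is the author's own helper, so its internals are not shown here. Below is a minimal sketch of the call pattern the excerpt suggests, using LangChain's ChatOpenAI directly as a stand-in for builder(); the package paths, model name, and message content are assumptions.

import { ChatOpenAI } from "npm:@langchain/openai";
import { HumanMessage, SystemMessage } from "npm:@langchain/core/messages";

// Stand-in for the val's getModelBuilder({ type: "chat", provider: "openai" }) helper.
const model = new ChatOpenAI({
  openAIApiKey: Deno.env.get("OPENAI_API_KEY"),
  modelName: "gpt-4o-mini", // assumed model
});

const response = await model.invoke([
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("Say hello in one short sentence."),
]);
console.log(response.content);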
chocolateCanid
@willthereader
ChatGPT implemented in Val Town. Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events.
HTTP (deprecated)
/** @jsxImportSource https://esm.sh/react */
import { OpenAI } from "https://esm.town/v/std/openai";
import { Hono } from "npm:hono@3";
import { renderToString } from "npm:react-dom/server";
const openai = new OpenAI();
const jsxResponse = (jsx) => {
try {
thread = await openai.chat.completions.create({
model: "gpt-3.5-turbo", // Use a faster model
messages: [{ role: "system", content: "Start a new thread" }],
assistant = await openai.chat.completions.create({
model: "gpt-3.5-turbo", // Use a faster model
try {
await openai.chat.completions.create({
model: "gpt-3.5-turbo", // Use a faster model
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.chat.completions.stream({
model: "gpt-3.5-turbo", // Use a faster model
rateArticleRelevance
@vandyand
An interactive, runnable TypeScript val by vandyand
Script
export const rateArticleRelevance = async (interests: string, article: any) => {
const { default: OpenAI } = await import("npm:openai");
const openai = new OpenAI({
apiKey: untitled_tealCoral.OPENAI_API_KEY,
try {
Give a score from 0 to 10. Why did you give this score? Respond with the score only.
const response = await openai.chat.completions.create({
messages: [
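
A minimal sketch of the full scoring flow the excerpt suggests: build a prompt from the interests and article, ask for a 0-10 score, and parse the number. The prompt wording, model, and article shape are assumptions; the API key is read from the environment instead of the val's untitled_tealCoral secret.

export const rateArticleRelevance = async (interests: string, article: { title: string; summary?: string }) => {
  const { default: OpenAI } = await import("npm:openai");
  const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
  const prompt = `
    Rate how relevant this article is to the interests "${interests}".
    Article: ${JSON.stringify(article)}
    Give a score from 0 to 10. Respond with the score only.`;
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini", // assumed model
    messages: [{ role: "user", content: prompt }],
  });
  // Parse the single number the model was asked to return.
  return Number(response.choices[0].message.content?.trim());
};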
largeAmaranthCat
@trantion
VALL-E LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId. If you want to use OpenAI models you need to set the OPENAI_API_KEY env var. If you want to use Anthropic models you need to set the ANTHROPIC_API_KEY env var. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
calendarEventExtractor
@pdebieamzn
This calendar app allows users to upload a PDF, extract events from it using OpenAI's GPT model, and display them on a big calendar.
HTTP
// This calendar app will allow users to upload a PDF, extract events from it using OpenAI's GPT model,
// and display them on a big calendar. We'll use react-big-calendar for the calendar component,
async function server(request: Request): Promise<Response> {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const pdfExtractText = await import("https://esm.town/v/pdebieamzn/pdfExtractText");
const fullText = await pdfExtractText.default(arrayBuffer);
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
} catch (error) {
console.error('Error parsing OpenAI response:', error);
console.log('Raw response:', completion.choices[0].message.content);
if (!Array.isArray(events)) {
console.error('Unexpected response format from OpenAI');
return new Response(JSON.stringify({ error: 'Unexpected response format' }), { status: 500, headers: { 'Content-Type':
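
A minimal sketch of the extraction-and-parsing step the excerpt is doing: ask the model for events as JSON, then parse defensively and log the raw response on failure. The prompt, model, and event shape are assumptions.

import { OpenAI } from "https://esm.town/v/std/openai";

// Ask the model for calendar events as JSON and parse the result defensively.
async function extractEvents(fullText: string) {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // assumed model
    messages: [{
      role: "user",
      content: "Extract calendar events from this text as a JSON array of " +
        '{"title", "start", "end"} objects. Respond with JSON only.\n\n' + fullText,
    }],
  });
  try {
    const events = JSON.parse(completion.choices[0].message.content ?? "[]");
    if (!Array.isArray(events)) throw new Error("Unexpected response format from OpenAI");
    return events;
  } catch (error) {
    console.error("Error parsing OpenAI response:", error);
    console.log("Raw response:", completion.choices[0].message.content);
    return [];
  }
}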
webscrapeWikipediaIntro
@vtdocs
An interactive, runnable TypeScript val by vtdocs
HTTP (deprecated)
const cheerio = await import("npm:cheerio");
const html = await fetchText(
"https://en.wikipedia.org/wiki/OpenAI",
const $ = cheerio.load(html);
// Cheerio accepts a CSS selector, here we pick the second <p>
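
A minimal sketch completing the scrape the excerpt describes. The fetchText import path, the HTTP handler wrapping, and the .eq(1) selector for the intro paragraph are assumptions.

import { fetchText } from "https://esm.town/v/stevekrouse/fetchText"; // assumed helper import

export default async function (req: Request): Promise<Response> {
  const cheerio = await import("npm:cheerio");
  const html = await fetchText("https://en.wikipedia.org/wiki/OpenAI");
  const $ = cheerio.load(html);
  // Cheerio accepts a CSS selector; .eq(1) picks the second <p>, which holds the intro.
  const intro = $("p").eq(1).text();
  return new Response(intro);
}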