Results include substring matches and semantically similar vals.
VALLE
@oijoijcoiejoijce
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it:
* Fork this val to your own profile.
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models, set the `OPENAI_API_KEY` env var.
* If you want to use Anthropic models, set the `ANTHROPIC_API_KEY` env var.
* Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
weatherBot
@jdan
An interactive, runnable TypeScript val by jdan
Script
```ts
import { weatherOfLatLon } from "https://esm.town/v/jdan/weatherOfLatLon";
import { OpenAI } from "https://esm.town/v/std/openai?v=4";

const openai = new OpenAI();

// Each toolbox entry pairs an OpenAI tool schema with its local implementation
// (schemas and some helpers are elided in this excerpt).
const toolbox = {
  "latLngOfCity": {
    openAiTool: {
      type: "function",
      // …
    },
    // …
  },
  "weatherOfLatLon": {
    openAiTool: {
      type: "function",
      // …
    },
    // …
  },
  "fetchWebpage": {
    openAiTool: {
      type: "function",
      // …
    },
    call: fetchWebpage,
  },
};

const tools = Object.values(toolbox).map(({ openAiTool }) => openAiTool);

const transcript = [
  // …
];

async function runConversation() {
  const response = await openai.chat.completions.create({
    messages: transcript,
    // …
  });
}
```
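The toolbox pattern above keeps one record per tool, pairing the schema the API sees with the local function that runs it. A minimal self-contained sketch of that mapping (hypothetical `latLngOfCity` schema and stub implementation, no API call):

```typescript
// Hypothetical toolbox entry: the OpenAI tool schema plus the local
// function that implements it.
const toolbox: Record<string, { openAiTool: any; call: (arg: string) => string }> = {
  latLngOfCity: {
    openAiTool: {
      type: "function",
      function: {
        name: "latLngOfCity",
        description: "Look up the latitude/longitude of a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
    call: (city) => `coordinates of ${city}`, // stub implementation
  },
};

// Strip the local implementations; this array is what the chat
// completion request receives as `tools`.
const tools = Object.values(toolbox).map(({ openAiTool }) => openAiTool);
```

When the model responds with a tool call, the name in the response indexes back into `toolbox` to find the matching `call` function.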
chatAgentWithCustomPrompt
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
```ts
export const chatAgentWithCustomPrompt = (async () => {
  const { ChatOpenAI } = await import(
    "https://esm.sh/langchain/chat_models/openai"
  );
  const { initializeAgentExecutorWithOptions } = await import(
    "https://esm.sh/langchain/agents"
  );
  const { Calculator } = await import(
    "https://esm.sh/langchain/tools/calculator"
  );
  const model = new ChatOpenAI({
    temperature: 0,
    openAIApiKey: process.env.OPENAI_API_KEY,
  });
  const tools = [
    new Calculator(),
    // …
  ];
  // …
})();
```
easyAQI
@rishabhparikh
easyAQI: Get the Air Quality Index (AQI) for a location via open data sources. It's "easy" because it strings together multiple lower-level APIs to give you a simple interface for AQI:
1. Accepts a location in basically any string format (e.g. "downtown manhattan")
2. Uses Nominatim to turn that into longitude and latitude
3. Finds the closest sensor to you on OpenAQ
4. Pulls the readings from OpenAQ
5. Calculates the AQI via EPA's NowCast algorithm
6. Uses EPA's ranking to classify the severity of the score (e.g. "Unhealthy for Sensitive Groups")

It uses blob storage to cache the OpenAQ location ID for your location string, to skip a couple of steps the next time.

Example usage: `@stevekrouse.easyAQI({ location: "brooklyn navy yard" })` returns `{ "aqi": 23.6, "severity": "Good" }`. Forkable example: val.town/v/stevekrouse.easyAQIExample. Also useful for getting alerts when the AQI is unhealthy near you: https://www.val.town/v/stevekrouse.aqi
Script
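The NowCast used in the AQI calculation above is a weighted average that discounts older hours more when readings are volatile. A rough sketch of the EPA PM NowCast weighting, assuming up to 12 hourly readings, newest first, all positive, with no missing hours:

```typescript
// EPA PM NowCast sketch: the weight factor w is the min/max ratio of the
// window, floored at 0.5; hour i (0 = newest) is weighted by w^i.
function nowcast(readings: number[]): number {
  const max = Math.max(...readings);
  const min = Math.min(...readings);
  const w = Math.max(min / max, 0.5);
  let num = 0;
  let den = 0;
  readings.forEach((c, i) => {
    num += c * w ** i; // newer hours carry more weight
    den += w ** i;
  });
  return num / den;
}
```

With steady readings the NowCast equals the plain average; volatile readings weight recent hours more heavily. The full EPA method also handles missing hours and requires at least two of the three most recent hours, which this sketch elides.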
audioManager
@yawnxyz
Usage:

```ts
import { ai } from "https://esm.town/v/yawnxyz/ai";
import { AudioManager } from "https://esm.town/v/yawnxyz/audioManager";

let audio = new AudioManager();

let joke = await ai("tell me a joke in chinese!");
console.log("text", joke);
let result = await audio.textToSpeechUpload(joke, { key: "random-joke.mp3" });
console.log("result:", result);
```
Script
```ts
import { OpenAI } from "https://esm.town/v/yawnxyz/OpenAI";
import { fetch } from "https://esm.town/v/std/fetch";

// …
constructor(apiKey = null, uploadFunction = null, downloadFunction = null) {
  this.openai = new OpenAI(apiKey);
  this.uploadFunction = uploadFunction || this.blobUpload;
  // …
}

// … transcription:
const mergedOptions = { ...defaultOptions, ...options };
const transcription = await this.openai.audio.transcriptions.create(mergedOptions);
return transcription;

// … translation:
const mergedOptions = { ...defaultOptions, ...options };
const translation = await this.openai.audio.translations.create(mergedOptions);
return translation;

// returns an openai speech object
async textToSpeech(text, options = {}) {
  const mergedOptions = { ...defaultOptions, ...options };
  const speech = await this.openai.audio.speech.create(mergedOptions);
  const arrayBuffer = await speech.arrayBuffer();
  // … (textToSpeechUpload follows the same pattern)
}
```
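Every method above merges caller options over a defaults object with `{ ...defaultOptions, ...options }`; later spread entries win, which is what makes per-call overrides work. A standalone illustration (the option names here are hypothetical):

```typescript
// In `{ ...defaultOptions, ...options }`, properties from `options`
// overwrite same-named properties from `defaultOptions`.
const defaultOptions = { model: "whisper-1", response_format: "json" };
const options = { response_format: "text" };
const mergedOptions = { ...defaultOptions, ...options };
// mergedOptions keeps the default model but takes the caller's format.
```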
chatGPT
@stevekrouse
ChatGPT Implemented in Val Town. Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events. ⚠️ Note: requires your own OpenAI API key to run in a fork.
HTTP (deprecated)
```ts
import { Hono } from "npm:hono@3";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";

// … (client-side markup elided; the browser closes its EventSource when done)
//   eventSource.close();

const openai = new OpenAI();
const app = new Hono();

app.get("/", async (c) => {
  const thread = await openai.beta.threads.create();
  const assistant = await openai.beta.assistants.create({
    name: "",
    // …
  });
  // …
});

// … inside the streaming handler:
const message = c.req.query("message");
await openai.beta.threads.messages.create(
  threadId,
  // …
);
// each chunk is written as a Server-Sent Event:
//   "data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
  assistant_id: assistantId,
  // …
});
```
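The `"data: " + JSON.stringify(str) + "\n\n"` expression above is the Server-Sent Events wire format: each event is a `data:` field terminated by a blank line, and JSON-encoding the payload keeps raw newlines out of the frame. As a standalone sketch (the helper name is hypothetical):

```typescript
// Frame one SSE event: a single "data:" field plus the blank-line terminator.
function sseMessage(payload: unknown): string {
  return "data: " + JSON.stringify(payload) + "\n\n";
}
```

On the client, an `EventSource` receives the JSON string as `event.data` and can `JSON.parse` it back.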
conversationalRetrievalQAChainSummaryMemory
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
```ts
export const conversationalRetrievalQAChainSummaryMemory = (async () => {
  const { ChatOpenAI } = await import(
    "https://esm.sh/langchain/chat_models/openai"
  );
  const { OpenAIEmbeddings } = await import(
    "https://esm.sh/langchain/embeddings/openai"
  );
  const { ConversationSummaryMemory } = await import(
    "https://esm.sh/langchain/memory"
  );
  const { ConversationalRetrievalQAChain } = await import(
    "https://esm.sh/langchain/chains"
  );
  const chatModel = new ChatOpenAI({
    openAIApiKey: process.env.OPENAI_API_KEY,
  });
  /* Create the vectorstore */
  // …
  //   [{ id: 2 }, { id: 1 }, { id: 3 }],
  //   new OpenAIEmbeddings({
  //     openAIApiKey: process.env.OPENAI_API_KEY,
  //   }),
  /* Create the chain */
  // …
})();
```
createRelevantComment
@thomasatflexos
An interactive, runnable TypeScript val by thomasatflexos
Script
```ts
import { getOpenAiResponse } from "https://esm.town/v/thomasatflexos/getOpenAiResponse";
import { getRelevantContent } from "https://esm.town/v/thomasatflexos/getRelevantContent";
import process from "node:process";

// … (PROMPT is a template string that includes:)
//   IF you think that the LinkedIn post is about new job opportunities,
//   just respond with the text "N/A" and stop immediately.
//   ELSE IF you think the LinkedIn post is not about new job opportunities,
//   please proceed to a meaningful comment…

let finalResponse = await getOpenAiResponse(PROMPT);
const { data1, error1 } = await supabase
  .from("linkedin_seedings")
  // …
```
annoy
@ajax
An interactive, runnable TypeScript val by ajax
Script
```ts
// … (prompt construction elided; the prompt ends with:
// "Copying the example above, find a new word and do as above.")
console.log({ prompt });
const response = await fetch("https://api.openai.com/v1/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + process.env.OPENAI_API_KEY, // Replace with your OpenAI API key
  },
  body: JSON.stringify({
    "prompt": prompt,
    // …
  }),
});
```
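The request body in the raw-fetch pattern above is plain `JSON.stringify`, so it can be built and inspected without touching the network. A small sketch (the model name and token limit here are hypothetical):

```typescript
// Build a completions-style request body; no API call is made.
const prompt = "Write a haiku about being cool:";
const body = JSON.stringify({
  model: "gpt-3.5-turbo-instruct", // hypothetical model choice
  prompt,
  max_tokens: 64,
});
const parsed = JSON.parse(body);
```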
gpt4o_emoji
@jdan
// await getGPT4oEmoji(
Script
```ts
import { chat } from "https://esm.town/v/stevekrouse/openai?v=19";

export async function getGPT4oEmoji(url) {
  const response = await chat([
    // …
  ]);
  // …
}
```
gpt3Unsafe
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
```ts
// … (runVal import elided in this excerpt)
export const gpt3Unsafe = runVal("patrickjm.gpt3", {
  prompt: "Write a haiku about being cool:",
  openAiKey: process.env.openai,
});
```
browserbase_google_concerts
@stevekrouse
// Navigate to Google
Script
```ts
import puppeteer from "https://deno.land/x/puppeteer@16.2.0/mod.ts";
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
import { Browserbase } from "npm:@browserbasehq/sdk";

// …

// ask ChatGPT for a list of concert dates
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    // …
  ],
  // …
});
```
openAIStreaming
@xuybin
OpenAI Streaming - Assistant and Threads. An example of using OpenAI to stream back a chat with an assistant. This example sends two messages to the assistant and streams back the responses when they come in. Example response:

```
user > What should I build today?
................
assistant > Here are a few fun Val ideas you could build on Val Town:
1. **Random Joke Generator:** Fetch a random joke from an API and display it.
2. **Daily Weather Update:** Pull weather data for your location using an API and create a daily summary.
3. **Mini Todo List:** Create a simple to-do list app with add, edit, and delete functionalities.
4. **Chuck Norris Facts:** Display a random Chuck Norris fact sourced from an API.
5. **Motivational Quote of the Day:** Fetch and display a random motivational quote each day.
Which one sounds interesting to you?
user > Cool idea, can you make it even cooler?
...................
assistant > Sure, let's add some extra flair to make it even cooler! How about creating a **Motivational Quote of the Day** app with these features:
1. **Random Color Theme:** Each day, the background color/theme changes randomly.
2. **Quote Sharing:** Add an option to share the quote on social media.
3. **Daily Notifications:** Send a daily notification with the quote of the day.
4. **User Preferences:** Allow users to choose categories (e.g., success, happiness, perseverance) for the quotes they receive.
Would you like some code snippets or guidance on implementing any of these features?
```
HTTP (deprecated)
```ts
import OpenAI from "npm:openai";
import process from "node:process";

const openai = new OpenAI();

// Define our assistant.
const assistant = await openai.beta.assistants.create({
  name: "Val Tutor",
  // …
});

// Create a thread to chat in.
const thread = await openai.beta.threads.create();

// These are the messages we'll send to the assistant.
// … (each message is sent on a short timer; the original closes with `}, 100);`)

const message = await openai.beta.threads.messages.create(
  thread.id,
  { role: "user", content: messages[i] },
);
const run = openai.beta.threads.runs.stream(thread.id, {
  assistant_id: assistant.id,
  // …
});
```
browserlessPuppeteerExample
@vtdocs
An interactive, runnable TypeScript val by vtdocs
Script
```ts
// … (connect puppeteer to a remote browser over WebSocket)
const browser = await puppeteer.connect({
  browserWSEndpoint: `wss://chrome.browserless.io?token=${process.env.browserlessKey}`,
});
const page = await browser.newPage();
await page.goto("https://en.wikipedia.org/wiki/OpenAI");
const intro = await page.evaluate(
  `document.querySelector('p:nth-of-type(2)').innerText`,
);
```
getModelBuilder
@bluemsn
An interactive, runnable TypeScript val by bluemsn
Script
```ts
export async function getModelBuilder(spec: {
  type?: "llm" | "chat";
  provider?: "openai" | "huggingface";
} = { type: "llm", provider: "openai" }, options?: any) {
  const args: Record<string, any> = { ...options };
  if (spec?.provider === "openai")
    args.openAIApiKey = process.env.OPENAI_API_KEY;
  // … dispatches on the spec, e.g.:
  if (matches({ type: "llm", provider: "openai" })(spec)) {
    const { OpenAI } = await import("npm:langchain/llms/openai");
    return new OpenAI(args);
  }
  if (matches({ type: "chat", provider: "openai" })(spec)) {
    const { ChatOpenAI } = await import("npm:langchain/chat_models/openai");
    return new ChatOpenAI(args);
  }
  // …
}
```
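The `matches(...)` dispatch above pairs a structural predicate on the spec with a factory for the matching model class. A self-contained sketch of that pattern, with a hypothetical `matches` helper and string stand-ins for the model classes (no langchain imports):

```typescript
// Minimal structural matcher: returns a predicate that checks whether
// every key/value in `pattern` appears in the candidate object.
function matches<T extends Record<string, unknown>>(pattern: T) {
  return (candidate: Record<string, unknown>) =>
    Object.entries(pattern).every(([k, v]) => candidate[k] === v);
}

// Dispatch table pairing predicates with factories, as in getModelBuilder.
const setup: Array<[(s: Record<string, unknown>) => boolean, () => string]> = [
  [matches({ type: "llm", provider: "openai" }), () => "OpenAI LLM"],
  [matches({ type: "chat", provider: "openai" }), () => "ChatOpenAI"],
];

function build(spec: Record<string, unknown>): string | undefined {
  const entry = setup.find(([pred]) => pred(spec));
  return entry?.[1]();
}
```

The first predicate that matches wins, and an unrecognized spec simply returns `undefined` rather than throwing.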