Search

Results include substring matches and semantically similar vals.
gpt4
@rlimit
OpenAI text completion. https://platform.openai.com/docs/api-reference/completions. val.town and rlimit.com have generously provided a free daily quota. Until the quota is met, there is no need to provide an API key.
Script
import { parentReference } from "https://esm.town/v/stevekrouse/parentReference?v=3";
import { runVal } from "https://esm.town/v/std/runVal";
* OpenAI text completion. https://platform.openai.com/docs/api-reference/completions
* val.town and rlimit.com have generously provided a free daily quota. Until the quota is met, there is no need to provide an API key.
export const gpt4 = async (prompt: string, maxTokens: number = 1000) => {
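The excerpt above elides the request itself; as a hedged sketch, the JSON body such a val would send to the completions endpoint might look like the following. The field names follow OpenAI's completions API reference; the model name is an illustrative assumption, not taken from the val.

```typescript
// Sketch: build the JSON body for OpenAI's text completion endpoint.
// "prompt" and "max_tokens" follow the completions API reference;
// the model name below is an assumption for illustration only.
function buildCompletionRequest(prompt: string, maxTokens: number = 1000) {
  return {
    model: "gpt-3.5-turbo-instruct", // assumed completions-capable model
    prompt,
    max_tokens: maxTokens,
  };
}
```

`buildCompletionRequest("Write a haiku")` produces a body you could POST to https://api.openai.com/v1/completions with an `Authorization: Bearer` header.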
githubactivitysummarizer
@ejfox
GitHub Activity Summarizer This val.town script fetches a user's recent GitHub activity and generates a summarized narrative overview using OpenAI's GPT model. Features Retrieves GitHub activity for a specified user from the past week Summarizes activity using OpenAI's GPT-3.5-turbo model Returns a concise, narrative summary of the user's GitHub contributions Usage Access the script via HTTP GET request: https://ejfox-githubactivitysummarizer.web.val.run/?username=<github_username> Replace <github_username> with the desired GitHub username, e.g. https://ejfox-githubactivitysummarizer.web.val.run/?username=ejfox Note Ensure you have the necessary permissions and comply with GitHub's and OpenAI's terms of service when using this script.
HTTP
# GitHub Activity Summarizer
This val.town script fetches a user's recent GitHub activity and generates a summarized narrative overview using OpenAI's GPT model.
## Features
- Retrieves GitHub activity for a specified user from the past week
- Summarizes activity using OpenAI's GPT-3.5-turbo model
- Returns a concise, narrative summary of the user's GitHub contributions
## Note
Ensure you have the necessary permissions and comply with GitHub's and OpenAI's terms of service when using this script.
// This approach fetches GitHub activity for two users specified in the URL,
// sends it to OpenAI for analysis, and returns collaboration suggestions.
// It uses the GitHub API (which doesn't require authentication for public data)
// and the OpenAI API (which does require an API key).
// Tradeoff: We're using an inline API key for simplicity, which isn't ideal for security.
// Note: This might hit rate limits for the GitHub API due to fetching a year of data.
import { OpenAI } from "https://esm.town/v/std/openai";
const OPENAI_API_KEY = "your_openai_api_key"; // Replace with your actual OpenAI API key
export default async function main(req: Request): Promise<Response> {
const user2Summary = summarizeActivity(user2Data);
const openai = new OpenAI({ apiKey: OPENAI_API_KEY });
const completion = await openai.chat.completions.create({
model: "gpt-3.5-turbo",
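The `summarizeActivity` helper referenced above is not shown in the excerpt. As a hedged sketch, one plausible shape is the following; the event fields follow the GitHub Events API, but the helper itself is an assumption, not the val's code.

```typescript
// Sketch: condense a list of GitHub events into a short textual summary
// that can be handed to the model as context. Event shape follows the
// GitHub Events API; this helper is illustrative, not the val's code.
interface GitHubEvent {
  type: string; // e.g. "PushEvent", "IssuesEvent"
  repo: { name: string };
}

function summarizeActivity(events: GitHubEvent[]): string {
  const counts = new Map<string, number>();
  for (const e of events) {
    counts.set(e.type, (counts.get(e.type) ?? 0) + 1);
  }
  return [...counts.entries()]
    .map(([type, n]) => `${n}x ${type}`)
    .join(", ");
}
```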
generateValCodeAPI
@yawnxyz
An interactive, runnable TypeScript val by yawnxyz
Script
export let generateValCodeAPI = (description: string) =>
generateValCode(
process.env.OPENAI_API_KEY,
description,
braveAgent
@jacoblee93
// Shows how to use the Brave Search tool in a LangChain agent
Script
export const braveAgent = (async () => {
const { ChatOpenAI } = await import(
"https://esm.sh/langchain/chat_models/openai"
);
const { BraveSearch } = await import("https://esm.sh/langchain/tools");
const { initializeAgentExecutorWithOptions } = await import(
"https://esm.sh/langchain/agents"
);
const model = new ChatOpenAI({
temperature: 0,
openAIApiKey: process.env.OPENAI_API_KEY,
const tools = [
VALLE
@liviu
VALL-E LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId. If you want to use OpenAI models you need to set the OPENAI_API_KEY env var. If you want to use Anthropic models you need to set the ANTHROPIC_API_KEY env var. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-v
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environ
GreetingCard
@banebot
Generate a greeting card! This script exposes two endpoints: one serves a form to capture information for creating a greeting card with OpenAI, and the other generates the greeting card from the query parameters provided.
HTTP (deprecated)
# Generate a greeting card!
This script exposes two endpoints, one for a form to capture information to utilize in creating a greeting card utilizing OpenAI, and the other to generate the greeting card based off the query parameters provided.
import { OpenAI } from "https://esm.town/v/std/openai";
import { Hono } from "npm:hono";
import { cors } from "npm:hono/cors";
const openai = new OpenAI(),
chat = openai.chat,
chatCompletions = chat.completions;
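A hedged sketch of how the form's query parameters might be turned into a prompt for the card endpoint. The parameter names (`to`, `from`, `occasion`) are assumptions for illustration, not the val's actual API.

```typescript
// Sketch: build an OpenAI prompt from greeting-card form parameters.
// The parameter names here are illustrative assumptions.
function cardPrompt(params: URLSearchParams): string {
  const to = params.get("to") ?? "friend";
  const from = params.get("from") ?? "me";
  const occasion = params.get("occasion") ?? "your special day";
  return `Write a short greeting card to ${to} from ${from} for ${occasion}.`;
}
```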
valTownChatGPT
@maxm
ChatGPT Implemented in Val Town Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events.
HTTP (deprecated)
# ChatGPT Implemented in Val Town
Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events.
<p align=center>
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
// This uses my personal API key; you'll need to provide your own if
// you fork this. We'll be adding support to the std/openai lib soon!
const openai = new OpenAI();
import { Hono } from "npm:hono@3";
body: msgDiv.textContent,
// Setup the SSE connection and stream back the response. OpenAI handles determining
// which message is the correct response based on what was last read from the
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
let message = await c.req.text();
await openai.beta.threads.messages.create(
c.req.query("threadId"),
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,
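The `"data: " + JSON.stringify(str) + "\n\n"` line above is the Server-Sent Events framing. A small sketch of that convention:

```typescript
// Sketch: frame a payload as a Server-Sent Events message. Each SSE event
// is "data: <payload>\n\n"; JSON-encoding the payload keeps embedded
// newlines from breaking the framing.
function sseFrame(payload: unknown): string {
  return "data: " + JSON.stringify(payload) + "\n\n";
}
```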
cronprompt
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
HTTP (deprecated)
import { OpenAI } from "https://esm.town/v/std/openai";
export async function cronprompt(prompt: string) {
const openai = new OpenAI();
const functionExpression = await openai.chat.completions.create({
messages: [
TopHackerNewsDailyEmail
@browserbase
Browserbase Browserbase offers a reliable, high-performance serverless developer platform to run, manage, and monitor headless browsers at scale. Leverage our infrastructure to power your web automation and AI agents. Get started with Browserbase for free here. If you have any questions, reach out to developer@browserbase.com.
Cron
import { loadPageContent } from "https://esm.town/v/charlypoly/browserbaseUtils";
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
import { z } from "npm:zod";
.describe("Top 5 stories on Hacker News"),
// we create an OpenAI tool that takes our schema as an argument
const extractContentTool: any = {
parameters: zodToJsonSchema(schema),
const openai = new OpenAI();
// We ask OpenAI to extract the content from the given web page.
// The model will call our `extract_content` tool, generating arguments
// that match the schema required by `extract_content`.
const completion = await openai.chat.completions.create({
model: "gpt-4-turbo",
tool_choice: "auto",
// we retrieve the serialized arguments generated by OpenAI
const result = completion.choices[0].message.tool_calls![0].function.arguments;
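The excerpt builds the tool's `parameters` from a Zod schema via `zodToJsonSchema`. A hand-written sketch of the resulting tool-definition shape follows; the story fields (`title`, `url`) are illustrative assumptions.

```typescript
// Sketch: the tool-definition shape passed to chat.completions.create.
// This hand-writes the JSON Schema that zodToJsonSchema would generate;
// the story fields (title, url) are illustrative assumptions.
const extractContentTool = {
  type: "function",
  function: {
    name: "extract_content",
    description: "Top 5 stories on Hacker News",
    parameters: {
      type: "object",
      properties: {
        stories: {
          type: "array",
          items: {
            type: "object",
            properties: {
              title: { type: "string" },
              url: { type: "string" },
            },
            required: ["title", "url"],
          },
        },
      },
      required: ["stories"],
    },
  },
};
```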
openAIStreaming
@maxm
OpenAI Streaming - Assistant and Threads An example of using OpenAI to stream back a chat with an assistant. This example sends two messages to the assistant and streams back the responses when they come in. Example response: user > What should I build today? ................ assistant > Here are a few fun Val ideas you could build on Val Town: 1. **Random Joke Generator:** Fetch a random joke from an API and display it. 2. **Daily Weather Update:** Pull weather data for your location using an API and create a daily summary. 3. **Mini Todo List:** Create a simple to-do list app with add, edit, and delete functionalities. 4. **Chuck Norris Facts:** Display a random Chuck Norris fact sourced from an API. 5. **Motivational Quote of the Day:** Fetch and display a random motivational quote each day. Which one sounds interesting to you? user > Cool idea, can you make it even cooler? ................... assistant > Sure, let's add some extra flair to make it even cooler! How about creating a **Motivational Quote of the Day** app with these features: 1. **Random Color Theme:** Each day, the background color/theme changes randomly. 2. **Quote Sharing:** Add an option to share the quote on social media. 3. **Daily Notifications:** Send a daily notification with the quote of the day. 4. **User Preferences:** Allow users to choose categories (e.g., success, happiness, perseverance) for the quotes they receive. Would you like some code snippets or guidance on implementing any of these features?
HTTP (deprecated)
# OpenAI Streaming - Assistant and Threads
An example of using OpenAI to stream back a chat with an assistant. This example sends two messages to the assistant and streams back the responses when they come in.
Example response:
import OpenAI from "npm:openai";
const openai = new OpenAI();
import process from "node:process";
// Define our assistant.
const assistant = await openai.beta.assistants.create({
name: "Val Tutor",
// Create a thread to chat in.
const thread = await openai.beta.threads.create();
// These are the messages we'll send to the assistant.
}, 100);
const message = await openai.beta.threads.messages.create(
thread.id,
{ role: "user", content: messages[i] },
const run = openai.beta.threads.runs.stream(thread.id, {
assistant_id: assistant.id,
browserlessPuppeteerExample
@nbbaier
An interactive, runnable TypeScript val by nbbaier
Script
browserWSEndpoint: `wss://chrome.browserless.io?token=${process.env.browserlessKey}`,
const page = await browser.newPage();
await page.goto("https://en.wikipedia.org/wiki/OpenAI");
const intro = await page.evaluate(
`document.querySelector('p:nth-of-type(2)').innerText`,
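The `page.evaluate` call above grabs the second paragraph via the DOM. As a rough, DOM-free sketch of the same extraction on simple markup (a real browser context should use `querySelector` as the val does):

```typescript
// Sketch: pull the text of the second <p> element out of an HTML string.
// A regex stand-in for `document.querySelector('p:nth-of-type(2)')`; it
// only handles simple, well-formed markup and is for illustration only.
function secondParagraph(html: string): string | null {
  const matches = [...html.matchAll(/<p>(.*?)<\/p>/gs)];
  return matches[1]?.[1] ?? null;
}
```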
generateValCodeAPI
@andreterron
An interactive, runnable TypeScript val by andreterron
Script
export let generateValCodeAPI = (description: string) =>
generateValCode(
process.env.VT_OPENAI_KEY,
description,
textToImageDalle
@stevekrouse
// Forked from @hootz.textToImageDalle
Script
export const textToImageDalle = async (
  openAIToken: string,
  prompt: string,
) => {
  const { data } = await fetchJSON(
    "https://api.openai.com/v1/images/generations",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${openAIToken}`,
      },
      body: JSON.stringify({
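A hedged sketch of the request body this snippet POSTs. `prompt`, `n`, and `size` follow the OpenAI image generation API; the defaults chosen here are assumptions, not the val's.

```typescript
// Sketch: serialize the body for POST /v1/images/generations. Field names
// follow the image generation API; default n and size are assumptions.
function dalleRequestBody(prompt: string, n = 1, size = "1024x1024"): string {
  return JSON.stringify({ prompt, n, size });
}
```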
getJoke
@thu
An interactive, runnable TypeScript val by thu
Script
export const getJoke = (async () => {
const { OpenAI } = await import("npm:openai");
const openai = new OpenAI();
const result = await openai.chat.completions.create({
model: "gpt-3.5-turbo",
easyAQI
@datadelaurier
easyAQI Get the Air Quality Index (AQI) for a location via open data sources. It's "easy" because it strings together multiple lower-level APIs to give you a simple interface for AQI. Accepts a location in basically any string format (ie "downtown manhattan") Uses Nominatim to turn that into longitude and latitude Finds the closest sensor to you on OpenAQ Pulls the readings from OpenAQ Calculates the AQI via EPA's NowCAST algorithm Uses EPA's ranking to classify the severity of the score (ie "Unhealthy for Sensitive Groups") It uses blob storage to cache the OpenAQ location ID for your location string to skip a couple steps for the next time. Example usage @stevekrouse.easyAQI({ location: "brooklyn navy yard" }) // Returns { "aqi": 23.6, "severity": "Good" } Forkable example: val.town/v/stevekrouse.easyAQIExample Also useful for getting alerts when the AQI is unhealthy near you: https://www.val.town/v/stevekrouse.aqi
Script
6. Uses EPA's ranking to classify the severity of the score (ie "Unhealthy for Sensitive Groups")
It uses blob storage to cache the OpenAQ location ID for your location string to skip a couple steps for the next time.
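Step 5 above applies EPA's NowCast weighting to hourly readings. A minimal sketch of that calculation, with hourly concentrations ordered most recent first and the weight floored at 0.5 per the algorithm:

```typescript
// Sketch: EPA NowCast weighted average over recent hourly concentrations,
// most recent first. w is the min/max ratio, floored at 0.5; each older
// hour is down-weighted by another factor of w.
function nowcast(hourly: number[]): number {
  const w = Math.max(0.5, Math.min(...hourly) / Math.max(...hourly));
  let num = 0;
  let den = 0;
  hourly.forEach((c, i) => {
    num += c * w ** i;
    den += w ** i;
  });
  return num / den;
}
```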
## Example usage
…