Search

Results include substring matches and semantically similar vals.
upgradeHTTPPreviewVals
@stevekrouse
Auto-Upgrade for HTTP Preview

This val is experimentally testing if we can use an LLM to determine whether an old-style HTTP val needs to be upgraded for the new HTTP runtime, currently in preview. You can read more about the breaking change and upgrade process here: https://blog.val.town/blog/http-preview/#breaking-changes

In some light testing, it seems like ChatGPT 3.5 and 4o are both bad at this task, so I'm pausing experimenting with this for now. Example output from 4o:

```json
[
  {
    "name": "harlequinChickadee",
    "probabilityUpgradeNeeded": true,
    "reason": "The current code structure has several functions and program logic outside the main handler, including word selection, game state management, and SVG generation. These parts would not re-run with the new runtime, potentially affecting functionality. They need to be moved inside the handler to ensure consistent behavior across requests."
  },
  {
    "name": "redElephant",
    "probabilityUpgradeNeeded": "100%",
    "reason": "The initialization of `fs` and `vscode` objects should occur inside the handler in the new runtime in order to ensure that they are freshly created for each request. This is critical since the new runtime does not rerun code outside of the handler for each request.."
  },
  {
    "name": "untitled_indigoNightingale",
    "probabilityUpgradeNeeded": false,
    "reason": "The code initializes and configures the Hono app outside of the handler, but it does not appear to have any stateful logic that would need to be re-calculated with each request. Each request will call the handler provided by Hono and should behave the same in the new runtime."
  },
  {
    "name": "untitled_pinkRoundworm",
    "probabilityUpgradeNeeded": true,
    "reason": "The functions `addComment` and `getComments` as well as the initialization of the KEY variable perform actions that are intended to be run per request. These need to be moved inside the relevant HTTP handler to ensure the behavior remains consistent in the new runtime."
  },
  {
    "name": "untitled_harlequinIguana",
    "probabilityUpgradeNeeded": false,
    "reason": "The provided code defines the handler directly without any side effects or additional code outside the handler. The behavior should remain the same in the new runtime."
  },
  {
    "name": "untitled_moccasinHeron",
    "probabilityUpgradeNeeded": false,
    "reason": "The code outside the handler is just a constant string declaration, which does not change behavior between requests. The handler itself handles requests correctly and independently."
  },
  {
    "name": "untitled_maroonSwallow",
    "probabilityUpgradeNeeded": false,
    "reason": "All the code, including the check for authentication, is inside the handler function. This means the behavior will stay the same with the new runtime."
  },
  {
    "name": "wikiOG",
    "probabilityUpgradeNeeded": true,
    "reason": "The function `getWikipediaInfo` defined outside of the handler makes network requests and processes data for each request. In the new runtime, this function would only be executed once and cached. To ensure the same behavior in the new runtime, this function should be moved into the handler."
  },
  {
    "name": "parsePostBodyExample",
    "probabilityUpgradeNeeded": false,
    "reason": "All the code is inside the handler, so the behavior will remain consistent in the new runtime."
  },
  {
    "name": "discordEventReceiver",
    "probabilityUpgradeNeeded": false,
    "reason": "All the relevant code for handling requests and logging input is inside the handler. No code needs to be moved for the new runtime to function correctly."
  }
]
```

Feel free to fork this and try it yourself! If you could get it working, it'd be a big help for us as we upgrade the thousands of HTTP vals.

Future ideas:
* Better LLM (Claude 3.5)
* Better prompt
* More examples
* JSON mode
* Having it reply with upgraded code
* Send pull requests to users with public vals that probably need upgrading
Script
import { db } from "https://esm.town/v/sqlite/db";
import OpenAI from "npm:openai";
async function getVals(username, type, limit) {
async function checkHTTPPreviewUpgrade(code) {
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
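The matched lines above are only fragments; here is a minimal sketch of how a checker like `checkHTTPPreviewUpgrade` might prompt the model and request a JSON verdict. The prompt wording, model choice, and response shape are assumptions, not the val's actual code.

```ts
import OpenAI from "npm:openai";

// Hypothetical sketch: ask the model whether a val's code needs the
// new-runtime upgrade. JSON mode is one of the "Future ideas" above.
async function checkHTTPPreviewUpgrade(code: string) {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    response_format: { type: "json_object" },
    messages: [
      {
        role: "system",
        content: "You review Val Town HTTP vals. In the new runtime, code outside "
          + "the handler no longer re-runs on every request. Reply with JSON: "
          + '{ "probabilityUpgradeNeeded": boolean, "reason": string }',
      },
      { role: "user", content: code },
    ],
  });
  return JSON.parse(completion.choices[0].message.content ?? "{}");
}
```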
AIMovie
@movienerd
* This program creates a movie recommendation system based on moods.
* It uses OpenAI to generate movie recommendations based on the user's selected mood and feedback.
* The program uses React for the frontend and handles state management for user interactions.
* It uses the Val Town OpenAI proxy for backend API calls.
* Movies are saved in SQLite to prevent recommending the same movie twice.
* The SQLite schema version has been updated to fix the 404 error.
* A dancing popcorn emoji has been added to the loading message.
* The color scheme has been updated to a monochrome look with a modern style and a nice font.
* The "More moods" button text has been changed to "Ask the AI for different moods".
* The mood generation now keeps track of previously generated moods to avoid repetition.
* A list of movies that have been liked, disliked, or not seen is displayed at the bottom of the page.
* The JSON parsing has been made more robust to handle non-standard responses.
HTTP
* This program creates a movie recommendation system based on moods.
* It uses OpenAI to generate movie recommendations based on the user's selected mood and feedback.
* The program uses React for the frontend and handles state management for user interactions.
* It uses the Val Town OpenAI proxy for backend API calls.
* Movies are saved in SQLite to prevent recommending the same movie twice.
if (url.pathname === "/recommend" && request.method === "POST") {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
try {
prompt += ` Recommend a different movie for when they're feeling ${mood}.`;
const completion = await openai.chat.completions.create({
messages: [
if (url.pathname === "/more-moods" && request.method === "POST") {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
try {
const { currentMoods } = await request.json();
const completion = await openai.chat.completions.create({
messages: [
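The excerpt shows the val importing the Val Town OpenAI proxy inside the handler. A minimal sketch of what the /recommend branch might do, assuming the request body carries a mood and the list of already-recommended movies (that shape is an assumption):

```ts
// Hypothetical sketch of the /recommend branch inside the HTTP handler;
// the request body shape ({ mood, previousMovies }) is an assumption.
async function recommend(request: Request): Promise<Response> {
  const { OpenAI } = await import("https://esm.town/v/std/openai");
  const openai = new OpenAI();
  const { mood, previousMovies } = await request.json();
  let prompt = `Movies already recommended: ${previousMovies.join(", ")}.`;
  prompt += ` Recommend a different movie for when they're feeling ${mood}.`;
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
  });
  // The readme notes the JSON parsing was made more robust; one common
  // tactic is pulling the first {...} span out of the reply before parsing.
  const raw = completion.choices[0].message.content ?? "";
  const match = raw.match(/\{[\s\S]*\}/);
  return Response.json(match ? JSON.parse(match[0]) : { error: "no movie found" });
}
```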
excessPlumFrog
@trantion
VALL-E

LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it.

* Fork this val to your own profile.
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` env var.
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` env var.
* Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-v
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environ
telegramDalleBot
@hootz
Telegram DALLE Bot

A personal telegram bot you can message to create images with OpenAI's DALLE ✨

Set up yours:
1. fork this val
2. speak to telegram's https://t.me/botfather to create a bot and obtain a bot token
3. set the bot token as a val town secret called `telegramDalleBotToken`
4. add a random string as a val town secret called `telegramDalleBotWebhookSecret`
5. set up your webhook with telegram like this:

```ts
// paste and run this in your workspace on here
@vtdocs.telegramSetWebhook(@me.secrets.telegramDalleBotToken, {
  url: /* your fork's express endpoint (click the three dots on a val) */,
  allowed_updates: ["message"],
  secret_token: @me.secrets.telegramDalleBotWebhookSecret,
});
```

6. message your bot some prompts!

(if you get stuck, you can refer to the telegram echo bot guide from docs.val.town)
Script
# Telegram DALLE Bot
A personal telegram bot you can message to create images with OpenAI's [DALLE](https://openai.com/dall-e-2) ✨
![DALLE: A Macintosh II sitting on a desk, painted by Picasso in his blue period.](https://i.imgur.com/uJrP5mE.png)
const imageURL =
(await textToImageDalle(
process.env.openai,
text,
1,
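The `textToImageDalle` helper from vtdocs takes an API key, the prompt text, and an image count, per the excerpt. A rough modern equivalent using the OpenAI SDK's images API directly (an assumption; the original helper predates this SDK):

```ts
import OpenAI from "npm:openai";

// Hypothetical equivalent of textToImageDalle(process.env.openai, text, 1);
// the size value and example prompt are assumptions.
const text = "A Macintosh II sitting on a desk, painted by Picasso in his blue period.";
const openai = new OpenAI({ apiKey: process.env.openai });
const result = await openai.images.generate({ prompt: text, n: 1, size: "512x512" });
const imageURL = result.data[0].url;
```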
twitterDailyDigest
@geoffreylitt
Daily Twitter "Important Updates" Digest You like getting important updates from Twitter—new projects or writing or companies to learn about. But you don't like being addicted to the feed constantly. This val lets you get daily updates from specific people you want to follow closely. It uses AI to filter out random shitposts and only get the important updates. 1. Authentication You'll need a Twitter Bearer Token. Follow these instructions to get one. Unfortunately it costs $100 / month to have a Basic Twitter Developer account. If you subscribe to Val Town Pro, you can ask Steve Krouse to borrow his token. Also rate limits seem really severe which limits how useful this is :( Need to figure out workarounds... 2. Query Update the list of usernames to people you care about; change the AI prompt if you want different filtering. 3. Notification Sends a daily email. Todos: this should filter the twitter API call to only tweets since the last run. some kind of caching to avoid rate limiting would be nice to use the user's feed instead of a username list... but not sure how easy that is
Cron
import { email } from "https://esm.town/v/std/email?v=12";
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
import { discordWebhook } from "https://esm.town/v/stevekrouse/discordWebhook";
// "sliminality",
const openai = new OpenAI();
export async function twitterAlert({ lastRunAt }: Interval) {
async function filterTweets(tweets) {
const completion = await openai.chat.completions.create({
messages: [
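A minimal sketch of the AI filtering step, assuming tweets come in with `id` and `text` fields and the model is asked for the ids worth keeping (the prompt wording and shapes are assumptions, not the val's actual code):

```ts
import { OpenAI } from "https://esm.town/v/std/openai?v=4";

// Hypothetical sketch of filterTweets; drops tweets the model deems unimportant.
async function filterTweets(tweets: { id: string; text: string }[]) {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content: "Keep only important updates (new projects, writing, launches). "
          + "Drop shitposts and chatter. Reply with a JSON array of tweet ids to keep.",
      },
      { role: "user", content: JSON.stringify(tweets) },
    ],
  });
  const keep: string[] = JSON.parse(completion.choices[0].message.content ?? "[]");
  return tweets.filter((t) => keep.includes(t.id));
}
```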
travelSurveyApp
@ntontischris
@jsxImportSource https://esm.sh/react
HTTP
try {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
weatherGPT
@ljus
WeatherGPT

Using OpenAI chat completion (GPT3) with function calls to the SMHI API. The API is instructed to use the current time in Europe/Stockholm timezone. If the message can not be handled with the weather API endpoint, the OpenAI assistant will reply instead.

Relevant API documentation:
* SMHI, forecast documentation
* OPEN AI, GPT function calling documentation

How to use this endpoint:

GET: RUN_ENDPOINT?args=["A weather question"]

Response:
```
{
  answer?: string  // If the message could not be answered with the SMHI API
  error?: string   //
  data?: unknown   // the actual data returned from SMHI, if the API is called
  summary?: string // a summary of the data, by GPT API
}
```

Examples:
* How is the weather in the Capital of Sweden tomorrow.
* How is the weather at Liseberg on Friday.

Packages used:
* zod: for describing the SMHI API response and function API input
* zod-to-json-schema: transform the zod schema to json schema (readable by the GPT API)
* gpt3-tokenizer: count the number of tokens
* date-fns-tz: to handle dates in a specific timezone (Europe/Stockholm)
HTTP
# WeatherGPT
Using OpenAI chat completion (GPT3) with function calls to [SMHI](https://en.wikipedia.org/wiki/Swedish_Meteorological_and_Hy
The API is instructed to use the current time in Europe/Stockholm timezone.
* [SMHI, forecast documentation](https://opendata.smhi.se/apidocs/metfcst/get-forecast.html)
* [OPEN AI, GPT function calling documentation](https://platform.openai.com/docs/guides/gpt/function-calling)
## How to use this endpoint
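For example, calling the endpoint could look like this (the hostname follows Val Town's usual `username-valname.web.val.run` pattern, but treat it as a placeholder for your own deployment):

```ts
// Hypothetical call; substitute your fork's URL.
const question = ["How is the weather in the Capital of Sweden tomorrow?"];
const res = await fetch(
  "https://ljus-weathergpt.web.val.run?args=" + encodeURIComponent(JSON.stringify(question)),
);
const { answer, data, summary, error } = await res.json();
```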
oracle
@blur
Unlimited Anonymous Emails

Create anonymous emails and forward their results to your inbox.
Email
import { AttachmentData, email } from "https://esm.town/v/std/email?v=13";
import { OpenAI } from "https://esm.town/v/std/openai";
const { author, name } = extractValInfo(import.meta.url);
export async function main(e: Email) {
const openai = new OpenAI();
let attachments: AttachmentData[] = [];
disposition: "attachment",
const completion = await openai.chat.completions.create({
messages: [
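A minimal sketch of what an email handler like this might do: take the inbound message, ask the model for a reply, and forward the result to your inbox (the prompt and field choices are assumptions, not the val's actual logic):

```ts
import { email } from "https://esm.town/v/std/email?v=13";
import { OpenAI } from "https://esm.town/v/std/openai";

// Hypothetical sketch of the email handler.
export async function main(e: Email) {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: e.text ?? "" }],
  });
  await email({
    subject: `Re: ${e.subject}`,
    text: completion.choices[0].message.content ?? "",
  });
}
```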
muddyAmethystLimpet
@pdebieamzn
// This calendar app will allow users to upload a PDF, extract events from it using OpenAI's GPT model,
HTTP
// This calendar app will allow users to upload a PDF, extract events from it using OpenAI's GPT model,
// and display them on a big calendar. We'll use react-big-calendar for the calendar component,
async function server(request: Request): Promise<Response> {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const pdfExtractText = await import("https://esm.town/v/pdebieamzn/pdfExtractText");
const fullText = await pdfExtractText.default(arrayBuffer);
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
} catch (error) {
console.error('Error parsing OpenAI response:', error);
console.log('Raw response:', completion.choices[0].message.content);
if (!Array.isArray(events)) {
console.error('Unexpected response format from OpenAI');
return new Response(JSON.stringify({ error: 'Unexpected response format' }), { status: 500, headers: { 'Content-Type':
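A minimal sketch of the extraction step, assuming events shaped for react-big-calendar (`title`, `start`, `end`); the prompt wording is an assumption:

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

// Hypothetical extraction step; assumes fullText holds the PDF's extracted text.
async function extractEvents(fullText: string) {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content: 'Extract calendar events from the text. Reply with only a JSON array of '
          + '{ "title": string, "start": ISO 8601 string, "end": ISO 8601 string }.',
      },
      { role: "user", content: fullText },
    ],
  });
  let events: unknown;
  try {
    events = JSON.parse(completion.choices[0].message.content ?? "[]");
  } catch (error) {
    console.error("Error parsing OpenAI response:", error);
    console.log("Raw response:", completion.choices[0].message.content);
    events = [];
  }
  return Array.isArray(events) ? events : [];
}
```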
memorySampleSummary
@webup
An interactive, runnable TypeScript val by webup
Script
const builder = await getMemoryBuilder({
type: "summary",
provider: "openai",
const memory = await builder();
await memory.saveContext({ input: "My favorite sport is soccer" }, {
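The `saveContext` call matches LangChain's memory API, so `getMemoryBuilder` with `type: "summary"` presumably wraps something like LangChain's summary memory. A rough direct equivalent (an assumption about what the helper produces):

```ts
import { ConversationSummaryMemory } from "npm:langchain/memory";
import { ChatOpenAI } from "npm:@langchain/openai";

// Rough LangChain equivalent of a "summary" memory backed by OpenAI.
const memory = new ConversationSummaryMemory({
  llm: new ChatOpenAI({ model: "gpt-4o-mini" }),
});
await memory.saveContext(
  { input: "My favorite sport is soccer" },
  { output: "Nice! I'll remember that." },
);
console.log(await memory.loadMemoryVariables({})); // { history: "...running summary..." }
```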
prompt_to_code_auto_refresh_codebed
@trob
@jsxImportSource https://esm.sh/react
HTTP
if (request.method === "POST" && new URL(request.url).pathname === "/generate") {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const { prompt } = await request.json();
const completion = await retryWithBackoff(() =>
openai.chat.completions.create({
messages: [
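`retryWithBackoff` isn't shown in the matched lines; a generic sketch of such a helper with exponential backoff (the name matches the excerpt, but the signature and defaults are assumptions):

```ts
// Hypothetical retry helper: retries the given async fn with exponential backoff.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0;; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      // Wait 500ms, 1s, 2s, ... before the next attempt.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}
```

Wrapping the completion call this way lets transient API failures retry instead of failing the whole request.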
VALLE
@MichaelNollox
VALL-E

LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it.

* Fork this val to your own profile.
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` env var.
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` env var.
* Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-v
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environ
pythonLearningApp
@mrshorts
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
console.log('Parsed topic:', topic);
// Dynamically import OpenAI with error handling
let OpenAI;
try {
const module = await import("https://esm.town/v/std/openai");
OpenAI = module.OpenAI;
} catch (importError) {
console.error('OpenAI import error:', importError);
return new Response(JSON.stringify({
error: `Failed to import OpenAI: ${importError.message}`
}), {
status: 500,
headers: { 'Content-Type': 'application/json' }
});
// Create OpenAI instance
const openai = new OpenAI();
// Generate explanation
const completion = await openai.chat.completions.create({
messages: [
valreadmegenerator
@prashamtrivedi
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
valName = valNamePart;
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const valTownClient = new ValTown({
try {
const completion = await openai.chat.completions.create({
model: "gpt-4o",
pageshot
@yawnxyz
// Inlined content from ./middleware.js
HTTP
"settings": {
"model": "gpt-4o-mini",
"provider": "openai",
"system": "please summarize the following article in a 2-4 sentences, describing it from a 3rd person point of vi
"prompt": "{jina.content}"