Search

Results include substring matches and semantically similar vals.
multilingualchatroom
@trob
A simple chat room to share with friends who speak other languages, where each user can write in their own language and see everyone else's messages in their own language. Click and hold a translated message to see the original. Open the app in a new window to start your own unique chatroom that you can share with friends via the room URL.
TODO:
* BUG: fix the issue that keeps old usernames in the "[User] is typing" section after a user changes their name.
* BUG: username editing with backspace is glitchy.
* UI: update the title for each unique chatroom to make the difference clear.
* UI: make the layout mobile friendly.
* Feature: let the message receiver select a confusing part of a translation so the author sees those words highlighted and can reword the message.
* Feature: bump a translation to a higher-end LLM for a more accurate translation.
* Feature: use prior chat context for more accurate translations.
* Feature: add a video feed for non-verbal cues while chatting.
HTTP
const { blob } = await import("https://esm.town/v/std/blob");
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
// Simple rate limiting
try {
const completion = await openai.chat.completions.create({
messages: [
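A minimal sketch of the per-message translation step this chatroom describes, using the std OpenAI wrapper shown in the preview; the function name, prompt wording, and model choice are assumptions, not the val's actual code.

```ts
// Hypothetical helper: translate one chat message into the reader's language.
// Prompt, model, and signature are illustrative assumptions.
import { OpenAI } from "https://esm.town/v/std/openai";

export async function translateMessage(text: string, targetLang: string): Promise<string> {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: `Translate the user's message into ${targetLang}. Reply with the translation only.` },
      { role: "user", content: text },
    ],
  });
  return completion.choices[0].message.content ?? text;
}
```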
yc_finder
@stevekrouse
YC Company Finder. In an effort to get more companies using Val Town, I wanted to see which of our existing users were part of a YC company. I found this wonderful Google Sheet with all YC companies, and had Townie make me this interface to join it with an exported CSV of our users table. All the compute happens client-side, so it's safe for you to run on your customer lists too. If you want any changes, fork it & have Townie customize it for you. Feel free to send me a pull request or leave a comment :)
HTTP
brian@airbnb.com,Brian Chesky
drew@dropbox.com,Drew Houston
sam@openai.com,Sam Altman
tim@apple.com,Tim Cook
jeff@amazon.com,Jeff Bezos
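A hedged sketch of the client-side join the description mentions: matching exported user emails against YC company domains entirely in the browser. The column names and record shapes here are assumptions for illustration.

```ts
// Illustrative client-side join; no data leaves the browser.
// Field names ("email", "name", "domain", "company") are assumptions for this sketch.
function findYcUsers(
  users: { email: string; name: string }[],
  ycCompanies: { company: string; domain: string }[],
) {
  const byDomain = new Map(ycCompanies.map(c => [c.domain.toLowerCase(), c.company]));
  return users.flatMap(u => {
    const domain = u.email.split("@")[1]?.toLowerCase();
    const company = domain ? byDomain.get(domain) : undefined;
    return company ? [{ ...u, company }] : [];
  });
}
```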
twitterAlert
@eseidel
Twitter/𝕏 keyword alerts. Custom notifications for when you, your company, or anything you care about is mentioned on Twitter. If you believe in Twitter/𝕏-driven development, you want to get notified when anyone is talking about your tech, even if they're not tagging you. To get this Twitter Alert bot running for you, fork this val and modify the query and where the notification gets delivered.
1. Query: change the keywords for what you want to get notified about and the excludes for what you don't. You can use Twitter's search operators to customize your query, combining keywords, filtering out others, and much more.
2. Notification: below I'm sending these mentions to a public channel in our company Discord, but you can customize that to whatever you want: @std/email, Slack, Telegram, etc.
Twitter Data & Limitations: the Twitter API has become unusable. This val gets Twitter data via SocialData, an affordable Twitter scraping API. To make this val easy for you to fork and use without signing up for another API, I am proxying SocialData via @stevekrouse/socialDataProxy. Val Town Pro users can call this proxy 100 times per day, so be sure not to set this cron to run more than once every 15 minutes. If you want to run it more often, get your own SocialData API token and pay for it directly.
Cron
import { zodResponseFormat } from "https://esm.sh/openai/helpers/zod";
import { z } from "https://esm.sh/zod";
import { OpenAI } from "https://esm.town/v/std/openai";
import { discordWebhook } from "https://esm.town/v/stevekrouse/discordWebhook";
.join(" OR ") + " " + excludes;
const openai = new OpenAI();
const RelevanceSchema = z.object({
try {
const completion = await openai.beta.chat.completions.parse({
model: "gpt-4o-mini",
} catch (error) {
console.error("Error parsing OpenAI response:", error);
return { isRelevant: false, confidence: 0, reason: "Error in processing" };
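A sketch of how the query and notification pieces described above might fit together; the keyword list, exclude string, and env var name are placeholders, and discordWebhook is the helper imported in the preview.

```ts
import { discordWebhook } from "https://esm.town/v/stevekrouse/discordWebhook";

// Hypothetical query: keywords joined with OR, excludes appended as negative search terms.
const keywords = ["val.town", "val town"];
const excludes = "-filter:retweets";
const query = keywords.join(" OR ") + " " + excludes;

// Deliver a matching mention to a Discord channel; the env var name is a placeholder.
await discordWebhook({
  url: Deno.env.get("mentionsDiscord") ?? "",
  content: `New mention matching ${query}`,
});
```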
add_to_habitify_from_todoist_w_ai
@nerdymomocat
// if you added new items to habitify after running this script
Cron
import Jimp from "npm:jimp";
import OpenAI from "npm:openai";
import { z } from "npm:zod";
const HABITIFY_API_KEY = process.env.HABITIFY_API_KEY;
const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
const DEF_TIMEZONE = "America/Los_Angeles"; // Get your timezone from here: https://stackoverflow.com/a/54500197
const todoistapi = new TodoistApi(TODOIST_API_KEY);
const oai = new OpenAI({
apiKey: OPENAI_API_KEY ?? undefined,
const client = Instructor({
IndirectionAPI
@jxnblk
// Test without using this?
Script
const tabooResponse = await fetchOpenAIText(tabooPrompt);
return fetchOpenAIStream(cluePrompt);
async function fetchOpenAIText(content: string): Promise<string> {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
export async function fetchOpenAIStream(content: string): Promise<Response> {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
cron_client_side_script_fork
@stevekrouse
CronGPT. This is a minisite to help you create cron expressions, particularly for crons on Val Town. It was inspired by Cron Prompt, but also does the timezone conversion from wherever you are to UTC (typically the server timezone).
Tech:
* Hono for routing (GET / and POST /compile)
* Hono JSX
* HTML (probably overcomplicates things; should remove)
* @stevekrouse/openai, which is a light wrapper around @std/openai
HTTP
* HTML (probably overcomplicates things; should remove)
* @stevekrouse/openai, which is a light wrapper around @std/openai
/** @jsxImportSource npm:hono@3/jsx */
import { chat } from "https://esm.town/v/stevekrouse/openai";
import cronstrue from "npm:cronstrue";
import { Hono } from "npm:hono@3";
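A rough sketch of the two routes the README describes (GET / and POST /compile). It calls @std/openai directly because the wrapper's exact signature isn't shown in the preview; the form markup and prompt wording are placeholders.

```ts
/** @jsxImportSource npm:hono@3/jsx */
import { Hono } from "npm:hono@3";
import { OpenAI } from "https://esm.town/v/std/openai";

const app = new Hono();

// GET /: serve a minimal form (placeholder markup).
app.get("/", (c) => c.html(<form method="post" action="/compile"><input name="prompt" /></form>));

// POST /compile: turn a natural-language schedule into a cron expression.
app.post("/compile", async (c) => {
  const body = await c.req.parseBody();
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{
      role: "user",
      content: `Convert this schedule to a UTC cron expression; reply with the expression only: ${body.prompt}`,
    }],
  });
  return c.text(completion.choices[0].message.content ?? "");
});

export default app.fetch;
```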
ChatGPTTextDefinitionUserscript
@xsec
// @name Improved ChatGPT Text Definition
Script
// Configuration
const API_CONFIG = {
url: "https://willthereader-openaidefiner.web.val.run",
method: "POST",
mode: "cors",
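A small, hypothetical helper built on the config shown above: POST the selected text to the val endpoint and return its reply. The JSON body shape ({ text }) is an assumption, not the endpoint's documented API.

```ts
const API_CONFIG = {
  url: "https://willthereader-openaidefiner.web.val.run",
  method: "POST",
  mode: "cors" as RequestMode,
};

// Hypothetical userscript helper; the request body shape is an assumption.
async function defineSelection(selection: string): Promise<string> {
  const response = await fetch(API_CONFIG.url, {
    method: API_CONFIG.method,
    mode: API_CONFIG.mode,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: selection }),
  });
  return await response.text();
}
```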
get_weather_message
@cosmo
An interactive, runnable TypeScript val by cosmo
Script
import { chat } from "https://esm.town/v/cosmo/chat_openai";
import { getCurrentWeather } from "https://esm.town/v/cosmo/get_current_weather";
export async function getWeatherMessage(apiKey, latitude, longitude) {
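A minimal usage sketch for the exported function; the import URL follows the standard esm.town pattern, the coordinates are placeholders, and it assumes the val returns a ready-to-send weather message string.

```ts
import { getWeatherMessage } from "https://esm.town/v/cosmo/get_weather_message";

// Illustrative call; coordinates are placeholders (Madrid).
const message = await getWeatherMessage(Deno.env.get("WEATHER_API_KEY"), 40.4168, -3.7038);
console.log(message);
```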
SermonGPTAPI
@mjweaver01
An interactive, runnable TypeScript val by mjweaver01
HTTP
if (request.method === "POST" && new URL(request.url).pathname === "/stream") {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const { question } = await request.json();
const season = getSeason(currentDate);
const stream = await openai.chat.completions.create({
messages: [
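A sketch of the /stream handler pattern visible in the preview: parse the question from the POST body, request a streaming completion, and pipe the deltas back as plain text. The model choice and prompt framing are placeholders, and the season logic is omitted.

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function (request: Request): Promise<Response> {
  if (request.method === "POST" && new URL(request.url).pathname === "/stream") {
    const { question } = await request.json();
    const openai = new OpenAI();
    // Streaming completion; model and prompt framing are placeholders.
    const stream = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      stream: true,
      messages: [{ role: "user", content: question }],
    });
    const body = new ReadableStream({
      async start(controller) {
        const encoder = new TextEncoder();
        for await (const chunk of stream) {
          controller.enqueue(encoder.encode(chunk.choices[0]?.delta?.content ?? ""));
        }
        controller.close();
      },
    });
    return new Response(body, { headers: { "Content-Type": "text/plain; charset=utf-8" } });
  }
  return new Response("Not found", { status: 404 });
}
```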
getValsContextWindow
@janpaul123
An interactive, runnable TypeScript val by janpaul123
Script
el.attributes.filter(a => a.name === "href").map(a => a.value)
prompt: "Write a val that uses OpenAI",
code: `import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
"messages": [
topHNThreadByHour
@elsif_maj
// set at Thu Nov 30 2023 14:22:53 GMT+0000 (Coordinated Universal Time)
Email
// set at Thu Nov 30 2023 14:22:53 GMT+0000 (Coordinated Universal Time)
for 18:00 is: The Midwit Smart Home","Top thread on Hackernews for 19:00 is: OpenAI is too cheap to beat","Top thread on Hack
cron
@stevekrouse
CronGPT. This is a minisite to help you create cron expressions, particularly for crons on Val Town. It was inspired by Cron Prompt, but also does the timezone conversion from wherever you are to UTC (typically the server timezone).
Tech:
* Hono for routing (GET / and POST /compile)
* Hono JSX
* HTMX (probably overcomplicates things; should remove)
* @stevekrouse/openai, which is a light wrapper around @std/openai
I'm finding HTMX a bit overpowered for this, so I have two experimental forks without it:
1. Vanilla client-side JavaScript: @stevekrouse/cron_client_side_script_fork
2. Client-side ReactJS (no SSR): @stevekrouse/cron_client_react_fork
I think (2), client-side React without any SSR, is the simplest architecture. Maybe I will move to that.
HTTP
* HTMX (probably overcomplicates things; should remove)
* @stevekrouse/openai, which is a light wrapper around @std/openai
I'm finding HTMX a bit overpowered for this, so I have two experimental forks without it:
/** @jsxImportSource npm:hono@3/jsx */
import { chat } from "https://esm.town/v/stevekrouse/openai";
import cronstrue from "npm:cronstrue";
import { Hono } from "npm:hono@3";
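A simplified sketch of the timezone step this README calls out: shift the hour field of a local-time cron expression to UTC, then let cronstrue describe the result. Only a plain numeric hour field is handled here; real expressions need more care.

```ts
import cronstrue from "npm:cronstrue";

// Shift a local-time cron's hour field by the viewer's UTC offset (very simplified).
function localCronToUtc(expr: string, utcOffsetHours: number): string {
  const [min, hour, ...rest] = expr.split(" ");
  const utcHour = ((Number(hour) - utcOffsetHours) % 24 + 24) % 24;
  return [min, String(utcHour), ...rest].join(" ");
}

const utcExpr = localCronToUtc("0 9 * * 1", -7); // 9:00 Monday at UTC-7 -> 16:00 UTC
console.log(utcExpr, "=>", cronstrue.toString(utcExpr));
```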
valleGetValsContextWindow
@roadlabs
An interactive, runnable TypeScript val by roadlabs
Script
el.attributes.filter(a => a.name === "href").map(a => a.value)
prompt: "Write a val that uses OpenAI",
code: `import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
"messages": [
VALLErun
@arash2060
The actual code for VALL-E: https://www.val.town/v/janpaul123/VALLE
HTTP
import { sleep } from "https://esm.town/v/stevekrouse/sleep?v=1";
import { anthropic } from "npm:@ai-sdk/anthropic";
import { openai } from "npm:@ai-sdk/openai";
import ValTown from "npm:@valtown/sdk";
import { StreamingTextResponse, streamText } from "npm:ai";
let vercelModel;
if (model.includes("gpt")) {
vercelModel = openai(model);
} else {
vercelModel = anthropic(model);
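A sketch of how the imports in this preview fit together: choose a provider by model name and stream the completion back over HTTP. It follows the ai v3 pattern that the StreamingTextResponse import suggests; the exact helper names vary across versions of the ai package, and the prompt is illustrative.

```ts
import { anthropic } from "npm:@ai-sdk/anthropic";
import { openai } from "npm:@ai-sdk/openai";
import { StreamingTextResponse, streamText } from "npm:ai";

// Pick a provider by model name, then stream the result back as an HTTP response.
async function generate(model: string, prompt: string): Promise<Response> {
  const vercelModel = model.includes("gpt") ? openai(model) : anthropic(model);
  const result = await streamText({ model: vercelModel, prompt });
  return new StreamingTextResponse(result.toAIStream());
}
```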
email_weather
@jesi_rgb
* This val will create a daily weather email service.
* It uses the OpenWeatherMap API to fetch weather data and the Val Town email API to send emails.
* The val will be triggered daily using a cron job.
Cron
import { email } from "https://esm.town/v/std/email";
import { OpenAI } from "https://esm.town/v/std/openai";
const OPENWEATHERMAP_API_KEY = Deno.env.get("WEATHER_API_KEY");
async function generateEmailContent(weatherData) {
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
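A sketch of the daily flow the bullets describe: fetch weather from OpenWeatherMap, have OpenAI draft the email body, and send it with the std email API. The city query parameter, model, and prompt wording are placeholders.

```ts
import { email } from "https://esm.town/v/std/email";
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function () {
  const key = Deno.env.get("WEATHER_API_KEY");
  // City and units are placeholders for this sketch.
  const res = await fetch(
    `https://api.openweathermap.org/data/2.5/weather?q=London&units=metric&appid=${key}`,
  );
  const weatherData = await res.json();

  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{
      role: "user",
      content: `Write a short, friendly morning email about today's weather: ${JSON.stringify(weatherData)}`,
    }],
  });

  await email({
    subject: "Today's weather",
    text: completion.choices[0].message.content ?? "",
  });
}
```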