Search
egoBooster
@stevekrouse
* This ego booster app takes a selfie, sends it to GPT-4o-mini for analysis,
* and streams funny, specific compliments about the user's appearance.
* We use the WebRTC API for camera access, the OpenAI API for image analysis,
* and server-sent events for real-time streaming of compliments.
HTTP
console.log("Image received, size:", image.size);
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const stream = new ReadableStream({
const base64Image = btoa(String.fromCharCode(...new Uint8Array(arrayBuffer)));
console.log("Sending request to OpenAI");
const completion = await openai.chat.completions.create({
model: "gpt-4o-mini",
stream: true,
console.log("Streaming response from OpenAI");
for await (const chunk of completion) {
} catch (error) {
console.error('Error in OpenAI processing:', error);
controller.enqueue(new TextEncoder().encode("Oops! Our AI had a little hiccup. Maybe your beauty short-circuited it! 🤖💥"));
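Pieced together from the fragments above, the server side of egoBooster plausibly looks like the following minimal sketch (the request parsing, the prompt text, and the response framing are assumptions; the fallback message is taken from the excerpt):

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

// Sketch of the compliment-streaming endpoint; details not shown in the
// excerpt (form field name, prompt, headers) are assumptions.
export default async function (req: Request): Promise<Response> {
  const form = await req.formData();
  const image = form.get("image") as File; // selfie captured client-side via WebRTC
  console.log("Image received, size:", image.size);
  const arrayBuffer = await image.arrayBuffer();
  const base64Image = btoa(String.fromCharCode(...new Uint8Array(arrayBuffer)));

  const openai = new OpenAI();
  const stream = new ReadableStream({
    async start(controller) {
      try {
        console.log("Sending request to OpenAI");
        const completion = await openai.chat.completions.create({
          model: "gpt-4o-mini",
          stream: true,
          messages: [{
            role: "user",
            content: [
              { type: "text", text: "Give me funny, specific compliments about this person's appearance." },
              { type: "image_url", image_url: { url: `data:image/jpeg;base64,${base64Image}` } },
            ],
          }],
        });
        console.log("Streaming response from OpenAI");
        for await (const chunk of completion) {
          const text = chunk.choices[0]?.delta?.content ?? "";
          controller.enqueue(new TextEncoder().encode(text));
        }
      } catch (error) {
        console.error("Error in OpenAI processing:", error);
        controller.enqueue(new TextEncoder().encode("Oops! Our AI had a little hiccup. Maybe your beauty short-circuited it! 🤖💥"));
      } finally {
        controller.close();
      }
    },
  });
  return new Response(stream, { headers: { "Content-Type": "text/event-stream" } });
}
```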
lang_code
@faseeu
@jsxImportSource https://esm.sh/react
HTTP
// Updated LangChain-compatible imports
import { OpenAI } from "https://esm.town/v/std/openai";
// Custom LangChain-like Workflow
class CodeGenerationWorkflow {
private openai: OpenAI;
constructor() {
this.openai = new OpenAI();
private createPromptTemplate(template: string, inputVariables: string[]) {
const fullPrompt = baseCodePrompt.format({ prompt });
const completion = await this.openai.chat.completions.create({
messages: [{ role: "user", content: fullPrompt }],
const fullPrompt = reviewPrompt.format({ baseCode });
const completion = await this.openai.chat.completions.create({
messages: [{ role: "user", content: fullPrompt }],
const fullPrompt = docPrompt.format({ code });
const completion = await this.openai.chat.completions.create({
messages: [{ role: "user", content: fullPrompt }],
neatEmeraldVicuna
@stevekrouse
Twitter/𝕏 keyword alerts. Custom notifications for when you, your company, or anything you care about is mentioned on Twitter. If you believe in Twitter/𝕏-driven development, you want to get notified when anyone is talking about your tech, even if they're not tagging you. To get this Twitter Alert bot running for you, fork this val and modify the query and where the notification gets delivered.

1. Query. Change the keywords for what you want to get notified about and the excludes for what you don't. You can use Twitter's search operators to customize your query: match some collection of keywords, filter out others, and much more.

2. Notification. Below I'm sending these mentions to a public channel in our company Discord, but you can customize that to whatever you want: @std/email, Slack, Telegram, whatever.

Twitter Data & Limitations. The Twitter API has become unusable, so this val gets Twitter data via SocialData, an affordable Twitter scraping API. To make this val easy for you to fork and use without signing up for another API, I am proxying SocialData via @stevekrouse/socialDataProxy. Val Town Pro users can call this proxy 100 times per day, so be sure not to set this cron to run more than once every 15 minutes. If you want to run it more often, get your own SocialData API token and pay for it directly.
Cron
import { zodResponseFormat } from "https://esm.sh/openai/helpers/zod";
import { z } from "https://esm.sh/zod";
import { OpenAI } from "https://esm.town/v/std/openai";
import { discordWebhook } from "https://esm.town/v/stevekrouse/discordWebhook";
.join(" OR ") + " " + excludes;
const openai = new OpenAI();
const RelevanceSchema = z.object({
try {
const completion = await openai.beta.chat.completions.parse({
model: "gpt-4o-mini",
} catch (error) {
console.error("Error parsing OpenAI response:", error);
return { isRelevant: false, confidence: 0, reason: "Error in processing" };
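Assembling the excerpt's imports, schema, and fallback into one piece, the relevance filter plausibly reads like this (the schema fields mirror the fallback object; the prompt wording is an assumption):

```ts
import { zodResponseFormat } from "https://esm.sh/openai/helpers/zod";
import { z } from "https://esm.sh/zod";
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

const RelevanceSchema = z.object({
  isRelevant: z.boolean(),
  confidence: z.number(),
  reason: z.string(),
});

// Classify whether a matched tweet is actually relevant before alerting.
async function checkRelevance(tweetText: string) {
  try {
    const completion = await openai.beta.chat.completions.parse({
      model: "gpt-4o-mini",
      messages: [{
        role: "user",
        content: `Is this tweet relevant to the keywords we're monitoring?\n\n${tweetText}`,
      }],
      response_format: zodResponseFormat(RelevanceSchema, "relevance"),
    });
    return completion.choices[0].message.parsed;
  } catch (error) {
    console.error("Error parsing OpenAI response:", error);
    return { isRelevant: false, confidence: 0, reason: "Error in processing" };
  }
}
```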
createGeneratedVal
@yawnxyz
Use GPT to generate vals on your account! Describe the val that you need, call this function, and you'll get a new val on your workspace generated by OpenAI's API. First, ensure you have a Val Town API Token, then call @andreterron.createGeneratedVal({...}) like this example:

@andreterron.createGeneratedVal({
  valTownKey: @me.secrets.vt_token,
  description:
    "A val that given a text file position in `{line, col}` and the text contents, returns the index position",
});

This will create a val in your workspace, and here's the one created by the example above: https://www.val.town/v/andreterron.getFileIndexPosition
Script
# Use GPT to generate vals on your account!
Describe the val that you need, call this function, and you'll get a new val on your workspace generated by OpenAI's API!
First, ensure you have a [Val Town API Token](https://www.val.town/settings/api), then call `@andreterron.createGeneratedVal({...})` like this [example](https://www.val.town/v/andreterron.untitled_tomatoKiwi):
import { runVal } from "https://esm.town/v/std/runVal";
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
export const generateValCode = async (
const lodash = await import('npm:lodash');
const response = await openai.chat.completions.create({
model: "gpt-4o",
valle_tmp_50677281064121176482256801591227
@janpaul123
// Improvements Made:
HTTP
import _ from "npm:lodash";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const contextWindow = await valleGetValsContextWindow(model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
telegramWebhookEchoMessage
@dcm31
// v15 was the last stable version
HTTP
import { telegramSendMessage } from "https://esm.town/v/vtdocs/telegramSendMessage?v=5";
import { OpenAI } from "https://esm.town/v/std/openai";
import { blob } from "https://esm.town/v/std/blob";
const openai = new OpenAI();
// Task structure
"suggestedActions": An array of suggested next actions or microsteps for the user
const completion = await openai.chat.completions.create({
messages: [{ role: "user", content: prompt }],
telegramWebhookEchoMessageOLD
@dcm31
// v15 was the last stable version
HTTP
import { telegramSendMessage } from "https://esm.town/v/vtdocs/telegramSendMessage?v=5";
import { OpenAI } from "https://esm.town/v/std/openai";
import { blob } from "https://esm.town/v/std/blob";
const openai = new OpenAI();
// Task structure
"suggestedActions": An array of suggested next actions or microsteps for the user
const completion = await openai.chat.completions.create({
messages: [{ role: "user", content: prompt }],
valTownChatGPT2
@janpaul123
https://x.com/JanPaul123/status/1811801305066651997 Fork it and authenticate with your Val Town API token as the password.
HTTP
import { Hono } from "npm:hono@3";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
</div>,
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
valle_tmp_563310902711919480463986409263
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP
import _ from "npm:lodash@4";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const contextWindow: any = await valleGetValsContextWindow(model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
telegrambot
@begoon
This val is a demo skeleton of a Telegram chat bot. It requires the BOT_TOKEN environment variable, which is the Telegram bot token. Another required variable is ME; its value is passed as an HTTP header to call a few extra endpoints (see the code). One of those endpoints is /webhook/set, which installs the webhook to activate the bot. The code is mostly educational: the bot echoes back incoming messages. There is one command, /ai, which sends the incoming message to OpenAI and forwards the reply back to the chat.
HTTP
const { endpoint } = VAL;
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
async function POST(cmd: string, data: { [key: string]: string }) {
console.log("q", q);
const completion = await openai.chat.completions.create({
messages: [
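Going by the description, the bot's message handler plausibly echoes by default and routes /ai to OpenAI; a minimal sketch (the handler wiring and model are assumptions, and sendMessage here is a thin stand-in for a telegramSendMessage-style helper):

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

// Echo incoming messages; for "/ai <question>", ask OpenAI and relay the reply.
async function handleMessage(chatId: number, text: string) {
  if (text.startsWith("/ai ")) {
    const q = text.slice("/ai ".length);
    console.log("q", q);
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini", // model is an assumption; the excerpt doesn't name one
      messages: [{ role: "user", content: q }],
    });
    return sendMessage(chatId, completion.choices[0].message.content ?? "");
  }
  return sendMessage(chatId, text); // default: echo
}

// Thin wrapper over the Telegram Bot API sendMessage method.
async function sendMessage(chatId: number, text: string) {
  await fetch(
    `https://api.telegram.org/bot${Deno.env.get("BOT_TOKEN")}/sendMessage`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ chat_id: chatId, text }),
    },
  );
}
```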
welcomeEmail
@rodrigotello
An interactive, runnable TypeScript val by rodrigotello
Script
<li style="margin-bottom:6px">Reference your vals: <div style="${CSScodeStyling};">@me.fizz.split('buzz').length</div></li>
<li style="margin-bottom:6px">Reference others' vals: <div style="${CSScodeStyling};">@stevekrouse.moreBuzz()</div></li>
<li style="margin-bottom:6px">Reference personal secrets: <div style="${CSScodeStyling};">@me.secrets.openai</div></li>
<li style="margin-bottom:6px">Import from npm: <div style="${CSScodeStyling};">const _ = await import("npm:lodash-es")</div></li>
<li>Run keyboard shortcut: <div style="${CSScodeStyling};">cmd+enter</div></li>
MicroSaasIdeaRoulette
@heltonteixeira
@jsxImportSource https://esm.sh/react
HTTP
try {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
findSlamArticles
@dupontgu
An interactive, runnable TypeScript val by dupontgu
Cron
import cheerio from "https://esm.sh/cheerio@1.0.0-rc.12";
import { OpenAI } from "https://esm.town/v/std/openai";
import { blob } from "https://esm.town/v/std/blob";
const openai = new OpenAI();
export const BLOB_PREFIX = "SLAM_HEADLINE_"
export async function feedToChatGPT(newsItems: Headline[]): Promise<Slam[] | undefined > {
const completion = await openai.chat.completions.create({
messages: [
yuktiVoiceAssistant
@Aditya230
@jsxImportSource https://esm.sh/react
HTTP
// More robust OpenAI import and initialization
const openaiModule = await import("https://esm.town/v/std/openai");
console.log("OpenAI Module:", Object.keys(openaiModule));
// Use a more flexible approach to accessing OpenAI
const OpenAI = openaiModule.default || openaiModule.OpenAI;
if (!OpenAI) {
throw new Error("Unable to find OpenAI constructor in the imported module");
const openai = new OpenAI({
// Verify the openai object has the required methods
if (!openai.chat || !openai.chat.completions || !openai.chat.completions.create) {
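Assembled into one piece, the defensive initialization shown above runs roughly as follows (the constructor options are omitted since the excerpt truncates them):

```ts
// Tolerate either a default or a named OpenAI export from the module.
const openaiModule = await import("https://esm.town/v/std/openai");
console.log("OpenAI Module:", Object.keys(openaiModule));

const OpenAI = openaiModule.default || openaiModule.OpenAI;
if (!OpenAI) {
  throw new Error("Unable to find OpenAI constructor in the imported module");
}

const openai = new OpenAI();

// Verify the client exposes the one method this val depends on.
if (!openai.chat?.completions?.create) {
  throw new Error("OpenAI client is missing chat.completions.create");
}
```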
MuxAITranscript
@decepulis
Generate CuePoints and transcripts for your Mux video. Adapted from the blog post "Build an AI-powered interactive video transcript with Mux Player CuePoints". This Val exposes an HTTP endpoint that takes a Mux Asset ID and a list of speakers, and:

- Uses Mux's auto-generated captions to generate a CuePoints object for Mux Player
- Uses AssemblyAI for speaker labeling (diarization)
- Uses GPT-4o to format text

Fork it and use it as a foundation for your own interactive video transcript project.

Usage: the following environment variables are required:

- Mux access token details (MUX_TOKEN_ID, MUX_TOKEN_SECRET). This endpoint requires an existing Mux asset that's ready with an audio-only static rendition associated with it; you can run this val to create a new one for testing.
- AssemblyAI API key (ASSEMBLYAI_API_KEY). Get it from their dashboard.
- OpenAI API key (OPENAI_API_KEY). Get it from their dashboard.

Make a POST request to the Val's endpoint with the following body, replacing the values with your own asset ID and the list of speakers. Speakers are listed in order of appearance.

{
  "asset_id": "00OZ8VnQ01wDNQDdI8Qw3kf01FkGTtkMq2CW901ltq64Jyc",
  "speakers": ["Matt", "Nick"]
}

Limitations: this is just a demo, so it's obviously not battle-hardened. The biggest issue is that it does the whole process synchronously, so if any step takes longer than the Val's timeout, you're hosed.
HTTP
- AssemblyAI API key (`ASSEMBLYAI_API_KEY`). Get it [from their dashboard here](https://www.assemblyai.com/app/account)
- OpenAI API key (`OPENAI_API_KEY`). Get it [from their dashboard here](https://platform.openai.com/api-keys)
Make a POST request to the Val's endpoint with the following body, replacing the values with your own asset ID and the list of speakers. Speakers are listed in order of appearance.
import { AssemblyAI } from "npm:assemblyai";
import OpenAI from "npm:openai";
import { zodResponseFormat } from "npm:openai/helpers/zod";
import z from "npm:zod";
apiKey: Deno.env.get("ASSEMBLY_AI_KEY"),
const openai = new OpenAI({
apiKey: Deno.env.get("OPEN_API_KEY"),
const responseFormat = z.object({ cues: cueFormat });
const completion = await openai.chat.completions.create({
messages: [
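A client call matching the usage notes above would look something like this sketch (the val URL is a placeholder, and the response shape is inferred from the `cues` schema in the excerpt):

```ts
// Hypothetical client call; replace the URL with your fork's endpoint.
const res = await fetch("https://yourname-muxaitranscript.web.val.run", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    asset_id: "00OZ8VnQ01wDNQDdI8Qw3kf01FkGTtkMq2CW901ltq64Jyc",
    speakers: ["Matt", "Nick"],
  }),
});
const { cues } = await res.json(); // shape assumed from the zod schema above
```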