Search

Results include substring matches and semantically similar vals.
GistGPT
@weaverwhale
GistGPT: a helpful assistant who provides the gist of a gist. How to use: `/` and `/gist` return the default response, which explains this file itself (effectively real-time recursion?). `/gist?url={URL}` takes a raw file URL from GitHub, BitBucket, GitLab, Val Town, etc., and GistGPT will provide the gist of the code. `/about` answers "Tell me a little bit about yourself".
HTTP
import { Hono } from "npm:hono";
import { OpenAI } from "npm:openai";

const gistGPT = async (input: string, about?: boolean) => {
  // For /about the input is the prompt itself; otherwise fetch the raw file to summarize
  const chatInput = about ? input : await (await fetch(input)).text();
  const openai = new OpenAI();
  const chatCompletion = await openai.chat.completions.create({
    // Prompt and model are elided in this preview; a user message carrying chatInput is the likely shape
    messages: [{ role: "user", content: chatInput }],
    model: "gpt-4o-mini", // assumed
  });
  return chatCompletion.choices[0].message.content;
};
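A minimal sketch of how a client might call this val's endpoints; the host below follows the usual Val Town URL pattern but is illustrative:

```ts
// Hypothetical client for GistGPT; the base URL is illustrative
const base = "https://weaverwhale-gistgpt.web.val.run";

// Ask for the gist of a raw file
const rawUrl = "https://esm.town/v/weaverwhale/GistGPT"; // any raw file URL works
const gist = await fetch(`${base}/gist?url=${encodeURIComponent(rawUrl)}`).then(r => r.text());
console.log(gist);

// Ask GistGPT about itself
const about = await fetch(`${base}/about`).then(r => r.text());
console.log(about);
```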
DailyDaughterNotes
@heathergliffin
* This app generates cute daily notes for a daughter using OpenAI's GPT model.
* It stores the generated notes in SQLite for persistence and displays them on a simple web interface.
* The app uses React for the frontend and Deno's runtime environment in Val Town for the backend.
HTTP
async function server(request: Request): Promise<Response> {
  const { OpenAI } = await import("https://esm.town/v/std/openai");
  const { sqlite } = await import("https://esm.town/v/stevekrouse/sqlite");
  // ... routing elided in this preview ...
  } else if (path === "/generate-note" && request.method === "POST") {
    const openai = new OpenAI();
    const completion = await openai.chat.completions.create({
      messages: [
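A hedged sketch of what the `/generate-note` branch plausibly does; the table name, columns, prompt, and model are assumptions, not the val's actual code:

```ts
// Sketch only: table/column names, prompt, and model are assumptions
const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini", // assumed
  messages: [{ role: "user", content: "Write a short, cute note for my daughter." }],
});
const note = completion.choices[0].message.content;
// Persist the note so the web interface can display it later
await sqlite.execute({
  sql: "INSERT INTO daily_notes (created_at, note) VALUES (?, ?)",
  args: [new Date().toISOString(), note],
});
return Response.json({ note });
```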
cabinAdjacentTweets
@jonbo
Scans tweets, then uses an LLM to decide whether (and where) to forward them. Forked from https://www.val.town/v/stevekrouse/twitterAlert
Cron
import { OpenAI } from "https://esm.town/v/std/openai";
import { discordWebhook } from "https://esm.town/v/stevekrouse/discordWebhook";

// ... watched-usernames list (excerpt):
  "Jay_Pitter",

const openai = new OpenAI();

async function getUserTweets(username: string, startTime: Date, bearerToken: string) {
  try {
    // ...

// LLM call deciding how to route a tweet (excerpt):
completion = await openai.chat.completions.create({
  messages: [{ role: "user", content: prompt }],
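A hedged sketch of the overall cron flow this val appears to implement; the prompt wording, model, webhook secret name, and `tweet` variable are assumptions:

```ts
// Sketch: decide whether to forward a tweet, then post it via a Discord webhook
const prompt = `Should this tweet be forwarded to the cabin channel? Reply "yes" or "no".\n\n${tweet.text}`;
const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini", // assumed
  messages: [{ role: "user", content: prompt }],
});
if (completion.choices[0].message.content?.toLowerCase().startsWith("yes")) {
  await discordWebhook({
    url: Deno.env.get("CABIN_DISCORD_WEBHOOK_URL")!, // assumed secret name
    content: tweet.text,
  });
}
```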
free_open_router
@taras
curl 'https://taras-free_open_router.web.val.run/api/v1/chat/completions' \
  -H 'accept: application/json' \
  -H 'authorization: Bearer THIS_IS_OVERRIDEN_ON_SERVER' \
  -H 'content-type: application/json' \
  --data-raw '{
    "model": "auto",
    "temperature": 0,
    "messages": [
      { "role": "system", "content": "stuff" },
      { "role": "user", "content": "hello" }
    ],
    "stream": true
  }'
HTTP
url: "https://openrouter.ai/api/v1/models",
token: Deno.env.get("OPEN_ROUTER_API_KEY"),
url: "https://api.groq.com/openai/v1/models",
token: Deno.env.get("GROQ_API_KEY"),
// Create fetch promises for each API endpoint
if (provider === "groq") {
url.host = "api.groq.com";
url.pathname = url.pathname.replace("/api/v1", "/openai/v1");
url.port = "443";
url.protocol = "https";
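A minimal sketch of the proxy pattern shown above: rewrite the incoming URL for the chosen provider, swap in the server-side key, and forward the request. The function name is illustrative; the env var matches the preview:

```ts
// Sketch: forward an incoming request to Groq's OpenAI-compatible endpoint
async function proxyToGroq(request: Request): Promise<Response> {
  const url = new URL(request.url);
  url.protocol = "https:";
  url.host = "api.groq.com";
  url.port = "443";
  url.pathname = url.pathname.replace("/api/v1", "/openai/v1");
  const headers = new Headers(request.headers);
  // Override the client's bearer token with the server-side key
  headers.set("authorization", `Bearer ${Deno.env.get("GROQ_API_KEY")}`);
  return fetch(url, { method: request.method, headers, body: request.body });
}
```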
interview_practice
@knulp222
@jsxImportSource https://esm.sh/react
HTTP
// Inside the request handler:
const { intervieweeResponse, interviewPosition } = await request.json();
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    // ... interview prompt built from interviewPosition and intervieweeResponse (elided in this preview)
modelSampleChatCall
@bluemsn
An interactive, runnable TypeScript val by bluemsn
Script
// getModelBuilder comes from another val (import elided in this preview)
const builder = await getModelBuilder({
  type: "chat",
  provider: "openai",
});
const model = await builder();
const { SystemMessage, HumanMessage } = await import("npm:langchain/schema");
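A hedged completion of this sample, assuming the builder returns a LangChain chat model of the era that exposes `.call` (the prompt text is illustrative):

```ts
// Sketch: invoke the chat model with a system + human message (older LangChain API)
const response = await model.call([
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("Say hello in one short sentence."),
]);
console.log(response.content);
```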
dailyThoughtPrompt
@chrisputnam9
Uses OpenAI to generate a thought-provoking statement and emails it to you at whatever interval you set. Clone/fork this, set the interval you want (e.g. every 1 days), and hit run to test it. The email will send to your Val Town account email.
Cron
// OpenAI provided by Val Town
import { OpenAI } from "https://esm.town/v/std/openai";
// Email provided by Val Town (import elided in this preview)

// Now ask the AI to invent something
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
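A hedged sketch of the rest of the cron, using Val Town's standard email helper; the prompt wording, model, and subject line are assumptions:

```ts
import { email } from "https://esm.town/v/std/email";

// Sketch: generate the statement, then email it to the val's owner
const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini", // assumed
  messages: [{ role: "user", content: "Write one thought-provoking statement." }], // assumed prompt
});
await email({
  subject: "Daily thought", // assumed subject
  text: completion.choices[0].message.content ?? "",
});
```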
VEOPROMPTER
@AppleLamps
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
if (url.pathname === "/generate-prompts") {
  const { OpenAI } = await import("https://esm.sh/openai@4.11.1");
  const openai = new OpenAI({
    apiKey: Deno.env.get("OPENAI_API_KEY"),
  });
  const body = await request.json();
  try {
    const completion = await openai.chat.completions.create({
      model: "chatgpt-4o-latest", // Changed from "gpt-4o-latest" to "chatgpt-4o-latest"
      // ...
    if (!response) {
      throw new Error("No response from OpenAI");
    }
    const [conciseSection, detailedSection] = response.split("\n\n");
chocolateCanid
@willthereader
ChatGPT implemented in Val Town. Demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
HTTP
/** @jsxImportSource https://esm.sh/react */
import { OpenAI } from "https://esm.town/v/std/openai";
import { Hono } from "npm:hono@3";
import { renderToString } from "npm:react-dom/server";

const openai = new OpenAI();
const jsxResponse = (jsx) => {
  // ... renders JSX to an HTML Response (body elided in this preview)

try {
  // Note: despite the names, these are plain chat completions, not the Assistants API
  thread = await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // Use a faster model
    messages: [{ role: "system", content: "Start a new thread" }],
  });
  assistant = await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // Use a faster model
    // ...

try {
  await openai.chat.completions.create({
    model: "gpt-3.5-turbo", // Use a faster model
    // ...

// SSE frame written to the stream:
"data: " + JSON.stringify(str) + "\n\n",

const run = openai.chat.completions.stream({
  model: "gpt-3.5-turbo", // Use a faster model
rateArticleRelevance
@vandyand
An interactive, runnable TypeScript val by vandyand
Script
export const rateArticleRelevance = async (interests: string, article: any) => {
  const { default: OpenAI } = await import("npm:openai");
  const openai = new OpenAI({
    apiKey: untitled_tealCoral.OPENAI_API_KEY, // secret read from another val
  });
  try {
    // Prompt excerpt: "Give a score from 0 to 10. Why did you give this score? Respond with the score only."
    const response = await openai.chat.completions.create({
      messages: [
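A hedged sketch of how the call likely completes and how the val would be used; the prompt beyond the quoted excerpt, the model, and the parsing are assumptions:

```ts
// Sketch: build the prompt, ask for a numeric score, parse it
const prompt =
  `Interests: ${interests}\nArticle: ${JSON.stringify(article)}\n` +
  `Give a score from 0 to 10. Respond with the score only.`;
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini", // assumed
  messages: [{ role: "user", content: prompt }],
});
const score = Number(response.choices[0].message.content?.trim());

// Usage (illustrative):
// const score = await rateArticleRelevance("AI, robotics", { title: "New LLM released" });
```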
calendarEventExtractor
@pdebieamzn
// This calendar app will allow users to upload a PDF, extract events from it using OpenAI's GPT model,
HTTP
// This calendar app will allow users to upload a PDF, extract events from it using OpenAI's GPT model,
// and display them on a big calendar. We'll use react-big-calendar for the calendar component,

async function server(request: Request): Promise<Response> {
  const { OpenAI } = await import("https://esm.town/v/std/openai");
  const pdfExtractText = await import("https://esm.town/v/pdebieamzn/pdfExtractText");
  // ...
  const fullText = await pdfExtractText.default(arrayBuffer);
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    messages: [
      // ...
  } catch (error) {
    console.error("Error parsing OpenAI response:", error);
    console.log("Raw response:", completion.choices[0].message.content);
  }
  if (!Array.isArray(events)) {
    console.error("Unexpected response format from OpenAI");
    return new Response(JSON.stringify({ error: "Unexpected response format" }), {
      status: 500,
      headers: { "Content-Type": "application/json" },
    });
  }
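A hedged sketch of the extraction step implied above: ask the model for JSON events, then parse defensively as the preview's error handling suggests. The prompt wording, model, and event fields are assumptions:

```ts
// Sketch: prompt, model, and event shape are assumptions
const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini", // assumed
  messages: [{
    role: "user",
    content: `Extract calendar events from this text as a JSON array of ` +
      `{ "title": string, "start": ISO date, "end": ISO date }:\n\n${fullText}`,
  }],
});
let events: unknown = [];
try {
  events = JSON.parse(completion.choices[0].message.content ?? "[]");
} catch (error) {
  console.error("Error parsing OpenAI response:", error);
}
```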
webscrapeWikipediaIntro
@vtdocs
An interactive, runnable TypeScript val by vtdocs
HTTP
// fetchText is typically imported from https://esm.town/v/stevekrouse/fetchText
const cheerio = await import("npm:cheerio");
const html = await fetchText(
  "https://en.wikipedia.org/wiki/OpenAI",
);
const $ = cheerio.load(html);
// Cheerio accepts a CSS selector; here we pick the second <p>
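The preview cuts off at the selector; the natural completion is something like:

```ts
// Grab the second paragraph of the article body
const intro = $("p").eq(1).text();
console.log(intro);
```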
indexValsTurso
@janpaul123
Part of Val Town Semantic Search. Generates OpenAI embeddings for all public vals, and stores them in Turso, using the sqlite-vss extension. Create the vals_embeddings and vss_vals_embeddings tables in Turso if they don't already exist. Get all val names from the database of public vals, made by Achille Lacoin. Get all val names from the vals_embeddings table and compute the difference (which ones are missing). Iterate through all missing vals, get their code, get embeddings from OpenAI, and store the result in Turso. When finished, update the vss_vals_embeddings table so we can efficiently query them with the sqlite-vss extension. This is blocked by a bug in Turso that doesn't allow VSS indexes past a certain size. Can now be searched using janpaul123/semanticSearchTurso.
Cron
*Part of [Val Town Semantic Search](https://www.val.town/v/janpaul123/valtownsemanticsearch).*
Generates OpenAI embeddings for all public vals, and stores them in [Turso](https://turso.tech/), using the [sqlite-vss](https://github.com/asg017/sqlite-vss) extension.
- Create the `vals_embeddings` and `vss_vals_embeddings` tables in Turso if they don't already exist.
- Get all val names from the database of public vals, made by Achille Lacoin.
- Get all val names from the `vals_embeddings` table and compute the difference (which ones are missing).
- Iterate through all missing vals, get their code, get embeddings from OpenAI, and store the result in Turso.
- When finished, update the `vss_vals_embeddings` table so we can efficiently query them with the [sqlite-vss](https://github.com/asg017/sqlite-vss) extension.
import { db as allValsDb } from "https://esm.town/v/sqlite/db?v=9";
import OpenAI from "npm:openai";
import { truncateMessage } from "npm:openai-tokens";

export default async function(interval: Interval) {
  // ... find vals missing from vals_embeddings:
  newVals.push(val);
  // ...
  const openai = new OpenAI();
  for (const val of newVals) {
    // ... fetch the val's code from allValsDb:
    })).rows[0][0];
    const embedding = await openai.embeddings.create({
      model: "text-embedding-3-small",
valle_tmp_92279722423458817448513814852015
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP
import _ from "npm:lodash@4";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
// ...
// valleGetValsContextWindow comes from another val (import elided in this preview)
const contextWindow = await valleGetValsContextWindow(model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
  model,
valtownsemanticsearch
@janpaul123
😎 VAL VIBES: Val Town Semantic Search

This val hosts an HTTP server that lets you search all vals based on vibes. If you search for "discord bot" it shows all vals that have "discord bot" vibes. It does this by comparing embeddings from OpenAI generated for the code of all public vals, to an embedding of your search query. This is an experiment to see if and how we want to incorporate semantic search in the actual Val Town search page.

I implemented three backends, which you can switch between in the search UI. Check out these vals for details on their implementation.

- Neon: storing and searching embeddings using the pg_vector extension in Neon's Postgres database. Searching: janpaul123/semanticSearchNeon. Indexing: janpaul123/indexValsNeon.
- Blobs: storing embeddings in Val Town's standard blob storage, and iterating through all of them to compute distance. Slow and terrible, but it works! Searching: janpaul123/semanticSearchBlobs. Indexing: janpaul123/indexValsBlobs.
- Turso: storing and searching using the sqlite-vss extension. Abandoned because of a bug in Turso's implementation. Searching: janpaul123/semanticSearchTurso. Indexing: janpaul123/indexValsTurso.

All implementations use the database of public vals, made by Achille Lacoin, which is refreshed every hour. The Neon implementation updates every 10 minutes; the other ones are not updated. I also forked Achille's search UI for this val. Please share any feedback and suggestions, and feel free to fork our vals to improve them. This is a playground for semantic search before we implement it in the product for real!
HTTP
This val hosts an [HTTP server](https://janpaul123-valtownsemanticsearch.web.val.run/) that lets you search all vals based on vibes.
It does this by comparing [embeddings from OpenAI](https://platform.openai.com/docs/guides/embeddings) generated for the code of all public vals, to an embedding of your search query.
This is an experiment to see if and how we want to incorporate semantic search in the actual Val Town search page.
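At its core, each backend ranks vals by embedding similarity. A minimal sketch of that comparison in the brute-force style of the Blobs backend; the `valEmbeddings` shape is illustrative:

```ts
import OpenAI from "npm:openai";

// Cosine similarity between two embedding vectors
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const openai = new OpenAI();
const query = await openai.embeddings.create({
  model: "text-embedding-3-small",
  input: "discord bot",
});
const queryVector = query.data[0].embedding;

// valEmbeddings: Array<{ name: string; embedding: number[] }> — illustrative shape
const results = valEmbeddings
  .map(v => ({ name: v.name, score: cosineSimilarity(queryVector, v.embedding) }))
  .sort((x, y) => y.score - x.score)
  .slice(0, 20);
```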