Search

Results include substring matches and semantically similar vals.
browserlessPuppeteerExample
@nbbaier
An interactive, runnable TypeScript val by nbbaier
Script
import puppeteer from "npm:puppeteer"; // import assumed; the search excerpt omits it
const browser = await puppeteer.connect({
  browserWSEndpoint: `wss://chrome.browserless.io?token=${process.env.browserlessKey}`,
});
const page = await browser.newPage();
await page.goto("https://en.wikipedia.org/wiki/OpenAI");
const intro = await page.evaluate(
  `document.querySelector('p:nth-of-type(2)').innerText`,
);
generateValCodeAPI
@andreterron
An interactive, runnable TypeScript val by andreterron
Script
import { generateValCode } from "https://esm.town/v/andreterron/generateValCode"; // import path assumed

export let generateValCodeAPI = (description: string) =>
  generateValCode(
    process.env.VT_OPENAI_KEY,
    description,
  );
textToImageDalle
@stevekrouse
// Forked from @hootz.textToImageDalle
Script
import { fetchJSON } from "https://esm.town/v/stevekrouse/fetchJSON"; // import assumed
export const textToImageDalle = async (
  openAIToken: string,
  prompt: string,
) => {
  const { data } = await fetchJSON(
    "https://api.openai.com/v1/images/generations",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${openAIToken}`,
      },
      body: JSON.stringify({ prompt }),
    },
  );
  return data;
};
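The request this val sends can be sketched as a small pure helper. The helper name `buildDalleRequest` and the `n`/`size` defaults are illustrative assumptions; the endpoint and header layout follow OpenAI's images API as shown in the excerpt:

```typescript
// Hypothetical helper: builds the fetch arguments for OpenAI's image
// generation endpoint. Name and default n/size values are assumptions.
function buildDalleRequest(openAIToken: string, prompt: string) {
  return {
    url: "https://api.openai.com/v1/images/generations",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${openAIToken}`,
      },
      body: JSON.stringify({ prompt, n: 1, size: "1024x1024" }),
    },
  };
}

const req = buildDalleRequest("sk-test", "a watercolor cat");
console.log(req.init.headers["Authorization"]); // "Bearer sk-test"
```

Separating request construction from the `fetch` call keeps the token-handling and payload shape testable without hitting the network.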
getJoke
@thu
An interactive, runnable TypeScript val by thu
Script
export const getJoke = (async () => {
  const { ChatCompletion } = await import("npm:openai");
  const result = await ChatCompletion.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Tell me a joke." }], // prompt assumed
  });
  return result;
})();
petunia_chat
@robertmccallnz
An interactive, runnable TypeScript val by robertmccallnz
HTTP
import { Hono } from "https://esm.sh/hono"; // import assumed
import { html } from "https://esm.sh/hono/html";
import { OpenAI } from "https://esm.town/v/std/openai";

const app = new Hono();

app.get("/market-data", async (c) => {
  const openai = new OpenAI();
  try {
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      // ... (prompt elided in the excerpt)
    });
  } catch { /* ... */ }
});

app.post("/", async (c) => { // route assumed; the excerpt shows a second handler
  const { message } = await c.req.json();
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    // ...
  });
});
weatherGPT
@serkanokur
Cron
import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";

let location = "munchen";
const weather = await fetch(
  // (weather API URL elided in the search excerpt)
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
  messages: [{
    // ...
  }],
});
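The cron's overall flow is: fetch weather JSON, turn it into a prompt, send it to the model, and email the reply. The prompt-building step might look like this; the weather object's field names are illustrative assumptions, since the excerpt does not show the weather API's schema:

```typescript
// Hypothetical prompt builder; the weather fields (tempC, condition)
// are assumptions, not taken from the val's actual weather API.
function weatherPrompt(
  location: string,
  weather: { tempC: number; condition: string },
): string {
  return `Write a short, friendly weather report for ${location}: ` +
    `it is ${weather.tempC} degrees C and ${weather.condition}.`;
}

console.log(weatherPrompt("munchen", { tempC: 18, condition: "partly cloudy" }));
```

Keeping the prompt in a pure function makes it easy to tweak the tone without touching the fetch or email plumbing.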
bookmarkdigest
@ulysse
An interactive, runnable TypeScript val by ulysse
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";

async function summarizePage(input, isUrl = true) {
  // ... (URL-fetching branch elided in the excerpt)
  text = input;
  // Initialize OpenAI API using your own API key
  const openai = new OpenAI({
    apiKey: Deno.env.get("OPENAI_API_KEY"),
  });
  // Use OpenAI API to summarize the text
  const completion = await openai.chat.completions.create({
    messages: [{
      // ...
    }],
  });
  // ...
}

// From the val's HTTP handler: CORS headers and preflight response
const headers = {
  "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
  "Access-Control-Allow-Headers": "Content-Type, X-OpenAI-Key",
  "Access-Control-Max-Age": "86400",
};
if (req.method === "OPTIONS") { // guard assumed
  return new Response(null, { headers, status: 204 });
}
const apiKey = req.headers.get("X-OpenAI-Key");
if (!apiKey) {
  // ...
}
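The CORS preflight handling in this val can be sketched as a standalone function. The wildcard `Access-Control-Allow-Origin` is an assumption (the excerpt does not show which origin the val allows):

```typescript
// Minimal CORS preflight handler. Returns a 204 for OPTIONS requests
// and null otherwise, so the caller can fall through to real routing.
function handlePreflight(req: Request): Response | null {
  if (req.method !== "OPTIONS") return null;
  return new Response(null, {
    status: 204,
    headers: {
      "Access-Control-Allow-Origin": "*", // assumed; not shown in the excerpt
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type, X-OpenAI-Key",
      "Access-Control-Max-Age": "86400",
    },
  });
}
```

A real handler would call this first and continue to the GET/POST logic whenever it returns null.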
browserlessPuppeteerExample
@nisargio
An interactive, runnable TypeScript val by nisargio
Script
import puppeteer from "npm:puppeteer"; // import assumed; the search excerpt omits it
const browser = await puppeteer.connect({
  browserWSEndpoint: `wss://chrome.browserless.io?token=${process.env.browserlessKey}`,
});
const page = await browser.newPage();
await page.goto("https://en.wikipedia.org/wiki/OpenAI");
const intro = await page.evaluate(
  `document.querySelector('p:nth-of-type(2)').innerText`,
);
VALLE
@arash2060
VALL-E LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId. If you want to use OpenAI models, set the OPENAI_API_KEY env var. If you want to use Anthropic models, set the ANTHROPIC_API_KEY env var. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP
chat
@steveb1313
An interactive, runnable TypeScript val by steveb1313
Script
export const chat = async ( // signature assumed; the excerpt starts mid-parameter-list
  prompt,
  options = {},
) => {
  // Initialize OpenAI API stub
  const { Configuration, OpenAIApi } = await import("https://esm.sh/openai");
  const configuration = new Configuration({
    apiKey: process.env.openAIAPI,
  });
  const openai = new OpenAIApi(configuration);
  // Request chat completion
  const messages = typeof prompt === "string"
    ? [{ role: "user", content: prompt }] // branch assumed
    : prompt;
  const { data } = await openai.createChatCompletion({
    model: "gpt-3.5-turbo-0613",
    messages,
  });
  return data;
};
RudeAI
@juansebsol
@jsxImportSource https://esm.sh/react
HTTP
export default async function server(request: Request): Promise<Response> {
  const { OpenAI } = await import("https://esm.town/v/std/openai");
  const { sqlite } = await import("https://esm.town/v/stevekrouse/sqlite");
  const openai = new OpenAI();
  const SCHEMA_VERSION = 2;
  // ...
  // Generate AI response
  const completion = await openai.chat.completions.create({
    messages: [
      // ...
    ],
  });
  // ...
}
CodeGeneratorApp
@mrshorts
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
if (request.method === "POST") {
  const { OpenAI } = await import("https://esm.town/v/std/openai");
  const openai = new OpenAI();
  const { prompt, language } = await request.json();
  const prompts = { // surrounding object assumed
    // ...
    cpp: "Generate a clean, concise C++ code snippet for: ",
  };
  const completion = await openai.chat.completions.create({
    messages: [
      // ...
    ],
  });
}
weatherGPT
@developersdigest
Cron
import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";

let location = "toronto on";
const weather = await fetch(
  // (weather API URL elided in the search excerpt)
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
  messages: [{
    // ...
  }],
});
semanticSearchTurso
@janpaul123
Part of Val Town Semantic Search. Uses Turso to search embeddings of all vals, using the sqlite-vss extension. Calls OpenAI to generate an embedding for the search query, then queries the vss_vals_embeddings table in Turso using vss_search. The vss_vals_embeddings table has been generated by janpaul123/indexValsTurso; it is not run automatically, and the table is incomplete due to a bug in Turso.
Script
import { createClient } from "npm:@libsql/client"; // import assumed
import { db as allValsDb } from "https://esm.town/v/sqlite/db?v=9";
import OpenAI from "npm:openai";
export default async function semanticSearchPublicVals(query) {
  const client = createClient({ // client config partially elided in the excerpt
    authToken: Deno.env.get("TURSO_AUTH_TOKEN_VALSEMBEDDINGS"),
  });
  const openai = new OpenAI();
  const embedding = await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: query, // parameter assumed
  });
  // ...
}
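Once embeddings exist for both the query and the stored vals, ranking reduces to vector similarity. The val delegates this to sqlite-vss, but the underlying comparison is cosine similarity, which can be sketched directly (the helper name is illustrative):

```typescript
// Cosine similarity between two embedding vectors of equal length:
// dot(a, b) / (|a| * |b|), in [-1, 1] for nonzero vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```

Extensions like sqlite-vss index these comparisons so a search does not have to scan every stored vector.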
catFact
@yuval_dikerman
An interactive, runnable TypeScript val by yuval_dikerman
HTTP
const prompt =
  "Rewrite this fact about cats as if it was written for 3 year old:\n\n" +
  fact;
const story = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  body: JSON.stringify({
    model: "gpt-3.5-turbo", // model assumed
    messages: [{ role: "user", content: prompt }], // shape assumed
    temperature: 0.7,
  }),
  headers: {
    "Authorization": `Bearer ${process.env.OPENAI}`,
    "Content-Type": "application/json",
  },
}).then(async (response) => {
  // ...
});
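Reading the model's reply out of the response body follows the same shape across these vals. A sketch with a mocked response object (the helper name is illustrative; the body shape follows OpenAI's chat completions schema):

```typescript
// Pull the assistant's text out of a chat completions response body,
// falling back to an empty string if no choices were returned.
function extractReply(body: { choices: { message: { content: string } }[] }): string {
  return body.choices[0]?.message?.content ?? "";
}

const mock = { choices: [{ message: { content: "Cats nap a lot!" } }] };
console.log(extractReply(mock)); // "Cats nap a lot!"
```

Guarding against an empty `choices` array avoids a runtime crash when the API returns an error payload instead of a completion.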