Search

Results include substring matches and semantically similar vals.
translator
@stevekrouse
Press to talk, and get a translation! The app is set up so you can easily have a conversation between two people: it translates between the two selected languages, in each voice, as the speakers talk. Add your OpenAI API key, and make sure to open the app in a separate window for the mic to work.
HTTP (deprecated)
import { Hono } from "npm:hono@3";
import { html } from "npm:hono@3/html";
import { OpenAI } from "npm:openai";

const app = new Hono();
// The OpenAI constructor takes an options object, not a bare key string.
const openai = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY_VOICE") });
class TranscriptionService {
  async transcribe(audioFile: File) { // method name assumed; body elided in this search excerpt
    try {
      const transcription = await openai.audio.transcriptions.create({
        file: audioFile,
        model: "whisper-1",
      });
      return transcription.text;
    } catch (error) {
      console.error("OpenAI API error:", error);
      throw error;
    }
  }
}
try {
  const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }], // prompt elided in this excerpt
  });
  return c.text(response.choices[0].message.content ?? "");
} catch (error) {
  console.error("OpenAI API error:", error);
  return c.text("Error occurred during translation", 500);
}
try {
  const mp3 = await openai.audio.speech.create({
    model: "tts-1",
    voice: "alloy", // voice selection elided in this excerpt
    input: text, // text to speak, elided in this excerpt
  });
  return new Response(mp3.body);
} catch (error) {
  console.error("OpenAI API error:", error);
  return c.text("Error occurred during speech generation", 500);
}
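The translation prompt itself is not shown in the excerpt above. A minimal sketch of how one might build a system prompt for a two-way conversation between two selected languages (the function name and wording are assumptions, not the val's actual prompt):

```typescript
// Hypothetical helper: builds a system prompt for two-way translation.
// The translator val's real prompt is elided from the search excerpt.
export function buildTranslationPrompt(langA: string, langB: string): string {
  return [
    `You are a translator between ${langA} and ${langB}.`,
    `If the input is in ${langA}, reply with the ${langB} translation.`,
    `If the input is in ${langB}, reply with the ${langA} translation.`,
    `Reply with the translation only, no extra commentary.`,
  ].join(" ");
}
```

A prompt like this lets a single chat-completion call serve both speakers, since the model infers the direction from the input language.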
tenseRoseTiglon
@MichaelNollox
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`. To use OpenAI models, set the `OPENAI_API_KEY` env var; to use Anthropic models, set the `ANTHROPIC_API_KEY` env var. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
import { extractValInfo } from "https://esm.town/v/pomdtr/extractValInfo?v=29";
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    // messages elided in this search excerpt
  ],
});
elevenlabsTTS
@ale_annini
An interactive, runnable TypeScript val by ale_annini
Script
import process from "node:process";
export const elevenlabsTTS = async (req, res) => {
// https://platform.openai.com/docs/api-reference/images/create
// https://ale_annini-elevenlabstts.express.val.run/?args=[%22{\%22text\%22:\%22it%20beautiful\%22}%22]
const payload = {
kindness
@nate
An interactive, runnable TypeScript val by nate
Script
import process from "node:process";

// `gpt3` helper import elided in this search excerpt
export let kindness = async () => {
  return await gpt3({
    openAiKey: process.env.OPENAI_API_KEY,
    prompt:
      "Speaking as universal consciousness, say something short, true, uplifting, loving, and kind.",
  });
};
tomatoMinnow
@tahsin
Vercel AI SDK with Val Town! Use the Vercel AI SDK in your vals. Note: you must add your OpenAI key to your Val Town env variables under `OPENAI_API_KEY`. If you would like to specify a different name for your API key, you can create a custom OpenAI provider with the `createOpenAI` function. Prefer another AI provider? Use any supported provider by changing just two lines of code!
HTTP (deprecated)
import { openai } from "npm:@ai-sdk/openai";
import { StreamingTextResponse, streamText } from "npm:ai";
const result = await streamText({
  model: openai("gpt-4o"),
  prompt: "Generate a fast recipe for Lasagna.",
});
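The description mentions `createOpenAI` for using a custom API-key name. A configuration sketch, assuming the key is stored under a hypothetical env var called `MY_OPENAI_KEY` (not a name the val itself uses):

```typescript
import { createOpenAI } from "npm:@ai-sdk/openai";
import { streamText } from "npm:ai";

// "MY_OPENAI_KEY" is a hypothetical env var name; any name works here.
const myOpenAI = createOpenAI({
  apiKey: Deno.env.get("MY_OPENAI_KEY"),
});

// The custom provider is called the same way as the default `openai` provider.
const result = await streamText({
  model: myOpenAI("gpt-4o"),
  prompt: "Generate a fast recipe for Lasagna.",
});
```

Swapping providers is the same two-line change: replace the import and the `model:` line with another supported provider.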
weatherGPT
@varun_balani
If you fork this, you'll need to set `OPENAI_API_KEY` in your Val Town Secrets.
Cron
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "npm:openai";

let location = "manipal";
// fetch target elided in this search excerpt; wttr.in is an assumption
const weather = await fetch(`https://wttr.in/${location}?format=j1`).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
  model: "gpt-4", // model elided in this excerpt
  messages: [{
    role: "user",
    content: "...", // prompt elided in this excerpt
  }],
});
debugValEmbeddings
@janpaul123
An interactive, runnable TypeScript val by janpaul123
Script
import _ from "npm:lodash";
import OpenAI from "npm:openai";
const openai = new OpenAI();
const queryEmbedding = (await openai.embeddings.create({
  model: "text-embedding-3-small",
  input: query, // input elided in this search excerpt
})).data[0].embedding;
console.log(queryEmbedding.slice(0, 4));
const embedding = await openai.embeddings.create({
  model: "text-embedding-3-small",
  input: valText, // input elided in this excerpt
});
console.log("Hash is the same, no email sent.", { dynamiclandWebsiteHash });
const queryEmbeddingVal = (await openai.embeddings.create({
  model: "text-embedding-3-small",
  input: query, // input elided in this excerpt
})).data[0].embedding;
chat
@kirineko
An interactive, runnable TypeScript val by kirineko
Script
options = {},
// Initialize OpenAI API stub
const { Configuration, OpenAIApi } = await import(
  "https://esm.sh/openai@3.3.0"
);
console.log(process.env);
// const configuration = new Configuration({
// apiKey: process.env.OPENAI,
// const openai = new OpenAIApi(configuration);
// // Request chat completion
// : prompt;
// const { data } = await openai.createChatCompletion({
// model: "gpt-3.5-turbo-0613",
apricotTurkey
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import { OpenAI } from "https://esm.town/v/std/openai?v=2";
const openai = new OpenAI();
const functionExpression = await openai.chat.completions.create({
  "messages": [
    // messages elided in this search excerpt
  ],
});
WriterOptions
@nbbaier
An interactive, runnable TypeScript val by nbbaier
Script
import { type ClientOptions } from "npm:openai";
export interface WriterOptions extends ClientOptions {
  model?: string;
}
semanticSearchBlobs
@janpaul123
Part of Val Town Semantic Search. Uses Val Town's blob storage to search embeddings of all vals, by downloading them all and iterating through all of them to compute distance. Slow and terrible, but it works! Get metadata from blob storage: `allValsBlob${dimensions}EmbeddingsMeta` (currently `allValsBlob1536EmbeddingsMeta`), which has a list of all indexed vals and where their embedding is stored (`batchDataIndex` points to the blob, and `valIndex` represents the offset within the blob). The blobs have been generated by janpaul123/indexValsBlobs; it is not run automatically. Get all blobs with embeddings pointed to by the metadata, e.g. `allValsBlob1536EmbeddingsData_0` for `batchDataIndex` 0. Call OpenAI to generate an embedding for the search query. Go through all embeddings and compute cosine similarity with the embedding for the search query. Return a list sorted by similarity.
Script
import _ from "npm:lodash";
import OpenAI from "npm:openai";
const dimensions = 1536;
await Promise.all(allBatchDataIndexesPromises);
const openai = new OpenAI();
const queryEmbedding = (await openai.embeddings.create({
  model: "text-embedding-3-small",
  input: query, // search query elided in this excerpt
})).data[0].embedding;
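The "compute cosine similarity" step from the description can be sketched as the standard formula; this is the textbook computation, not necessarily the val's exact code:

```typescript
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
export function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

OpenAI embeddings are returned unit-normalized, so in practice the plain dot product gives the same ranking.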
weatherGPT
@ellenchisa
If you fork this, you'll need to set `OPENAI_API_KEY` in your Val Town Secrets.
Cron
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "npm:openai";

let location = "san francisco ca";
// fetch target elided in this search excerpt; wttr.in is an assumption
const weather = await fetch(`https://wttr.in/${location}?format=j1`).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
  model: "gpt-4", // model elided in this excerpt
  messages: [{
    role: "user",
    content: "...", // prompt elided in this excerpt
  }],
});
streamingTest
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
import process from "node:process";

export const streamingTest = (async () => {
  const { OpenAI } = await import("https://esm.sh/langchain/llms/openai");
  // To enable streaming, pass `streaming: true` to the LLM constructor and a
  // handler for the `handleLLMNewToken` event.
  const chat = new OpenAI({
    maxTokens: 25,
    streaming: true,
    openAIApiKey: process.env.OPENAI_API_KEY,
  });
  const response = await chat.call("Tell me a joke.", undefined, [
    {
      handleLLMNewToken(token: string) {
        console.log({ token });
      },
    },
  ]);
  return response;
})();
tomatoMinnow
@kora
Vercel AI SDK with Val Town! Stream the OpenAI API. Use the Vercel AI SDK in your vals. Note: you must add your OpenAI key to your Val Town env variables under `OPENAI_API_KEY`. If you would like to specify a different name for your API key, you can create a custom OpenAI provider with the `createOpenAI` function. Prefer another AI provider? Use any supported provider by changing just two lines of code!
HTTP (deprecated)
import { openai } from "npm:@ai-sdk/openai";
import { StreamingTextResponse, streamText } from "npm:ai";
const result = await streamText({
  model: openai("gpt-4o"),
  prompt: "Generate a fast recipe for Lasagna.",
});
OpenAIUsage
@std
OpenAI Proxy Metrics. We write OpenAI usage data to an `openai_usage` SQLite table. This script val is imported into the OpenAI proxy. Use this val to run administrative scripts: https://www.val.town/v/std/OpenAIUsageScript
Script
FROM
openai_usage,
params
model: string;
export class OpenAIUsage {
  constructor() {}
  async migrate() {
    await sqlite.batch([`CREATE TABLE IF NOT EXISTS openai_usage (
      id INTEGER PRIMARY KEY
      -- remaining columns elided in this search excerpt
    )`]);
  }
  async drop() {
    await sqlite.batch([`DROP TABLE IF EXISTS openai_usage`]);
  }
  async writeUsage(ur: UsageRow) {
    sqlite.execute({
      sql: "INSERT INTO openai_usage (user_id, handle, tier, tokens, model) VALUES (?, ?, ?, ?, ?)",
      args: [ur.userId, ur.handle, ur.tier, ur.tokens, ur.model],
    });
  }
let resp = await sqlite.execute({
sql: `select count(*) from openai_usage where model LIKE 'gpt-4%' and model NOT LIKE '%mini%' and user_id = ? and times
args: [userId, now.toISOString()],
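The usage query above counts models matching `gpt-4%` but excluding `%mini%`. The same filter can be sketched in TypeScript (a hypothetical helper for illustration, not part of the val):

```typescript
// Mirrors the SQL predicate: model LIKE 'gpt-4%' AND model NOT LIKE '%mini%'
export function countsTowardGpt4Quota(model: string): boolean {
  return model.startsWith("gpt-4") && !model.includes("mini");
}
```

So `gpt-4o` counts toward the quota, while `gpt-4o-mini` and `gpt-3.5-turbo` do not.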