Results include substring matches and semantically similar vals.
generateEmbeddings
@thomasatflexos
An interactive, runnable TypeScript val by thomasatflexos
Script
message: "The URL parameter is required for this end point",
const { OpenAIEmbeddings } = await import("npm:langchain/embeddings");
const { createClient } = await import(
splittedDocs,
new OpenAIEmbeddings({
openAIApiKey: process.env.OPEN_API_KEY,
client,
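The generateEmbeddings fragments above pair LangChain's OpenAIEmbeddings with a Supabase client over pre-split documents (`splittedDocs`). As a rough sketch of the splitting step such a pipeline relies on, here is a hypothetical chunker; the function name, chunk size, and overlap are illustrative defaults, not taken from the val:

```typescript
// Split a long document into overlapping chunks before embedding.
// Overlap keeps context that would otherwise be lost at chunk boundaries.
function chunkText(text: string, size = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached
    start += size - overlap; // step forward, keeping `overlap` characters
  }
  return chunks;
}
```

Each chunk would then be embedded and upserted into the vector store.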
fetchAndStoreOpenAiUsage2
@nbbaier
An interactive, runnable TypeScript val by nbbaier
Cron
import { cronEvalLogger as logger } from "https://esm.town/v/nbbaier/cronLogger";
import { fetchOpenAiUsageData } from "https://esm.town/v/nbbaier/fetchOpenAiUsageData";
import { updateBlobUsageDB } from "https://esm.town/v/nbbaier/updateBlobUsageDB";
import { DateTime } from "npm:luxon";
const fetchAndStoreOpenAiUsage = async (interval: Interval) => {
const timeZone = "America/Chicago";
try {
const { data, whisper_api_data, dalle_api_data } = await fetchOpenAiUsageData(today);
const day_total = await createDayTotal(data, whisper_api_data, dalle_api_data);
console.error(error);
export default logger(fetchAndStoreOpenAiUsage);
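The cron above fetches usage for the main, Whisper, and DALL·E APIs and rolls them into a daily total before storing it. A minimal sketch of what that aggregation might look like; the `UsageLine` shape and its `cost` field are assumptions, not the val's actual types:

```typescript
// Hypothetical shape of one line item in an OpenAI usage report.
interface UsageLine {
  cost: number; // cost in dollars for this line item
}

// Sum line-item costs across the three API reports into one daily total.
function createDayTotal(
  data: UsageLine[],
  whisperData: UsageLine[],
  dalleData: UsageLine[],
): number {
  return [...data, ...whisperData, ...dalleData]
    .reduce((sum, line) => sum + line.cost, 0);
}
```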
openaiOpenAPI
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
export let openaiOpenAPI = `
openapi: 3.0.0
info:
books
@laidlaw
@jsxImportSource npm:hono@3/jsx
HTTP (deprecated)
import { Hono } from "npm:hono@3";
import { OpenAI } from "npm:openai";
const openai = new OpenAI();
function esmTown(url) {
try {
const response = await openai.chat.completions.create({
messages: [
aiHonoHtmxAlpineStreamingExample
@yawnxyz
This Frankenstein of an example shows how well Hono, htmx, and Alpine play together. Hono serves the frameworks, API calls, and functions; htmx handles ajax requests, and can very powerfully request html and other content to swap out the front-end; alpine handles app-like reactivity without having to always resort to server round trips.
HTTP (deprecated)
import { cors } from 'npm:hono/cors';
import { stream, streamSSE } from "https://deno.land/x/hono@v4.3.11/helper.ts";
import { OpenAI } from "npm:openai";
import { ai } from "https://esm.town/v/yawnxyz/ai";
const app = new Hono();
const openai = new OpenAI();
app.use('*', cors({
origin: '*',
translator
@yawnxyz
Press to talk, and get a translation! The app is set up so you can easily have a conversation between two people. The app will translate between the two selected languages, in each voice, as the speakers talk. Add your OpenAI API Key, and make sure to open in a separate window for Mic to work.
HTTP (deprecated)
import { cors } from 'npm:hono/cors';
import { OpenAI } from "npm:openai";
const app = new Hono();
const openai = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY_VOICE") });
class TranscriptionService {
try {
const transcription = await openai.audio.transcriptions.create({
file: audioFile,
} catch (error) {
console.error('OpenAI API error:', error);
throw error;
try {
const response = await openai.chat.completions.create({
model: "gpt-3.5-turbo",
} catch (error) {
console.error('OpenAI API error:', error);
return c.text('Error occurred during translation', 500);
try {
const mp3 = await openai.audio.speech.create({
model: "tts-1",
} catch (error) {
console.error('OpenAI API error:', error);
return c.text('Error occurred during speech generation', 500);
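The fragments above outline a three-stage pipeline: transcribe the audio, translate the transcript, then synthesize speech. A sketch of how the stages compose, assuming an npm:openai v4 client instance; `buildTranslationPrompt` is a hypothetical helper, and the `whisper-1` model and `alloy` voice are illustrative defaults rather than values confirmed by the val:

```typescript
// Hypothetical system prompt for the translation stage.
function buildTranslationPrompt(from: string, to: string): string {
  return `You are a translator. Translate the user's ${from} speech into ${to}, ` +
    "returning only the translation.";
}

// `openai` is assumed to be an npm:openai v4 client; `audioFile` a File/Blob.
async function translateAudio(openai: any, audioFile: any, from: string, to: string) {
  // 1. Speech-to-text
  const transcription = await openai.audio.transcriptions.create({
    file: audioFile,
    model: "whisper-1",
  });
  // 2. Translate the transcript with a chat completion
  const chat = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: buildTranslationPrompt(from, to) },
      { role: "user", content: transcription.text },
    ],
  });
  const translated = chat.choices[0].message.content;
  // 3. Text-to-speech on the translation
  const mp3 = await openai.audio.speech.create({
    model: "tts-1",
    voice: "alloy",
    input: translated,
  });
  return { translated, audio: await mp3.arrayBuffer() };
}
```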
semanticSearchNeon
@janpaul123
Part of Val Town Semantic Search. Uses Neon to search embeddings of all vals, using the pg_vector extension. Call OpenAI to generate an embedding for the search query. Query the vals_embeddings table in Neon using the cosine similarity operator. The vals_embeddings table gets refreshed every 10 minutes by janpaul123/indexValsNeon.
Script
import { blob } from "https://esm.town/v/std/blob";
import OpenAI from "npm:openai";
const dimensions = 1536;
await client.connect();
const openai = new OpenAI();
const queryEmbedding = (await openai.embeddings.create({
model: "text-embedding-3-small",
autoGPT_Test2
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
export let autoGPT_Test2 = (async () => {
const { Configuration, OpenAIApi } = await import("npm:openai");
const configuration = new Configuration({
apiKey: process.env.openai,
const openai = new OpenAIApi(configuration);
const completion = await openai.createChatCompletion({
model: "gpt-3.5-turbo",
chat
@willthereader
OpenAI ChatGPT helper function. This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free. Usage:
import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat("Hello, GPT!");
console.log(content);
Or with a full message array and options:
import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat(
  [
    { role: "system", content: "You are Alan Kay" },
    { role: "user", content: "What is the real computer revolution?" },
  ],
  { max_tokens: 50, model: "gpt-4o" },
);
console.log(content);
Script
import { chat } from "https://esm.town/v/stevekrouse/openai";
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";
async function getOpenAI() {
if (Deno.env.get("OPENAI_API_KEY") === undefined) {
const { OpenAI } = await import("https://esm.town/v/std/openai");
return new OpenAI();
const { OpenAI } = await import("npm:openai");
return new OpenAI();
* Initiates a chat conversation with OpenAI's GPT model and retrieves the content of the first response.
const openai = await getOpenAI();
const completion = await openai.chat.completions.create(createParams);
jadeMacaw
@janpaul123
An interactive, runnable TypeScript val by janpaul123
Script
import _ from "npm:lodash";
import OpenAI from "npm:openai";
export default async function semanticSearchPublicVals(query) {
db.add({ id, embedding, metadata: {} });
const openai = new OpenAI();
const embedding = await openai.embeddings.create({
model: "text-embedding-3-small",
embeddingsSearchExample
@yawnxyz
This is an example of in-memory search, using a combination of lunr, OpenAI embeddings, and cosine similarity
Script
This is an example of in-memory search, using a combination of lunr, OpenAI embeddings, and cosine similarity
import { embed, embedMany } from "npm:ai";
import { openai } from "npm:@ai-sdk/openai";
import lunr from "https://cdn.skypack.dev/lunr";
const { embedding } = await embed({
model: openai.embedding('text-embedding-3-small'),
value: text,
const { embeddings } = await embedMany({
model: openai.embedding('text-embedding-3-small'),
values: texts,
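The example above ranks documents by embedding similarity. The cosine measure it relies on can be computed directly from two embedding vectors; this exact helper is an assumption for illustration, not code from the val:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1]; higher is closer.
// Assumes both vectors have the same length, as embeddings from one model do.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Ranking is then a matter of computing this score between the query embedding and each stored embedding, and sorting descending.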
chat
@xsec
OpenAI ChatGPT helper function. This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free. Usage:
import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat("Hello, GPT!");
console.log(content);
Or with a full message array and options:
import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat(
  [
    { role: "system", content: "You are Alan Kay" },
    { role: "user", content: "What is the real computer revolution?" },
  ],
  { max_tokens: 50, model: "gpt-4o" },
);
console.log(content);
Script
import { chat } from "https://esm.town/v/stevekrouse/openai";
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";
async function getOpenAI() {
if (Deno.env.get("OPENAI_API_KEY") === undefined) {
const { OpenAI } = await import("https://esm.town/v/std/openai");
return new OpenAI();
const { OpenAI } = await import("npm:openai");
return new OpenAI();
* Initiates a chat conversation with OpenAI's GPT model and retrieves the content of the first response.
const openai = await getOpenAI();
const completion = await openai.chat.completions.create(createParams);
VERSION
@rattrayalex
An interactive, runnable TypeScript val by rattrayalex
Script
import OpenAI from "https://esm.sh/openai";
import { VERSION } from "https://esm.sh/openai/version";
console.log(VERSION);
export { VERSION };
chat
@bluemsn
An interactive, runnable TypeScript val by bluemsn
Script
options = {},
// Initialize OpenAI API stub
const { OpenAI } = await import(
"https://esm.sh/openai"
const openai = new OpenAI();
const messages = typeof prompt === "string"
: prompt;
const completion = await openai.chat.completions.create({
messages: messages,
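The `typeof prompt === "string"` branch above normalizes the input before calling the API. Spelled out as a standalone helper (the name `toMessages` is hypothetical):

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Accept either a bare prompt string or a ready-made message array,
// mirroring the normalization branch in the val above.
function toMessages(prompt: string | ChatMessage[]): ChatMessage[] {
  return typeof prompt === "string"
    ? [{ role: "user", content: prompt }]
    : prompt;
}
```

Either form can then be passed straight to `openai.chat.completions.create` as the `messages` field.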
gpt3Example
@ktodaz
An interactive, runnable TypeScript val by ktodaz
Script
import process from "node:process";
const gpt3Example = async () => {
  const response = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer " + process.env.OPENAI_API_KEY, // Replace with your OpenAI API Key
    },
    body: JSON.stringify({
      "model": "gpt-3.5-turbo-instruct",
      "prompt": "send me a test message",
      "max_tokens": 50,
    }),
  });
  const data = await response.json();
  return data.choices?.[0]?.text;
};