fetchAndStoreOpenAiUsage2
@nbbaier
An interactive, runnable TypeScript val by nbbaier
Cron
import { cronEvalLogger as logger } from "https://esm.town/v/nbbaier/cronLogger";
import { fetchOpenAiUsageData } from "https://esm.town/v/nbbaier/fetchOpenAiUsageData";
import { updateBlobUsageDB } from "https://esm.town/v/nbbaier/updateBlobUsageDB";
import { DateTime } from "npm:luxon";
const fetchAndStoreOpenAiUsage = async (interval: Interval) => {
  const timeZone = "America/Chicago";
  const today = DateTime.now().setZone(timeZone).toISODate();
  try {
    const { data, whisper_api_data, dalle_api_data } = await fetchOpenAiUsageData(today);
    const day_total = await createDayTotal(data, whisper_api_data, dalle_api_data); // createDayTotal: defined elsewhere in the val (truncated in this preview)
    await updateBlobUsageDB(day_total); // persist via the import above
  } catch (error) {
    console.error(error);
  }
};

export default logger(fetchAndStoreOpenAiUsage);
openaiOpenAPI
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
export let openaiOpenAPI = `
openapi: 3.0.0
info:
books
@laidlaw
/** @jsxImportSource npm:hono@3/jsx */
HTTP
import { Hono } from "npm:hono@3";
import { OpenAI } from "npm:openai";
const openai = new OpenAI();
async function esmTown(url) {
  try {
    const response = await openai.chat.completions.create({
      messages: [
MILLENCHAT
@LucasMillen
// "name": "AI Chat Assistant",
// "description": "A chat assistant using OpenAI's API",
HTTP
async function callOpenAI(userMessage: string): Promise<string> {
  const apiKey = Deno.env.get("OPENAI_API_KEY");
  if (!apiKey) throw new Error("OpenAI API key is not configured. Please set the OPENAI_API_KEY environment variable.");
  try {
    const response = await fetch("https://api.openai.com/v1/chat/completions", { /* request options truncated in preview */ });
    if (!response.ok) throw new Error(`OpenAI API error: ${response.status} - ${await response.text()}`);
    // ... parse and return the completion text (truncated in preview)
  } catch (error) {
    console.error("OpenAI API Call Error:", error);
    throw error;
  }
}
const [openAIError, setOpenAIError] = useState<string | null>(null);
setOpenAIError(null);
const botReply = await callOpenAI(userMessage);
semanticSearchNeon
@janpaul123
Part of Val Town Semantic Search. The `vals_embeddings` table gets refreshed every 10 minutes by janpaul123/indexValsNeon.
Script
Uses [Neon](https://neon.tech/) to search embeddings of all vals, using the [pg_vector](https://neon.tech/docs/extensions/pgvector) extension.
- Call OpenAI to generate an embedding for the search query.
- Query the `vals_embeddings` table in Neon using the cosine similarity operator.
import { blob } from "https://esm.town/v/std/blob";
import OpenAI from "npm:openai";
const dimensions = 1536;
await client.connect(); // `client`: Neon Postgres client created earlier in the val (truncated in this preview)
const openai = new OpenAI();
const queryEmbedding = (await openai.embeddings.create({
model: "text-embedding-3-small",
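The lookup step the description mentions is truncated in this preview. A minimal sketch of how the pgvector query could look, assuming a connected Postgres `client` and a `vals_embeddings(id, embedding vector(1536))` table; the `toVectorLiteral` helper and column names are assumptions, not the val's actual code:

```typescript
// Hypothetical helper: format an embedding as a pgvector literal, e.g. "[0.1,0.2]".
function toVectorLiteral(embedding: number[]): string {
  return `[${embedding.join(",")}]`;
}

// pgvector's `<=>` operator is cosine distance, so ascending order
// returns the most similar vals first.
async function searchSimilarVals(client: any, queryEmbedding: number[], limit = 10) {
  const { rows } = await client.query(
    `SELECT id, 1 - (embedding <=> $1::vector) AS similarity
     FROM vals_embeddings
     ORDER BY embedding <=> $1::vector
     LIMIT $2`,
    [toVectorLiteral(queryEmbedding), limit],
  );
  return rows;
}
```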
autoGPT_Test2
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
export let autoGPT_Test2 = (async () => {
  const { Configuration, OpenAIApi } = await import("npm:openai");
  const configuration = new Configuration({
    apiKey: process.env.openai,
  });
  const openai = new OpenAIApi(configuration);
  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
compassionateBlackCatfish
@gigmx
// Add a separate API route handler for chat
HTTP
import OpenAI from "https://esm.sh/openai@4.28.4";
export default async function(req: Request): Promise<Response> {
const LEPTON_API_TOKEN = Deno.env.get('LEPTON_API_TOKEN') || '';
const openai = new OpenAI({
apiKey: LEPTON_API_TOKEN,
presence_penalty: config.presence_penalty ?? 0.5
const response = await openai.chat.completions.create(apiConfig);
return new Response(JSON.stringify(response), {
chat
@willthereader
OpenAI ChatGPT helper function. This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free.

import { chat } from "https://esm.town/v/stevekrouse/openai";

const { content } = await chat("Hello, GPT!");
console.log(content);

const { content } = await chat(
  [
    { role: "system", content: "You are Alan Kay" },
    { role: "user", content: "What is the real computer revolution?" },
  ],
  { max_tokens: 50, model: "gpt-4o" },
);
console.log(content);
Script
# OpenAI ChatGPT helper function
This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free.
import { chat } from "https://esm.town/v/stevekrouse/openai";
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";
async function getOpenAI() {
  // Fall back to the free @std/openai client when no personal API key is set.
  if (Deno.env.get("OPENAI_API_KEY") === undefined) {
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    return new OpenAI();
  }
  const { OpenAI } = await import("npm:openai");
  return new OpenAI();
}
* Initiates a chat conversation with OpenAI's GPT model and retrieves the content of the first response.
const openai = await getOpenAI();
const completion = await openai.chat.completions.create(createParams);
jadeMacaw
@janpaul123
An interactive, runnable TypeScript val by janpaul123
Script
import _ from "npm:lodash";
import OpenAI from "npm:openai";
export default async function semanticSearchPublicVals(query) {
db.add({ id, embedding, metadata: {} });
const openai = new OpenAI();
const embedding = await openai.embeddings.create({
model: "text-embedding-3-small",
embeddingsSearchExample
@yawnxyz
This is an example of in-memory search, using a combination of lunr, OpenAI embeddings, and cosine similarity
Script
import { embed, embedMany } from "npm:ai";
import { openai } from "npm:@ai-sdk/openai";
import lunr from "https://cdn.skypack.dev/lunr";
const { embedding } = await embed({
model: openai.embedding('text-embedding-3-small'),
value: text,
const { embeddings } = await embedMany({
model: openai.embedding('text-embedding-3-small'),
values: texts,
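The cosine-similarity half of the combination described above does not appear in the preview. A standalone sketch of that step (the function names here are illustrative, not the val's actual internals):

```typescript
// Cosine similarity between two equal-length embedding vectors:
// dot(a, b) / (|a| * |b|), in [-1, 1]; 1 means identical direction.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank documents by similarity to a query embedding, most similar first.
function rank(queryEmbedding: number[], docEmbeddings: number[][]): number[] {
  return docEmbeddings
    .map((e, i) => ({ i, score: cosineSimilarity(queryEmbedding, e) }))
    .sort((x, y) => y.score - x.score)
    .map((r) => r.i);
}
```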
elevenlabsTTS
@ale_annini
An interactive, runnable TypeScript val by ale_annini
Script
import process from "node:process";
export const elevenlabsTTS = async (req, res) => {
// https://platform.openai.com/docs/api-reference/images/create
// https://ale_annini-elevenlabstts.express.val.run/?args=[%22{\%22text\%22:\%22it%20beautiful\%22}%22]
const payload = {
VERSION
@rattrayalex
An interactive, runnable TypeScript val by rattrayalex
Script
import OpenAI from "https://esm.sh/openai";
import { VERSION } from "https://esm.sh/openai/version";
console.log(VERSION);
export { VERSION };
gpt3Example
@ktodaz
An interactive, runnable TypeScript val by ktodaz
Script
import process from "node:process";
const gpt3Example = async () => {
  const response = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer " + process.env.OPENAI_API_KEY, // Replace with your OpenAI API Key
    },
    body: JSON.stringify({
      "prompt": "send me a test message",
chat
@bluemsn
An interactive, runnable TypeScript val by bluemsn
Script
options = {},
// Initialize OpenAI API stub
const { OpenAI } = await import("https://esm.sh/openai");
const openai = new OpenAI();
const messages = typeof prompt === "string"
  ? [{ role: "user", content: prompt }] // true branch truncated in preview; wrapping the string as a user message is an assumption
  : prompt;
const completion = await openai.chat.completions.create({
messages: messages,
tenseRoseTiglon
@MichaelNollox
VALL-E LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it.
HTTP
* Fork this val to your own profile.
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-variables).
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environment-variables).
* Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
import { extractValInfo } from "https://esm.town/v/pomdtr/extractValInfo?v=29";
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [