Search

Results include substring matches and semantically similar vals.
semanticSearch
@yawnxyz
In-memory semantic search; load it up with Val Town KV.
Script
In-memory semantic search; load it up with Val Town KV. This is a "dumb" version of vector search, for prototyping RAG responses and UIs — with both regular search (w/ Lunr) and vector search (with OpenAI embeddings + cosine similarity).
Usage:
import { semanticSearch } from "https://esm.town/v/yawnxyz/semanticSearch";

const documents = [
  { id: 1, content: 'cats dogs' },
  { id: 2, content: 'elephants giraffes lions tigers' },
  { id: 3, content: 'edam camembert cheddar' },
];

async function runExample() {
  // Add documents to the semantic search instance
  await semanticSearch.addDocuments(documents);
  const results = await semanticSearch.search('animals', 0, 3);
  console.log('Top 3 search results for "animals":');
  console.log(results);
}

runExample();
import { embed, embedMany } from "npm:ai";
import { openai } from "npm:@ai-sdk/openai";
import lunr from "https://cdn.skypack.dev/lunr";

openai.apiKey = Deno.env.get("OPENAI_API_KEY");

// Excerpt, lightly reconstructed: the val also sets CORS options (e.g. allowHeaders: ['Content-Type']),
// and the method names below are illustrative; the preview only shows the embed()/embedMany() calls.
class SemanticSearch {
  async embedText(text, modelName) {
    const { embedding } = await embed({
      model: openai.embedding(modelName),
      value: text,
    });
    return embedding;
  }
  async embedTexts(texts, modelName) {
    const { embeddings } = await embedMany({
      model: openai.embedding(modelName),
      values: texts,
    });
    return embeddings;
  }
}
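The vector-search half of this val comes down to embedding the query and ranking stored documents by cosine similarity. Here is a minimal, self-contained sketch of that ranking step (the Doc shape and function names are illustrative, not the val's actual API):

type Doc = { id: number; content: string; embedding: number[] };

// Cosine similarity between two equal-length vectors
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the topK documents most similar to a query embedding
function rankBySimilarity(queryEmbedding: number[], docs: Doc[], topK = 3): Doc[] {
  return [...docs]
    .sort((a, b) => cosineSimilarity(queryEmbedding, b.embedding) - cosineSimilarity(queryEmbedding, a.embedding))
    .slice(0, topK);
}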
openaiUploadFile
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import { fetch } from "https://esm.town/v/std/fetch";
// Excerpt, lightly reconstructed: wrap the JSON payload in FormData and POST it to the OpenAI Files API.
export async function openaiUploadFile({ key, data, purpose = "assistants" }: {
  key: string; data: unknown; purpose?: string;
}) {
  const file = new Blob([JSON.stringify(data)], { type: "application/json" });
  const formData = new FormData();
  formData.append("purpose", purpose);
  formData.append("file", file, "data.json");
  let result = await fetch("https://api.openai.com/v1/files", {
    method: "POST",
    headers: { Authorization: `Bearer ${key}` },
    body: formData,
  }).then((r) => r.json());
  if (result.error)
    throw new Error("OpenAI Upload Error: " + result.error.message);
  else return result;
}
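A minimal usage sketch, assuming the signature shown above (the data payload here is just an illustrative placeholder):

import { openaiUploadFile } from "https://esm.town/v/stevekrouse/openaiUploadFile";

const uploaded = await openaiUploadFile({
  key: Deno.env.get("OPENAI_API_KEY")!,
  data: { notes: ["example record 1", "example record 2"] },
  purpose: "assistants",
});
console.log(uploaded); // the Files API response, including the new file id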
complete
@webup
An interactive, runnable TypeScript val by webup
Script
export const complete = async (prompt: string | object, options = {}) => {
  // Initialize OpenAI API stub (legacy v3 SDK)
  const { Configuration, OpenAIApi } = await import("https://esm.sh/openai");
  const configuration = new Configuration({
    apiKey: process.env.OPENAI,
  });
  const openai = new OpenAIApi(configuration);
  // Request a text completion (tail reconstructed; the original excerpt stops at the model name)
  const completion = await openai.createCompletion({
    model: "text-davinci-003",
    prompt,
    ...options,
  });
  return completion.data.choices[0].text;
};
images
@ale_annini
An interactive, runnable TypeScript val by ale_annini
Script
import { openaiImages } from "https://esm.town/v/ale_annini/openaiImages";
export const images = openaiImages({ prompt: "a dog", n: 2 });
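The openaiImages val presumably wraps OpenAI's image-generation endpoint. For reference, a direct call with the official SDK looks roughly like this (a sketch of the equivalent request, not the val's internals):

import { OpenAI } from "npm:openai";

const openai = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });

const result = await openai.images.generate({
  prompt: "a dog",
  n: 2,
  size: "512x512",
});
console.log(result.data.map((img) => img.url));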
ask_gpt4
@scio
An interactive, runnable TypeScript val by scio
Script
export const ask_gpt4 = async (query) => {
  const { OpenAI } = await import("https://deno.land/x/openai/mod.ts");
  const openAI = new OpenAI(process.env.OPENAI_KEY);
  // Tail reconstructed: send the query as a single user message and return the completion
  const chatCompletion = await openAI.createChatCompletion({
    model: "gpt-4",
    messages: [{ role: "user", content: query }],
  });
  return chatCompletion;
};
chat
@onixoni
OpenAI ChatGPT helper function: uses your OpenAI token if you have one, or @std/openai for limited free usage.
Script
# OpenAI ChatGPT helper function
This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free.

import { chat } from "https://esm.town/v/stevekrouse/openai";

const { content } = await chat("Hello, GPT!");
console.log(content);

import { chat } from "https://esm.town/v/stevekrouse/openai";

const { content } = await chat(
  [
    { role: "system", content: "You are Alan Kay" },
    { role: "user", content: "What is the real computer revolution?" },
  ],
  { max_tokens: 50, model: "gpt-4" },
);
console.log(content);
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";

async function getOpenAI() {
  // use the std library version when no API key is set
  if (Deno.env.get("OPENAI_API_KEY") === undefined) {
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    return new OpenAI();
  }
  const { OpenAI } = await import("npm:openai");
  return new OpenAI();
}

/**
 * Initiates a chat conversation with OpenAI's GPT model and retrieves the content of the first response.
 */
// (excerpt) inside the exported chat() function, after createParams is built from the prompt/options:
const openai = await getOpenAI();
const completion = await openai.chat.completions.create(createParams);
gptTag
@nbbaier
// a test comment for vt-backup
Script
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";

// a test comment for vt-backup
// another test comment for vt-backup

async function getOpenAI() {
  // if you don't have a key, use our std library version
  if (Deno.env.get("OPENAI_API_KEY") === undefined) {
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    return new OpenAI();
  } else {
    const { OpenAI } = await import("npm:openai");
    return new OpenAI();
  }
}

// Returns a template-literal tag that sends the interpolated string to the chat API.
// (Excerpt: the default chatOptions and the exact assembly of messages/createParams are cut off;
// the lines below are a plausible reconstruction, not the val's verbatim source.)
export function startChat(
  chatOptions: Omit<ChatCompletionCreateParamsNonStreaming, "messages"> & { system: string } = {
    system: "You are a helpful assistant",
    model: "gpt-3.5-turbo",
  },
) {
  return async function gpt(strings, ...values) {
    const openai = await getOpenAI();
    const input = String.raw({ raw: strings }, ...values);
    const { system, ...rest } = chatOptions;
    const messages = [{ role: "system", content: system }, { role: "user", content: input }] as Message[];
    const createParams = { ...rest, messages } as ChatCompletionCreateParamsNonStreaming;
    const completion = await openai.chat.completions.create(createParams);
    return { ...completion, content: completion.choices[0].message.content };
  };
}
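A usage sketch of the tagged-template helper, assuming the startChat signature shown above (the system prompt and model here are just examples):

import { startChat } from "https://esm.town/v/nbbaier/gptTag";

const gpt = startChat({ system: "You are a terse assistant.", model: "gpt-4o-mini" });

const topic = "tagged template literals";
const { content } = await gpt`Explain ${topic} in two sentences.`;
console.log(content);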
digitalYellowRoadrunner
@athyuttamre
An interactive, runnable TypeScript val by athyuttamre
Script
import { OpenAI } from "npm:openai";
import { zodResponseFormat } from "npm:openai/helpers/zod";
import { z } from "npm:zod";
// (excerpt) only the country field is visible in the preview; the schema name and other fields are illustrative
const Location = z.object({
  country: z.string(),
});
const client = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });
async function main() {
  // … (remainder of the val is cut off in this preview)
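zodResponseFormat is normally paired with the SDK's parse helper so the reply comes back already validated against the schema. A sketch of that pattern (the schema, prompt, and model are illustrative):

import { OpenAI } from "npm:openai";
import { zodResponseFormat } from "npm:openai/helpers/zod";
import { z } from "npm:zod";

const CapitalInfo = z.object({
  country: z.string(),
  capital: z.string(),
});

const client = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });

const completion = await client.beta.chat.completions.parse({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "What is the capital of France?" }],
  response_format: zodResponseFormat(CapitalInfo, "capital_info"),
});

// message.parsed is typed from the zod schema and already validated
console.log(completion.choices[0].message.parsed);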
star_a_github_repository_with_natural_language
@thatsmeadarsh
Using the OpenAI Assistant API and Composio to star a GitHub repo.
Script
# Using OpenAI Assistant API, Composio to Star a GitHub Repo
This is example code that uses Composio to star a GitHub repository by creating an AI agent with the OpenAI API.
## Goal
Enable OpenAI assistants to perform tasks like starring a repository on GitHub via natural language commands.
## Tools
List of supported tools.
## FAQs
How do I get a Composio API key? Open app.composio.dev and log in to your account. Then go to app.composio.dev/settings. Navigate to API Keys -> Generate a new API key.
import { OpenAI } from "https://esm.town/v/std/openai";
import { OpenAIToolSet } from "npm:composio-core";
const COMPOSIO_API_KEY = Deno.env.get("COMPOSIO_API_KEY"); // Getting the API key from the environment
const toolset = new OpenAIToolSet({ apiKey: COMPOSIO_API_KEY });
// Creating an authentication function for the user
const instruction = "Star a repo ComposioHQ/composio on GitHub";
const client = new OpenAI();
const response = await client.chat.completions.create({
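A hedged sketch of how the rest of the flow typically looks with Composio's OpenAIToolSet: fetch the GitHub tool definitions, let the model pick a tool call, then have Composio execute it. The method and action names below follow Composio's OpenAI quickstart but are not confirmed by this excerpt; treat them as assumptions and verify against the Composio docs:

// Assumed Composio API: getTools() returns OpenAI-compatible tool definitions,
// handleToolCall() executes whatever tool call the model requested.
const tools = await toolset.getTools({ actions: ["github_star_a_repository_for_the_authenticated_user"] });

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: instruction }],
  tools,
  tool_choice: "auto",
});

const result = await toolset.handleToolCall(response);
console.log(result);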
gptTag
@vladimyr
// Initialize the gpt function with the system message
Script
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";

async function getOpenAI() {
  // if you don't have a key, use our std library version
  if (Deno.env.get("OPENAI_API_KEY") === undefined) {
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    return new OpenAI();
  } else {
    const { OpenAI } = await import("npm:openai");
    return new OpenAI();
  }
}

// Same tagged-template pattern as @nbbaier/gptTag above, except the interpolation is done with reduce.
// (Excerpt: the default chatOptions and the assembly of messages/createParams are cut off in the preview.)
function startChat(chatOptions: Omit<ChatCompletionCreateParamsNonStreaming, "messages"> & { system: string } = { system: "" }) {
  return async function gpt(strings, ...values) {
    const openai = await getOpenAI();
    // join the template strings with their interpolated values
    const input = strings.reduce((result, str, i) => {
      return result + str + (values[i] ?? "");
    }, "");
    // … messages and createParams are built here (not shown) …
    const completion = await openai.chat.completions.create(createParams);
    return { ...completion, content: completion.choices[0].message.content };
  };
}
oliveButterfly
@yawnxyz
An interactive, runnable TypeScript val by yawnxyz
Script
import { fetch } from "https://esm.town/v/std/fetch";

// (excerpt) This is the same upload helper as @stevekrouse/openaiUploadFile shown earlier:
// it wraps `data` in a FormData payload and POSTs it to the OpenAI Files API.
export async function openaiUploadFile({ key, data, purpose = "assistants" }: {
  key: string;
  // …
}) {
  // …
  formData.append("file", file, "data.json");
  let result = await fetch("https://api.openai.com/v1/files", {
    method: "POST",
    // …
  });
  if (result.error)
    throw new Error("OpenAI Upload Error: " + result.error.message);
  // … (remainder cut off in the preview)
}
gpt4Example
@stevekrouse
GPT4 Example. This uses the brand new gpt-4-1106-preview. To use this, set OPENAI_API_KEY in your Val Town Secrets.
Script
This uses the brand new `gpt-4-1106-preview`.
To use this, set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
import { OpenAI } from "npm:openai";
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
chat
@weaverwhale
OpenAI ChatGPT helper function: uses your OpenAI token if you have one, or @std/openai for limited free usage.
Script
# OpenAI ChatGPT helper function
This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free.

import { chat } from "https://esm.town/v/stevekrouse/openai";

const { content } = await chat("Hello, GPT!");
console.log(content);

import { chat } from "https://esm.town/v/stevekrouse/openai";

const { content } = await chat(
  [
    { role: "system", content: "You are Alan Kay" },
    { role: "user", content: "What is the real computer revolution?" },
  ],
  { max_tokens: 50, model: "gpt-4" },
);
console.log(content);
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";

async function getOpenAI() {
  // use the std library version when no API key is set
  if (Deno.env.get("OPENAI_API_KEY") === undefined) {
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    return new OpenAI();
  }
  const { OpenAI } = await import("npm:openai");
  return new OpenAI();
}

/**
 * Initiates a chat conversation with OpenAI's GPT model and retrieves the content of the first response.
 */
// (excerpt) inside the exported chat() function, after createParams is built from the prompt/options:
const openai = await getOpenAI();
const completion = await openai.chat.completions.create(createParams);
gpt3
@snm
OpenAI text completion (https://platform.openai.com/docs/api-reference/completions). val.town has generously provided a free daily quota; until the quota is met, there is no need to provide an API key. To see if the quota has been met, you can run @patrickjm.openAiFreeQuotaExceeded(). For full REST API access, see @patrickjm.openAiTextCompletion.
Script
import { trackOpenAiFreeUsage } from "https://esm.town/v/snm/trackOpenAiFreeUsage";
import { openAiTextCompletion } from "https://esm.town/v/patrickjm/openAiTextCompletion?v=8";
import { openAiModeration } from "https://esm.town/v/snm/openAiModeration";
import { openAiFreeQuotaExceeded } from "https://esm.town/v/patrickjm/openAiFreeQuotaExceeded?v=2";
import { openAiFreeUsageConfig } from "https://esm.town/v/snm/openAiFreeUsageConfig";
/**
 * OpenAI text completion. https://platform.openai.com/docs/api-reference/completions
 * To see if the quota has been met, you can run @patrickjm.openAiFreeQuotaExceeded()
 * For full REST API access, see @patrickjm.openAiTextCompletion
 */
// (excerpt) the exported function takes a params object that includes an optional key:
//   openAiKey?: string,
// and falls back to the shared free-quota key when none is provided:
const apiKey = params.openAiKey ?? openAiFreeUsageConfig.key;
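A usage sketch; beyond openAiKey, the parameter names aren't shown in this excerpt, so prompt below is an assumption based on the wrapped @patrickjm.openAiTextCompletion API:

import { gpt3 } from "https://esm.town/v/snm/gpt3";

const text = await gpt3({
  prompt: "Write a one-line tagline for a semantic search engine.", // assumed parameter name
  // openAiKey: Deno.env.get("OPENAI_API_KEY"), // optional: bring your own key once the free quota is used up
});
console.log(text);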
PersonalizationGPT
@mjweaver01
PersonalizationGPT ("You are a helpful personalization assistant"): use GPT to return JIT personalization for client-side applications. If you fork this, you'll need to set OPENAI_API_KEY in your Val Town Secrets.
HTTP (deprecated)
Use GPT to return JIT personalization for client side applications.
If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
import { Hono } from "npm:hono";
import { OpenAI } from "npm:openai";

// (excerpt) the default user object's fields are cut off in this search preview
const defaultUser = { /* … */ };

const personalizationGPT = async (user: UserObject) => {
  const openai = new OpenAI();
  let chatCompletion = await openai.chat.completions.create({
    messages: [
      // … a system prompt ("You are a helpful personalization assistant") plus the serialized user object
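Since this is an HTTP val, a client would typically POST its user object and render whatever personalization comes back. A sketch of that client-side call; the URL and payload shape are hypothetical, so check the val for its actual route and request format:

// Hypothetical endpoint following Val Town's <user>-<val>.web.val.run pattern
const res = await fetch("https://mjweaver01-personalizationgpt.web.val.run", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "Ada", interests: ["databases", "rowing"] }), // example user object
});
const personalization = await res.text();
console.log(personalization);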