Search

Results include substring matches and semantically similar vals.
valTownChatGPT
@stevekrouse
HTTP (deprecated)
# ChatGPT Implemented in Val Town
Demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
/** @jsxImportSource https://esm.sh/react */
import { Hono } from "npm:hono@3";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";

const openai = new OpenAI();
// ...
app.get("/", async (c) => {
  const thread = await openai.beta.threads.create();
  const assistant = await openai.beta.assistants.create({
    name: "",
    // ...
  });
  const message = c.req.query("message");
  await openai.beta.threads.messages.create(
    threadId,
    // ...
  );
  // ...
  "data: " + JSON.stringify(str) + "\n\n",
  // ...
  const run = openai.beta.threads.runs.stream(threadId, {
    assistant_id: assistantId,
    // ...
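The preview elides most of the val, so here is a minimal, self-contained sketch of the same pattern: an assistant plus a thread, with the run streamed back as Server-Sent Events. The assistant name, model, and SSE plumbing are assumptions, not the original's code.

```ts
import { Hono } from "npm:hono@3";
import OpenAI from "npm:openai";

const openai = new OpenAI();
const app = new Hono();

app.get("/", async (c) => {
  // One assistant and one thread per request keeps the sketch simple.
  const assistant = await openai.beta.assistants.create({
    name: "val-town-chat", // illustrative name; the original's is elided
    model: "gpt-4o-mini",  // assumed model
  });
  const thread = await openai.beta.threads.create();

  // Put the user's message on the thread.
  await openai.beta.threads.messages.create(thread.id, {
    role: "user",
    content: c.req.query("message") ?? "Hello",
  });

  // Stream the run, forwarding each text delta as a Server-Sent Event.
  const run = openai.beta.threads.runs.stream(thread.id, {
    assistant_id: assistant.id,
  });
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    start(controller) {
      run.on("textDelta", (delta) => {
        controller.enqueue(encoder.encode("data: " + JSON.stringify(delta.value) + "\n\n"));
      });
      run.on("end", () => controller.close());
    },
  });
  return c.body(body, 200, { "Content-Type": "text/event-stream" });
});

export default app.fetch;
```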
indexValsNeon
@janpaul123
Cron
*Part of [Val Town Semantic Search](https://www.val.town/v/janpaul123/valtownsemanticsearch).*
Generates OpenAI embeddings for all public vals, and stores them in [Neon](https://neon.tech/), using the [pg_vector](https://github.com/pgvector/pgvector) extension.
- Create the `vals_embeddings` table in Neon if it doesn't already exist.
- Get all val names from the database of public vals, made by Achille Lacoin.
- Get all val names from the `vals_embeddings` table and compute the difference (which ones are missing).
- Iterate through all missing vals, get their code, get embeddings from OpenAI, and store the result in Neon.
- Can now be searched using [janpaul123/semanticSearchNeon](https://www.val.town/v/janpaul123/semanticSearchNeon).
import { blob } from "https://esm.town/v/std/blob";
import OpenAI from "npm:openai";
import { truncateMessage } from "npm:openai-tokens";

// CREATE TABLE vals_embeddings (id TEXT PRIMARY KEY, embedding VECTOR(1536));
// ...
newValsBatches.push(currentBatch);
// ...
const openai = new OpenAI();
for (const newValsBatch of newValsBatches) {
  // ...
  const code = getValCode(val);
  const embedding = await openai.embeddings.create({
    model: "text-embedding-3-small",
    // ...
compareEmbeddings
@janpaul123
An interactive, runnable TypeScript val by janpaul123
Script
import _ from "npm:lodash";
import OpenAI from "npm:openai";

const comparisons = [
  ["chat server integration", "discord bot"],
  // ...
];
const openai = new OpenAI();
const cache = {};
async function getEmbedding(str) {
  cache[str] = cache[str] || (await openai.embeddings.create({
    model: "text-embedding-3-large",
    // ...
getVectorStoreBuilder
@webup
An interactive, runnable TypeScript val by webup
Script
type: "memory" | "baas";
provider?: "pinecone" | "milvus";
} = { type: "memory" }, embed: "openai" | "huggingface" = "openai") {
const { cond, matches } = await import("npm:lodash-es");
const builder = await getModelBuilder({
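The signature suggests a dispatcher over in-memory and hosted stores. A hedged sketch of just the "memory" branch, assuming LangChain's `MemoryVectorStore` and `OpenAIEmbeddings` rather than the val's actual lodash `cond`/`matches` dispatch:

```ts
import { MemoryVectorStore } from "npm:langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "npm:langchain/embeddings/openai";

// Build an in-memory vector store over some texts (the "memory" case).
async function buildMemoryStore(texts: string[]) {
  return await MemoryVectorStore.fromTexts(
    texts,
    texts.map((_, i) => ({ id: i })), // one metadata object per text
    new OpenAIEmbeddings(),           // reads OPENAI_API_KEY from the environment
  );
}

const store = await buildMemoryStore(["hello world", "vector stores"]);
console.log(await store.similaritySearch("greeting", 1));
```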
fineTuningJob1
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import process from "node:process";
import { openaiFineTuneData } from "https://esm.town/v/stevekrouse/openaiFineTuneData";
// Import path assumed from Val Town conventions; the preview elides it.
import { fineTuneExMispellings } from "https://esm.town/v/stevekrouse/fineTuneExMispellings";

export let fineTuningJob1 = openaiFineTuneData({
  key: process.env.openai,
  data: fineTuneExMispellings,
});
librarySecretEx
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
HTTP (deprecated)
// Import paths assumed; the preview elides them.
import { gpt3 } from "https://esm.town/v/patrickjm/gpt3";
import process from "node:process";

export let librarySecretEx = gpt3({
  prompt: "what is the meaning of life?",
  openAiKey: process.env.openai,
});
weatherGPT
@liaolile
Cron
If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "npm:openai";

let location = "shenzhen";
// ... (weather API call elided in the preview)
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
  messages: [{
  // ...
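A hedged end-to-end sketch: the original's weather endpoint and prompt are elided, so this version substitutes wttr.in's JSON API plus an assumed model and prompt.

```ts
import { OpenAI } from "npm:openai";

const location = "shenzhen";
// wttr.in stands in for the elided weather API.
const weather = await fetch(`https://wttr.in/${location}?format=j1`)
  .then(r => r.json());

const openai = new OpenAI();
const chatCompletion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo", // assumed model
  messages: [{
    role: "user",
    content: `Summarize today's weather in ${location}: ${JSON.stringify(weather.current_condition)}`,
  }],
});
console.log(chatCompletion.choices[0].message.content);
```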
langchainEx
@jacoblee93
// Forked from @stevekrouse.langchainEx
Script
import process from "node:process";

export const langchainEx = (async () => {
  const { ChatOpenAI } = await import(
    "https://esm.sh/langchain/chat_models/openai"
  );
  const { PromptTemplate } = await import("https://esm.sh/langchain/prompts");
  const { LLMChain } = await import("https://esm.sh/langchain/chains");
  const model = new ChatOpenAI({
    temperature: 0.9,
    openAIApiKey: process.env.OPENAI_API_KEY,
    verbose: true,
  });
  // ... (builds a PromptTemplate and runs an LLMChain; elided in the preview)
})();
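The elided tail presumably wires the model into a chain. A self-contained sketch using the classic LangChain JS `LLMChain` API, with an illustrative prompt that is not from the original val:

```ts
const { ChatOpenAI } = await import("https://esm.sh/langchain/chat_models/openai");
const { PromptTemplate } = await import("https://esm.sh/langchain/prompts");
const { LLMChain } = await import("https://esm.sh/langchain/chains");

const model = new ChatOpenAI({ temperature: 0.9 }); // reads OPENAI_API_KEY
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?", // illustrative
);
const chain = new LLMChain({ llm: model, prompt });
const res = await chain.call({ product: "colorful socks" });
console.log(res.text);
```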
conversationalRetrievalQAChainStreamingExample
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
const { ChatOpenAI } = await import(
  "https://esm.sh/langchain/chat_models/openai"
);
const { OpenAIEmbeddings } = await import(
  "https://esm.sh/langchain/embeddings/openai"
);
// ...
const streamingModel = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  // ...
});
const nonStreamingModel = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  // ...
});
// ...
new OpenAIEmbeddings({
  openAIApiKey: process.env.OPENAI_API_KEY,
  // ...
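The two-model split is the interesting part: the streaming model answers while a non-streaming model rewrites follow-up questions, so only the final answer is streamed. A hedged sketch with an in-memory store and an illustrative corpus:

```ts
const { ChatOpenAI } = await import("https://esm.sh/langchain/chat_models/openai");
const { OpenAIEmbeddings } = await import("https://esm.sh/langchain/embeddings/openai");
const { MemoryVectorStore } = await import("https://esm.sh/langchain/vectorstores/memory");
const { ConversationalRetrievalQAChain } = await import("https://esm.sh/langchain/chains");

const streamingModel = new ChatOpenAI({
  streaming: true,
  callbacks: [{ handleLLMNewToken: (token: string) => console.log(token) }],
});
const nonStreamingModel = new ChatOpenAI({});

const store = await MemoryVectorStore.fromTexts(
  ["mitochondria is the powerhouse of the cell"], // illustrative corpus
  [{ id: 1 }],
  new OpenAIEmbeddings(),
);
const chain = ConversationalRetrievalQAChain.fromLLM(
  streamingModel,
  store.asRetriever(),
  { questionGeneratorChainOptions: { llm: nonStreamingModel } },
);
await chain.call({ question: "What is the powerhouse of the cell?", chat_history: "" });
```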
TokenizerDemo
@zzz
Script
// Demo of tokenizer to mimic behavior of https://platform.openai.com/tokenizer
// Tokenizer uses the "gpt-3.5-turbo" model by default, but this demo uses davinci to match the playground.
import { Tokenizer } from "https://esm.town/v/zzz/Tokenizer";

export const TokenizerDemo = (async () => {
  // ...
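The Tokenizer val's API is elided here, so this sketch swaps in `js-tiktoken`, which implements the same BPE tokenization the playground uses:

```ts
import { encodingForModel } from "npm:js-tiktoken";

// davinci-era encoding to match the playground, per the comment above.
const enc = encodingForModel("text-davinci-003");
const tokens = enc.encode("hello world");
console.log(tokens.length, tokens); // token count and token ids
```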
googlesearch
@lolocoo
An interactive, runnable TypeScript val by lolocoo
Express
dateRestrict: "m[1]",
// ...
const getSearch = async (data) => {
  const response = await fetch("https://api.openai.com/v1/completions", {
    // POST, not GET: a GET request cannot carry a JSON body.
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(data),
    // ...
relationshipgenerator
@ejfox
HTTP
// This val receives text input, sends it to OpenAI to generate relationships,
// and returns a newline-delimited list of relationships.
// It uses the OpenAI API to generate the relationships.
// Tradeoff: This approach relies on an external API, which may have rate limits or costs.
// curl -X POST -H "Content-Type: text/plain" -d "Your text here" https://your-val-url.web.val.run
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

export default async function main(req: Request): Promise<Response> {
  const text = await req.text();
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    // The exact prompt is elided in the preview; this one paraphrases the val's description.
    messages: [{ role: "user", content: `Extract relationships from this text, one per line:\n${text}` }],
  });
  return new Response(completion.choices[0].message.content ?? "");
}
main
@henrique
An interactive, runnable TypeScript val by henrique
HTTP (deprecated)
import process from "node:process";

export async function main(prompt: string) {
  const { useCursive } = await import("npm:cursive");
  const cursive = useCursive({ openAI: { apiKey: process.env.OPENAI_API_KEY } });
  const res = await cursive.ask({
    prompt,
  });
  return res.answer; // assumed: cursive's ask() result exposes the completion as `answer`
}
openAiFreeQuotaExceeded
@patrickjm
An interactive, runnable TypeScript val by patrickjm
HTTP (deprecated)
import { openAiFreeUsage } from "https://esm.town/v/patrickjm/openAiFreeUsage";

export let openAiFreeQuotaExceeded = () =>
  openAiFreeUsage.exceeded;
modelSampleChatGenerate
@webup
An interactive, runnable TypeScript val by webup
Script
// getModelBuilder is webup's helper val; import path assumed from Val Town conventions.
import { getModelBuilder } from "https://esm.town/v/webup/getModelBuilder";

const builder = await getModelBuilder({
  type: "chat",
  provider: "openai",
});
const model = await builder();
const { SystemMessage, HumanMessage } = await import("npm:langchain/schema");
// ...
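A hedged sketch of the generate step, substituting a plain `ChatOpenAI` for webup's `getModelBuilder` helper and using the classic LangChain JS message classes:

```ts
const { ChatOpenAI } = await import("npm:langchain/chat_models/openai");
const { SystemMessage, HumanMessage } = await import("npm:langchain/schema");

const model = new ChatOpenAI({}); // reads OPENAI_API_KEY from the environment
const res = await model.call([
  new SystemMessage("You are a helpful assistant."), // illustrative messages
  new HumanMessage("Say hello in French."),
]);
console.log(res.content);
```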