Search

Results include substring matches and semantically similar vals.
valTownChatGPT
@simonw
ChatGPT Implemented in Val Town. Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events.
HTTP (deprecated)
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
// This uses my personal API key; you'll need to provide your own if
// you fork this. We'll be adding support to the std/openai lib soon!
const openai = new OpenAI();
import { Hono } from "npm:hono@3";
body: msgDiv.textContent,
// Set up the SSE connection and stream back the response. OpenAI handles determining
// which message is the correct response based on what was last read from the
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
let message = await c.req.text();
await openai.beta.threads.messages.create(
c.req.query("threadId"),
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,
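Read together, the fragments above amount to: create an assistant and a thread, post the user's message to the thread, then stream the run back over SSE. A minimal sketch of that pattern, assuming `OPENAI_API_KEY` is set and that IDs arrive as query parameters; the route shape is illustrative, not the val's exact code:

```ts
// Sketch: stream an assistant run back to the browser over SSE.
import OpenAI from "npm:openai";
import { Hono } from "npm:hono@3";

const openai = new OpenAI();
const app = new Hono();

app.post("/message", async (c) => {
  const threadId = c.req.query("threadId")!; // illustrative: assume the client sends these
  const assistantId = c.req.query("assistantId")!;
  const message = await c.req.text();

  // Add the user's message to the thread before starting the run.
  await openai.beta.threads.messages.create(threadId, {
    role: "user",
    content: message,
  });

  const body = new ReadableStream({
    start(controller) {
      const encoder = new TextEncoder();
      const run = openai.beta.threads.runs.stream(threadId, {
        assistant_id: assistantId,
      });
      // Forward each text delta as an SSE "data:" event.
      run.on("textDelta", (delta) => {
        controller.enqueue(encoder.encode("data: " + JSON.stringify(delta.value) + "\n\n"));
      });
      run.on("end", () => controller.close());
      run.on("error", (err) => controller.error(err));
    },
  });

  return c.body(body, 200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
  });
});

export default app.fetch;
```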
weatherGPT
@mebmeb
Cron
import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";
let location = "brooklyn ny";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
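The matched lines sketch a cron val that fetches a forecast, has a chat completion summarize it, and emails the result. A minimal sketch under those assumptions; the wttr.in endpoint and the prompt are stand-ins for whatever the original val actually calls:

```ts
// Sketch of the weatherGPT cron pattern: fetch weather, summarize, email.
import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";

export default async function cron() {
  const location = "brooklyn ny";
  // wttr.in is a stand-in weather API; the original may use another source.
  const weather = await fetch(
    `https://wttr.in/${encodeURIComponent(location)}?format=j1`,
  ).then(r => r.json());

  const openai = new OpenAI();
  const chatCompletion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{
      role: "user",
      content: `In one friendly sentence, summarize today's weather: ${JSON.stringify(weather.weather?.[0])}`,
    }],
  });

  // std/email sends the summary to the val's owner.
  await email({
    subject: `Weather for ${location}`,
    text: chatCompletion.choices[0].message.content ?? "",
  });
}
```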
aleister_chatley
@scio
An interactive, runnable TypeScript val by scio
Script
aleister_chatley_countdown = 8 + Math.floor(Math.random() * 6);
const { OpenAI } = await import("https://deno.land/x/openai/mod.ts");
const openAI = new OpenAI(process.env.OPENAI_KEY);
const chatCompletion = await openAI.createChatCompletion(
prompts.philip_k_dick,
weatherGPT
@treb0r
If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
Cron
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "npm:openai";
let location = "Halifax UK";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
draftReadme
@nbbaier
Code Documentation Assistant
The Code Documentation Assistant is an AI-powered tool that helps generate documentation for code. It uses the OpenAI GPT-3.5 Turbo model to generate readme files in GitHub-flavored markdown based on the provided code.
Usage: import { draftReadme, writeReadme } from "code-doc-assistant";
Function: async function draftReadme(options: WriterOptions): Promise<string>
Generates a readme file based on the provided options.
- options (required): an object with the following properties:
  - username (string): the username of the code owner.
  - valName (string): the name of the Val containing the code.
  - model (optional, default "gpt-3.5-turbo"): the OpenAI model to use for generating the readme.
  - userPrompt (optional): an additional prompt to include in the documentation.
Returns a promise that resolves to the generated readme as a string.
Function: async function writeReadme(options: WriterOptions): Promise<string>
Generates a readme file and updates the readme of the corresponding Val with the generated content. Takes the same options as draftReadme; returns a promise that resolves to a string indicating the success of the readme update.
Example:
import { draftReadme, writeReadme } from "code-doc-assistant";
const options = { username: "your-username", valName: "your-val-name" };
const generatedReadme = await draftReadme(options);
console.log(generatedReadme);
const successMessage = await writeReadme(options);
console.log(successMessage);
License: this project is licensed under the MIT License.
Script
import OpenAI, { type ClientOptions } from "npm:openai";
async function performOpenAICall(prompt: string, model: string, openaiOptions: ClientOptions) {
const openai = new OpenAI(openaiOptions);
const response = await openai.chat.completions.create({
throw new Error("No response from OpenAI");
throw new Error("No readme returned by OpenAI. Try again.");
const { username, valName, model = "gpt-3.5-turbo", userPrompt, ...openaiOptions } = options;
const readme = await performOpenAICall(prompt, model, openaiOptions);
const { username, valName, model = "gpt-3.5-turbo", userPrompt, ...openaiOptions } = options;
const readme = await performOpenAICall(prompt, model, openaiOptions);
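From the matched lines, `performOpenAICall` plausibly looks like the sketch below; the system prompt is an assumption, while the error messages mirror the lines above:

```ts
// Sketch of performOpenAICall: one chat completion whose reply is the readme.
import OpenAI, { type ClientOptions } from "npm:openai";

async function performOpenAICall(
  prompt: string,
  model: string,
  openaiOptions: ClientOptions,
): Promise<string> {
  const openai = new OpenAI(openaiOptions);
  const response = await openai.chat.completions.create({
    model,
    messages: [
      // Assumed instruction; the val's real prompt is not shown in the results.
      { role: "system", content: "Write a README in GitHub-flavored markdown for the provided code." },
      { role: "user", content: prompt },
    ],
  });
  if (!response.choices?.length) throw new Error("No response from OpenAI");
  const readme = response.choices[0].message.content;
  if (!readme) throw new Error("No readme returned by OpenAI. Try again.");
  return readme;
}
```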
multipleKeysAndMemoryConversationChainExample
@stevekrouse
// Forked from @jacoblee93.multipleKeysAndMemoryConversationChainExample
Script
export const multipleKeysAndMemoryConversationChainExample = (async () => {
const { ChatOpenAI } = await import(
"https://esm.sh/langchain/chat_models/openai"
const { BufferMemory } = await import("https://esm.sh/langchain/memory");
const { ConversationChain } = await import("https://esm.sh/langchain/chains");
const llm = new ChatOpenAI({
modelName: "gpt-3.5-turbo",
openAIApiKey: process.env.openai,
temperature: 0,
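For context, here is how those pieces typically snap together in the pre-0.1 LangChain.js API the val imports; the two-turn conversation is illustrative:

```ts
// Sketch: wire the chat model and memory into a ConversationChain, then
// call it twice so BufferMemory carries context across turns.
const { ChatOpenAI } = await import("https://esm.sh/langchain/chat_models/openai");
const { BufferMemory } = await import("https://esm.sh/langchain/memory");
const { ConversationChain } = await import("https://esm.sh/langchain/chains");

const llm = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  openAIApiKey: process.env.openai,
  temperature: 0,
});
const chain = new ConversationChain({ llm, memory: new BufferMemory() });

const first = await chain.call({ input: "My name is Ada." });
const second = await chain.call({ input: "What is my name?" }); // memory supplies the answer
console.log(second.response);
```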
getContentFromUrl
@yawnxyz
Use this for summarizers. Combines https://r.jina.ai/URL and markdown.download's YouTube transcription getter to do its best to retrieve content from URLs.
Example sources:
- https://arstechnica.com/space/2024/06/nasa-indefinitely-delays-return-of-starliner-to-review-propulsion-data
- https://journals.asm.org/doi/10.1128/iai.00065-23
Usage:
- https://yawnxyz-getcontentfromurl.web.val.run/https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10187409/
- https://yawnxyz-getcontentfromurl.web.val.run/https://www.youtube.com/watch?v=gzsczZnS84Y&ab_channel=PhageDirectory
HTTP
let result = await ai({
provider: provider || "openai",
model: model || "gpt-3.5-turbo",
let result = await ai({
provider: provider || "openai",
model: model || "gpt-3.5-turbo",
let result = await ai({
provider: "openai",
embed: true,
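The `ai(...)` helper above is the val author's own, but the retrieval half is easy to sketch: r.jina.ai returns a markdown rendering of whatever URL you append to it. The function name and error handling here are illustrative, and the YouTube branch is omitted:

```ts
// Sketch of the content-retrieval step via the r.jina.ai reader proxy.
async function getContentFromUrl(url: string): Promise<string> {
  const res = await fetch("https://r.jina.ai/" + url);
  if (!res.ok) throw new Error(`jina reader failed: ${res.status}`);
  return await res.text();
}

const text = await getContentFromUrl(
  "https://arstechnica.com/space/2024/06/nasa-indefinitely-delays-return-of-starliner-to-review-propulsion-data",
);
console.log(text.slice(0, 500));
```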
indexValsBlobs
@janpaul123
Part of Val Town Semantic Search (https://www.val.town/v/janpaul123/valtownsemanticsearch). Generates OpenAI embeddings for all public vals and stores them in Val Town's blob storage.
- Create a new metadata object. There is also support for reusing the previous metadata and only adding new vals, but that's currently disabled.
- Get all val names from the database of public vals, made by Achille Lacoin.
- Put val names in batches. Vals in the same batch have their embeddings stored in the same blob, at different offsets.
- Iterate through each batch, get the code for all its vals, get embeddings from OpenAI, and store the result in a blob (a sketch of this step follows the snippet below).
- When finished, save the metadata JSON to its own blob.
The result can be searched using janpaul123/semanticSearchBlobs.
Script
import { blob } from "https://esm.town/v/std/blob";
import OpenAI from "npm:openai";
import { truncateMessage } from "npm:openai-tokens";
const dimensions = 1536;
...Object.values(allValsBlobEmbeddingsMeta).map((item: any) => item.batchDataIndex + 1),
const openai = new OpenAI();
for (const newValsBatch of newValsBatches) {
const code = getValCode(val);
const embedding = await openai.embeddings.create({
model: "text-embedding-3-small",
valTownChatGPT
@mttlws
ChatGPT Implemented in Val Town. Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events.
HTTP (deprecated)
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
// This uses my personal API key; you'll need to provide your own if
// you fork this. We'll be adding support to the std/openai lib soon!
const openai = new OpenAI();
import { Hono } from "npm:hono@3";
body: msgDiv.textContent,
// Set up the SSE connection and stream back the response. OpenAI handles determining
// which message is the correct response based on what was last read from the
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
let message = await c.req.text();
await openai.beta.threads.messages.create(
c.req.query("threadId"),
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,
weatherGPT
@jcoleman
Cron
import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";
import { createHeaders, PORTKEY_GATEWAY_URL } from "npm:portkey-ai";
).then(r => r.json());
const openai = new OpenAI({
baseURL: PORTKEY_GATEWAY_URL,
apiKey: Deno.env.get("PORTKEY_API_KEY"),
virtualKey: Deno.env.get("PORTKEY_OPENAI_VIRTUAL_KEY"),
let chatCompletion = await openai.chat.completions.create({
messages: [{
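The constructor options above look truncated; portkey-ai's documented pattern supplies the gateway credentials as default headers via `createHeaders`. A sketch under that assumption:

```ts
// Sketch: point the OpenAI SDK at Portkey's gateway, authenticating via headers.
import { OpenAI } from "npm:openai";
import { createHeaders, PORTKEY_GATEWAY_URL } from "npm:portkey-ai";

const openai = new OpenAI({
  apiKey: "placeholder", // real auth happens in the Portkey headers below
  baseURL: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    apiKey: Deno.env.get("PORTKEY_API_KEY"),
    virtualKey: Deno.env.get("PORTKEY_OPENAI_VIRTUAL_KEY"),
  }),
});

const chatCompletion = await openai.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Say hello through the gateway." }],
});
console.log(chatCompletion.choices[0].message.content);
```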
PriestGPT
@mjweaver01
You are a helpful assistant for a reverent priest. Ask him about books or verses in the Bible, and he will be sure to give you a short sermon about it! If you want to know more about him, he'll be happy to tell you.
DEMO: https://mjweaver01-priestGPT.web.val.run/ask/proverbs
If you fork this, you'll need to set OPENAI_API_KEY in your Val Town Secrets.
HTTP (deprecated)
import { Hono } from "npm:hono";
import { OpenAI } from "npm:openai";
const priestGPT = async (verse: string, about?: boolean) => {
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [
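Putting the matched lines together, the route wiring plausibly looks like this; the system prompt is paraphrased from the val's description rather than copied from its source:

```ts
// Sketch: a Hono route that asks the priest persona for a short sermon.
import { Hono } from "npm:hono";
import { OpenAI } from "npm:openai";

const app = new Hono();
const openai = new OpenAI();

const priestGPT = async (verse: string) => {
  const chatCompletion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        // Paraphrased persona; the val's exact prompt is not shown here.
        content: "You are a helpful assistant for a reverent priest. Give a short sermon about the book or verse you are asked about.",
      },
      { role: "user", content: verse },
    ],
  });
  return chatCompletion.choices[0].message.content ?? "";
};

app.get("/ask/:verse", async (c) => c.text(await priestGPT(c.req.param("verse"))));

export default app.fetch;
```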
modelSampleChatCall
@webup
An interactive, runnable TypeScript val by webup
Script
const builder = await getModelBuilder({
type: "chat",
provider: "openai",
const model = await builder();
const { SystemMessage, HumanMessage } = await import("npm:langchain/schema");
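`getModelBuilder` is webup's own helper, so this is a guess at the calling pattern: for `{ type: "chat", provider: "openai" }` it plausibly yields a LangChain `ChatOpenAI` instance, which the sketch constructs directly:

```ts
// Sketch: what the built chat model's call likely looks like. Constructing
// ChatOpenAI directly stands in for the result of getModelBuilder.
const { ChatOpenAI } = await import("npm:langchain/chat_models/openai");
const { SystemMessage, HumanMessage } = await import("npm:langchain/schema");

const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo" });
const res = await model.call([
  new SystemMessage("You are a terse assistant."),
  new HumanMessage("Say hello in five words."),
]);
console.log(res.content);
```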
VALLE
@mrblanchard
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it.
- Fork this val to your own profile.
- Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId.
- If you want to use OpenAI models, you need to set the OPENAI_API_KEY env var.
- If you want to use Anthropic models, you need to set the ANTHROPIC_API_KEY env var.
- Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
askLexi
@thomasatflexos
An interactive, runnable TypeScript val by thomasatflexos
Script
const { SupabaseVectorStore } = await import("npm:langchain/vectorstores");
const { ChatOpenAI } = await import("npm:langchain/chat_models");
const { OpenAIEmbeddings } = await import("npm:langchain/embeddings");
const { createClient } = await import(
let streamedResponse = "";
const chat = new ChatOpenAI({
modelName: "gpt-3.5-turbo",
openAIApiKey: process.env.OPEN_API_KEY,
streaming: true,
const vectorStore = await SupabaseVectorStore.fromExistingIndex(
new OpenAIEmbeddings({
openAIApiKey: process.env.OPEN_API_KEY,
client,
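The retrieval side follows LangChain's Supabase integration; a sketch assuming the library's default table and query names (the val's actual names may differ):

```ts
// Sketch: open an existing Supabase vector index and fetch nearby chunks.
const { SupabaseVectorStore } = await import("npm:langchain/vectorstores");
const { OpenAIEmbeddings } = await import("npm:langchain/embeddings");
const { createClient } = await import("npm:@supabase/supabase-js");

const client = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_PRIVATE_KEY,
);

const vectorStore = await SupabaseVectorStore.fromExistingIndex(
  new OpenAIEmbeddings({ openAIApiKey: process.env.OPEN_API_KEY }),
  { client, tableName: "documents", queryName: "match_documents" },
);

// Top 4 chunks most similar to the question, ready to stuff into a prompt.
const docs = await vectorStore.similaritySearch("What does Lexi know?", 4);
console.log(docs.map(d => d.pageContent));
```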
valTownChatGPT
@willthereader
ChatGPT Implemented in Val Town. Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events.
HTTP (deprecated)
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const openai = new OpenAI();
import { Hono } from "npm:hono@3";
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
const message = c.req.query("message");
await openai.beta.threads.messages.create(
threadId,
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,