Search

Results include substring matches and semantically similar vals.
weatherBot
@jdan
An interactive, runnable TypeScript val by jdan
Script
import { weatherOfLatLon } from "https://esm.town/v/jdan/weatherOfLatLon";
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
const openai = new OpenAI();
const toolbox = {
  latLngOfCity: {
    openAiTool: {
      type: "function",
      // … (function schema elided in this excerpt)
    },
    call: latLngOfCity,
  },
  weatherOfLatLon: {
    openAiTool: {
      type: "function",
      // …
    },
    call: weatherOfLatLon,
  },
  fetchWebpage: {
    openAiTool: {
      type: "function",
      // …
    },
    call: fetchWebpage,
  },
};
const tools = Object.values(toolbox).map(({ openAiTool }) => openAiTool);
const transcript = [
async function runConversation() {
const response = await openai.chat.completions.create({
messages: transcript,
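The toolbox pattern in this snippet pairs each OpenAI tool schema with a local `call` implementation, so a tool call the model returns can be dispatched by name. A self-contained sketch of that dispatch (the stub implementations and schemas below are illustrative assumptions, not the val's actual code):

```typescript
// A toolbox maps tool names to an OpenAI tool schema plus a local implementation.
type Tool = {
  openAiTool: { type: "function"; function: { name: string; description: string } };
  call: (args: Record<string, unknown>) => Promise<unknown>;
};

const toolbox: Record<string, Tool> = {
  latLngOfCity: {
    openAiTool: {
      type: "function",
      function: { name: "latLngOfCity", description: "Get the lat/lng of a city" },
    },
    // Stub implementation for illustration; the real val calls a geocoder.
    call: async ({ city }) => ({ lat: 40.7, lng: -74.0, city }),
  },
  weatherOfLatLon: {
    openAiTool: {
      type: "function",
      function: { name: "weatherOfLatLon", description: "Get weather at a lat/lng" },
    },
    call: async () => ({ tempC: 21 }),
  },
};

// Only the schemas are sent to the API...
const tools = Object.values(toolbox).map(({ openAiTool }) => openAiTool);

// ...and a tool call coming back from the model is dispatched by name.
async function dispatch(name: string, args: Record<string, unknown>) {
  const tool = toolbox[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.call(args);
}
```

The appeal of the pattern is that adding a capability means adding one toolbox entry; the `tools` array and the dispatch loop never change.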
chatAgentWithCustomPrompt
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
export const chatAgentWithCustomPrompt = (async () => {
  const { ChatOpenAI } = await import(
    "https://esm.sh/langchain/chat_models/openai"
  );
  const { initializeAgentExecutorWithOptions } = await import(
    "https://esm.sh/langchain/agents"
  );
  const { Calculator } = await import(
    "https://esm.sh/langchain/tools/calculator"
  );
  const model = new ChatOpenAI({
    temperature: 0,
    openAIApiKey: process.env.OPENAI_API_KEY,
  });
  const tools = [
  // …
smallweb_openapi_guide
@all
An interactive, runnable TypeScript val by all
HTTP
<p>This schema defines several components that can be used to integrate OpenAI's services into a logging and configuration system.</p>
<li><strong>App</strong>: Represents an OpenAI-powered application with a name and URL.</li>
<li><strong>Config</strong>: Defines configuration options for OpenAI API integration and application settings.</li>
<li><strong>ConsoleLog</strong>: Captures console output from OpenAI model interactions and application processes.</li>
<li><strong>CronLog</strong>: Logs scheduled tasks related to OpenAI operations, such as model fine-tuning or dataset jobs.</li>
<li><strong>HttpLog</strong>: Records HTTP requests made to and from the OpenAI API.</li>
<button class="collapsible-button">Key Components and OpenAI Use Cases</button>
Use Case: Store OpenAI API keys, model preferences, and application settings.
Example: Track rate limits, response times, and payload sizes for OpenAI API calls.
Developers can create robust, scalable applications that effectively integrate and manage OpenAI's powerful AI capabilities while maintaining observability.
easyAQI
@rishabhparikh
easyAQI: Get the Air Quality Index (AQI) for a location via open data sources. It's "easy" because it strings together multiple lower-level APIs to give you a simple interface for AQI: accepts a location in basically any string format (e.g. "downtown manhattan"), uses Nominatim to turn that into longitude and latitude, finds the closest sensor to you on OpenAQ, pulls the readings from OpenAQ, calculates the AQI via EPA's NowCast algorithm, and uses EPA's ranking to classify the severity of the score (e.g. "Unhealthy for Sensitive Groups"). It uses blob storage to cache the OpenAQ location ID for your location string, skipping a couple of steps the next time. Example usage: @stevekrouse.easyAQI({ location: "brooklyn navy yard" }) // Returns { "aqi": 23.6, "severity": "Good" }. Forkable example: val.town/v/stevekrouse.easyAQIExample. Also useful for getting alerts when the AQI is unhealthy near you: https://www.val.town/v/stevekrouse.aqi
Script
6. Uses EPA's ranking to classify the severity of the score (e.g. "Unhealthy for Sensitive Groups")
It uses blob storage to cache the OpenAQ location ID for your location string, skipping a couple of steps the next time.
## Example usage
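The NowCast step is the one non-obvious computation in the chain above: it is a weighted average of recent hourly readings in which volatile data earns a smaller weight factor. A sketch of the core formula (a simplified reading of EPA's NowCast for PM2.5, not the val's actual code):

```typescript
// EPA NowCast (simplified): the weight factor w is the min/max ratio of the
// last 12 hourly concentrations, clamped to at least 0.5; each hour i back
// in time is weighted by w^i. hourly[0] is the most recent reading.
function nowcast(hourly: number[]): number {
  const c = hourly.slice(0, 12);
  const w = Math.max(0.5, Math.min(...c) / Math.max(...c));
  let num = 0;
  let den = 0;
  c.forEach((conc, i) => {
    num += conc * w ** i;
    den += w ** i;
  });
  return num / den;
}
```

With steady readings the weights cancel out (`nowcast([10, 10, 10])` is just 10); with volatile readings the clamp at 0.5 makes the most recent hours dominate.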
chatGPT
@stevekrouse
ChatGPT Implemented in Val Town. Demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events. ⚠️ Note: Requires your own OpenAI API key to run this in a fork
HTTP
# ChatGPT Implemented in Val Town
Demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
**⚠️ Note: Requires your own OpenAI API key to get this to run in a fork**
import { Hono } from "npm:hono@3";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
eventSource.close();
const openai = new OpenAI();
const app = new Hono();
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
const message = c.req.query("message");
await openai.beta.threads.messages.create(
threadId,
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,
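The `"data: " + JSON.stringify(str) + "\n\n"` line in this snippet is doing the Server-Sent Events framing by hand: each event is a `data:` line terminated by a blank line, and JSON-encoding the payload keeps embedded newlines from being misread as frame boundaries. A minimal sketch of that framing:

```typescript
// Frame one SSE event. The blank line (the second "\n") terminates the event;
// JSON.stringify escapes any newlines inside the payload itself.
function sseEvent(payload: unknown): string {
  return "data: " + JSON.stringify(payload) + "\n\n";
}
```

For example, `sseEvent("hi")` produces the line `data: "hi"` followed by a blank line, which a browser `EventSource` delivers as one `message` event whose data the client can `JSON.parse`.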
conversationalRetrievalQAChainSummaryMemory
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
export const conversationalRetrievalQAChainSummaryMemory = (async () => {
const { ChatOpenAI } = await import(
"https://esm.sh/langchain/chat_models/openai"
const { OpenAIEmbeddings } = await import(
"https://esm.sh/langchain/embeddings/openai"
const { ConversationSummaryMemory } = await import(
"https://esm.sh/langchain/chains"
const chatModel = new ChatOpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
/* Create the vectorstore */
[{ id: 2 }, { id: 1 }, { id: 3 }],
new OpenAIEmbeddings({
openAIApiKey: process.env.OPENAI_API_KEY,
/* Create the chain */
createRelevantComment
@thomasatflexos
An interactive, runnable TypeScript val by thomasatflexos
Script
import { getOpenAiResponse } from "https://esm.town/v/thomasatflexos/getOpenAiResponse";
import { getRelevantContent } from "https://esm.town/v/thomasatflexos/getRelevantContent";
import process from "node:process";
\n IF you think that the LinkedIn post is about new job opportunities, just respond with the text "N/A" and stop immediately.
\n ELSE IF you think the LinkedIn post is not about new job opportunities, please proceed to a meaningful comment to the post.
let finalResponse = await getOpenAiResponse(PROMPT);
const { data1, error1 } = await supabase
.from("linkedin_seedings")
gpt4o_emoji
@jdan
// await getGPT4oEmoji(
Script
import { chat } from "https://esm.town/v/stevekrouse/openai?v=19";
export async function getGPT4oEmoji(url) {
const response = await chat([
annoy
@ajax
An interactive, runnable TypeScript val by ajax
Script
Copying the example above, find a new word and do as above.
console.log({ prompt });
const response = await fetch("https://api.openai.com/v1/completions", {
method: "POST",
headers: {
"Content-Type": "application/json",
"Authorization": "Bearer " + process.env.OPENAI_API_KEY, // Replace with your OpenAI API Key
body: JSON.stringify({
"prompt": prompt,
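This val calls the completions endpoint with raw `fetch` rather than the SDK. A self-contained sketch of assembling such a request (the model name and `max_tokens` value here are illustrative assumptions, not taken from the val):

```typescript
// Build the fetch options for a raw call to OpenAI's legacy completions API.
function buildCompletionRequest(prompt: string, apiKey: string) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer " + apiKey,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo-instruct", // assumption; the val's model isn't shown
      prompt,
      max_tokens: 64,
    }),
  };
}
```

Usage would be `fetch("https://api.openai.com/v1/completions", buildCompletionRequest(prompt, key))`; keeping the request-building pure makes it easy to inspect or test without hitting the network.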
gpt3Unsafe
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
export const gpt3Unsafe = runVal("patrickjm.gpt3", {
prompt: "Write a haiku about being cool:",
openAiKey: process.env.openai,
jsonToDalleForm
@weaverwhale
@jsxImportSource https://esm.sh/react
HTTP
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "https://esm.sh/openai";
import React, { useState } from "https://esm.sh/react";
import { systemPrompt } from "https://esm.town/v/weaverwhale/jtdPrompt";
// Move OpenAI initialization to server function
let openai;
function App() {
const url = new URL(request.url);
// Initialize OpenAI here to avoid issues with Deno.env in browser context
if (!openai) {
const apiKey = Deno.env.get("OPENAI_API_KEY");
openai = new OpenAI({ apiKey });
if (request.method === "POST" && url.pathname === "/generate") {
// Generate DALL-E prompt using GPT
const completion = await openai.chat.completions.create({
messages: [
// Generate DALL-E image
const response = await openai.images.generate({
model: "dall-e-3",
browserbase_google_concerts
@stevekrouse
// Navigate to Google
Script
import puppeteer from "https://deno.land/x/puppeteer@16.2.0/mod.ts";
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
import { Browserbase } from "npm:@browserbasehq/sdk";
// ask chat gpt for list of concert dates
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
valTownChatGPT
@maxm
ChatGPT Implemented in Val Town. Demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
HTTP
# ChatGPT Implemented in Val Town
Demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
<p align=center>
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
// This uses my personal API key; you'll need to provide your own if
// you fork this. We'll be adding support to the std/openai lib soon!
const openai = new OpenAI();
import { Hono } from "npm:hono@3";
body: msgDiv.textContent,
// Setup the SSE connection and stream back the response. OpenAI handles determining
// which message is the correct response based on what was last read from the
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
let message = await c.req.text();
await openai.beta.threads.messages.create(
c.req.query("threadId"),
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,
browserlessPuppeteerExample
@vtdocs
An interactive, runnable TypeScript val by vtdocs
Script
const browser = await puppeteer.connect({
  browserWSEndpoint: `wss://chrome.browserless.io?token=${process.env.browserlessKey}`,
});
const page = await browser.newPage();
await page.goto("https://en.wikipedia.org/wiki/OpenAI");
const intro = await page.evaluate(
`document.querySelector('p:nth-of-type(2)').innerText`,
reflective_qa
@stevekrouse
Reflective AI. Ask me about the r's in strawberry.
HTTP
const { question } = await request.json();
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [