Results include substring matches and semantically similar vals.
valleBlogV0
@janpaul123
Fork this val to your own profile. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP
import { verifyToken } from "https://esm.town/v/pomdtr/verifyToken?v=1";
import { openai } from "npm:@ai-sdk/openai";
import ValTown from "npm:@valtown/sdk";
import { streamText } from "npm:ai";
const stream = await streamText({
model: openai("gpt-4o", {
baseURL: "https://std-openaiproxy.web.val.run/v1",
apiKey: Deno.env.get("valtown"),
reflective_qa
@spinningideas
Reflective AI. Ask me about the r's in strawberry.
HTTP
const { question } = await request.json();
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
getModelBuilder
@bluemsn
An interactive, runnable TypeScript val by bluemsn
Script
provider?: "openai" | "huggingface";
} = { type: "llm", provider: "openai" }, options?: any) {
if (spec?.provider === "openai")
args.openAIApiKey = process.env.OPENAI_API_KEY;
matches({ type: "llm", provider: "openai" }),
const { OpenAI } = await import("npm:langchain/llms/openai");
return new OpenAI(args);
matches({ type: "chat", provider: "openai" }),
const { ChatOpenAI } = await import("npm:langchain/chat_models/openai");
return new ChatOpenAI(args);
gpt4
@rlimit
OpenAI text completion. https://platform.openai.com/docs/api-reference/completions val.town and rlimit.com have generously provided a free daily quota. Until the quota is met, there is no need to provide an API key.
Script
import { parentReference } from "https://esm.town/v/stevekrouse/parentReference?v=3";
import { runVal } from "https://esm.town/v/std/runVal";
* OpenAI text completion. https://platform.openai.com/docs/api-reference/completions
* val.town and rlimit.com have generously provided a free daily quota. Until the quota is met, there is no need to provide an API key.
export const gpt4 = async (prompt: string, maxTokens: number = 1000) => {
githubactivitysummarizer
@ejfox
GitHub Activity Summarizer This val.town script fetches a user's recent GitHub activity and generates a summarized narrative overview using OpenAI's GPT model. Features Retrieves GitHub activity for a specified user from the past week Summarizes activity using OpenAI's GPT-3.5-turbo model Returns a concise, narrative summary of the user's GitHub contributions Usage Access the script via HTTP GET request: https://ejfox-githubactivitysummarizer.web.val.run/?username=<github_username> Replace <github_username> with the desired GitHub username, e.g. https://ejfox-githubactivitysummarizer.web.val.run/?username=ejfox Note Ensure you have the necessary permissions and comply with GitHub's and OpenAI's terms of service when using this script.
HTTP
# GitHub Activity Summarizer
This val.town script fetches a user's recent GitHub activity and generates a summarized narrative overview using OpenAI's GPT model.
## Features
- Retrieves GitHub activity for a specified user from the past week
- Summarizes activity using OpenAI's GPT-3.5-turbo model
- Returns a concise, narrative summary of the user's GitHub contributions
## Note
Ensure you have the necessary permissions and comply with GitHub's and OpenAI's terms of service when using this script.
// This approach fetches GitHub activity for two users specified in the URL,
// sends it to OpenAI for analysis, and returns collaboration suggestions.
// It uses the GitHub API (which doesn't require authentication for public data)
// and the OpenAI API (which does require an API key).
// Tradeoff: We're using an inline API key for simplicity, which isn't ideal for security.
// Note: This might hit rate limits for the GitHub API due to fetching a year of data.
import { OpenAI } from "https://esm.town/v/std/openai";
const OPENAI_API_KEY = "your_openai_api_key"; // Replace with your actual OpenAI API key
export default async function main(req: Request): Promise<Response> {
const user2Summary = summarizeActivity(user2Data);
const openai = new OpenAI(OPENAI_API_KEY);
const completion = await openai.chat.completions.create({
model: "gpt-3.5-turbo",
generateValCodeAPI
@yawnxyz
An interactive, runnable TypeScript val by yawnxyz
Script
export let generateValCodeAPI = (description: string) =>
generateValCode(
process.env.OPENAI_API_KEY,
description,
braveAgent
@jacoblee93
// Shows how to use the Brave Search tool in a LangChain agent
Script
export const braveAgent = (async () => {
const { ChatOpenAI } = await import(
"https://esm.sh/langchain/chat_models/openai"
const { BraveSearch } = await import("https://esm.sh/langchain/tools");
"https://esm.sh/langchain/agents"
const model = new ChatOpenAI({
temperature: 0,
openAIApiKey: process.env.OPENAI_API_KEY,
const tools = [
openaiDefiner
@willthereader
An interactive, runnable TypeScript val by willthereader
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
export default async function(req: Request): Promise<Response> {
const messages = prepareMessages(selection, followUp, context);
log.info("Prepared messages for OpenAI:", JSON.stringify(messages));
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
messages,
stream: true,
log.info("OpenAI stream created successfully");
return streamResponse(stream);
} catch (error) {
log.error("Error in OpenAI request:", error);
return handleError(error);
if (content) {
// log.debug("Received chunk from OpenAI:", content);
const encodedChunk = encoder.encode(JSON.stringify({ chunk: content }) + "\n");
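The excerpt shows openaiDefiner re-encoding each streamed chunk as one NDJSON line before handing it to `streamResponse`. A self-contained sketch of that streaming pattern, with the stream shape assumed to match OpenAI chat-completion deltas, could be:

```typescript
// Mirror of the `JSON.stringify({ chunk }) + "\n"` line in the excerpt.
export function encodeChunk(content: string): Uint8Array {
  return new TextEncoder().encode(JSON.stringify({ chunk: content }) + "\n");
}

// Pipe an async iterable of completion chunks to the client as NDJSON.
export function streamResponse(
  stream: AsyncIterable<{ choices: { delta: { content?: string } }[] }>,
): Response {
  const body = new ReadableStream({
    async start(controller) {
      for await (const part of stream) {
        const content = part.choices[0]?.delta?.content;
        if (content) controller.enqueue(encodeChunk(content));
      }
      controller.close();
    },
  });
  return new Response(body, { headers: { "Content-Type": "application/x-ndjson" } });
}
```

The client can then read the response line by line and parse each `{ "chunk": ... }` object as it arrives.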
GreetingCard
@banebot
Generate a greeting card! This script exposes two endpoints: one serves a form that captures information for creating a greeting card with OpenAI, and the other generates the greeting card from the query parameters provided.
HTTP
# Generate a greeting card!
This script exposes two endpoints, one for a form to capture information to utilize in creating a greeting card utilizing OpenAI, and the other to generate the greeting card based off the query parameters provided.
import { OpenAI } from "https://esm.town/v/std/openai";
import { Hono } from "npm:hono";
import { cors } from "npm:hono/cors";
const openai = new OpenAI(),
chat = openai.chat,
chatCompletions = chat.completions;
zyloxAIChatApp
@gigmx
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
import { createRoot } from "https://esm.sh/react-dom@18.2.0/client";
import OpenAI from "https://esm.sh/openai@4.28.4";
function App() {
if (request.method === "POST" && new URL(request.url).pathname === "/chat") {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const { message, characterPrompt } = await request.json();
const stream = await openai.chat.completions.create({
model: "gpt-4o-mini",
cronprompt
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export async function cronprompt(prompt: string) {
const openai = new OpenAI();
const functionExpression = await openai.chat.completions.create({
messages: [
TopHackerNewsDailyEmail
@browserbase
Browserbase Browserbase offers a reliable, high-performance serverless developer platform to run, manage, and monitor headless browsers at scale. Leverage our infrastructure to power your web automation and AI agents. Get started with Browserbase for free. If you have any questions, reach out to developer@browserbase.com.
Cron
import { loadPageContent } from "https://esm.town/v/charlypoly/browserbaseUtils";
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
import { z } from "npm:zod";
.describe("Top 5 stories on Hacker News"),
// we create a OpenAI Tool that takes our schema as argument
const extractContentTool: any = {
parameters: zodToJsonSchema(schema),
const openai = new OpenAI();
// We ask OpenAI to extract the content from the given web page.
// The model will reach out to our `extract_content` tool and produce
// arguments that match the requirements of `extract_content`'s parameters.
const completion = await openai.chat.completions.create({
model: "gpt-4-turbo",
tool_choice: "auto",
// we retrieve the serialized arguments generated by OpenAI
const result = completion.choices[0].message.tool_calls![0].function.arguments;
getShowcaseVals
@stevekrouse
Get vals that were created in the last month, augmented with all sorts of AI and screenshots, and shoved in a sqlite database for @stevekrouse/showcase.
Script
import { OpenAI } from "https://esm.sh/openai";
import { zodResponseFormat } from "https://esm.sh/openai/helpers/zod";
import { z } from "https://esm.sh/zod";
}, { concurrency: 3 });
const openai = new OpenAI();
const ValDescriptions = z.object({
async function getDescriptions(val) {
const completion = await openai.beta.chat.completions.parse({
model: "gpt-4o-mini",
openAIStreaming
@maxm
OpenAI Streaming - Assistant and Threads
An example of using OpenAI to stream back a chat with an assistant. This example sends two messages to the assistant and streams back the responses when they come in. Example response:
user > What should I build today?
................
assistant > Here are a few fun Val ideas you could build on Val Town:
1. **Random Joke Generator:** Fetch a random joke from an API and display it.
2. **Daily Weather Update:** Pull weather data for your location using an API and create a daily summary.
3. **Mini Todo List:** Create a simple to-do list app with add, edit, and delete functionalities.
4. **Chuck Norris Facts:** Display a random Chuck Norris fact sourced from an API.
5. **Motivational Quote of the Day:** Fetch and display a random motivational quote each day.
Which one sounds interesting to you?
user > Cool idea, can you make it even cooler?
...................
assistant > Sure, let's add some extra flair to make it even cooler! How about creating a **Motivational Quote of the Day** app with these features:
1. **Random Color Theme:** Each day, the background color/theme changes randomly.
2. **Quote Sharing:** Add an option to share the quote on social media.
3. **Daily Notifications:** Send a daily notification with the quote of the day.
4. **User Preferences:** Allow users to choose categories (e.g., success, happiness, perseverance) for the quotes they receive.
Would you like some code snippets or guidance on implementing any of these features?
HTTP
# OpenAI Streaming - Assistant and Threads
An example of using OpenAI to stream back a chat with an assistant. This example sends two messages to the assistant and streams back the responses when they come in.
Example response:
import OpenAI from "npm:openai";
const openai = new OpenAI();
import process from "node:process";
// Define our assistant.
const assistant = await openai.beta.assistants.create({
name: "Val Tutor",
// Create a thread to chat in.
const thread = await openai.beta.threads.create();
// These are the messages we'll send to the assistant.
}, 100);
const message = await openai.beta.threads.messages.create(
thread.id,
{ role: "user", content: messages[i] },
const run = openai.beta.threads.runs.stream(thread.id, {
assistant_id: assistant.id,
ALPHANUMERIC_TEXT_TOO_LONG
@roysarajit143
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
try {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const { messages } = await request.json();
const completion = await openai.chat.completions.create({
messages: messages,