Search

Results include substring matches and semantically similar vals. Projects are not included in search results.
aiVideoApp
@Priyansh
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
export default async function server(request: Request): Promise<Response> {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const url = new URL(request.url);
// Simulated video analysis (in a real scenario, you'd use a more advanced video processing service)
const analysisResponse = await openai.chat.completions.create({
messages: [
// Generate video concept
const conceptResponse = await openai.chat.completions.create({
messages: [
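The excerpt above makes two chat-completion calls in sequence: a simulated video analysis, then a concept generated from it. A minimal sketch of chaining the first call's output into the second request's messages; the prompt wording and function name are hypothetical, since the val's actual prompts are not shown:

```typescript
// Build the messages for the concept request from the analysis text.
// Prompt contents here are illustrative assumptions, not the val's own.
type ChatMessage = { role: "system" | "user"; content: string };

function buildConceptMessages(analysis: string): ChatMessage[] {
  return [
    { role: "system", content: "You turn video analyses into short video concepts." },
    { role: "user", content: `Based on this analysis, propose a video concept:\n${analysis}` },
  ];
}
```

The returned array would be passed as `messages` to the second `openai.chat.completions.create` call.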
DailyDaughterNotes
@heathergliffin
* This app generates cute daily notes for a daughter using OpenAI's GPT model.
* It stores the generated notes in SQLite for persistence and displays them on a simple web interface.
* The app uses React for the frontend and Deno's runtime environment in Val Town for the backend.
HTTP
async function server(request: Request): Promise<Response> {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const { sqlite } = await import("https://esm.town/v/stevekrouse/sqlite");
} else if (path === "/generate-note" && request.method === "POST") {
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
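The excerpt routes on `path === "/generate-note" && request.method === "POST"`. A dependency-free sketch of that routing shape; the route names other than `/generate-note` are assumptions:

```typescript
// Map a (path, method) pair to a handler name, mirroring the excerpt's checks.
function route(path: string, method: string): string {
  if (path === "/" && method === "GET") return "home";
  if (path === "/generate-note" && method === "POST") return "generate-note";
  return "not-found";
}
```

In the val itself, each branch would return a `Response` (the rendered page, the generated note, or a 404).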
free_open_router
@taras
curl 'https://taras-free_open_router.web.val.run/api/v1/chat/completions' \
  -H 'accept: application/json' \
  -H 'authorization: Bearer THIS_IS_OVERRIDEN_ON_SERVER' \
  -H 'content-type: application/json' \
  --data-raw '{
    "model": "auto",
    "temperature": 0,
    "messages": [
      { "role": "system", "content": "stuff" },
      { "role": "user", "content": "hello" }
    ],
    "stream": true
  }'
HTTP
url: "https://openrouter.ai/api/v1/models",
token: Deno.env.get("OPEN_ROUTER_API_KEY"),
url: "https://api.groq.com/openai/v1/models",
token: Deno.env.get("GROQ_API_KEY"),
// Create fetch promises for each API endpoint
if (provider === "groq") {
url.host = "api.groq.com";
url.pathname = url.pathname.replace("/api/v1", "/openai/v1");
url.port = "443";
url.protocol = "https";
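The Groq branch above rewrites the incoming request URL onto Groq's OpenAI-compatible endpoint. A self-contained sketch of that rewrite (the function name is an assumption; the host, path replacement, port, and protocol come from the excerpt):

```typescript
// Rewrite an incoming /api/v1 URL to Groq's OpenAI-compatible API.
function rewriteForGroq(incoming: string): string {
  const url = new URL(incoming);
  url.host = "api.groq.com";
  url.pathname = url.pathname.replace("/api/v1", "/openai/v1");
  url.port = "443";
  url.protocol = "https";
  return url.toString();
}
```

Note that the URL serializer drops port 443 for `https:`, so the default port never appears in the rewritten URL.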
interview_practice
@knulp222
@jsxImportSource https://esm.sh/react
HTTP
const { intervieweeResponse, interviewPosition } = await request.json();
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
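The excerpt destructures `intervieweeResponse` and `interviewPosition` straight out of `request.json()`. A sketch of validating that payload before use; the function name and error message are assumptions:

```typescript
// Validate the interview payload; throw if either field is missing or not a string.
function parseInterviewPayload(
  body: Record<string, unknown>,
): { intervieweeResponse: string; interviewPosition: string } {
  const { intervieweeResponse, interviewPosition } = body;
  if (typeof intervieweeResponse !== "string" || typeof interviewPosition !== "string") {
    throw new Error("intervieweeResponse and interviewPosition must be strings");
  }
  return { intervieweeResponse, interviewPosition };
}
```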
bhaavsynthlandipgpage
@prashamtrivedi
GujaratiGPT Landing Page: a responsive landing page for the GujaratiGPT tool, an AI-powered blog post generator specialized in creating Gujarati content using both Anthropic's Claude and Google's Gemini models.
Features:
* Responsive design that works on mobile, tablet, and desktop
* Interactive UI elements including hover effects and smooth scrolling
* Sections that highlight the key features, workflow, and model comparison
* Clean and modern design with CSS variables for easy customization
HTTP
const { sqlite } = await import("https://esm.town/v/stevekrouse/sqlite");
const { email: sendEmail } = await import("https://esm.town/v/std/email");
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const KEY = new URL(import.meta.url).pathname.split("/").at(-1);
const SCHEMA_VERSION = 1;
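The `KEY` line above derives a name for the val from its own module URL, taking the last path segment of `import.meta.url`. Extracted into a testable helper (the function name is an assumption):

```typescript
// Return the last path segment of a module URL, e.g. the val's own name.
function valKeyFromUrl(moduleUrl: string): string | undefined {
  return new URL(moduleUrl).pathname.split("/").at(-1);
}
```

This is a common Val Town idiom for namespacing SQLite tables or storage keys per val.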
modelSampleChatCall
@bluemsn
An interactive, runnable TypeScript val by bluemsn
Script
const builder = await getModelBuilder({
type: "chat",
provider: "openai",
const model = await builder();
const { SystemMessage, HumanMessage } = await import("npm:langchain/schema");
telegramWebhookEchoMessage
@ynonp
An interactive, runnable TypeScript val by ynonp
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
import { telegramSendMessage } from "https://esm.town/v/vtdocs/telegramSendMessage?v=5";
import translateToEnglishWithOpenAI from "https://esm.town/v/ynonp/translateToEnglishWithOpenAI";
export const telegramWebhookEchoMessage = async (req: Request) => {
const chatId: number = body.message.chat.id;
const translated = await translateToEnglishWithOpenAI(text);
await telegramSendMessage(Deno.env.get("TELEGRAM_BOT_TOKEN"), { chat_id: chatId, text: translated });
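The handler above reads `body.message.chat.id` from the Telegram webhook update. A defensive sketch of that extraction, since not every update type carries a `message` (the function name is an assumption):

```typescript
// Pull the chat id out of a Telegram webhook update, if present.
function chatIdOf(update: { message?: { chat?: { id?: number } } }): number | undefined {
  return update.message?.chat?.id;
}
```

Returning `undefined` lets the webhook skip updates (e.g. edited messages or channel posts) that have no plain `message`.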
dailyThoughtPrompt
@chrisputnam9
Uses OpenAI to generate a thought-provoking statement and emails it to you at whatever interval you set. Clone/fork this, set the interval you want (e.g. every 1 days) and hit run to test it. The email will send to your Val Town account email.
Cron
// OpenAI provided by val town
import { OpenAI } from "https://esm.town/v/std/openai";
// Email provided by val town
// Now ask the AI to invent something
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
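After the completion call above, the val needs the generated text to put in the email body. A sketch of that extraction; the empty-string fallback is an assumption:

```typescript
// Pull the first choice's text out of a chat completion response.
type Completion = { choices: Array<{ message: { content: string | null } }> };

function firstMessageContent(completion: Completion): string {
  return completion.choices[0]?.message?.content ?? "";
}
```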
VEOPROMPTER
@AppleLamps
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
if (url.pathname === "/generate-prompts") {
const { OpenAI } = await import("https://esm.sh/openai@4.11.1");
const openai = new OpenAI({
apiKey: Deno.env.get("OPENAI_API_KEY"),
const body = await request.json();
try {
const completion = await openai.chat.completions.create({
model: "chatgpt-4o-latest", // Changed from "gpt-4o-latest" to "chatgpt-4o-latest"
if (!response) {
throw new Error("No response from OpenAI");
const [conciseSection, detailedSection] = response.split("\n\n");
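The last line above splits the model's reply into a concise and a detailed section on a blank line. A sketch that tolerates a reply with no second section (the function name and empty-string defaults are assumptions):

```typescript
// Split a reply into [concise, detailed] on the first blank line;
// missing sections become empty strings instead of undefined.
function splitPromptSections(response: string): [string, string] {
  const [concise = "", detailed = ""] = response.split("\n\n");
  return [concise, detailed];
}
```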
chocolateCanid
@willthereader
ChatGPT Implemented in Val Town. Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events.
HTTP
/** @jsxImportSource https://esm.sh/react */
import { OpenAI } from "https://esm.town/v/std/openai";
import { Hono } from "npm:hono@3";
import { renderToString } from "npm:react-dom/server";
const openai = new OpenAI();
const jsxResponse = (jsx) => {
try {
thread = await openai.chat.completions.create({
model: "gpt-3.5-turbo", // Use a faster model
messages: [{ role: "system", content: "Start a new thread" }],
assistant = await openai.chat.completions.create({
model: "gpt-3.5-turbo", // Use a faster model
try {
await openai.chat.completions.create({
model: "gpt-3.5-turbo", // Use a faster model
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.chat.completions.stream({
model: "gpt-3.5-turbo", // Use a faster model
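The `"data: " + JSON.stringify(str) + "\n\n"` line above is the Server-Sent Events wire format: each event is a `data:` field terminated by a blank line. A sketch of that framing as a helper (the function name is an assumption):

```typescript
// Serialize a payload as one SSE event frame: a `data:` line plus blank-line terminator.
function sseFrame(payload: unknown): string {
  return "data: " + JSON.stringify(payload) + "\n\n";
}
```

JSON-encoding the payload keeps embedded newlines out of the frame, which would otherwise need to be split across multiple `data:` lines.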
rateArticleRelevance
@vandyand
An interactive, runnable TypeScript val by vandyand
Script
export const rateArticleRelevance = async (interests: string, article: any) => {
const { default: OpenAI } = await import("npm:openai");
const openai = new OpenAI({
apiKey: untitled_tealCoral.OPENAI_API_KEY,
try {
Give a score from 0 to 10. Why did you give this score? Respond with the score only.
const response = await openai.chat.completions.create({
messages: [
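The prompt above asks the model to "respond with the score only", so the reply still has to be parsed and range-checked. A sketch of that validation; the `null`-on-malformed-reply behavior is an assumption:

```typescript
// Parse a "score only" reply; return null unless it is a number in [0, 10].
function parseScore(text: string): number | null {
  const n = Number(text.trim());
  return Number.isFinite(n) && n >= 0 && n <= 10 ? n : null;
}
```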
calendarEventExtractor
@pdebieamzn
// This calendar app will allow users to upload a PDF, extract events from it using OpenAI's GPT model, and display them on a big calendar.
HTTP
// This calendar app will allow users to upload a PDF, extract events from it using OpenAI's GPT model,
// and display them on a big calendar. We'll use react-big-calendar for the calendar component,
async function server(request: Request): Promise<Response> {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const pdfExtractText = await import("https://esm.town/v/pdebieamzn/pdfExtractText");
const fullText = await pdfExtractText.default(arrayBuffer);
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
} catch (error) {
console.error('Error parsing OpenAI response:', error);
console.log('Raw response:', completion.choices[0].message.content);
if (!Array.isArray(events)) {
console.error('Unexpected response format from OpenAI');
return new Response(JSON.stringify({ error: 'Unexpected response format' }), { status: 500, headers: { 'Content-Type': 'application/json' } });
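The error handling above guards against the model returning something other than a JSON array of events. A sketch of that parse-and-validate step as one helper (the function name and `null` return are assumptions):

```typescript
// Parse the model's reply; return the events array, or null if the reply
// is not valid JSON or not an array.
function parseEvents(raw: string): unknown[] | null {
  try {
    const events = JSON.parse(raw);
    return Array.isArray(events) ? events : null;
  } catch {
    return null;
  }
}
```

The caller can then map `null` to the 500 response shown in the excerpt.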
emojiVectorEmbeddings
@maxm
// Initialize OpenAI client
Script
import * as nodeEmoji from "npm:node-emoji";
import { OpenAI } from "npm:openai";
// Initialize OpenAI client
const openai = new OpenAI();
await sqlite.batch([
async function getEmbedding(emoji: string): Promise<number[]> {
const result = await openai.embeddings.create({
input: emoji,
// async function getEmbedding(emoji: string): Promise<number[]> {
// const result = await openai.embeddings.create({
// input: emoji,
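The excerpt stores per-emoji embeddings in SQLite; querying such a table typically means ranking rows by cosine similarity against a query embedding. That lookup code is not shown, so here is a standard cosine-similarity sketch:

```typescript
// Cosine similarity of two equal-length embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```

Since OpenAI embeddings are unit-normalized, a plain dot product would give the same ranking; the full formula is shown for generality.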
webscrapeWikipediaIntro
@vtdocs
An interactive, runnable TypeScript val by vtdocs
HTTP
const cheerio = await import("npm:cheerio");
const html = await fetchText(
"https://en.wikipedia.org/wiki/OpenAI",
const $ = cheerio.load(html);
// Cheerio accepts a CSS selector, here we pick the second <p>
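The excerpt uses cheerio's CSS selectors to pick the second `<p>`. As a dependency-free stand-in for that selector logic (cheerio itself is an npm import), here is a regex sketch; regex HTML parsing is only adequate for an illustration like this, not production scraping:

```typescript
// Return the inner text of the second <p> element, or null if absent.
// A crude stand-in for cheerio's $("p").eq(1) selector.
function secondParagraph(html: string): string | null {
  const matches = [...html.matchAll(/<p[^>]*>([\s\S]*?)<\/p>/g)];
  return matches[1]?.[1] ?? null;
}
```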
subtleAmaranthLoon
@bambare
An interactive, runnable TypeScript val by bambare
Email
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function (e: Email) {
const KEY = new URL(import.meta.url).pathname.split("/").at(-1);
const openai = new OpenAI();
const sqlite = await import("https://esm.town/v/stevekrouse/sqlite");
try {
// Generate quotation using OpenAI
const completion = await openai.chat.completions.create({
messages: [
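This last val generates a quotation from an inbound email. The excerpt stops at the `messages` array, so here is a sketch of building that array from the email's subject and body; the prompt text and function name are hypothetical:

```typescript
// Turn an inbound email's subject and body into chat messages for a
// quotation request. Prompt wording is an illustrative assumption.
function quotationMessages(subject: string, text: string) {
  return [
    { role: "system" as const, content: "Reply with a short quotation relevant to the email." },
    { role: "user" as const, content: `Subject: ${subject}\n\n${text}` },
  ];
}
```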