Search

Results include substring matches and semantically similar vals.
SermonGPTAPI
@mjweaver01
An interactive, runnable TypeScript val by mjweaver01
HTTP
if (request.method === "POST" && new URL(request.url).pathname === "/stream") {
  const { OpenAI } = await import("https://esm.town/v/std/openai");
  const openai = new OpenAI();
  const { question } = await request.json();
  const season = getSeason(currentDate);
  const stream = await openai.chat.completions.create({
    messages: [
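
The excerpt above is cut off mid-call. As a rough sketch, a streaming endpoint like this can be finished as follows; the model name, the user-only message, and the plain-text re-streaming are assumptions (the original also builds a season-aware prompt via getSeason, omitted here):

import { OpenAI } from "https://esm.town/v/std/openai";

export default async function (request: Request): Promise<Response> {
  if (request.method === "POST" && new URL(request.url).pathname === "/stream") {
    const openai = new OpenAI();
    const { question } = await request.json();
    // Ask for a streamed completion so tokens can be forwarded as they arrive
    const stream = await openai.chat.completions.create({
      messages: [{ role: "user", content: question }],
      model: "gpt-4o-mini", // assumption; the val's actual model is not shown
      stream: true,
    });
    // Forward each token delta to the client as plain text
    const body = new ReadableStream({
      async start(controller) {
        const encoder = new TextEncoder();
        for await (const chunk of stream) {
          controller.enqueue(encoder.encode(chunk.choices[0]?.delta?.content ?? ""));
        }
        controller.close();
      },
    });
    return new Response(body, { headers: { "Content-Type": "text/plain; charset=utf-8" } });
  }
  return new Response("Not found", { status: 404 });
}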
getValsContextWindow
@janpaul123
An interactive, runnable TypeScript val by janpaul123
Script
el.attributes.filter(a => a.name === "href").map(a => a.value)
prompt: "Write a val that uses OpenAI",
code: `import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
"messages": [
topHNThreadByHour
@elsif_maj
// set at Thu Nov 30 2023 14:22:53 GMT+0000 (Coordinated Universal Time)
Email
for 18:00 is: The Midwit Smart Home","Top thread on Hackernews for 19:00 is: OpenAI is too cheap to beat","Top thread on Hack
cron
@stevekrouse
CronGPT: a minisite to help you create cron expressions, particularly for crons on Val Town. It was inspired by Cron Prompt, but also does the timezone conversion from wherever you are to UTC (typically the server timezone).

Tech:
* Hono for routing (GET / and POST /compile)
* Hono JSX
* HTMX (probably overcomplicates things; should remove)
* @stevekrouse/openai, which is a light wrapper around @std/openai

I'm finding HTMX a bit overpowered for this, so I have two experimental forks without it:
1. Vanilla client-side JavaScript: @stevekrouse/cron_client_side_script_fork
2. Client-side ReactJS (no SSR): @stevekrouse/cron_client_react_fork

I think (2), client-side React without any SSR, is the simplest architecture. Maybe will move to that.
HTTP
* HTMX (probably overcomplicates things; should remove)
* @stevekrouse/openai, which is a light wrapper around @std/openai
I'm finding HTMX a bit overpowered for this, so I have two experimental forks without it:
/** @jsxImportSource npm:hono@3/jsx */
import { chat } from "https://esm.town/v/stevekrouse/openai";
import cronstrue from "npm:cronstrue";
import { Hono } from "npm:hono@3";
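
Based on the routes named in the README (GET / and POST /compile), a minimal sketch of the routing shape might look like this; the prompt text, the form markup, and the assumption that the chat wrapper takes a prompt string and returns the reply as text are mine, not the val's actual code:

/** @jsxImportSource npm:hono@3/jsx */
import { chat } from "https://esm.town/v/stevekrouse/openai";
import cronstrue from "npm:cronstrue";
import { Hono } from "npm:hono@3";

const app = new Hono();

// GET / serves the input form
app.get("/", (c) =>
  c.html(
    <form method="post" action="/compile">
      <input name="description" placeholder="every weekday at 9am" />
      <button>Compile</button>
    </form>,
  ));

// POST /compile turns a natural-language description into a cron expression
app.post("/compile", async (c) => {
  const { description } = await c.req.parseBody();
  // chat() is assumed to return the reply text; the real wrapper may differ
  const cron = await chat(`Reply with only a UTC cron expression for: ${description}`);
  return c.text(`${cron} (${cronstrue.toString(String(cron))})`);
});

export default app.fetch;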
Code_Debugger
@sakshamkapoor2911
Cerebras Inference template. This val shows you how you can deploy an app using Cerebras Inference on Val Town in seconds.

What is Cerebras? Cerebras is an American chip manufacturer that produces large wafer chips that deliver mind-blowing LLM inference speeds. As of this writing on Jan 17, 2025, Cerebras Inference serves Llama 3.1 8b, 3.1 70b, and 3.3 70b at a jaw-dropping 2k tokens per second – that's 50x faster than what the frontier labs produce. Llama 3.3 70b at 2k tokens per second is particularly noteworthy because it is a GPT-4-class model. This level of intelligence at that level of speed will unlock whole new classes of applications.

Quick start:
1. Sign up for Cerebras.
2. Get a Cerebras API key.
3. Save it in a Val Town environment variable called CEREBRAS_API_KEY.

Once Cerebras is set up in your Val Town account, there are two ways to get started:
1. Fork this app and customize it (or ask Townie AI to customize it).
2. Start a new chat with Townie AI and copy & paste the following instructions: "Use Cerebras for AI on the backend like so: const { OpenAI } = await import("https://esm.sh/openai"); const client = new OpenAI({ apiKey: Deno.env.get("CEREBRAS_API_KEY"), baseURL: "https://api.cerebras.ai/v1" }); const response = await client.chat.completions.create({ model: "llama-3.3-70b", messages: [] }); const generatedText = response.choices[0].message.content;"

For example, the val in this template was created by asking Townie AI to "Make a chatgpt clone", hitting shift-enter twice, pasting in the Cerebras instructions above, and hitting enter. Townie built this app on its first try, in about 20 seconds.

Sample apps:
- Cerebras Searcher – a Perplexity clone that uses the SerpAPI to do RAG and summaries with Cerebras (requires a SerpAPI key)
- Cerebras Coder – an app that generates websites in a second with Cerebras
- Cerebras Debater – an app that truly shows Cerebras's speed: it's Cerebras talking to Cerebras in a debate
HTTP
Use Cerebras for AI on the backend like so:
const { OpenAI } = await import("https://esm.sh/openai");
const client = new OpenAI({
apiKey: Deno.env.get("CEREBRAS_API_KEY"),
const stackOverflowResults = await searchStackOverflow(lastUserMessage);
const { OpenAI } = await import("https://esm.sh/openai");
const client = new OpenAI({
apiKey: Deno.env.get("CEREBRAS_API_KEY"),
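
For reference, the instructions embedded in the README above expand to this self-contained snippet; the sample user message is an illustrative assumption (the README leaves messages empty):

// Requires a CEREBRAS_API_KEY environment variable
const { OpenAI } = await import("https://esm.sh/openai");

const client = new OpenAI({
  apiKey: Deno.env.get("CEREBRAS_API_KEY"),
  baseURL: "https://api.cerebras.ai/v1", // Cerebras exposes an OpenAI-compatible API
});

const response = await client.chat.completions.create({
  model: "llama-3.3-70b",
  messages: [{ role: "user", content: "Say hello in one sentence." }], // illustrative
});

const generatedText = response.choices[0].message.content;
console.log(generatedText);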
valleGetValsContextWindow
@roadlabs
An interactive, runnable TypeScript val by roadlabs
Script
el.attributes.filter(a => a.name === "href").map(a => a.value)
prompt: "Write a val that uses OpenAI",
code: `import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
"messages": [
VALLErun
@arash2060
The actual code for VALL-E: https://www.val.town/v/janpaul123/VALLE
HTTP
import { sleep } from "https://esm.town/v/stevekrouse/sleep?v=1";
import { anthropic } from "npm:@ai-sdk/anthropic";
import { openai } from "npm:@ai-sdk/openai";
import ValTown from "npm:@valtown/sdk";
import { StreamingTextResponse, streamText } from "npm:ai";
// Choose the Vercel AI SDK provider based on the model name
let vercelModel;
if (model.includes("gpt")) {
  vercelModel = openai(model);
} else {
  vercelModel = anthropic(model);
}
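
A sketch of how the provider switch above typically feeds into the streaming imports shown earlier; this assumes the v3-era ai package API that the StreamingTextResponse import implies, and the function shape is mine, not VALLErun's actual code:

import { anthropic } from "npm:@ai-sdk/anthropic";
import { openai } from "npm:@ai-sdk/openai";
import { StreamingTextResponse, streamText } from "npm:ai";

type Msg = { role: "user" | "assistant"; content: string };

export async function streamCompletion(model: string, messages: Msg[]) {
  // Route to the matching Vercel AI SDK provider by model name
  const vercelModel = model.includes("gpt") ? openai(model) : anthropic(model);
  const result = await streamText({ model: vercelModel, messages });
  // textStream is a ReadableStream of generated text chunks
  return new StreamingTextResponse(result.textStream);
}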
email_weather
@jesi_rgb
This val will create a daily weather email service. It uses the OpenWeatherMap API to fetch weather data and the Val Town email API to send emails. The val will be triggered daily using a cron job.
Cron
import { email } from "https://esm.town/v/std/email";
import { OpenAI } from "https://esm.town/v/std/openai";
const OPENWEATHERMAP_API_KEY = Deno.env.get("WEATHER_API_KEY");
async function generateEmailContent(weatherData) {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    messages: [
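
Putting the pieces of the description together, a cron handler along these lines would complete the flow; the city, prompt wording, and model are assumptions:

import { email } from "https://esm.town/v/std/email";
import { OpenAI } from "https://esm.town/v/std/openai";

const OPENWEATHERMAP_API_KEY = Deno.env.get("WEATHER_API_KEY");

export default async function () {
  // Fetch current weather (city hardcoded here for illustration)
  const res = await fetch(
    `https://api.openweathermap.org/data/2.5/weather?q=London&units=metric&appid=${OPENWEATHERMAP_API_KEY}`,
  );
  const weatherData = await res.json();

  // Turn the raw data into a friendly email body
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    messages: [{
      role: "user",
      content: `Write a short, friendly weather email for this forecast: ${JSON.stringify(weatherData)}`,
    }],
    model: "gpt-4o-mini", // assumption
  });

  // Send it via the Val Town email API
  await email({
    subject: "Your daily weather",
    text: completion.choices[0].message.content ?? "No forecast generated",
  });
}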
productiveIndigoJay
@melissanf
AI-Powered Content Generator 🚀

Overview: This is an AI-powered tool that helps users generate blog post ideas and outlines based on a given topic. Users can easily copy or share their generated content.

Features:
✅ User Input – Enter a topic to get relevant blog ideas.
✅ AI-Powered Suggestions – Generates three blog post ideas or a detailed outline.
✅ Copy & Share – One-click copy button and a shareable link for easy sharing.

How to Use:
1️⃣ Enter a topic in the input field (e.g., "Remote Work Productivity").
2️⃣ Generate blog ideas with AI.
3️⃣ Copy the results or share the link with others.

Tech Stack:
- Townie AI – AI-based automation
- OpenAI API – For generating blog content
- Built-in Databases – To store and retrieve topics (if needed)

How to Deploy & Share:
1. Run the app on Townie AI.
2. Copy the generated link (Townie AI auto-deploys apps).
3. Share it on social media or with your audience.

Troubleshooting:
🔹 Issue: Blog ideas not generating? ➡️ Click "Logs" in Townie AI and check the error message.
🔹 Issue: Copy button not working? ➡️ Ensure the event listener is correctly set for the copy function.

💡 Contributions & Feedback: Have suggestions? Feel free to reach out or improve this project! 🚀
HTTP
- **Townie AI** – AI-based automation
- **OpenAI API** – For generating blog content
- **Built-in Databases** – To store and retrieve topics (if needed)
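
The generation step behind a tool like this is a single chat completion. A minimal sketch, assuming the std/openai client and a prompt of my own wording (the app's real prompt is not shown):

import { OpenAI } from "https://esm.town/v/std/openai";

export async function generateBlogIdeas(topic: string): Promise<string> {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    messages: [{
      role: "user",
      content: `Suggest three blog post ideas, each with a short outline, for the topic: ${topic}`,
    }],
    model: "gpt-4o-mini", // assumption
  });
  return completion.choices[0].message.content ?? "";
}

// e.g. await generateBlogIdeas("Remote Work Productivity")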
ThankYouNoteGenerator
@willthereader
An interactive, runnable TypeScript val by willthereader
HTTP
console.log(`Received ${request.method} request to ${request.url}`);
const { OpenAI } = await import("https://esm.town/v/std/openai");
const { sqlite } = await import("https://esm.town/v/stevekrouse/sqlite");
const openai = new OpenAI();
const url = new URL(request.url);
console.log("Generated prompt:", prompt);
const completion = await openai.chat.completions.create({
messages: [{ role: "user", content: prompt }],
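
The excerpt pairs OpenAI with SQLite. A minimal sketch of how the two typically combine (generate a note, then persist it); the table name and prompt are hypothetical:

import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/stevekrouse/sqlite";

export async function generateThankYouNote(occasion: string): Promise<string> {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user", content: `Write a warm, specific thank-you note for: ${occasion}` }],
    model: "gpt-4o-mini", // assumption
  });
  const note = completion.choices[0].message.content ?? "";

  // Persist the note (hypothetical table name)
  await sqlite.execute(
    "CREATE TABLE IF NOT EXISTS thank_you_notes (occasion TEXT, note TEXT)",
  );
  await sqlite.execute({
    sql: "INSERT INTO thank_you_notes (occasion, note) VALUES (?, ?)",
    args: [occasion, note],
  });
  return note;
}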
cerebrasTemplate
@gokulnpc
Cerebras Inference template. This val shows you how you can deploy an app using Cerebras Inference on Val Town in seconds.

What is Cerebras? Cerebras is an American chip manufacturer that produces large wafer chips that deliver mind-blowing LLM inference speeds. As of this writing on Jan 17, 2025, Cerebras Inference serves Llama 3.1 8b, 3.1 70b, and 3.3 70b at a jaw-dropping 2k tokens per second – that's 50x faster than what the frontier labs produce. Llama 3.3 70b at 2k tokens per second is particularly noteworthy because it is a GPT-4-class model. This level of intelligence at that level of speed will unlock whole new classes of applications.

Quick start:
1. Sign up for Cerebras.
2. Get a Cerebras API key.
3. Save it in a Val Town environment variable called CEREBRAS_API_KEY.

Once Cerebras is set up in your Val Town account, there are two ways to get started:
1. Fork this app and customize it (or ask Townie AI to customize it).
2. Start a new chat with Townie AI and copy & paste the following instructions: "Use Cerebras for AI on the backend like so: const { OpenAI } = await import("https://esm.sh/openai"); const client = new OpenAI({ apiKey: Deno.env.get("CEREBRAS_API_KEY"), baseURL: "https://api.cerebras.ai/v1" }); const response = await client.chat.completions.create({ model: "llama-3.3-70b", messages: [] }); const generatedText = response.choices[0].message.content;"

For example, the val in this template was created by asking Townie AI to "Make a chatgpt clone", hitting shift-enter twice, pasting in the Cerebras instructions above, and hitting enter. Townie built this app on its first try, in about 20 seconds.

Sample apps:
- Cerebras Searcher – a Perplexity clone that uses the SerpAPI to do RAG and summaries with Cerebras (requires a SerpAPI key)
- Cerebras Coder – an app that generates websites in a second with Cerebras
- Cerebras Debater – an app that truly shows Cerebras's speed: it's Cerebras talking to Cerebras in a debate
HTTP
Use Cerebras for AI on the backend like so:
const { OpenAI } = await import("https://esm.sh/openai");
const client = new OpenAI({
apiKey: Deno.env.get("CEREBRAS_API_KEY"),
cron_client_react_fork
@stevekrouse
CronGPT: a minisite to help you create cron expressions, particularly for crons on Val Town. It was inspired by Cron Prompt, but also does the timezone conversion from wherever you are to UTC (typically the server timezone).

Tech:
* Hono for routing (GET / and POST /compile)
* Hono JSX
* HTMX (probably overcomplicates things; should remove)
* @stevekrouse/openai, which is a light wrapper around @std/openai

TODO:
[ ] simplify by removing HTMX (try doing the form as a GET request, manual JS script, or client side react)
[ ] make the timezone picker better (fewer options, searchable)
[ ] add a copy button?
HTTP
* HTMX (probably overcomplicates things; should remove)
* @stevekrouse/openai, which is a light wrapper around @std/openai
## TODO
import cronstrue from "https://esm.sh/cronstrue";
import React, { useState } from "https://esm.sh/react@18.2.0";
import { chat } from "https://esm.town/v/stevekrouse/openai?v=19";
import react_http from "https://esm.town/v/stevekrouse/react_http?v=6";
export default function(req: Request) {
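
This fork serves client-side React with no SSR. A generic sketch of that shape (a bare HTML shell, with all rendering in the browser); this illustrates the architecture, not the actual react_http helper:

export default function (req: Request): Response {
  return new Response(
    `<!DOCTYPE html>
<html>
  <body>
    <div id="root"></div>
    <script type="module">
      import React from "https://esm.sh/react@18.2.0";
      import { createRoot } from "https://esm.sh/react-dom@18.2.0/client";
      // Everything renders in the browser; the server only ships this shell
      createRoot(document.getElementById("root")).render(
        React.createElement("h1", null, "Rendered entirely client-side"),
      );
    </script>
  </body>
</html>`,
    { headers: { "Content-Type": "text/html" } },
  );
}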
multilingualchatroom
@daisuke
A simple chat room to share with friends of other languages, where each user can talk in their own language and see others' messages in their own language. Click and hold a translated message to see the original message. Open the app in a new window to start your own, unique chatroom that you can share with your friends via the room URL.

TODO:
- BUG: Fix the issue that keeps old usernames in the "[User] is typing" section after a user changes their name.
- BUG: Username editing with backspace is glitchy.
- UI: Update the title for each unique chatroom to make the difference clear.
- UI: Make it mobile friendly.
- Feature: Let the message receiver select a part of a translation that is confusing; the author will see the confusing words highlighted and have the opportunity to reword the message or...
- Feature: Bump a translation to a higher LLM for more accurate translation.
- Feature: Use prior chat context for more accurate translations.
- Feature: Add a video feed for non-verbals while chatting.
HTTP
const { blob } = await import("https://esm.town/v/std/blob");
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
// Simple rate limiting
try {
  const completion = await openai.chat.completions.create({
    messages: [
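
The core of a room like this is translating each message into every reader's language. A minimal sketch of that step, assuming the std/openai client; the prompt wording and model are mine, not the val's actual code:

import { OpenAI } from "https://esm.town/v/std/openai";

export async function translateMessage(message: string, targetLanguage: string): Promise<string> {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    messages: [{
      role: "user",
      content: `Translate this chat message into ${targetLanguage}. Reply with only the translation: ${message}`,
    }],
    model: "gpt-4o-mini", // assumption
  });
  // Fall back to the original text if translation fails
  return completion.choices[0].message.content ?? message;
}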
valleGetValsContextWindow
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
HTTP
el.attributes.filter(a => a.name === "href").map(a => a.value)
prompt: "Write a val that uses OpenAI",
code: `import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
"messages": [
cron2
@stevekrouse
@jsxImportSource npm:hono@3/jsx
HTTP
/** @jsxImportSource npm:hono@3/jsx */
import { chat } from "https://esm.town/v/stevekrouse/openai";
import cronstrue from "npm:cronstrue";
import { Hono } from "npm:hono@3";