Search

Results include substring matches and semantically similar vals.
myApi
@emohamed
An interactive, runnable TypeScript val by emohamed
Script
export function myApi(name) {
  return "hi " + name;
}
myApi
@rodrigotello
An interactive, runnable TypeScript val by rodrigotello
Script
export function myApi(name) {
  return "hi " + name;
}
myApi
@dotangad
An interactive, runnable TypeScript val by dotangad
Script
export function myApi(name) {
  return "hi " + name;
}
myApi
@tech_aly
An interactive, runnable TypeScript val by tech_aly
Script
export function myApi(name) {
  return "hi " + name;
}
myApi
@bacondotbuild
An interactive, runnable TypeScript val by bacondotbuild
Script
export function myApi(name) {
  return "hi " + name;
}
myApi
@lol
An interactive, runnable TypeScript val by lol
Script
export function myApi(name) {
  return "hi " + name;
}
prompt_to_code_auto_refresh_codebed
@trob
@jsxImportSource https://esm.sh/react
HTTP
function debounce(func, wait) {
return function executedFunction(...args) {
function App() {
function client() {
function stripCodeFences(code: string): string {
function extractMessage(logOutput: string): string {
async function retryWithBackoff(fn: () => Promise<any>, maxRetries = 5) {
export default async function server(request: Request): Promise<Response> {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
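The collapsed preview above shows only the first two lines of a debounce helper. A standard implementation is sketched below; this is an assumption about what the truncated body likely does, not the val's actual code:

```typescript
// Hypothetical completion of the truncated debounce preview: delays `func`
// until `wait` ms have passed without another call.
function debounce(func: (...args: any[]) => void, wait: number) {
  let timeout: ReturnType<typeof setTimeout> | undefined;
  return function executedFunction(...args: any[]) {
    // Each call cancels the pending invocation and schedules a new one.
    clearTimeout(timeout);
    timeout = setTimeout(() => func(...args), wait);
  };
}
```

In the val this presumably keeps the prompt-to-code refresh from firing on every keystroke.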
val_wGlPXo8a3y
@dhvanil
An interactive, runnable TypeScript val by dhvanil
HTTP
export async function val_wGlPXo8a3y(req) {
  try {
    // Execute the code directly and capture its result
neatEmeraldVicuna
@stevekrouse
Twitter/𝕏 keyword alerts

Custom notifications for when you, your company, or anything you care about is mentioned on Twitter. If you believe in Twitter/𝕏-driven development, you want to get notified when anyone is talking about your tech, even if they're not tagging you. To get this Twitter Alert bot running for you, fork this val and modify the query and where the notification gets delivered.

1. Query
Change the keywords for what you want to get notified about, and the excludes for what you don't. You can use Twitter's search operators to customize your query: match a collection of keywords, filter out others, and much more!

2. Notification
Below I'm sending these mentions to a public channel in our company Discord, but you can customize that to whatever you want: @std/email, Slack, Telegram, whatever.

Twitter Data & Limitations
The Twitter API has become unusable. This val gets Twitter data via SocialData, an affordable Twitter scraping API. To make this val easy for you to fork and use without signing up for another API, I am proxying SocialData via @stevekrouse/socialDataProxy. Val Town Pro users can call this proxy 100 times per day, so be sure not to set this cron to run more than once every 15 min. If you want to run it more often, get your own SocialData API token and pay for it directly.
Cron
import { zodResponseFormat } from "https://esm.sh/openai/helpers/zod";
import { z } from "https://esm.sh/zod";
import { OpenAI } from "https://esm.town/v/std/openai";
import { discordWebhook } from "https://esm.town/v/stevekrouse/discordWebhook";
.join(" OR ") + " " + excludes;
const openai = new OpenAI();
const RelevanceSchema = z.object({
reason: z.string(),
async function relevant(t: Tweet): Promise<z.infer<typeof RelevanceSchema>> {
const systemPrompt = `Determine if this tweet JSON is relevant to Shorebird on the following criteria:
try {
const completion = await openai.beta.chat.completions.parse({
model: "gpt-4o-mini",
} catch (error) {
console.error("Error parsing OpenAI response:", error);
return { isRelevant: false, confidence: 0, reason: "Error in processing" };
const isProd = true;
export async function twitterAlert({ lastRunAt }: Interval) {
// search
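The preview's `.join(" OR ") + " " + excludes;` fragment hints at how the search query is assembled from Twitter search operators. A hedged sketch of that step; the keyword and exclude values here are made up, not the val's actual configuration:

```typescript
// Hypothetical keyword/exclude lists; the real val defines its own.
const keywords = ['"val town"', "valtown"];
const excludes = "-is:retweet -from:stevekrouse";

// Twitter search operators: OR-join the keywords, then append the excludes.
const query = keywords.join(" OR ") + " " + excludes;
// → '"val town" OR valtown -is:retweet -from:stevekrouse'
```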
freshBeigeScorpion
@jeffreyyoung
Shows a preview and then says meow for 5 seconds: https://poe.com/preview-then-slow. Things to note: don't put underscores in the name; it stops working.
HTTP
* Returns a response to the user's query
async function getResponse(req: Query, send: SendEventFn) {
send("meta", { content_type: "text/markdown" });
* Returns your bot's settings
async function getBotSettings(): Promise<BotSettings> {
return {
) => void;
function encodeEvent(event: string, data: any = {}) {
return new TextEncoder().encode(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`);
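`encodeEvent` above frames a server-sent event as UTF-8 bytes. A quick round-trip check of the wire format it produces:

```typescript
// Same helper as in the val: encodes one SSE frame as bytes.
function encodeEvent(event: string, data: any = {}) {
  return new TextEncoder().encode(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`);
}

// Decoding shows standard SSE framing: event line, data line, blank line.
const frame = new TextDecoder().decode(
  encodeEvent("meta", { content_type: "text/markdown" }),
);
// → 'event: meta\ndata: {"content_type":"text/markdown"}\n\n'
```

The trailing blank line is what terminates each event in the SSE stream.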
export default async function(req: Request): Promise<Response> {
const reqBody = await req.json()
runGlif
@jamiedubs
glif API mini-SDK: make generative magical AI things. Set your GLIF_API_TOKEN in your own ENV, or you'll hit rate limits: https://glif.app/settings/api-tokens. Call from your val like:

import { runGlif } from "https://esm.town/v/jamiedubs/runGlif";
const json = await runGlif({ id: "cluu91eda000cv8jd675qsrby", inputs: ["hello", "world"] });
console.log(json);
Script
output?: string;
outputFull?: object;
export async function runGlif({
id,
inputs,
getMemoryBuilder
@webup
An interactive, runnable TypeScript val by webup
Script
import { getModelBuilder } from "https://esm.town/v/webup/getModelBuilder";
export async function getMemoryBuilder(spec: {
type: "buffer" | "summary" | "vector";
provider?: "openai";
} = { type: "buffer" }, options = {}) {
return new BufferMemory();
matches({ type: "summary", provider: "openai" }),
async () => {
return new ConversationSummaryMemory({ llm, ...options });
matches({ type: "vector", provider: "openai" }),
async () => {
type: "embedding",
provider: "openai",
const model = await builder();
getSampleDocuments
@webup
An interactive, runnable TypeScript val by webup
Script
export async function getSampleDocuments() {
  const { Document } = await import("npm:langchain/document");
  const docs = [
cerebras_coder
@jft51
This is an AI code assistant powered by Cerebras, running llama3.3-70b. Inspired by Hassan's Llama Coder.

Setup
Sign up for Cerebras
Get a Cerebras API key
Save it in a Val Town environment variable called CEREBRAS_API_KEY

Todos
I'm looking for collaborators to help. Fork & send me PRs!
[ ] Experiment with a two-prompt chain (started here)
HTTP
title: "Todo App",
code:
let todos = JSON.parse(localStorage.getItem('todos')) || [];\n\n // Function to render the todo list\n functio
performance: {
tokensPerSecond: 2298.56,
title: "Markdown Editor",
code:
l(defaultMarkdown);\n preview.innerHTML = defaultHtml;\n\n // Function to convert Markdown to HTML\n fun
performance: {
tokensPerSecond: 4092.96,
valreadmegenerator
@prashamtrivedi
Val Town README Generator

Welcome to the Val Town README Generator! 🚀 Get ready to effortlessly generate elegant and informative README files for your Val projects. This tool leverages AI to create well-structured and insightful documentation for your Val code.

Features
Theme Awareness: 🌑🌞 Automatically adjusts to your system's dark/light mode preference.
Responsive Layout: 📱🖥️ Built with a beautiful and modern UI using React and TailwindCSS, ensuring a pleasant experience across devices.
Live Preview and Copy: 📋 Instantly view the generated README and easily copy it with just a click.
Direct Readme Link: 🔗 Shareable link for each generated README for quick access.

How It Works
Follow these simple steps to generate a README:
Enter Details: Provide your Val username and the Val name you want to generate a README for.
Generate: Click on 'Generate README' and watch as the application swiftly fetches and processes the Val code to create a Markdown README.
Copy and Share: Use the 'Copy' button to copy the README for your use, or share the link directly with peers.

Technologies Used
React: For building the interactive user interface.
React DOM: To render components efficiently.
TailwindCSS: For styling the application.
Deno: The server-side environment.
ValTown SDK: Integrated to fetch Val details.
OpenAI GPT-4: To generate natural language README content.
JavaScript Modules (ESM): For seamless module imports.

Getting Started
This project is actively deployed and accessible via the browser. Simply navigate to the running instance, input your details, and let the tool handle the rest. Enjoy generating READMEs without the need for local setup.

Contributing
While the project is not currently set up for local development, your interest and feedback are highly valued. Feel free to explore the code and suggest enhancements or improvements via pull requests or through the issue tracker.

Contributions
If you've made a substantial contribution to the project, feel free to add your name here! For any questions or guidance, join the discussions in our project repository.

License
MIT License. Feel free to use, modify, and distribute this application as per the terms of the license.

We hope you find the Val Town README Generator a valuable tool in your development workflow. Happy Documenting! 🎉
HTTP
- **ValTown SDK:** Integrated to fetch Val details.
- **OpenAI GPT-4:** To generate natural language README content.
- **JavaScript Modules (ESM):** For seamless module imports.
import React, { useRef, useState, useEffect } from "https://esm.sh/react@18.2.0";
function App() {
const [username, setUsername] = useState("");
</div>
function client() {
createRoot(document.getElementById("root")).render(<App />);
if (typeof document !== "undefined") { client(); }
export default async function server(request: Request): Promise<Response> {
const url = new URL(request.url);
valName = valNamePart;
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const valTownClient = new ValTown({
try {
const completion = await openai.chat.completions.create({
model: "gpt-4o",
} catch (error) {
console.error('Error in server function:', error);
return new Response(`Error: ${error.message}`, { status: 500 });