valle_tmp_202239440460780356420276977660784
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP
import OpenAI from "npm:openai";
// unless strictly necessary (for example, use APIs that don't require a key); prefer internal
// functions where possible. Unless specified, don't add error handling.
// The val should create an "export default async function main() {" which
// is the main function that gets executed, without any arguments. Don't return a Response object,
function write(text) {
const callback = function (mutationsList, observer) {
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
exampleAuthApi
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import { verfiedCalls } from "https://esm.town/v/stevekrouse/verfiedCalls";
import { verifyAPIAuth } from "https://esm.town/v/stevekrouse/verifyAPIAuth";
export async function exampleAuthApi(arg1, arg2, auth) {
let { handle } = await verifyAPIAuth(auth);
if (handle) {
valtownsemanticsearch
@janpaul123
VAL VIBES: Val Town Semantic Search

This val hosts an [HTTP server](https://janpaul123-valtownsemanticsearch.web.val.run/) that lets you search all vals based on vibes. If you search for "discord bot" it shows all vals that have "discord bot" vibes. It does this by comparing [embeddings from OpenAI](https://platform.openai.com/docs/guides/embeddings) generated for the code of all public vals, to an embedding of your search query.

This is an experiment to see if and how we want to incorporate semantic search in the actual Val Town search page. I implemented three backends, which you can switch between in the search UI. Check out these vals for details on their implementation.

- Neon: storing and searching embeddings using the pg_vector extension in Neon's Postgres database. Searching: janpaul123/semanticSearchNeon. Indexing: janpaul123/indexValsNeon.
- Blobs: storing embeddings in Val Town's standard blob storage, and iterating through all of them to compute distance. Slow and terrible, but it works! Searching: janpaul123/semanticSearchBlobs. Indexing: janpaul123/indexValsBlobs.
- Turso: storing and searching using the sqlite-vss extension. Abandoned because of a bug in Turso's implementation. Searching: janpaul123/semanticSearchTurso. Indexing: janpaul123/indexValsTurso.

All implementations use the database of public vals, made by Achille Lacoin, which is refreshed every hour. The Neon implementation updates every 10 minutes; the other ones are not updated. I also forked Achille's search UI for this val.

Please share any feedback and suggestions, and feel free to fork our vals to improve them. This is a playground for semantic search before we implement it in the product for real!
HTTP
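The comparison the val describes, ranking vals by closeness of their code embeddings to the query embedding, boils down to cosine similarity between vectors. The sketch below is a hypothetical illustration of that core step, not code from the val; `rank` and its shapes are assumed names.

```typescript
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank vals by similarity to the query embedding, most similar first.
// This is what the "Blobs" backend does by brute force over every embedding.
function rank(
  queryEmbedding: number[],
  vals: { name: string; embedding: number[] }[],
) {
  return vals
    .map((v) => ({ name: v.name, score: cosineSimilarity(queryEmbedding, v.embedding) }))
    .sort((x, y) => y.score - x.score);
}
```

The pg_vector and sqlite-vss backends push this same distance computation into the database index instead of iterating in application code.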
const githubQuery = (query: string) => encodeURIComponent(`${query} repo:pomdtr/val-town-mirror path:vals/`);
async function handler(req: Request) {
const url = new URL(req.url);
"Content-Type": "application/opensearchdescription+xml",
function form(query, type) {
return (
nodejs
@easrng
An interactive, runnable TypeScript val by easrng
Script
import { runVal } from "https://esm.town/v/std/runVal";
export async function nodejs() {
const token =
(await runVal("easrng.nodejsInternals", {
Priyam28Gpt
@priyam
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
import React, { useEffect, useRef, useState } from "https://esm.sh/react@18.2.0";
function ChatGPTClone() {
const [messages, setMessages] = useState([
</div>
function client() {
createRoot(document.getElementById("root")).render(<ChatGPTClone />);
if (typeof document !== "undefined") { client(); }
export default async function server(request: Request): Promise<Response> {
// Handle chat completion route
try {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const { messages } = await request.json();
const completion = await openai.chat.completions.create({
messages: messages,
} catch (error) {
console.error("OpenAI Error:", error);
return new Response(
val_5OBYpMAcsJ
@dhvanil
An interactive, runnable TypeScript val by dhvanil
HTTP
export async function val_5OBYpMAcsJ(req) {
try {
const body = await req.text();
// Create a function from the provided code and execute it
const userFunction = async () => {
const findPrimes = (n) => {
// Execute and capture the result
const result = await userFunction();
// Handle different types of results
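The truncated snippet defines a `findPrimes` helper inside the user-supplied code it executes; its body is not shown. A plausible standalone completion, purely as an assumption about what such a helper does, is trial division:

```typescript
// A plausible completion of the truncated findPrimes helper: trial division
// up to sqrt(candidate), returning all primes less than or equal to n.
function findPrimes(n: number): number[] {
  const primes: number[] = [];
  for (let candidate = 2; candidate <= n; candidate++) {
    let isPrime = true;
    for (let d = 2; d * d <= candidate; d++) {
      if (candidate % d === 0) {
        isPrime = false;
        break;
      }
    }
    if (isPrime) primes.push(candidate);
  }
  return primes;
}
```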
cerebras_coder
@popkorn123
This is an AI code assistant powered by Cerebras, running llama3.3-70b. Inspired by Hassan's Llama Coder.

Setup:
1. Sign up for Cerebras
2. Get a Cerebras API Key
3. Save it in a Val Town environment variable called CEREBRAS_API_KEY

Todos: I'm looking for collaborators to help. Fork & send me PRs!
- [ ] Experiment with two prompt chain (started here)
HTTP
CEREBRAS_API_KEY: Deno.env.get("CEREBRAS_API_KEY"), // read from the environment; never hardcode the key
// Generate code function
async function generateCode(prompt: string, currentCode: string) {
const client = new Cerebras({ apiKey: CONFIG.CEREBRAS_API_KEY });
// Rest of the function remains the same
// Default export for HTTP val
export default async function server(request: Request): Promise<Response> {
try {
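The setup instructions say to store the key in a Val Town environment variable rather than in the source. A minimal sketch of that lookup, with the environment accessor injected so it stays testable (`resolveCerebrasKey` is a hypothetical helper; in a val you would pass `(name) => Deno.env.get(name)`):

```typescript
// Resolve the API key from an environment lookup instead of hardcoding it.
function resolveCerebrasKey(env: (name: string) => string | undefined): string {
  const key = env("CEREBRAS_API_KEY");
  if (!key) {
    throw new Error("Set the CEREBRAS_API_KEY environment variable");
  }
  return key;
}
```

Failing fast when the variable is missing gives a clearer error than letting the Cerebras client fail later with an opaque 401.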
valle_tmp_1369396001916440808441248753827946
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP
myApi
@investor
An interactive, runnable TypeScript val by investor
Script
export function myApi(name) {
return "hi " + name;
testPostCall
@mehmet
An interactive, runnable TypeScript val by mehmet
Script
import { fetchJSON } from "https://esm.town/v/stevekrouse/fetchJSON?v=41";
export async function testPostCall() {
const result = await fetchJSON(
"https://api.val.town/express/@stevekrouse.postWebhook1",
catFact
@yuval_dikerman
An interactive, runnable TypeScript val by yuval_dikerman
HTTP
"Rewrite this fact about cats as if it was written for 3 year old:\n\n" +
fact;
const story = await fetch("https://api.openai.com/v1/chat/completions", {
method: "POST",
body: JSON.stringify({
temperature: 0.7,
headers: {
"Authorization": `Bearer ${process.env.OPENAI}`,
"Content-Type": "application/json",
}).then(async (response) => {
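The catFact snippet posts its prompt to OpenAI's chat completions endpoint via plain `fetch`. A hedged sketch of assembling that request is below; the model name is an assumption (the snippet only shows the prompt, temperature, and headers), and the key is passed in as a parameter where the val reads `process.env.OPENAI`:

```typescript
// Assemble the chat completions request the catFact snippet makes via fetch.
function buildStoryRequest(fact: string, apiKey: string) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo", // assumption: the model is not shown in the snippet
        temperature: 0.7,
        messages: [{
          role: "user",
          content: "Rewrite this fact about cats as if it was written for 3 year old:\n\n" + fact,
        }],
      }),
    },
  };
}
```

Usage would be `fetch(req.url, req.init)` followed by reading `choices[0].message.content` from the JSON response.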
valEval
@healeycodes
An interactive, runnable TypeScript val by healeycodes
Script
import { fetch } from "https://esm.town/v/std/fetch";
export const valEval = async function (
expr: string | TemplateStringsArray
): Promise<[value: any, error: undefined | string]> {
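valEval's signature promises a `[value, error]` tuple so callers never have to wrap the call in try/catch. A local sketch of that calling convention, with `eval` standing in purely to illustrate the return shape (the real val evaluates the expression remotely via `fetch`):

```typescript
// Local sketch of the [value, error] tuple convention valEval's signature implies.
async function valEvalLocal(
  expr: string,
): Promise<[value: any, error: undefined | string]> {
  try {
    return [eval(expr), undefined];
  } catch (e) {
    return [undefined, String(e)];
  }
}
```

Callers destructure the tuple and branch on the error slot, Go-style, instead of catching exceptions.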
inventory
@toowired
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
{ name: "Netlify", icon: "ποΈ", explanation: "An all-in-one platform for automating modern web projects", complexity: 1 },
{ name: "Vercel", icon: "π", explanation: "A cloud platform for static sites and Serverless Functions", complexity: 2 },
function client() {
const root = document.getElementById("root");
const { useState, useEffect, useRef } = React;
function App() {
const [step, setStep] = useState(0);
client();
async function server(request: Request): Promise<Response> {
const html = `
valle_tmp_25273384802368385202566002443081
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP
falDemoApp
@spigooli
@jsxImportSource https://esm.sh/react
HTTP
import { falProxyRequest } from "https://esm.town/v/stevekrouse/falProxyRequest";
function App() {
const [prompt, setPrompt] = useState("");
</div>
function client() {
createRoot(document.getElementById("root")).render(<App />);
if (typeof document !== "undefined") { client(); }
export default async function server(req: Request): Promise<Response> {
const url = new URL(req.url);