Search

Results include substring matches and semantically similar vals.
semanticSearchNeon
@janpaul123
Part of Val Town Semantic Search. Uses Neon to search embeddings of all vals, using the pg_vector extension: it calls OpenAI to generate an embedding for the search query, then queries the `vals_embeddings` table in Neon using the cosine similarity operator. The `vals_embeddings` table gets refreshed every 10 minutes by janpaul123/indexValsNeon.
Script
Uses [Neon](https://neon.tech/) to search embeddings of all vals, using the [pg_vector](https://neon.tech/docs/extensions/pgvector) extension:
- Call OpenAI to generate an embedding for the search query.
- Query the `vals_embeddings` table in Neon using the cosine similarity operator.
import { Client } from "npm:@neondatabase/serverless"; // assumed import: the preview omits where `Client` comes from
import { blob } from "https://esm.town/v/std/blob";
import OpenAI from "npm:openai";
const dimensions = 1536;
export default async function semanticSearchPublicVals(query) {
  const client = new Client(Deno.env.get("NEON_URL_VALSEMBEDDINGS"));
  await client.connect();
  const openai = new OpenAI();
  const queryEmbedding = (await openai.embeddings.create({
    model: "text-embedding-3-small",
    input: query, // assumed completion; the preview cuts off inside this call
    dimensions,
  })).data[0].embedding;
  // ...preview truncated: the val goes on to run the cosine-similarity query against Neon
}
easyAQI_cached
@stevekrouse
easyAQI: get the Air Quality Index (AQI) for a location via open data sources. It's "easy" because it strings together multiple lower-level APIs to give you a simple interface for AQI. It accepts a location in basically any string format (e.g. "downtown manhattan"), uses Nominatim to turn that into longitude and latitude, finds the closest sensor to you on OpenAQ, pulls the readings from OpenAQ, calculates the AQI via EPA's NowCast algorithm, and uses EPA's ranking to classify the severity of the score (e.g. "Unhealthy for Sensitive Groups"). Example usage: `@stevekrouse.easyAQI({ location: "brooklyn navy yard" })` returns `{ "aqi": 23.6, "severity": "Good" }`. Forkable example: val.town/v/stevekrouse.easyAQIExample. Also useful for getting alerts when the AQI is unhealthy near you: https://www.val.town/v/stevekrouse.aqi
Script
import { blob } from "https://esm.town/v/std/blob"; // assumed import: the preview omits it
import { openAqNowcastAQI } from "https://esm.town/v/stevekrouse/openAqNowcastAQI";
const cacheKey = location => "easyAQI_locationID_cache_" + encodeURIComponent(location);
export async function easyAQI({ location }: {
  location: string;
}) {
  let openAQLocation = await blob.getJSON(cacheKey(location));
  // ...preview truncated: geocoding, sensor lookup, and the NowCast calculation follow
}
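The first step in the pipeline above, turning a free-form location string into coordinates with Nominatim, looks roughly like this. This is a sketch, not the val's actual code; the User-Agent value is an assumption (Nominatim's usage policy asks clients to identify themselves):

// Geocode a free-form location string via Nominatim (OpenStreetMap).
async function geocode(location: string) {
  const url = `https://nominatim.openstreetmap.org/search?q=${encodeURIComponent(location)}&format=json&limit=1`;
  const res = await fetch(url, {
    headers: { "User-Agent": "easyAQI-example" },
  });
  const [place] = await res.json();
  if (!place) throw new Error(`No match for "${location}"`);
  return { lat: Number(place.lat), lon: Number(place.lon) };
}

console.log(await geocode("downtown manhattan"));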
tenseRoseTiglon
@MichaelNollox
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile, make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`. If you want to use OpenAI models you need to set the `OPENAI_API_KEY` env var; if you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` env var. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-variables).
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environment-variables).
import { extractValInfo } from "https://esm.town/v/pomdtr/extractValInfo?v=29";
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    // ...preview truncated: VALLE assembles its system and user prompts (and model choice) here
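The generation step itself comes down to reading the model's reply off the completion object; a minimal hedged sketch with the same std/openai wrapper (the prompt and model are illustrative, not VALLE's actual ones):

import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "system", content: "You write Val Town vals." },
    { role: "user", content: "An HTTP val that returns the text 'hello'" },
  ],
  model: "gpt-4o-mini", // illustrative model choice
});
// The generated code arrives as plain message content.
console.log(completion.choices[0].message.content);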
gpt4
@rlimit
OpenAI text completion. https://platform.openai.com/docs/api-reference/completions. val.town and rlimit.com have generously provided a free daily quota. Until the quota is met, no need to provide an API key.
Script
import { parentReference } from "https://esm.town/v/stevekrouse/parentReference?v=3";
import { runVal } from "https://esm.town/v/std/runVal";
/**
 * OpenAI text completion. https://platform.openai.com/docs/api-reference/completions
 * val.town and rlimit.com have generously provided a free daily quota. Until the quota is met, no need to provide an API key.
 */
export const gpt4 = async (prompt: string, maxTokens: number = 1000) => {
  // ...preview truncated
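A hedged usage sketch: since the val is shared, it can be invoked remotely through std/runVal, with arguments mirroring gpt4(prompt, maxTokens) above (the val name path is an assumption based on the author's handle):

import { runVal } from "https://esm.town/v/std/runVal";

// Call @rlimit's gpt4 val remotely; args mirror gpt4(prompt, maxTokens).
const reply = await runVal("rlimit.gpt4", "Write a haiku about vals", 100);
console.log(reply);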
questionsWithGuidelinesChain
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
export const questionsWithGuidelinesChain = (async () => {
  const { ChatOpenAI } = await import(
    "https://esm.sh/langchain@0.0.150/chat_models/openai"
  );
  const { LLMChain } = await import("https://esm.sh/langchain@0.0.150/chains");
  // ...preview truncated: prompt templates and output parsers are defined here
  const questionChain = questionPrompt
    .pipe(new ChatOpenAI({
      openAIApiKey: process.env.OPENAI_API_KEY,
    }))
    .pipe(new StringOutputParser());
  // ...preview truncated: a second prompt is piped the same way...
  //   .pipe(new ChatOpenAI({ openAIApiKey: process.env.OPENAI_API_KEY }))
  //   .pipe(new StringOutputParser());
  // RunnableSequence.from() is equivalent to `.pipe().pipe()`
  // but will coerce objects (and functions) into runnables
  const questionStyleChain = RunnableSequence.from([
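The comment in the preview is the key idea: `RunnableSequence.from([a, b, c])` builds the same pipeline as `a.pipe(b).pipe(c)`, coercing plain objects and functions into runnables along the way. A minimal sketch against the same langchain@0.0.150 ESM build (prompt text is illustrative):

const { PromptTemplate } = await import("https://esm.sh/langchain@0.0.150/prompts");
const { RunnableSequence } = await import("https://esm.sh/langchain@0.0.150/schema/runnable");
const { StringOutputParser } = await import("https://esm.sh/langchain@0.0.150/schema/output_parser");
const { ChatOpenAI } = await import("https://esm.sh/langchain@0.0.150/chat_models/openai");

// Equivalent to prompt.pipe(model).pipe(parser).
const chain = RunnableSequence.from([
  PromptTemplate.fromTemplate("Answer briefly: {question}"),
  new ChatOpenAI({ openAIApiKey: process.env.OPENAI_API_KEY }),
  new StringOutputParser(),
]);
console.log(await chain.invoke({ question: "What is a val?" }));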
OpenAIUsage
@std
OpenAI Proxy Metrics. We write OpenAI usage data to an `openai_usage` sqlite table. This script val is imported into the OpenAI proxy. Use this val to run administrative scripts: https://www.val.town/v/std/OpenAIUsageScript
Script
# OpenAI Proxy Metrics

We write OpenAI usage data to an `openai_usage` sqlite table. This script val is imported into the openai proxy. Use this val to run administrative scripts: https://www.val.town/v/std/OpenAIUsageScript

import { sqlite } from "https://esm.town/v/std/sqlite"; // assumed import: the preview omits it

type UsageRow = { // field types inferred from writeUsage below; the preview only shows `model: string`
  userId: number;
  handle: string;
  tier: string;
  tokens: number;
  model: string;
};

export class OpenAIUsage {
  constructor() {}
  async migrate() {
    await sqlite.batch([`CREATE TABLE IF NOT EXISTS openai_usage (
      id INTEGER PRIMARY KEY
      /* ...columns truncated in preview... */)`]);
  }
  async drop() {
    await sqlite.batch([`DROP TABLE IF EXISTS openai_usage`]);
  }
  async writeUsage(ur: UsageRow) {
    sqlite.execute({
      sql: "INSERT INTO openai_usage (user_id, handle, tier, tokens, model) VALUES (?, ?, ?, ?, ?)",
      args: [ur.userId, ur.handle, ur.tier, ur.tokens, ur.model],
    });
  }
  // ...preview truncated: it also shows a usage query along the lines of
  //   SELECT count(*) FROM openai_usage, params WHERE (...)
}
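A hedged usage sketch of the class above; presumably the proxy does something along these lines when a request completes (row values are illustrative):

const usage = new OpenAIUsage();
await usage.migrate(); // create the table on first run
await usage.writeUsage({
  userId: 1,
  handle: "stevekrouse",
  tier: "pro",
  tokens: 420,
  model: "gpt-4o-mini",
});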
repp
@maxm
repp
HTTP
// The script we run within the worker. Don't reference anything outside of this
// function as it won't be available within the worker. We could even host this
// in a separate val and pass it in.
const workerScript = () => {
  // ...preview truncated: a Proxy wraps `console` so that calls are both
  // recorded and forwarded...
  const real = target[key];
  if (typeof real === "function" && typeof key === "string") {
    const fn = function(...args: any[]) {
      logs.push({
        // ...preview truncated
      });
    };
    return fn;
  }
  // ...
  async function evaluate(url) {
    try {
      // ...preview truncated; example REPL inputs from the comments:
      // evaluate("let ten = 10");
      // evaluate("function cube(x) { return x ** 3 }");
      // evaluate("ten + cube(3)");
generateValCodeAPI
@yawnxyz
An interactive, runnable TypeScript val by yawnxyz
Script
// `generateValCode` is presumably imported from another val; the preview omits the import.
export let generateValCodeAPI = (description: string) =>
  generateValCode(
    process.env.OPENAI_API_KEY,
    description,
  );
updateValByName
@nbbaier
An interactive, runnable TypeScript val by nbbaier
Script
interface UpdateValArgs { // name taken from the signature below; `token` type assumed
  token: string;
  code: string;
  name?: string;
}
export function updateValByName({ token, code, name }: UpdateValArgs): Promise<any> {
  const body: Record<string, unknown> = {
    token,
    // ...preview truncated
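A hedged usage sketch (hypothetical values; the token would be a Val Town API token stored in an environment variable):

await updateValByName({
  token: Deno.env.get("valtown")!, // hypothetical env var name
  name: "myVal",
  code: `export const myVal = () => "hello";`,
});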
poker_agent_all_in
@saolsen
An interactive, runnable TypeScript val by saolsen
HTTP
import {
  PokerAgentResponse, // assumed import; only PokerView survives in the preview (JsonObject likely has its own import)
  PokerView,
} from "https://esm.town/v/saolsen/gameplay_poker";
function agent(view: PokerView, _agent_data?: JsonObject): PokerAgentResponse {
  const round = view.rounds[view.round];
  const player = round.active_player;
  // ...preview truncated: the agent pushes all in from here
askLexi
@thomasatflexos
An interactive, runnable TypeScript val by thomasatflexos
Script
const { SupabaseVectorStore } = await import("npm:langchain/vectorstores");
const { ChatOpenAI } = await import("npm:langchain/chat_models");
const { OpenAIEmbeddings } = await import("npm:langchain/embeddings");
const { createClient } = await import(
  "npm:@supabase/supabase-js" // assumed specifier; the preview cuts this import off
);
let streamedResponse = "";
const chat = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  openAIApiKey: process.env.OPEN_API_KEY,
  streaming: true,
});
// ...preview truncated: a Supabase client is created with createClient(...)
const vectorStore = await SupabaseVectorStore.fromExistingIndex(
  new OpenAIEmbeddings({
    openAIApiKey: process.env.OPEN_API_KEY,
  }),
  {
    client,
    // ...preview truncated
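Once the store is connected, retrieval is a similarity search over the embedded documents; a minimal hedged sketch continuing from `vectorStore` above (query text and k are illustrative):

// Fetch the 4 chunks most similar to the question.
const docs = await vectorStore.similaritySearch("What is this document about?", 4);
for (const doc of docs) {
  console.log(doc.pageContent);
}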
sendSassyEmail
@transcendr
// Configuration object for customizing the sassy message generation
Cron
import { email } from "https://esm.town/v/std/email";
import { OpenAI } from "https://esm.town/v/std/openai";
// Configuration object for customizing the sassy message generation
const CONFIG = { // declaration assumed; the preview references CONFIG below but cuts off its opening
  subject: "Daily Sass Delivery 🔥",
  // Prompt configuration for OpenAI
  prompts: {
    user: "Come up with a smart-alecky way to tell someone off",
    // ...preview truncated: a `styles` array is also referenced below
  },
};
export default async function() {
  const openai = new OpenAI();
  // Randomly select a prompt style
  const styleIndex = Math.floor(Math.random() * CONFIG.prompts.styles.length);
  const completion = await openai.chat.completions.create({
    messages: [
      // ...preview truncated
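The cron's last step would be mailing the generated text with std/email, which sends to the val's owner without needing a recipient address; a hedged sketch reusing the subject from the config above:

import { email } from "https://esm.town/v/std/email";

await email({
  subject: "Daily Sass Delivery 🔥",
  text: "(the completion's message content would go here)",
});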
valle_tmp_140068690648343496787358158586876
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP
import OpenAI from "npm:openai";
// Excerpts of the prompt text embedded in this val, as surfaced by the search preview:
//   ...unless strictly necessary, for example use APIs that don't require a key, prefer internal
//   functions where possible. Unless specified, don't add error handling...
//   The val should create a "export default async function main() {" which is the main function
//   that gets executed, without any arguments. Don't return a Response object...
function write(text: string) {
  // ...preview truncated
}
function openTab(tab) {
  // ...preview truncated
}
const callback = function (mutationsList, observer) {
  // ...preview truncated
};
const openai = new OpenAI();
runValAPI
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
export function runValAPI(name, ...args) {
  return fetch(`https://api.val.town/v1/run/${name.replace("@", "")}`, {
    method: "POST",
    body: JSON.stringify({ args }), // assumed completion; the preview cuts off after `method`
  });
}
VALLE
@ubixsnow
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile, make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`. If you want to use OpenAI models you need to set the `OPENAI_API_KEY` env var; if you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` env var. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-variables).
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environment-variables).
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    // ...preview truncated: VALLE assembles its system and user prompts here