Search
indexValsTurso
@janpaul123
Part of Val Town Semantic Search. Generates OpenAI embeddings for all public vals and stores them in Turso, using the sqlite-vss extension.
1. Create the vals_embeddings and vss_vals_embeddings tables in Turso if they don't already exist.
2. Get all val names from the database of public vals, made by Achille Lacoin.
3. Get all val names from the vals_embeddings table and compute the difference (which ones are missing).
4. Iterate through all missing vals, get their code, get embeddings from OpenAI, and store the result in Turso.
5. When finished, update the vss_vals_embeddings table so we can efficiently query the embeddings with the sqlite-vss extension.
This is blocked by a bug in Turso that doesn't allow VSS indexes past a certain size. Can now be searched using janpaul123/semanticSearchTurso.
Cron
import { decode as base64Decode, encode as base64Encode } from "https://deno.land/std@0.166.0/encoding/base64.ts";
import { createClient } from "https://esm.sh/@libsql/client@0.6.0/web";
import { sqlToJSON } from "https://esm.town/v/nbbaier/sqliteExportHelpers?v=22";
import { db as allValsDb } from "https://esm.town/v/sqlite/db?v=9";
import { truncateMessage } from "npm:openai-tokens";
export default async function(interval: Interval) {
const sqlite = createClient({
url: "libsql://valsembeddings-jpvaltown.turso.io",
authToken: Deno.env.get("TURSO_AUTH_TOKEN_VALSEMBEDDINGS"),
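To make the description concrete, here is a rough sketch of the missing-vals loop it outlines, continuing from the excerpt above (sqlite and allValsDb come from the imports shown there). The embedding model, table columns, and the query against the public-vals database are assumptions, not taken from the actual val:
import OpenAI from "npm:openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// 1. Val names that already have embeddings.
const existing = new Set(
  (await sqlite.execute("SELECT name FROM vals_embeddings")).rows.map((r) => String(r[0])),
);

// 2. Public vals that are missing (table/column names assumed).
const allVals = await allValsDb.execute("SELECT name, code FROM vals");
const missing = allVals.rows.filter((r) => !existing.has(String(r[0])));

// 3. Embed each missing val's code and store it in Turso.
for (const row of missing) {
  const [name, code] = [String(row[0]), String(row[1])];
  const embedding = await openai.embeddings.create({
    model: "text-embedding-ada-002", // model name assumed
    input: truncateMessage(code, "text-embedding-ada-002"),
  });
  await sqlite.execute({
    sql: "INSERT INTO vals_embeddings (name, embedding) VALUES (?, ?)",
    args: [name, JSON.stringify(embedding.data[0].embedding)],
  });
}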
getSpotifyTrackUrl
@dthyresson
getSpotifyTrackUrl
Get a Spotify track URL using the Spotify Web API, given an artist and a song title. Track info is cached by the query and also by the Spotify track ID, so your popular queries won't have to fetch from Spotify over and over.

Examples
import { getSpotifyTrackUrl } from "https://esm.town/v/dthyresson/getSpotifyTrackUrl";
const reni = await getSpotifyTrackUrl("Stone Roses", "Fools Gold");
const ian = await getSpotifyTrackUrl("Joy Division", "Love Will Tear Us Apart");
const kim = await getSpotifyTrackUrl("Pixies", "Velouria");
console.log(reni)
console.log(ian)
console.log(kim)

Info
Uses getSpotifyAccessToken, which requires you to set environment variables from your Spotify Developers account:
SPOTIFY_CLIENT_ID
SPOTIFY_CLIENT_SECRET
Your access token is cached by getSpotifyAccessToken to avoid fetching over and over.
Script
Uses `getSpotifyAccessToken` which requires you to set environment variables from your Spotify Developers account.
`SPOTIFY_CLIENT_ID`
`SPOTIFY_CLIENT_SECRET`
Your access token is cached by `getSpotifyAccessToken` to avoid fetching over and over.
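For reference, the client-credentials token fetch behind getSpotifyAccessToken looks roughly like the sketch below; the endpoint is Spotify's documented token URL, but the function shape and the absence of caching here are assumptions:
async function fetchSpotifyAccessToken(): Promise<string> {
  // Basic auth header built from the two environment variables above.
  const auth = btoa(
    `${Deno.env.get("SPOTIFY_CLIENT_ID")}:${Deno.env.get("SPOTIFY_CLIENT_SECRET")}`,
  );
  const res = await fetch("https://accounts.spotify.com/api/token", {
    method: "POST",
    headers: {
      Authorization: `Basic ${auth}`,
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: "grant_type=client_credentials",
  });
  const { access_token } = await res.json();
  return access_token;
}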
telegram
@stevekrouse
Message yourself on Telegram

This val lets you send yourself Telegram messages via ValTownBot. ValTownBot saves you from creating your own Telegram Bot. However, if I'm being honest, it's really simple and fun to make your own Telegram bot. (You just message BotFather.) I'd recommend most folks go that route so you have an unmediated connection to Telegram. However, if you want the simplest possible setup to just send yourself messages, read on...

Installation
It takes less than a minute to set up!
1. Start a conversation with ValTownBot
2. Copy the secret it gives you
3. Save it in your Val Town Environment Variables under telegram
4. Send a message!

Usage

telegramText
import { telegramText } from "https://esm.town/v/stevekrouse/telegram?v=14";
const statusResponse = await telegramText("Hello from Val.Town!!");
console.log(statusResponse);

telegramPhoto
import { telegramPhoto } from "https://esm.town/v/stevekrouse/telegram?v=14";
const statusResponse = await telegramPhoto({
photo: "https://placekitten.com/200/300",
});
console.log(statusResponse);

ValTownBot Commands
/roll - Roll your secret in case you accidentally leak it.
/webhook - Set a webhook to receive messages you send to @ValTownBot

Receiving Messages
If you send /webhook to @ValTownBot, it will let you specify a webhook URL. It will then forward on any messages (that aren't recognized @ValTownBot commands) to that webhook. It's particularly useful for creating personal chatbots, like my telegram <-> DallE bot.

How it works
1. Telegram has a lovely API. I created @ValTownBot via Bot Father.
2. I created a webhook and registered it with Telegram.
3. Whenever someone new messages @ValTownBot, I generate a secret and save it along with their Chat Id in @stevekrouse/telegramValTownBotSecrets (a private val), and message it back to them.
4. Now whenever you call this val, it calls telegramValTownAPI, which looks up your Chat Id via your secret and sends you a message.

Telegram Resources
- Val Town Telegram Echo Bot Guide
- Telegram <-> DallE Bot
- Bot Father - the father of all Telegram Bots
- Telegram Bot Tutorial

Credits
This val was originally made by pomdtr.

Todo
- [ ] Store user data in Val Town SQLite
- [ ] Parse user data on the API side using Zod
Script
- [ ] Store user data in Val Town SQLite
- [ ] Parse user data on the API side using Zod
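If you take the BotFather route recommended above, sending yourself a message is one call to Telegram's Bot API. A minimal sketch; the environment-variable names are assumptions:
const token = Deno.env.get("TELEGRAM_BOT_TOKEN"); // from @BotFather (name assumed)
const chatId = Deno.env.get("TELEGRAM_CHAT_ID"); // your own chat id (name assumed)

const res = await fetch(`https://api.telegram.org/bot${token}/sendMessage`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ chat_id: chatId, text: "Hello from Val Town!" }),
});
console.log(await res.json());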
testRunner
@karfau
Test runner to be able to run a number of tests (e.g. on a different val).
check the references for seeing how it is used.
It is extracted into a val to avoid having all that clutter in the same val as your tests.
- Each test is a named function (which can be async); the name is used as the name for the test.
- The input passed as the first argument is passed to each test: great for importing assertion methods, stubs, fixed values, ... everything that you do not mutate during a test.
- If a function is async (it returns a promise) there is a timeout of 2 seconds before the test is marked as failed.
- All tests are called in the declared order, but async tests run in parallel afterwards, so don't assume any order.
- If a test name starts with skip it is not executed.
- If a test fails it throws the output, so it appears in the red box below the val and the evaluation/run is marked red.
- If all tests pass it returns the output, so it appears in the grey box and the evaluation/run is marked green.
Note: If you are using the test runner to store the result in that val, as described above, it is considered a "JSON val" and has a run button, but it also means that another of your vals could update the val with just any other (JSON) state. Alternatively you can define a function val that calls the test runner and have a separate val to keep the current test results, but it means after updating the tests you need to first save that val and then reevaluate the val storing the test state.
Script
- if all tests pass it returns the output, so it appears in the grey box and the evaluation/run is marked green.
Note: If you are using the test runner to store the result in that val, as described above, it is considered a "JSON val" and has a run button, but it also means that another of your vals could update the val with just any other (JSON) state. Alternatively you can define a function val that calls the test runner and have a separate val to keep the current test results, but it means after updating the tests you need to first save that val and then reevaluate the val storing the test state.
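To illustrate the behavior described above, a hypothetical usage sketch; the import path and the exact call signature are assumptions, so check the referenced vals for the real API:
import { testRunner } from "https://esm.town/v/karfau/testRunner"; // path assumed
import { assertEquals } from "https://deno.land/std@0.224.0/assert/mod.ts";

// The first argument is handed to every test; the property names become the test names.
export const myTests = testRunner({ assertEquals }, {
  addsNumbers({ assertEquals }) {
    assertEquals(1 + 1, 2);
  },
  async resolvesQuickly({ assertEquals }) {
    // async tests run in parallel and fail after the 2 second timeout
    assertEquals(await Promise.resolve("ok"), "ok");
  },
  skipNotReadyYet() {
    // a name starting with "skip" is not executed
  },
});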
aoc_2023_12_dynamic
@robsimmons
An interactive, runnable TypeScript val by robsimmons
Script
import { Dusa } from "https://unpkg.com/dusa@0.0.10/lib/client.js";
const INPUT = `
???.### 1,1,3
nftMetadata
@jamiedubs
Use by copying the web API endpoint and appending "?contractAddress=...&tokenId=..." - here's an example: https://jamiedubs-nftmetadata.web.val.run/?contractAddress=0x3769c5700Da07Fe5b8eee86be97e061F961Ae340&tokenId=666
Uses Alchemy for indexed NFT data, via my other jamiedubs/alchemyClient val. Plus it's using my personal API key. Don't abuse this or I'll disable it! yeesh
HTTP
https://jamiedubs-nftmetadata.web.val.run/?contractAddress=0x3769c5700Da07Fe5b8eee86be97e061F961Ae340&tokenId=666
uses [Alchemy](https://docs.alchemy.com/reference/getnftmetadata) for indexed NFT data, via my other [jamiedubs/alchemyClient val](https://www.val.town/v/jamiedubs/alchemyClient)
plus it's using my personal API key. don't abuse this or I'll disable it! yeesh
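Calling the endpoint is a plain GET with the two query parameters from the example above:
const params = new URLSearchParams({
  contractAddress: "0x3769c5700Da07Fe5b8eee86be97e061F961Ae340",
  tokenId: "666",
});
const res = await fetch(`https://jamiedubs-nftmetadata.web.val.run/?${params}`);
console.log(await res.json()); // Alchemy-style NFT metadata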
valToGH_dev2
@nbbaier
valToGH
A utility function for programmatically updating a GitHub repository with code retrieved from a val.
NOTE: This function currently does not change the contents of a file if it is already present. I will, however, be adding that functionality.

Usage
import { valToGH } from 'https://esm.town/v/nbbaier/valToGH';
const repo = "yourGitHubUser/yourRepo";
const val = "valUser/valName"; // or vals = ["valUser/valName1","valUser/valName2", ...]
const ghToken = Deno.env.get("yourGitHubToken");
const result = await valToGH(repo, val, ghToken);
console.log(result);

Parameters
- repo: The GitHub repository in the format {user}/{repo}.
- val: A single val in the format {user}/{val} (or an array of them).
- ghToken: Your GitHub token for authentication (must have write permission to the target repo).

Options
- branch: Optional target branch. Default is main.
- prefix: Optional directory path prefix for each file.
- message: Optional commit message. Default is the current date and time in the format yyyy-MM-dd T HH:mm:ss UTC.
- ts: Optional flag to use a .ts extension for the committed file instead of the default .tsx.
Script
async function getLatestCommit(ghUser: string, ghRepo: string, branch: string, client: Octokit)
const { data: refData } = await client.rest.git.getRef({
client: Octokit,
} = await client.rest.git.createTree({
client: Octokit,
} = await client.rest.git.createCommit({
client: Octokit,
const result = await client.rest.git.updateRef({
const client = new Octokit({ auth: ghToken });
const commitSHA = await getLatestCommit(ghUser, ghRepo, branch, client);
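For orientation, the excerpted lines above follow GitHub's git-data flow: get the branch ref, create a tree with the new file, create a commit, then move the ref. A condensed sketch with Octokit, assuming a single file and simplified relative to the actual val:
import { Octokit } from "npm:octokit";

async function commitFile(repo: string, path: string, content: string, ghToken: string, branch = "main") {
  const [owner, name] = repo.split("/");
  const client = new Octokit({ auth: ghToken });

  // Latest commit on the branch and its tree.
  const { data: ref } = await client.rest.git.getRef({ owner, repo: name, ref: `heads/${branch}` });
  const { data: parent } = await client.rest.git.getCommit({ owner, repo: name, commit_sha: ref.object.sha });

  // New tree containing the file, based on the parent commit's tree.
  const { data: tree } = await client.rest.git.createTree({
    owner,
    repo: name,
    base_tree: parent.tree.sha,
    tree: [{ path, mode: "100644", type: "blob", content }],
  });

  // Commit pointing at the new tree, then fast-forward the branch ref.
  const { data: commit } = await client.rest.git.createCommit({
    owner,
    repo: name,
    message: new Date().toISOString(), // the real default message is a formatted timestamp
    tree: tree.sha,
    parents: [ref.object.sha],
  });
  return client.rest.git.updateRef({ owner, repo: name, ref: `heads/${branch}`, sha: commit.sha });
}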
aoc_2023_21_1
@robsimmons
An interactive, runnable TypeScript val by robsimmons
Script
import { Dusa } from "https://unpkg.com/dusa@0.0.11/lib/client.js";
const INPUT = `
.##..S####.
CaptchaGetBalance
@augustohp
Captcha Balance (Generic)
Generic method /getBalance implemented by multiple Captcha providers: CapMonster, Anti-Captcha, NextCaptcha.

Usage example:
import fetchBalanceFromCaptchaProvider from "https://esm.town/v/augustohp/CaptchaGetBalance";
const remainingBalance = await fetchBalanceFromCaptchaProvider(Deno.env.get("CAPMONSTER_KEY"), "capmonster");
Script
method: "POST",
body: JSON.stringify({
clientKey: token,
if (result.errorId != 0) {
throw new Error("Could not fetch balance.");
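The request is the same shape for every provider: POST your clientKey as JSON to the provider's /getBalance endpoint and check errorId in the response. A sketch; the endpoint URLs in the map are assumptions based on each provider's API host:
const endpoints: Record<string, string> = {
  capmonster: "https://api.capmonster.cloud/getBalance", // assumed
  anticaptcha: "https://api.anti-captcha.com/getBalance", // assumed
  nextcaptcha: "https://api.nextcaptcha.com/getBalance", // assumed
};

export default async function fetchBalanceFromCaptchaProvider(token: string, provider: string): Promise<number> {
  const response = await fetch(endpoints[provider], {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ clientKey: token }),
  });
  const result = await response.json();
  if (result.errorId != 0) {
    throw new Error("Could not fetch balance.");
  }
  return result.balance;
}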
masto_timeout_pr
@andreterron
// npm:masto doesn't stop cleanly, so we need to manually exit
Script
import { createRestAPIClient } from "npm:masto";
const masto = createRestAPIClient({
url: Deno.env.get("MASTO_INSTANCE_URL"),
accessToken: Deno.env.get("MASTO_ACCESS_TOKEN"),
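The workaround the comment refers to is simply calling Deno.exit once the API call has finished. A sketch, with the actual Mastodon call the val makes assumed:
import { createRestAPIClient } from "npm:masto";

const masto = createRestAPIClient({
  url: Deno.env.get("MASTO_INSTANCE_URL"),
  accessToken: Deno.env.get("MASTO_ACCESS_TOKEN"),
});

// Any call works here; verifyCredentials is just a placeholder.
const me = await masto.v1.accounts.verifyCredentials();
console.log(me.acct);

// npm:masto doesn't stop cleanly, so end the run explicitly.
Deno.exit(0);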
solutions_with_next
@robsimmons
An interactive, runnable TypeScript val by robsimmons
Script
import { Dusa } from "https://unpkg.com/dusa@0.1.6/lib/client.js";
const dusa = new Dusa(`name is { "one", "two" }.`);
const iterator = dusa.solve();
snakeGameReact
@mahmudul
An interactive, runnable TypeScript val by mahmudul
HTTP
cpp
#include <QApplication>
#include <QMainWindow>
#include <QLabel>
#include <QLineEdit>
#include <QPushButton>
#include <QGridLayout>
#include <QVBoxLayout>
#include <QHBoxLayout>
#include <QMessageBox>
perplexityAPI
@nbbaier
Perplexity API Wrapper
This val exports a function pplx that provides an interface to the Perplexity AI chat completions API. You'll need a Perplexity AI API key; see their documentation for how to get one. By default, the function will use PERPLEXITY_API_KEY in your Val Town env variables unless overridden by setting apiKey in the function.

pplx(options: PplxRequest & { apiKey?: string }): Promise<PplxResponse>
Generates a model's response for the given chat conversation. Required parameters in options are the following (for other parameters, see the Types section below):
- model (string): the name of the model that will complete your prompt. Possible values: pplx-7b-chat, pplx-70b-chat, pplx-7b-online, pplx-70b-online, llama-2-70b-chat, codellama-34b-instruct, mistral-7b-instruct, and mixtral-8x7b-instruct.
- messages (Message[]): A list of messages comprising the conversation so far. A message object must contain role (system, user, or assistant) and content (a string).
You can also specify an apiKey to override the default Deno.env.get("PERPLEXITY_API_KEY"). The function returns an object of type PplxResponse, see below.

Types

PplxRequest
Request object sent to Perplexity models.

| Property | Type | Description |
|---------------------|----------------------|-----------------------------------------------------------------|
| model | Model | The name of the model that will complete your prompt. Possible values: pplx-7b-chat, pplx-70b-chat, pplx-7b-online, pplx-70b-online, llama-2-70b-chat, codellama-34b-instruct, mistral-7b-instruct, and mixtral-8x7b-instruct. |
| messages | Message[] | A list of messages comprising the conversation so far. |
| max_tokens | number | (Optional) The maximum number of completion tokens returned by the API. The total number of tokens requested in max_tokens plus the number of prompt tokens sent in messages must not exceed the context window token limit of model requested. If left unspecified, then the model will generate tokens until either it reaches its stop token or the end of its context window. |
| temperature | number | (Optional) The amount of randomness in the response, valued between 0 inclusive and 2 exclusive. Higher values are more random, and lower values are more deterministic. You should either set temperature or top_p, but not both. |
| top_p | number | (Optional) The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass. You should either alter temperature or top_p, but not both. |
| top_k | number | (Optional) The number of tokens to keep for highest top-k filtering, specified as an integer between 0 and 2048 inclusive. If set to 0, top-k filtering is disabled. |
| stream | boolean | (Optional) Flag indicating whether to stream the response. |
| presence_penalty | number | (Optional) A value between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty. |
| frequency_penalty | number | (Optional) A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty. Incompatible with presence_penalty. |

PplxResponse
Response object for pplx models.

| Property | Type | Description |
|------------|-------------------------|--------------------------------------------------|
| id | string | The ID of the response. |
| model | Model | The model used for generating the response. |
| object | "chat.completion" | The type of object (always "chat.completion"). |
| created | number | The timestamp indicating when the response was created. |
| choices | CompletionChoices[] | An array of completion choices. |

Please refer to the code for more details and usage examples of these types.

Message
Represents a message in a conversation.

| Property | Type | Description |
|------------|-----------------------|--------------------------------------------------------|
| role | "system" \| "user" \| "assistant" | The role of the speaker in this turn of conversation. After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user. |
| content | string | The contents of the message in this turn of conversation. |

CompletionChoices
The list of completion choices the model generated for the input prompt.

| Property | Type | Description |
|-------------------|-----------------------------------|-----------------------------------------------------|
| index | number | The index of the choice. |
| finish_reason | "stop" \| "length" | The reason the model stopped generating tokens. Possible values are stop if the model hit a natural stopping point, or length if the maximum number of tokens specified in the request was reached. |
| message | Message | The message generated by the model. |
| delta | Message | The incrementally streamed next tokens. Only meaningful when stream = true. |
Script
| `temperature` | `number` | (Optional) The amount of randomness in the response, valued between 0 inclusive and 2 exclusive. Higher values are more random, and lower values are more deterministic. You should either set temperature or top_p, but not both. |
| `top_p` | `number` | (Optional) The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass. You should either alter temperature or top_p, but not both. |
| `top_k` | `number` | (Optional) The number of tokens to keep for highest top-k filtering, specified as an integer between 0 and 2048 inclusive. If set to 0, top-k filtering is disabled. |
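A minimal usage sketch of the wrapper described above; the import path is assumed from the val's name:
import { pplx } from "https://esm.town/v/nbbaier/perplexityAPI"; // path assumed

const res = await pplx({
  model: "mistral-7b-instruct",
  messages: [
    { role: "system", content: "Be concise." },
    { role: "user", content: "Explain nucleus sampling in one sentence." },
  ],
  max_tokens: 128,
  temperature: 0.7, // set temperature or top_p, not both
});

console.log(res.choices[0].message.content);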
minizod
@zackoverflow
minizod
Tiny Zod implementation.

Why
Zod is a dense library, and its module structure (or lack thereof) makes it difficult for bundlers to tree-shake unused modules. Additionally, using Zod in vals requires the await import syntax, which means having to wrap every schema in a Promise and awaiting it. This is extremely annoying.
So this is a lil-tiny-smol Zod meant for use in vals. A noteworthy use-case is using minizod to generate type-safe API calls to run vals outside of Val Town (such as client-side).

Type-safe API call example
We can use minizod to create type-safe HTTP handlers and generate the corresponding code to call them using Val Town's API, all in a type-safe manner.
First, create a schema for a function. The following example defines a schema for a function that takes a { name: string } parameter and returns a Promise<{ text: string }>.

const minizodExampleSchema = () =>
@zackoverflow.minizod().chain((z) =>
z
.func()
.args(z.tuple().item(z.object({ name: z.string() })))
.ret(z.promise().return(z.object({ text: z.string() })))
  );

With a function schema, you can then create an implementation and export it as a val:

const minizodExample = @me.minizodExampleSchema().impl(async (
  { name },
) => ({ text: `Hello, ${name}!` })).json()

In the above example, we call .impl() on a function schema and pass in a closure which implements the actual body of the function. Here, we simply return a greeting to the name passed in. We can call this val, and it will automatically parse and validate the args we give it:

// Errors at compile time and runtime for us!
const response = @me.minizodExample({ name: 420 })

Alternatively, we can use the .json() function to use it as a JSON HTTP handler:

const minizodExample = @me.minizodExampleSchema().impl(async (
  { name },
) => ({ text: `Hello, ${name}!` })).json() // <-- this part

We can now call minizodExample through Val Town's API. Since we defined a schema for it, we know exactly the types of its arguments and return, which means we can generate type-safe code to call the API:

let generatedApiCode =
@zackoverflow.minizodFunctionGenerateTypescript(
// put your username here
"zackoverflow",
"minizodExample",
// put your auth token here
"my auth token",
@me.minizodExampleSchema(),
  );

This generates the following code:

export const fetchMinizodExample = async (
...args: [{ name: string }]
): Promise<Awaited<Promise<{ text: string }>>> =>
await fetch(`https://api.val.town/v1/run/zackoverflow.minizodExample`, {
method: "POST",
body: JSON.stringify({
args: [...args],
}),
headers: {
Authorization: "Bearer ksafajslfkjal;kjf;laksjl;fajsdf",
},
}).then((res) => res.json());
Script
Additionally, using Zod in vals requires the `await import` syntax which means having to wrap every schema in a Promise and awaiting it. This is extremely annoying.
So this is a lil-tiny-smol Zod meant for use in vals. A noteworthy use-case is using `minizod` to generate type-safe API calls to run vals outside of Val Town (such as client-side).
## Type-safe API call example
supabaseSDKInsertIntoMyFirstTable
@vtdocs
Inserting into a table using Supabase's SDK. Part of the Supabase guide on docs.val.town .
Script
import process from "node:process";
export const supabaseSDKInsertIntoMyFirstTable = (async () => {
const { createClient } = await import(
"https://esm.sh/@supabase/supabase-js@2"
const supabase = createClient(
process.env.supabaseURL,
process.env.supabaseKey,
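Picking up where the excerpt leaves off, the insert itself looks roughly like this; the table and column names are assumptions from the guide's example:
const { data, error } = await supabase
  .from("my_first_table") // table name assumed
  .insert({ name: "some value" }) // column name assumed
  .select();

if (error) throw error;
console.log(data);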