Search

Results include substring matches and semantically similar vals.
indexValsTurso
@janpaul123
Part of Val Town Semantic Search. Generates OpenAI embeddings for all public vals, and stores them in Turso, using the sqlite-vss extension.
- Create the vals_embeddings and vss_vals_embeddings tables in Turso if they don't already exist.
- Get all val names from the database of public vals, made by Achille Lacoin.
- Get all val names from the vals_embeddings table and compute the difference (which ones are missing).
- Iterate through all missing vals, get their code, get embeddings from OpenAI, and store the result in Turso.
- When finished, update the vss_vals_embeddings table so we can efficiently query them with the sqlite-vss extension.
This is blocked by a bug in Turso that doesn't allow VSS indexes past a certain size. Can now be searched using janpaul123/semanticSearchTurso.
Cron
*Part of [Val Town Semantic Search](https://www.val.town/v/janpaul123/valtownsemanticsearch).*
Generates OpenAI embeddings for all public vals, and stores them in [Turso](https://turso.tech/), using the sqlite-vss extension.
- Create the `vals_embeddings` and `vss_vals_embeddings` tables in Turso if they don't already exist.
- Get all val names from the `vals_embeddings` table and compute the difference (which ones are missing).
- Iterate through all missing vals, get their code, get embeddings from OpenAI, and store the result in Turso.
- When finished, update the `vss_vals_embeddings` table so we can efficiently query them with the sqlite-vss extension.
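The "compute the difference" step in the list above is a plain set difference. A minimal sketch, assuming val names arrive as string arrays; `missingVals` and its argument names are illustrative, not the val's actual code:

```typescript
// Given all public val names and the names already stored in the
// vals_embeddings table, return the vals that still need embeddings.
function missingVals(allNames: string[], embeddedNames: string[]): string[] {
  const done = new Set(embeddedNames); // Set gives O(1) membership checks
  return allNames.filter((name) => !done.has(name));
}
```

For example, `missingVals(["a", "b", "c"], ["b"])` returns `["a", "c"]`, so only those vals get sent to the embeddings API on the next cron run.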
import { db as allValsDb } from "https://esm.town/v/sqlite/db?v=9";
import OpenAI from "npm:openai";
import { truncateMessage } from "npm:openai-tokens";
export default async function(interval: Interval) {
newVals.push(val);
const openai = new OpenAI();
for (const val of newVals) {
})).rows[0][0];
const embedding = await openai.embeddings.create({
model: "text-embedding-3-small",
valle_tmp_92279722423458817448513814852015
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP (deprecated)
import _ from "npm:lodash@4";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const contextWindow = await valleGetValsContextWindow(model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
valtownsemanticsearch
@janpaul123
😎 VAL VIBES: Val Town Semantic Search

This val hosts an HTTP server that lets you search all vals based on vibes. If you search for "discord bot" it shows all vals that have "discord bot" vibes. It does this by comparing embeddings from OpenAI generated for the code of all public vals to an embedding of your search query. This is an experiment to see if and how we want to incorporate semantic search in the actual Val Town search page.

I implemented three backends, which you can switch between in the search UI. Check out these vals for details on their implementation.
- Neon: storing and searching embeddings using the pgvector extension in Neon's Postgres database. Searching: janpaul123/semanticSearchNeon. Indexing: janpaul123/indexValsNeon.
- Blobs: storing embeddings in Val Town's standard blob storage, and iterating through all of them to compute distance. Slow and terrible, but it works! Searching: janpaul123/semanticSearchBlobs. Indexing: janpaul123/indexValsBlobs.
- Turso: storing and searching using the sqlite-vss extension. Abandoned because of a bug in Turso's implementation. Searching: janpaul123/semanticSearchTurso. Indexing: janpaul123/indexValsTurso.

All implementations use the database of public vals, made by Achille Lacoin, which is refreshed every hour. The Neon implementation updates every 10 minutes; the other ones are not updated. I also forked Achille's search UI for this val. Please share any feedback and suggestions, and feel free to fork our vals to improve them. This is a playground for semantic search before we implement it in the product for real!
HTTP (deprecated)
This val hosts an [HTTP server](https://janpaul123-valtownsemanticsearch.web.val.run/) that lets you search all vals based on vibes. If you search for "discord bot" it shows all vals that have "discord bot" vibes.
It does this by comparing [embeddings from OpenAI](https://platform.openai.com/docs/guides/embeddings) generated for the code of all public vals, to an embedding of your search query.
This is an experiment to see if and how we want to incorporate semantic search in the actual Val Town search page.
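The comparison described above (query embedding vs. code embeddings) comes down to a vector-similarity measure. A minimal sketch, assuming embeddings arrive as plain number arrays; the function name is illustrative:

```typescript
// Cosine similarity: 1 for vectors pointing the same way, 0 for orthogonal.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Ranking results is then just sorting vals by similarity to the query embedding; the Blobs backend does this scan over every stored embedding, which is why it is slow.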
getMemoryBuilder
@webup
An interactive, runnable TypeScript val by webup
Script
type: "buffer" | "summary" | "vector";
provider?: "openai";
} = { type: "buffer" }, options = {}) {
return new BufferMemory();
matches({ type: "summary", provider: "openai" }),
async () => {
return new ConversationSummaryMemory({ llm, ...options });
matches({ type: "vector", provider: "openai" }),
async () => {
type: "embedding",
provider: "openai",
const model = await builder();
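The preview above picks a memory implementation by matching on a `{ type, provider }` spec. A self-contained sketch of that dispatch pattern, with stub classes standing in for the real LangChain memory classes:

```typescript
type MemorySpec = { type: "buffer" | "summary" | "vector"; provider?: "openai" };

// Stubs for illustration; the real val returns LangChain memory instances.
class BufferMemory {}
class ConversationSummaryMemory {}
class VectorStoreMemory {}

function buildMemory(spec: MemorySpec = { type: "buffer" }): object {
  if (spec.type === "summary" && spec.provider === "openai") {
    return new ConversationSummaryMemory();
  }
  if (spec.type === "vector" && spec.provider === "openai") {
    return new VectorStoreMemory();
  }
  return new BufferMemory(); // default, mirroring the { type: "buffer" } fallback
}
```

Keeping the spec a plain object makes each combination easy to test in isolation, which is presumably why the original uses a `matches(...)` helper per case.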
convertToResumeJSON
@iamseeley
An interactive, runnable TypeScript val by iamseeley
Script
if (!tokenBucket.consume()) {
throw new Error("Rate limit reached. Please try again later.");
const endpoint = 'https://api.openai.com/v1/chat/completions';
const model = 'gpt-4';
const messages = [
generateValCode
@andreterron
An interactive, runnable TypeScript val by andreterron
Script
import { openaiChatCompletion } from "https://esm.town/v/andreterron/openaiChatCompletion";
export const generateValCode = async (
const lodash = await import('npm:lodash');
const response = await openaiChatCompletion({
openaiKey: key,
organization: org,
valle_tmp_2671404837576818506367901100203444
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP (deprecated)
import _ from "npm:lodash@4";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const contextWindow = await valleGetValsContextWindow(model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
demoOpenAIGPT4Summary
@zzz
An interactive, runnable TypeScript val by zzz
Script
import { confession } from "https://esm.town/v/alp/confession?v=2";
import { runVal } from "https://esm.town/v/std/runVal";
export let demoOpenAIGPT4Summary = await runVal(
"zzz.OpenAISummary",
confession,
genval
@andreterron
Generate a Val. Uses the OpenAI API to generate code for a val based on the description given by the user.
TODO:
- [ ] Improve code detection on GPT response
- [ ] Give more context on val town exclusive features like console.email or @references
- [ ] Enable the AI to search val town to find other vals to use
HTTP (deprecated)
# [Generate a Val](https://andreterron-genval.express.val.run)
Uses the OpenAI API to generate code for a val based on the description given by the user.
TODO:
- [ ] Improve code detection on GPT response
- [ ] Give more context on val town exclusive features like console.email or @references
- [ ] Enable the AI to search val town to find other vals to use
return new Response("Bad input", { status: 400 });
const code = await generateValCode(
process.env.VT_OPENAI_KEY,
value.description,
const query = new URLSearchParams({ code });
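The first TODO item, improving code detection on the GPT response, could start from a fenced-block extractor like this. This is a hypothetical helper, not the val's actual implementation:

```typescript
// Pull the first fenced code block out of a chat-completion reply.
// The fence string is built at runtime so this snippet can itself
// safely appear inside a fenced code block.
function extractCodeBlock(response: string): string | null {
  const fence = "`".repeat(3);
  const pattern = new RegExp(fence + "(?:\\w+)?\\n([\\s\\S]*?)" + fence);
  const match = response.match(pattern);
  return match ? match[1].trimEnd() : null;
}
```

Returning `null` when no block is found lets the caller fall back to treating the whole reply as code, or to re-prompting the model.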
homeless
@cameronpak
Homeless Services by OurTechnology

At OurTechnology, we create technology solutions to empower and equip those who serve the homeless. We have a large data set of available resources in the US to aid in helping those experiencing homelessness find local resources, community, and support. This private (but public to read) API is used in our ChatGPT Assistant, Homeless Services.

Why a ChatGPT Assistant?
- OpenAI announced on May 13, 2024 that free users will soon be able to "discover and use GPTs and the GPT Store (OpenAI)".
- There's a larger number of people experiencing homelessness who own a phone than I imagined.
- ChatGPT allows for a simple interface, even with voice chat (a more natural way to navigate the tool), to find resources to help those experiencing homelessness. And it's fast!

Technical details
The data set has been compiled over the years and will continue to be updated as new techniques and partnerships make that possible. We use Typesense, a search-as-a-service tool, to provide lightning-fast search results for homeless resources near you. This endpoint is created with Hono, an incredibly easy way to create an API.

Contact OurTechnology
- Visit our website
- Email us!
- Find us on LinkedIn

While this is on Cameron Pak's ValTown, this code is owned and operated by OurTechnology.
HTTP (deprecated)
## Why a [ChatGPT Assistant](https://chatg.pt/homeless-help)?
- OpenAI announced on May 13, 2024 that free users will soon be able to "discover and use GPTs and the GPT Store (OpenAI)"
- There's a larger number of people experiencing homelessness who own a phone than what I imagined.
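The "Technical details" above describe a Hono endpoint backed by Typesense. A dependency-free sketch of that shape, using the web-standard Request/Response API instead of Hono; the route, query parameter, and response layout are assumptions for illustration, and the real endpoint queries Typesense rather than returning an empty list:

```typescript
// Minimal search-endpoint sketch: validate the query, then respond with JSON.
function handleSearch(req: Request): Response {
  const q = new URL(req.url).searchParams.get("q");
  if (!q) {
    return new Response(JSON.stringify({ error: "missing ?q= parameter" }), {
      status: 400,
    });
  }
  // The real val would forward q to Typesense and return its hits here.
  return new Response(JSON.stringify({ query: q, results: [] }), {
    headers: { "content-type": "application/json" },
  });
}
```

Hono handlers follow the same Request-in, Response-out contract, so a sketch like this ports over almost directly.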
VALLE
@pomdtr
Fork it and authenticate with your Val Town API token as the password. Needs an OPENAI_API_KEY env var to be set. WARNING: pollutes your homepage with lots of temporary vals!! https://x.com/JanPaul123/status/1812957150559211918
HTTP (deprecated)
Fork it and authenticate with your Val Town API token as the password. Needs an `OPENAI_API_KEY` env var to be set.
WARNING: pollutes your homepage with lots of temporary vals!!
import _ from "npm:lodash@4";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const contextWindow: any = await valleGetValsContextWindow(model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
weatherTomorrowGpt3Example
@patrickjm
An interactive, runnable TypeScript val by patrickjm
Script
export let weatherTomorrowGpt3Example = weatherTomorrowGpt3({
city: "New York City",
openAiKey: process.env.openai_key,
valleBlogV0
@roadlabs
Fork this val to your own profile. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
import { verifyToken } from "https://esm.town/v/pomdtr/verifyToken?v=1";
import { openai } from "npm:@ai-sdk/openai";
import ValTown from "npm:@valtown/sdk";
const stream = await streamText({
model: openai("gpt-4o", {
baseURL: "https://std-openaiproxy.web.val.run/v1",
apiKey: Deno.env.get("valtown"),
dailyDadJoke
@jxnblk
Daily Dad Joke. How do you make a programmer laugh every morning? A dad joke ~~cron job~~ website! This val uses the OpenAI API.
HTTP (deprecated)
## API
This val uses the [OpenAI API](https://www.val.town/v/std/openai)
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
// try changing the prompt to get different responses, e.g. 'Tell me a joke about backend engineers'
export default async function dailyDadJoke(req: Request): Promise<Response> {
const openai = new OpenAI();
const resp = await openai.chat.completions.create({
messages: [
add_to_notion_w_ai_webpage
@nerdymomocat
Example usage of the add_to_notion_w_ai val. Try with the money database. Read and watch the demo run here.
HTTP (deprecated)
import { Client } from "npm:@notionhq/client";
import OpenAI from "npm:openai";
import { render } from "npm:preact-render-to-string";
"email": "string_email",
const oai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY ?? undefined,
const client = Instructor({