Search

Results include substring matches and semantically similar vals.
weatherGPT
@ellenchisa
If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
Cron
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "npm:openai";
let location = "san francisco ca";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
console.log(text);
export async function weatherGPT() {
await email({ subject: "Weather Today", text });
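A minimal sketch of the pattern this preview suggests: fetch a weather report, summarize it with OpenAI, and email the result from a cron val. The wttr.in endpoint, model name, and prompt are assumptions, not taken from the original val.

```ts
import { email } from "https://esm.town/v/std/email";
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "npm:openai";

export default async function weatherGPT() {
  const location = "san francisco ca";
  // Hypothetical weather source; the API used by the original val is elided in the preview.
  const weather = await fetch(
    `https://wttr.in/${encodeURIComponent(location)}?format=j1`,
  ).then(r => r.json());

  const openai = new OpenAI(); // reads OPENAI_API_KEY from your Val Town secrets
  const chatCompletion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // assumed model
    messages: [{
      role: "user",
      content: `Write a short, friendly weather report for ${location}: ${JSON.stringify(weather.current_condition)}`,
    }],
  });
  const text = chatCompletion.choices[0].message.content ?? "";

  console.log(text);
  await email({ subject: "Weather Today", text });
}
```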
talentedWhiteSalmon
@adagradschool
An interactive, runnable TypeScript val by adagradschool
HTTP
export default function handler(req) {
v class=\"role\">Human</div>\n <div class=\"content\">export default function handler(req) {\n return new
headers: {
"Content-Type": "text/html",
textToImagePlayground
@AIWB
🖼️ text to image playground using fal.ai model apis
HTTP
<script type="module" src="https://esm.town/v/iamseeley/realtimeFormLogic"></script>
<script>
document.getElementById('generationType').addEventListener('change', function() {
const type = this.value;
if (type === 'regular') {
</body>
</html>
export default async function handler(req: Request): Promise<Response> {
const url = new URL(req.url);
if (req.method === 'GET' && url.pathname === '/') {
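A sketch of the routing the preview hints at: serve the playground HTML on GET / and hand other routes to the generation logic. The HTML constant is a stand-in, and the fal.ai call itself is omitted.

```ts
const html = `<!DOCTYPE html>
<html>
  <body>
    <h1>Text to image playground</h1>
    <!-- form markup and client-side logic omitted -->
  </body>
</html>`;

export default async function handler(req: Request): Promise<Response> {
  const url = new URL(req.url);
  if (req.method === "GET" && url.pathname === "/") {
    return new Response(html, { headers: { "Content-Type": "text/html" } });
  }
  // The original val also exposes an endpoint that calls fal.ai model APIs.
  return new Response("Not found", { status: 404 });
}
```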
getChatgpt
@bingo16
An interactive, runnable TypeScript val by bingo16
Script
"Content-Type": "application/json",
// Update your token in https://val.town/settings/secrets
Authorization: `Bearer ${token || process.env.openaiKey}`,
const getCompelitoins = async (data) => {
const response = await fetch("https://api.openai.com/v1/completions", {
method: "POST",
headers: {
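Pieced together, the request probably looks roughly like the following; the model, prompt shape, and environment variable handling are assumptions.

```ts
const token = Deno.env.get("openaiKey"); // update your token in https://val.town/settings/secrets

const getCompletions = async (data: { prompt: string }) => {
  const response = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo-instruct", // assumed; the original model is not shown
      prompt: data.prompt,
      max_tokens: 256,
    }),
  });
  return response.json();
};
```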
longOliveGuppy
@sharanbabu
// This chatbot app will use a simple React frontend to display messages and allow user input.
HTTP
// This chatbot app will use a simple React frontend to display messages and allow user input.
// The backend will use OpenAI's GPT model to generate responses.
// We'll use SQLite to store conversation history.
import { createRoot } from "https://esm.sh/react-dom/client";
function App() {
const [messages, setMessages] = useState([]);
</div>
function client() {
createRoot(document.getElementById("root")).render(<App />);
if (typeof document !== "undefined") { client(); }
async function server(request: Request): Promise<Response> {
const Cerebras = await import("https://esm.sh/@cerebras/cerebras_cloud_sdk");
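A bare-bones version of the backend this description outlines, with std/openai standing in for the Cerebras client the preview imports; the table name, route handling, and model are placeholders.

```ts
import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/std/sqlite";

const TABLE = "chat_messages_v1"; // hypothetical table name

export default async function server(request: Request): Promise<Response> {
  await sqlite.execute(
    `CREATE TABLE IF NOT EXISTS ${TABLE} (id INTEGER PRIMARY KEY, role TEXT, content TEXT)`,
  );

  if (request.method === "POST") {
    const { message } = await request.json();
    await sqlite.execute({
      sql: `INSERT INTO ${TABLE} (role, content) VALUES (?, ?)`,
      args: ["user", message],
    });

    const openai = new OpenAI();
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini", // placeholder model
      messages: [{ role: "user", content: message }],
    });
    const reply = completion.choices[0].message.content ?? "";

    await sqlite.execute({
      sql: `INSERT INTO ${TABLE} (role, content) VALUES (?, ?)`,
      args: ["assistant", reply],
    });
    return Response.json({ reply });
  }
  return new Response("Use POST /", { status: 405 });
}
```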
claude
@adagradschool
An interactive, runnable TypeScript val by adagradschool
HTTP
export default function handler(req) {
// The HTML string needs to be properly escaped and formatted as a template literal
const html = `<!DOCTYPE html>
duckdbExample
@nbbaier
An interactive, runnable TypeScript val by nbbaier
Script
import "https://deno.land/x/xhr@0.3.1/mod.ts";
export let duckdbExample = (async () => {
async function createWorker(url: string) {
const workerScript = await fetch(url);
const workerURL = URL.createObjectURL(await workerScript.blob());
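The preview stops before the worker is actually constructed; the missing step is presumably something like this.

```ts
async function createWorker(url: string): Promise<Worker> {
  // Fetch the worker script and serve it from a blob: URL so the Worker
  // constructor accepts it regardless of the script's origin.
  const workerScript = await fetch(url);
  const workerURL = URL.createObjectURL(await workerScript.blob());
  return new Worker(workerURL); // add { type: "module" } if the script is an ES module
}
```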
Test00_getModelBuilder
@lisazz
241219 exercise, adapted from webup
Script
export async function getModelBuilder(spec: {
provider?: "openai" | "huggingface";
} = { type: "llm", provider: "openai" }, options?: any) {
matches({ type: "llm", provider: "openai" }),
const { OpenAI } = await import("https://esm.sh/langchain/llms/openai");
return new OpenAI(args);
matches({ type: "chat", provider: "openai" }),
const { ChatOpenAI } = await import("https://esm.sh/langchain/chat_models/openai");
return new ChatOpenAI(args);
matches({ type: "embedding", provider: "openai" }),
fetch
@std
Proxied fetch - Docs ↗

The JavaScript Fetch API is directly available within a Val. However, fetch calls are sometimes blocked by the receiving server for using particular IP addresses. Additionally, network blips or unreliable web services may lead to failures if not handled properly.

The Val Town standard library contains an alternative version, `std/fetch`, that wraps the JavaScript Fetch API to provide additional functionality. The fetch function from `std/fetch` reroutes requests using a proxy vendor so that requests obtain different IP addresses. It also automatically retries failed requests several times. Note that using `std/fetch` will be significantly slower than directly calling the JavaScript Fetch API due to extra network hops.

Usage

After importing `std/fetch`, the fetch method is used with the same signature as the JavaScript Fetch API.

import { fetch } from "https://esm.town/v/std/fetch";
let result = await fetch("https://api64.ipify.org?format=json");
let json = await result.json();
console.log(json.ip);

If you run the above code multiple times, you'll see that it returns different IP addresses, because std/fetch uses proxies so that each request is made from a different IP address.
Script
import { rawFetch } from "https://esm.town/v/std/rawFetch";
* Wraps the JavaScript Fetch function to anonymize where the request is
* coming from ([Docs ↗](https://docs.val.town/std/fetch))
* method, headers, etc) ([Docs ↗](https://deno.land/api@v1.42.1?s=RequestInit))
export async function fetch(input: string | URL, requestInit?: RequestInit) {
let query = new URLSearchParams({
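The retry behavior described above can be illustrated with a generic wrapper; this is a sketch only, since the real std/fetch also reroutes requests through a proxy via rawFetch.

```ts
export async function fetchWithRetries(
  input: string | URL,
  requestInit?: RequestInit,
  maxAttempts = 3,
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      const response = await fetch(input, requestInit);
      // Retry on server errors; return everything else to the caller.
      if (response.status < 500) return response;
      lastError = new Error(`HTTP ${response.status}`);
    } catch (error) {
      lastError = error;
    }
    // Simple exponential backoff between attempts.
    await new Promise((resolve) => setTimeout(resolve, 2 ** attempt * 250));
  }
  throw lastError;
}
```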
chatSampleFunctionMultiple
@webup
An interactive, runnable TypeScript val by webup
Script
import { chat } from "https://esm.town/v/webup/chat";
export const chatSampleFunctionMultiple = (async () => {
// Helper function to call and print assistant response
const callAssistant = async (messages) => {
const response = await chat(messages, {
functions: schemasWeather,
typeof response === "object"
content:
"Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguou
{ role: "user", content: "What's the weather like today" },
let response = await callAssistant(messages);
// Once we provide the missing info, it will generate the appropriate function arguments
messages.push({ role: "assistant", content: response });
response = await callAssistant(messages);
// By prompting it differently, we can get it to target the other function we've told
messages.length = 1;
response = await callAssistant(messages);
// Let's provide the num of days, and model will generate the call to the other function
messages.push({ role: "assistant", content: response });
compareEmbeddings
@janpaul123
An interactive, runnable TypeScript val by janpaul123
Script
import _ from "npm:lodash";
import OpenAI from "npm:openai";
const comparisons = [
["chat server integration", "discord bot"],
const openai = new OpenAI();
const cache = {};
async function getEmbedding(str) {
cache[str] = cache[str] || (await openai.embeddings.create({
model: "text-embedding-3-large",
val_NpOa7nyg47
@dhvanil
An interactive, runnable TypeScript val by dhvanil
HTTP
export async function val_NpOa7nyg47(req) {
try {
// Execute the code directly and capture its result
myApi
@yuanmouren1hao
An interactive, runnable TypeScript val by yuanmouren1hao
Email
export function myApi(name) {
return "hi " + name;
semanticSearchBlobs
@janpaul123
Part of Val Town Semantic Search.

Uses Val Town's blob storage to search embeddings of all vals, by downloading them all and iterating through all of them to compute distance. Slow and terrible, but it works!

- Get metadata from blob storage: `allValsBlob${dimensions}EmbeddingsMeta` (currently `allValsBlob1536EmbeddingsMeta`), which has a list of all indexed vals and where their embedding is stored (`batchDataIndex` points to the blob, and `valIndex` represents the offset within the blob). The blobs have been generated by `janpaul123/indexValsBlobs`. It is not run automatically.
- Get all blobs with embeddings pointed to by the metadata, e.g. `allValsBlob1536EmbeddingsData_0` for `batchDataIndex` 0.
- Call OpenAI to generate an embedding for the search query.
- Go through all embeddings and compute cosine similarity with the embedding for the search query.
- Return the list sorted by similarity.
Script
import _ from "npm:lodash";
import OpenAI from "npm:openai";
const dimensions = 1536;
export default async function semanticSearchPublicVals(query) {
const allValsBlobEmbeddingsMeta = (await blob.getJSON(`allValsBlob${dimensions}EmbeddingsMeta`)) ?? {};
await Promise.all(allBatchDataIndexesPromises);
const openai = new OpenAI();
const queryEmbedding = (await openai.embeddings.create({
model: "text-embedding-3-small",
poembuilder
@stevekrouse
@jsxImportSource npm:hono@3/jsx
HTTP
/** @jsxImportSource npm:hono@3/jsx */
import { OpenAI } from "https://esm.town/v/std/openai?v=2";
import { sqlite } from "https://esm.town/v/std/sqlite?v=5";
import { Hono } from "npm:hono@3";