weatherGPT
@ellenchisa
If you fork this, you'll need to set OPENAI_API_KEY in your Val Town Secrets.
Cron
import { email } from "https://esm.town/v/std/email";
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "npm:openai";
let location = "san francisco ca";
// Fetch the current weather as JSON (wttr.in), then summarize it with OpenAI
const weather = await fetch(`https://wttr.in/${location}?format=j1`)
  .then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
  messages: [{
    role: "user",
    content: `Write a short summary of today's weather based on this report: ${JSON.stringify(weather)}`,
  }],
  model: "gpt-4",
});
const text = chatCompletion.choices[0].message.content;
console.log(text);
export async function weatherGPT() {
  await email({ subject: "Weather Today", text });
}
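If you'd rather hand the key to the client explicitly instead of relying on it being picked up from the environment, a minimal sketch (assuming the secret is named OPENAI_API_KEY as above):

import { OpenAI } from "npm:openai";

// Read the secret set in Val Town Secrets and pass it explicitly
// (optional; the client also reads OPENAI_API_KEY from the environment by default)
const openai = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });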
talentedWhiteSalmon
@adagradschool
An interactive, runnable TypeScript val by adagradschool
HTTP
export default function handler(req) {
return new Response(`"\n <!DOCTYPE html>\n <html>\n <head>\n <title>Claude Chat Conversation</title>\n <meta charset=\"UTF-8\">\n <style>\n body {\n font-family: system-ui, -apple-system, sans-serif;\n line-height: 1.5;\n max-width: 800px;\n margin: 0 auto;\n padding: 20px;\n background: #f9fafb;\n }\n .message {\n margin: 20px 0;\n padding: 15px;\n border-radius: 8px;\n }\n .human {\n background: #e5e7eb;\n }\n .assistant {\n background: #dbeafe;\n }\n .role {\n font-weight: bold;\n margin-bottom: 8px;\n }\n </style>\n </head>\n <body>\n \n <div class=\"message human\">\n <div class=\"role\">Human</div>\n <div class=\"content\">export default function handler(req) {\n return new Response(\n <!DOCTYPE html>\n <html>\n <head>\n <title>Claude Chat Conversation</title>\n <meta charset="UTF-8">\n <style>\n body {\n font-family: system-ui, -apple-system, sans-serif;\n line-height: 1.5;\n max-width: 800px;\n margin: 0 auto;\n padding: 20px;\n background: #f9fafb;\n }\n .message {\n margin: 20px 0;\n padding: 15px;\n border-radius: 8px;\n }\n .human {\n background: #e5e7eb;\n }\n .assistant {\n background: #dbeafe;\n }\n .role {\n font-weight: bold;\n margin-bottom: 8px;\n }\n </style>\n </head>\n <body> <div class="message human">\n <div class="role">Human</div>\n <div class="content">so, how was your day</div>\n </div> <div class="message assistant">\n <div class="role">Claude</div>\n <div class="content">I aim to have authentic conversations while being direct about my nature as an AI. While I appreciate the friendly question, I should note that I don't actually experience days in the way humans do. I'd be happy to hear about your day though, or to help you with any questions or tasks you might have.</div>\n </div> </body>\n, {\n headers: {\n \"Content-Type\": \"text/html\",\n },\n });\n }What is the issue with this?</div>\n </div>\n \n <div class=\"message assistant\">\n <div class=\"role\">Claude</div>\n <div class=\"content\">Let me help you fix this code. 
There are a few syntax issues with the current implementation.\nFixed Next.js API HandlerClick to open code\nThe main issues with the original code were:\n\nThe HTML string was not properly formatted - it was directly inserted after the Response( without being enclosed in quotes or backticks\nHTML entities were escaped (<, >, ") when they should have been regular characters (<, >, \")\nNo proper string formatting was used (should use template literals with backticks for multiline strings in JavaScript)\n\nThe fixed version:\n\nUses a template literal (backticks) to properly contain the multiline HTML string\nUses proper HTML syntax without escaped entities\nProperly separates the HTML string from the Response constructor\nMaintains proper indentation for readability\nKeeps the same functionality but in a syntactically correct way\n\nThis should now work correctly as a Next.js API route handler that returns an HTML page.</div>\n </div>\n \n <div class=\"message human\">\n <div class=\"role\">Human</div>\n <div class=\"content\">obj = {\"success\":true,\"val\":{\"id\":\"ab43fa1c-9f05-11ef-ba42-de64eea55b61\",\"author\":{\"id\":\"a15e59cc-3d10-47ad-9065-d19718204f1b\",\"username\":\"adagradschool\"},\"name\":\"claude_d482d9ee_eff3_42e6_9779_a012b1e1f7b4\",\"type\":\"http\",\"code\":\"\\n export default function handler(req) {\\n return new Response(\\\"\\\\n <!DOCTYPE html>\\\\n <html>\\\\n <head>\\\\n <title>Claude Chat Conversation</title>\\\\n <meta charset=\\\\\\\"UTF-8\\\\\\\">\\\\n <style>\\\\n body {\\\\n font-family: system-ui, -apple-system, sans-serif;\\\\n line-height: 1.5;\\\\n max-width: 800px;\\\\n margin: 0 auto;\\\\n padding: 20px;\\\\n background: #f9fafb;\\\\n }\\\\n .message {\\\\n margin: 20px 0;\\\\n padding: 15px;\\\\n border-radius: 8px;\\\\n }\\\\n .human {\\\\n background: #e5e7eb;\\\\n }\\\\n .assistant {\\\\n background: #dbeafe;\\\\n }\\\\n .role {\\\\n font-weight: bold;\\\\n margin-bottom: 8px;\\\\n }\\\\n </style>\\\\n </head>\\\\n <body>\\\\n \\\\n <div class=\\\\\\\"message human\\\\\\\">\\\\n <div class=\\\\\\\"role\\\\\\\">Human</div>\\\\n <div class=\\\\\\\"content\\\\\\\">so, how was your day</div>\\\\n </div>\\\\n \\\\n <div class=\\\\\\\"message assistant\\\\\\\">\\\\n <div class=\\\\\\\"role\\\\\\\">Claude</div>\\\\n <div class=\\\\\\\"content\\\\\\\">I aim to have authentic conversations while being direct about my nature as an AI. While I appreciate the friendly question, I should note that I don't actually experience days in the way humans do. 
I'd be happy to hear about your day though, or to help you with any questions or tasks you might have.</div>\\\\n </div>\\\\n \\\\n </body>\\\\n\\\", {\\n headers: {\\n \\\"Content-Type\\\": \\\"text/html\\\",\\n },\\n });\\n }\\n \",\"privacy\":\"public\",\"public\":true,\"version\":0,\"createdAt\":\"2024-11-10T01:46:56.862Z\",\"readme\":null,\"likeCount\":0,\"referenceCount\":0,\"url\":\"https://www.val.town/v/adagradschool/claude_d482d9ee_eff3_42e6_9779_a012b1e1f7b4\",\"links\":{\"self\":\"https://api.val.town/v1/vals/ab43fa1c-9f05-11ef-ba42-de64eea55b61\",\"versions\":\"https://api.val.town/v1/vals/ab43fa1c-9f05-11ef-ba42-de64eea55b61/versions\",\"module\":\"https://esm.town/v/adagradschool/claude_d482d9ee_eff3_42e6_9779_a012b1e1f7b4?v=0\",\"endpoint\":\"https://adagradschool-claude_d482d9ee_eff3_42e6_9779_a012b1e1f7b4.web.val.run\"}}}Get val's endpoint</div>\n </div>\n \n <div class=\"message assistant\">\n <div class=\"role\">Claude</div>\n <div class=\"content\">Looking at the JSON object, the endpoint URL can be found at obj.val.links.endpoint, which is:\nhttps://adagradschool-claude_d482d9ee_eff3_42e6_9779_a012b1e1f7b4.web.val.run</div>\n </div>\n \n </body>\n </html>\n"`, {
headers: {
"Content-Type": "text/html",
textToImagePlayground
@AIWB
🖼️ Text-to-image playground using fal.ai model APIs
HTTP
<script type="module" src="https://esm.town/v/iamseeley/realtimeFormLogic"></script>
<script>
document.getElementById('generationType').addEventListener('change', function() {
const type = this.value;
if (type === 'regular') {
</body>
</html>
export default async function handler(req: Request): Promise<Response> {
const url = new URL(req.url);
if (req.method === 'GET' && url.pathname === '/') {
getChatgpt
@bingo16
An interactive, runnable TypeScript val by bingo16
Script
"Content-Type": "application/json",
// Update your token in https://val.town/settings/secrets
Authorization: `Bearer ${token || process.env.openaiKey}`,
const getCompelitoins = async (data) => {
const response = await fetch("https://api.openai.com/v1/completions", {
method: "POST",
headers: {
longOliveGuppy
@sharanbabu
// This chatbot app will use a simple React frontend to display messages and allow user input.
HTTP
// This chatbot app will use a simple React frontend to display messages and allow user input.
// The backend will use OpenAI's GPT model to generate responses.
// We'll use SQLite to store conversation history.
import { createRoot } from "https://esm.sh/react-dom/client";
function App() {
const [messages, setMessages] = useState([]);
</div>
function client() {
  createRoot(document.getElementById("root")).render(<App />);
}
if (typeof document !== "undefined") { client(); }
async function server(request: Request): Promise<Response> {
const Cerebras = await import("https://esm.sh/@cerebras/cerebras_cloud_sdk");
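The comments above mention storing conversation history in SQLite; a minimal sketch of how that could look with Val Town's std/sqlite (the table name and columns are assumptions, not this val's actual schema):

import { sqlite } from "https://esm.town/v/std/sqlite";

// Hypothetical table for chat history (schema is an assumption)
await sqlite.execute(`CREATE TABLE IF NOT EXISTS chat_messages (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  role TEXT NOT NULL,
  content TEXT NOT NULL,
  created_at TEXT DEFAULT CURRENT_TIMESTAMP
)`);

// Append one message to the history
await sqlite.execute({
  sql: "INSERT INTO chat_messages (role, content) VALUES (?, ?)",
  args: ["user", "Hello!"],
});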
claude
@adagradschool
An interactive, runnable TypeScript val by adagradschool
HTTP
export default function handler(req) {
// The HTML string needs to be properly escaped and formatted as a template literal
const html = `<!DOCTYPE html>
duckdbExample
@nbbaier
An interactive, runnable TypeScript val by nbbaier
Script
import "https://deno.land/x/xhr@0.3.1/mod.ts";
export let duckdbExample = (async () => {
async function createWorker(url: string) {
  // Fetch the worker script and expose it via a same-origin blob URL
  const workerScript = await fetch(url);
  const workerURL = URL.createObjectURL(await workerScript.blob());
  return new Worker(workerURL, { type: "module" });
}
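The blob-URL worker above is the usual way to boot duckdb-wasm where cross-origin workers are blocked; a sketch of how createWorker could be wired into the standard @duckdb/duckdb-wasm setup (bundle selection and logger are library defaults, not necessarily this val's exact code):

import * as duckdb from "npm:@duckdb/duckdb-wasm";

// Pick a prebuilt bundle hosted on jsDelivr, spin up its worker, and instantiate the DB
const bundle = await duckdb.selectBundle(duckdb.getJsDelivrBundles());
const worker = await createWorker(bundle.mainWorker!);
const db = new duckdb.AsyncDuckDB(new duckdb.ConsoleLogger(), worker);
await db.instantiate(bundle.mainModule, bundle.pthreadWorker);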
Test00_getModelBuilder
@lisazz
241219 practice, based on webup
Script
export async function getModelBuilder(spec: {
  type?: "llm" | "chat" | "embedding";
  provider?: "openai" | "huggingface";
} = { type: "llm", provider: "openai" }, options?: any) {
matches({ type: "llm", provider: "openai" }),
const { OpenAI } = await import("https://esm.sh/langchain/llms/openai");
return new OpenAI(args);
matches({ type: "chat", provider: "openai" }),
const { ChatOpenAI } = await import("https://esm.sh/langchain/chat_models/openai");
return new ChatOpenAI(args);
matches({ type: "embedding", provider: "openai" }),
fetch
@std
Proxied fetch - Docs ↗

The JavaScript Fetch API is directly available within a Val. However, fetch calls are sometimes blocked by the receiving server for using particular IP addresses. Additionally, network blips or unreliable web services may lead to failures if not handled properly.

The Val Town standard library contains an alternative version, std/fetch, that wraps the JavaScript Fetch API to provide additional functionality. The fetch function from std/fetch reroutes requests through a proxy vendor so that requests obtain different IP addresses. It also automatically retries failed requests several times. Note that using std/fetch will be significantly slower than directly calling the JavaScript Fetch API due to extra network hops.

Usage

After importing std/fetch, the fetch method is used with the same signature as the JavaScript Fetch API.

import { fetch } from "https://esm.town/v/std/fetch";
let result = await fetch("https://api64.ipify.org?format=json");
let json = await result.json();
console.log(json.ip);

If you run the above code multiple times, you'll see that it returns different IP addresses, because std/fetch uses proxies so that each request is made from a different IP address.
Script
import { rawFetch } from "https://esm.town/v/std/rawFetch";

/**
 * Wraps the JavaScript Fetch function to anonymize where the request is
 * coming from ([Docs ↗](https://docs.val.town/std/fetch))
 * @param requestInit Request options (e.g.
 * method, headers, etc) ([Docs ↗](https://deno.land/api@v1.42.1?s=RequestInit))
 */
export async function fetch(input: string | URL, requestInit?: RequestInit) {
  let query = new URLSearchParams({
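Because std/fetch keeps the standard Fetch signature, RequestInit options such as method and headers pass straight through to the proxied request; a short sketch reusing the URL from the usage example above:

import { fetch } from "https://esm.town/v/std/fetch";

// Same signature as the built-in fetch; options are forwarded to the proxied request
const result = await fetch("https://api64.ipify.org?format=json", {
  method: "GET",
  headers: { Accept: "application/json" },
});
console.log(await result.json());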
chatSampleFunctionMultiple
@webup
An interactive, runnable TypeScript val by webup
Script
import { chat } from "https://esm.town/v/webup/chat";
export const chatSampleFunctionMultiple = (async () => {
// Helper function to call and print assistant response
const callAssistant = async (messages) => {
const response = await chat(messages, {
functions: schemasWeather,
typeof response === "object"
content:
"Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.",
{ role: "user", content: "What's the weather like today" },
let response = await callAssistant(messages);
// Once we provide the missing info, it will generate the appropriate function arguments
messages.push({ role: "assistant", content: response });
response = await callAssistant(messages);
// By prompting it differently, we can get it to target the other function we've told it about
messages.length = 1;
response = await callAssistant(messages);
// Let's provide the num of days, and model will generate the call to the other function
messages.push({ role: "assistant", content: response });
compareEmbeddings
@janpaul123
An interactive, runnable TypeScript val by janpaul123
Script
import _ from "npm:lodash";
import OpenAI from "npm:openai";
const comparisons = [
  ["chat server integration", "discord bot"],
];
const openai = new OpenAI();
const cache = {};
// Memoized embedding lookup: repeated strings hit the OpenAI API only once
async function getEmbedding(str) {
  cache[str] = cache[str] || (await openai.embeddings.create({
    model: "text-embedding-3-large",
    input: str,
  })).data[0].embedding;
  return cache[str];
}
val_NpOa7nyg47
@dhvanil
An interactive, runnable TypeScript val by dhvanil
HTTP
export async function val_NpOa7nyg47(req) {
try {
// Execute the code directly and capture its result
myApi
@yuanmouren1hao
An interactive, runnable TypeScript val by yuanmouren1hao
Email
export function myApi(name) {
return "hi " + name;
semanticSearchBlobs
@janpaul123
Part of Val Town Semantic Search. Uses Val Town's blob storage to search embeddings of all vals, by downloading them all and iterating through all of them to compute distance. Slow and terrible, but it works!
- Get metadata from blob storage: `allValsBlob${dimensions}EmbeddingsMeta` (currently `allValsBlob1536EmbeddingsMeta`), which has a list of all indexed vals and where their embedding is stored (`batchDataIndex` points to the blob, and `valIndex` represents the offset within the blob). The blobs have been generated by janpaul123/indexValsBlobs; it is not run automatically.
- Get all blobs with embeddings pointed to by the metadata, e.g. `allValsBlob1536EmbeddingsData_0` for `batchDataIndex` 0.
- Call OpenAI to generate an embedding for the search query.
- Go through all embeddings and compute cosine similarity with the embedding for the search query.
- Return the list sorted by similarity.
Script
import _ from "npm:lodash";
import OpenAI from "npm:openai";
const dimensions = 1536;
export default async function semanticSearchPublicVals(query) {
const allValsBlobEmbeddingsMeta = (await blob.getJSON(`allValsBlob${dimensions}EmbeddingsMeta`)) ?? {};
await Promise.all(allBatchDataIndexesPromises);
const openai = new OpenAI();
const queryEmbedding = (await openai.embeddings.create({
model: "text-embedding-3-small",