Search

Results include substring matches and semantically similar vals.
easyAQI
@stevekrouse
easyAQI — Get the Air Quality Index (AQI) for a location via open data sources. It's "easy" because it strings together multiple lower-level APIs to give you a simple interface for AQI:

1. Accepts a location in basically any string format (e.g. "downtown manhattan")
2. Uses Nominatim to turn that into longitude and latitude
3. Finds the closest sensor to you on OpenAQ
4. Pulls the readings from OpenAQ
5. Calculates the AQI via the EPA's NowCast algorithm
6. Uses the EPA's ranking to classify the severity of the score (e.g. "Unhealthy for Sensitive Groups")

It uses blob storage to cache the OpenAQ location ID for your location string, skipping a couple of steps the next time.

Example usage: @stevekrouse.easyAQI({ location: "brooklyn navy yard" }) // Returns { "aqi": 23.6, "severity": "Good" }

Forkable example: val.town/v/stevekrouse.easyAQIExample
Also useful for getting alerts when the AQI is unhealthy near you: https://www.val.town/v/stevekrouse.aqi
Script
import { blob } from "https://esm.town/v/std/blob";
import { openAqNowcastAQI } from "https://esm.town/v/stevekrouse/openAqNowcastAQI";

const cacheKey = (location: string) => "easyAQI_locationID_cache_" + encodeURIComponent(location);

export async function easyAQI({ location }: {
  location: string;
}) {
  let openAQLocation = await blob.getJSON(cacheKey(location));
  // … (preview truncated)
}
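Step 6 of easyAQI maps the numeric score onto the EPA's named categories. A minimal sketch of that ranking, using the standard EPA AQI category bounds (the breakpoints below are general AQI knowledge, not taken from this val's source):

```typescript
// EPA AQI categories: upper bound of each range paired with its label.
const EPA_CATEGORIES: Array<[number, string]> = [
  [50, "Good"],
  [100, "Moderate"],
  [150, "Unhealthy for Sensitive Groups"],
  [200, "Unhealthy"],
  [300, "Very Unhealthy"],
  [500, "Hazardous"],
];

function classifySeverity(aqi: number): string {
  for (const [upper, label] of EPA_CATEGORIES) {
    if (aqi <= upper) return label;
  }
  return "Hazardous"; // values above 500 are still reported as Hazardous
}
```

For the example above, `classifySeverity(23.6)` yields `"Good"`, matching the documented output.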
streamingTest
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
export const streamingTest = (async () => {
  const { OpenAI } = await import("https://esm.sh/langchain/llms/openai");
  // To enable streaming, we pass in `streaming: true` to the LLM constructor.
  // Additionally, we pass in a handler for the `handleLLMNewToken` event.
  const chat = new OpenAI({
    maxTokens: 25,
    streaming: true,
    openAIApiKey: process.env.OPENAI_API_KEY,
  });
  const response = await chat.call("Tell me a joke.", undefined, [
    // … (preview truncated)
  ]);
})();
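The snippet passes a handler for the `handleLLMNewToken` event so each token can be processed as it streams in. A dependency-free sketch of that callback shape (`emitTokens` is a hypothetical stand-in for the LangChain streaming internals):

```typescript
type StreamHandler = { handleLLMNewToken: (token: string) => void };

// Hypothetical emitter: delivers tokens one at a time, like a streaming LLM.
function emitTokens(tokens: string[], handlers: StreamHandler[]): string {
  for (const token of tokens) {
    for (const h of handlers) h.handleLLMNewToken(token);
  }
  return tokens.join("");
}

const received: string[] = [];
const full = emitTokens(["Why", " did", " the", " chicken"], [
  { handleLLMNewToken: (t) => received.push(t) },
]);
```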
openaiUploadFile
@yawnxyz
An interactive, runnable TypeScript val by yawnxyz
Script
import { fetch } from "https://esm.town/v/std/fetch";

export async function openaiUploadFile({ key, data, filename = "data.json", purpose = "assistants" }: {
  key: string;
  // … (remaining parameter types truncated)
}) {
  // … (FormData setup truncated)
  formData.append("file", file, filename);
  let result = await fetch("https://api.openai.com/v1/files", {
    method: "POST",
    // … (headers and body truncated)
  });
  if (result.error)
    throw new Error("OpenAI Upload Error: " + result.error.message);
  // … (preview truncated)
}
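The OpenAI Files API expects a `multipart/form-data` body with `file` and `purpose` parts. A sketch of how that body might be assembled (the helper name and JSON serialization are illustrative, not from this val):

```typescript
function buildUploadForm(data: unknown, filename: string, purpose: string): FormData {
  const formData = new FormData();
  // Serialize the payload and attach it as a file part.
  const blob = new Blob([JSON.stringify(data)], { type: "application/json" });
  formData.append("file", blob, filename);
  formData.append("purpose", purpose);
  return formData;
}

const form = buildUploadForm({ hello: "world" }, "data.json", "assistants");
```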
reflective_qa
@stevekrouse
Reflective AI. Ask me about the r's in strawberry.
HTTP
import React, { useEffect, useRef, useState } from "https://esm.sh/react@18.2.0";
import { createRoot } from "https://esm.sh/react-dom@18.2.0/client";

function App() {
  const [notification, setNotification] = useState({ type: "success", message: "" });
  // … (component body truncated)
}

function client() {
  createRoot(document.getElementById("root")).render(<App />);
}
if (typeof document !== "undefined") { client(); }

export default async function server(request: Request): Promise<Response> {
  if (request.method === "POST" && new URL(request.url).pathname === "/ask") {
    const { question } = await request.json();
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    const openai = new OpenAI();
    const completion = await openai.chat.completions.create({
      messages: [
        // … (preview truncated)
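The server function above only answers `POST /ask`; a tiny sketch of that routing check in isolation:

```typescript
// Mirror the routing condition from the server snippet above.
function matchesAskRoute(method: string, rawUrl: string): boolean {
  return method === "POST" && new URL(rawUrl).pathname === "/ask";
}
```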
GDI_AITranslatorService
@rozek
This val is part of a series of examples to introduce val.town in my computer science course at Stuttgart University of Applied Sciences. The idea is to motivate even first-semester students not to wait, but to put their ideas into practice from the very beginning and implement web apps with frontend and backend. It contains a simple HTTP endpoint which expects a POST request with a text body. That text is translated to English with the help of OpenAI and sent back to the client. This val is the companion of https://rozek-gdi_aitranslator.web.val.run/, which contains the browser part (aka "frontend") for this example. The code was created using Townie, with only a few small manual corrections. This val is licensed under the MIT License.
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function (req: Request): Promise<Response> {
  // Check if the request method is POST
  // … (method and body checks truncated)
    return new Response("Request body cannot be empty", { status: 400 });
  // Initialize OpenAI
  const openai = new OpenAI();
  // Translate the text using OpenAI
  const completion = await openai.chat.completions.create({
    messages: [
      // … (preview truncated)
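The endpoint contract described above (POST a text body; empty input gets a 400) can be sketched without the OpenAI call. The translation step is stubbed out, and the 405 branch for non-POST methods is an assumption, not shown in the snippet:

```typescript
// Validate an incoming request the way the endpoint above does.
function checkTranslationRequest(method: string, body: string): { status: number; message: string } {
  if (method !== "POST") return { status: 405, message: "Method Not Allowed" };
  if (body.trim() === "") return { status: 400, message: "Request body cannot be empty" };
  return { status: 200, message: body }; // the real val would translate `body` here
}
```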
limit_model_fork
@std
OpenAI Proxy This OpenAI API proxy injects Val Town's API keys. For usage documentation, check out https://www.val.town/v/std/openai
HTTP
export default async function(req: Request): Promise<Response> {
  // … (pathname extraction and header setup truncated)
  const url = new URL("." + pathname, "https://api.openai.com");
  headers.set("Authorization", `Bearer ${Deno.env.get("OPENAI_API_KEY")}`);
  headers.set("OpenAI-Organization", Deno.env.get("OPENAI_API_ORG"));
  const openAIRes = await fetch(url, {
    // … (request options truncated)
  });
  const res = new Response(openAIRes.body, openAIRes);
  res.headers.delete("openai-organization");
  // … (preview truncated)
}

async function limitFreeModel(req: Request, user: any) {
  // … (preview truncated)
}
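The proxy's central trick is rebasing the incoming request path onto the OpenAI origin with `new URL("." + pathname, ...)`. That rewrite in isolation:

```typescript
// Rebase an incoming request path onto the upstream OpenAI API origin.
function rewriteToOpenAI(pathname: string): string {
  return new URL("." + pathname, "https://api.openai.com").toString();
}
```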
VALLE
@janpaul123
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it.

1. Fork this val to your own profile.
2. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
3. If you want to use OpenAI models, set the `OPENAI_API_KEY` env var.
4. If you want to use Anthropic models, set the `ANTHROPIC_API_KEY` env var.
5. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP
hfApiGateway
@iamseeley
🤖 A gateway to Hugging Face's Inference API. You can perform various NLP tasks using different models. The gateway supports multiple tasks, including feature extraction, text classification, token classification, question answering, summarization, translation, text generation, and sentence similarity.

Features:
* Feature Extraction: extract features from text using models like BAAI/bge-base-en-v1.5.
* Text Classification: classify text sentiment, emotions, etc., using models like j-hartmann/emotion-english-distilroberta-base.
* Token Classification: perform named entity recognition (NER) and other token-level classifications.
* Question Answering: answer questions based on a given context.
* Summarization: generate summaries of longer texts.
* Translation: translate text from one language to another.
* Text Generation: generate text based on a given prompt.
* Sentence Similarity: calculate semantic similarity between sentences.

Usage: send a POST request with the required inputs to the endpoint with the appropriate task and model parameters, or use the default models.
Example Default Model Request:
curl -X POST -H "Content-Type: application/json" -d '{"inputs": {"source_sentence": "Hello World", "sentences": ["Goodbye World", "How are you?", "Nice to meet you."]}}' "https://iamseeley-hfapigateway.web.val.run/?task=feature-extraction"

Example Requests

Feature Extraction:
curl -X POST -H "Content-Type: application/json" -d '{"inputs": ["Hello World", "Goodbye World"]}' "https://iamseeley-hfapigateway.web.val.run/?task=feature-extraction&model=BAAI/bge-base-en-v1.5"

Feature Extraction (sentence-transformers model):
curl -X POST -H "Content-Type: application/json" -d '{"inputs": {"source_sentence": "Hello World", "sentences": ["Goodbye World", "How are you?", "Nice to meet you."]}}' "https://iamseeley-hfapigateway.web.val.run/?task=feature-extraction&model=sentence-transformers/all-MiniLM-L6-v2"

Text Classification:
curl -X POST -H "Content-Type: application/json" -d '{"inputs": "I love programming!"}' "https://iamseeley-hfapigateway.web.val.run/?task=text-classification&model=j-hartmann/emotion-english-distilroberta-base"

Token Classification:
curl -X POST -H "Content-Type: application/json" -d '{"inputs": "My name is John and I live in New York."}' "https://iamseeley-hfapigateway.web.val.run/?task=token-classification&model=dbmdz/bert-large-cased-finetuned-conll03-english"

Question Answering:
curl -X POST -H "Content-Type: application/json" -d '{"inputs": {"question": "What is the capital of France?", "context": "The capital of France is Paris, a major European city and a global center for art, fashion, gastronomy, and culture."}}' "https://iamseeley-hfapigateway.web.val.run/?task=question-answering&model=deepset/roberta-base-squad2"

Summarization:
curl -X POST -H "Content-Type: application/json" -d '{"inputs": "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."}' "https://iamseeley-hfapigateway.web.val.run/?task=summarization&model=sshleifer/distilbart-cnn-12-6"

Translation:
curl -X POST -H "Content-Type: application/json" -d '{"inputs": "Hello, how are you?"}' "https://iamseeley-hfapigateway.web.val.run/?task=translation&model=google-t5/t5-small"

Text Generation:
curl -X POST -H "Content-Type: application/json" -d '{"inputs": "Once upon a time"}' "https://iamseeley-hfapigateway.web.val.run/?task=text-generation&model=gpt2"

Sentence Similarity:
curl -X POST -H "Content-Type: application/json" -d '{"inputs": {"source_sentence": "Hello World", "sentences": ["Goodbye World"]}}' "https://iamseeley-hfapigateway.web.val.run/?task=sentence-similarity&model=sentence-transformers/all-MiniLM-L6-v2"

Val Examples (using Pipeline):

import Pipeline from "https://esm.town/v/iamseeley/pipeline";
// ...
} else if (req.method === "POST") {
  const { inputs } = await req.json();
  const pipeline = new Pipeline("task", "model");
  const result = await pipeline.run(inputs);
  return new Response(JSON.stringify(result), { headers: { "Content-Type": "application/json" } });
}

Example vals: exampleTranslation, exampleTextClassification, exampleFeatureExtraction, exampleTextGeneration, exampleSummarization, exampleQuestionAnswering
HTTP
"text-generation": "gpt2",
"sentence-similarity": "sentence-transformers/all-MiniLM-L6-v2"
export async function handler(req) {
const url = new URL(req.url);
const task = url.searchParams.get("task") || "feature-extraction";
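The handler falls back to a per-task default model when the query string omits `model`. A sketch of that dispatch; only the `text-generation`, `sentence-similarity`, and `feature-extraction` defaults are visible in this result, so treat the map as partial:

```typescript
const DEFAULT_MODELS: Record<string, string> = {
  "feature-extraction": "BAAI/bge-base-en-v1.5",
  "text-generation": "gpt2",
  "sentence-similarity": "sentence-transformers/all-MiniLM-L6-v2",
};

// Resolve task and model from a request URL, mirroring the handler above.
function resolveTaskAndModel(rawUrl: string): { task: string; model: string } {
  const url = new URL(rawUrl);
  const task = url.searchParams.get("task") ?? "feature-extraction";
  const model = url.searchParams.get("model") ?? DEFAULT_MODELS[task] ?? "";
  return { task, model };
}
```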
instructorExample
@inkpotmonkey
Example copied from https://instructor-ai.github.io/instructor-js/#usage into val.town. You will need to fork this and properly set the apiKey and organisation for it to work.
Script
import Instructor from "https://esm.sh/@instructor-ai/instructor";
import OpenAI from "https://esm.sh/openai";
import { z } from "https://esm.sh/zod";

const openAISecrets = {
  apiKey: getApiKey(),
  organization: getOrganisationKey(),
};
const oai = new OpenAI(openAISecrets);
const client = Instructor({
  client: oai,
  mode: "FUNCTIONS",
});
const UserSchema = z.object({
  // … (preview truncated)
});
web_2l64kXRF3P
@dhvanil
An interactive, runnable TypeScript val by dhvanil
HTTP
export async function web_2l64kXRF3P(req) {
  return new Response(
    `<!DOCTYPE html>
<html>
<!-- … (preview truncated) -->`,
    { headers: { "Content-Type": "text/html" } },
  );
}
reflective_qa
@spinningideas
Reflective AI. Ask me about the r's in strawberry.
HTTP
import React, { useEffect, useRef, useState } from "https://esm.sh/react@18.2.0";
import { createRoot } from "https://esm.sh/react-dom@18.2.0/client";

function App() {
  const [notification, setNotification] = useState({ type: "success", message: "" });
  // … (component body truncated)
}

function client() {
  createRoot(document.getElementById("root")).render(<App />);
}
if (typeof document !== "undefined") { client(); }

export default async function server(request: Request): Promise<Response> {
  if (request.method === "POST" && new URL(request.url).pathname === "/ask") {
    const { question } = await request.json();
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    const openai = new OpenAI();
    const completion = await openai.chat.completions.create({
      messages: [
        // … (preview truncated)
falDemoApp
@stevekrouse
@jsxImportSource https://esm.sh/react
HTTP
import { falProxyRequest } from "https://esm.town/v/stevekrouse/falProxyRequest";
// … (React imports truncated)

function App() {
  const [prompt, setPrompt] = useState("");
  // … (component body truncated)
}

function client() {
  createRoot(document.getElementById("root")).render(<App />);
}
if (typeof document !== "undefined") { client(); }

export default async function server(req: Request): Promise<Response> {
  const url = new URL(req.url);
  // … (preview truncated)
}
langchainEx
@weaverwhale
An interactive, runnable TypeScript val by weaverwhale
Script
export const langchainEx = (async () => {
  const { OpenAI } = await import("https://esm.sh/langchain/llms/openai");
  const { PromptTemplate } = await import("https://esm.sh/langchain/prompts");
  const { LLMChain } = await import("https://esm.sh/langchain/chains");
  const model = new OpenAI({
    temperature: 0.9,
    openAIApiKey: process.env.openai,
    maxTokens: 100,
  });
  // … (preview truncated)
})();
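`PromptTemplate` fills named slots in a prompt string before the chain calls the model. A dependency-free sketch of that templating idea (not LangChain's actual implementation):

```typescript
// Replace {name}-style placeholders with provided values; unknown slots are left intact.
function formatPrompt(template: string, values: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match, key) => values[key] ?? match);
}

const prompt = formatPrompt(
  "What is a good name for a company that makes {product}?",
  { product: "colorful socks" },
);
```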
legalAssistant
@websrai
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
import { createRoot } from "https://esm.sh/react-dom@18.2.0/client";
// … (React import truncated)

function LegalDisclaimer() {
  return (
    // … (disclaimer markup truncated)
  );
}

function App() {
  const [query, setQuery] = useState("");
  // … (component body truncated)
}

function client() {
  createRoot(document.getElementById("root")).render(<App />);
}
if (typeof document !== "undefined") { client(); }

export default async function server(request: Request): Promise<Response> {
  if (request.method === "POST") {
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    const openai = new OpenAI();
    const { query } = await request.json();
    // … (prompt construction truncated; the prompt ends with `Query: ${query}`)
    const completion = await openai.chat.completions.create({
      messages: [{ role: "user", content: legalPrompt }],
      // … (preview truncated)
gpt3
@yuval_dikerman
An interactive, runnable TypeScript val by yuval_dikerman
HTTP
import { fetch } from "https://esm.town/v/std/fetch";

export let gpt3 = async (prompt: string, openAiApiKey: string): Promise<string> => {
  if (!prompt || !openAiApiKey) {
    let cat = await fetch("https://catfact.ninja/fact");
    // … (fact extraction truncated)
    return `Prompt text or api key was not provided. \n \n here's a cat fact: \n ${fact}`;
  }
  const content = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${openAiApiKey}`,
      "Content-Type": "application/json",
      // … (preview truncated)