easyAQI
@stevekrouse
Get the Air Quality Index (AQI) for a location via open data sources. It's "easy" because it strings together multiple lower-level APIs to give you a simple interface for AQI.

1. Accepts a location in basically any string format (e.g. "downtown manhattan")
2. Uses Nominatim to turn that into longitude and latitude
3. Finds the closest sensor to you on OpenAQ
4. Pulls the readings from OpenAQ
5. Calculates the AQI via EPA's NowCast algorithm
6. Uses EPA's ranking to classify the severity of the score (e.g. "Unhealthy for Sensitive Groups")

It uses blob storage to cache the OpenAQ location id for your location string to skip a couple steps for the next time.

## Example usage

@stevekrouse.easyAQI({ location: "brooklyn navy yard" })
// Returns { "aqi": 23.6, "severity": "Good" }

Forkable example: val.town/v/stevekrouse.easyAQIExample

Also useful for getting alerts when the AQI is unhealthy near you: https://www.val.town/v/stevekrouse.aqi
Script
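Steps 5 and 6 of the easyAQI pipeline can be sketched in a few lines of TypeScript. This is a rough illustration only, not the val's actual code: `nowcast` follows EPA's NowCast weighting for hourly PM2.5 readings, and `severity` hard-codes the standard EPA AQI category table.

```typescript
// EPA NowCast: a weighted average of the last 12 hourly readings that
// weights recent hours more heavily when concentrations are volatile.
// `hourly` lists concentrations most-recent-first.
function nowcast(hourly: number[]): number {
  const c = hourly.slice(0, 12);
  const min = Math.min(...c);
  const max = Math.max(...c);
  // Weight factor: min/max ratio, floored at 0.5 per the EPA method.
  const w = Math.max(max === 0 ? 1 : min / max, 0.5);
  let num = 0;
  let den = 0;
  c.forEach((v, i) => {
    num += v * Math.pow(w, i);
    den += Math.pow(w, i);
  });
  return num / den;
}

// EPA's ranking of AQI scores into named severity categories.
function severity(aqi: number): string {
  if (aqi <= 50) return "Good";
  if (aqi <= 100) return "Moderate";
  if (aqi <= 150) return "Unhealthy for Sensitive Groups";
  if (aqi <= 200) return "Unhealthy";
  if (aqi <= 300) return "Very Unhealthy";
  return "Hazardous";
}
```

For instance, `severity(23.6)` returns `"Good"`, matching the sample output above.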
ownOpenAI
@kora
Use my own OpenAI API key to avoid limit
Script
import { OpenAI } from "npm:openai";

const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "คุณรู้จัก Gemini ไหม" }, // "Do you know Gemini?"
  ],
  model: "gpt-4o", // with std/openai you only get gpt-4o-mini
  max_tokens: 30,
});

console.log(completion.choices[0].message.content);
openaistreaminghtml
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
HTTP
import OpenAI from "npm:openai";
const openai = new OpenAI();
export default async (req: Request): Promise<Response> => {
  const stream = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Say hello" }], // placeholder prompt
    stream: true,
  });
  return new Response(stream.toReadableStream());
};
openAIStreamingExample
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
  const openai = new OpenAI();
  const stream = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Say hello" }], // placeholder prompt
    stream: true,
  });
  return new Response(stream.toReadableStream());
}
fondGrayRoadrunner
@tsuchi_ya
An interactive, runnable TypeScript val by tsuchi_ya
Script
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Say hello" }], // placeholder prompt
  model: "gpt-4o-mini",
});
console.log(completion.choices[0].message.content);
getModelBuilder
@webup
An interactive, runnable TypeScript val by webup
Script
provider?: "openai" | "huggingface";
} = { type: "llm", provider: "openai" }, options?: any) {
if (spec?.provider === "openai")
args.openAIApiKey = process.env.OPENAI;
matches({ type: "llm", provider: "openai" }),
const { OpenAI } = await import("npm:langchain/llms/openai");
return new OpenAI(args);
matches({ type: "chat", provider: "openai" }),
const { ChatOpenAI } = await import("npm:langchain/chat_models/openai");
return new ChatOpenAI(args);
comfortableOrangeTyrannosaurus
@arthrod
An interactive, runnable TypeScript val by arthrod
Script
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Say hello" }], // placeholder prompt
  model: "gpt-4o-mini",
});
gptExample
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import process from "node:process";
import { OpenAI } from "npm:openai";
const openai = new OpenAI({ apiKey: process.env.openai });
let chatCompletion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Make a short joke or pun" }],
  model: "gpt-3.5-turbo", // model is required; assumed here
});
console.log(chatCompletion.choices[0].message.content);
semanticSearch
@yawnxyz
In-memory semantic search; load it up with valtown KV. This is a "dumb" version of vector search, for prototyping RAG responses and UIs — with both regular search (w/ Lunr) and vector search (with OpenAI embeddings + cosine similarity)

Usage:

import { semanticSearch } from "https://esm.town/v/yawnxyz/semanticSearch";
const documents = [
{ id: 1, content: 'cats dogs' },
{ id: 2, content: 'elephants giraffes lions tigers' },
{ id: 3, content: 'edam camembert cheddar' }
];
async function runExample() {
// Add documents to the semantic search instance
await semanticSearch.addDocuments(documents);
const results = await semanticSearch.search('animals', 0, 3);
console.log('Top 3 search results for "animals":');
console.log(results);
}
runExample();
Script
import { embed, embedMany } from "npm:ai";
import { openai } from "npm:@ai-sdk/openai";
import lunr from "https://cdn.skypack.dev/lunr";
allowHeaders: ['Content-Type'],
openai.apiKey = Deno.env.get("OPENAI_API_KEY");
class SemanticSearch {
const { embedding } = await embed({
model: openai.embedding(modelName),
value: text,
const { embeddings } = await embedMany({
model: openai.embedding(modelName),
values: texts,
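The vector-search half of this val comes down to cosine similarity between embedding vectors. A minimal sketch of that piece (the function below is illustrative, not the val's own code):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|).
// 1 means same direction, 0 orthogonal, -1 opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Ranking documents then means embedding the query, scoring it against each stored document embedding, and sorting descending.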
openaiUploadFile
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import { fetch } from "https://esm.town/v/std/fetch";
export async function openaiUploadFile({ key, data, purpose = "assistants" }: {
key: string;
formData.append("file", file, "data.json");
let result = await fetch("https://api.openai.com/v1/files", {
method: "POST",
if (result.error)
throw new Error("OpenAI Upload Error: " + result.error.message);
else
complete
@webup
An interactive, runnable TypeScript val by webup
Script
export const complete = async (prompt: string | object, options = {}) => {
  // Initialize OpenAI API client (legacy pre-v4 SDK)
  const { Configuration, OpenAIApi } = await import("https://esm.sh/openai");
  const configuration = new Configuration({
    apiKey: process.env.OPENAI,
  });
  const openai = new OpenAIApi(configuration);
  // Request a text completion (not chat)
  const completion = await openai.createCompletion({
    model: "text-davinci-003",
    prompt,
    ...options,
  });
  return completion.data.choices[0].text;
};
aiEmailAssistant
@simplescraper
Email AI Assistant

Chat with your favorite AI via email (with PDF attachment support)

What It Does

This script allows you to:
- Send emails to OpenAI. The text will be treated as the prompt
- Parse PDF attachments and include their contents in the AI's analysis.
- Get responses directly to your inbox.

Setup guide
1. Copy this Val and save it as an Email Val (choose Val type in top-right corner of editor)
2. Add your OpenAI API key to line 8 (or use an environment variable: https://docs.val.town/reference/environment-variables/)
3. Copy the email address of the Val (click 3 dots in top-right > Copy > Copy email address)
4. Write your email, include any attachments, and send it to the Val email address. The AI will respond after a few seconds.
Email
const openaiUrl = "https://api.openai.com/v1/chat/completions";
const apiKey = Deno.env.get("OPENAI_KEY"); // replace this entire line with your OpenAI API key as a string, e.g., "sk-123..." or use environment variable: https://docs.val.town/reference/environment-variables/
"OPENAI_KEY environment variable is not set. Please set it or replace this line with your API key.",
// step 4: send prompt to openai
const openaiResponse = await sendRequestToOpenAI(prompt, openaiUrl, apiKey, model);
// log the openai response
console.log("openai response:", openaiResponse);
await sendResponseByEmail(receivedEmail.from, openaiResponse);
// helper function to generate a prompt for openai
// helper function to send a request to openai
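The helper names in the excerpt hint at the flow: build a prompt from the email body plus any parsed PDF text, send it to OpenAI, and email the reply back. A hypothetical sketch of just the prompt-building step (`buildPrompt` is an illustrative name, not the val's actual helper):

```typescript
// Combine the email body with extracted PDF text so the model sees both.
function buildPrompt(emailText: string, pdfTexts: string[]): string {
  let prompt = emailText.trim();
  if (pdfTexts.length > 0) {
    const attachments = pdfTexts
      .map((text, i) => `--- Attachment ${i + 1} ---\n${text}`)
      .join("\n");
    prompt += `\n\nAttached document contents:\n${attachments}`;
  }
  return prompt;
}
```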
weatherGPT
@stevekrouse
Cron
import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";
let location = "brooklyn ny";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
images
@ale_annini
An interactive, runnable TypeScript val by ale_annini
Script
import { openaiImages } from "https://esm.town/v/ale_annini/openaiImages";
export const images = openaiImages({ prompt: "a dog", n: 2 });
ask_gpt4
@scio
An interactive, runnable TypeScript val by scio
Script
export const ask_gpt4 = async (query) => {
  const { OpenAI } = await import("https://deno.land/x/openai/mod.ts");
  const openAI = new OpenAI(process.env.OPENAI_KEY);
  const chatCompletion = await openAI.createChatCompletion({
    model: "gpt-4",
    messages: [{ role: "user", content: query }],
  });
  return chatCompletion.choices[0].message.content;
};