Search

Results include substring matches and semantically similar vals.
launch_thrifty_idea_generator
@cotr
An interactive, runnable TypeScript val by cotr
HTTP
export default async function(req: Request): Promise<Response> {
  if (req.method === "OPTIONS") {
    return new Response("ok");
  }
  const searchParams = new URL(req.url).searchParams;
  console.log(searchParams);
  const topic = searchParams.get("topic");
  console.log(topic);
  try {
    const apiKey = Deno.env.get("GEMINI_API_KEY");
    const url = "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-pro-latest:generateContent";
web_whzyD0pbS0
@dhvanil
An interactive, runnable TypeScript val by dhvanil
HTTP
export async function web_whzyD0pbS0(req) {
  return new Response(`<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>The Dark Hours: When AI Goes Quantum</title>
    <style>
      body {
        font-family: 'Courier New', monospace;
logicalTealPelican
@nbbaier
An interactive, runnable TypeScript val by nbbaier
HTTP
import { Hono } from "npm:hono";
import { type } from "npm:arktype";
import { arktypeValidator } from "npm:@hono/arktype-validator";

const app = new Hono();

const schema = type({
  name: "string",
  age: "number",
});

app.get("/", c => c.text("hono"));

app.post("/", arktypeValidator("json", schema), (c) => {
  const data = c.req.valid("json");
  return c.json({
    success: true,
    message: `${data.name} is ${data.age}`,
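A quick way to exercise the validated POST route once the val is deployed; the URL below is a placeholder, not the real endpoint:

```ts
// Hypothetical client call against the deployed val's HTTP endpoint.
const res = await fetch("https://example-logicaltealpelican.web.val.run/", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "Ada", age: 36 }),
});
console.log(await res.json()); // expected shape: { success: true, message: "Ada is 36" }
```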
ask_ai_web
@pomdtr
An interactive, runnable TypeScript val by pomdtr
Script
import { Hono } from "npm:hono";
const app = new Hono();
aiBasicExample
@kora
Using Vercel AI SDK
Script
import { ModelProvider, modelProvider } from "https://esm.town/v/yawnxyz/ai";

// basic text generation
let response = await modelProvider.gen({
  prompt: "hello, who am I speaking to?",
  provider: "google",
} as any);
console.log("res:", response);
creative_upscaler
@fal
Script
## Creative Upscaler

link to val - https://www.val.town/v/fal/creative_upscaler

### Usage

```js
const upscaledImage = @fal.creative_upscaler("an owl", "https://storage.googleapis.com/falserverless/model_tests/upscale/owl.png")
```

### Usage

```js
import fal from "npm:@fal-ai/serverless-client";

const result = await fal.subscribe("fal-ai/creative-upscaler", {
  input: {
    prompt: "an owl",
    image_url: "https://storage.googleapis.com/falserverless/model_tests/upscale/owl.png",
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
```

https://www.fal.ai/models/creative-upscaler

export let creativeUpscaler = async (
  prompt: string,
  image_url: string,
  creativity: number = 0.5,
gptApiFramework
@vlad
Script
Allows for automatic generation of a Hono API compatible with GPTs. Endpoints' inputs and outputs need to be specified via types, from which the OpenAPI spec is generated automatically and made available via the /gpt/schema endpoint.

Usage example:

```ts
import { GptApi } from "https://esm.town/v/xkonti/gptApiFramework";
import { z } from "npm:zod";

/**
 * COMMON TYPES
 */

const ResponseCommandSchema = z.object({
  feedback: z.string().describe("Feedback regarding submitted action"),
  command: z.string().describe("The command for the Mediator AI to follow strictly"),
  data: z.string().optional().describe("Additional data related to the given command"),
}).describe("Contains feedback and further instructions to follow");

export type ResponseCommand = z.infer<typeof ResponseCommandSchema>;

/**
 * INITIALIZE API
 */

const api = new GptApi({
  url: "https://xkonti-planoverseerai.web.val.run",
  title: "Overseer AI API",
  description: "The API for interacting with the Overseer AI",
  version: "1.0.0",
});

/**
 * REQUIREMENTS GATHERING ENDPOINTS
 */

api.nothingToJson<ResponseCommand>({
  verb: "POST",
  path: "/newproblem",
  operationId: "new-problem",
  desc: "Endpoint for informing Overseer AI about a new problem presented by the User",
  requestSchema: null,
  requestDesc: null,
  responseSchema: ResponseCommandSchema,
  responseDesc: "Instruction on how to proceed with the new problem",
}, async (ctx) => {
  return {
    feedback: "User input downloaded. Problem analysis is required.",
    command: await getPrompt("analyze-problem"),
    data: "",
  };
});

export default api.serve();
```

export interface ApiInfo {
  url: string;
  title: string;
  description: string;
specialBlueGopher
@jeffreyyoung
An interactive, runnable TypeScript val by jeffreyyoung
HTTP
import Replicate from "npm:replicate";

const replicate = new Replicate({
  auth: Deno.env.get("REPLICATE_API_KEY"),
});

// `serve` comes from a chat-bot helper that is not part of this preview.
export default serve({
  async *handleMessage(req) {
    const lastMsg = req.query.at(-1);
    const imgUrl = lastMsg?.attachments?.at?.(0)?.url;
    const maskUrl = lastMsg?.attachments?.at?.(1)?.url;
    const prompt = lastMsg?.content?.trim();
    if (!imgUrl || !maskUrl || !prompt) {
      yield "Please include a prompt, an image and a mask";
sdxl
@fal
Script
## SDXL (fastest)

https://www.fal.ai/models/stable-diffusion-xl

link to val - https://www.val.town/v/fal/sdxl

```js
import * as fal from "npm:@fal-ai/serverless-client";

const result = await fal.subscribe("fal-ai/fast-sdxl", {
  input: {
    prompt:
      "photo of a rhino dressed suit and tie sitting at a table in a bar with a bar stools, award winning photography, Elke vogelsang",
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
```

export let sdxl = async (
  prompt: string,
  negative_prompt: string = "cartoon, illustration, animation. face. male, female",
  image_size: string = "square_hd",
smallweb
@pomdtr
More details about smallweb at https://smallweb.run
Script
// The original import of createClient/NormalizeOAS is not shown in this preview; fets is assumed here.
import { createClient, type NormalizeOAS } from "npm:fets";
import type openapi from "npm:smallweb@0.14.4";

if (!Deno.env.get("SMALLWEB_API_URL")) {
  throw new Error("Missing SMALLWEB_API_URL");
}
if (!Deno.env.get("SMALLWEB_API_TOKEN")) {
  throw new Error("Missing SMALLWEB_API_TOKEN");
}

export const smallweb = createClient<NormalizeOAS<typeof openapi>>({
  endpoint: Deno.env.get("SMALLWEB_API_URL"),
  globalParams: {
    headers: { Authorization: `Bearer ${Deno.env.get("SMALLWEB_API_TOKEN")}` },
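A client built this way exposes one method per path in the OpenAPI document, so calls look roughly like the sketch below; the path and response handling are purely illustrative, not taken from smallweb's actual spec:

```ts
// Hypothetical call; real paths come from smallweb's OpenAPI definition.
const res = await smallweb["/v0/apps"].get();
if (res.ok) {
  console.log(await res.json());
}
```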
grayFinch
@kingishb
Quick AI web search via email (useful for apple watch)
Email
// A dumb search engine I can use from my apple watch by emailing a question.
export default async function(e: Email) {
  if (!e.from.endsWith("<brian@sarriaking.com>")) {
    console.error("unauthorized!", e.from);
    return;
  }
  const resp = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      "accept": "application/json",
googleGenerativeAIStreamingExample
@tr3ntg
HTTP
# Google Generative AI Streaming Example

Example Val showing how to set up an authenticated Google `GoogleGenerativeAI` client.

**Prerequisite:**

Follow Google's [Getting Started guide](https://ai.google.dev/gemini-api/docs/get-started/tutorial?lang=node) to get an API key and view some example methods.
import { GoogleGenerativeAI } from "npm:@google/generative-ai";

export default async function(req: Request): Promise<Response> {
  const genAI = new GoogleGenerativeAI(Deno.env.get("your-api-key"));
  const generativeModel = genAI.getGenerativeModel({
    model: "gemini-1.5-flash-001",
  });
  const request = {
    contents: [{ role: "user", parts: [{ text: "How are you doing today?" }] }],
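Since the val is a streaming example, the part not shown presumably pipes the model's stream into the HTTP response. A minimal, self-contained sketch of that pattern with the @google/generative-ai SDK; the function shape and environment variable name are assumptions, not the val's code:

```ts
import { GoogleGenerativeAI } from "npm:@google/generative-ai";

// Minimal sketch (not the val's code) of streaming Gemini output into an HTTP response.
export async function streamReply(prompt: string): Promise<Response> {
  const genAI = new GoogleGenerativeAI(Deno.env.get("GEMINI_API_KEY")!); // env var name assumed
  const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash-001" });
  const result = await model.generateContentStream({
    contents: [{ role: "user", parts: [{ text: prompt }] }],
  });
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      for await (const chunk of result.stream) {
        controller.enqueue(encoder.encode(chunk.text()));
      }
      controller.close();
    },
  });
  return new Response(stream, { headers: { "Content-Type": "text/plain; charset=utf-8" } });
}
```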
hfApiGateway
@iamseeley
HTTP
## 🤖 A gateway to Hugging Face's Inference API

You can perform various NLP tasks using different [models](https://huggingface.co/models). The gateway supports multiple tasks, including feature extraction, text classification, token classification, question answering, summarization, translation, text generation, and sentence similarity.

## Features

- **Feature Extraction**: Extract features from text using models like `BAAI/bge-base-en-v1.5`.
- **Text Classification**: Classify text sentiment, emotions, etc., using models like `j-hartmann/emotion-english-distilroberta-base`.
- **Token Classification**: Perform named entity recognition (NER) and other token-level classifications.
- **Question Answering**: Answer questions based on a given context.
- **Summarization**: Generate summaries of longer texts.
- **Translation**: Translate text from one language to another.
- **Text Generation**: Generate text based on a given prompt.
- **Sentence Similarity**: Calculate semantic similarity between sentences.

## Usage

Send a POST request with the required inputs to the endpoint with the appropriate task and model parameters. Or use the default models.

### Example Default Model Request

```bash
curl -X POST -H "Content-Type: application/json" -d '{"inputs": {"source_sentence": "Hello World", "sentences": ["Goodbye World", "How are you?", "Nice to meet you."]}}' "https://iamseeley-hfapigateway.web.val.run/?task=feature-extraction"
```

### Example Requests

**Feature Extraction**

```bash
curl -X POST -H "Content-Type: application/json" -d '{"inputs": ["Hello World", "Goodbye World"]}' "https://iamseeley-hfapigateway.web.val.run/?task=feature-extraction&model=BAAI/bge-base-en-v1.5"
```

**Feature Extraction**

```bash
curl -X POST -H "Content-Type: application/json" -d '{"inputs": {"source_sentence": "Hello World", "sentences": ["Goodbye World", "How are you?", "Nice to meet you."]}}' "https://iamseeley-hfapigateway.web.val.run/?task=feature-extraction&model=sentence-transformers/all-MiniLM-L6-v2"
```

**Text Classification**

```bash
curl -X POST -H "Content-Type: application/json" -d '{"inputs": "I love programming!"}' "https://iamseeley-hfapigateway.web.val.run/?task=text-classification&model=j-hartmann/emotion-english-distilroberta-base"
```

**Token Classification**

```bash
curl -X POST -H "Content-Type: application/json" -d '{"inputs": "My name is John and I live in New York."}' "https://iamseeley-hfApiGateway.web.val.run/?task=token-classification&model=dbmdz/bert-large-cased-finetuned-conll03-english"
```

**Question Answering**

```bash
curl -X POST -H "Content-Type: application/json" -d '{"inputs": {"question": "What is the capital of France?", "context": "The capital of France is Paris, a major European city and a global center for art, fashion, gastronomy, and culture."}}' "https://iamseeley-hfapigateway.web.val.run/?task=question-answering&model=deepset/roberta-base-squad2"
```

**Summarization**

```bash
curl -X POST -H "Content-Type: application/json" -d '{"inputs": "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."}' "https://iamseeley-hfapigateway.web.val.run/?task=summarization&model=sshleifer/distilbart-cnn-12-6"
```

**Translation**

```bash
curl -X POST -H "Content-Type: application/json" -d '{"inputs": "Hello, how are you?"}' "https://iamseeley-hfapigateway.web.val.run/?task=translation&model=google-t5/t5-small"
```

**Text Generation**

```bash
curl -X POST -H "Content-Type: application/json" -d '{"inputs": "Once upon a time"}' "https://iamseeley-hfapigateway.web.val.run/?task=text-generation&model=gpt2"
```

**Sentence Similarity**

```bash
curl -X POST -H "Content-Type: application/json" -d '{"inputs": {"source_sentence": "Hello World", "sentences": ["Goodbye World"]}}' "https://iamseeley-hfapigateway.web.val.run/?task=sentence-similarity&model=sentence-transformers/all-MiniLM-L6-v2"
```

## Val Examples

### Using Pipeline

```ts
import Pipeline from "https://esm.town/v/iamseeley/pipeline";

// ...
  } else if (req.method === "POST") {
    const { inputs } = await req.json();
    const pipeline = new Pipeline("task", "model");
    const result = await pipeline.run(inputs);
    return new Response(JSON.stringify(result), { headers: { "Content-Type": "application/json" } });
  }
}
```

- exampleTranslation
- exampleTextClassification
- exampleFeatureExtraction
- exampleTextGeneration
- exampleSummarization
- exampleQuestionAnswering

const defaultModels = {
  "feature-extraction": "sentence-transformers/all-MiniLM-L6-v2",
  "text-classification": "distilbert-base-uncased-finetuned-sst-2-english",
  "token-classification": "dbmdz/bert-large-cased-finetuned-conll03-english",
falSDXLExample
@isidentical
An interactive, runnable TypeScript val by isidentical
Script
import * as fal from "npm:@fal-ai/serverless-client";

fal.config({
  // Can also be auto-configured using environment variables:
  credentials: Deno.env.get("FAL_KEY"),
});

const prompt = "a cute and happy dog";
const result: any = await fal.run("fal-ai/fast-lightning-sdxl", { input: { prompt } });
console.log(result.images[0].url);
braintrustSDK
@charmaine
Script
# Braintrust SDK

[Braintrust](https://www.braintrust.dev/) is a platform for evaluating and shipping AI products. To learn more about Braintrust or sign up for free, visit the [website](https://www.braintrust.dev/) or check out the [docs](https://www.braintrust.dev/docs).

The SDKs include utilities to:

- Log experiments and datasets to Braintrust
- Run evaluations (via the `Eval` framework)

This template shows you how to use the Braintrust SDK. This starter template was ported from this one on GitHub. To run it:

1. Click **Fork** on this val
2. Get your Braintrust API key at https://www.braintrust.dev/app/settings?subroute=api-keys
3. Add it to your project Environment Variables as `BRAINTRUST_API_KEY`
4. Click **Run** on the tutorial val

Eval("Say Hi Bot", {
  data: () => {
    return [
      {
        input: "Foo",