Search

Results include substring matches and semantically similar vals.
getJoke
@thu
An interactive, runnable TypeScript val by thu
Script
export const getJoke = (async () => {
const { ChatCompletion } = await import("npm:openai");
const result = await ChatCompletion.create({
model: "gpt-3.5-turbo",
demoOpenAIGPTSummary
@zzz
An interactive, runnable TypeScript val by zzz
Script
import { confession } from "https://esm.town/v/alp/confession?v=2";
import { runVal } from "https://esm.town/v/std/runVal";
export let demoOpenAIGPTSummary = await runVal(
"zzz.OpenAISummary",
confession,
modelName: "gpt-3.5-turbo",
TopHackerNewsDailyEmail
@browserbase
Browserbase offers a reliable, high-performance serverless developer platform to run, manage, and monitor headless browsers at scale. Leverage our infrastructure to power your web automation and AI agents. Get started with Browserbase for free. If you have any questions, reach out to developer@browserbase.com.
Cron
import { loadPageContent } from "https://esm.town/v/charlypoly/browserbaseUtils";
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
import { z } from "npm:zod";
.describe("Top 5 stories on Hacker News"),
// we create an OpenAI tool that takes our schema as its argument
const extractContentTool: any = {
type: "function",
function: {
name: "extract_content",
parameters: zodToJsonSchema(schema),
const openai = new OpenAI();
// We ask OpenAI to extract the content from the given web page.
// The model will call our `extract_content` tool and fill in
// its arguments according to the schema.
const completion = await openai.chat.completions.create({
model: "gpt-4-turbo",
tool_choice: "auto",
// we retrieve the serialized arguments generated by OpenAI
const result = completion.choices[0].message.tool_calls![0].function.arguments;
// the serialized arguments are parsed into a valid JavaScript array of objects
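The last two comments above describe deserializing the tool call. A small sketch of that step is below; the `ToolCallCompletion` type mirrors the relevant slice of the chat-completions response shape, and the generic parameter is an assumption for the caller's convenience.

```typescript
// OpenAI returns tool-call arguments as a JSON string; they must be
// parsed back into JavaScript objects before use.
type ToolCallCompletion = {
  choices: { message: { tool_calls?: { function: { arguments: string } }[] } }[];
};

export function parseToolArguments<T>(completion: ToolCallCompletion): T {
  const call = completion.choices[0].message.tool_calls?.[0];
  if (!call) throw new Error("model did not call the tool");
  return JSON.parse(call.function.arguments) as T;
}
```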
sureVioletSlug
@arthrod
An interactive, runnable TypeScript val by arthrod
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
import { email } from "https://esm.town/v/std/email";
const openai = new OpenAI();
const AUTH_TOKEN = Deno.env.get("valtown");
async function determineIntent(content: string): Promise<string> {
const completion = await openai.chat.completions.create({
messages: [
return completion.choices[0].message.content.trim().toLowerCase();
async function handleSearch(query: string): Promise<void> {
await email({
text: `Search query: ${query}`,
async function handleAnalyze(data: string): Promise<void> {
await email({
text: `Analysis request: ${data}`,
async function handleCreate(prompt: string): Promise<void> {
await email({
text: `Creation prompt: ${prompt}`,
export default async function (req: Request): Promise<Response> {
// Check for proper authentication
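The val above classifies a request's intent and then emails a different message per intent. A pure dispatch sketch of that pattern is below; the intent labels ("search", "analyze", "create") are assumptions inferred from the handler names, since the original's prompt is not shown.

```typescript
// Map a model-produced intent string to a handler, tolerating the
// whitespace/casing the snippet's trim().toLowerCase() suggests.
type Handler = (payload: string) => Promise<void> | void;

export function routeIntent(
  intent: string,
  handlers: Record<string, Handler>,
): Handler {
  const handler = handlers[intent.trim().toLowerCase()];
  if (!handler) throw new Error(`Unknown intent: ${intent}`);
  return handler;
}
```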
claude_d482d9ee_eff3_42e6_9779_a012b1e1f7b4
@adagradschool
An interactive, runnable TypeScript val by adagradschool
HTTP
export default function handler(req) {
return new Response(`"\n <!DOCTYPE html>\n <html>\n <head>\n <title>Claude Chat Conversation</title
headers: {
gptApiFramework
@xkonti
Allows for automatic generation of a Hono API compatible with GPTs. Endpoints' inputs and outputs are specified via types, from which the OpenAPI spec is generated automatically and served at the /gpt/schema endpoint.

⚠️ Breaking changes introduced in v23 & v24:
- nothingToJson no longer takes the generic TResponse; the type is inferred from the endpoint definition. The endpoint definition no longer needs requestSchema, requestDesc, and responseDesc; descriptions are inferred from the schema description.
- jsonToJson no longer takes the generics TRequest and TResponse; types are inferred from the endpoint definition. The endpoint definition no longer needs requestDesc and responseDesc; descriptions are inferred from the schema description.

Usage example:

import { GptApi } from "https://esm.town/v/xkonti/gptApiFramework";
import { z } from "npm:zod";

/**
 * COMMON TYPES
 */
const ResponseCommandSchema = z.object({
  feedback: z.string().describe("Feedback regarding submitted action"),
  command: z.string().describe("The command for the Mediator AI to follow strictly"),
  data: z.string().optional().describe("Additional data related to the given command"),
}).describe("Contains feedback and further instructions to follow");

/**
 * INITIALIZE API
 */
const api = new GptApi({
  url: "https://xkonti-planoverseerai.web.val.run",
  title: "Overseer AI API",
  description: "The API for interacting with the Overseer AI",
  version: "1.0.0",
  policyGetter: async () => {
    const { markdownToPrettyPage } = await import("https://esm.town/v/xkonti/markdownToHtmlPage?v=5");
    return await markdownToPrettyPage("# Privacy Policy\n\n## Section 1...");
  },
});

/**
 * REQUIREMENTS GATHERING ENDPOINTS
 */
api.nothingToJson({
  verb: "POST",
  path: "/newproblem",
  operationId: "new-problem",
  desc: "Endpoint for informing Overseer AI about a new problem presented by the User",
  responseSchema: ResponseCommandSchema,
  responseDesc: "Instruction on how to proceed with the new problem",
}, async (ctx) => {
  return {
    feedback: "User input downloaded. Problem analysis is required.",
    command: await getPrompt("analyze-problem"),
    data: "",
  };
});

export default api.serve();
Script
version: string;
* An optional function that returns the policy to be used available at `/privacypolicy`.
policyGetter?: (() => string) | (() => Promise<string>);
* @returns The generated JSON schema.
function getSchemaDesc(schema: z.Schema | null) {
if (!schema) return null;
* @returns The paths of the OpenAPI spec.
function getPathsDesc(endpoints: EndpointDefinition[]): Paths {
const paths: Paths = {};
* @returns The OpenAPI spec.
function getOpenApiSpec(
url: string,
* @param endpointDef Definition of the endpoint.
* @param handler Function that handles the request.
jsonToNothing<TRequestSchema extends z.Schema>(
* @param endpointDef Definition of the endpoint.
* @param handler Function that handles the request.
nothingToJson<TResponseSchema extends z.Schema>(
* @param endpointDef Definition of the endpoint.
* @param handler Function that handles the request.
jsonToJson<TRequestSchema extends z.Schema, TResponseSchema extends z.Schema>(
* @param endpointDef Definition of the endpoint.
* @param handler Function that handles the request.
private registerHandler(
throw new Error(`HTTP verb ${verb} not supported`);
* Returns a function that can be used to serve the API.
* @example ValTown usage:
seoKeywordResearchTool
@websrai
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
"OpenAI",
function App() {
const [selectedProvider, setSelectedProvider] = useState("OpenAI");
setSelectedProvider(userData.aiProvider || "OpenAI");
function client() {
export default async function server(request: Request): Promise<Response> {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
// Helper function to get or create user
async function getOrCreateUser() {
braintrustSDKBroken
@charmaine
Braintrust SDK
Braintrust is a platform for evaluating and shipping AI products. To learn more about Braintrust or sign up for free, visit the website or check out the docs.
The SDKs include utilities to:
- Log experiments and datasets to Braintrust
- Run evaluations (via the Eval framework)
This template shows you how to use the Braintrust SDK. This starter template was ported from this one on GitHub. To run it:
1. Click Fork on this val
2. Get your Braintrust API key at https://www.braintrust.dev/app/settings?subroute=api-keys
3. Add it to your project Environment Variables (on the left sidebar of this project) as BRAINTRUST_API_KEY
4. Click Run on the tutorial val
Script
import { LevenshteinScorer } from "npm:autoevals";
import { Eval } from "npm:braintrust@0.01";
export default async function handler() {
const result = {
apiKeyStatus: null,
aigeneratorblog
@websrai
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
"OpenAI",
function App() {
const [selectedProvider, setSelectedProvider] = useState("OpenAI");
setSelectedProvider(userData.aiProvider || "OpenAI");
function client() {
export default async function server(request: Request): Promise<Response> {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
// Helper function to get or create user
async function getOrCreateUser() {
googlesearch
@lolocoo
An interactive, runnable TypeScript val by lolocoo
Express (deprecated)
dateRestrict: "m1", // restrict results to the past month
const getSearch = async (data) => {
const response = await fetch("https://api.openai.com/v1/completions", {
method: "POST", // the completions endpoint requires POST; GET requests cannot carry a body
body: JSON.stringify(data),
openapi_playground
@stainless_em
openapi playground: temporarily hosts Swagger and Prism for your OpenAPI spec
HTTP
code = `# Paste your OpenAPI spec here`;
return c.html(<App code={code} />);
function template(code: string) {
return `import { makeServer } from "https://esm.town/v/stainless_em/openapiPlaygroundServer"\nexport default makeServer(${
JSON.stringify(YAML.parse(code))
anthropicCaching
@stevekrouse
This val creates an interactive webpage that demonstrates the functionality of the Anthropic API. It uses a React frontend with an input for the API key and buttons to trigger different operations. The Anthropic API key is stored in the frontend state and sent with each API request.
HTTP
* This val creates an interactive webpage that demonstrates the functionality of the Anthropic API.
* It uses a React frontend with an input for the API key and buttons to trigger different operations.
import { createRoot } from "https://esm.sh/react-dom/client";
function App() {
const [apiKey, setApiKey] = useState("");
</div>
function client() {
createRoot(document.getElementById("root")).render(<App />);
client();
async function server(request: Request): Promise<Response> {
const url = new URL(request.url);
headers: { "content-type": "text/html" },
async function fetchContent(): Promise<string> {
const response = await fetch("https://www.gutenberg.org/cache/epub/1342/pg1342.txt");
return bookContent;
async function runNonCachedCall(apiKey: string): Promise<string> {
const { default: anthropic } = await import("npm:@anthropic-ai/sdk@0.26.1");
Response: ${response.content[0].text}`;
async function runCachedCall(apiKey: string): Promise<string> {
const { default: anthropic } = await import("npm:@anthropic-ai/sdk@0.26.1");
Response: ${response.content[0].text}`;
async function runMultiTurnConversation(apiKey: string): Promise<string> {
const { default: anthropic } = await import("npm:@anthropic-ai/sdk@0.26.1");
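The cached call above most likely differs from the non-cached one by marking the large, stable book text for Anthropic's prompt caching with `cache_control: { type: "ephemeral" }`. The sketch below builds that system-block structure as a pure function so it can be inspected without an API key; the instruction text is an assumption.

```typescript
// Shape of an Anthropic system content block; cache_control marks a
// block as cacheable across requests (prompt caching).
type SystemBlock = {
  type: "text";
  text: string;
  cache_control?: { type: "ephemeral" };
};

export function buildCachedSystem(bookContent: string): SystemBlock[] {
  return [
    // Small, changing instructions stay uncached.
    { type: "text", text: "Answer questions about the book below." },
    // The large, stable context is marked for caching.
    { type: "text", text: bookContent, cache_control: { type: "ephemeral" } },
  ];
}
```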
modelSampleChatGenerate
@webup
An interactive, runnable TypeScript val by webup
Script
const builder = await getModelBuilder({
type: "chat",
provider: "openai",
const model = await builder();
const { SystemMessage, HumanMessage } = await import("npm:langchain/schema");
relationshipgenerator
@ejfox
// This val receives text input, sends it to OpenAI to generate relationships, and returns a newline-delimited list of relationships.
HTTP
// This val receives text input, sends it to OpenAI to generate relationships,
// and returns a newline-delimited list of relationships.
// It uses the OpenAI API to generate the relationships.
// Tradeoff: This approach relies on an external API, which may have rate limits or costs.
// curl -X POST -H "Content-Type: text/plain" -d "Your text here" https://your-val-url.web.val.run
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
export default async function main(req: Request): Promise<Response> {
if (req.method !== "POST") {
const text = await req.text();
const completion = await openai.chat.completions.create({
model: "gpt-3.5-turbo",
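The val promises a newline-delimited list, which implies some cleanup of the model's raw output. A sketch of that post-processing step is below; the exact cleanup rules (stripping list markers, dropping blank lines) are assumptions, not the original's code.

```typescript
// Normalize raw model output into a clean list of relationship lines:
// split on newlines, strip leading bullet/number markers, drop blanks.
export function toRelationshipList(raw: string): string[] {
  return raw
    .split("\n")
    .map((line) => line.replace(/^[-*\d.)\s]+/, "").trim())
    .filter((line) => line.length > 0);
}
```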
getContentFromUrl
@yawnxyz
Use this for summarizers. Combines https://r.jina.ai/URL and markdown.download's YouTube transcription getter to do its best to retrieve content from URLs.
Examples:
https://arstechnica.com/space/2024/06/nasa-indefinitely-delays-return-of-starliner-to-review-propulsion-data
https://journals.asm.org/doi/10.1128/iai.00065-23
Usage:
https://yawnxyz-getcontentfromurl.web.val.run/https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10187409/
https://yawnxyz-getcontentfromurl.web.val.run/https://www.youtube.com/watch?v=gzsczZnS84Y&ab_channel=PhageDirectory
HTTP
let result = await ai({
provider: provider || "openai",
model: model || "gpt-3.5-turbo",
let result = await ai({
provider: provider || "openai",
model: model || "gpt-3.5-turbo",
let result = await ai({
provider: "openai",
embed: true,
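The repeated `provider || "openai"` / `model || "gpt-3.5-turbo"` fallbacks above can be factored into one helper. This is a sketch of that pattern only; the `ai` helper itself lives in yawnxyz's code and is not reproduced here.

```typescript
// Resolve the caller's provider/model choice, defaulting to the values
// the snippets above fall back to.
export function resolveModel(provider?: string, model?: string) {
  return {
    provider: provider || "openai",
    model: model || "gpt-3.5-turbo",
  };
}
```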