• thisval
    @begoon
This val default-exports a function returning `ValInfo`. The val information comes from the Deno stack trace induced by `throw new Error()`.

```ts
type ValInfo = {
  stack: string[]; // mostly for debugging
  endpoint: string; // val endpoint URL, empty if it is not an HTTP val
  user: string; // val.town user name
  name: string; // val name
};
```

Here is an example program:

```ts
import thisval from "https://esm.town/v/begoon/thisval";

const val = thisval();

export default async function(req: Request): Promise<Response> {
  return Response.json(val);
}
```

When invoked, it returns:

```json
{
  "stack": [
    "Error",
    "    at info (https://esm.town/v/begoon/thisval?v=12:10:11)",
    "    at https://esm.town/v/begoon/thisvaltest?v=2:3:13"
  ],
  "endpoint": "begoon-thisvaltest.web.val.run",
  "user": "begoon",
  "name": "thisvaltest"
}
```

A sketch of how such stack-trace parsing might work follows below.
    Script
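The idea behind the lookup: every esm.town stack frame embeds the user and val name in the module URL, so the function can inspect its own call stack and pull the identity out of the outermost frame. Below is a minimal illustrative sketch of that idea, not the actual thisval source; the function name, the regex, and the endpoint derivation are all assumptions.

```ts
// Illustrative sketch only: derive a val's identity from a Deno stack trace.
// Assumes esm.town frames look like:
//   at https://esm.town/v/<user>/<name>?v=<n>:<line>:<col>
function parseValIdentity(): { stack: string[]; endpoint: string; user: string; name: string } {
  const stack = (new Error().stack ?? "").split("\n").map((line) => line.trim());
  // Walk from the outermost frame inward so we report the calling val,
  // not this helper's own module.
  for (let i = stack.length - 1; i >= 0; i--) {
    const m = stack[i].match(/https:\/\/esm\.town\/v\/([^/]+)\/([^/?:]+)/);
    if (m) {
      const [, user, name] = m;
      // Hypothetical endpoint shape for HTTP vals: <user>-<name>.web.val.run
      return { stack, endpoint: `${user}-${name}.web.val.run`, user, name };
    }
  }
  return { stack, endpoint: "", user: "", name: "" };
}
```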
• openai
    @std
OpenAI - Docs ↗

Use OpenAI's chat completion API with `std/openai`. This integration enables access to OpenAI's language models without needing to acquire API keys. For free Val Town users, all calls are sent to `gpt-4o-mini`.

Usage

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4",
  max_tokens: 30,
});

console.log(completion.choices[0].message.content);
```

Limits

While our wrapper simplifies the integration of OpenAI, there are a few limitations to keep in mind:

- Usage Quota: We limit each user to 10 requests per minute.
- Features: Chat completions is the only endpoint available.

If these limits are too low, let us know! You can also get around the limitation by using your own keys:

1. Create your own API key on OpenAI's website
2. Create an environment variable named `OPENAI_API_KEY`
3. Use the OpenAI client from `npm:openai` (a combined sketch of this setup follows below):

```ts
import { OpenAI } from "npm:openai";

const openai = new OpenAI();
```
    Script
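Putting the bring-your-own-key steps together, here is a minimal sketch. It assumes `OPENAI_API_KEY` has been set as a Val Town environment variable; the npm client also reads that variable automatically, but passing it explicitly makes the dependency visible.

```ts
// Bring-your-own-key sketch: uses npm:openai directly, bypassing the
// std/openai wrapper and its 10 requests/minute quota.
import { OpenAI } from "npm:openai";

const openai = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });

const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini", // any model your key can access
  max_tokens: 30,
  messages: [{ role: "user", content: "Say hello in a creative way" }],
});

console.log(completion.choices[0].message.content);
```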