Search

Results include substring matches and semantically similar vals.
victoriousGreenLynx
@stevekrouse
Cerebras Inference template

This val shows you how to deploy an app using Cerebras Inference on Val Town in seconds.

What is Cerebras? Cerebras is an American chip manufacturer that produces large wafer chips that deliver mind-blowing LLM inference speeds. As of this writing on Jan 17, 2025, Cerebras Inference serves Llama 3.1 8B, 3.1 70B, and 3.3 70B at a jaw-dropping 2k tokens per second, roughly 50x faster than what the frontier labs serve. Llama 3.3 70B at 2k tokens per second is particularly noteworthy because it is a GPT-4-class model. This level of intelligence at that speed will unlock whole new classes of applications.

Quick start

There are two ways to get started:

1. Fork this app and customize it (or ask Townie AI to customize it)
2. Start a new chat with Townie AI and copy & paste the following instructions:

Use Cerebras for AI on the backend like so:

```ts
const { OpenAI } = await import("https://esm.sh/openai");
const client = new OpenAI({
  apiKey: "YOUR_CEREBRAS_API_KEY",
  baseURL: "https://api.cerebras.ai/v1",
});
const response = await client.chat.completions.create({
  model: "llama-3.3-70b",
  messages: [],
});
const generatedText = response.choices[0].message.content;
```

For example, the val in this template was created by asking Townie AI to "Make a chatgpt clone", hitting shift+enter twice, pasting in the Cerebras instructions above, and hitting enter. Townie built this app on its first try, in about 20 seconds.

Sample apps:

- Cerebras Searcher: a Perplexity clone that uses SerpAPI to do RAG and summaries with Cerebras (requires a SerpAPI key)
- Cerebras Coder: an app that generates websites in a second with Cerebras
- Cerebras Debater: an app that truly shows off Cerebras's speed: it's Cerebras talking to Cerebras in a debate
HTTP
Use Cerebras for AI on the backend like so:
const { OpenAI } = await import("https://esm.sh/openai");
const client = new OpenAI({
  apiKey: "YOUR_CEREBRAS_API_KEY",
  baseURL: "https://api.cerebras.ai/v1",
});
import React, { useState, useEffect, useRef } from "https://esm.sh/react@18.2.0";
function App() {
const [messages, setMessages] = useState([]);
</div>
function client() {
createRoot(document.getElementById("root")).render(<App />);
client();
export default async function server(request: Request): Promise<Response> {
if (request.method === "POST" && new URL(request.url).pathname === "/chat") {
const { messages } = await request.json();
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
try {
const response = await openai.chat.completions.create({
model: "gpt-4o-mini",
} catch (error) {
console.error("Error calling OpenAI API:", error);
if (error.status === 429) {
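The snippets above are discontinuous match fragments. Assembled into one minimal, self-contained HTTP val following the template's own instructions, the pattern looks roughly like this (a sketch, not the val's exact source; it assumes a `CEREBRAS_API_KEY` environment variable and a POST body of `{ messages }`):

```ts
// Sketch of the Cerebras-on-Val-Town pattern described in the README above.
export default async function server(request: Request): Promise<Response> {
  if (request.method !== "POST" || new URL(request.url).pathname !== "/chat") {
    return new Response("Not found", { status: 404 });
  }
  const { messages } = await request.json();

  const { OpenAI } = await import("https://esm.sh/openai");
  const client = new OpenAI({
    apiKey: Deno.env.get("CEREBRAS_API_KEY"), // or a literal key, as this fork's README shows
    baseURL: "https://api.cerebras.ai/v1",
  });

  const response = await client.chat.completions.create({
    model: "llama-3.3-70b",
    messages,
  });
  return Response.json({ content: response.choices[0].message.content });
}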
Tourist
@vawogbemi
Cerebras Inference template

This val shows you how to deploy an app using Cerebras Inference on Val Town in seconds.

What is Cerebras? Cerebras is an American chip manufacturer that produces large wafer chips that deliver mind-blowing LLM inference speeds. As of this writing on Jan 17, 2025, Cerebras Inference serves Llama 3.1 8B, 3.1 70B, and 3.3 70B at a jaw-dropping 2k tokens per second, roughly 50x faster than what the frontier labs serve. Llama 3.3 70B at 2k tokens per second is particularly noteworthy because it is a GPT-4-class model. This level of intelligence at that speed will unlock whole new classes of applications.

Quick start

Set up Cerebras:

1. Sign up for Cerebras
2. Get a Cerebras API key
3. Save it in a Val Town environment variable called CEREBRAS_API_KEY

Once Cerebras is set up in your Val Town account, there are two ways to get started:

1. Fork this app and customize it (or ask Townie AI to customize it)
2. Start a new chat with Townie AI and copy & paste the following instructions:

Use Cerebras for AI on the backend like so:

```ts
const { OpenAI } = await import("https://esm.sh/openai");
const client = new OpenAI({
  apiKey: Deno.env.get("CEREBRAS_API_KEY"),
  baseURL: "https://api.cerebras.ai/v1",
});
const response = await client.chat.completions.create({
  model: "llama-3.3-70b",
  messages: [],
});
const generatedText = response.choices[0].message.content;
```

For example, the val in this template was created by asking Townie AI to "Make a chatgpt clone", hitting shift+enter twice, pasting in the Cerebras instructions above, and hitting enter. Townie built this app on its first try, in about 20 seconds.

Sample apps:

- Cerebras Searcher: a Perplexity clone that uses SerpAPI to do RAG and summaries with Cerebras (requires a SerpAPI key)
- Cerebras Coder: an app that generates websites in a second with Cerebras
- Cerebras Debater: an app that truly shows off Cerebras's speed: it's Cerebras talking to Cerebras in a debate
HTTP
const { OpenAI } = await import("https://esm.sh/openai");
const client = new OpenAI({
  apiKey: Deno.env.get("CEREBRAS_API_KEY"),
  baseURL: "https://api.cerebras.ai/v1",
});
function Map({ center, zoom, markers }) {
function App() {
function client() {
async function makeGoogleMapsApiCall(endpoint, params) {
export default async function server(request: Request): Promise<Response> {
const { OpenAI } = await import("https://esm.sh/openai");
const client = new OpenAI({
console.error("Error calling Cerebras API:", error);
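Since this fork reads the key from the environment, a quick fail-fast check can save a confusing 401 later (a small sketch; the error message is illustrative):

```ts
// Assumes CEREBRAS_API_KEY was saved as a Val Town environment variable.
const apiKey = Deno.env.get("CEREBRAS_API_KEY");
if (!apiKey) {
  throw new Error("Missing CEREBRAS_API_KEY: add it under Val Town environment variables");
}
```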
Code_Debugger
@sakshamkapoor2911
Cerebras Inference template

This val shows you how to deploy an app using Cerebras Inference on Val Town in seconds.

What is Cerebras? Cerebras is an American chip manufacturer that produces large wafer chips that deliver mind-blowing LLM inference speeds. As of this writing on Jan 17, 2025, Cerebras Inference serves Llama 3.1 8B, 3.1 70B, and 3.3 70B at a jaw-dropping 2k tokens per second, roughly 50x faster than what the frontier labs serve. Llama 3.3 70B at 2k tokens per second is particularly noteworthy because it is a GPT-4-class model. This level of intelligence at that speed will unlock whole new classes of applications.

Quick start

Set up Cerebras:

1. Sign up for Cerebras
2. Get a Cerebras API key
3. Save it in a Val Town environment variable called CEREBRAS_API_KEY

Once Cerebras is set up in your Val Town account, there are two ways to get started:

1. Fork this app and customize it (or ask Townie AI to customize it)
2. Start a new chat with Townie AI and copy & paste the following instructions:

Use Cerebras for AI on the backend like so:

```ts
const { OpenAI } = await import("https://esm.sh/openai");
const client = new OpenAI({
  apiKey: Deno.env.get("CEREBRAS_API_KEY"),
  baseURL: "https://api.cerebras.ai/v1",
});
const response = await client.chat.completions.create({
  model: "llama-3.3-70b",
  messages: [],
});
const generatedText = response.choices[0].message.content;
```

For example, the val in this template was created by asking Townie AI to "Make a chatgpt clone", hitting shift+enter twice, pasting in the Cerebras instructions above, and hitting enter. Townie built this app on its first try, in about 20 seconds.

Sample apps:

- Cerebras Searcher: a Perplexity clone that uses SerpAPI to do RAG and summaries with Cerebras (requires a SerpAPI key)
- Cerebras Coder: an app that generates websites in a second with Cerebras
- Cerebras Debater: an app that truly shows off Cerebras's speed: it's Cerebras talking to Cerebras in a debate
HTTP
const { OpenAI } = await import("https://esm.sh/openai");
const client = new OpenAI({
function App() {
highlight: function(code, lang) {
function client() {
export default async function server(request: Request): Promise<Response> {
// Function to search StackOverflow
async function searchStackOverflow(query: string) {
const { OpenAI } = await import("https://esm.sh/openai");
const client = new OpenAI({
console.error("Error calling Cerebras API:", error);
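The fragments above hint at the flow: search StackOverflow, then summarize the hits with Cerebras. A hedged sketch of such a search helper, using the public Stack Exchange API (the endpoint and parameters here are my assumption, not necessarily what this val calls):

```ts
// Hypothetical helper: fetch top StackOverflow matches for a query.
async function searchStackOverflow(query: string) {
  const url = new URL("https://api.stackexchange.com/2.3/search/advanced");
  url.searchParams.set("order", "desc");
  url.searchParams.set("sort", "relevance");
  url.searchParams.set("q", query);
  url.searchParams.set("site", "stackoverflow");

  const res = await fetch(url);
  if (!res.ok) throw new Error(`Stack Exchange API error: ${res.status}`);
  const { items } = await res.json();

  // Keep only the fields a summarizer would need.
  return (items ?? []).slice(0, 5).map((i: any) => ({ title: i.title, link: i.link }));
}
```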
openai
@std
OpenAI - Docs ↗

Use OpenAI's chat completion API with std/openai. This integration enables access to OpenAI's language models without needing to acquire API keys. For free Val Town users, all calls are sent to gpt-4o-mini.

Basic Usage

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4",
  max_tokens: 30,
});
console.log(completion.choices[0].message.content);
```

Images

To send an image to ChatGPT, the easiest way is to convert it to a data URL, which is easiest to do with @stevekrouse/fileToDataURL:

```ts
import { fileToDataURL } from "https://esm.town/v/stevekrouse/fileToDataURL";

const dataURL = await fileToDataURL(file);
const response = await chat([
  {
    role: "system",
    content: `You are a nutritionist. Estimate the calories.
We only need a VERY ROUGH estimate.
Respond ONLY in a JSON array with values conforming to: {ingredient: string, calories: number}`,
  },
  {
    role: "user",
    content: [{ type: "image_url", image_url: { url: dataURL } }],
  },
], { model: "gpt-4o", max_tokens: 200 });
```

Limits

While our wrapper simplifies the integration of OpenAI, there are a few limitations to keep in mind:

- Usage Quota: We limit each user to 10 requests per minute.
- Features: Chat completions is the only endpoint available.

If these limits are too low, let us know! You can also get around the limitation by using your own keys:

1. Create your own API key on OpenAI's website
2. Create an environment variable named OPENAI_API_KEY
3. Use the OpenAI client from npm:openai:

```ts
import { OpenAI } from "npm:openai";

const openai = new OpenAI();
```
Script
# OpenAI - [Docs ↗](https://docs.val.town/std/openai)
Use OpenAI's chat completion API with [`std/openai`](https://www.val.town/v/std/openai). This integration enables access to OpenAI's language models without needing to acquire API keys.
For free Val Town users, [all calls are sent to `gpt-4o-mini`](https://www.val.town/v/std/openaiproxy?v=12#L85).
import { type ClientOptions, OpenAI as RawOpenAI } from "npm:openai";
* API Client for interfacing with the OpenAI API. Uses Val Town credentials.
export class OpenAI {
private rawOpenAIClient: RawOpenAI;
* API Client for interfacing with the OpenAI API. Uses Val Town credentials.
* @param {Core.Fetch} [opts.fetch] - Specify a custom `fetch` function implementation.
this.rawOpenAIClient = new RawOpenAI({
baseURL: "https://std-openaiproxy.web.val.run/v1",
return this.rawOpenAIClient.chat;
chat: this.rawOpenAIClient.beta.chat,
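Putting the README's basic usage into a complete HTTP val looks roughly like this (a minimal sketch; the prompt and response shape are illustrative, not part of the wrapper itself):

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

// Minimal Val Town HTTP handler using the std/openai wrapper.
export default async function (req: Request): Promise<Response> {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    messages: [{ role: "user", content: "Say hello in a creative way" }],
    model: "gpt-4",
    max_tokens: 30,
  });
  return Response.json({ reply: completion.choices[0].message.content });
}
```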
openai
@stevekrouse
OpenAI ChatGPT helper function

This val uses your OpenAI token if you have one, and @std/openai if not, so it provides limited OpenAI usage for free.

```ts
import { chat } from "https://esm.town/v/stevekrouse/openai";

const { content } = await chat("Hello, GPT!");
console.log(content);
```

```ts
import { chat } from "https://esm.town/v/stevekrouse/openai";

const { content } = await chat(
  [
    { role: "system", content: "You are Alan Kay" },
    { role: "user", content: "What is the real computer revolution?" },
  ],
  { max_tokens: 50, model: "gpt-4o" },
);
console.log(content);
```
Script
# OpenAI ChatGPT helper function
This val uses your OpenAI token if you have one, and @std/openai if not, so it provides limited OpenAI usage for free.
import { chat } from "https://esm.town/v/stevekrouse/openai";
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";
async function getOpenAI() {
  // Fall back to the free std/openai wrapper when no personal key is set.
  if (Deno.env.get("OPENAI_API_KEY") === undefined) {
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    return new OpenAI();
  }
  const { OpenAI } = await import("npm:openai");
  return new OpenAI();
}
* Initiates a chat conversation with OpenAI's GPT model and retrieves the content of the first response.
* This function can handle both single string inputs and arrays of message objects.
export async function chat(
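The snippet cuts off at the `chat` signature. Based on the README's two call styles (a bare string or a message array, optional `{ model, max_tokens }`, and a destructurable `{ content }` result), its shape is roughly the following; this is a sketch inferred from usage, not the val's actual body, and the default model is an assumption:

```ts
export async function chat(
  prompt: string | Message[],
  options?: Omit<ChatCompletionCreateParamsNonStreaming, "messages">,
) {
  const openai = await getOpenAI();
  // A bare string becomes a single user message.
  const messages = typeof prompt === "string"
    ? [{ role: "user" as const, content: prompt }]
    : prompt;
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // assumed default; callers can override via options
    ...options,
    messages,
  });
  return completion.choices[0].message; // callers destructure { content }
}
```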
openAiHelloWorld
@yawnxyz
// Create a secret named OPENAI_API_KEY at https://www.val.town/settings/environment-variables
Script
import { OpenAI } from "npm:openai";

// Create a secret named OPENAI_API_KEY at https://www.val.town/settings/environment-variables
const openai = new OpenAI();

// The message and model lines were elided in the search snippet; the values here
// mirror the identical example in the std/openai docs above and are assumptions.
const functionExpression = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4",
  max_tokens: 30,
});
console.log(functionExpression.choices[0].message.content);
openAiExample
@yawnxyz
// Server-side rendering
HTTP
import { cors } from 'npm:hono/cors';
import { OpenAI } from "npm:openai";
const app = new Hono();
const openai = new OpenAI();
app.use('*', cors({
<head>
<title>OpenAI Prompt Example</title>
<script defer src="https://cdn.jsdelivr.net/npm/alpinejs@3.x.x/dist/cdn.min.js"></script>
<div class="container mx-auto py-8">
<h1 class="text-4xl font-bold mb-4">OpenAI Prompt Example</h1>
<form action="/prompt" method="GET">
try {
const response = await openai.chat.completions.create({
model: "gpt-4",
} catch (error) {
console.error('OpenAI API error:', error);
return c.redirect('/?response=Error%20occurred.');
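A stripped-down version of this Hono + OpenAI pattern as a Val Town HTTP val (a sketch assuming OPENAI_API_KEY is set; the route and prompt handling are simplified from the fragments above):

```ts
import { Hono } from "npm:hono";
import { OpenAI } from "npm:openai";

const app = new Hono();
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

app.get("/prompt", async (c) => {
  const prompt = c.req.query("prompt") ?? "Say hello";
  try {
    const response = await openai.chat.completions.create({
      model: "gpt-4",
      messages: [{ role: "user", content: prompt }],
    });
    return c.text(response.choices[0].message.content ?? "");
  } catch (error) {
    console.error("OpenAI API error:", error);
    return c.redirect("/?response=Error%20occurred.");
  }
});

// Val Town HTTP vals export a fetch handler.
export default app.fetch;
```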
OpenAI
@pomdtr
OpenAI

Get started using OpenAI's chat completion without the need to set your own API keys.

Usage

Here's a quick example to get you started with the Val Town OpenAI wrapper:

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const functionExpression = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4",
  max_tokens: 30,
});
console.log(functionExpression.choices[0].message.content);
```
Script
# OpenAI
Get started using OpenAI's chat completion without the need to set your own API keys.
Here's a quick example to get you started with the Val Town OpenAI wrapper:
import { type ClientOptions, OpenAI as RawOpenAI } from "npm:openai";
* API Client for interfacing with the OpenAI API. Uses Val Town credentials.
export class OpenAI {
private rawOpenAIClient: RawOpenAI;
* API Client for interfacing with the OpenAI API. Uses Val Town credentials.
* @param {Core.Fetch} [opts.fetch] - Specify a custom `fetch` function implementation.
this.rawOpenAIClient = new RawOpenAI({
baseURL: "https://std-openaiproxy.web.val.run/v1",
return this.rawOpenAIClient.chat;
return this.rawOpenAIClient.beta;
chat
@webup
An interactive, runnable TypeScript val by webup
Script
options = {},
// Initialize OpenAI API stub
const { Configuration, OpenAIApi } = await import("https://esm.sh/openai@3.3.0");
const configuration = new Configuration({
  apiKey: process.env.OPENAI,
});
const openai = new OpenAIApi(configuration);
// Request chat completion
: prompt;
const { data } = await openai.createChatCompletion({
model: "gpt-3.5-turbo-0613",
const message = data.choices[0].message;
return message.function_call ? message.function_call : message.content;
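These fragments use the legacy openai@3.x client, whose API differs from v4 (Configuration/OpenAIApi instead of a single OpenAI class). Reassembled into a runnable shape (a sketch; the message content is illustrative):

```ts
import process from "node:process";

const { Configuration, OpenAIApi } = await import("https://esm.sh/openai@3.3.0");
const configuration = new Configuration({ apiKey: process.env.OPENAI });
const openai = new OpenAIApi(configuration);

const { data } = await openai.createChatCompletion({
  model: "gpt-3.5-turbo-0613",
  messages: [{ role: "user", content: "Hello" }],
});
const message = data.choices[0].message;
// Function-calling models may return a function_call instead of plain content.
console.log(message.function_call ?? message.content);
```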
openaiCompletion
@fgeierst
An interactive, runnable TypeScript val by fgeierst
Script
import process from "node:process";
export const openaiCompletion = async (prompt) => {
const { OpenAI } = await import("https://deno.land/x/openai/mod.ts");
const openAI = new OpenAI(process.env.OPENAI_API_KEY);
const completion = await openAI.createCompletion({
model: "text-davinci-003",
OpenAI
@hash0000ff
OpenAI - Docs ↗

Use OpenAI's chat completion API with std/openai. This integration enables access to OpenAI's language models without needing to acquire API keys. For free Val Town users, all calls are sent to gpt-4o-mini.

Usage

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4",
  max_tokens: 30,
});
console.log(completion.choices[0].message.content);
```

Limits

While our wrapper simplifies the integration of OpenAI, there are a few limitations to keep in mind:

- Usage Quota: We limit each user to 10 requests per minute.
- Features: Chat completions is the only endpoint available.

If these limits are too low, let us know! You can also get around the limitation by using your own keys:

1. Create your own API key on OpenAI's website
2. Create an environment variable named OPENAI_API_KEY
3. Use the OpenAI client from npm:openai:

```ts
import { OpenAI } from "npm:openai";

const openai = new OpenAI();
```
Script
# OpenAI - [Docs ↗](https://docs.val.town/std/openai)
Use OpenAI's chat completion API with [`std/openai`](https://www.val.town/v/std/openai). This integration enables access to OpenAI's language models without needing to acquire API keys.
For free Val Town users, [all calls are sent to `gpt-4o-mini`](https://www.val.town/v/std/openaiproxy?v=12#L85).
import { type ClientOptions, OpenAI as RawOpenAI } from "npm:openai";
* API Client for interfacing with the OpenAI API. Uses Val Town credentials.
export class OpenAI {
private rawOpenAIClient: RawOpenAI;
* API Client for interfacing with the OpenAI API. Uses Val Town credentials.
* @param {Core.Fetch} [opts.fetch] - Specify a custom `fetch` function implementation.
this.rawOpenAIClient = new RawOpenAI({
baseURL: "https://std-openaiproxy.web.val.run/v1",
return this.rawOpenAIClient.chat;
get chat(): RawOpenAI["beta"]["chat"] {
openAiProxy
@ashryanio
openAiProxy

Overview

This val is a proxy server that interacts with the OpenAI API to generate responses based on prompts in the request body. The function handles incoming HTTP POST requests, processes the prompt, and returns a response generated by the LLM.

Prerequisites

- Server-side: (Optional) An active OpenAI API key
- Client-side: Something that can make POST requests (browser code, Postman, cURL, another val, etc.)

Usage

The primary endpoint for this function is designed to handle HTTP POST requests.

- Method: POST
- Content-Type: application/json
- Body: JSON object containing a prompt field (e.g. {"prompt": "Help me make a boat."})

Example request:

```sh
curl -X POST https://ashryanio-openaiproxy.web.val.run \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Hello, OpenAI!"}'
```

Response

- Content-Type: application/json
- Body: JSON object containing the response from the OpenAI language model.

Example response:

```json
{ "llmResponse": "Hi there! How can I assist you today?" }
```

Error handling

- 400 Bad Request: Returned if the prompt field is missing in the request body.
- 405 Method Not Allowed: Returned if any method other than POST or OPTIONS is used.
- 500 Internal Server Error: Returned if there is an error processing the request.
HTTP
# openAiProxy
This val is a proxy server that interacts with the OpenAI API to generate responses based on prompts in the request body. The function handles incoming HTTP POST requests, processes the prompt, and returns a response generated by the LLM.
- Server-side: (Optional) An active OpenAI API key
The primary endpoint for this function is designed to handle HTTP POST requests.
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
export default async function(req: Request): Promise<Response> {
* Gets the response from the OpenAI language model.
async function getLlmResponse(prompt: string) {
const completion = await openai.chat.completions.create({
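The README documents the endpoint's contract precisely, so a minimal handler matching it looks roughly like this (a sketch of the documented behavior, not the val's exact source):

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();

export default async function (req: Request): Promise<Response> {
  if (req.method === "OPTIONS") return new Response(null, { status: 204 });
  if (req.method !== "POST") {
    return new Response("Method Not Allowed", { status: 405 });
  }
  const { prompt } = await req.json();
  if (!prompt) {
    return Response.json({ error: "Missing 'prompt' field" }, { status: 400 });
  }
  try {
    const completion = await openai.chat.completions.create({
      model: "gpt-4o-mini", // assumed model; the README leaves it unspecified
      messages: [{ role: "user", content: prompt }],
    });
    return Response.json({ llmResponse: completion.choices[0].message.content });
  } catch (error) {
    console.error("Error processing request:", error);
    return Response.json({ error: "Internal Server Error" }, { status: 500 });
  }
}
```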
gpt4vDemo
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import process from "node:process";
import OpenAI from "npm:openai";
const openai = new OpenAI({ apiKey: process.env.openai });
async function main() {
const response = await openai.chat.completions.create({
model: "gpt-4-vision-preview",
modelInvoke
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import process from "node:process";
import { ChatOpenAI } from "npm:langchain/chat_models/openai";
const model = new ChatOpenAI({
  temperature: 0.9,
  openAIApiKey: process.env.openai,
});

export const modelInvoke = model.invoke("What is your name?");
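Note that model.invoke returns a promise of a message object, so the exported value needs awaiting at the call site (a small usage sketch):

```ts
const reply = await modelInvoke;
console.log(reply.content);
```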
OpenAI
@wangqiao1234
OpenAI - Docs ↗

Use OpenAI's chat completion API with std/openai. This integration enables access to OpenAI's language models without needing to acquire API keys. For free Val Town users, all calls are sent to gpt-3.5-turbo. Streaming is not yet supported; upvote the HTTP response streaming feature request if you need it!

Usage

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4",
  max_tokens: 30,
});
console.log(completion.choices[0].message.content);
```

Limits

While our wrapper simplifies the integration of OpenAI, there are a few limitations to keep in mind:

- Usage Quota: We limit each user to 10 requests per minute.
- Features: Chat completions is the only endpoint available.

If these limits are too low, let us know! You can also get around the limitation by using your own keys:

1. Create your own API key on OpenAI's website
2. Create an environment variable named OPENAI_API_KEY
3. Use the OpenAI client from npm:openai:

```ts
import { OpenAI } from "npm:openai";

const openai = new OpenAI();
```
HTTP
# OpenAI - [Docs ↗](https://docs.val.town/std/openai)
Use OpenAI's chat completion API with [`std/openai`](https://www.val.town/v/std/openai). This integration enables access to OpenAI's language models without needing to acquire API keys.
For free Val Town users, [all calls are sent to `gpt-3.5-turbo`](https://www.val.town/v/std/openaiproxy?v=5#L69).
import { type ClientOptions, OpenAI as RawOpenAI } from "npm:openai";
* API Client for interfacing with the OpenAI API. Uses Val Town credentials.
export class OpenAI {
private rawOpenAIClient: RawOpenAI;
* API Client for interfacing with the OpenAI API. Uses Val Town credentials.
* @param {Core.Fetch} [opts.fetch] - Specify a custom `fetch` function implementation.
this.rawOpenAIClient = new RawOpenAI({
baseURL: "https://std-openaiproxy.web.val.run/v1",
return this.rawOpenAIClient.chat;
get chat(): RawOpenAI["beta"]["chat"] {