openAiFreeQuotaExceeded
@patrickjm
An interactive, runnable TypeScript val by patrickjm
HTTP
import { openAiFreeUsage } from "https://esm.town/v/patrickjm/openAiFreeUsage";
export let openAiFreeQuotaExceeded = () =>
openAiFreeUsage.exceeded;
valle_tmp_02267091431922629935130311039563566
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP
import OpenAI from "npm:openai";
unless strictly necessary, for example use APIs that don't require a key, prefer internal function
functions where possible. Unless specified, don't add error handling,
The val should create a "export default async function main" which is the main function that gets
function write(text) {
function openTab(tab) {
const callback = function (mutationsList, observer) {
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
window.addToken = function(str) {
apiProxy
@postpostscript
An interactive, runnable TypeScript val by postpostscript
HTTP
Authorization: `Bearer ${Deno.env.get("valtown")}`,
const ENDPOINT = getValEndpointFromUrl(import.meta.url);
export function api(path: string, token: string, init: RequestInit = {}) {
return fetch(`${ENDPOINT}/${path.replace(/^\/+/, "")}`, {
...init,
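The excerpt above truncates mid-call. A minimal sketch of the same pattern — joining a base endpoint with a caller-supplied path and attaching a bearer token — could look like this (helper names are illustrative, not the val's actual exports):

```typescript
// Join a base endpoint and a path, stripping leading slashes from
// the path so double slashes never appear in the final URL.
export function joinEndpoint(endpoint: string, path: string): string {
  return `${endpoint}/${path.replace(/^\/+/, "")}`;
}

// Merge caller-supplied fetch options with a bearer-token header.
// (Hypothetical helper; the real val reads its token from Deno.env.)
export function withBearer(token: string, init: RequestInit = {}): RequestInit {
  return {
    ...init,
    headers: {
      ...(init.headers ?? {}),
      Authorization: `Bearer ${token}`,
    },
  };
}
```

With these, the proxied call reduces to `fetch(joinEndpoint(ENDPOINT, path), withBearer(token, init))`.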
hello
@root
An interactive, runnable TypeScript val by root
Script
export function hello(name) {
// return "wow~ your are 叼毛 open 了 Pandora" ;
return { message: "wow~ your are 叼毛 open 了 Pandora" };
valle_tmp_2671404837576818506367901100203444
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP
Sure, I'll focus on cleaning up some repeated code, improving code readability, and adding some enhancements while keeping the existing functionality intact. Here's the improved version:
import _ from "npm:lodash@4";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
return new Response(renderToString(jsx), { headers: { "Content-Type": "text/html" } });
const systemPrompt = `The conversation below is your recent interaction with the user. Your entire response should only be TypeScript. Your response should start with \`\`\`ts and end with \`\`\`. Make an HTTP val (other vals are currently not supported to run, even though we have examples for them). Start your response with a comment explaining how your approach will work, what libraries or API calls you will use, and any tradeoffs you're making. Then write the code in a concise way, the simplest way to achieve the goal, though you can add some inline comments to explain your reasoning (not for every line, but for major groups of lines). Don't use any environment variables unless strictly necessary, for example use APIs that don't require a key, prefer internal function imports (using esm.town), and prefer putting API keys as inline variables. Use built-in Deno functions where possible. Unless specified, don't add error handling, make sure that errors bubble up to the caller. There should be no comments like "more content here", it should be complete and directly runnable. The val should create a "export default async function main" which is the main function that gets executed on every HTTP request.`.replace("\n", " ");
const additionalPrompt = `Since your last response the user might have changed the code. The current version of the code is below. Keep your response as close to their current code as possible, only changing things that are strictly necessary to change.`.replace("\n", "");
const textEncoder = new TextEncoder();
function write(text) {
writer.write(textEncoder.encode(text));
<script>
function openTab(tab) {
const tabButtonCode = document.getElementById("tab-button-code");
const scrollingElement = document.getElementById("conversation-container");
const callback = function (mutationsList, observer) {
scrollingElement.scrollTo({ left: 0, top: scrollingElement.scrollHeight, behavior: "instant" });
const contextWindow = await valleGetValsContextWindow(model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
let fullStr = "";
window.addToken = function(str) {
fullStr += str;
switchbot
@stevekrouse
Open the Val Town Office Doors with Switchbot

We installed two switchbot robots in our office: Ground Floor and Office Door. This webpage lets Val Town employees or guests use these bots. The original version of this val was for a party; that fork is preserved here: @stevekrouse/switchbot_party

Next steps:
[ ] Remove party theme
[ ] Add one of @pomdtr's login methods
[ ] Allow all @val.town emails to login
[ ] Have any other email login ping me for approval, or make a private val with a list of approved emails
[ ] Add the office door to the site
[ ] Add instructions (i.e. turn off wifi completely downstairs; be gentle with the office door one)

Switchbot API

This val authenticates to the switchbot API with SWITCHBOT_TOKEN and SWITCHBOT_KEY. Learn how to get your own Switchbot API keys here: Switchbot Docs.
HTTP
const app = new Hono();
const ValTownOfficeDeviceId = "CD6F3A810848";
async function switchbotRequest(path, args) {
const token = Deno.env.get("SWITCHBOT_TOKEN");
const secret = Deno.env.get("SWITCHBOT_KEY");
...args,
return response.json();
function botPress(device) {
return switchbotRequest(`v1.1/devices/${device}/commands`, {
method: "POST",
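The SwitchBot v1.1 API expects each request to carry a timestamp, a nonce, and an HMAC-SHA256 signature of token + timestamp + nonce computed with the secret key. A hedged sketch of that signing step follows (header names and signature recipe per SwitchBot's public docs; some of their samples uppercase the base64 signature, so verify against the current docs before relying on this):

```typescript
import { createHmac } from "node:crypto";

// Build the signed headers SwitchBot v1.1 expects: the raw token,
// a millisecond timestamp, a nonce, and a base64 HMAC-SHA256
// signature of `token + t + nonce` keyed by the secret.
export function switchbotHeaders(
  token: string,
  secret: string,
  t: string = Date.now().toString(),
  nonce: string = crypto.randomUUID(),
) {
  const sign = createHmac("sha256", secret)
    .update(token + t + nonce)
    .digest("base64");
  return {
    "Authorization": token,
    "sign": sign,
    "t": t,
    "nonce": nonce,
    "Content-Type": "application/json",
  };
}
```

A `switchbotRequest` like the one above would then pass these headers to `fetch("https://api.switch-bot.com/" + path, ...)`.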
gpt4o_emoji
@jdan
// await getGPT4oEmoji(
Script
import { chat } from "https://esm.town/v/stevekrouse/openai?v=19";
export async function getGPT4oEmoji(url) {
const response = await chat([
role: "system",
jsonToDalleForm
@weaverwhale
@jsxImportSource https://esm.sh/react
HTTP
import OpenAI from "https://esm.sh/openai";
// Move OpenAI initialization to server function
let openai;
function App() {
function client() {
async function server(request: Request): Promise<Response> {
// Initialize OpenAI here to avoid issues with Deno.env in browser context
if (!openai) {
const apiKey = Deno.env.get("OPENAI_API_KEY");
openai = new OpenAI({ apiKey });
textToImageDalle
@hootz
A wrapper for OpenAI's DALLE API. See the API reference here: https://platform.openai.com/docs/api-reference/images/create?lang=curl
Script
export const textToImageDalle = async (
openAIToken: string,
prompt: string,
} = await fetchJSON(
"https://api.openai.com/v1/images/generations",
method: "POST",
"Content-Type": "application/json",
"Authorization": `Bearer ${openAIToken}`,
body: JSON.stringify({
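The wrapper above truncates mid-object. As a sketch, OpenAI's image generation endpoint takes a JSON body with at least a `prompt`, plus optional `n` and `size` fields; a small builder for that request might look like this (the `n` and `size` defaults are illustrative — check the API reference for the values your model accepts):

```typescript
// Build the fetch options for OpenAI's image generation endpoint.
// Defaults for `n` and `size` are assumptions, not the val's values.
export function buildDalleRequest(
  openAIToken: string,
  prompt: string,
  n = 1,
  size = "1024x1024",
): RequestInit {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${openAIToken}`,
    },
    body: JSON.stringify({ prompt, n, size }),
  };
}
```

Usage: `fetch("https://api.openai.com/v1/images/generations", buildDalleRequest(token, "a watercolor fox"))`.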
grayFinch
@kingishb
Quick AI web search via email (useful for Apple Watch)
Email
// A dumb search engine I can use from my apple watch by emailing a question.
import { email } from "https://esm.town/v/std/email?v=11";
export default async function(e: Email) {
if (!e.from.endsWith("<brian@sarriaking.com>")) {
console.error("unauthorized!", e.from);
weatherGPT
@varun_balani
If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
Cron
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "npm:openai";
let location = "manipal";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
console.log(text);
export async function weatherGPT() {
await email({ subject: "Weather Today", text });
valle_tmp_563310902711919480463986409263
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP
import _ from "npm:lodash@4";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
return new Response(renderToString(jsx), { headers: { "Content-Type": "text/html" } });
const systemprompt = `The conversation below is your recent interaction with the user. Your entire response should only be TypeScript. Your response should start with \`\`\`ts and end with \`\`\`. Make an HTTP val (other vals are currently not supported to run, even though we have examples for them). Start your response with a comment explaining how your approach will work, what libraries or API calls you will use, and any tradeoffs you're making. Then write the code in a concise way, the simplest way to achieve the goal, though you can add some inline comments to explain your reasoning (not for every line, but for major groups of lines). Don't use any environment variables unless strictly necessary, for example use APIs that don't require a key, prefer internal function imports (using esm.town), and prefer putting API keys as inline variables. Use built-in Deno functions where possible. Unless specified, don't add error handling, make sure that errors bubble up to the caller. There should be no comments like "more content here", it should be complete and directly runnable. The val should create a "export default async function main" which is the main function that gets executed on every HTTP request.`.replace("\n", " ");
// Your response should start with \`\`\`ts and end with \`\`\`. The val should create a "export default async function main() {" which is the main function that gets executed, without any arguments. Don't return a Response object, just return a plain Javascript object, array, or string.
const systemprompt2 = `Since your last response the user might have changed the code. The current version of the code is below. Keep your response as close to their current code as possible, only changing things that are strictly necessary to change.`.replace("\n", "");
const textEncoder = new TextEncoder();
function write(text) {
writer.write(textEncoder.encode(text));
<script>
function openTab(tab) {
const tabButtonCode = document.getElementById("tab-button-code");
const scrollingElement = document.getElementById("conversation-container");
const callback = function (mutationsList, observer) {
scrollingElement.scrollTo({ left: 0, top: scrollingElement.scrollHeight, behavior: "instant" });
const contextWindow: any = await valleGetValsContextWindow(model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
let fullStr = "";
window.addToken = function(str) {
fullStr += str;
launch_thrifty_idea_generator
@cotr
An interactive, runnable TypeScript val by cotr
HTTP
export default async function(req: Request): Promise<Response> {
if (req.method === "OPTIONS") {
return new Response("ok");
chocolateCanid
@willthereader
ChatGPT Implemented in Val Town. Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events.
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
document.getElementById("input").addEventListener("submit", async function(event) {
eventSource.onmessage = function(event) {
eventSource.onerror = function() {
thread = await openai.chat.completions.create({
assistant = await openai.chat.completions.create({
await openai.chat.completions.create({
const run = openai.chat.completions.stream({
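Server-Sent Events frames are plain text: each event is one or more `data:` lines followed by a blank line. A minimal formatter for streaming tokens in that shape, independent of the OpenAI SDK (helper names are illustrative):

```typescript
// Encode one token as an SSE frame: a `data:` line plus the blank
// line that terminates the event. JSON-encoding the payload keeps
// newlines inside tokens from breaking the frame structure.
export function sseFrame(token: string): string {
  return `data: ${JSON.stringify({ token })}\n\n`;
}

// Recover the token from a frame (e.g. what an EventSource
// `onmessage` handler would do with event.data).
export function parseSseFrame(frame: string): string {
  const line = frame.split("\n")[0];
  return JSON.parse(line.slice("data: ".length)).token;
}
```

On the server side, each chunk from the completions stream would be written through `sseFrame` into the response body, with `Content-Type: text/event-stream`.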
browserbase_google_concerts
@stevekrouse
// Navigate to Google
Script
import puppeteer from "https://deno.land/x/puppeteer@16.2.0/mod.ts";
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
import { Browserbase } from "npm:@browserbasehq/sdk";
// ask chat gpt for list of concert dates
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [