Search

Results include substring matches and semantically similar vals.
valle_tmp_562580635116363837652861654341074
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP
import _ from "npm:lodash@4";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const contextWindow: any = await valleGetValsContextWindow(model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
weeksummary
@ejfox
@jsxImportSource https://esm.sh/react
HTTP
if (url.pathname === "/api/summary") {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const supabaseUrl = Deno.env.get("SUPABASE_PERSONAL_URL");
title: scrap.title ? scrap.title.substring(0, 256) : "",
// Generate summary using OpenAI
try {
).join("\n");
const completion = await openai.chat.completions.create({
messages: [
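The excerpt above truncates each scrap's title to 256 characters and joins the entries with newlines before handing them to the model. A minimal sketch of that prompt-building step (the `Scrap` shape and the `scrapsToPrompt` name are assumptions, not taken from the val):

```typescript
// Hypothetical shape for a "scrap" record pulled from Supabase
type Scrap = { title?: string; content: string };

// Build one newline-joined prompt body, truncating long titles to 256 chars
function scrapsToPrompt(scraps: Scrap[]): string {
  return scraps
    .map(s => `${s.title ? s.title.substring(0, 256) : ""}: ${s.content}`)
    .join("\n");
}

console.log(scrapsToPrompt([{ title: "Notes", content: "met with team" }]));
// → "Notes: met with team"
```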
convertTodoItemsToGamePlans
@thomasatflexos
An interactive, runnable TypeScript val by thomasatflexos
Express (deprecated)
redirect: "follow",
let openApiResponse = await fetch(
"https://api.openai.com/v1/chat/completions",
requestOptions,
let jsonResponse = await openApiResponse.json();
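The excerpt calls the chat completions endpoint with a raw fetch rather than the SDK. A sketch of what the `requestOptions` object plausibly looks like (the builder function and the model name are assumptions; only the endpoint URL and `redirect: "follow"` come from the excerpt):

```typescript
// Assemble fetch options for a raw POST to OpenAI's chat completions endpoint.
// The API key belongs in the Authorization header, never in the URL.
function buildChatRequest(apiKey: string, prompt: string) {
  return {
    method: "POST",
    redirect: "follow" as const,
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // assumed model name, not from the val
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

// Usage (network call omitted):
// const resp = await fetch("https://api.openai.com/v1/chat/completions",
//   buildChatRequest(key, "Turn these todos into a game plan"));
```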
key_safe_link
@stevekrouse
One-click environment variable

Copying and pasting secret API keys into your Val Town Environment Variables is annoying and error-prone. Wouldn't it be nice if you could add an environment variable in one click? What could such a protocol look like for a third-party API company to safely pass its customers' API keys into their Val Town accounts?

A naive approach would be a link that looks like this:

<a href="https://www.val.town/settings/environment-variables?name=OpenAI&value=sk-123...">
  Add OpenAI key to Val Town
</a>

However, it isn't safe to put API key values in URLs like that. It would still be great to keep the key in the URL so the whole flow can act like a simple link, so we need to encrypt the API key in such a way that nobody can read it except the Val Town app. Val Town could provide a public key for API providers to encrypt their tokens with. We could add an extra layer of security by including a timestamp in the request, as well as the Val Town username the token is intended for; all of that data should be included in the encrypted package. We can also ensure that each such link is used exactly once. This scheme does feel a bit ad hoc, though, so it would be nice if an existing protocol covered this.
HTTP
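The scheme described above can be sketched end to end. This is a toy illustration, not Val Town's implementation: the key pair, the `token` query parameter, and the payload fields are assumptions; in the real protocol only the public key would be published, and the server would also check `issuedAt` freshness and single-use before saving the variable.

```typescript
import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "node:crypto";
import { Buffer } from "node:buffer";

// Stand-in key pair; in practice Val Town publishes only the public half
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

// Provider side: bundle the secret with the intended username and a timestamp,
// then encrypt so nobody but Val Town can read it
const payload = JSON.stringify({
  name: "OpenAI",
  value: "sk-123...",
  username: "stevekrouse",
  issuedAt: Date.now(),
});
const token = publicEncrypt(publicKey, Buffer.from(payload)).toString("base64url");
const link = `https://www.val.town/settings/environment-variables?token=${token}`;

// Val Town side: decrypt, then verify the username (and, in a real system,
// reject stale or already-used tokens) before saving the env var
const decoded = JSON.parse(
  privateDecrypt(privateKey, Buffer.from(token, "base64url")).toString(),
);
```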
flutteringVioletBird
@stevekrouse
CronGPT

This is a minisite to help you create cron expressions, particularly for crons on Val Town. It was inspired by Cron Prompt, but also does the timezone conversion from wherever you are to UTC (typically the server timezone).

Tech

* Hono for routing (GET / and POST /compile)
* Hono JSX
* HTMX (probably overcomplicates things; should remove)
* @stevekrouse/openai, which is a light wrapper around @std/openai

I'm finding HTMX a bit overpowered for this, so I have two experimental forks without it:

1. Vanilla client-side JavaScript: @stevekrouse/cron_client_side_script_fork
2. Client-side ReactJS (no SSR): @stevekrouse/cron_client_react_fork

I think (2), client-side React without any SSR, is the simplest architecture. Maybe I will move to that.
HTTP
setIsLoading(true);
try {
const response = await fetch("https://esm.town/v/stevekrouse/openai", {
method: "POST",
headers: { "Content-Type": "application/json" },
claude_fa6b572a_8ee1_47bc_bbcf_4c2f2a03d71e
@adagradschool
An interactive, runnable TypeScript val by adagradschool
HTTP
export default function handler(req) {
tabs - what information do I get about a tab? I want to essentially build an OpenAI thingy on top, like an AI-based thing whe
headers: {
"Content-Type": "text/html",
ChatGPTTextDefinitionUserscript
@willthereader
// @name Improved ChatGPT Text Definition with Follow-up Questions
Script
return { width: availableWidth, height: availableHeight };
const API_CONFIG = {
url: "https://willthereader-openaidefiner.web.val.run",
method: "POST",
mode: "cors",
weatherTomorrowGpt3
@patrickjm
An interactive, runnable TypeScript val by patrickjm
Script
import { gpt3 } from "https://esm.town/v/patrickjm/gpt3";
import { simpleWeather } from "https://esm.town/v/patrickjm/simpleWeather";
export let weatherTomorrowGpt3 = (params: { openAiKey: string, city: string }) =>
simpleWeather(params.city).then((weather) =>
gpt3({
openAiKey: params.openAiKey,
prompt: `
Given a JSON sequence, give a short, plain-English summary about the weather tomorrow.
flowingBeigePigeon
@lilymachado
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
try {
// Dynamically import OpenAI to ensure server-side compatibility
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
// Detailed AI prompt for comprehensive analysis
const response = await openai.chat.completions.create({
model: "gpt-4o",
// Generate narrative description
const narrativeResponse = await openai.chat.completions.create({
model: "gpt-4o",
smsjournalertextrelay
@cephalization
* This val creates a webhook endpoint that receives text messages and sends SMS replies using the TextBelt API.
* It uses blob storage to keep track of message history and conversation state.
* The TextBelt API is used for sending SMS messages without requiring an API key.
* The conversation history is stored as an array of message objects containing sender, content, date, and phone number.
* OpenAI's GPT-4 is used to generate contextual responses based on the conversation history.
HTTP
import { blob } from "https://esm.town/v/std/blob";
import { OpenAI } from "https://esm.town/v/std/openai";
import { Buffer } from "node:buffer";
async function generateAIResponse(history: Message[]): Promise<string> {
const openai = new OpenAI();
const messages: { role: "user" | "assistant" | "system"; content: string }[] = history.map(msg => ({
"You are an AI assistant communicating via SMS. Keep your responses concise and under 160 characters. Your task is to h
const completion = await openai.chat.completions.create({
messages,
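The val keeps per-number conversation history in blob storage and caps replies at one SMS segment. A local sketch of those two pieces, with an in-memory `Map` standing in for Val Town's blob store (the helper names and the 160-character truncation rule are assumptions drawn from the system prompt in the excerpt):

```typescript
// Matches the message shape described above: sender, content, date, phone
type Message = {
  sender: "user" | "assistant";
  content: string;
  date: string;
  phone: string;
};

// In-memory stand-in for blob storage, keyed by phone number
const store = new Map<string, Message[]>();

// Append one message and return the updated history for that number
function appendMessage(phone: string, msg: Message): Message[] {
  const history = store.get(phone) ?? [];
  history.push(msg);
  store.set(phone, history);
  return history;
}

// SMS replies should fit one 160-character segment; truncate with an ellipsis
function toSms(text: string): string {
  return text.length <= 160 ? text : text.slice(0, 157) + "...";
}
```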
telegramBotHandler
@stevekrouse
Telegram to DallE Bot

Set up

1. First you'll need to set yourself up to send and receive messages on Telegram. Follow all 5 steps here: https://www.val.town/v/stevekrouse.telegram
2. Fork this @telegramBotHandler val below. Make sure you click Run to save it to your account.
3. On your forked val, click the ⋮ menu > Endpoints > Copy express endpoint.
4. Message @ValTownBot /webhook
5. Message @ValTownBot the express endpoint you copied
6. You'll also need an openai key in your secrets for this particular DallE bot to work.
HTTP
try {
let resp = await textToImageDalle(
process.env.openai,
text.replace("/dalle", ""),
1,
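The excerpt strips the "/dalle" prefix before passing the rest of the message to the image model as the prompt. A sketch of that command-parsing step (the `parseCommand` helper is hypothetical; the val itself just calls `text.replace`):

```typescript
// Split a Telegram message like "/dalle a red fox" into command and prompt
function parseCommand(text: string): { command: string; args: string } | null {
  const m = text.match(/^\/(\w+)\s*([\s\S]*)$/);
  return m ? { command: m[1], args: m[2].trim() } : null;
}

console.log(parseCommand("/dalle a red fox"));
// → { command: "dalle", args: "a red fox" }
```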
emailSummaryHandler
@stevekrouse
Email Summary Service

This val is an email handler that replies to emails it receives with an LLM-generated summary. To use, forward an email to paulkinlan.emailSummaryHandler@valtown.email

Blog post: https://paul.kinlan.me/projects/email-summary-service/
Email
import { email } from "https://esm.town/v/std/email";
import { OpenAI } from "https://esm.town/v/std/openai";
import { extractValInfo } from "https://esm.town/v/stevekrouse/extractValInfo";
console.log(e.from, e.subject);
const openai = new OpenAI();
const summary = await openai.chat.completions.create({
messages: [
emailSummaryHandler
@paulkinlan
Email Summary Service

This val is an email handler that replies to emails it receives with an LLM-generated summary. To use, forward an email to paulkinlan.emailSummaryHandler@valtown.email

Blog post: https://paul.kinlan.me/projects/email-summary-service/
Email
import { extractValInfo } from "https://esm.town/v/stevekrouse/extractValInfo";
import { OpenAI } from "npm:openai";
function stripHtmlBackticks(html: string): string {
export default async function(e: Email) {
const openai = new OpenAI();
console.log(`from: ${e.from} to: ${e.to} subject: ${e.subject}, cc: ${e.cc}, bcc: ${e.bcc}`);
from = to;
const summary = await openai.chat.completions.create({
messages: [
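The `stripHtmlBackticks` helper in the excerpt isn't shown in full. A plausible implementation, assuming its job is to unwrap a Markdown html code fence that the model sometimes adds around its reply (the regexes are a guess, not the val's actual code):

```typescript
// Remove a wrapping Markdown html code fence from model output,
// leaving the inner HTML intact; pass non-fenced input through unchanged
function stripHtmlBackticks(html: string): string {
  return html
    .replace(/^\s*```html\s*/i, "")
    .replace(/\s*```\s*$/, "")
    .trim();
}

console.log(stripHtmlBackticks("```html\n<p>Summary</p>\n```"));
// → "<p>Summary</p>"
```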
Storyweaver
@aioe0x417a
@jsxImportSource https://esm.sh/react
HTTP
new Uint8Array(arrayBuffer).reduce((data, byte) => data + String.fromCharCode(byte), '')
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
try {
const imageAnalysis = await withTimeout(openai.chat.completions.create({
model: "gpt-4o",
? `Previous story summary: ${previousStory[previousStory.length - 1].content}`
const story = await withTimeout(openai.chat.completions.create({
model: "gpt-4o-mini",
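The `reduce` call in the excerpt is the usual trick for turning an ArrayBuffer into a binary string before base64-encoding it with `btoa`, typically the step before inlining an uploaded image into a vision request. A self-contained version (the function name is an assumption):

```typescript
// Convert raw bytes to base64: build a binary string byte-by-byte, then
// hand it to btoa (fine for small buffers; very large ones want chunking
// to avoid building a huge intermediate string)
function arrayBufferToBase64(buf: ArrayBufferLike): string {
  const binary = new Uint8Array(buf).reduce(
    (data, byte) => data + String.fromCharCode(byte),
    "",
  );
  return btoa(binary);
}

console.log(arrayBufferToBase64(new TextEncoder().encode("hi").buffer)); // → "aGk="
```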
emailSummaryHandler
@dazzag24
Email Summary Service

This val is an email handler that replies to emails it receives with an LLM-generated summary. To use, forward an email to paulkinlan.emailSummaryHandler@valtown.email

Blog post: https://paul.kinlan.me/projects/email-summary-service/
Email
import { email } from "https://esm.town/v/std/email";
import { OpenAI } from "https://esm.town/v/std/openai";
import { extractValInfo } from "https://esm.town/v/stevekrouse/extractValInfo";
console.log(e.from, e.subject);
const openai = new OpenAI();
const summary = await openai.chat.completions.create({
messages: [