Search

Results include substring matches and semantically similar vals.
blobImages
@stevekrouse
Image downsizer and uploader
HTTP
import { fileToDataURL } from "https://esm.town/v/stevekrouse/fileToDataURL";
import { modifyImage } from "https://esm.town/v/stevekrouse/modifyImage";
import { chat } from "https://esm.town/v/stevekrouse/openai";
import { Hono } from "npm:hono@3";
function esmTown(url) {
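The preview cuts off before the downsizing logic, and modifyImage's implementation isn't shown. As a hedged sketch of what "image downsizer" typically means, resizing fits an image inside a maximum edge length while preserving aspect ratio:

```typescript
// Hypothetical sketch: compute target dimensions that fit an image inside
// a maximum edge length while preserving aspect ratio. The actual
// modifyImage implementation is not shown in this preview.
function scaleToFit(
  width: number,
  height: number,
  maxEdge: number,
): { width: number; height: number } {
  const longest = Math.max(width, height);
  if (longest <= maxEdge) return { width, height }; // already small enough
  const ratio = maxEdge / longest;
  return {
    width: Math.round(width * ratio),
    height: Math.round(height * ratio),
  };
}

console.log(scaleToFit(4000, 3000, 1024)); // → { width: 1024, height: 768 }
```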
blobImages
@jdan
Image downsizer and uploader
HTTP
import { fileToDataURL } from "https://esm.town/v/stevekrouse/fileToDataURL";
import { modifyImage } from "https://esm.town/v/stevekrouse/modifyImage";
import { chat } from "https://esm.town/v/stevekrouse/openai";
import { Hono } from "npm:hono@3";
function esmTown(url) {
valwriter
@janpaul123
[ ] streaming
[ ] send the code of the valwriter back to gpt (only if it's related; might need some threads, maybe a custom gpt would be a better fix; of course, could do it as a proxy...)
[ ] make it easy to send errors back to gpt
[ ] make it easy to get screenshots of the output back to gpt
HTTP
import { fetchText } from "https://esm.town/v/stevekrouse/fetchText";
import { chat } from "https://esm.town/v/stevekrouse/openai";
import cronstrue from "npm:cronstrue";
await email({ subject: "Subject line", text: "Body of message" });
// OpenAI
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
breakdown
@yawnxyz
This project is an argument summarizer that leverages AI to analyze and extract key arguments from a given text.

Goals:
- Provide a user-friendly interface for inputting text
- Process the input using a large language model (Llama 3 via Groq)
- Extract and structure key arguments, explanations, and relevant quotes
- Present the summarized arguments in a clear, organized format

The main pipeline:
1. User inputs text through a web interface
2. The input is sent to an AI model for processing
3. The AI extracts and structures the arguments
4. The results are validated against a predefined schema
5. The structured arguments are displayed to the user

This tool aims to help users quickly understand the main points and supporting evidence in complex texts or discussions, making it valuable for research, debate preparation, or general comprehension of argumentative content.
HTTP
// export const model = "gpt-4o"
// export const provider = "openai"
export const summaryModel = "gpt-4o-mini"
export const summaryProvider = "openai"
// export const smartModel = "models/gemini-1.5-pro-latest"
export const smartModel = "gpt-4o-mini"
export const smartProvider = "openai"
export const cheapModel = "gpt-4o-mini"
export const cheapProvider = "openai"
const argschema = z.object({
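The preview stops right at argschema, so the real schema fields are not shown. Given the description (arguments, explanations, relevant quotes), a plausible shape, sketched here with a minimal hand-rolled check standing in for the val's zod schema, might be:

```typescript
// Hypothetical reconstruction: the val validates model output against a
// predefined schema (the original uses zod; its exact fields are not shown).
// A plausible shape for one extracted argument:
interface ExtractedArgument {
  argument: string; // the key claim
  explanation: string; // why it matters
  quotes: string[]; // supporting quotes from the source text
}

// Minimal structural check, standing in for argschema.parse()
function isExtractedArgument(value: unknown): value is ExtractedArgument {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.argument === "string"
    && typeof v.explanation === "string"
    && Array.isArray(v.quotes)
    && v.quotes.every((q) => typeof q === "string");
}

console.log(isExtractedArgument({ argument: "a", explanation: "b", quotes: ["c"] })); // true
```

Validating against a schema like this is what lets the pipeline reject malformed model output before display.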
multiUserChatwithLLM
@trob
@jsxImportSource https://esm.sh/react
HTTP
export default async function server(request: Request): Promise<Response> {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const { sqlite } = await import("https://esm.town/v/stevekrouse/sqlite");
const openai = new OpenAI();
const SCHEMA_VERSION = 2;
messages.push({ role: "user", content: `${username}: ${message}` });
const completion = await openai.chat.completions.create({
messages,
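The preview shows user turns being pushed as `${username}: ${message}`, so a single chat-completion thread can represent a multi-user room. A minimal sketch of that convention (the helper name is mine, not the val's):

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Prefix each user's message with their name so one chat-completion thread
// can represent a multi-user room (mirrors the preview's messages.push).
function toRoomMessage(username: string, message: string): ChatMessage {
  return { role: "user", content: `${username}: ${message}` };
}

const messages: ChatMessage[] = [
  { role: "system", content: "You are chatting with several users; names prefix each message." },
];
messages.push(toRoomMessage("ada", "What is a monad?"));
console.log(messages[1].content); // "ada: What is a monad?"
```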
valle_tmp_54846529024345792066850117206065
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP
import _ from "npm:lodash@4";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const contextWindow: any = await valleGetValsContextWindow(model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
runAgent
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
const { z } = await import("npm:zod");
const { ChatOpenAI } = await import("npm:langchain/chat_models/openai");
const { ChatAnthropic } = await import("npm:langchain/chat_models/anthropic");
"npm:langchain/output_parsers"
const model = new ChatOpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
modelName: "gpt-4",
AICodeGeneratorApp
@mrshorts
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
try {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
// Generate app concept
conceptCompletion = await openai.chat.completions.create({
model: "gpt-4o-mini",
// Generate basic code structure
codeCompletion = await openai.chat.completions.create({
model: "gpt-4o-mini",
max_tokens: 500
} catch (openaiError) {
console.error('OpenAI generation error:', openaiError);
// Fallback generation
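The preview wraps both completion calls in try/catch with a "Fallback generation" branch. A hedged sketch of that pattern, with a stand-in for the OpenAI calls (the val's actual fallback content isn't shown):

```typescript
// Sketch of the try/catch-with-fallback pattern from the preview.
// `generate` stands in for the openai.chat.completions.create calls;
// the fallback payload here is illustrative, not the val's real one.
async function generateWithFallback(
  generate: () => Promise<string>,
  fallback: string,
): Promise<string> {
  try {
    return await generate();
  } catch (openaiError) {
    console.error("OpenAI generation error:", openaiError);
    return fallback;
  }
}

const result = await generateWithFallback(
  async () => {
    throw new Error("rate limited");
  },
  "// fallback: static starter template",
);
console.log(result); // "// fallback: static starter template"
```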
sqlite
@granin
LLM-Safe Fork of @std/sqlite

We found that LLMs have trouble with the inputs and outputs of our @std/sqlite method:
- LLMs expect the input to be (sql: string, args: any[]), but the method takes { sql: string, args: any[] }
- LLMs expect the output rows to be objects, but they are arrays

Instead of struggling to teach them otherwise, we built this val as a wrapper around @std/sqlite that adheres to what the LLMs expect. This val is also backwards-compatible with @std/sqlite, so we're considering merging it in.
Script
try {
// Dynamic imports for SQLite and OpenAI
const [{ sqlite }, { OpenAI }] = await Promise.all([
import("https://esm.town/v/stevekrouse/sqlite"),
import("https://esm.town/v/std/openai")
const openai = new OpenAI();
const SPECIFICATIONS_TABLE = "val_town_agent_specifications";
// Generate implementation using GPT-4o-mini
const completion = await openai.chat.completions.create({
messages: [
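The readme above says LLMs expect object rows while @std/sqlite returns array rows alongside a column list. The wrapper's output adaptation presumably looks something like this (a sketch, not the val's actual code):

```typescript
// Sketch of the LLM-safe output adaptation: @std/sqlite results carry
// columns: string[] and rows: any[][]; LLMs expect one object per row.
function rowsToObjects(
  columns: string[],
  rows: unknown[][],
): Record<string, unknown>[] {
  return rows.map((row) =>
    Object.fromEntries(columns.map((col, i) => [col, row[i]]))
  );
}

const out = rowsToObjects(["id", "name"], [[1, "ada"], [2, "alan"]]);
console.log(out); // [{ id: 1, name: "ada" }, { id: 2, name: "alan" }]
```

Keeping the transformation in a wrapper is what lets the val stay backwards-compatible: callers can still pass through the raw @std/sqlite shapes.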
pipeSampleLLMBind
@webup
An interactive, runnable TypeScript val by webup
Script
const mb = await getModelBuilder({
type: "chat",
provider: "openai",
const model = await mb();
const tb = await getLangSmithBuilder();
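getModelBuilder's implementation isn't shown in the preview; the call shape suggests a factory keyed by { type, provider }. A toy sketch of that pattern (names and return values are illustrative, the real builder returns LangChain models):

```typescript
// Toy sketch of a builder keyed by { type, provider }, suggested by the
// preview's call shape. The real getModelBuilder returns LangChain models;
// plain strings are returned here to keep the sketch self-contained.
type ModelSpec = { type: "chat" | "completion"; provider: "openai" | "anthropic" };

async function getModelBuilderSketch(spec: ModelSpec): Promise<() => Promise<string>> {
  const registry: Record<string, () => Promise<string>> = {
    "chat/openai": async () => "openai-chat-model",
    "chat/anthropic": async () => "anthropic-chat-model",
  };
  const key = `${spec.type}/${spec.provider}`;
  const builder = registry[key];
  if (!builder) throw new Error(`No builder for ${key}`);
  return builder;
}

const mbSketch = await getModelBuilderSketch({ type: "chat", provider: "openai" });
console.log(await mbSketch()); // "openai-chat-model"
```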
telegramDalleBot
@adjacent
Telegram DALLE Bot

A personal telegram bot you can message to create images with OpenAI's DALLE ✨

Set up yours:
1. Fork this val
2. Speak to telegram's https://t.me/botfather to create a bot and obtain a bot token
3. Set the bot token as a val town secret called telegramDalleBotToken
4. Add a random string as a val town secret called telegramDalleBotWebhookSecret
5. Set up your webhook with telegram like this:

   // paste and run this in your workspace on here
   @vtdocs.telegramSetWebhook(@me.secrets.telegramDalleBotToken, {
     url: /* your fork's express endpoint (click the three dots on a val) */,
     allowed_updates: ["message"],
     secret_token: @me.secrets.telegramDalleBotWebhookSecret,
   });

6. Message your bot some prompts!

(If you get stuck, you can refer to the telegram echo bot guide from docs.val.town.)
HTTP
# Telegram DALLE Bot
A personal telegram bot you can message to create images with OpenAI's [DALLE](https://openai.com/dall-e-2) ✨
![DALLE: A Macintosh II sitting on a desk, painted by Picasso in his blue period.](https://i.imgur.com/uJrP5mE.png)
try {
const imageURL = (await textToImageDalle(
process.env.openai,
text,
1,
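The webhook step in the setup uses a @vtdocs helper. Under the hood that maps onto the Bot API's setWebhook method; a hedged sketch of the raw request such a helper would build (url, allowed_updates, and secret_token are real setWebhook parameters, the helper's internals are assumed):

```typescript
// Sketch of the raw Telegram Bot API request a setWebhook helper wraps.
// url / allowed_updates / secret_token are documented setWebhook fields.
function buildSetWebhookRequest(
  botToken: string,
  opts: { url: string; allowed_updates?: string[]; secret_token?: string },
): { endpoint: string; body: string } {
  return {
    endpoint: `https://api.telegram.org/bot${botToken}/setWebhook`,
    body: JSON.stringify(opts),
  };
}

const req = buildSetWebhookRequest("123:ABC", {
  url: "https://example.val.run",
  allowed_updates: ["message"],
  secret_token: "some-random-string",
});
console.log(req.endpoint); // "https://api.telegram.org/bot123:ABC/setWebhook"
// To actually register: await fetch(req.endpoint, { method: "POST",
//   headers: { "Content-Type": "application/json" }, body: req.body });
```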
convertResume
@iamseeley
convert your resume content to the json resume standard
HTTP
event.preventDefault();
const resumeContent = document.getElementById('resumeContent').value;
const apiKey = '${Deno.env.get("OPENAI_API_KEY")}';
const spinner = document.getElementById('spinner');
const jsonOutput = document.getElementById('jsonOutput');
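The JSON Resume standard this val targets (jsonresume.org) centers on a few top-level sections. A minimal, abridged example of the target shape (field names follow the published schema; the sample data is invented):

```typescript
// Minimal example of the JSON Resume target shape (abridged; see the
// full schema at jsonresume.org for all sections and fields).
const resume = {
  basics: {
    name: "Jane Doe",
    label: "Software Engineer",
    email: "jane@example.com",
  },
  work: [
    { name: "ExampleCorp", position: "Engineer", startDate: "2021-01-01" },
  ],
  education: [
    { institution: "Example University", area: "CS", studyType: "BSc" },
  ],
  skills: [
    { name: "TypeScript", keywords: ["Deno", "React"] },
  ],
};

console.log(Object.keys(resume)); // ["basics", "work", "education", "skills"]
```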
twitterAlert
@chen
Daily Twitter "Important Updates" Digest

You like getting important updates from Twitter: new projects, writing, or companies to learn about. But you don't like being addicted to the feed constantly. This val lets you get daily updates from specific people you want to follow closely. It uses AI to filter out random shitposts and only get the important updates.

1. Authentication
You'll need a Twitter Bearer Token. Follow these instructions to get one. Unfortunately it costs $100/month to have a Basic Twitter Developer account. If you subscribe to Val Town Pro, you can ask Steve Krouse to borrow his token. Also, rate limits seem really severe, which limits how useful this is :( Need to figure out workarounds...

2. Query
Update the list of usernames to people you care about; change the AI prompt if you want different filtering.

3. Notification
Sends a daily email.

Todos:
- this should filter the twitter API call to only tweets since the last run
- some kind of caching to avoid rate limiting
- would be nice to use the user's feed instead of a username list... but not sure how easy that is
Cron
import { email } from "https://esm.town/v/std/email?v=12";
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
import { discordWebhook } from "https://esm.town/v/stevekrouse/discordWebhook";
// "sliminality",
const openai = new OpenAI();
export async function twitterAlert({ lastRunAt }: Interval) {
async function filterTweets(tweets) {
const completion = await openai.chat.completions.create({
messages: [
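The first todo is filtering to only tweets since the last run. The Interval handler already receives lastRunAt, so a client-side version is a one-liner (a sketch; the created_at field name follows the Twitter API v2 convention):

```typescript
// Sketch: keep only tweets newer than the cron's last run.
// `created_at` is the Twitter API v2 timestamp field.
interface Tweet {
  id: string;
  text: string;
  created_at: string;
}

function tweetsSince(tweets: Tweet[], lastRunAt: Date): Tweet[] {
  return tweets.filter((t) => new Date(t.created_at) > lastRunAt);
}

const sample: Tweet[] = [
  { id: "1", text: "old", created_at: "2024-01-01T00:00:00Z" },
  { id: "2", text: "new", created_at: "2024-06-01T00:00:00Z" },
];
console.log(tweetsSince(sample, new Date("2024-03-01T00:00:00Z")).map((t) => t.id)); // ["2"]
```

Filtering server-side via the API's start_time parameter would be cheaper against rate limits, but this keeps the sketch self-contained.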
sereneBeigeSheep
@applemetabank
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
const { sqlite } = await import("https://esm.town/v/stevekrouse/sqlite");
const { OpenAI } = await import("https://esm.town/v/std/openai");
const KEY = new URL(import.meta.url).pathname.split("/").at(-1);
const openai = new OpenAI();
// Create tables for appointments and inquiries
Legal Categories Available: ${body.categories.join(', ')}
// Prepare messages for OpenAI, including system context
const chatMessages = [
// Generate AI response
const completion = await openai.chat.completions.create({
model: "gpt-4o-mini",
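The preview derives a storage KEY from the val's own module URL. That line is pure URL manipulation and easy to check in isolation (the URL is passed in explicitly here so the helper is testable; the val itself uses import.meta.url):

```typescript
// The preview's KEY derivation: take the last path segment of a module URL.
function keyFromModuleUrl(moduleUrl: string): string | undefined {
  return new URL(moduleUrl).pathname.split("/").at(-1);
}

console.log(keyFromModuleUrl("https://esm.town/v/applemetabank/sereneBeigeSheep"));
// "sereneBeigeSheep"
```

Using the val's own name as a key is a common Val Town trick: forks automatically get their own tables or blobs without editing the code.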
podcast
@all
@jsxImportSource https://esm.sh/react
HTTP
if (url.searchParams.has("nextMessage")) {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const { personality1, personality2, topic, conversation, currentSpeaker } = await req.json();
const currentPersonality = currentSpeaker === 1 ? personality1 : personality2;
const response = await openai.chat.completions.create({
messages: [
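The preview picks a personality based on currentSpeaker before each completion. A sketch of the two-speaker turn-taking that implies (helper names are mine, not the val's):

```typescript
// Sketch of the two-speaker turn-taking implied by the preview:
// speaker 1 uses personality1, speaker 2 uses personality2, then swap.
function personalityFor(
  currentSpeaker: 1 | 2,
  personality1: string,
  personality2: string,
): string {
  return currentSpeaker === 1 ? personality1 : personality2;
}

function nextSpeaker(currentSpeaker: 1 | 2): 1 | 2 {
  return currentSpeaker === 1 ? 2 : 1;
}

let speaker: 1 | 2 = 1;
console.log(personalityFor(speaker, "optimist", "skeptic")); // "optimist"
speaker = nextSpeaker(speaker);
console.log(personalityFor(speaker, "optimist", "skeptic")); // "skeptic"
```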