Search

Results include substring matches and semantically similar vals.
pomdtr avatar
ask_ai
@pomdtr
Ask GPT to update a val on your behalf.

Usage:

import { askAI } from "https://esm.town/v/pomdtr/ask_ai";
await askAI(`Add jsdoc comments for each exported function of the val @pomdtr/askAi`);
Script
import { api } from "https://esm.town/v/pomdtr/api";
import { email } from "https://esm.town/v/std/email?v=12";
import { OpenAI } from "https://esm.town/v/std/OpenAI";
async function getValByAlias({ author, name }: { author: string; name: string }) {
const { id, code, readme } = await api(`/v1/alias/${author}/${name}`, {
type: "string";
export function askAI(content: string) {
const client = new OpenAI();
const runner = client.beta.chat.completions.runTools({
model: "gpt-3.5-turbo",
geltoob avatar
VALLE
@geltoob
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it.
* Fork this val to your own profile.
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models, set the `OPENAI_API_KEY` env var.
* If you want to use Anthropic models, set the `ANTHROPIC_API_KEY` env var.
* Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-v
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environ
jdan avatar
emojiGuessr
@jdan
Calorie Count via Photo: uploads your photo to ChatGPT's new vision model to automatically categorize the food and estimate the calories.
HTTP (deprecated)
import { fileToDataURL } from "https://esm.town/v/stevekrouse/fileToDataURL";
import { modifyImage } from "https://esm.town/v/stevekrouse/modifyImage";
import { chat } from "https://esm.town/v/stevekrouse/openai";
import { Hono } from "npm:hono@3";
function esmTown(url) {
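The snippet above imports `fileToDataURL` before sending the photo to the vision model. The vision chat API accepts inline images as `data:` URLs, so the conversion step looks roughly like this sketch (`toDataURL` is a hypothetical helper, not the val's actual `fileToDataURL`):

```typescript
// Turn raw image bytes into a data: URL suitable for OpenAI's vision chat
// API, which accepts message content entries of the form
// { type: "image_url", image_url: { url: "data:image/jpeg;base64,..." } }.
function toDataURL(bytes: Uint8Array, mime = "image/jpeg"): string {
  // Base64-encode the bytes and prepend the data-URL header.
  const base64 = Buffer.from(bytes).toString("base64");
  return `data:${mime};base64,${base64}`;
}
```

Inlining the image as base64 avoids hosting the upload anywhere public before the model can see it.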
iakovos avatar
rateArticleRelevance
@iakovos
An interactive, runnable TypeScript val by iakovos
Script
export const rateArticleRelevance = async (interests: string, article: any) => {
const { default: OpenAI } = await import("npm:openai");
const openai = new OpenAI({ apiKey: process.env.OPENAI_KEY });
try {
Give a score from 0 to 10. Why did you give this score? Respond with the score only.
const response = await openai.chat.completions.create({
messages: [
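The prompt asks the model to "respond with the score only", but models often add surrounding text anyway. A defensive parse in the spirit of this val might look like the following sketch (`parseScore` is a hypothetical helper, not part of the original):

```typescript
// Extract a 0-10 relevance score from a model reply, tolerating extra
// text around the number; returns null when no valid score is found.
function parseScore(reply: string): number | null {
  const m = reply.trim().match(/\d+(\.\d+)?/);
  if (!m) return null;
  const score = Number(m[0]);
  // Reject out-of-range numbers rather than silently clamping them.
  return score >= 0 && score <= 10 ? score : null;
}
```

Validating the range catches the occasional hallucinated "42" before it pollutes downstream rankings.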
willthereader avatar
greenCow
@willthereader
ChatGPT Implemented in Val Town: demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
HTTP (deprecated)
# ChatGPT Implemented in Val Town
Demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const openai = new OpenAI();
import { Hono } from "npm:hono@3";
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
const message = c.req.query("message");
await openai.beta.threads.messages.create(
threadId,
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,
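The snippet streams assistant output back to the browser one Server-Sent Events message at a time, and the `"data: " + JSON.stringify(str) + "\n\n"` line is what frames each message. A minimal standalone sketch of that framing (the helper name is hypothetical):

```typescript
// Each SSE message is a "data:" line terminated by a blank line.
// JSON-encoding the payload ensures newlines inside the text can't
// break the framing, since raw "\n" would start a new SSE field.
function sseChunk(payload: unknown): string {
  return "data: " + JSON.stringify(payload) + "\n\n";
}
```

On the client, `EventSource` (or a manual reader) splits the stream on blank lines and `JSON.parse`s each `data:` payload back out.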
ttodosi avatar
VALLE
@ttodosi
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it.
* Fork this val to your own profile.
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models, set the `OPENAI_API_KEY` env var.
* If you want to use Anthropic models, set the `ANTHROPIC_API_KEY` env var.
* Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-v
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environ
davitchanturia avatar
VALLE
@davitchanturia
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it.
* Fork this val to your own profile.
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models, set the `OPENAI_API_KEY` env var.
* If you want to use Anthropic models, set the `ANTHROPIC_API_KEY` env var.
* Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-v
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environ
nbbaier avatar
readmeGPT
@nbbaier
Val Town AI Readme Writer. This val provides a class `ReadmeWriter` for generating readmes for vals with OpenAI. It can both draft readmes and update them directly. PRs welcome! See Todos below for some ideas I have.

Usage: to draft a readme for a given code, use the `draftReadme` method:

import { ReadmeWriter } from "https://esm.town/v/nbbaier/readmeGPT";
const readmeWriter = new ReadmeWriter({});
const val = "https://www.val.town/v/:username/:valname";
const generatedReadme = await readmeWriter.draftReadme(val);

To write and update a readme for a given code, use the `writeReadme` method:

import { ReadmeWriter } from "https://esm.town/v/nbbaier/readmeGPT";
const readmeWriter = new ReadmeWriter({});
const val = "https://www.val.town/v/:username/:valname";
const successMessage = await readmeWriter.writeReadme(val);

API Reference. Class `ReadmeWriter`: a utility for generating and updating README files.
- Constructor parameters: `model` (optional), the model to be used for generating the readme, defaults to "gpt-3.5-turbo"; `apiKey` (optional), an OpenAI API key, defaults to `Deno.env.get("OPENAI_API_KEY")`.
- `draftReadme(val: string): Promise<string>`: generates a readme for the given val. `val` is the URL of the code repository. Returns a promise that resolves to the generated readme.
- `writeReadme(val: string): Promise<string>`: generates and updates a readme for the given val. `val` is the URL of the code repository. Returns a promise that resolves to a success message if the update is successful.

Todos:
- [ ] Additional options to pass to the OpenAI model
- [ ] Ability to pass more instructions to the prompt to modify how the readme is constructed
Script
This val provides a class `ReadmeWriter` for generating readmes for vals with OpenAI. It can both draft readmes and update the
- `apiKey` (optional): An OpenAI API key. Defaults to `Deno.env.get("OPENAI_API_KEY")`.
- [ ] Additional options to pass to the OpenAI model
import OpenAI, { type ClientOptions } from "npm:openai";
openai: OpenAI;
const { model, ...openaiOptions } = options;
this.openai = new OpenAI(openaiOptions);
private async performOpenAICall(prompt: string) {
const response = await this.openai.chat.completions.create({
throw new Error("No response from OpenAI");
throw new Error("No readme returned by OpenAI. Try again.");
const readme = await this.performOpenAICall(prompt);
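The preview shows the constructor destructuring `model` out of its options before handing the rest to the OpenAI client. A sketch of how that split likely works, with the default model from the readme (the helper name is hypothetical; the real class does this inline in its constructor):

```typescript
// Split ReadmeWriter-style options: keep `model` for chat calls,
// pass everything else (e.g. apiKey) through to the OpenAI client.
function splitOptions(options: { model?: string; apiKey?: string }) {
  const { model = "gpt-3.5-turbo", ...openaiOptions } = options;
  return { model, openaiOptions };
}
```

This pattern lets callers write `new ReadmeWriter({})` and still get working defaults, while power users can override both the model and the client configuration.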
patrickjm avatar
aiSarcasticMotivationalMessage
@patrickjm
An interactive, runnable TypeScript val by patrickjm
Script
"Emphasize a morbid sense of humor.",
].join("\n"),
openAiKey: process.env.openai_key,
bingo16 avatar
getChatgpt
@bingo16
An interactive, runnable TypeScript val by bingo16
Script
"Content-Type": "application/json",
// Update your token in https://val.town/settings/secrets
Authorization: `Bearer ${token || process.env.openaiKey}`,
const getCompletions = async (data) => {
const response = await fetch("https://api.openai.com/v1/completions", {
method: "POST",
headers: {
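The preview pieces together a raw `fetch` against the legacy `/v1/completions` endpoint. A self-contained sketch of the request it appears to build (the helper name and the `max_tokens` value are assumptions; substitute your own model and token):

```typescript
// Build a request for OpenAI's legacy completions endpoint, mirroring
// the headers shown in the snippet above. Returns the URL and init
// object separately so the fetch itself stays testable.
function buildRequest(prompt: string, token: string) {
  return {
    url: "https://api.openai.com/v1/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ model: "gpt-3.5-turbo-instruct", prompt, max_tokens: 256 }),
    },
  };
}
```

Usage would be `const res = await fetch(req.url, req.init)` followed by `res.json()`; note that new code should generally prefer the chat completions endpoint.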
stevekrouse avatar
poembuilder
@stevekrouse
@jsxImportSource npm:hono@3/jsx
HTTP (deprecated)
/** @jsxImportSource npm:hono@3/jsx */
import { OpenAI } from "https://esm.town/v/std/openai?v=2";
import { sqlite } from "https://esm.town/v/std/sqlite?v=5";
import { Hono } from "npm:hono@3";
willthereader avatar
emeraldRaccoon
@willthereader
ChatGPT Implemented in Val Town: demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
HTTP (deprecated)
# ChatGPT Implemented in Val Town
Demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const openai = new OpenAI();
import { Hono } from "npm:hono@3";
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
const message = c.req.query("message");
await openai.beta.threads.messages.create(
threadId,
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,
xuybin avatar
valTownChatGPT
@xuybin
ChatGPT Implemented in Val Town: demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
HTTP (deprecated)
# ChatGPT Implemented in Val Town
Demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
<p align=center>
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
// This uses my personal API key, you'll need to provide your own if
// you fork this. We'll be adding support to the std/openai lib soon!
const openai = new OpenAI();
import { Hono } from "npm:hono@3";
body: msgDiv.textContent,
// Setup the SSE connection and stream back the response. OpenAI handles determining
// which message is the correct response based on what was last read from the
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
let message = await c.req.text();
await openai.beta.threads.messages.create(
c.req.query("threadId"),
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,
charlypoly avatar
exampleTopHackerNewsDailyEmail
@charlypoly
An interactive, runnable TypeScript val by charlypoly
Cron
import { email } from "https://esm.town/v/std/email?v=12";
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
import { z } from "npm:zod";
.describe("Top 5 stories on Hacker News"),
// we create a OpenAI Tool that takes our schema as argument
const extractContentTool: any = {
parameters: zodToJsonSchema(schema),
const openai = new OpenAI();
// We ask OpenAI to extract the content from the given web page.
// The model will call our `extract_content` tool with arguments that
// match the schema required by `extract_content`'s parameters.
const completion = await openai.chat.completions.create({
model: "gpt-4-turbo",
tool_choice: "auto",
// we retrieve the serialized arguments generated by OpenAI
const result = completion.choices[0].message.tool_calls![0].function.arguments;
const parsed = schema.parse(JSON.parse(result));
const completion2 = await openai.chat.completions.create({
model: "gpt-4-turbo",
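The snippet reads `completion.choices[0].message.tool_calls![0].function.arguments` and parses it back through the zod schema. OpenAI serializes tool-call arguments as a JSON string, so the round-trip looks like this sketch (the type and helper are simplified illustrations, not the val's exact code; validation with zod would follow the parse):

```typescript
// Simplified shape of a tool call as returned in a chat completion.
type ToolCall = { function: { name: string; arguments: string } };

// Pull the first tool call's arguments out of their serialized form.
// The real val then runs the result through schema.parse(...) so the
// model's output is validated, not trusted blindly.
function extractToolArgs(toolCalls: ToolCall[]): unknown {
  return JSON.parse(toolCalls[0].function.arguments);
}
```

Setting `tool_choice: "auto"` (as above) lets the model decide to call the tool; pinning `tool_choice` to the specific function forces structured output on every request.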
nerdymomocat avatar
add_to_notion_w_ai
@nerdymomocat
Uses instructor and OpenAI (with gpt-4-turbo) to process any content into a Notion database entry. Use `addToNotion` with any database id and content:

await addToNotion(
  "DB_ID_GOES_HERE",
  "CONTENT_GOES HERE" // for example: $43.28 ordered malai kofta and kadhi (doordash) [me and mom] jan 3 2024
);

Prompts are created based on your database name, database description, property name, property type, property description, and, if applicable, property options (and their descriptions).
Supports: checkbox, date, multi_select, number, rich_text, select, status, title, url, email.
Uses `NOTION_API_KEY` and `OPENAI_API_KEY` stored in env variables, and uses Valtown blob storage to store information about the database. Use `get_notion_db_info` to use the stored blob if it exists or create one; use `get_and_save_notion_db_info` to create a new blob (replacing an existing one if it exists).
Script
Supports: checkbox, date, multi_select, number, rich_text, select, status, title, url, email
- Uses `NOTION_API_KEY`, `OPENAI_API_KEY` stored in env variables and uses [Valtown blob storage](https://esm.town/v/std/blob
- Use `get_notion_db_info` to use the stored blob if it exists or create one, use `get_and_save_notion_db_info` to create a new
import { Client } from "npm:@notionhq/client";
import OpenAI from "npm:openai";
import { z } from "npm:zod";
"email": "string_email",
const oai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY ?? undefined,
const client = Instructor({
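The `"email": "string_email"` line in the preview suggests the val maps Notion property types to schema hints before prompting. A hypothetical lookup in the same spirit, covering the property types the readme says are supported (the hint strings other than `string_email` are assumptions for illustration):

```typescript
// Map Notion property types to the schema hints used when building
// the extraction prompt; unsupported types are simply absent.
const typeHints: Record<string, string> = {
  checkbox: "boolean",
  number: "number",
  email: "string_email",
  url: "string_url",
  rich_text: "string",
  title: "string",
};

// Look up a hint, returning undefined for unsupported property types.
function hintFor(notionType: string): string | undefined {
  return typeHints[notionType];
}
```

A table like this keeps the prompt-building code declarative: adding support for a new Notion property type is one new entry rather than another branch.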