Search

Results include substring matches and semantically similar vals.
valleGetValsContextWindow
@janpaul123
An interactive, runnable TypeScript val by janpaul123
Script
el.attributes.filter(a => a.name === "href").map(a => a.value)
prompt: "Write a val that uses OpenAI",
code: `import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
"messages": [
byob
@vawogbemi
BYOB - Build Your Own Bot. You can chat with LLMs over email; the email thread functions as memory. The biggest win is that you can instantly create a chat-like interface with LLMs. Pair that with backend data and functions and you have something really powerful. Take it further: use Cloudflare Email Workers or a similar service to create a custom email domain and route any incoming emails to this val, and use any email API set up with that domain (e.g. SendGrid, Resend, Postmark) to send replies. LLMs can also use tools, meaning you can make this an agent and a whole lot more useful.
Email
### Toolings
* LLMs can use [tools](https://platform.openai.com/docs/guides/function-calling), meaning you can make this an agent and a w
import { zodResponseFormat } from "https://esm.sh/openai/helpers/zod";
import { z } from "https://esm.sh/zod";
import { email } from "https://esm.town/v/std/email";
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(e: Email) {
const client = new OpenAI();
const Messages = z.object({
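A hedged sketch of the core BYOB idea, where the email thread is the conversation memory: each message in the thread becomes one chat turn. The `ThreadEmail` shape and the helper below are illustrative assumptions, not byob's actual code.

```typescript
// One email in the thread: either from the bot or from the human.
type ThreadEmail = { fromBot: boolean; text: string };

type ChatMessage = { role: "user" | "assistant"; content: string };

// Map the thread onto chat turns: human emails become user messages,
// bot emails become assistant messages.
function threadToMessages(thread: ThreadEmail[]): ChatMessage[] {
  return thread.map((m) => ({
    role: m.fromBot ? ("assistant" as const) : ("user" as const),
    content: m.text,
  }));
}

// In the real val, this array would be rebuilt from the incoming
// Email's quoted history, passed to openai.chat.completions.create,
// and the reply sent back with std/email.
const history = threadToMessages([
  { fromBot: false, text: "What's the capital of France?" },
  { fromBot: true, text: "Paris." },
  { fromBot: false, text: "And its population?" },
]);
```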
dreamInterpreterApp
@Bilelghrsalli
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
if (request.method === "POST" && new URL(request.url).pathname === "/interpret") {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const { dream } = await request.json();
try {
const completion = await openai.chat.completions.create({
messages: [
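The excerpt above only reaches the OpenAI call for POST requests to `/interpret`. That routing check can be isolated as a small pure function (the helper name is ours):

```typescript
// True only for POST requests whose pathname is exactly /interpret,
// mirroring the guard in the snippet above.
function matchesInterpretRoute(method: string, urlStr: string): boolean {
  return method === "POST" && new URL(urlStr).pathname === "/interpret";
}
```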
generateQuiz
@rayyan
An interactive, runnable TypeScript val by rayyan
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
import { z } from "npm:zod";
{ status: 400 },
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
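The `{ status: 400 }` line in the excerpt suggests the val rejects invalid input before calling OpenAI. A sketch of that validate-then-prompt step, with assumed parameter names and limits (not the val's actual code):

```typescript
// Validate quiz parameters; on success, build the prompt that would be
// sent to the model. The 400 branch mirrors the excerpt above.
function buildQuizPrompt(
  topic: string | null,
  count: number,
): { ok: boolean; status?: number; prompt?: string } {
  if (!topic || !Number.isInteger(count) || count < 1 || count > 20) {
    return { ok: false, status: 400 };
  }
  return {
    ok: true,
    prompt: `Generate ${count} multiple-choice quiz questions about ${topic}.`,
  };
}
```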
safeMessageBoard
@jahabeebs
content-checker is designed to be a modern, open-source library for programmatic and AI content moderation. Currently content-checker supports image and text moderation. Thanks to LLMs, in addition to detecting specific profane words, it can detect malicious intent in text: a user who tries to circumvent the AI profanity filter by using a variation of a profane word, or even a malicious phrase without any word from the profanity list, will still be flagged. Image moderation is also supported, using the Inception V3 model of the NSFWJS library. Future features will include moderation tools (auto-ban, bots), more powerful models, and multimedia support for video and audio moderation. To get an API key for the AI endpoints, sign up free at https://www.openmoderator.com. To install content-checker, run `npm install content-checker` and check out the README: https://github.com/utilityfueled/content-checker
HTTP
// checkManualProfanityList is optional and defaults to false; it checks for the words in lang.ts (if under 50 words) bef
checkManualProfanityList: false,
ults to "google-perspective-api" (Google's Perspective API); it can also be "openai" (OpenAI Moderation API) or "google-natur
provider: "google-perspective-api",
try {
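This is not the content-checker API, but a deliberately simplified local sketch of the two-stage idea the options above hint at: optionally check a manual word list first (cheap), then fall back to an AI provider call (the expensive step, stubbed here as a callback):

```typescript
// Stand-in for the word list in lang.ts; illustrative only.
const manualList = ["badword"];

// Cheap first pass: exact word match against the manual list.
function manualCheck(text: string): boolean {
  const words = text.toLowerCase().split(/\W+/);
  return words.some((w) => manualList.includes(w));
}

// Two-stage check: manual list first (if enabled), then the AI
// provider (e.g. Perspective or the OpenAI Moderation API), passed in
// as a callback so this sketch stays self-contained.
async function isProfane(
  text: string,
  aiCheck: (t: string) => Promise<boolean>,
  checkManualProfanityList = false,
): Promise<boolean> {
  if (checkManualProfanityList && manualCheck(text)) return true;
  return aiCheck(text);
}
```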
originalEmeraldPython
@movienerd
This program creates a movie recommendation system based on moods. It uses OpenAI to generate movie recommendations based on the user's selected mood and feedback. The program uses React for the frontend and handles state management for user interactions. It uses the Val Town OpenAI proxy for backend API calls. A "More moods" button has been added to dynamically expand the list of available moods.
HTTP
* This program creates a movie recommendation system based on moods.
* It uses OpenAI to generate movie recommendations based on the user's selected mood and feedback.
* The program uses React for the frontend and handles state management for user interactions.
* It uses the Val Town OpenAI proxy for backend API calls.
* A "More moods" button has been added to dynamically expand the list of available moods.
if (url.pathname === "/recommend" && request.method === "POST") {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
try {
prompt += ` Recommend a different movie for when they're feeling ${mood}.`;
const completion = await openai.chat.completions.create({
messages: [
if (url.pathname === "/more-moods" && request.method === "GET") {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
try {
const completion = await openai.chat.completions.create({
messages: [
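The excerpt shows the prompt being extended with "Recommend a different movie" when the user has rejected earlier picks. A sketch of that prompt-building step (the helper name and exact wording are assumptions):

```typescript
// Build the recommendation prompt from the mood and any movies the
// user already rejected, asking for a *different* movie on retries.
function buildMoodPrompt(mood: string, rejected: string[]): string {
  let prompt = `The user is feeling ${mood}.`;
  if (rejected.length > 0) {
    prompt += ` They disliked: ${rejected.join(", ")}.`;
    prompt += ` Recommend a different movie for when they're feeling ${mood}.`;
  } else {
    prompt += ` Recommend a movie for when they're feeling ${mood}.`;
  }
  return prompt;
}
```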
memorySampleVector
@webup
An interactive, runnable TypeScript val by webup
Script
const builder = await getMemoryBuilder({
type: "vector",
provider: "openai",
const memory = await builder();
await memory.saveContext({ input: "My favorite sport is soccer" }, {
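A greatly simplified, hypothetical stand-in for what a vector memory like the one above does: store inputs with an embedding and retrieve the nearest one by cosine similarity. A real setup (as in the val, with `provider: "openai"`) would call an embeddings API; the toy letter-frequency `embed` below exists only to keep the sketch self-contained.

```typescript
// Toy embedding: a 26-dim letter-frequency histogram. NOT a real
// embedding; stands in for an embeddings API call.
function embed(text: string): number[] {
  const v = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) v[i]++;
  }
  return v;
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

const memory: { text: string; vec: number[] }[] = [];

// Save an input alongside its embedding.
function saveContext(input: string): void {
  memory.push({ text: input, vec: embed(input) });
}

// Return the stored text most similar to the query.
function recall(query: string): string | undefined {
  let best: string | undefined;
  let bestScore = -1;
  const q = embed(query);
  for (const m of memory) {
    const s = cosine(q, m.vec);
    if (s > bestScore) {
      bestScore = s;
      best = m.text;
    }
  }
  return best;
}
```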
tanLadybug
@stevekrouse
content-checker is designed to be a modern, open-source library for programmatic and AI content moderation. Currently content-checker supports image and text moderation. Thanks to LLMs, in addition to detecting specific profane words, it can detect malicious intent in text: a user who tries to circumvent the AI profanity filter by using a variation of a profane word, or even a malicious phrase without any word from the profanity list, will still be flagged. Image moderation is also supported, using the Inception V3 model of the NSFWJS library. Future features will include moderation tools (auto-ban, bots), more powerful models, and multimedia support for video and audio moderation. To get an API key for the AI endpoints, sign up free at https://www.openmoderator.com. To install content-checker, run `npm install content-checker` and check out the README: https://github.com/utilityfueled/content-checker
HTTP
// checkManualProfanityList is optional and defaults to false; it checks for the words in lang.ts (if under 50 words) bef
checkManualProfanityList: false,
ults to "google-perspective-api" (Google's Perspective API); it can also be "openai" (OpenAI Moderation API) or "google-natur
provider: "google-perspective-api",
try {
codeshowcase
@ejfox
open "https://ejfox-codeshowcase.web.val.run/?code=$(pbpaste | jq -sRr @uri)" to get a screenshottable code snippet
HTTP
* and allows shuffling through random, animated gradients.
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function server(request: Request): Promise<Response> {
const language = url.searchParams.get("language") || "javascript";
// Generate title using OpenAI
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
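The shell one-liner above URI-encodes the clipboard with jq's `@uri` filter; the same showcase URL can be built in TypeScript with `encodeURIComponent`. The base URL comes from the description; the helper name and `language` parameter are our assumptions:

```typescript
// Build a codeshowcase URL with the snippet and language URI-encoded.
function showcaseUrl(code: string, language = "javascript"): string {
  const base = "https://ejfox-codeshowcase.web.val.run/";
  return `${base}?code=${encodeURIComponent(code)}&language=${encodeURIComponent(language)}`;
}
```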
GDI_AITranslator
@rozek
This val is part of a series of examples to introduce "val.town" in my computer science course at Stuttgart University of Applied Sciences. The idea is to motivate even first-semester students not to wait, but to put their ideas into practice from the very beginning and implement web apps with frontend and backend. It contains a simple web page which allows users to enter some German (or other non-English) text and send it to a preconfigured server. That server translates the text with the help of OpenAI and sends the result back to this app, where it is finally presented to the user. This val is the companion of https://rozek-gdi_aitranslatorservice.web.val.run/ which contains the server part (aka "backend") for this example. The code was created using Townie, with only a few small manual corrections. This val is licensed under the MIT License.
HTTP
non-english) text and send it to a preconfigured server. That server translates
the text with the help of OpenAI and sends the result back to this app where it
is finally presented to the user.
lazyCook
@karkowg
@jsxImportSource https://esm.sh/react
HTTP
if (request.method === "POST" && new URL(request.url).pathname === "/recipes") {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const { numPeople, numRecipes, difficulty, cuisine, ingredients, unit, dietaryRestrictions } = await request.json();
try {
const completion = await openai.chat.completions.create({
messages: [{ role: "user", content: prompt }],
valwriter
@stevekrouse
- [ ] streaming
- [ ] send the code of the valwriter back to gpt (only if it's related; might need some threads; maybe a custom gpt would be a better fix; of course, could do it as a proxy...)
- [ ] make it easy to send errors back to gpt
- [ ] make it easy to get screenshots of the output back to gpt
HTTP
import { fetchText } from "https://esm.town/v/stevekrouse/fetchText";
import { chat } from "https://esm.town/v/stevekrouse/openai";
import cronstrue from "npm:cronstrue";
await email({ subject: "Subject line", text: "Body of message" });
// OpenAI
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
valle_tmp_140068690648343496787358158586876
@janpaul123
@jsxImportSource https://esm.sh/react
HTTP
import _ from "npm:lodash@4";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const contextWindow: any = await valleGetValsContextWindow(model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
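The excerpt above requests a streamed completion. With `stream: true`, OpenAI's API yields chunks whose `choices[0].delta.content` carries a small slice of the reply; the consumer appends each delta to the growing text. A sketch, with the async generator standing in for the real stream:

```typescript
// Minimal shape of a streaming chunk, mirroring OpenAI's API.
type Chunk = { choices: { delta: { content?: string } }[] };

// Accumulate the deltas of a streamed completion into one string.
async function collect(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? "";
  }
  return text;
}

// Stand-in for `await openai.chat.completions.create({ stream: true, ... })`.
async function* fakeStream(): AsyncIterable<Chunk> {
  for (const piece of ["Hel", "lo ", "world"]) {
    yield { choices: [{ delta: { content: piece } }] };
  }
}
```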
createGeneratedVal
@andreterron
Use GPT to generate vals on your account! Describe the val that you need, call this function, and you'll get a new val in your workspace generated by OpenAI's API! First, ensure you have a Val Town API Token, then call `@andreterron.createGeneratedVal({...})` like this example: `@andreterron.createGeneratedVal({ valTownKey: @me.secrets.vt_token, description: "A val that, given a text file position in {line, col} and the text contents, returns the index position" })`. This will create a val in your workspace, and here's the one created by the example above: https://www.val.town/v/andreterron.getFileIndexPosition
Script
# Use GPT to generate vals on your account!
Describe the val that you need, call this function, and you'll get a new val on your workspace generated by OpenAI's API!
First, ensure you have a [Val Town API Token](https://www.val.town/settings/api), then call `@andreterron.createGeneratedVal(
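Such a generator needs two pieces: a prompt for the model and a request to the Val Town REST API to create the val. The endpoint below matches Val Town's public API, but `makeCreateValRequest` is an illustrative helper, not the val's actual implementation:

```typescript
// Build the fetch arguments for creating a val via the Val Town API,
// authenticated with the user's API token.
function makeCreateValRequest(valTownKey: string, code: string) {
  return {
    url: "https://api.val.town/v1/vals",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${valTownKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ code }),
    },
  };
}

// Usage: const { url, init } = makeCreateValRequest(key, generatedCode);
// then `await fetch(url, init)` to create the val.
```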
promptGen
@all
This val creates a website that generates optimized prompts for Val Town based on user input. It uses the OpenAI API to generate prompts and incorporates a loading animation. The generated prompt is tailored to Val Town's specific features and best practices.
HTTP
* This val creates a website that generates optimized prompts for Val Town based on user input.
* It uses the OpenAI API to generate prompts and incorporates a loading animation.
* The generated prompt is tailored to Val Town's specific features and best practices.
"Email sending",
"OpenAI integration",
"React support",
if (url.pathname === "/generate" && request.method === "POST") {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
try {
const { idea } = await request.json();
const completion = await openai.chat.completions.create({
messages: [
ā€¦
22
ā€¦
Next