Search

Results include substring matches and semantically similar vals.
getMemoryBuilder
@webup
An interactive, runnable TypeScript val by webup
Script
type: "buffer" | "summary" | "vector";
provider?: "openai";
} = { type: "buffer" }, options = {}) {
return new BufferMemory();
matches({ type: "summary", provider: "openai" }),
async () => {
return new ConversationSummaryMemory({ llm, ...options });
matches({ type: "vector", provider: "openai" }),
async () => {
type: "embedding",
provider: "openai",
const model = await builder();
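The excerpt above shows a getMemoryBuilder helper that returns a LangChain memory object based on a type/provider option (with a vector branch that builds an OpenAI embedding model). A minimal sketch of the buffer/summary part of that pattern, assuming LangChain's memory classes and an OPENAI_API_KEY in the environment; the vector branch is omitted and anything not visible in the excerpt is illustrative:

```ts
import { BufferMemory, ConversationSummaryMemory } from "npm:langchain/memory";
import { ChatOpenAI } from "npm:@langchain/openai";

type MemorySpec = {
  type: "buffer" | "summary";
  provider?: "openai";
};

// Returns an async builder so callers can construct the memory lazily,
// mirroring the excerpt's `const model = await builder();` usage.
export function getMemoryBuilder(spec: MemorySpec = { type: "buffer" }, options = {}) {
  return async () => {
    if (spec.type === "summary" && spec.provider === "openai") {
      // Summary memory needs an LLM to condense prior turns into a running summary.
      const llm = new ChatOpenAI({ modelName: "gpt-4o-mini" }); // model choice is an assumption
      return new ConversationSummaryMemory({ llm, ...options });
    }
    // Default: keep the raw conversation turns in a buffer.
    return new BufferMemory();
  };
}

// Usage: const memory = await getMemoryBuilder({ type: "summary", provider: "openai" })();
```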
convertToResumeJSON
@iamseeley
An interactive, runnable TypeScript val by iamseeley
Script
if (!tokenBucket.consume()) {
throw new Error("Rate limit reached. Please try again later.");
const endpoint = 'https://api.openai.com/v1/chat/completions';
const model = 'gpt-4';
const messages = [
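The excerpt checks a token-bucket rate limiter and then posts directly to the Chat Completions endpoint. A sketch of what that fetch call typically looks like; the system prompt, response handling, and function shape are assumptions, not the val's actual code:

```ts
// Direct call to the OpenAI Chat Completions endpoint, as in the excerpt.
// Reads the API key from the environment; prompt wording is illustrative.
const endpoint = "https://api.openai.com/v1/chat/completions";
const model = "gpt-4";

async function convertToResumeJSON(resumeText: string): Promise<string> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${Deno.env.get("OPENAI_API_KEY")}`,
    },
    body: JSON.stringify({
      model,
      messages: [
        { role: "system", content: "Convert the following resume text into JSON Resume format." },
        { role: "user", content: resumeText },
      ],
    }),
  });
  if (!response.ok) throw new Error(`OpenAI request failed: ${response.status}`);
  const data = await response.json();
  return data.choices[0].message.content;
}
```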
generateValCode
@andreterron
An interactive, runnable TypeScript val by andreterron
Script
import { openaiChatCompletion } from "https://esm.town/v/andreterron/openaiChatCompletion";
export const generateValCode = async (
const lodash = await import('npm:lodash');
const response = await openaiChatCompletion({
openaiKey: key,
organization: org,
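The val calls an openaiChatCompletion helper with an API key and organization; the helper's full parameter shape isn't visible in the excerpt. A rough equivalent using the official npm:openai client instead, with the prompt wording as an assumption:

```ts
// Rough equivalent of generateValCode built on the official OpenAI client
// rather than the openaiChatCompletion helper. Prompt text is illustrative.
import OpenAI from "npm:openai";

export async function generateValCode(key: string, description: string, org?: string) {
  const openai = new OpenAI({ apiKey: key, organization: org });
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "system", content: "You write the TypeScript body of a Val Town val." },
      { role: "user", content: description },
    ],
  });
  return completion.choices[0].message.content;
}
```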
createHorror3DGame
@kutasartem
/** @jsxImportSource https://esm.sh/react@18.2.0 */
HTTP
try {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
if (!gameMechanic) {
- Potential monetization or player engagement strategy`;
const response = await openai.chat.completions.create({
model: "gpt-4o-mini",
try {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const response = await openai.chat.completions.create({
model: "gpt-4o-mini",
valle_tmp_2671404837576818506367901100203444
@janpaul123
/** @jsxImportSource https://esm.sh/react */
HTTP
import _ from "npm:lodash@4";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const contextWindow = await valleGetValsContextWindow(model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
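The excerpt builds a context window of existing vals and requests a streaming chat completion. A sketch of how such a stream is typically consumed with the npm:openai client; the messages and model are illustrative:

```ts
// Consuming a streaming chat completion with the npm:openai client, as the
// excerpt appears to do. Expects OPENAI_API_KEY in the environment.
import OpenAI from "npm:openai";

const openai = new OpenAI();
const stream = await openai.chat.completions.create({
  model: "gpt-4o-mini", // illustrative; the excerpt passes a `model` variable
  messages: [{ role: "user", content: "Generate a small HTTP val." }],
  stream: true,
});

let text = "";
for await (const chunk of stream) {
  // Each chunk carries an incremental delta of the assistant's reply.
  text += chunk.choices[0]?.delta?.content ?? "";
}
console.log(text);
```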
demoOpenAIGPT4Summary
@zzz
An interactive, runnable TypeScript val by zzz
Script
import { confession } from "https://esm.town/v/alp/confession?v=2";
import { runVal } from "https://esm.town/v/std/runVal";
export let demoOpenAIGPT4Summary = await runVal(
"zzz.OpenAISummary",
confession,
genval
@andreterron
Generate a Val. Uses the OpenAI API to generate code for a val based on the description given by the user. TODO: improve code detection on GPT responses; give more context on Val Town exclusive features like console.email or @references; enable the AI to search Val Town to find other vals to use.
HTTP
# [Generate a Val](https://andreterron-genval.express.val.run)
Uses the OpenAI API to generate code for a val based on the description given by the user.
TODO:
return new Response("Bad input", { status: 400 });
const code = await generateValCode(
process.env.VT_OPENAI_KEY,
value.description,
const query = new URLSearchParams({ code });
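Putting the excerpt's pieces together, the handler appears to validate the submitted description, generate code with generateValCode, and pass the result along as a query parameter. A hedged sketch of that flow; the import URL and the redirect target are assumptions based on the val shown earlier:

```ts
// Hedged sketch of the genval HTTP flow: validate input, generate code,
// then hand the code off via a query string. Import URL and redirect
// target are assumptions.
import { generateValCode } from "https://esm.town/v/andreterron/generateValCode";

export default async function (req: Request): Promise<Response> {
  const value = await req.json().catch(() => null);
  if (!value?.description) {
    return new Response("Bad input", { status: 400 });
  }
  const code = await generateValCode(
    process.env.VT_OPENAI_KEY,
    value.description,
  );
  const query = new URLSearchParams({ code });
  return Response.redirect(`https://www.val.town/new?${query}`, 302);
}
```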
add_to_notion_w_ai_webpage
@nerdymomocat
Example usage of the add_to_notion_w_ai val. Try it with the money database. Read and watch the demo run here.
HTTP
import { Client } from "npm:@notionhq/client";
import OpenAI from "npm:openai";
import { render } from "npm:preact-render-to-string";
"email": "string_email",
const oai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY ?? undefined,
const client = Instructor({
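The excerpt wires the OpenAI client into Instructor so the model's output is validated against a schema before it is written to Notion. A sketch of that extraction step, assuming instructor-js with a Zod schema; the schema fields, model, and prompt are illustrative:

```ts
// Schema-validated extraction with Instructor wrapped around the OpenAI
// client, as in the excerpt. The Zod schema and prompt are assumptions.
import OpenAI from "npm:openai";
import Instructor from "npm:@instructor-ai/instructor";
import { z } from "npm:zod";

const oai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY ?? undefined });
const client = Instructor({ client: oai, mode: "TOOLS" });

const PageFields = z.object({
  name: z.string(),
  email: z.string(),
});

const extracted = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Extract the contact details from this page: ..." }],
  // Instructor validates (and retries) until the reply parses against the schema.
  response_model: { schema: PageFields, name: "PageFields" },
});
console.log(extracted.name, extracted.email);
```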
telegramAudioMessageTranscription
@artem
/** @jsxImportSource https://esm.sh/react@18.2.0 */
HTTP
margin: "0 4px",
OPENAI_API_KEY
</code>
return new Response("No file uploaded", { status: 400 });
const { OpenAI } = await import("npm:openai");
const openai = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });
const arrayBuffer = await audioFile.arrayBuffer();
const transcription = await openai.audio.transcriptions.create({
file: new File([arrayBuffer], audioFile.name, { type: audioFile.type }),
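The excerpt wraps the uploaded Telegram audio in a File and sends it to OpenAI's transcription endpoint. A sketch completing that call with the standard whisper-1 model:

```ts
// Transcribe an uploaded audio file with OpenAI's Whisper endpoint,
// completing the call shown in the excerpt.
const { OpenAI } = await import("npm:openai");
const openai = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });

async function transcribe(audioFile: File): Promise<string> {
  const arrayBuffer = await audioFile.arrayBuffer();
  const transcription = await openai.audio.transcriptions.create({
    file: new File([arrayBuffer], audioFile.name, { type: audioFile.type }),
    model: "whisper-1",
  });
  return transcription.text;
}
```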
homeless
@cameronpak
Homeless Services by OurTechnology. At OurTechnology, we create technology solutions to empower and equip those who serve the homeless. We have a large data set of available resources in the US to help those experiencing homelessness find local resources, community, and support. This private (but public to read) API is used in our ChatGPT Assistant, Homeless Services. Why a ChatGPT Assistant? OpenAI announced on May 13, 2024 that free users will soon be able to "discover and use GPTs and the GPT Store" (OpenAI). More people experiencing homelessness own a phone than I imagined. ChatGPT provides a simple interface, even with voice chat (a more natural way to navigate the tool), for finding resources to help those experiencing homelessness. And it's fast! Technical details: the data set has been compiled over the years and will continue to be updated as new techniques and partnerships make that possible. We use Typesense, a search-as-a-service tool, to provide lightning-fast search results for homeless resources near you. The endpoint is built with Hono, which makes creating an API incredibly easy. Contact OurTechnology: visit our website, email us, or find us on LinkedIn. While this is on Cameron Pak's Val Town, this code is owned and operated by OurTechnology.
HTTP
## Why a [ChatGPT Assistant](https://chatg.pt/homeless-help)?
- OpenAI announced on May 13, 2024 that free users will soon be able to "discover and use GPTs and the GPT Store ([OpenAI](ht
- More people experiencing homelessness own a phone than I imagined.
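The description says the API is a Hono endpoint backed by a Typesense index of homeless-services resources. A hedged sketch of what such an endpoint might look like; the collection name, query fields, and connection details are assumptions, not the actual deployment:

```ts
// Hedged sketch: a Hono route that searches a Typesense collection.
// Host, collection name, and query_by fields are assumptions.
import { Hono } from "npm:hono";
import Typesense from "npm:typesense";

const typesense = new Typesense.Client({
  nodes: [{ host: "example.typesense.net", port: 443, protocol: "https" }],
  apiKey: Deno.env.get("TYPESENSE_API_KEY") ?? "",
});

const app = new Hono();

app.get("/resources", async (c) => {
  const q = c.req.query("q") ?? "*";
  // Full-text search over the resources collection.
  const results = await typesense
    .collections("resources")
    .documents()
    .search({ q, query_by: "name,city" });
  return c.json(results.hits ?? []);
});

// Val Town HTTP vals can export Hono's fetch handler directly.
export default app.fetch;
```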
weatherTomorrowGpt3Example
@patrickjm
An interactive, runnable TypeScript val by patrickjm
Script
export let weatherTomorrowGpt3Example = weatherTomorrowGpt3({
city: "New York City",
openAiKey: process.env.openai_key,
dailyDadJoke
@jxnblk
Daily Dad Joke. How do you make a programmer laugh every morning? A dad joke ~~cron job~~ website! API: This val uses the OpenAI API.
HTTP
## API
This val uses the [OpenAI API](https://www.val.town/v/std/openai)
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
// try changing the prompt to get different responses, e.g. 'Tell me a joke about backend engineers'
export default async function dailyDadJoke(req: Request): Promise<Response> {
const openai = new OpenAI();
const resp = await openai.chat.completions.create({
messages: [
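Filling in the rest of the excerpt, the val presumably asks the model for a dad joke and returns it as the HTTP response body. A sketch of the complete val under that assumption; the model name and prompt wording are not shown in the excerpt:

```ts
// Sketch of the full dailyDadJoke val implied by the excerpt. Model choice
// and prompt wording are assumptions.
import { OpenAI } from "https://esm.town/v/std/openai?v=4";

export default async function dailyDadJoke(req: Request): Promise<Response> {
  const openai = new OpenAI();
  // Try changing the prompt to get different responses,
  // e.g. "Tell me a joke about backend engineers".
  const resp = await openai.chat.completions.create({
    messages: [
      { role: "user", content: "Tell me a dad joke" },
    ],
    model: "gpt-4o-mini",
    max_tokens: 60,
  });
  return new Response(resp.choices[0]?.message?.content ?? "No joke today.");
}
```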
valle_tmp_3173618096166977554668362851031
@janpaul123
/** @jsxImportSource https://esm.sh/react */
HTTP
import _ from "npm:lodash@4";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const contextWindow = await valleGetValsContextWindow.call(null, model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
valle_tmp_02267091431922629935130311039563566
@janpaul123
/** @jsxImportSource https://esm.sh/react */
HTTP
import _ from "npm:lodash@4";
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const contextWindow = await valleGetValsContextWindow(model);
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model,
webscrapeWikipediaIntro
@richardkaplan
An interactive, runnable TypeScript val by richardkaplan
Script
const cheerio = await import("npm:cheerio");
const html = await fetchText(
"https://en.wikipedia.org/wiki/OpenAI",
const $ = cheerio.load(html);
// Cheerio accepts a CSS selector, here we pick the second <p>
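The excerpt fetches the Wikipedia article on OpenAI, loads the HTML into Cheerio, and pulls the second paragraph, which usually holds the intro text. A sketch completing it; the fetchText import path follows Val Town's usual location for that helper and is an assumption:

```ts
// Fetch the article, load it into Cheerio, and take the text of the
// second <p>. The fetchText import path is an assumption.
const cheerio = await import("npm:cheerio");
const { fetchText } = await import("https://esm.town/v/stevekrouse/fetchText");

const html = await fetchText("https://en.wikipedia.org/wiki/OpenAI");
const $ = cheerio.load(html);
// Select all <p> elements with a CSS selector, then take the second one.
const intro = $("p").eq(1).text();
console.log(intro);
```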