emojiSearchBot
@stevekrouse
Emoji search bot. Replies to mentions on Twitter with the emojis your photo evokes. Inspired by Devon Zuegel.
Cron
import process from "node:process";
import OpenAI from "npm:openai";
const openai = new OpenAI({ apiKey: process.env.openai });
export async function emojiSearchBot({ lastRunAt }: Interval) {
if (attachment.type !== "photo") return;
const response = await openai.chat.completions.create({
model: "gpt-4-vision-preview",
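The excerpt above only shows the imports and the model name, so here is a hedged sketch of how a vision request like this might be assembled. The helper names (`emojiPayload`, `emojisForPhoto`) and the prompt wording are illustrative, not the val's actual code; the message shape follows OpenAI's chat-completions vision format, and the call uses plain `fetch` so no SDK is assumed.

```typescript
import process from "node:process";

// Build a chat-completions payload asking a vision model to answer with emojis only.
// (Illustrative sketch; `emojiPayload` is not part of the original val.)
function emojiPayload(photoUrl: string) {
  return {
    model: "gpt-4-vision-preview",
    max_tokens: 16,
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Reply with 3-5 emojis that this photo evokes. Emojis only." },
          { type: "image_url", image_url: { url: photoUrl } },
        ],
      },
    ],
  };
}

// Send the payload to OpenAI (requires OPENAI_API_KEY in the environment).
async function emojisForPhoto(photoUrl: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(emojiPayload(photoUrl)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```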
getContentFromUrl
@willthereader
getContentFromUrl: use this for summarizers. Combines https://r.jina.ai/URL and markdown.download's YouTube transcription getter to do its best to retrieve content from URLs. Usage:
https://yawnxyz-getcontentfromurl.web.val.run/https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10187409/
https://yawnxyz-getcontentfromurl.web.val.run/https://www.youtube.com/watch?v=gzsczZnS84Y&ab_channel=PhageDirectory
HTTP (deprecated)
let result = await ai({
provider: "openai",
model: "gpt-3.5-turbo",
let result = await ai({
provider: "openai",
model: "gpt-3.5-turbo",
let result = await ai({
provider: "openai",
embed: true,
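The routing idea in the description (YouTube links go to a transcript getter, everything else through the r.jina.ai reader) can be sketched as below. The helper names and the markdown.download endpoint shape are assumptions for illustration, not the val's actual code; only the r.jina.ai prefix convention comes from the description.

```typescript
// Decide whether a URL points at a YouTube video (illustrative helper).
function isYouTubeUrl(url: string): boolean {
  try {
    const host = new URL(url).hostname;
    return host.endsWith("youtube.com") || host === "youtu.be";
  } catch {
    return false;
  }
}

// Pick a content-extraction strategy for a URL. The transcript endpoint below is a
// placeholder; the real val delegates YouTube links to markdown.download's getter.
function contentFetchUrl(url: string): string {
  if (isYouTubeUrl(url)) {
    return `https://markdown.download/${url}`; // hypothetical endpoint shape
  }
  return `https://r.jina.ai/${url}`; // Jina reader: prefix any URL to get markdown back
}
```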
jadeGopher
@willthereader
ChatGPT implemented in Val Town. Demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
HTTP (deprecated)
# ChatGPT Implemented in Val Town
Demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
const openai = new OpenAI();
import { Hono } from "npm:hono@3";
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
const message = c.req.query("message");
await openai.beta.threads.messages.create(
threadId,
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,
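The `"data: " + JSON.stringify(str) + "\n\n"` line in the excerpt is standard Server-Sent Events framing. A hedged sketch of that framing and of piping text deltas into a streamed `Response` follows; the helper names are illustrative, and in the real val the deltas come from `openai.beta.threads.runs.stream` events rather than a plain iterable.

```typescript
// Encode one Server-Sent Events frame: a "data:" line followed by a blank line.
function sseFrame(payload: unknown): string {
  return "data: " + JSON.stringify(payload) + "\n\n";
}

// Wrap a sequence of text deltas in a streamed SSE Response (illustrative wiring).
function sseResponse(deltas: Iterable<string> | AsyncIterable<string>): Response {
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    async start(controller) {
      for await (const delta of deltas) {
        controller.enqueue(encoder.encode(sseFrame(delta)));
      }
      controller.close();
    },
  });
  return new Response(body, {
    headers: { "Content-Type": "text/event-stream", "Cache-Control": "no-cache" },
  });
}
```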
openAiFreeUsage
@patrickjm
// set at Sat Dec 09 2023 01:45:57 GMT+0000 (Coordinated Universal Time)
Script
// set at Sat Dec 09 2023 01:45:57 GMT+0000 (Coordinated Universal Time)
export let openAiFreeUsage = {"used_quota":12709400,"used_quota_usd":1.27094,"exceeded":false};
VALLE
@gitgrooves
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId. If you want to use OpenAI models, set the OPENAI_API_KEY env var. If you want to use Anthropic models, set the ANTHROPIC_API_KEY env var. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-v
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environ
chatGPTExample
@maxdrake
An interactive, runnable TypeScript val by maxdrake
Script
let response_obj = await chatGPT(
"hello assistant",
// To continue a conversation, you can pass in something of the form: https://platform.openai.com/docs/guides/chat/introduction
API_KEY
let response_text = response_obj.message;
CoverLetterGenerator
@shawnbasquiat
// This val creates a cover letter generator using OpenAI's GPT model
HTTP
// This val creates a cover letter generator using OpenAI's GPT model
// It takes a resume (as a PDF file) and job description as input and returns a concise cover letter
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function server(req: Request): Promise<Response> {
console.log("Entering generateCoverLetter function");
const openai = new OpenAI();
console.log("OpenAI instance created");
try {
const completion = await openai.chat.completions.create({
model: "gpt-4o-mini",
max_tokens: 500,
console.log("OpenAI API call completed");
return completion.choices[0].message.content || "Unable to generate cover letter.";
} catch (error) {
console.error("Error in OpenAI API call:", error);
throw error;
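Since this val runs as an HTTP handler, a client would typically send the resume file and the job description as multipart form data. A minimal client sketch follows; the field names ("resume", "jobDescription") and the endpoint argument are assumptions for illustration, as the excerpt does not show how the val parses its request.

```typescript
// Build a multipart form for a resume-PDF plus job-description request
// (field names are hypothetical, not confirmed by the val's source).
function buildCoverLetterForm(resumePdf: Blob, jobDescription: string): FormData {
  const form = new FormData();
  form.append("resume", resumePdf, "resume.pdf");
  form.append("jobDescription", jobDescription);
  return form;
}

// POST the form to the val's endpoint (placeholder URL) and return the letter text.
async function requestCoverLetter(
  endpoint: string,
  resumePdf: Blob,
  jobDescription: string,
): Promise<string> {
  const res = await fetch(endpoint, {
    method: "POST",
    body: buildCoverLetterForm(resumePdf, jobDescription),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.text();
}
```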
VALLE
@wlxiaozhzh
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId. If you want to use OpenAI models, set the OPENAI_API_KEY env var. If you want to use Anthropic models, set the ANTHROPIC_API_KEY env var. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-v
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environ
VALLE
@starrnx
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId. If you want to use OpenAI models, set the OPENAI_API_KEY env var. If you want to use Anthropic models, set the ANTHROPIC_API_KEY env var. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-v
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environ
questionsWithGuidelinesChain
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
export const questionsWithGuidelinesChain = (async () => {
const { ChatOpenAI } = await import(
"https://esm.sh/langchain@0.0.150/chat_models/openai"
const { LLMChain } = await import("https://esm.sh/langchain@0.0.150/chains");
const questionChain = questionPrompt
.pipe(new ChatOpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
.pipe(new StringOutputParser()));
.pipe(
new ChatOpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
.pipe(new StringOutputParser());
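The excerpt above composes prompt, model, and output parser with LangChain's `.pipe()`. That composition pattern can be mimicked dependency-free with a tiny runnable abstraction; this is a sketch of the pattern only, not the real LangChain classes, and the fake model below just uppercases its input instead of calling OpenAI.

```typescript
// Minimal stand-in for LangChain's runnable `pipe` pattern (illustrative, not the library).
class Runnable<I, O> {
  constructor(private fn: (input: I) => O) {}
  invoke(input: I): O {
    return this.fn(input);
  }
  // Chain this runnable into the next one, feeding this output in as its input.
  pipe<N>(next: Runnable<O, N>): Runnable<I, N> {
    return new Runnable((input: I) => next.invoke(this.invoke(input)));
  }
}

// Prompt template -> "model" -> string parser, mirroring the questionChain shape.
const questionPrompt = new Runnable((topic: string) => `Generate questions about: ${topic}`);
const fakeModel = new Runnable((prompt: string) => ({ content: prompt.toUpperCase() }));
const stringParser = new Runnable((msg: { content: string }) => msg.content);

const questionChain = questionPrompt.pipe(fakeModel).pipe(stringParser);
```

In the real val, `fakeModel` would be a `ChatOpenAI` instance and `stringParser` a `StringOutputParser`; the piping semantics are the same.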
generateValCode
@yawnxyz
// import { openaiChatCompletion } from "https://esm.town/v/andreterron/openaiChatCompletion";
Script
// import { openaiChatCompletion } from "https://esm.town/v/andreterron/openaiChatCompletion";
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
export const generateValCode = async (
const lodash = await import('npm:lodash');
const response = await openai.chat.completions.create({
openaiKey: key,
organization: org,
VALLE
@tmcw
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId. If you want to use OpenAI models, set the OPENAI_API_KEY env var. If you want to use Anthropic models, set the ANTHROPIC_API_KEY env var. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-v
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environ
demoOpenAIGPTSummary
@zzz
An interactive, runnable TypeScript val by zzz
Script
import { confession } from "https://esm.town/v/alp/confession?v=2";
import { runVal } from "https://esm.town/v/std/runVal";
export let demoOpenAIGPTSummary = await runVal(
"zzz.OpenAISummary",
confession,
modelName: "gpt-3.5-turbo",
weatherGPT
@tgrv
Cron
import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";
let location = "Wolfsburg";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
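The excerpt only shows the imports, a location, and the start of a chat-completion call, so here is a hedged sketch of the shape of such a cron val: fetch a forecast, turn it into a prompt, and ask the model for a summary to email. The Open-Meteo call and both helper names are assumptions for illustration, not the val's actual code.

```typescript
// Turn raw weather numbers into a prompt for the model (illustrative helper).
function buildWeatherPrompt(location: string, tempC: number, windKmh: number): string {
  return `Write a one-paragraph, friendly weather summary for ${location}: ` +
    `current temperature ${tempC}°C, wind ${windKmh} km/h.`;
}

// Fetch current conditions from Open-Meteo (free, no API key) for given coordinates.
async function currentWeather(
  lat: number,
  lon: number,
): Promise<{ temperature: number; windspeed: number }> {
  const url =
    `https://api.open-meteo.com/v1/forecast?latitude=${lat}&longitude=${lon}&current_weather=true`;
  const data = await fetch(url).then((r) => r.json());
  return data.current_weather;
}
```

The prompt string would then go into `openai.chat.completions.create({ messages: [...] })` and the completion text into `email(...)`, as the val's imports suggest.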
sqliteWriter
@nbbaier
SQLite QueryWriter
The QueryWriter class is a utility for generating and executing SQL queries using natural language and OpenAI. It provides a simplified interface for interacting with your Val Town SQLite database and generating SQL queries based on user inputs. This val is inspired by prisma-gpt. PRs welcome! See Todos below for some ideas I have.

Usage:
* Import the QueryWriter class into your script: `import { QueryWriter } from "https://esm.town/v/nbbaier/sqliteWriter";`
* Create an instance of QueryWriter, providing the desired table and an optional model: `const writer = new QueryWriter({ table: "my_table", model: "gpt-4-1106-preview" });`
* Call the writeQuery() method to generate an SQL query based on a user input string: `const userInput = "Show me all the customers with more than $1000 in purchases."; const query = await writer.writeQuery(userInput);`
* Alternatively, use the gptQuery() method to both generate and execute the SQL query: `const result = await writer.gptQuery(userInput);`
* Handle the generated query or query result according to your application's needs.

API:
* `new QueryWriter(args: { table: string; model?: string }): QueryWriter` creates a new instance of the QueryWriter class.
  * `table`: the name of the database table to operate on.
  * `model` (optional): the model to use for generating SQL queries. Defaults to "gpt-3.5-turbo".
  * `apiKey` (optional): an OpenAI API key. Defaults to `Deno.env.get("OPENAI_API_KEY")`.
* `writeQuery(str: string): Promise<string>` generates an SQL query based on the provided user input string and returns a Promise that resolves to the generated query.
* `gptQuery(str: string): Promise<any>` generates and executes an SQL query based on the provided user input string and returns a Promise that resolves to the query result.

Todos:
* [ ] Handle multiple tables for more complex use cases
* [ ] Edit prompt to allow for more than just SELECT queries
* [ ] Allow a user to add to the system prompt maybe?
* [ ] Expand usage beyond just Turso SQLite to integrate with other databases
Script
utility for generating and executing SQL queries using natural language and OpenAI. It provides a simplified interface for i
- `apiKey` (optional): An OpenAI API key. Defaults to `Deno.env.get("OPENAI_API_KEY")`.
import OpenAI from "npm:openai";
openai: OpenAI;
const { table, model, ...openaiOptions } = options;
// this.apiKey = openaiOptions.apiKey ? openaiOptions.apiKey : Deno.env.get("OPENAI_API_KEY");
this.openai = new OpenAI(openaiOptions);
const response = await this.openai.chat.completions.create({
throw new Error("No response from OpenAI");
throw new Error("No SQL returned from OpenAI. Try again.");
const response = await this.openai.chat.completions.create({
throw new Error("No response from OpenAI");
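Chat models often wrap generated SQL in markdown fences, so a query writer like this typically post-processes the completion before running it (the excerpt's "No SQL returned from OpenAI" error hints at such a validation step). A minimal sketch of that extraction follows; the helper is illustrative and not taken from the val's source.

```typescript
// Strip an optional markdown code fence from a model response to recover bare SQL.
// (Illustrative helper; not necessarily how sqliteWriter does it.)
function extractSql(response: string): string {
  const fenced = response.match(/```(?:sql)?\s*([\s\S]*?)```/i);
  const sql = (fenced ? fenced[1] : response).trim();
  if (!sql) throw new Error("No SQL returned from OpenAI. Try again.");
  return sql;
}
```

The extracted string would then be handed to the SQLite client, which is why restricting the prompt to SELECT queries (see the Todos) matters for safety.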