openAiFreeUsage
@patrickjm
// set at Sat Dec 09 2023 01:45:57 GMT+0000 (Coordinated Universal Time)
Script
export let openAiFreeUsage = {"used_quota":12709400,"used_quota_usd":1.27094,"exceeded":false};
chatGPTExample
@maxdrake
An interactive, runnable TypeScript val by maxdrake
Script
let response_obj = await chatGPT(
"hello assistant",
[], // this can be an empty list, or, if you're using this to continue a conversation, you can pass in something of the form: https://platform.openai.com/docs/guides/chat/introduction
API_KEY,
);
let response_text = response_obj.message;
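The second argument above is the running message history in the OpenAI chat format linked in the comment. A minimal sketch of how such a history might be built up between turns (the `Message` type and `addTurn` helper are illustrative names, not part of the original val):

```typescript
// Minimal sketch of maintaining a chat history in the OpenAI message format.
// `Message` and `addTurn` are illustrative, not part of the original val.
type Message = { role: "system" | "user" | "assistant"; content: string };

// Append a user prompt and the assistant's reply to the running history.
function addTurn(history: Message[], userPrompt: string, assistantReply: string): Message[] {
  return [
    ...history,
    { role: "user", content: userPrompt },
    { role: "assistant", content: assistantReply },
  ];
}

// Example: an empty history after one turn contains two messages.
const history = addTurn([], "hello assistant", "Hello! How can I help?");
```

On the next call you would pass `history` (rather than `[]`) as the second argument to continue the conversation.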
competentCoffeeTyrannosaurus
@shivammunday
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
if (request.method === 'POST' && new URL(request.url).pathname === '/chat') {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const body = await request.json();
Be conversational, helpful, and website-specific in your responses.`;
const completion = await openai.chat.completions.create({
messages: [
CoverLetterGenerator
@shawnbasquiat
// This val creates a cover letter generator using OpenAI's GPT model
HTTP
// It takes a resume (as a PDF file) and job description as input and returns a concise cover letter
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function server(req: Request): Promise<Response> {
console.log("Entering generateCoverLetter function");
const openai = new OpenAI();
console.log("OpenAI instance created");
try {
const completion = await openai.chat.completions.create({
model: "gpt-4o-mini",
max_tokens: 500,
console.log("OpenAI API call completed");
return completion.choices[0].message.content || "Unable to generate cover letter.";
} catch (error) {
console.error("Error in OpenAI API call:", error);
throw error;
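The `completion.choices[0].message.content || "Unable to generate cover letter."` pattern above guards against a missing or empty response. A small sketch of that extraction as a standalone helper (the `extractContent` name and the loose `Completion` shape are assumptions for illustration, not part of the val):

```typescript
// Loose shape of the part of a chat completion response that gets read.
// Mirrors the `completion.choices[0].message.content || fallback` pattern.
type Completion = { choices: { message: { content: string | null } }[] };

// Return the first choice's content, or a fallback when it is missing or empty.
function extractContent(completion: Completion, fallback: string): string {
  return completion.choices[0]?.message?.content || fallback;
}

const ok = extractContent(
  { choices: [{ message: { content: "Dear Hiring Manager..." } }] },
  "Unable to generate cover letter.",
);
const empty = extractContent(
  { choices: [{ message: { content: null } }] },
  "Unable to generate cover letter.",
);
```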
questionsWithGuidelinesChain
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
export const questionsWithGuidelinesChain = (async () => {
const { ChatOpenAI } = await import(
"https://esm.sh/langchain@0.0.150/chat_models/openai"
);
const { LLMChain } = await import("https://esm.sh/langchain@0.0.150/chains");
const questionChain = questionPrompt
.pipe(new ChatOpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
.pipe(new StringOutputParser()));
.pipe(
new ChatOpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
.pipe(new StringOutputParser());
generateValCode
@yawnxyz
// import { openaiChatCompletion } from "https://esm.town/v/andreterron/openaiChatCompletion";
Script
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
export const generateValCode = async (
const lodash = await import('npm:lodash');
const response = await openai.chat.completions.create({
openaiKey: key,
organization: org,
VALLE
@tmcw
VALL-E LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`. If you want to use OpenAI models, set the `OPENAI_API_KEY` env var. If you want to use Anthropic models, set the `ANTHROPIC_API_KEY` env var. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP
demoOpenAIGPTSummary
@zzz
An interactive, runnable TypeScript val by zzz
Script
import { confession } from "https://esm.town/v/alp/confession?v=2";
import { runVal } from "https://esm.town/v/std/runVal";
export let demoOpenAIGPTSummary = await runVal(
"zzz.OpenAISummary",
confession,
modelName: "gpt-3.5-turbo",
calories
@stevekrouse
Calorie Count via Photo. Uploads your photo to ChatGPT's new vision model to automatically categorize the food and estimate the calories.
HTTP
import { fileToDataURL } from "https://esm.town/v/stevekrouse/fileToDataURL";
import { modifyImage } from "https://esm.town/v/stevekrouse/modifyImage";
import { chat } from "https://esm.town/v/stevekrouse/openai";
import { Hono } from "npm:hono@3";
function esmTown(url) {
tidyRedWhale
@websrai
@jsxImportSource https://esm.sh/react
HTTP
if (request.method === 'POST' && new URL(request.url).pathname === '/chat') {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
try {
const { messages } = await request.json();
const completion = await openai.chat.completions.create({
model: "gpt-4o-mini",
sqliteWriter
@nbbaier
SQLite QueryWriter

The `QueryWriter` class is a utility for generating and executing SQL queries using natural language and OpenAI. It provides a simplified interface for interacting with your Val Town SQLite database and generating SQL queries based on user inputs. This val is inspired by prisma-gpt. PRs welcome! See Todos below for some ideas I have.

Usage

Import the `QueryWriter` class into your script:

`import { QueryWriter } from "https://esm.town/v/nbbaier/sqliteWriter";`

Create an instance of `QueryWriter`, providing the desired table and an optional model:

`const writer = new QueryWriter({ table: "my_table", model: "gpt-4-1106-preview" });`

Call the `writeQuery()` method to generate an SQL query based on a user input string:

`const userInput = "Show me all the customers with more than $1000 in purchases.";`
`const query = await writer.writeQuery(userInput);`

Alternatively, use the `gptQuery()` method to both generate and execute the SQL query:

`const userInput = "Show me all the customers with more than $1000 in purchases.";`
`const result = await writer.gptQuery(userInput);`

Handle the generated query or query result according to your application's needs.

API

`new QueryWriter(args: { table: string; model?: string }): QueryWriter`

Creates a new instance of the `QueryWriter` class.

- `table`: The name of the database table to operate on.
- `model` (optional): The model to use for generating SQL queries. Defaults to "gpt-3.5-turbo".
- `apiKey` (optional): An OpenAI API key. Defaults to `Deno.env.get("OPENAI_API_KEY")`.

`writeQuery(str: string): Promise<string>`

Generates an SQL query based on the provided user input string.

- `str`: The user input string describing the desired query.

Returns a Promise that resolves to the generated SQL query.

`gptQuery(str: string): Promise<any>`

Generates and executes an SQL query based on the provided user input string.

- `str`: The user input string describing the desired query.

Returns a Promise that resolves to the result of executing the generated SQL query.

Todos

- [ ] Handle multiple tables for more complex use cases
- [ ] Edit prompt to allow for more than just SELECT queries
- [ ] Allow a user to add to the system prompt maybe?
- [ ] Expand usage beyond just Turso SQLite to integrate with other databases
Script
import OpenAI from "npm:openai";
openai: OpenAI;
const { table, model, ...openaiOptions } = options;
// this.apiKey = openaiOptions.apiKey ? openaiOptions.apiKey : Deno.env.get("OPENAI_API_KEY");
this.openai = new OpenAI(openaiOptions);
const response = await this.openai.chat.completions.create({
throw new Error("No response from OpenAI");
throw new Error("No SQL returned from OpenAI. Try again.");
const response = await this.openai.chat.completions.create({
throw new Error("No response from OpenAI");
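Models often wrap generated SQL in markdown code fences, which is one reason a "No SQL returned" check is useful. A hedged sketch of a cleanup step such a class might apply before executing the query (the `stripSqlFences` helper is an assumption for illustration, not the val's actual implementation):

```typescript
// Strip an optional markdown code fence (```sql ... ```) from a model response
// and return the bare SQL, or null when nothing usable remains.
// Illustrative helper, not the val's actual implementation.
function stripSqlFences(raw: string): string | null {
  const fenced = raw.match(/```(?:sql)?\s*([\s\S]*?)```/i);
  const sql = (fenced ? fenced[1] : raw).trim();
  return sql.length > 0 ? sql : null;
}

const clean = stripSqlFences("```sql\nSELECT * FROM my_table;\n```");
const bare = stripSqlFences("SELECT 1;");
const nothing = stripSqlFences("```sql\n```");
```

Returning `null` for an empty result lets the caller raise the "No SQL returned from OpenAI" error seen above.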
weatherGPT
@tgrv
Cron
import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";
let location = "Wolfsburg";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
FindFraudTrendsUsingGPT
@mjweaver01
FraudTrendsGPT: Generate real-time Fraud Trend Reports using GPT-4o.
Goal: This agent is designed to find trends in merchant transaction reports and produce a real-time report. Reports are complete with relevant data tables and Mermaid charts.
Semi-Data-Agnostic: This agent is semi-agnostic to the data provided, meaning it will produce a report so long as the data is shaped similarly, or the prompt is updated to support the new data shape.
Agent Reusability: This agent can be rewritten for any number of use cases. With some variable renaming and a rewritten prompt, it should produce accurate data-analytic reports for any data provided.
HTTP
import { Hono } from "npm:hono";
import { OpenAI } from "npm:openai";
const trendGPT = async (data, onData) => {
const openai = new OpenAI();
const chatStream = await openai.chat.completions.create({
messages: [
poembuilder3
@stevekrouse
@jsxImportSource npm:hono@3/jsx
HTTP
/** @jsxImportSource npm:hono@3/jsx */
import { OpenAI } from "https://esm.town/v/std/openai?v=2";
import { sqlite } from "https://esm.town/v/std/sqlite?v=5";
import { Hono } from "npm:hono@3";
FindTrendsUsingGPT
@weaverwhale
An interactive, runnable TypeScript val by weaverwhale
HTTP
import { Hono } from "npm:hono";
import { OpenAI } from "npm:openai";
const trendGPT = async (data, onData) => {
const openai = new OpenAI();
// Start the OpenAI stream
const chatStream = await openai.chat.completions.create({
messages: [
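Both trend vals consume the completion as a stream (`chatStream`) and push pieces to an `onData` callback. A minimal sketch of accumulating streamed deltas into the full text (the chunk shape mirrors the OpenAI streaming API; the `accumulate` helper is illustrative, not the val's code):

```typescript
// Shape of the delta chunks the OpenAI streaming API yields.
type Chunk = { choices: { delta: { content?: string } }[] };

// Concatenate streamed deltas into the full completion text,
// invoking `onData` for each non-empty piece, as the vals above do.
function accumulate(chunks: Chunk[], onData: (piece: string) => void): string {
  let full = "";
  for (const chunk of chunks) {
    const piece = chunk.choices[0]?.delta?.content ?? "";
    if (piece) {
      onData(piece);
      full += piece;
    }
  }
  return full;
}

const pieces: string[] = [];
const text = accumulate(
  [
    { choices: [{ delta: { content: "Fraud " } }] },
    { choices: [{ delta: { content: "trends" } }] },
    { choices: [{ delta: {} }] }, // final chunk with no content
  ],
  (p) => pieces.push(p),
);
```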