Search

Results include substring matches and semantically similar vals.
sureVioletSlug
@arthrod
An interactive, runnable TypeScript val by arthrod
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
import { email } from "https://esm.town/v/std/email";
const openai = new OpenAI();
const AUTH_TOKEN = Deno.env.get("valtown");
async function determineIntent(content: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    messages: [
multipleKeysAndMemoryConversationChainExample
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
export const multipleKeysAndMemoryConversationChainExample = (async () => {
  const { ChatOpenAI } = await import(
    "https://esm.sh/langchain/chat_models/openai"
  );
  const { BufferMemory } = await import("https://esm.sh/langchain/memory");
  const { ConversationChain } = await import("https://esm.sh/langchain/chains");
  const llm = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    openAIApiKey: process.env.OPENAI_API_KEY,
    temperature: 0,
sendSassyEmail
@transcendr
// Configuration object for customizing the sassy message generation
Cron
import { email } from "https://esm.town/v/std/email";
import { OpenAI } from "https://esm.town/v/std/openai";
// Configuration object for customizing the sassy message generation
subject: "Daily Sass Delivery 🔥",
// Prompt configuration for OpenAI
prompts: {
export default async function() {
const openai = new OpenAI();
// Randomly select a prompt style
Math.floor(Math.random() * CONFIG.prompts.styles.length)
const completion = await openai.chat.completions.create({
messages: [
VALLE
@jxnblk
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId. If you want to use OpenAI models, you need to set the OPENAI_API_KEY env var. If you want to use Anthropic models, you need to set the ANTHROPIC_API_KEY env var. Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-v
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environ
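The setup notes above imply a simple startup check: pick a provider based on which API key env var is set. A minimal sketch of that check, assuming a `pickProvider` helper that is hypothetical and not part of VALLE itself:

```typescript
// Hypothetical helper: choose a model provider based on which API-key
// env vars are present, mirroring the setup notes above.
export function pickProvider(
  env: Record<string, string | undefined>,
): "openai" | "anthropic" | null {
  if (env.OPENAI_API_KEY) return "openai";
  if (env.ANTHROPIC_API_KEY) return "anthropic";
  return null; // neither key set: the user still needs to configure one
}
```

In a val this would typically be called with `Deno.env.toObject()`.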
healthtech4africa
@thomaskangah
AI Mental health support app for precision neuroscience
HTTP
export default async function server(request: Request): Promise<Response> {
  const { OpenAI } = await import("https://esm.town/v/std/openai");
  const { sqlite } = await import("https://esm.town/v/stevekrouse/sqlite");
  const openai = new OpenAI();
  const SCHEMA_VERSION = 2;
  systemMessage = "You are an AI mental health therapist providing follow-up support for a Kenyan youth who may be experi
  const completion = await openai.chat.completions.create({
    messages: [
emailValHandler
@martinbowling
Email AI Assistant

Chat with your favorite AI via email (with enhanced attachment and content support).

What It Does
This advanced email AI assistant allows you to:
- Send emails to an AI for comprehensive analysis and responses
- Automatically transform your queries into structured research objectives
- Parse and analyze various types of content:
  - PDF attachments
  - Image attachments (using GPT-4 Vision)
  - Website content from links in your email
- Get detailed, context-aware responses directly to your inbox

Setup Guide
1. Copy this Val and save it as an Email Val (choose Val type in the top-right corner of the editor)
2. Set up the required environment variables:
   - OPENAI_API_KEY: Your OpenAI API key
   - MD_API_KEY: Your API key for the markdown extraction service (optional)
   You can set these using Val Town's environment variables: https://docs.val.town/reference/environment-variables/
3. Copy the email address of the Val (click 3 dots in top-right > Copy > Copy email address)
4. Compose your email:
   - Write your query or request in the email body
   - Attach any relevant PDFs or images
   - Include links to websites you want analyzed
5. Send it to the Val email address
6. Wait for the AI's response, which will arrive in your inbox shortly

How to Use Effectively
- Be clear and specific in your queries
- Provide context when necessary
- Utilize attachments and links to give the AI more information to work with
- The AI will transform your query into a structured research objective, so even simple questions may yield comprehensive answers

Supported File Types and Limitations
- PDFs: Text content will be extracted and analyzed
- Images: Will be analyzed using the GPT-4 Vision API
- Websites: Content will be extracted and converted to markdown for analysis
- Other file types are not currently supported and will be ignored

Note: There may be size limitations for attachments, and processing times may vary based on the complexity of the content.
The AI uses advanced prompt transformation to enhance your queries, providing more detailed and structured responses. This process helps in generating comprehensive and relevant answers to your questions.
Email
- OPENAI_API_KEY: Your OpenAI API key
const openaiUrl = "https://api.openai.com/v1/chat/completions";
const openaiKey = Deno.env.get("OPENAI_API_KEY");
if (!openaiKey) {
  throw new Error("OPENAI_API_KEY environment variable is not set.");
const transformedPrompt = await transformPrompt(receivedEmail.text, openaiUrl, openaiKey, model);
const { pdfTexts, imageAnalysis } = await processAttachments(attachments, openaiKey, transformedPrompt);
// Step 5: Send to OpenAI and get response
const openaiResponse = await sendRequestToOpenAI(finalPrompt, transformedPrompt, openaiUrl, openaiKey, model);
await sendResponseByEmail(receivedEmail.from, openaiResponse);
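The snippet above sketches the pipeline: transform the prompt, process attachments, query OpenAI, and reply by email. The step that merges attachment results into the final prompt can be illustrated with a pure function; `buildFinalPrompt` and its exact shape are assumptions for illustration, not the val's actual code:

```typescript
// Hypothetical helper: combine the transformed query with extracted
// attachment content into a single prompt for the chat-completion call.
interface AttachmentResults {
  pdfTexts: string[];
  imageAnalysis: string[];
}

export function buildFinalPrompt(
  transformedPrompt: string,
  { pdfTexts, imageAnalysis }: AttachmentResults,
): string {
  const sections = [transformedPrompt];
  if (pdfTexts.length) {
    sections.push("PDF content:\n" + pdfTexts.join("\n---\n"));
  }
  if (imageAnalysis.length) {
    sections.push("Image analysis:\n" + imageAnalysis.join("\n---\n"));
  }
  return sections.join("\n\n");
}
```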
chat
@chatgpt
// Forked from @webup.chat
Script
options = {},
// Initialize OpenAI API stub
const { Configuration, OpenAIApi } = await import("https://esm.sh/openai");
const configuration = new Configuration({
  apiKey: process.env.OPENAI,
});
const openai = new OpenAIApi(configuration);
// Request chat completion
: prompt;
const { data } = await openai.createChatCompletion({
model: "gpt-3.5-turbo-0613",
reasoninghelper
@prashamtrivedi
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
const { OpenAI } = await import("npm:openai");
const { zodResponseFormat } = await import("npm:openai/helpers/zod");
const openai = new OpenAI();
console.log("OpenAI API Request:", { prompt, topic });
completion = await openai.beta.chat.completions.parse({
completion = await openai.beta.chat.completions.parse({
completion = await openai.beta.chat.completions.parse({
completion = await openai.beta.chat.completions.parse({
console.log("OpenAI API Response:", response);
console.error("Error in OpenAI API call:", error);
Test00_getModelBuilder
@lisazz
241219 practice, adapted from webup
Script
provider?: "openai" | "huggingface";
} = { type: "llm", provider: "openai" }, options?: any) {
matches({ type: "llm", provider: "openai" }),
const { OpenAI } = await import("https://esm.sh/langchain/llms/openai");
return new OpenAI(args);
matches({ type: "chat", provider: "openai" }),
const { ChatOpenAI } = await import("https://esm.sh/langchain/chat_models/openai");
return new ChatOpenAI(args);
matches({ type: "embedding", provider: "openai" }),
const { OpenAIEmbeddings } = await import("https://esm.sh/langchain/embeddings/openai");
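The snippet above dispatches on a `{ type, provider }` spec to pick which LangChain class to load. That pattern can be sketched without the network-dependent dynamic imports; the stubbed return values below stand in for the real class constructors and are assumptions, not the val's actual code:

```typescript
// Sketch of the builder pattern shown above: match a { type, provider }
// spec against a dispatch table. The real val dynamically imports
// LangChain classes; here each loader is stubbed with the class name.
type Spec = {
  type: "llm" | "chat" | "embedding";
  provider: "openai" | "huggingface";
};

const matches = (pattern: Spec) => (spec: Spec) =>
  spec.type === pattern.type && spec.provider === pattern.provider;

export function getModelBuilder(
  spec: Spec = { type: "llm", provider: "openai" },
): string {
  const table: Array<[(s: Spec) => boolean, () => string]> = [
    [matches({ type: "llm", provider: "openai" }), () => "OpenAI"],
    [matches({ type: "chat", provider: "openai" }), () => "ChatOpenAI"],
    [matches({ type: "embedding", provider: "openai" }), () => "OpenAIEmbeddings"],
  ];
  const entry = table.find(([pred]) => pred(spec));
  if (!entry) throw new Error("no builder registered for this spec");
  return entry[1]();
}
```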
autoGPT_Test
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
export let autoGPT_Test = (async () => {
  const { Configuration, OpenAIApi } = await import("npm:openai@3.2.1");
  const configuration = new Configuration({
    apiKey: process.env.openai,
  });
  const openai = new OpenAIApi(configuration);
  const completion = await openai.createChatCompletion({
    model: "gpt-4",
OpenAISummary
@zzz
// Create a summary from a given text using GPT 4
Script
// Create a summary from a given text using GPT 4
export const OpenAISummary = async (text: string, config: {
apiKey?: string;
"anon",
"@zzz.OpenAISummary",
2,
const agent = await AIAgent(
apiKey || process.env.OPENAI_API_KEY_GPT4,
const response = await agent.summarize(text, modelName);
generateFunction
@wolf
An interactive, runnable TypeScript val by wolf
Script
import { OpenAI } from "https://esm.town/v/std/openai";
function extractCode(str: string): string {
return "Please provide a function name";
const openai = new OpenAI();
const prompt =
`Generate a TypeScript function named "${functionName}" with the following parameters: ${parameters}. ONLY RETURN VALID J
const completion = await openai.chat.completions.create({
messages: [
draftReadme
@nbbaier
Code Documentation Assistant

The Code Documentation Assistant is an AI-powered tool that helps generate documentation for code. It uses the OpenAI GPT-3.5 Turbo model to generate readme files in GitHub-flavored markdown based on the provided code.

Usage
Importing the Code Documentation Assistant:
import { draftReadme, writeReadme } from "code-doc-assistant";

Function: draftReadme
async function draftReadme(options: WriterOptions): Promise<string>
The draftReadme function generates a readme file based on the provided options.
Parameters:
- options (required): An object containing the following properties:
  - username (string): The username of the code owner.
  - valName (string): The name of the Val containing the code.
  - model (optional, default: "gpt-3.5-turbo"): The OpenAI model to use for generating the readme.
  - userPrompt (optional): Additional prompt to include in the documentation.
Return Value
A promise that resolves to a string representing the generated readme file.

Function: writeReadme
async function writeReadme(options: WriterOptions): Promise<string>
The writeReadme function generates a readme file and updates the readme of the corresponding Val with the generated content.
Parameters:
- options (required): An object containing the following properties:
  - username (string): The username of the code owner.
  - valName (string): The name of the Val containing the code.
  - model (optional, default: "gpt-3.5-turbo"): The OpenAI model to use for generating the readme.
  - userPrompt (optional): Additional prompt to include in the documentation.
Return Value
A promise that resolves to a string indicating the success of the readme update.
Example
import { draftReadme, writeReadme } from "code-doc-assistant";

const options = {
  username: "your-username",
  valName: "your-val-name",
};

const generatedReadme = await draftReadme(options);
console.log(generatedReadme);

const successMessage = await writeReadme(options);
console.log(successMessage);

License
This project is licensed under the MIT License.
Script
s an AI-powered tool that helps generate documentation for code. It uses the OpenAI GPT-3.5 Turbo model to generate readme fi
- `model` (optional, default: "gpt-3.5-turbo"): The OpenAI model to use for generating the readme.
- `model` (optional, default: "gpt-3.5-turbo"): The OpenAI model to use for generating the readme.
import OpenAI, { type ClientOptions } from "npm:openai";
async function performOpenAICall(prompt: string, model: string, openaiOptions: ClientOptions) {
const openai = new OpenAI(openaiOptions);
const response = await openai.chat.completions.create({
throw new Error("No response from OpenAI");
throw new Error("No readme returned by OpenAI. Try again.");
const { username, valName, model = "gpt-3.5-turbo", userPrompt, ...openaiOptions } = options;
const readme = await performOpenAICall(prompt, model, openaiOptions);
const { username, valName, model = "gpt-3.5-turbo", userPrompt, ...openaiOptions } = options;
const readme = await performOpenAICall(prompt, model, openaiOptions);
weatherGPT
@treb0r
If you fork this, you'll need to set OPENAI_API_KEY in your Val Town Secrets.
Cron
If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
import { fetch } from "https://esm.town/v/std/fetch";
import { OpenAI } from "npm:openai";
let location = "Halifax UK";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
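The val above fetches a forecast for a location and asks OpenAI to write it up. The prompt-building step can be illustrated as a pure function; `weatherPrompt` and the `Forecast` shape are assumptions for illustration, not the val's actual code:

```typescript
// Hypothetical prompt builder for a weatherGPT-style cron val.
interface Forecast {
  tempC: number;
  description: string;
}

export function weatherPrompt(location: string, forecast: Forecast): string {
  return `Write a short, cheerful weather report for ${location}: ` +
    `${forecast.tempC}°C, ${forecast.description}.`;
}
```

The resulting string would be passed as the user message in the `openai.chat.completions.create` call.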
aleister_chatley
@scio
An interactive, runnable TypeScript val by scio
Script
aleister_chatley_countdown = 8 + Math.floor(Math.random() * 6);
const { OpenAI } = await import("https://deno.land/x/openai/mod.ts");
const openAI = new OpenAI(process.env.OPENAI_KEY);
const chatCompletion = await openAI.createChatCompletion(
prompts.philip_k_dick,