chat
@steveb1313
An interactive, runnable TypeScript val by steveb1313
Script
// Chat helper val; the wrapper signature below is reconstructed around the
// visible fragments and may not match the original exactly.
export const chat = async (prompt, options = {}) => {
  // Initialize OpenAI API stub
  const { Configuration, OpenAIApi } = await import("https://esm.sh/openai");
  const configuration = new Configuration({
    apiKey: process.env.openAIAPI,
  });
  const openai = new OpenAIApi(configuration);
  // Request chat completion
  const messages = typeof prompt === "string"
    ? [{ role: "user", content: prompt }]
    : prompt;
  const { data } = await openai.createChatCompletion({
    messages,
    model: "gpt-3.5-turbo-0613",
    ...options,
  });
  const message = data.choices[0].message;
  return message.function_call ? message.function_call : message.content;
};
conversationalRetrievalQAChainStreamingExample
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
const { ChatOpenAI } = await import(
  "https://esm.sh/langchain/chat_models/openai"
);
const { OpenAIEmbeddings } = await import(
  "https://esm.sh/langchain/embeddings/openai"
);
// Streaming vs. non-streaming chat models (the streaming flag is assumed from the names)
const streamingModel = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
  streaming: true,
});
const nonStreamingModel = new ChatOpenAI({
  openAIApiKey: process.env.OPENAI_API_KEY,
});
// embeddings instance passed into the vector store (call site not shown in this snippet)
new OpenAIEmbeddings({
  openAIApiKey: process.env.OPENAI_API_KEY,
}),
emailValHandlerNo
@martinbowling
Email AI Assistant
Chat with your favorite AI via email (with enhanced attachment and content support)

What It Does
This advanced email AI assistant allows you to:
- Send emails to an AI for comprehensive analysis and responses
- Automatically transform your queries into structured research objectives
- Parse and analyze various types of content: PDF attachments, image attachments (using GPT-4 Vision), and website content from links in your email
- Get detailed, context-aware responses directly to your inbox

Setup Guide
1. Copy this Val and save it as an Email Val (choose Val type in the top-right corner of the editor)
2. Set up the required environment variables:
   - OPENAI_API_KEY: Your OpenAI API key
   - MD_API_KEY: Your API key for the markdown extraction service
   You can set these using Val Town's environment variables: https://docs.val.town/reference/environment-variables/
3. Copy the email address of the Val (click 3 dots in top-right > Copy > Copy email address)
4. Compose your email:
   - Write your query or request in the email body
   - Attach any relevant PDFs or images
   - Include links to websites you want analyzed
5. Send it to the Val email address
6. Wait for the AI's response, which will arrive in your inbox shortly

How to Use Effectively
- Be clear and specific in your queries
- Provide context when necessary
- Utilize attachments and links to give the AI more information to work with
- The AI will transform your query into a structured research objective, so even simple questions may yield comprehensive answers

Supported File Types and Limitations
- PDFs: text content will be extracted and analyzed
- Images: will be analyzed using the GPT-4 Vision API
- Websites: content will be extracted and converted to markdown for analysis
- Other file types are not currently supported and will be ignored

Note: There may be size limitations for attachments, and processing times may vary based on the complexity of the content. The AI uses advanced prompt transformation to enhance your queries, providing more detailed and structured responses. This process helps in generating comprehensive and relevant answers to your questions.
Email
- OPENAI_API_KEY: Your OpenAI API key
- MD_API_KEY: Your API key for the markdown extraction service
// Main controller function
export default async function emailValHandler(receivedEmail) {
  const openaiUrl = "https://api.openai.com/v1/chat/completions";
  const openaiKey = Deno.env.get("OPENAI_API_KEY");
  if (!openaiKey) {
    throw new Error("OPENAI_API_KEY environment variable is not set.");
  }
  const transformedPrompt = await transformPrompt(receivedEmail.text, openaiUrl, openaiKey, model);
  const { pdfTexts, imageAnalysis } = await processAttachments(attachments, openaiKey, transformedPrompt);
  // Step 5: Send to OpenAI and get response
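The helper functions referenced above are cut from this snippet; as a rough sketch, a transformPrompt step could simply wrap the Chat Completions REST endpoint along these lines (the system prompt and option handling are illustrative, not the val's actual code):

```ts
// Hypothetical transformPrompt helper; the real val's prompt and options differ.
async function transformPrompt(
  text: string,
  openaiUrl: string,
  openaiKey: string,
  model: string,
): Promise<string> {
  const res = await fetch(openaiUrl, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${openaiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [
        { role: "system", content: "Rewrite the user's email into a structured research objective." },
        { role: "user", content: text },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```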
elevenlabsTTS
@ale_annini
An interactive, runnable TypeScript val by ale_annini
Script
import process from "node:process";

export const elevenlabsTTS = async (req, res) => {
  // https://platform.openai.com/docs/api-reference/images/create
  // https://ale_annini-elevenlabstts.express.val.run/?args=[%22{\%22text\%22:\%22it%20beautiful\%22}%22]
  const payload = {
jarvisPrototype
@pashaabhi
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
import React, { useEffect, useRef, useState } from "https://esm.sh/react@18.2.0";
// createRoot comes from react-dom; the original import line is cut from this snippet.
import { createRoot } from "https://esm.sh/react-dom@18.2.0/client";

function App() {
  const [conversation, setConversation] = useState<{ role: string; content: string }[]>([]);
  // ...chat UI omitted...
}

function client() {
  createRoot(document.getElementById("root")).render(<App />);
}
if (typeof document !== "undefined") { client(); }

export default async function server(request: Request): Promise<Response> {
  if (request.method === "POST") {
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    const openai = new OpenAI();
    const body = await request.json();
    const messages = body.messages || [];
    const completion = await openai.chat.completions.create({
      messages: messages,
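Since the server branch above answers POST requests carrying a messages array, a client call to this HTTP val might look roughly like this (URL and response shape are placeholders):

```ts
// Hypothetical client call; replace the URL with the val's HTTP endpoint.
const res = await fetch("https://<your-username>-jarvisprototype.web.val.run", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ messages: [{ role: "user", content: "Hello, Jarvis" }] }),
});
const data = await res.json(); // response shape depends on what server() returns
```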
api
@pomdtr
An interactive, runnable TypeScript val by pomdtr
Script
import { API_URL } from "https://esm.town/v/std/API_URL";
export async function api<T = any>(
  path: string,
  options?: RequestInit & {
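A rough usage sketch of the typed helper (the endpoint, response type, and token handling below are assumptions for illustration, not taken from this val):

```ts
// Hypothetical call through api<T>(); endpoint and auth header are illustrative.
import { api } from "https://esm.town/v/pomdtr/api";

const me = await api<{ username: string }>("/v1/me", {
  headers: { Authorization: `Bearer ${Deno.env.get("valtown")}` },
});
console.log(me.username);
```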
SpeakEnglishToMe_bot
@ynonp
An interactive, runnable TypeScript val by ynonp
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
import { telegramSendMessage } from "https://esm.town/v/vtdocs/telegramSendMessage?v=5";
import translateToEnglishWithOpenAI from "https://esm.town/v/ynonp/translateToEnglishWithOpenAI";

export const telegramWebhookEchoMessage = async (req: Request) => {
  // Parse the Telegram webhook payload (these two lines are cut from the original snippet)
  const body = await req.json();
  const text: string = body.message.text;
  const chatId: number = body.message.chat.id;
  const translated = await translateToEnglishWithOpenAI(text);
  await telegramSendMessage(Deno.env.get("TELEGRAM_BOT_TOKEN"), { chat_id: chatId, text: translated });
  // Respond with 200 so Telegram stops retrying the update
  return new Response("OK");
};
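For the webhook to fire, the bot has to be pointed at this val's HTTP endpoint, e.g. with Telegram's setWebhook method (the val URL below is a placeholder):

```ts
// One-off webhook registration; replace the url with your val's HTTP endpoint.
await fetch(
  `https://api.telegram.org/bot${Deno.env.get("TELEGRAM_BOT_TOKEN")}/setWebhook`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: "https://<your-username>-speakenglishtome_bot.web.val.run" }),
  },
);
```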
audioManager
@yawnxyz
Usage:
import { ai } from "https://esm.town/v/yawnxyz/ai";
import { AudioManager } from "https://esm.town/v/yawnxyz/audioManager";

let audio = new AudioManager();
let joke = await ai("tell me a joke in chinese!");
console.log('text', joke);
let result = await audio.textToSpeechUpload(joke, { key: "random-joke.mp3" });
console.log('result:', result);
Script
import { OpenAI } from "https://esm.town/v/yawnxyz/OpenAI";
constructor(apiKey=null, uploadFunction = null, downloadFunction = null) {
this.openai = new OpenAI(apiKey);
this.uploadFunction = uploadFunction || this.blobUpload;
// valtown blob upload function
const transcription = await this.openai.audio.transcriptions.create(mergedOptions);
const translation = await this.openai.audio.translations.create(mergedOptions);
// returns an openai speech object
const speech = await this.openai.audio.speech.create(mergedOptions);
const speech = await this.openai.audio.speech.create(mergedOptions);
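Since the class wraps openai.audio as shown above, a transcription call through it might look roughly like this (the method and option names are assumptions; only textToSpeechUpload appears in the usage above):

```ts
// Hypothetical sketch; assumes AudioManager exposes a method that forwards its
// options to openai.audio.transcriptions.create.
import { AudioManager } from "https://esm.town/v/yawnxyz/audioManager";

const audio = new AudioManager();
const file = new File([await Deno.readFile("meeting.mp3")], "meeting.mp3");
const transcription = await audio.transcribe({ file, model: "whisper-1" });
console.log(transcription.text);
```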
ask_ai
@pomdtr
Ask gpt to update a val on your behalf Usage import { askAI } from "https://esm.town/v/pomdtr/ask_ai";
await askAI(`Add jsdoc comments for each exported function of the val @pomdtr/askAi`);
Script
await askAI(`Add jsdoc comments for each exported function of the val @pomdtr/askAi`);
import { OpenAI } from "https://esm.town/v/std/OpenAI";

// Helpers the model can call on the user's behalf:
async function getValByAlias({ author, name }: { author: string; name: string }) {
async function updateValCode({ id, code }: { id: string; code: string }) {
async function sendEmail({ subject, text }: { subject: string; text: string }) {

// From the system prompt given to the model:
// "Instead, use the provided functions to accomplish your task on the behalf of the user.
//  For example, if you need to modify the code of a val, use the updateValCode function."

// Fragment of the tool/type definition used to register each helper:
type: "function";
function: {
function: (...args: any[]) => any;

export function askAI(content: string) {
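The fragments above only hint at how the helpers are exposed to the model; a minimal sketch of that wiring with the Chat Completions tools format (helper names mirror the fragments, the rest, including the model name, is assumed):

```ts
// Minimal sketch, not the val's actual code: register updateValCode as a tool
// and let the model decide when to call it.
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Add jsdoc comments to @pomdtr/askAi" }],
  tools: [
    {
      type: "function",
      function: {
        name: "updateValCode",
        description: "Replace the code of a val",
        parameters: {
          type: "object",
          properties: {
            id: { type: "string" },
            code: { type: "string" },
          },
          required: ["id", "code"],
        },
      },
    },
  ],
});
const toolCall = completion.choices[0].message.tool_calls?.[0];
```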
autoGPT_Test2
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
export let autoGPT_Test2 = (async () => {
  const { Configuration, OpenAIApi } = await import("npm:openai");
  const configuration = new Configuration({
    apiKey: process.env.openai,
  });
  const openai = new OpenAIApi(configuration);
  const completion = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
getOpenapiEmbedding
@wilt
* Call the OpenAI Embeddings API to vectorize a query string
* Returns an array of 1536 numbers
Script
  query: string;
}): Promise<number[]> =>
  fetchJSON("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
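A rough usage sketch of the helper (the token parameter's name isn't visible in this snippet, so the one used here is an assumption):

```ts
// Hypothetical call; the openapiToken parameter name and import URL are assumptions.
import { getOpenapiEmbedding } from "https://esm.town/v/wilt/getOpenapiEmbedding";

const vector = await getOpenapiEmbedding({
  openapiToken: Deno.env.get("OPENAI_API_KEY")!,
  query: "How do I schedule a cron val?",
});
console.log(vector.length); // 1536 dimensions for text-embedding-ada-002
```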
katakanaWordApi
@jdan
An interactive, runnable TypeScript val by jdan
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function(request: Request): Promise<Response> {
  try {
    const openai = new OpenAI();
    const completion = await openai.chat.completions.create({
      messages: [
chat
@chatgpt
// Forked from @webup.chat
Script
// Chat helper forked from @webup.chat; the wrapper signature below is
// reconstructed around the visible fragments and may not match the original exactly.
export const chat = async (prompt, options = {}) => {
  // Initialize OpenAI API stub
  const { Configuration, OpenAIApi } = await import("https://esm.sh/openai");
  const configuration = new Configuration({
    apiKey: process.env.OPENAI,
  });
  const openai = new OpenAIApi(configuration);
  // Request chat completion
  const messages = typeof prompt === "string"
    ? [{ role: "user", content: prompt }]
    : prompt;
  const { data } = await openai.createChatCompletion({
    messages,
    model: "gpt-3.5-turbo-0613",
    ...options,
  });
  const message = data.choices[0].message;
  return message.function_call ? message.function_call : message.content;
};
// Forked from @webup.chat
serveUtils
@g
Serve Utils
This val exports various utility functions, mainly serve(commentFunction, contentType?).
It enables easy serving of different files, while allowing the use of all string characters:
// this is possible
const arr = ["Hello", 'world', `!`];

Townie Prompt
This can be used as a replacement system prompt for Townie. Townie will:
- Write client-side applications with vanilla JavaScript
- Serve them as different assets, i.e. index.html, style.css and main.js
- Use modern module syntax, including importing modules directly from esm.sh on the client side
- Not reuse the same script for the server and client logic
IMPORTANT: Due to val.town README restrictions, the custom prompt can now be found here.
Script
# Serve Utils
This val exports various utility functions, mainly the `serve(commentFunction, contentType?)`.<br>
It enables easy serving of different files, while allowing the use of all string characters:
export function getFunctionComment(fn) {
  try {
    // comment-extraction logic omitted in this snippet
  } catch (err) {
    console.error(`Failed to get function comment: ${err.message}\n${fn.toString()}`);
    throw err;
  }
}
export function serve(fn, contentType = 'text/plain') {
  return (ctx) => {
    return new Response(getFunctionComment(fn), {
      headers: {
        'Content-Type': contentType,
      },
    });
  };
}
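Putting the two exports together, a hedged usage sketch (assuming serve() responds with the block comment inside the function it is given, as getFunctionComment suggests; the import URL and comment format are assumptions):

```ts
// Hypothetical usage: the HTML lives in a block comment inside the function
// passed to serve(), which returns an HTTP handler serving that comment body.
import { serve } from "https://esm.town/v/g/serveUtils";

function index() {
  /*
  <!DOCTYPE html>
  <html>
    <body>
      <h1>Hello from a comment!</h1>
    </body>
  </html>
  */
}

export default serve(index, "text/html");
```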
chat
@andreterron
OpenAI ChatGPT helper function
This val uses your OpenAI token if you have one, and @std/openai if not, so it provides limited OpenAI usage for free.

import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat("Hello, GPT!");
console.log(content);

import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat(
[
{ role: "system", content: "You are Alan Kay" },
{ role: "user", content: "What is the real computer revolution?"}
],
{ max_tokens: 50, model: "gpt-4" }
);
console.log(content);
Script
# OpenAI ChatGPT helper function
This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free.
import { chat } from "https://esm.town/v/stevekrouse/openai";
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";
async function getOpenAI() {
  // Fall back to the free std/openai client when no personal key is set
  if (Deno.env.get("OPENAI_API_KEY") === undefined) {
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    return new OpenAI();
  }
  const { OpenAI } = await import("npm:openai");
  return new OpenAI();
}
/**
 * Initiates a chat conversation with OpenAI's GPT model and retrieves the content of the first response.
 * This function can handle both single string inputs and arrays of message objects.
 */
export async function chat(
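The signature above is cut off; a minimal sketch of how the full helper might look, given the description and the getOpenAI fallback (defaults and option handling here are assumptions):

```ts
// Minimal sketch, not the val's exact code: normalize the prompt, pick a client
// via getOpenAI(), and return the completion plus its first message's content.
export async function chat(
  prompt: string | Message[],
  options?: Omit<ChatCompletionCreateParamsNonStreaming, "messages">,
): Promise<ChatCompletion & { content: string }> {
  const openai = await getOpenAI();
  const messages = typeof prompt === "string"
    ? [{ role: "user" as const, content: prompt }]
    : prompt;
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    ...options,
    messages,
  });
  return { ...completion, content: completion.choices[0].message.content ?? "" };
}
```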