Results include substring matches and semantically similar vals.
newChatGPT35
@bingo16
An interactive, runnable TypeScript val by bingo16
Script
const postData = {
  model: "gpt-3.5-turbo", // assumed; the model is not visible in this preview
  messages: [{ role: "user", content: "请介绍一下你自己" }], // "Please introduce yourself"
};
const getCompletion = async () => {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.openaiKey}`,
    },
    body: JSON.stringify(postData),
  });
  const data = await response.json();
  return data;
};
unserializeableLogEx
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
export let unserializeableLogEx = (async () => {
  const { Configuration, OpenAIApi } = await import("npm:openai");
  const configuration = new Configuration({
    apiKey: process.env.openai,
  });
  const openai = new OpenAIApi(configuration);
  console.log(openai);
})();
weatherGPT
@dantaeyoung
Cron
If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";
let location = "brooklyn ny";
  // (the weather fetch itself is truncated in this search preview)
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
resumeRecs
@iamseeley
An interactive, runnable TypeScript val by iamseeley
Script
if (!tokenBucket.consume()) {
  throw new Error("Rate limit reached. Please try again later.");
}
const endpoint = 'https://api.openai.com/v1/chat/completions';
const model = 'gpt-4';
const messages = [
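The `tokenBucket.consume()` guard in this preview implies a token-bucket rate limiter, though its implementation is not shown. A minimal sketch, assuming the conventional behavior (a burst capacity, steady refill, and `consume()` returning `false` when empty; the capacity and refill rate below are illustrative):

```typescript
// Token-bucket rate limiter sketch. Not the val's actual implementation.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  consume(n = 1): boolean {
    const now = Date.now();
    // Refill in proportion to elapsed time, never exceeding capacity.
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens < n) return false;
    this.tokens -= n;
    return true;
  }
}

const tokenBucket = new TokenBucket(10, 1); // 10-request burst, 1/sec sustained
```

With this shape, the guard in the excerpt rejects requests once the burst allowance is spent and recovers as tokens refill.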
movieMashup
@dthyresson
Movie Mashup: It's Blade Runner meets Pretty in Pink. OpenAI-generated movie mashup titles, taglines, and treatments. Fal-generated movie posters.
HTTP
longOliveGuppy
@sharanbabu
// This chatbot app will use a simple React frontend to display messages and allow user input.
HTTP
// This chatbot app will use a simple React frontend to display messages and allow user input.
// The backend will use OpenAI's GPT model to generate responses.
// We'll use SQLite to store conversation history.
/** @jsxImportSource https://esm.sh/react */
webgen
@jonataaroeira
To-dos:
* Spruce up styles a bit
* Write this README
* ~~Add a cache!~~
* ~~Try moving the style tag to the bottom by prompting so content appears immediately and then becomes styled~~ (didn't work b/c CSS parsing isn't progressive)
* Need more prompting to get the model not to generate placeholder-y content
* Better root URL page / index page with links to some good sample generations
HTTP
let pageDescriptionInstructions = "";
let pageResult = "";
// 2. Do one OpenAI inference to expand that URL to a longer page description
const pageDescriptionStream = await togetherAI.inference("mistralai/Mixtral-8x7B-Instruct-v0.1", {
prompt: `
sureVioletSlug
@arthrod
An interactive, runnable TypeScript val by arthrod
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
import { email } from "https://esm.town/v/std/email";
const openai = new OpenAI();
const AUTH_TOKEN = Deno.env.get("valtown");
async function determineIntent(content: string): Promise<string> {
const completion = await openai.chat.completions.create({
messages: [
multipleKeysAndMemoryConversationChainExample
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
export const multipleKeysAndMemoryConversationChainExample = (async () => {
  const { ChatOpenAI } = await import(
    "https://esm.sh/langchain/chat_models/openai"
  );
  const { BufferMemory } = await import("https://esm.sh/langchain/memory");
  const { ConversationChain } = await import("https://esm.sh/langchain/chains");
  const llm = new ChatOpenAI({
    modelName: "gpt-3.5-turbo",
    openAIApiKey: process.env.OPENAI_API_KEY,
    temperature: 0,
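The chain above wires a `BufferMemory` into a `ConversationChain`. LangChain's real classes do much more (serialization, prompt variables), but the core idea buffer memory contributes can be sketched as:

```typescript
// Illustration only, not LangChain's actual API: buffer memory stores prior
// turns and renders them as history text prepended to each new prompt.
class SimpleBufferMemory {
  private turns: { human: string; ai: string }[] = [];

  saveContext(human: string, ai: string): void {
    this.turns.push({ human, ai });
  }

  // Render stored turns the way a chain would splice them into its prompt.
  loadHistory(): string {
    return this.turns.map(t => `Human: ${t.human}\nAI: ${t.ai}`).join("\n");
  }
}

const memory = new SimpleBufferMemory();
memory.saveContext("Hi, I'm Jim.", "Nice to meet you, Jim!");
// memory.loadHistory() now begins with "Human: Hi, I'm Jim."
```

Because each call to the chain sees the rendered history, the model can answer follow-up questions that refer back to earlier turns.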
sendSassyEmail
@transcendr
// Configuration object for customizing the sassy message generation
Cron
import { email } from "https://esm.town/v/std/email";
import { OpenAI } from "https://esm.town/v/std/openai";

// Configuration object for customizing the sassy message generation
const CONFIG = {
  subject: "Daily Sass Delivery 🔥",
  // Prompt configuration for OpenAI
  prompts: { /* styles array truncated in this preview */ },
};

export default async function() {
  const openai = new OpenAI();
  // Randomly select a prompt style
  const styleIndex = Math.floor(Math.random() * CONFIG.prompts.styles.length);
  const completion = await openai.chat.completions.create({
    messages: [
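The index expression in the excerpt, `Math.floor(Math.random() * CONFIG.prompts.styles.length)`, is the standard uniform random pick. A self-contained sketch with an injectable random source (the style names below are assumed, not taken from the val):

```typescript
// Pick a prompt style uniformly at random. `rand` is injectable so the
// selection can be tested deterministically; the style list is illustrative.
const STYLES = ["sarcastic", "dramatic", "deadpan"];

function pickStyle(styles: string[], rand: () => number = Math.random): string {
  // rand() is in [0, 1), so the floored product is always a valid index.
  return styles[Math.floor(rand() * styles.length)];
}

pickStyle(STYLES); // one of the three styles above
```

Injecting `rand` is a small design choice that makes randomized selection unit-testable without stubbing the global `Math.random`.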
VALLE
@jxnblk
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it.
* Fork this val to your own profile.
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models, you need to set the `OPENAI_API_KEY` env var.
* If you want to use Anthropic models, you need to set the `ANTHROPIC_API_KEY` env var.
* Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP
healthtech4africa
@thomaskangah
AI Mental health support app for precision neuroscience
HTTP
export default async function server(request: Request): Promise<Response> {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const { sqlite } = await import("https://esm.town/v/stevekrouse/sqlite");
const openai = new OpenAI();
const SCHEMA_VERSION = 2;
systemMessage = "You are an AI mental health therapist providing follow-up support for a Kenyan youth who may be experi
const completion = await openai.chat.completions.create({
messages: [
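The `SCHEMA_VERSION = 2` constant in the excerpt suggests versioned schema migrations. A minimal sketch of that pattern (the table statements are hypothetical placeholders, and the val's actual storage is the imported sqlite, not this in-memory stand-in):

```typescript
// Versioned-migration sketch: apply every migration newer than the stored
// version, in order. Statements are illustrative, not the val's schema.
const SCHEMA_VERSION = 2;

const migrations: Record<number, string> = {
  1: "CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT)",
  2: "ALTER TABLE messages ADD COLUMN role TEXT",
};

function pendingMigrations(storedVersion: number): string[] {
  const out: string[] = [];
  for (let v = storedVersion + 1; v <= SCHEMA_VERSION; v++) {
    if (migrations[v]) out.push(migrations[v]);
  }
  return out;
}

pendingMigrations(0).length; // a fresh database runs both migrations
pendingMigrations(2).length; // an up-to-date database runs none
```

Bumping `SCHEMA_VERSION` and appending a new statement is then the whole upgrade path; existing databases only run the statements they are missing.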
chat
@chatgpt
// Forked from @webup.chat
Script
// signature reconstructed from the truncated preview
export const chat = async (prompt, options = {}) => {
  // Initialize OpenAI API stub
  const { Configuration, OpenAIApi } = await import("https://esm.sh/openai");
  const configuration = new Configuration({
    apiKey: process.env.OPENAI,
  });
  const openai = new OpenAIApi(configuration);
  // Request chat completion
  const messages = typeof prompt === "string"
    ? [{ role: "user", content: prompt }] // reconstructed; the preview shows only ": prompt"
    : prompt;
  const { data } = await openai.createChatCompletion({
    model: "gpt-3.5-turbo-0613",
reasoninghelper
@prashamtrivedi
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
const { OpenAI } = await import("npm:openai");
const { zodResponseFormat } = await import("npm:openai/helpers/zod");
const openai = new OpenAI();
console.log("OpenAI API Request:", { prompt, topic });
completion = await openai.beta.chat.completions.parse({
console.log("OpenAI API Response:", response);
console.error("Error in OpenAI API call:", error);
Test00_getModelBuilder
@lisazz
241219 practice, adapted from webup
Script
provider?: "openai" | "huggingface";
} = { type: "llm", provider: "openai" }, options?: any) {
matches({ type: "llm", provider: "openai" }),
const { OpenAI } = await import("https://esm.sh/langchain/llms/openai");
return new OpenAI(args);
matches({ type: "chat", provider: "openai" }),
const { ChatOpenAI } = await import("https://esm.sh/langchain/chat_models/openai");
return new ChatOpenAI(args);
matches({ type: "embedding", provider: "openai" }),
const { OpenAIEmbeddings } = await import("https://esm.sh/langchain/embeddings/openai");
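The excerpt above dispatches on a `{ type, provider }` spec to decide which LangChain class to import. The dispatch itself can be sketched without the imports (the returned strings below are stand-ins for the real constructors, and `matches` is a simplified partial-match helper, not the original's):

```typescript
// Match-based dispatch: compare a spec against partial patterns in order
// and take the first arm that fits. Mirrors the excerpt's structure only.
type Spec = { type: "llm" | "chat" | "embedding"; provider: "openai" | "huggingface" };

function matches(spec: Spec, pattern: Partial<Spec>): boolean {
  return (Object.keys(pattern) as (keyof Spec)[]).every(k => spec[k] === pattern[k]);
}

function getModelBuilder(spec: Spec = { type: "llm", provider: "openai" }): string {
  if (matches(spec, { type: "llm", provider: "openai" })) return "OpenAI";
  if (matches(spec, { type: "chat", provider: "openai" })) return "ChatOpenAI";
  if (matches(spec, { type: "embedding", provider: "openai" })) return "OpenAIEmbeddings";
  throw new Error(`no builder for ${spec.type}/${spec.provider}`);
}

getModelBuilder({ type: "chat", provider: "openai" }); // "ChatOpenAI"
```

Keeping each arm's dynamic `import()` inside its branch, as the original does, means only the selected provider's module is ever fetched.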