Search

Results include substring matches and semantically similar vals.
adeptSalmonOx
@Aizen
An interactive, runnable TypeScript val by Aizen
Script
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    // ...truncated in the search snippet
  ],
});
beigeEarthworm
@canglangdahai
An interactive, runnable TypeScript val by canglangdahai
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";

export default async function(req: Request): Promise<Response> {
  // CORS preflight
  if (req.method === "OPTIONS") {
    return new Response(null, { status: 204 });
  }
  const openai = new OpenAI();
  try {
    const body = await req.json(); // body is assumed to carry the chat options, e.g. model: "gpt-4-turbo"
    const stream = await openai.chat.completions.create(body);
    if (!body.stream) {
      // ...non-streaming branch truncated in the search snippet
    }
    // ...streaming branch truncated in the search snippet
  } catch {
    // ...error handling truncated in the search snippet
  }
}
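A proxy val like this one relays OpenAI's streamed chunks back to the browser, typically framed as server-sent events. A minimal sketch of the framing step; the `toSSE` name and the simplified chunk shape are illustrative, not taken from this val:

```typescript
// Format one streamed chunk as a server-sent event line.
// The chunk shape is a simplified stand-in for OpenAI's delta objects.
function toSSE(chunk: { content: string }): string {
  return `data: ${JSON.stringify(chunk)}\n\n`;
}
```

Each event is a `data:` line followed by a blank line, which is what `EventSource` and most SSE clients expect.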
api
@uuz
An interactive, runnable TypeScript val by uuz
Cron
// `gpt3` comes from another val; its import line is truncated in the search snippet
export let api = async ({ prompt = "讲一个笑话" /* "Tell a joke" */ }) =>
  gpt3({
    openAiKey: process.env.openai_key,
    prompt,
  }).then((result) => result.split("\n"));
model
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
import { ChatOpenAI } from "langchain/chat_models/openai";
const model = new ChatOpenAI({
  temperature: 0.9,
  openAIApiKey: @me.secrets.OPENAI_API_KEY, // legacy Val Town secrets syntax
});
// ...the enclosing function is truncated in the search snippet
return model.invoke("What is your name?");
gptExample
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import process from "node:process";
import { OpenAI } from "npm:openai";
const openai = new OpenAI({ apiKey: process.env.openai });
let chatCompletion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Make a short joke or pun" }],
  // ...remaining options truncated in the search snippet
});
translator
@AIWB
Press to talk, and get a translation! The app is set up so you can easily have a conversation between two people. The app will translate between the two selected languages, in each voice, as the speakers talk. Add your OpenAI API key, and make sure to open the app in a separate window for the mic to work.
HTTP
import { OpenAI } from "npm:openai";
const openai = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY_VOICE") });

// Speech-to-text; failures are logged via console.error("OpenAI API error:", error)
const transcription = await openai.audio.transcriptions.create({
  // ...truncated in the search snippet
});

// Helper function to get the supported MIME type
function getSupportedMimeType() {
  // ...truncated in the search snippet
}

// Translate the transcript with a chat completion
const response = await openai.chat.completions.create({
  // ...truncated in the search snippet
});

// Text-to-speech for the translated reply
const mp3 = await openai.audio.speech.create({
  // ...truncated in the search snippet
});
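The `getSupportedMimeType` helper has to pick a recording format the current browser supports. A browser-free sketch of that selection logic, with `isSupported` injected as a stand-in for `MediaRecorder.isTypeSupported` (both names here are illustrative, not from the val):

```typescript
// Return the first candidate MIME type the environment supports.
// isSupported is injected so the logic can run outside a browser;
// in the val it would be MediaRecorder.isTypeSupported.
function pickMimeType(
  candidates: string[],
  isSupported: (type: string) => boolean,
): string | undefined {
  return candidates.find(isSupported);
}
```

Typical candidates would be `"audio/webm"` first for Chrome and Firefox, then `"audio/mp4"` for Safari.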
aiHonoHtmxAlpineStreamingExample
@yawnxyz
This Frankenstein of an example shows how well Hono, htmx, and Alpine play together:
- Hono serves the frameworks, API calls, and functions
- htmx handles ajax requests, and can very powerfully request HTML and other content to swap out the front-end
- Alpine handles app-like reactivity without having to always resort to server round trips
HTTP
import { stream, streamSSE } from "https://deno.land/x/hono@v4.3.11/helper.ts";
import { OpenAI } from "npm:openai";
import { ai } from "https://esm.town/v/yawnxyz/ai";
// (the Hono and cors imports are truncated in the search snippet)

const app = new Hono();
const openai = new OpenAI();
app.use('*', cors({
  // ...options truncated in the search snippet
}));

const SOURCE_URL = ""; // leave blank for deno deploy / native

// in this example we use a custom function instead of htmx's custom sse extension, since it never closes!
// Client-side script, serialized into the served page:
function getResponse({ prompt }) {
  const outputDiv = document.getElementById('output');
  // ...streaming fetch truncated in the search snippet
}
// Embedded in the page as: <script>${getResponse}</script>
openaiStreamingDemo
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
HTTP
import OpenAI from "npm:openai";
const openai = new OpenAI();
export default async (req) => {
  const stream = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    // ...messages and streaming options truncated in the search snippet
  });
  // ...stream relayed to the Response (truncated in the search snippet)
};
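With `stream: true`, the openai package returns an async iterator of chunks whose `delta.content` fragments concatenate into the full reply. A sketch of that accumulation step; `drain` is an illustrative name and `Chunk` a simplified type, neither is part of the openai package:

```typescript
type Chunk = { choices: { delta: { content?: string } }[] };

// Concatenate the delta fragments of a streamed chat completion.
async function drain(stream: AsyncIterable<Chunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? "";
  }
  return text;
}
```

A real streaming demo would write each fragment into the Response body as it arrives instead of buffering the whole string.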
aiImageExample
@yawnxyz
// Function to handle image and text input using OpenAI's GPT-4-turbo
Script
import { z } from "npm:zod";

// Function to handle image and text input using OpenAI's GPT-4-turbo
async function handleImageChat() {
  const initialMessages = [
    // ...truncated in the search snippet
  ];
  const response = await complete({ // `complete` is a placeholder; the actual call is truncated in the snippet
    model: "gpt-4-turbo",
    provider: "openai",
    messages: [
      // ...truncated in the search snippet
    ],
  });
  console.log("Generated Response:", JSON.stringify(response, null, 2));
}

// Run the function
await handleImageChat();
generateFunction
@wolf
An interactive, runnable TypeScript val by wolf
Script
import { OpenAI } from "https://esm.town/v/std/openai";

function extractCode(str: string): string {
  // ...body truncated in the search snippet
}

export async function generateFunction(
  functionName: string,
  // ...remaining parameters (including `parameters`) truncated in the search snippet
) {
  if (!functionName) {
    return "Please provide a function name";
  }
  const openai = new OpenAI();
  const prompt =
    `Generate a TypeScript function named "${functionName}" with the following parameters: ${parameters}. ONLY RETURN VALID J...`; // truncated in the search snippet
  const completion = await openai.chat.completions.create({
    // system message (truncated): "You are a helpful assistant that generates JAVASCRIPT functions. Be fuzzy with typing since you do not know what t..."
  });
}
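The snippet's `extractCode` body is truncated, but helpers like it usually strip the markdown fence the model wraps generated code in. A hedged sketch of one way to do it; the regex and the plain-text fallback are assumptions, not wolf's actual implementation:

```typescript
// Pull the body of the first fenced code block out of a model reply;
// fall back to the whole reply when no fence is present.
function extractCode(str: string): string {
  const match = str.match(/`{3}\w*\n?([\s\S]*?)`{3}/);
  return match ? match[1].trim() : str.trim();
}
```

The fallback matters because models sometimes return bare code despite being told to use a fence.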
gpt4TurboExample
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import { OpenAI } from "npm:openai";
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Teach me a word I don't know" }],
  // ...remaining options truncated in the search snippet
});
ai_bounty_finder
@thatsmeadarsh
// define the apps you want to use
Script
import readline from "node:readline";
import { createOpenAI } from "npm:@ai-sdk/openai";
import { type CoreMessage, streamText } from "npm:ai";

const messages: CoreMessage[] = [];
const openai = createOpenAI({
  apiKey: Deno.env.get("OPENAI_API_KEY") || "",
});

// define the apps you want to use
// ...tool definitions truncated in the search snippet

const result = streamText({
  model: openai("gpt-4o"),
  tools,
  // ...remaining options truncated in the search snippet
});
aigreeting
@victorli
An interactive, runnable TypeScript val by victorli
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    messages: [
      // ...truncated in the search snippet
    ],
  });
  // ...Response construction truncated
}
openAIHonoChatStreamExample
@yawnxyz
Example of using Hono to stream OpenAI's streamed chat responses!
HTTP
import { stream } from "https://deno.land/x/hono@v4.3.11/helper.ts";
import { OpenAI } from "npm:openai";
// (the Hono and cors imports are truncated in the search snippet)

const app = new Hono();
const openai = new OpenAI();
app.use('*', cors({
  // ...options truncated in the search snippet
}));

const SOURCE_URL = ""; // leave blank for deno deploy / native
// const SOURCE_URL = "https://yawnxyz-openAIHonoChatStreamSample.web.val.run"; // valtown as generator - no SSE
// const SOURCE_URL = "https://funny-crow-81.deno.dev"; // deno deploy as generator

// Served page (abridged):
//   <meta charset="UTF-8" />
//   <title>OpenAI Streaming Example</title>
//   <h1>OpenAI Streaming Example</h1>
//   <label for="prompt">Prompt:</label>

// Client-side script (abridged):
function getResponse() {
  const prompt = document.getElementById('prompt').value;
  const outputDiv = document.getElementById('output');
  // ...streaming fetch truncated in the search snippet
}
function testStream() {
  const url = "${SOURCE_URL}/chat/test";
  // ...truncated in the search snippet
}

// Server-side chat handler:
try {
  const chatStream = await openai.chat.completions.create({
    model: "gpt-4",
    // ...messages and streaming options truncated in the search snippet
  });
} catch {
  // ...truncated in the search snippet
}
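On the receiving end, a client function like `testStream` has to split the event stream back into its data payloads. A sketch of that parsing under a simple one-line `data: ...` framing assumption; the function name is illustrative, and real SSE parsing also handles multi-line data, comments, and `event:` fields:

```typescript
// Split a server-sent-events payload into its data strings.
// Assumes each event is a single "data: ..." line followed by a blank line.
function parseSSE(payload: string): string[] {
  return payload
    .split("\n\n")
    .filter((block) => block.startsWith("data: "))
    .map((block) => block.slice("data: ".length));
}
```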
utmostBlackChicken
@royshaon
An interactive, runnable TypeScript val by royshaon
Script
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    // ...truncated in the search snippet
  ],
});