Search

Results include substring matches and semantically similar vals.
ministerialLavenderSloth
@maxm
An interactive, runnable TypeScript val by maxm
HTTP (deprecated)
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
stream: true,
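The preview cuts off mid-call. A minimal sketch of how a val like this can pipe the stream back to the client, assuming Val Town's std/openai mirrors the npm openai streaming interface; the model and prompt are illustrative:

import { OpenAI } from "https://esm.town/v/std/openai";

export default async function (req: Request): Promise<Response> {
  const openai = new OpenAI();
  const stream = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    messages: [{ role: "user", content: "Say hello" }],
    stream: true,
  });

  // Re-emit each streamed delta to the HTTP response as plain text.
  const encoder = new TextEncoder();
  const body = new ReadableStream({
    async start(controller) {
      for await (const chunk of stream) {
        controller.enqueue(encoder.encode(chunk.choices[0]?.delta?.content ?? ""));
      }
      controller.close();
    },
  });
  return new Response(body, { headers: { "Content-Type": "text/plain; charset=utf-8" } });
}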
getOpenapiEmbedding
@wilt
Call the OpenAI Embeddings API to vectorize a query string. Returns an array of 1536 numbers.
Script
query: string;
}): Promise<number[]> =>
fetchJSON("https://api.openai.com/v1/embeddings", {
method: "POST",
headers: {
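A sketch of what the full helper likely looks like, using plain fetch instead of the val's fetchJSON helper; the OPENAI_API_KEY env var and the text-embedding-ada-002 model (which returns 1536-dimensional vectors) are assumptions:

// Assumed helper shape; the original passes the key and options slightly differently.
export const getOpenaiEmbedding = async ({ query }: { query: string }): Promise<number[]> => {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${Deno.env.get("OPENAI_API_KEY")}`,
    },
    body: JSON.stringify({
      model: "text-embedding-ada-002", // returns 1536-dimensional vectors
      input: query,
    }),
  });
  const json = await res.json();
  return json.data[0].embedding;
};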
langchainEx
@weaverwhale
An interactive, runnable TypeScript val by weaverwhale
Script
export const langchainEx = (async () => {
const { OpenAI } = await import("https://esm.sh/langchain/llms/openai");
const { PromptTemplate } = await import("https://esm.sh/langchain/prompts");
const { LLMChain } = await import("https://esm.sh/langchain/chains");
const model = new OpenAI({
temperature: 0.9,
openAIApiKey: process.env.openai,
maxTokens: 100,
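The truncated chain probably continues along the lines of the classic LangChain quickstart; the prompt text and chain invocation below are illustrative, not the val's actual code:

export const langchainExSketch = (async () => {
  const { OpenAI } = await import("https://esm.sh/langchain/llms/openai");
  const { PromptTemplate } = await import("https://esm.sh/langchain/prompts");
  const { LLMChain } = await import("https://esm.sh/langchain/chains");

  const model = new OpenAI({
    temperature: 0.9,
    openAIApiKey: process.env.openai, // the val reads the key from this env var
    maxTokens: 100,
  });
  const prompt = PromptTemplate.fromTemplate(
    "What is a good name for a company that makes {product}?",
  );
  const chain = new LLMChain({ llm: model, prompt });
  return await chain.call({ product: "colorful socks" }); // e.g. { text: "Socktastic!" }
})();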
gpt4FunctionCallingExample
@stevekrouse
// TODO pull out function call and initial message
Script
import { OpenAI } from "npm:openai";
const openai = new OpenAI();
const functionExpression = await openai.chat.completions.create({
"messages": [
let functionCallResult = { "temperature": "22", "unit": "celsius", "description": "Sunny" };
const result = await openai.chat.completions.create({
"messages": [
openAIHonoChatStreamExample
@yawnxyz
Example of using Hono to stream OpenAI's streamed chat responses!
HTTP (deprecated)
import { stream } from "https://deno.land/x/hono@v4.3.11/helper.ts";
import { OpenAI } from "npm:openai";
const app = new Hono();
const openai = new OpenAI();
app.use('*', cors({
const SOURCE_URL = ""; // leave blank for deno deploy / native
// const SOURCE_URL = "https://yawnxyz-openAIHonoChatStreamSample.web.val.run"; // valtown as generator - no SSE
// const SOURCE_URL = "https://funny-crow-81.deno.dev"; // deno deploy as generator
<meta charset="UTF-8" />
<title>OpenAI Streaming Example</title>
<style>
<body>
<h1>OpenAI Streaming Example</h1>
<label for="prompt">Prompt:</label>
try {
const chatStream = await openai.chat.completions.create({
model: "gpt-4",
blogPostEmbeddingsDimensionalityReduction
@janpaul123
An interactive, runnable TypeScript val by janpaul123
Script
import _ from "npm:lodash";
import OpenAI from "npm:openai";
export default async function blogPostEmbeddingsDimensionalityReduction() {
"chat server integration",
const openai = new OpenAI();
async function getEmbedding(str) {
return (await openai.embeddings.create({
model: "text-embedding-3-large",
langchainEx
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
export const langchainEx = (async () => {
const { OpenAI } = await import("https://esm.sh/langchain/llms/openai");
const { PromptTemplate } = await import("https://esm.sh/langchain/prompts");
const { LLMChain } = await import("https://esm.sh/langchain/chains");
const model = new OpenAI({
temperature: 0.9,
openAIApiKey: process.env.openai,
maxTokens: 100,
BlogChatbotServer
@weaverwhale
// This approach will create a chatbot using OpenAI's SDK with access to a blog's content stored in a JSON.
HTTP
// We'll use Hono for routing, OpenAI for the chatbot, and stream the responses to the client.
// The blog content will be stored as JSON and used as a tool in the chatbot's chain.
import { streamSSE } from "https://esm.sh/hono/streaming";
import { OpenAI } from "https://esm.town/v/std/openai";
const app = new Hono();
const { message } = await c.req.json();
const openai = new OpenAI();
return streamSSE(c, async (stream) => {
const blogSearchResults = searchBlogContent(message);
const completion = await openai.chat.completions.create({
model: "gpt-4-mini",
chat
@mxis
OpenAI ChatGPT helper function. This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free.
import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat("Hello, GPT!");
console.log(content);
import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat(
  [
    { role: "system", content: "You are Alan Kay" },
    { role: "user", content: "What is the real computer revolution?" },
  ],
  { max_tokens: 50, model: "gpt-4" },
);
console.log(content);
Script
# OpenAI ChatGPT helper function
This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free.
import { chat } from "https://esm.town/v/stevekrouse/openai";
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";
async function getOpenAI() {
if (Deno.env.get("OPENAI_API_KEY") === undefined) {
const { OpenAI } = await import("https://esm.town/v/std/openai");
return new OpenAI();
const { OpenAI } = await import("npm:openai");
return new OpenAI();
* Initiates a chat conversation with OpenAI's GPT model and retrieves the content of the first response.
const openai = await getOpenAI();
const completion = await openai.chat.completions.create(createParams);
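A sketch of the helper's probable shape, reconstructed from the readme and the preview; the default model and the options handling are assumptions:

async function getOpenAI() {
  if (Deno.env.get("OPENAI_API_KEY") === undefined) {
    // No key set: fall back to Val Town's free, limited std/openai client.
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    return new OpenAI();
  }
  const { OpenAI } = await import("npm:openai");
  return new OpenAI();
}

export async function chat(
  input: string | Array<{ role: "system" | "user" | "assistant"; content: string }>,
  options: { model?: string; max_tokens?: number } = {},
) {
  const messages = typeof input === "string" ? [{ role: "user" as const, content: input }] : input;
  const openai = await getOpenAI();
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // assumed default; the readme shows callers overriding it via options
    ...options,
    messages,
  });
  return completion.choices[0].message; // { role, content }
}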
openAiModeration
@patrickjm
Calls the OpenAI moderation model. Useful for determining if OpenAI will flag something you did. https://platform.openai.com/docs/api-reference/moderations
Script
import { fetchJSON } from "https://esm.town/v/stevekrouse/fetchJSON?v=41";
* Calls the OpenAI moderation model. Useful for determining if OpenAI will flag something you did.
* https://platform.openai.com/docs/api-reference/moderations
export let openAiModeration = async ({
apiKey,
if (!apiKey) {
throw new Error("You must provide an OpenAI API Key");
const body: { model?: string, input: string|string[] } = {
const result = await fetchJSON(
"https://api.openai.com/v1/moderations",
method: "POST",
oddTanRoundworm
@l2046a
An interactive, runnable TypeScript val by l2046a
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
status: 204,
const openai = new OpenAI();
try {
model: "gpt-4-turbo",
const stream = await openai.chat.completions.create(body);
if (!body.stream) {
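The preview suggests an OpenAI-style proxy that handles both streaming and non-streaming requests. A hedged sketch of that shape, assuming the request body mirrors a chat.completions payload:

import { OpenAI } from "https://esm.town/v/std/openai";

export default async function (req: Request): Promise<Response> {
  if (req.method === "OPTIONS") {
    return new Response(null, { status: 204 }); // CORS preflight, as the preview's 204 suggests
  }
  const body = await req.json(); // expected to look like a chat.completions request body
  const openai = new OpenAI();
  const result = await openai.chat.completions.create(body);

  if (!body.stream) {
    // Non-streaming: return the completion as JSON.
    return Response.json(result);
  }

  // Streaming: forward each delta as it arrives.
  const encoder = new TextEncoder();
  const readable = new ReadableStream({
    async start(controller) {
      for await (const chunk of result as AsyncIterable<any>) {
        controller.enqueue(encoder.encode(chunk.choices[0]?.delta?.content ?? ""));
      }
      controller.close();
    },
  });
  return new Response(readable, { headers: { "Content-Type": "text/plain; charset=utf-8" } });
}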
weatherGPT
@stevekrouse
Cron
import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";
let location = "brooklyn ny";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
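A sketch of the cron val's probable flow: fetch a forecast, summarize it with OpenAI, and email it with std/email. The wttr.in endpoint and prompt wording are assumptions; only the overall shape comes from the preview:

import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";

export default async function () {
  let location = "brooklyn ny";
  const weather = await fetch(`https://wttr.in/${encodeURIComponent(location)}?format=j1`)
    .then(r => r.json());

  const openai = new OpenAI();
  let chatCompletion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{
      role: "user",
      content: `Summarize today's weather in one friendly sentence: ${JSON.stringify(weather.current_condition)}`,
    }],
  });

  await email({
    subject: `Weather for ${location}`,
    text: chatCompletion.choices[0].message.content ?? "",
  });
}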
gpt4Example
@kyutarou
GPT4 Example. This uses the brand new gpt-4-1106-preview. To use this, set OPENAI_API_KEY in your Val Town Secrets.
Script
This uses the brand new `gpt-4-1106-preview`.
To use this, set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
import { OpenAI } from "npm:openai";
Deno.env.get("OPENAI_API_KEY");
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
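Completed, the example is only a few lines; the prompt is illustrative, and the explicit Deno.env.get call in the preview is optional since the npm client reads OPENAI_API_KEY from the environment on its own:

import { OpenAI } from "npm:openai";

// Passing the key explicitly is equivalent to letting the client read it from the environment.
const openai = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });

let chatCompletion = await openai.chat.completions.create({
  model: "gpt-4-1106-preview",
  messages: [{ role: "user", content: "Say hello in five languages." }],
});
console.log(chatCompletion.choices[0].message.content);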
instructorExample
@inkpotmonkey
Example copied from https://instructor-ai.github.io/instructor-js/#usage into val.town. You will need to fork this and properly set the apiKey and organisation for it to work.
Script
import Instructor from "https://esm.sh/@instructor-ai/instructor";
import OpenAI from "https://esm.sh/openai";
import { z } from "https://esm.sh/zod";
const openAISecrets = {
apiKey: getApiKey(),
organization: getOrganisationKey(),
const oai = new OpenAI(openAISecrets);
const client = Instructor({
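A sketch following the instructor-js usage docs the val was copied from; reading the key and organisation from env vars stands in for the val's getApiKey()/getOrganisationKey() helpers:

import Instructor from "https://esm.sh/@instructor-ai/instructor";
import OpenAI from "https://esm.sh/openai";
import { z } from "https://esm.sh/zod";

const openAISecrets = {
  apiKey: Deno.env.get("OPENAI_API_KEY"),
  organization: Deno.env.get("OPENAI_ORG"),
};
const oai = new OpenAI(openAISecrets);
const client = Instructor({ client: oai, mode: "FUNCTIONS" });

// Schema the model's answer must satisfy, as in the instructor-js docs.
const UserSchema = z.object({
  age: z.number().describe("The age of the user"),
  name: z.string(),
});

const user = await client.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Jason Liu is 30 years old" }],
  response_model: { schema: UserSchema, name: "User" },
});
console.log(user); // { age: 30, name: "Jason Liu" }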
conversationalQAChainEx
@jacoblee93
An interactive, runnable TypeScript val by jacoblee93
Script
const { ChatOpenAI } = await import(
"https://esm.sh/langchain/chat_models/openai"
const { OpenAIEmbeddings } = await import(
"https://esm.sh/langchain/embeddings/openai"
const gpt35 = new ChatOpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
const gpt4 = new ChatOpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
new OpenAIEmbeddings({
openAIApiKey: process.env.OPENAI_API_KEY,
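The preview hints at a conversational retrieval chain. A sketch of the likely shape, following the LangChain JS docs; the in-memory vector store, sample text, and question are illustrative:

const conversationalQAChainSketch = (async () => {
  const { ChatOpenAI } = await import("https://esm.sh/langchain/chat_models/openai");
  const { OpenAIEmbeddings } = await import("https://esm.sh/langchain/embeddings/openai");
  const { MemoryVectorStore } = await import("https://esm.sh/langchain/vectorstores/memory");
  const { ConversationalRetrievalQAChain } = await import("https://esm.sh/langchain/chains");

  const gpt4 = new ChatOpenAI({ openAIApiKey: process.env.OPENAI_API_KEY, modelName: "gpt-4" });
  const vectorStore = await MemoryVectorStore.fromTexts(
    ["Val Town lets you run TypeScript snippets in the cloud."],
    [{ id: 1 }],
    new OpenAIEmbeddings({ openAIApiKey: process.env.OPENAI_API_KEY }),
  );

  const chain = ConversationalRetrievalQAChain.fromLLM(gpt4, vectorStore.asRetriever());
  const res = await chain.call({ question: "What does Val Town do?", chat_history: [] });
  console.log(res.text);
})();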