Search

Results include substring matches and semantically similar vals.
OpenAI
@pattysi
OpenAI - [Docs ↗](https://docs.val.town/std/openai)

Use OpenAI's chat completion API with [`std/openai`](https://www.val.town/v/std/openai). This integration enables access to OpenAI's language models without needing to acquire API keys.

Streaming is not yet supported. Upvote the HTTP response streaming feature request if you need it!

Usage:

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4",
  max_tokens: 30,
});
console.log(completion.choices[0].message.content);
```

Limits: while our wrapper simplifies the integration of OpenAI, there are a few limitations to keep in mind:

- Usage quota: we limit each user to 10 requests per minute.
- Features: chat completions is the only endpoint available.

If these limits are too low, let us know! You can also get around them by using your own keys:

1. Create your own API key on OpenAI's website
2. Create an environment variable named OPENAI_API_KEY
3. Use the OpenAI client from npm:openai:

```ts
import { OpenAI } from "npm:openai";

const openai = new OpenAI();
```
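The 10-requests-per-minute quota above is enforced server-side. As a minimal sketch only (the class name and sliding-window bookkeeping are my own, not part of std/openai), a client-side throttle that mirrors the quota could look like this:

```typescript
// Hypothetical sliding-window throttle mirroring the 10 req/min quota.
class RequestThrottle {
  private stamps: number[] = [];
  constructor(private limit = 10, private windowMs = 60_000) {}

  // Returns true if a request may be sent now, recording it if so.
  allow(now: number = Date.now()): boolean {
    // Drop timestamps that have aged out of the window.
    this.stamps = this.stamps.filter((t) => now - t < this.windowMs);
    if (this.stamps.length >= this.limit) return false;
    this.stamps.push(now);
    return true;
  }
}
```

Calling `allow()` before each `openai.chat.completions.create(...)` lets you fail fast locally instead of hitting the server's limit.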
Script
# OpenAI - [Docs ↗](https://docs.val.town/std/openai)
Use OpenAI's chat completion API with [`std/openai`](https://www.val.town/v/std/openai). This integration enables access to OpenAI's language models without needing to acquire API keys.
import { OpenAI } from "https://esm.town/v/std/openai";
import { type ClientOptions, OpenAI as RawOpenAI } from "npm:openai";
* API Client for interfacing with the OpenAI API. Uses Val Town credentials.
export class OpenAI {
private rawOpenAIClient: RawOpenAI;
* @param {Core.Fetch} [opts.fetch] - Specify a custom `fetch` function implementation.
this.rawOpenAIClient = new RawOpenAI({
baseURL: "https://std-openaiproxy.web.val.run/v1",
get chat(): RawOpenAI["beta"]["chat"] {
  return this.rawOpenAIClient.chat;
}
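The fragment above shows the wrapper's shape: it constructs a raw npm client pointed at Val Town's proxy and exposes it through a getter. A stripped-down sketch of that delegation pattern follows; the inner client here is a stub standing in for the real npm:openai client, and the class names are illustrative only:

```typescript
// Stub standing in for the npm:openai client.
class RawClient {
  constructor(public opts: { baseURL?: string; apiKey?: string }) {}
  chat = { completions: { create: async () => ({ ok: true }) } };
}

// Wrapper that delegates `chat` to the raw client, as std/openai does.
class ProxyClient {
  private raw: RawClient;
  constructor() {
    this.raw = new RawClient({ baseURL: "https://std-openaiproxy.web.val.run/v1" });
  }
  get chat() {
    return this.raw.chat;
  }
}
```

The getter means callers write `client.chat.completions.create(...)` exactly as they would against the npm client, while the wrapper controls construction.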
gpt3
@patrickjm
OpenAI text completion. https://platform.openai.com/docs/api-reference/completions

val.town has generously provided a free daily quota. Until the quota is met, there is no need to provide an API key. To see if the quota has been met, you can run @patrickjm.openAiFreeQuotaExceeded(). For full REST API access, see @patrickjm.openAiTextCompletion.
RPC (deprecated)
import { trackOpenAiFreeUsage } from "https://esm.town/v/patrickjm/trackOpenAiFreeUsage";
import { openAiTextCompletion } from "https://esm.town/v/patrickjm/openAiTextCompletion";
import { openAiModeration } from "https://esm.town/v/patrickjm/openAiModeration";
import { openAiFreeQuotaExceeded } from "https://esm.town/v/patrickjm/openAiFreeQuotaExceeded";
import { openAiFreeUsageConfig } from "https://esm.town/v/patrickjm/openAiFreeUsageConfig";
* OpenAI text completion. https://platform.openai.com/docs/api-reference/completions
* To see if the quota has been met, you can run @patrickjm.openAiFreeQuotaExceeded()
* For full REST API access, see @patrickjm.openAiTextCompletion
openAiKey?: string,
const apiKey = params.openAiKey ?? openAiFreeUsageConfig.key;
gpt4_playground
@scio
An interactive, runnable TypeScript val by scio
Script
export const gpt4_playground = (async (query) => {
  const { OpenAI } = await import("https://deno.land/x/openai/mod.ts");
  const openAI = new OpenAI(process.env.OPENAI_KEY);
  const chatCompletion = openAI.createChatCompletion({
    model: "gpt-4",
    messages: [{ role: "user", content: query }], // messages shape assumed; elided in the search snippet
  });
  return await chatCompletion;
})("Please explain how OpenAI GPT-4 is better than GPT-3");
Albert
@jidun
Superintelligent AI System Prompt

Core Identity: You are a superintelligent analytical system with comprehensive knowledge across all domains.

Primary Protocol: Execute precise multi-step analysis of queries through: atomic decomposition of user questions, multi-perspective analysis, rigorous fact-checking, logical flow verification, and double-validation of all outputs.

Core Methodology: Parse queries with extreme precision. Identify explicit/implicit requirements. Verify across knowledge domains. Cross-check calculations. Validate logical consistency. Assess practical applicability.

Pre-Response Checklist: Outline key points. Verify accuracy. Check calculations twice. Validate assumptions. Assess edge cases.

Response Criteria: Maintain exceptional precision while preserving clarity. Highlight confidence levels. Note key assumptions. Provide cross-domain insights when relevant.

Quality Standards: Verify all claims. Validate mathematical accuracy. Ensure logical consistency. Confirm practical utility. Highlight uncertainties. Acknowledge limitations.

Communication Guidelines: Present structured, clear information. Use precise terminology. Explain complex concepts thoroughly. Maintain scholarly rigor while ensuring accessibility.

Critical Thinking Framework: Apply formal logic. Utilize statistical reasoning. Implement systems thinking. Evaluate evidence quality. Identify potential biases.

Final Verification Protocol: Perform comprehensive self-review before output submission to ensure accuracy, completeness, and practical value.
HTTP
function addMessage(content, type) {
function loadChatHistories() {
savedChats.forEach(function(chat, index) {
function saveChat() {
const messages = Array.from(chatMessages.children).map(function(msg) {
function loadSelectedChat() {
selectedChat.messages.forEach(function(msg) {
function deleteSelectedChat() {
async function sendMessage() {
messageInput.addEventListener('keypress', function(e) {
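The function names above imply a save/load cycle for chat histories. A minimal in-memory sketch of that cycle follows; the storage key, message shape, and `Map` backing are hypothetical (the val presumably persists to browser storage instead):

```typescript
type ChatMessage = { type: "user" | "ai"; content: string };

// In-memory stand-in for the val's saved-chat storage.
const savedChats = new Map<string, ChatMessage[]>();

function saveChat(name: string, messages: ChatMessage[]): void {
  // Copy so later edits to the live message list don't mutate the saved chat.
  savedChats.set(name, [...messages]);
}

function loadSelectedChat(name: string): ChatMessage[] {
  return savedChats.get(name) ?? [];
}

function deleteSelectedChat(name: string): void {
  savedChats.delete(name);
}
```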
askAI
@DFB
An interactive, runnable TypeScript val by DFB
Script
import { OpenAI } from "https://deno.land/x/openai@v4.54.0/mod.ts";

const apiKey = Deno.env.get("OPENAI_API_KEY");
const openai = new OpenAI({ apiKey });

export async function askAI(msg: string) {
  const cfg = {
    model: "gpt-4o", // model not shown in the search snippet; assumed
    messages: [{ role: "user", content: msg }],
    max_tokens: 3000,
  } satisfies OpenAI.ChatCompletionCreateParamsNonStreaming;
  const chat = await openai.chat.completions.create(cfg);
  return chat.choices?.[0].message.content;
}
getModelBuilder
@webup
An interactive, runnable TypeScript val by webup
Script
export async function getModelBuilder(spec: {
provider?: "openai" | "huggingface";
} = { type: "llm", provider: "openai" }, options?: any) {
if (spec?.provider === "openai")
args.openAIApiKey = process.env.OPENAI;
matches({ type: "llm", provider: "openai" }),
const { OpenAI } = await import("npm:langchain/llms/openai");
return new OpenAI(args);
matches({ type: "chat", provider: "openai" }),
const { ChatOpenAI } = await import("npm:langchain/chat_models/openai");
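getModelBuilder dispatches on a `{ type, provider }` spec via a `matches` helper, then lazily imports the matching langchain class. The sketch below reimplements that dispatch pattern in a self-contained way: the `matches` predicate is my own reconstruction, and the factories are string-returning stubs rather than the real langchain imports:

```typescript
type Spec = { type?: "llm" | "chat"; provider?: "openai" | "huggingface" };

// True when every key present in the pattern equals the spec's value.
function matches(pattern: Spec): (spec: Spec) => boolean {
  return (spec) =>
    Object.entries(pattern).every(([k, v]) => spec[k as keyof Spec] === v);
}

// Stub factories in place of the langchain model classes.
const factories: Array<[(s: Spec) => boolean, () => string]> = [
  [matches({ type: "llm", provider: "openai" }), () => "OpenAI"],
  [matches({ type: "chat", provider: "openai" }), () => "ChatOpenAI"],
];

function buildModel(spec: Spec): string {
  const hit = factories.find(([pred]) => pred(spec));
  if (!hit) throw new Error("no factory for spec");
  return hit[1]();
}
```

The first matching predicate wins, so more specific patterns should be listed before general ones.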
gettingOpenAiStreamingtoWork
@yawnxyz
Blatantly copied code from thesephist's webgen: https://www.val.town/v/thesephist/webgen. Couldn't get streaming to work in Val Town myself!!
HTTP
import OpenAI from "npm:openai";
const openai = new OpenAI();
export default async (req) => {
// Generate the AI response
const stream = await openai.chat.completions.create({
model: "gpt-4o",
openaiFineTune
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import { fetchJSON } from "https://esm.town/v/stevekrouse/fetchJSON";
export function openaiFineTune({ key, model, trainingFile }: {
  key: string;
  model?: string;
  trainingFile: string;
}) {
  return fetchJSON(
    "https://api.openai.com/v1/fine_tuning/jobs",
    {
      method: "POST",
      headers: { Authorization: `Bearer ${key}` }, // auth header assumed; elided in the search snippet
      body: JSON.stringify({ training_file: trainingFile, model }),
    },
  );
}
tealBadger
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
stream: true,
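With `stream: true`, the create call yields an async iterable of chunks rather than a single completion, and an HTTP val must turn that into a `Response` body. The sketch below shows the generic plumbing for that step; it is not code from the val, and the demo generator stands in for the OpenAI chunk stream:

```typescript
// Convert an async iterable of strings into a ReadableStream of bytes,
// suitable for `new Response(stream)`.
function toByteStream(chunks: AsyncIterable<string>): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    async start(controller) {
      for await (const chunk of chunks) controller.enqueue(encoder.encode(chunk));
      controller.close();
    },
  });
}

// Stand-in for the token chunks an OpenAI stream would produce.
async function* demoChunks() {
  yield "Hello, ";
  yield "world";
}
```

In a real val you would pull the text out of each OpenAI chunk (e.g. its delta content) before encoding it.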
robustCopperCardinal
@sky_porie_fire443
An interactive, runnable TypeScript val by sky_porie_fire443
Script
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
ask_gpt4
@scio
An interactive, runnable TypeScript val by scio
Script
export const ask_gpt4 = async (query) => {
  const { OpenAI } = await import("https://deno.land/x/openai/mod.ts");
  const openAI = new OpenAI(process.env.OPENAI_KEY);
  const chatCompletion = await openAI.createChatCompletion({
    model: "gpt-4",
    messages: [{ role: "user", content: query }], // messages shape assumed; elided in the search snippet
  });
  return chatCompletion;
};
Ms_Spangler
@jidun
This is Ms. Spangler, an advanced AI assistant specialized in U.S. education, with particular expertise in Texas junior high school curriculum (grades 6-8). Your primary goal is to provide accurate, age-appropriate academic support while fostering critical thinking and understanding.

Core Attributes: Explain concepts clearly using grade-appropriate language. Provide relevant examples from everyday life. Break down complex topics into manageable steps. Encourage problem-solving rather than giving direct answers. Maintain alignment with Texas Essential Knowledge and Skills (TEKS) standards. Adapt explanations based on student comprehension level. Promote growth mindset and learning from mistakes.

Subject Matter Expertise:
- Mathematics: Pre-algebra and introductory algebra. Geometry fundamentals. Rational numbers and operations. Statistical thinking and probability. Mathematical problem-solving strategies.
- Science: Life science and biology basics. Physical science principles. Earth and space science. Scientific method and inquiry. Laboratory safety and procedures.
- English Language Arts: Reading comprehension strategies. Writing composition and structure. Grammar and mechanics. Literary analysis. Research skills.
- Social Studies: Texas history and geography. U.S. history through Reconstruction. World cultures and geography. Civics and government. Economics fundamentals.

Response Guidelines: First assess the student's current understanding level. Use scaffolding techniques to build on existing knowledge. Provide visual aids or diagrams when beneficial. Include practice problems or examples. Offer positive reinforcement and constructive feedback. Suggest additional resources for further learning. Check for understanding through targeted questions.

Safety and Ethics: Maintain academic integrity. Encourage independent thinking. Protect student privacy. Provide accurate, fact-based information. Promote digital citizenship. Support inclusive learning environments.

When responding to questions: Acknowledge the question and verify understanding. Connect to relevant TEKS standards. Present information in clear, logical steps. Use multiple modalities (visual, verbal, mathematical). Provide opportunities for practice. Check for comprehension. Offer extension activities for advanced learning.

Always prioritize: Student safety and well-being. Academic integrity. Grade-level appropriateness. TEKS alignment. Growth mindset development. Critical thinking skills. Real-world applications.
HTTP
function addMessage(content, type) {
function loadChatHistories() {
savedChats.forEach(function(chat, index) {
function saveChat() {
const messages = Array.from(chatMessages.children).map(function(msg) {
function loadSelectedChat() {
selectedChat.messages.forEach(function(msg) {
function deleteSelectedChat() {
async function sendMessage() {
messageInput.addEventListener('keypress', function(e) {
gptTag
@vladimyr
// Initialize the gpt function with the system message
Script
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";
async function getOpenAI() {
  if (Deno.env.get("OPENAI_API_KEY") === undefined) {
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    return new OpenAI();
  }
  const { OpenAI } = await import("npm:openai");
  return new OpenAI();
}
function startChat(chatOptions: Omit<ChatCompletionCreateParamsNonStreaming, "messages"> & { system: string } = {
return async function gpt(strings, ...values) {
const openai = await getOpenAI();
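gptTag builds the user message from a tagged template literal. A minimal sketch of just the string-assembly half of that pattern follows (the tag name is hypothetical; the real val goes on to send the result through the OpenAI client):

```typescript
// A tag function receives the literal parts and the interpolated values
// separately; this interleaves them back into one prompt string.
function prompt(strings: TemplateStringsArray, ...values: unknown[]): string {
  return strings.reduce(
    (acc, part, i) => acc + part + (i < values.length ? String(values[i]) : ""),
    "",
  );
}
```

Usage: ``prompt`Summarize ${text} in ${3} words` `` yields the fully assembled string, which can then become a chat message's `content`.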
ai
@kakiagp
An interactive, runnable TypeScript val by kakiagp
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
if (req.method === "OPTIONS") {
status: 204,
const openai = new OpenAI();
try {
model: "gpt-4-turbo",
const stream = await openai.chat.completions.create(body);
if (!body.stream) {
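The val above answers `OPTIONS` preflights with a 204 before doing any OpenAI work. A sketch of that preflight branch follows; the exact header set is a typical CORS choice, not copied from the val:

```typescript
// Answer CORS preflights; return null so the caller continues for real requests.
function handlePreflight(req: Request): Response | null {
  if (req.method !== "OPTIONS") return null;
  return new Response(null, {
    status: 204,
    headers: {
      "Access-Control-Allow-Origin": "*",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type, Authorization",
    },
  });
}
```

An HTTP handler would call this first and fall through to the chat-completion logic only when it returns null.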
translator
@yawnxyz
Press to talk, and get a translation! The app is set up so you can easily have a conversation between two people. The app will translate between the two selected languages, in each voice, as the speakers talk. Add your OpenAI API Key, and make sure to open in a separate window for Mic to work.
HTTP
import { OpenAI } from "npm:openai";
const openai = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY_VOICE") }); // npm:openai v4 takes an options object, not a bare key string
const transcription = await openai.audio.transcriptions.create({
console.error('OpenAI API error:', error);
// Helper function to get the supported MIME type
function getSupportedMimeType() {
const response = await openai.chat.completions.create({
console.error('OpenAI API error:', error);
const mp3 = await openai.audio.speech.create({
console.error('OpenAI API error:', error);
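The translator's `getSupportedMimeType` helper presumably probes `MediaRecorder.isTypeSupported` for a recording format the browser accepts. A testable sketch with the browser check injected follows; the candidate list, its order, and the function names are assumptions, not code from the val:

```typescript
// Pick the first recording MIME type the environment supports.
function pickMimeType(
  isSupported: (type: string) => boolean,
  candidates: string[] = ["audio/webm", "audio/mp4", "audio/ogg"],
): string | undefined {
  return candidates.find(isSupported);
}
```

In the browser you would pass `MediaRecorder.isTypeSupported` as the predicate; injecting it keeps the helper testable outside the browser.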