Search

Results include substring matches and semantically similar vals.
digitalYellowRoadrunner
@athyuttamre
An interactive, runnable TypeScript val by athyuttamre
Script
import { OpenAI } from "npm:openai";
import { zodResponseFormat } from "npm:openai/helpers/zod";
import { z } from "npm:zod";

// Zod schema for the structured response (only the `country` field survives in this excerpt)
const Schema = z.object({
  country: z.string(),
});

const client = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });

async function main() {
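  // What follows is a sketch, not the val's actual body: the excerpt ends at `main() {`.
  // The model name and prompt are assumptions; zodResponseFormat pairs the Zod schema with
  // the structured-output response_format of the chat completions API.
  const completion = await client.beta.chat.completions.parse({
    model: "gpt-4o-mini", // assumed
    messages: [{ role: "user", content: "Which country is Mount Fuji in?" }], // illustrative prompt
    response_format: zodResponseFormat(Schema, "answer"),
  });
  console.log(completion.choices[0].message.parsed); // e.g. { country: "Japan" }
}

main();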
star_a_github_repository_with_natural_language
@thatsmeadarsh
Using the OpenAI Assistant API and Composio to Star a GitHub Repo. This is example code that uses Composio to star a GitHub repository by creating an AI agent with the OpenAI API. Goal: enable OpenAI assistants to perform tasks like starring a repository on GitHub via natural language commands. Tools: list of supported tools. FAQs: How do I get a Composio API key? Open app.composio.dev and log in to your account, then go to app.composio.dev/settings and navigate to API Keys -> Generate a new API key.
Script
# Using the OpenAI Assistant API and Composio to Star a GitHub Repo
This is example code that uses Composio to star a GitHub repository by creating an AI agent with the OpenAI API.
## Goal
Enable OpenAI assistants to perform tasks like starring a repository on GitHub via natural language commands.
## Tools
import { OpenAI } from "https://esm.town/v/std/openai";
import { OpenAIToolSet } from "npm:composio-core";

const COMPOSIO_API_KEY = Deno.env.get("COMPOSIO_API_KEY"); // read the Composio API key from the environment
const toolset = new OpenAIToolSet({ apiKey: COMPOSIO_API_KEY });

// The val also creates an authentication function (GitHub connection) for the user here; elided in this excerpt.
const instruction = "Star a repo ComposioHQ/composio on GitHub";

const client = new OpenAI();
const response = await client.chat.completions.create({
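  // Sketch only; the excerpt cuts off inside this call. Tool definitions come from
  // Composio's OpenAIToolSet. `getTools`/`handleToolCall` and the model name are
  // assumptions about the composio-core API, not code taken from the val.
  model: "gpt-4o-mini",
  tools: await toolset.getTools({ apps: ["github"] }),
  messages: [{ role: "user", content: instruction }],
});

// Hand the model's tool call back to Composio, which performs the GitHub star.
await toolset.handleToolCall(response);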
gptTag
@vladimyr
// Initialize the gpt function with the system message
Script
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";

async function getOpenAI() {
  // if you don't have a key, use our std library version
  if (Deno.env.get("OPENAI_API_KEY") === undefined) {
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    return new OpenAI();
  } else {
    const { OpenAI } = await import("npm:openai");
    return new OpenAI();
  }
}

// Returns a template-literal tag that sends the interpolated prompt to the chat API.
// The body below fills gaps in the excerpt and may differ in detail from the original val.
function startChat(chatOptions: Omit<ChatCompletionCreateParamsNonStreaming, "messages"> & { system: string }) {
  return async function gpt(strings: TemplateStringsArray, ...values: unknown[]) {
    const openai = await getOpenAI();
    // Stitch the template strings and interpolated values back into a single prompt
    const input = strings.reduce((result, str, i) => result + str + String(values[i] ?? ""), "");
    const { system, ...rest } = chatOptions;
    const messages = [
      { role: "system", content: system },
      { role: "user", content: input },
    ];
    const createParams = { ...rest, messages } as ChatCompletionCreateParamsNonStreaming;
    const completion = await openai.chat.completions.create(createParams);
    return { ...completion, content: completion.choices[0].message.content };
  };
}
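Usage, assuming the reconstruction above (the model, system message, and prompt are illustrative):
const gpt = startChat({ model: "gpt-4o-mini", system: "You are a terse assistant." });
const { content } = await gpt`Summarize what a Val Town val is in one sentence.`;
console.log(content);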
oliveButterfly
@yawnxyz
An interactive, runnable TypeScript val by yawnxyz
Script
import { fetch } from "https://esm.town/v/std/fetch";
// Uploads a JSON payload to the OpenAI Files API; this reconstruction fills gaps in the excerpt.
export async function openaiUploadFile(
  { key, data, purpose = "assistants" }: { key: string; data: unknown; purpose?: string },
) {
  const file = new Blob([JSON.stringify(data)], { type: "application/json" });
  const formData = new FormData();
  formData.append("purpose", purpose);
  formData.append("file", file, "data.json");
  let result = await fetch("https://api.openai.com/v1/files", {
    method: "POST",
    headers: { Authorization: `Bearer ${key}` },
    body: formData,
  }).then((r) => r.json());
  if (result.error)
    throw new Error("OpenAI Upload Error: " + result.error.message);
  else return result; // the uploaded file object, including its id
}
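A usage sketch for the reconstruction above; the payload and environment-variable name are illustrative:
const uploaded = await openaiUploadFile({
  key: Deno.env.get("OPENAI_API_KEY")!,
  data: { questions: ["What is a val?"] },
});
console.log(uploaded.id); // id of the uploaded file, usable with the Assistants API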
gpt4Example
@stevekrouse
GPT4 Example This uses the brand new gpt-4-1106-preview . To use this, set OPENAI_API_KEY in your Val Town Secrets .
Script
This uses the brand new `gpt-4-1106-preview`.
To use this, set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
import { OpenAI } from "npm:openai";
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
  messages: [{ role: "user", content: "Say hello." }], // illustrative prompt; the original message is truncated in this excerpt
  model: "gpt-4-1106-preview",
});
console.log(chatCompletion.choices[0].message.content);
chat
@weaverwhale
OpenAI ChatGPT helper function This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free. import { chat } from "https://esm.town/v/stevekrouse/openai"; const { content } = await chat("Hello, GPT!"); console.log(content); import { chat } from "https://esm.town/v/stevekrouse/openai"; const { content } = await chat( [ { role: "system", content: "You are Alan Kay" }, { role: "user", content: "What is the real computer revolution?"} ], { max_tokens: 50, model: "gpt-4" } ); console.log(content);
Script
# OpenAI ChatGPT helper function
This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free.
import { chat } from "https://esm.town/v/stevekrouse/openai";
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";
async function getOpenAI() {
  // Use the std library client when no personal API key is configured
  if (Deno.env.get("OPENAI_API_KEY") === undefined) {
    const { OpenAI } = await import("https://esm.town/v/std/openai");
    return new OpenAI();
  }
  const { OpenAI } = await import("npm:openai");
  return new OpenAI();
}
/** Initiates a chat conversation with OpenAI's GPT model and retrieves the content of the first response.
    (Reconstructed from the excerpt; details may differ from the original val.) */
export async function chat(input: string | Message[], options?: Omit<ChatCompletionCreateParamsNonStreaming, "messages">) {
  const messages = typeof input === "string" ? [{ role: "user", content: input }] : input;
  const createParams = { ...options, messages } as ChatCompletionCreateParamsNonStreaming;
  const openai = await getOpenAI();
  const completion = await openai.chat.completions.create(createParams);
  return { ...completion, content: completion.choices[0].message.content };
}
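The usage examples flattened into the description above, restored to readable form:
import { chat } from "https://esm.town/v/stevekrouse/openai";

const { content } = await chat("Hello, GPT!");
console.log(content);

const { content: answer } = await chat(
  [
    { role: "system", content: "You are Alan Kay" },
    { role: "user", content: "What is the real computer revolution?" },
  ],
  { max_tokens: 50, model: "gpt-4" },
);
console.log(answer);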
gpt3
@snm
OpenAI text completion (https://platform.openai.com/docs/api-reference/completions). val.town has generously provided a free daily quota; until the quota is met, there is no need to provide an API key. To see if the quota has been met, you can run @patrickjm.openAiFreeQuotaExceeded(). For full REST API access, see @patrickjm.openAiTextCompletion.
Script
import { trackOpenAiFreeUsage } from "https://esm.town/v/snm/trackOpenAiFreeUsage";
import { openAiTextCompletion } from "https://esm.town/v/patrickjm/openAiTextCompletion?v=8";
import { openAiModeration } from "https://esm.town/v/snm/openAiModeration";
import { openAiFreeQuotaExceeded } from "https://esm.town/v/patrickjm/openAiFreeQuotaExceeded?v=2";
import { openAiFreeUsageConfig } from "https://esm.town/v/snm/openAiFreeUsageConfig";
/**
 * OpenAI text completion. https://platform.openai.com/docs/api-reference/completions
 * To see if the quota has been met, you can run @patrickjm.openAiFreeQuotaExceeded()
 * For full REST API access, see @patrickjm.openAiTextCompletion
 */
// Signature reconstructed from the excerpt; the original val may differ.
export const gpt3 = async (params: {
  prompt: string,
  openAiKey?: string,
}) => {
  // Use the caller's key if provided; otherwise fall back to the shared free-quota key
  const apiKey = params.openAiKey ?? openAiFreeUsageConfig.key;
  // ... moderation, the completion call, and free-usage tracking are elided in this excerpt
};
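A usage sketch against the reconstruction above; as the description notes, the key can be omitted until the free quota is exhausted:
const result = await gpt3({ prompt: "Write a haiku about Val Town" });
console.log(result);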
Ms_Spangler
@jidun
This is Ms. Spangler, an advanced AI assistant specialized in U.S. education, with particular expertise in Texas junior high school curriculum (grades 6-8). Your primary goal is to provide accurate, age-appropriate academic support while fostering critical thinking and understanding. Core Attributes: Explain concepts clearly using grade-appropriate language Provide relevant examples from everyday life Break down complex topics into manageable steps Encourage problem-solving rather than giving direct answers Maintain alignment with Texas Essential Knowledge and Skills (TEKS) standards Adapt explanations based on student comprehension level Promote growth mindset and learning from mistakes Subject Matter Expertise: Mathematics: Pre-algebra and introductory algebra Geometry fundamentals Rational numbers and operations Statistical thinking and probability Mathematical problem-solving strategies Science: Life science and biology basics Physical science principles Earth and space science Scientific method and inquiry Laboratory safety and procedures English Language Arts: Reading comprehension strategies Writing composition and structure Grammar and mechanics Literary analysis Research skills Social Studies: Texas history and geography U.S. history through reconstruction World cultures and geography Civics and government Economics fundamentals Response Guidelines: First assess the student's current understanding level Use scaffolding techniques to build on existing knowledge Provide visual aids or diagrams when beneficial Include practice problems or examples Offer positive reinforcement and constructive feedback Suggest additional resources for further learning Check for understanding through targeted questions Safety and Ethics: Maintain academic integrity Encourage independent thinking Protect student privacy Provide accurate, fact-based information Promote digital citizenship Support inclusive learning environments When responding to questions: Acknowledge the question and verify understanding Connect to relevant TEKS standards Present information in clear, logical steps Use multiple modalities (visual, verbal, mathematical) Provide opportunities for practice Check for comprehension Offer extension activities for advanced learning Always prioritize: Student safety and well-being Academic integrity Grade-level appropriateness TEKS alignment Growth mindset development Critical thinking skills Real-world applications
HTTP
try {
  const { OpenAI } = await import("https://esm.town/v/std/openai");
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    messages: [
      // systemPrompt (the Ms. Spangler persona above) and userMessage come from the
      // surrounding Hono handler, which is not shown in this excerpt
      { role: "system", content: systemPrompt },
      { role: "user", content: userMessage },
    ],
  });
  return c.json({ response: completion.choices[0].message.content });
} catch (error) {
  console.error("OpenAI error:", error);
  return c.json({ response: "Neural networks malfunctioning. Try again, human." });
}
Albert
@jidun
Superintelligent AI System Prompt Core Identity You are a superintelligent analytical system with comprehensive knowledge across all domains. Primary Protocol Execute precise multi-step analysis of queries through: Atomic decomposition of user questions Multi-perspective analysis Rigorous fact-checking Logical flow verification Double-validation of all outputs Core Methodology Parse queries with extreme precision Identify explicit/implicit requirements Verify across knowledge domains Cross-check calculations Validate logical consistency Assess practical applicability Pre-Response Checklist Outline key points Verify accuracy Check calculations twice Validate assumptions Assess edge cases Response Criteria Maintain exceptional precision while preserving clarity Highlight confidence levels Note key assumptions Provide cross-domain insights when relevant Quality Standards Verify all claims Validate mathematical accuracy Ensure logical consistency Confirm practical utility Highlight uncertainties Acknowledge limitations Communication Guidelines Present structured, clear information Use precise terminology Explain complex concepts thoroughly Maintain scholarly rigor while ensuring accessibility Critical Thinking Framework Apply formal logic Utilize statistical reasoning Implement systems thinking Evaluate evidence quality Identify potential biases Final Verification Protocol Perform comprehensive self-review before output submission to ensure: Accuracy Completeness Practical value
HTTP
try {
  const { OpenAI } = await import("https://esm.town/v/std/openai");
  const openai = new OpenAI();
  const completion = await openai.chat.completions.create({
    messages: [
      // As in the previous val: the system prompt (the Albert persona above) and the
      // user's message come from the surrounding handler, not shown in this excerpt
      { role: "system", content: systemPrompt },
      { role: "user", content: userMessage },
    ],
  });
  return c.json({ response: completion.choices[0].message.content });
} catch (error) {
  console.error("OpenAI error:", error);
  return c.json({ response: "Neural networks malfunctioning. Try again, human." });
}
PersonalizationGPT
@mjweaver01
PersonalizationGPT You are a helpful personalization assistant Use GPT to return JIT personalization for client side applications. If you fork this, you'll need to set OPENAI_API_KEY in your Val Town Secrets .
HTTP
Use GPT to return JIT personalization for client side applications.
If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
import { Hono } from "npm:hono";
import { OpenAI } from "npm:openai";
const defaultUser = { name: "Anonymous", interests: [] as string[] }; // example shape; the original fields are truncated in this excerpt
type UserObject = typeof defaultUser;
const personalizationGPT = async (user: UserObject) => {
  const openai = new OpenAI();
  let chatCompletion = await openai.chat.completions.create({
    model: "gpt-4o-mini", // model assumed
    messages: [{ role: "system", content: "You are a helpful personalization assistant" }, { role: "user", content: JSON.stringify(user) }],
  });
  return chatCompletion.choices[0].message.content;
};
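Since this is an HTTP val, the helper above is presumably exposed through the imported Hono app; a wiring sketch, with the route path and request shape as assumptions:
const app = new Hono();
app.post("/personalize", async (c) => {
  const user = { ...defaultUser, ...(await c.req.json()) }; // overlay the request body on the defaults
  return c.json({ content: await personalizationGPT(user) });
});
export default app.fetch;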
ai
@yawnxyz
An http and class wrapper for Vercel's AI SDK Usage: Groq: https://yawnxyz-ai.web.val.run/generate?prompt="tell me a beer joke"&provider=groq&model=llama3-8b-8192 Perplexity: https://yawnxyz-ai.web.val.run/generate?prompt="what's the latest phage directory capsid & tail article about?"&provider=perplexity Mistral: https://yawnxyz-ai.web.val.run/generate?prompt="tell me a joke?"&provider=mistral&model="mistral-small-latest" async function calculateEmbeddings(text) { const url = `https://yawnxyz-ai.web.val.run/generate?embed=true&value=${encodeURIComponent(text)}`; try { const response = await fetch(url); const data = await response.json(); return data; } catch (error) { console.error('Error calculating embeddings:', error); return null; } }
HTTP
import { createOpenAI } from "npm:@ai-sdk/openai";

// Provider clients built with the AI SDK's OpenAI-compatible factory
const openai = createOpenAI({
  // apiKey: Deno.env.get("OPENAI_API_KEY"),
  apiKey: Deno.env.get("OPENAI_API_KEY_COVERSHEET"),
});
const groq = createOpenAI({
  baseURL: "https://api.groq.com/openai/v1",
  apiKey: Deno.env.get("GROQ_API_KEY"), // key name assumed; not shown in this excerpt
});
const perplexity = createOpenAI({
  baseURL: "https://api.perplexity.ai", // base URL assumed; not shown in this excerpt
  apiKey: Deno.env.get("PERPLEXITY_API_KEY"), // key name assumed
});

// Later, inside the wrapper class, requests are routed by provider:
//   this.defaultProvider = options.provider || "openai";
//   case "openai":
//     result = await this.generateOpenAIResponse({ model, prompt, maxTokens, temperature, streaming, schema, system, messag
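The description above documents the endpoint's query-string interface; a minimal client call using the Groq example it gives:
const url = "https://yawnxyz-ai.web.val.run/generate"
  + "?prompt=" + encodeURIComponent("tell me a beer joke")
  + "&provider=groq&model=llama3-8b-8192";
const data = await (await fetch(url)).json();
console.log(data);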
grayWildfowl
@jdan
An interactive, runnable TypeScript val by jdan
Script
import { OpenAI } from "https://esm.town/v/std/openai?v=4";
const openai = new OpenAI();
async function runConversation() {
  // The prompt itself is elided in this excerpt: a template literal whose whitespace is collapsed
  const prompt = `...`.replaceAll(/\s+/g, "");
  const response = await openai.chat.completions.create({
    messages: [{ role: "user", content: prompt }],
  });
  return response.choices[0].message.content;
}
openAiTextCompletion
@patrickjm
An interactive, runnable TypeScript val by patrickjm
Script
import { fetchJSON } from "https://esm.town/v/stevekrouse/fetchJSON?v=41";

export let openAiTextCompletion = async (params: {
  /** https://beta.openai.com/account/api-keys */
  apiKey: string,
  /** Optional. https://beta.openai.com/account/org-settings */
  org?: string,
  // REST args, see https://beta.openai.com/docs/api-reference/completions/create
  // (only `prompt` is visible in this excerpt; model, max_tokens, etc. are elided)
  prompt: string,
  [key: string]: unknown,
}) => {
  if (!params.apiKey) {
    throw new Error(
      "Please provide 'apiKey' param. See: https://beta.openai.com/account/api-keys",
    );
  }
  // Separate auth-related params from the REST arguments forwarded to the API
  const { apiKey, org, ...args } = params;
  args.stream = false;
  const response = await fetchJSON("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${params.apiKey}`,
      ...(params.org ? { "OpenAI-Organization": params.org } : {}),
    },
    body: JSON.stringify(args),
  });
  return response;
};
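A usage sketch against the signature above; the model and other REST args are illustrative, since only prompt is visible in the excerpt:
const completion = await openAiTextCompletion({
  apiKey: Deno.env.get("OPENAI_API_KEY")!,
  model: "gpt-3.5-turbo-instruct",
  prompt: "Write a one-line description of Val Town.",
  max_tokens: 60,
});
console.log(completion.choices?.[0]?.text);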
modelInvoke
@stevekrouse
An interactive, runnable TypeScript val by stevekrouse
Script
import process from "node:process";
import { ChatOpenAI } from "npm:langchain/chat_models/openai";
const model = new ChatOpenAI({
  temperature: 0.9,
  openAIApiKey: process.env.openai, // API key stored in the `openai` environment variable
});
export const modelInvoke = model.invoke("What is your name?");
katakanaWordApi
@jdan
An interactive, runnable TypeScript val by jdan
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(request: Request): Promise<Response> {
  try {
    const openai = new OpenAI();
    const completion = await openai.chat.completions.create({
      messages: [