Search

205 results found for openai (455ms)

Code (202)
Note: When changing a SQLite table's schema, change the table's name (e.g., add _2 or _3) to create a fresh table.

### OpenAI

```ts
import { OpenAI } from "https://esm.town/v/std/openai";

const openai = new OpenAI();
const completion = await openai.chat.completions.create({
  messages: [
    { role: "user", content: "Say hello in a creative way" },
  ],
  model: "gpt-4o-mini",
  max_tokens: 30,
});
console.log(completion.choices[0].message.content);
```
- **Handwriting canvas** — chisel-nib calligraphy engine with quill physics
- **OCR via Tesseract.js** — your handwriting → text, right in the browser
- **AI memory** — powered by OpenAI, with persistent conversation per session
- **The aesthetic** — parchment textures, ink fade effects, no UI chrome
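The per-session "AI memory" feature above can be sketched as a small session store. This is a hypothetical illustration, not the val's actual code (names like `remember` are invented), and it uses an in-memory map where the real val would persist to SQLite:

```typescript
// Hypothetical sketch of per-session conversation memory: an in-memory
// map keyed by session id, so each visitor gets their own history.
type Turn = { role: "user" | "assistant"; content: string };

const sessions = new Map<string, Turn[]>();

export function remember(sessionId: string, turn: Turn): Turn[] {
  const history = sessions.get(sessionId) ?? [];
  history.push(turn);
  sessions.set(sessionId, history);
  return history;
}
```

A durable version would write each turn to SQLite instead, so a process restart does not drop the conversation.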
```ts
import { OpenAI } from "https://esm.town/v/std/openai";
import { sqlite } from "https://esm.town/v/std/sqlite@14-main/main.ts";
import { SYSTEM_PROMPT } from "./prompt.ts";
import { html } from "./page.ts";

const openai = new OpenAI();

// --- Session storage via SQLite ---

// Send last 16 messages for context (fits in token window easily)
const chatResult = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  max_tokens: 200,
  // … (messages omitted in this search snippet)
});
```
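The "last 16 messages" comment in the snippet above can be made concrete with a small helper. The names here are hypothetical; the val's actual trimming code is not shown in the search result:

```typescript
type ChatMessage = {
  role: "system" | "user" | "assistant";
  content: string;
};

// Keep the system prompt plus only the most recent N turns, so the
// request stays well inside the model's context window.
export function buildContext(
  systemPrompt: string,
  history: ChatMessage[],
  limit = 16,
): ChatMessage[] {
  return [
    { role: "system", content: systemPrompt },
    ...history.slice(-limit),
  ];
}
```

Slicing from the end keeps the newest turns, which matter most for continuing the conversation.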
```ts
import { OpenAI } from "https://esm.sh/openai@4.20.1";

export default async function handler(req: Request): Promise<Response> {
  // …
  console.log(`📸 Processing ${base64Images.length} image(s)`);

  const apiKey = Deno.env.get("OPENAI_API_KEY");
  if (!apiKey) {
    console.error("❌ OPENAI_API_KEY not set");
    return new Response(
      JSON.stringify({
        // … (error body omitted in this search snippet)
      }),
    );
  }

  const openai = new OpenAI({ apiKey });
  console.log("🤖 Calling OpenAI for fish identification...");

  // Language-specific instruction
  // …

  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      // … (omitted in this search snippet)
    ],
  });

  // …
  if (!content) {
    throw new Error("No response from OpenAI");
  }
  // …
  if (error instanceof Error) {
    if (
      error.message === "No response from OpenAI" ||
      error.message === "Invalid response format from AI"
    ) {
      // …
    }
  }
}
```
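The missing-key guard in the handler above returns a JSON error response. Factored out, it might look like the following; this is a hypothetical helper, not the val's actual code:

```typescript
// Hypothetical helper mirroring the guard above: produce a JSON error
// Response when a required environment variable is missing.
export function missingEnvResponse(name: string): Response {
  return new Response(
    JSON.stringify({ error: `${name} not set` }),
    { status: 500, headers: { "Content-Type": "application/json" } },
  );
}
```

Returning a structured JSON body (rather than plain text) lets the client distinguish configuration errors from model errors.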
dcm31/editme/main.ts
11 matches
```ts
/**
 * Add self-editing capability to any Val. Import this and mount
 * the handler on /editme and /mcp. Then POST instructions
 * and an OpenAI agent will modify your Val's files via a self-hosted MCP.
 *
 * Usage:
 *   const config = {
 *     val: "yourhandle/yourval",
 *     openaiApiKey: Deno.env.get("OPENAI_API_KEY")!,
 *     valtownApiKey: Deno.env.get("VALTOWN_API_KEY")!,
 *     bearerToken: Deno.env.get("EDITME_SECRET"), // optional auth
 *   };
 */
import OpenAI from "npm:openai";

// ============= Types =============

// … (config interface; surrounding declaration elided in this search snippet)
/** Val identifier: "handle/valName" format */
val: string;
/** OpenAI API key */
openaiApiKey: string;
/** Val Town API key (needs write access to the val) */
valtownApiKey: string;
/** Optional bearer token to protect the /editme endpoint */
bearerToken?: string;
/** OpenAI model (default: gpt-5.2) — can also be overridden per-request */
model?: string;
/** Reasoning effort: "low" | "medium" | "high" (default: "medium") */
// …

// … (agent runner; earlier parameters elided in this search snippet)
  modelOverride?: string,
): Promise<{ summary: string; steps: string[]; model: string }> {
  const openai = new OpenAI({ apiKey: config.openaiApiKey });
  const model = modelOverride || config.model || "gpt-5.2";
  const reasoningEffort = config.reasoningEffort || "medium";
  // …
  console.log(`🔌 OpenAI Responses API (model=${model}, mcp=${mcpUrl})`);
  let response = await openai.responses.create(createOpts);

  for (let turn = 0; turn < MAX_TURNS; turn++) {
    // … (inside an if/else on the response status)
    contOpts.reasoning = { effort: reasoningEffort };
    // …
    response = await openai.responses.create(contOpts);
    // …
    } else {
      console.log(`⚠️ Unexpected status: ${response.status}`);
    // …

/**
 * Create an HTTP handler that accepts edit instructions and applies them
 * to the specified Val using an OpenAI agent backed by a self-hosted MCP.
 *
 * IMPORTANT: You must also mount createMcpHandler on /mcp (or specify mcpPath).
 */
```
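The optional `bearerToken` in the config above could be enforced with a check along these lines. This is a sketch under assumed semantics (reject mismatched `Authorization` headers, allow everything when no token is configured), not the val's actual implementation:

```typescript
// Hypothetical sketch: when a bearer token is configured, require a
// matching Authorization header; when none is configured, allow all.
export function isAuthorized(req: Request, bearerToken?: string): boolean {
  if (!bearerToken) return true; // auth disabled if no token is set
  return req.headers.get("Authorization") === `Bearer ${bearerToken}`;
}
```

The handler would call this before touching the agent and return a 401 response on failure.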
| `TODOS_DB_ID` | Database ID where todos are synced (32-char ID without hyphens) |
### Recommended: OpenAI API Key
| Variable | Description |
| ---------------- | ----------------------------------------------------- |
| `OPENAI_API_KEY` | OpenAI API key for AI features (strongly recommended) |
**What AI does:**
- Disambiguates when multiple contacts match a name
**Without `OPENAI_API_KEY`:** Falls back to Val Town's shared OpenAI client (10 req/min limit).
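The fallback described above can be sketched as a small decision helper. The names are hypothetical; the 10 req/min limit on the shared client comes from the note above:

```typescript
// Hypothetical sketch of the key fallback: prefer a personal key when
// OPENAI_API_KEY is set, otherwise use the shared, rate-limited client.
export function resolveOpenAIMode(
  env: Record<string, string | undefined>,
): { mode: "personal"; apiKey: string } | { mode: "shared"; rateLimitPerMin: number } {
  const apiKey = env["OPENAI_API_KEY"];
  return apiKey
    ? { mode: "personal", apiKey }
    : { mode: "shared", rateLimitPerMin: 10 };
}
```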
### Required Database Properties
Visit the root URL of your val (`main.http.tsx`) to see a dashboard showing:
- **Connection status**: Notion API and OpenAI connectivity
- **Property mappings**: Which properties are configured and whether they exist in your database
| Variable                | Description                                   |
| ----------------------- | --------------------------------------------- |
| `NOTION_WEBHOOK_SECRET` | API key for protecting endpoints              |
| `OPENAI_API_KEY`        | For AI features (see Quick Start for details) |
---
- **Notion API** - Subject to Notion's terms of service and API limitations
- **OpenAI API** (optional) - Subject to OpenAI's terms of service
- **Val Town** - Subject to Val Town's terms of service
1. Click [**Remix**](/?intent=remix)
2. Add environment variables:
- `OPENAI_API_KEY` — for AI lead qualification
- `GITHUB_TOKEN` — for accessing GitHub API
([create one here](https://github.com/settings/tokens))
- jubertioai/hello-realtime (Public): Sample app for the OpenAI Realtime API
- kidjs/openai-agents (Public): Template to use the OpenAI Agents SDK
- EatPraySin/openai-agents (Public): Template to use the OpenAI Agents SDK

Users

No users found

Docs

No docs found