Search: 4,142 results found for "openai" (1426ms)

Code (4,034 results)

GROQ API (External)

POST https://api.groq.com/openai/v1/chat/completions
{
"model": "qwen/qwen3-32b",
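The snippet above shows a request against Groq's OpenAI-compatible endpoint. As a minimal sketch, a request like this could be assembled as follows; `buildGroqRequest` and its shape are illustrative helpers, not names from the codebase:

```typescript
// Sketch: assemble a POST to Groq's OpenAI-compatible chat endpoint.
// buildGroqRequest is a hypothetical helper, not part of the code above.
interface GroqRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

function buildGroqRequest(apiKey: string, model: string, content: string): GroqRequest {
  return {
    url: "https://api.groq.com/openai/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      // Same JSON body shape as the OpenAI Chat Completions API.
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content }],
      }),
    },
  };
}
```

The returned object can be passed straight to `fetch(req.url, req.init)`.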
stream?: boolean;
systemPrompt?: string;
// Reasoning/thinking options (supported by Groq, OpenAI o1/o3, Anthropic)
reasoningEffort?: 'low' | 'medium' | 'high';
// Groq-specific options
imageDetail?: 'low' | 'auto' | 'high';
modelVersion?: 'latest' | 'stable' | string;
// OpenAI-specific options
responseFormat?: { type: string; [key: string]: unknown };
// API Key override - if provided, use this instead of default
private apiKey: string | undefined;
private baseUrl = "https://api.groq.com/openai/v1";
constructor() {
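The comment above describes an API-key override: a key supplied per request wins over the provider's environment default. A minimal sketch of that resolution order, under the assumption that the default comes from an environment variable (`resolveApiKey` is a hypothetical name):

```typescript
// Sketch: per-request key override falls back to the environment default.
// resolveApiKey is a hypothetical helper illustrating the comment above.
function resolveApiKey(override: string | undefined, envKey: string | undefined): string {
  const key = override ?? envKey;
  if (!key) {
    // Mirrors the "not configured" errors thrown by the providers above.
    throw new Error("GROQ_API_KEY not configured.");
  }
  return key;
}
```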
Different models have different requirements:
- **Qwen3-32B**: `reasoning_effort` must be `'none'` or `'default'`
- **OpenAI o1/o3**: `reasoning_effort` can be `'low'`, `'medium'`, `'high'`
- **Anthropic**: Uses `extended_thinking` budget tokens instead
- **Some models**: Don't support certain parameters at all
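The requirements above mean a generic `reasoningEffort` option has to be translated per model family before it reaches the wire. A sketch of one way to do that; the specific mapping (e.g. which efforts collapse to `'none'` vs `'default'` for Qwen) is an assumption for illustration, not taken from the codebase:

```typescript
// Sketch: normalize a generic reasoningEffort per model family.
// The low -> 'none' mapping for Qwen is illustrative, not confirmed.
type Effort = 'low' | 'medium' | 'high';

function normalizeReasoningEffort(model: string, effort: Effort): string | undefined {
  if (model === 'qwen/qwen3-32b') {
    // Qwen3-32B only accepts 'none' or 'default'.
    return effort === 'low' ? 'none' : 'default';
  }
  if (model.startsWith('o1') || model.startsWith('o3')) {
    // OpenAI o1/o3 accept 'low' | 'medium' | 'high' as-is.
    return effort;
  }
  // Anthropic uses extended-thinking budget tokens instead, and some
  // models support no reasoning parameter at all: omit it entirely.
  return undefined;
}
```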
// OpenAI Chat Provider
import type { ChatProvider, ChatMessage, ChatCompletionOptions, ChatCompletionResponse, StreamCh
export class OpenAIProvider implements ChatProvider {
name = "openai";
defaultModel = "gpt-4o";
private apiKey: string | undefined;
private baseUrl = "https://api.openai.com/v1";
constructor() {
this.apiKey = Deno.env.get("OPENAI_API_KEY");
}
if (!apiKey) {
throw new Error("OPENAI_API_KEY not configured. Please add your API key in the chat settin
}
if (!response.ok) {
const errorText = await response.text();
console.error("OpenAI API error:", response.status, errorText);
let errorMsg = "Failed to get response from OpenAI API";
try {
const errorData = JSON.parse(errorText);
errorMsg = errorData.error?.message || errorData.message || errorMsg;
if (response.status === 401 || response.status === 403) {
errorMsg = "OpenAI API key is invalid or missing. Please check your API key in Setting
}
} catch {
// If error text isn't JSON, use the status code
if (response.status === 401 || response.status === 403) {
errorMsg = "OpenAI API key is invalid or missing. Please check your API key in Setting
} else {
errorMsg = `OpenAI API error (${response.status}): ${errorText.substring(0, 200)}`;
}
}
const data = await response.json();
console.log("[OpenAI API complete response]:", JSON.stringify(data, null, 2));
if (!data.choices || !data.choices[0] || !data.choices[0].message) {
throw new Error("Invalid response from OpenAI API");
}
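The non-OK branch above tries to pull a human-readable message out of the error body, with special-casing for auth failures and a fallback when the body is not JSON. The same logic appears again in the streaming path below; as a sketch, it could be factored into one helper (`extractApiError` is a hypothetical name):

```typescript
// Sketch: the error-extraction pattern above, factored into one helper.
// extractApiError is a hypothetical name; messages mirror the code above.
function extractApiError(status: number, errorText: string): string {
  if (status === 401 || status === 403) {
    return "OpenAI API key is invalid or missing. Please check your API key in Settings.";
  }
  try {
    const errorData = JSON.parse(errorText);
    return errorData.error?.message || errorData.message ||
      "Failed to get response from OpenAI API";
  } catch {
    // Body isn't JSON: fall back to the status code and a truncated body.
    return `OpenAI API error (${status}): ${errorText.substring(0, 200)}`;
  }
}
```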
if (!apiKey) {
throw new Error("OPENAI_API_KEY not configured. Please add your API key in the chat settin
}
}
console.log("[OpenAI stream] Request body:", JSON.stringify(requestBody, null, 2));
const response = await fetch(this.baseUrl + "/chat/completions", {
if (!response.ok) {
const errorText = await response.text();
console.error("[OpenAI stream] API error:", response.status, errorText);
let errorMsg = "Failed to start stream from OpenAI API";
try {
const errorData = JSON.parse(errorText);
errorMsg = errorData.error?.message || errorData.message || errorMsg;
if (response.status === 401 || response.status === 403) {
errorMsg = "OpenAI API key is invalid or missing. Please check your API key in Setting
}
} catch {
// If error text isn't JSON, use the status code
if (response.status === 401 || response.status === 403) {
errorMsg = "OpenAI API key is invalid or missing. Please check your API key in Setting
} else {
errorMsg = `OpenAI API error (${response.status}): ${errorText.substring(0, 200)}`;
}
}
if (done) {
console.log("[OpenAI stream] Reader done, remaining buffer:", buffer);
yield { content: "", done: true };
break;
lineCount++;
if (lineCount <= 5) {
console.log("[OpenAI stream] Line " + lineCount + ":", trimmed.substring(0, 200));
}
if (parsed.error) {
const errorMsg = parsed.error?.message || JSON.stringify(parsed.error) || "Unknown
console.error("[OpenAI stream] API error:", errorMsg);
throw new Error("OpenAI API error: " + errorMsg);
}
}
} catch (e) {
if (e instanceof Error && e.message.startsWith("OpenAI API error:")) {
throw e;
}
console.error("[OpenAI stream] JSON parse error:", e, "data:", data.substring(0, 100
}
}
}
} finally {
console.log("[OpenAI stream] Total lines processed:", lineCount);
reader.releaseLock();
}
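The reader loop above consumes Server-Sent-Events lines, skipping non-`data:` lines, stopping on the `[DONE]` sentinel, and surfacing in-band `error` payloads. The per-line handling can be sketched as a pure function (`parseSseLine` is a hypothetical name; real code must also buffer partial lines across reads, as the loop above does):

```typescript
// Sketch: handle one SSE line from a chat-completions stream.
// parseSseLine is a hypothetical helper; buffering across reads is omitted.
function parseSseLine(line: string): { content: string; done: boolean } | null {
  const trimmed = line.trim();
  if (!trimmed.startsWith("data:")) return null; // comments, keepalives, blanks
  const data = trimmed.slice(5).trim();
  if (data === "[DONE]") return { content: "", done: true };
  const parsed = JSON.parse(data);
  if (parsed.error) {
    // Some providers report errors in-band on the stream.
    throw new Error("OpenAI API error: " +
      (parsed.error?.message ?? JSON.stringify(parsed.error)));
  }
  return { content: parsed.choices?.[0]?.delta?.content ?? "", done: false };
}
```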
id: string; // Model ID (e.g., 'llama-3.3-70b-versatile')
name: string; // Display name
provider: string; // Provider name (e.g., 'groq', 'openai')
description?: string;
parameters: ModelParameter[];
},
// OpenAI GPT-OSS Models (Thinking + Tools)
{
id: 'openai/gpt-oss-120b',
name: 'GPT OSS 120B',
provider: 'groq',
description: 'OpenAI\'s flagship 120B parameter model with browser search and reasoning.',
parameters: THINKING_PARAMETERS,
capabilities: { web: true, thinking: true, streaming: true, tools: true }
},
{
id: 'openai/gpt-oss-20b',
name: 'GPT OSS 20B',
provider: 'groq',
},
{
id: 'openai/gpt-oss-safeguard-20b',
name: 'Safety GPT OSS 20B',
provider: 'groq',
// =============================================================================
// OpenAI Base Parameters
// =============================================================================
const OPENAI_BASE_PARAMETERS: ModelParameter[] = [
{
key: 'temperature',
},
];
const OPENAI_REASONING_PARAMETERS: ModelParameter[] = [
...OPENAI_BASE_PARAMETERS,
{
key: 'reasoningEffort',
// =============================================================================
// OpenAI Model Registry
// =============================================================================
export const OPENAI_MODEL_CONFIGS: ModelConfig[] = [
// GPT-5.2 Series (Latest - Dec 2025)
{
id: 'gpt-5.2',
name: 'GPT-5.2',
provider: 'openai',
description: 'The best model for coding and agentic tasks across industries.',
parameters: OPENAI_BASE_PARAMETERS,
capabilities: { image: true, streaming: true, tools: true, thinking: true }
},
{
id: 'gpt-5.2-pro',
name: 'GPT-5.2 Pro',
provider: 'openai',
description: 'Version of GPT-5.2 that produces smarter and more precise responses.',
parameters: OPENAI_REASONING_PARAMETERS,
capabilities: { image: true, streaming: true, tools: true, thinking: true }
},
{
id: 'gpt-5.1',
name: 'GPT-5.1',
provider: 'openai',
description: 'The best model for coding and agentic tasks with configurable reasoning effort
parameters: OPENAI_REASONING_PARAMETERS,
capabilities: { image: true, streaming: true, tools: true, thinking: true }
},
{
id: 'gpt-5',
name: 'GPT-5',
provider: 'openai',
description: 'Flagship model for coding, reasoning, and agentic tasks across domains.',
parameters: OPENAI_BASE_PARAMETERS,
capabilities: { image: true, streaming: true, tools: true, thinking: true }
},
{
id: 'gpt-5-pro',
name: 'GPT-5 Pro',
provider: 'openai',
description: 'Most advanced reasoning model that uses more compute for smarter, more precise
parameters: OPENAI_REASONING_PARAMETERS,
capabilities: { image: true, streaming: true, tools: true, thinking: true }
},
{
id: 'gpt-5-mini',
name: 'GPT-5 Mini',
provider: 'openai',
description: 'Faster, cost-efficient version of GPT-5 for well-defined tasks.',
parameters: OPENAI_BASE_PARAMETERS,
capabilities: { image: true, streaming: true, tools: true }
},
{
id: 'gpt-5-nano',
name: 'GPT-5 Nano',
provider: 'openai',
description: 'Fastest, most cost-efficient version of GPT-5 for summarization and classifica
parameters: OPENAI_BASE_PARAMETERS,
capabilities: { image: true, streaming: true, tools: true }
},
{
id: 'gpt-4.1',
name: 'GPT-4.1',
provider: 'openai',
description: 'Smartest non-reasoning model - excels at instruction following and tool callin
parameters: OPENAI_BASE_PARAMETERS,
capabilities: { image: true, streaming: true, tools: true }
},
{
id: 'o3-pro',
name: 'o3 Pro',
provider: 'openai',
description: 'Highest reasoning model with reinforcement learning - uses more compute for co
parameters: OPENAI_REASONING_PARAMETERS,
capabilities: { thinking: true, streaming: true, image: true, tools: true }
},
{
id: 'o3-deep-research',
name: 'o3 Deep Research',
provider: 'openai',
description: 'Most advanced deep research model for complex multi-step research with interne
parameters: OPENAI_REASONING_PARAMETERS,
capabilities: { thinking: true, streaming: true, image: true, tools: true }
},
{
id: 'o4-mini-deep-research',
name: 'o4-mini Deep Research',
provider: 'openai',
description: 'Faster, affordable deep research model for complex multi-step research tasks w
parameters: OPENAI_REASONING_PARAMETERS,
capabilities: { thinking: true, streaming: true, image: true, tools: true }
},
{
id: 'o3',
name: 'o3',
provider: 'openai',
description: 'Advanced reasoning model for complex multi-step tasks.',
parameters: OPENAI_REASONING_PARAMETERS,
capabilities: { thinking: true, streaming: true, image: true, tools: true }
},
{
id: 'o3-mini',
name: 'o3-mini',
provider: 'openai',
description: 'Fast, flexible reasoning for coding and problem-solving.',
parameters: OPENAI_REASONING_PARAMETERS,
capabilities: { thinking: true, streaming: true }
},
{
id: 'o1',
name: 'o1',
provider: 'openai',
description: 'Original reasoning model for complex analysis.',
parameters: OPENAI_REASONING_PARAMETERS,
capabilities: { thinking: true, streaming: true, image: true }
},
{
id: 'o1-mini',
name: 'o1-mini',
provider: 'openai',
description: 'Efficient reasoning model for coding and math.',
parameters: OPENAI_REASONING_PARAMETERS,
capabilities: { thinking: true, streaming: true }
},
{
id: 'o1-pro',
name: 'o1-pro',
provider: 'openai',
description: 'Enhanced compute for hard problems.',
parameters: OPENAI_REASONING_PARAMETERS,
capabilities: { thinking: true, streaming: true, image: true }
},
{
id: 'gpt-4o',
name: 'GPT-4o',
provider: 'openai',
description: 'High-intelligence model with vision, fast and affordable.',
parameters: OPENAI_BASE_PARAMETERS,
capabilities: { image: true, streaming: true, tools: true }
},
{
id: 'gpt-4o-mini',
name: 'GPT-4o Mini',
provider: 'openai',
description: 'Small, fast, and affordable. Great for most tasks.',
parameters: OPENAI_BASE_PARAMETERS,
capabilities: { image: true, streaming: true, tools: true }
}
];
// =============================================================================
// Custom Provider Parameters (OpenAI-compatible)
// =============================================================================
export const ALL_MODEL_CONFIGS: ModelConfig[] = [
...GROQ_MODEL_CONFIGS,
...OPENAI_MODEL_CONFIGS,
...ANTHROPIC_MODEL_CONFIGS,
...CUSTOM_MODEL_CONFIGS,
];
// Anthropic Chat Provider
// Uses Anthropic's Messages API which has a different format than OpenAI
import type { ChatProvider, ChatMessage, ChatCompletionOptions, ChatCompletionResponse, StreamCh
}
// Convert OpenAI-style messages to Anthropic format
private convertMessages(messages: ChatMessage[], systemPrompt?: string): { system?: string; me
const anthropicMessages: AnthropicMessage[] = [];
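Anthropic's Messages API takes the system prompt as a top-level field rather than as a `system`-role message, so conversion means hoisting system messages out of the array. A simplified sketch of that hoisting, with stand-in types instead of the real `ChatMessage`/`AnthropicMessage` shapes (which also carry image content blocks):

```typescript
// Sketch: hoist system-role messages into Anthropic's top-level system field.
// Msg is a simplified stand-in for the real ChatMessage/AnthropicMessage types.
interface Msg { role: string; content: string }

function convertMessages(messages: Msg[], systemPrompt?: string): { system?: string; messages: Msg[] } {
  const systemParts: string[] = systemPrompt ? [systemPrompt] : [];
  const anthropicMessages: Msg[] = [];
  for (const m of messages) {
    if (m.role === "system") {
      systemParts.push(m.content); // goes to the top-level system field
    } else {
      anthropicMessages.push({ role: m.role, content: m.content });
    }
  }
  return {
    system: systemParts.length ? systemParts.join("\n") : undefined,
    messages: anthropicMessages,
  };
}
```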
**DON'T support imageDetail:**
- `openai/gpt-oss-120b`
- `openai/gpt-oss-20b`
- `llama-3.3-70b-versatile`
- `qwen/qwen3-32b`
1. User options include imageDetail: 'auto'
2. Transform: imageDetail: 'auto' (no mapping needed)
3. Check: modelSupportsParameter('openai/gpt-oss-120b', 'imageDetail') → false
4. Result: image_detail NOT added to request ✓
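Steps 1-4 above can be sketched as a support check plus a filtered request builder. The support table below is built from the "DON'T support imageDetail" list; `buildRequestParams` and the table's shape are illustrative, not the codebase's actual `modelSupportsParameter` implementation:

```typescript
// Sketch of steps 1-4: omit image_detail for models that don't support it.
// The deny-list is taken from the list above; everything else is illustrative.
const NO_IMAGE_DETAIL = new Set([
  "openai/gpt-oss-120b",
  "openai/gpt-oss-20b",
  "llama-3.3-70b-versatile",
  "qwen/qwen3-32b",
]);

function modelSupportsParameter(model: string, param: string): boolean {
  if (param === "imageDetail") return !NO_IMAGE_DETAIL.has(model);
  return true; // assume supported unless known otherwise
}

function buildRequestParams(model: string, imageDetail?: string): Record<string, unknown> {
  const params: Record<string, unknown> = { model };
  // Step 3: only include image_detail when the model supports it.
  if (imageDetail && modelSupportsParameter(model, "imageDetail")) {
    params.image_detail = imageDetail;
  }
  return params;
}
```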
Different AI models/providers have incompatible parameter requirements, causing a "whack-a-mole" problem:
- **Qwen3-32B**: Requires `reasoning_effort` to be `'none'` or `'default'`
- **OpenAI o1/o3**: Accepts `'low'`, `'medium'`, `'high'`
- **GPT-OSS models**: Don't support `image_detail` at all
- **Vision models**: Support `image_detail` parameter
this.apiKeys = Object.assign({
groq: '',
openai: '',
anthropic: ''
}, keys);
this.apiKeys = {
groq: '',
openai: '',
anthropic: ''
};

Vals (94 results)
- Ronsykes/hello-realtime (Public): Sample app for the OpenAI Realtime API
- Ronsykes/hello-realtime-rs (Public): Sample app for the OpenAI Realtime API
- dcm31/turso_events_estimator (Public): Estimate OpenAI calls from Turso GitHub events
- fancylamp/hello-realtime (Public): Sample app for the OpenAI Realtime API
- peterqliu/PineconeIndex (Public): Vector db's on Pinecone, with OpenAI embeddings

Docs (11 results)

No docs found