OpenAI Streaming - Assistant and Threads

Live demo: xuybin-openaistreaming.web.val.run

An example of using OpenAI to stream back a chat with an assistant. This example sends two messages to the assistant and streams the responses back as they arrive.

Example response:

user > What should I build today?
................
assistant > Here are a few fun Val ideas you could build on Val Town:
1. **Random Joke Generator:** Fetch a random joke from an API and display it.
2. **Daily Weather Update:** Pull weather data for your location using an API and create a daily summary.
3. **Mini Todo List:** Create a simple to-do list app with add, edit, and delete functionalities.
4. **Chuck Norris Facts:** Display a random Chuck Norris fact sourced from an API.
5. **Motivational Quote of the Day:** Fetch and display a random motivational quote each day.
Which one sounds interesting to you?

user > Cool idea, can you make it even cooler?
...................
assistant > Sure, let's add some extra flair to make it even cooler! How about creating a **Motivational Quote of the Day** app with these features:
1. **Random Color Theme:** Each day, the background color/theme changes randomly.
2. **Quote Sharing:** Add an option to share the quote on social media.
3. **Daily Notifications:** Send a daily notification with the quote of the day.
4. **User Preferences:** Allow users to choose categories (e.g., success, happiness, perseverance) for the quotes they receive.
Would you like some code snippets or guidance on implementing any of these features?
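A client consumes this endpoint by reading the response body incrementally. Below is a minimal sketch of that read loop; since a real `fetch` of the val's URL needs the network, the response body is simulated here with a local `ReadableStream` carrying the same kind of chunks the val writes (the chunk contents are stand-ins, not actual assistant output).

```typescript
// Sketch: reading a streamed text body chunk by chunk.
// In a real client, `body` would be `(await fetch("https://xuybin-openaistreaming.web.val.run")).body`.
const encoder = new TextEncoder();
const body = new ReadableStream<Uint8Array>({
  start(controller) {
    // Simulated chunks, shaped like the val's output.
    controller.enqueue(encoder.encode("\nuser > What should I build today?\n"));
    controller.enqueue(encoder.encode("...."));
    controller.enqueue(encoder.encode("\nassistant > Here are a few ideas."));
    controller.close();
  },
});

const decoder = new TextDecoder();
const reader = body.getReader();
let transcript = "";
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // `stream: true` handles multi-byte characters split across chunks.
  transcript += decoder.decode(value, { stream: true });
}
console.log(transcript);
```

Each chunk is appended as soon as it arrives, which is what makes the dots and the token-by-token assistant text appear progressively in the browser.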
import OpenAI from "npm:openai";
import process from "node:process";

const openai = new OpenAI();

// Define our assistant.
const assistant = await openai.beta.assistants.create({
  name: "Val Tutor",
  instructions: `You are a personal Val tutor.
You help brainstorm ideas for fun Vals to write on Val Town.
You only suggest ideas that can be implemented on https://val.town.
You keep your responses brief and to the point.`,
  model: "gpt-4o",
});

// Create a thread to chat in.
const thread = await openai.beta.threads.create();

// These are the messages we'll send to the assistant.
const messages = ["What should I build today?", "Very cool. Can you make it even cooler?"];

export default async function(req: Request): Promise<Response> {
  const url = new URL(req.url);
  if (url.pathname === "/favicon.ico") {
    return new Response(null, { status: 404 });
  }

  let ended = false;
  let interval: ReturnType<typeof setInterval>;

  // Return our streaming response body.
  const body = new ReadableStream({
    async start(controller) {
      const write = (str: string) => {
        if (!ended) controller.enqueue(new TextEncoder().encode(str));
      };
      for (let i = 0; i < messages.length; i++) {
        if (ended) break;
        // Await a Promise that resolves when this run finishes, so the
        // messages are sent to the assistant one at a time, in order.
        await new Promise<void>(async (resolve) => {
          write("\nuser > " + messages[i] + "\n");
          // Print dots to show we're waiting on the assistant.
          interval = setInterval(() => {
            write(".");
          }, 100);
          await openai.beta.threads.messages.create(
            thread.id,
            { role: "user", content: messages[i] },
          );
          openai.beta.threads.runs.stream(thread.id, {
            assistant_id: assistant.id,
            // Let the API decide how much thread history to keep in context.
            truncation_strategy: { type: "auto" },
          })
            .on("textCreated", () => {
              clearInterval(interval);
              write("\nassistant > ");
            })
            .on("textDelta", (textDelta) => write(textDelta.value))
            .on("textDone", () => {
              resolve();
              // Close the stream once the last message has been answered.
              if (i === messages.length - 1 && !ended) {
                controller.close();
              }
            });
        });
      }
    },
    cancel() {
      // Stop chatting if the request is terminated.
      ended = true;
      clearInterval(interval);
    },
  });

  return new Response(body, {
    headers: {
      "Content-Type": "text/event-stream",
    },
  });
}
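The key trick in the loop above is awaiting a Promise whose `resolve` is handed to an event callback, so each iteration blocks until the streamed run reports `textDone`. A minimal sketch of that sequencing pattern in isolation (with `fakeStream` as a stand-in for the OpenAI run stream):

```typescript
// Stand-in for an event-emitting stream: fires its "done" callback asynchronously.
function fakeStream(onDone: () => void) {
  setTimeout(onDone, 10);
}

const order: number[] = [];
for (let i = 0; i < 3; i++) {
  // The loop body blocks here until the callback resolves the Promise,
  // so iterations complete strictly in order despite the async work.
  await new Promise<void>((resolve) => {
    fakeStream(() => {
      order.push(i);
      resolve();
    });
  });
}
console.log(order);
```

Without the awaited Promise, all three "runs" would start concurrently and their completions could interleave; with it, each message gets a full response before the next is sent.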
June 13, 2024