FindFraudTrendsUsingGPT
@mjweaver01
FraudTrendsGPT: Generate real-time Fraud Trend Reports using GPT-4o.
Goal: This agent finds trends in merchant transaction reports and produces a real-time report, complete with relevant data tables and Mermaid charts.
Semi-Data-Agnostic: The agent is semi-agnostic to the data provided, meaning it will produce a report as long as the data is shaped similarly, or the prompt is updated to support the new data shape.
Agent Reusability: This agent can also be rewritten for any number of use cases. With some variable renaming and a rewritten prompt, it should produce accurate data-analytic reports for any data provided.
HTTP
import { Hono } from "npm:hono";
import { OpenAI } from "npm:openai";
const trendGPT = async (data, onData) => {
const openai = new OpenAI();
const chatStream = await openai.chat.completions.create({
messages: [
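The snippet above is cut off mid-call; a minimal sketch of what such a streaming helper can look like, assuming the standard npm:openai chat-completions streaming shape (chunks expose `choices[0].delta.content`). The client is injected as a parameter here so it can be swapped for a real `new OpenAI()`; the system prompt wording is illustrative, not the val's actual code.

```typescript
// Shape of a streamed chat-completion chunk (assumption: OpenAI SDK layout).
type Chunk = { choices: { delta: { content?: string } }[] };
type ChatClient = {
  chat: {
    completions: { create: (opts: object) => Promise<AsyncIterable<Chunk>> };
  };
};

export async function trendGPT(
  client: ChatClient,
  data: string,
  onData: (delta: string) => void,
): Promise<string> {
  // Ask the model for a trend report over the provided records, streaming.
  const stream = await client.chat.completions.create({
    model: "gpt-4o",
    stream: true,
    messages: [
      { role: "system", content: "Summarize trends in these transaction reports." },
      { role: "user", content: data },
    ],
  });
  let report = "";
  for await (const part of stream) {
    const delta = part.choices[0]?.delta?.content ?? "";
    report += delta;
    onData(delta); // push each token to the caller as it arrives
  }
  return report;
}
```

With the real SDK you would call `trendGPT(new OpenAI(), data, onData)`; injecting the client also makes the streaming loop testable with a stub.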
poembuilder3
@stevekrouse
@jsxImportSource npm:hono@3/jsx
HTTP (deprecated)
/** @jsxImportSource npm:hono@3/jsx */
import { OpenAI } from "https://esm.town/v/std/openai?v=2";
import { sqlite } from "https://esm.town/v/std/sqlite?v=5";
import { Hono } from "npm:hono@3";
purpleKangaroo
@willthereader
ChatGPT Implemented in Val Town: demonstrates how to use assistants and threads with the OpenAI SDK, and how to stream the response with Server-Sent Events.
HTTP (deprecated)
# ChatGPT Implemented in Val Town
Demonstrates how to use assistants and threads with the OpenAI SDK and how to stream the response with Server-Sent Events.
<p align=center>
/** @jsxImportSource https://esm.sh/react */
import OpenAI from "npm:openai";
import { renderToString } from "npm:react-dom/server";
// This uses my personal API key; you'll need to provide your own if
// you fork this. We'll be adding support to the std/openai lib soon!
const openai = new OpenAI();
import { Hono } from "npm:hono@3";
body: msgDiv.textContent,
// Setup the SSE connection and stream back the response. OpenAI handles determining
// which message is the correct response based on what was last read from the
app.get("/", async (c) => {
const thread = await openai.beta.threads.create();
const assistant = await openai.beta.assistants.create({
name: "",
let message = await c.req.text();
await openai.beta.threads.messages.create(
c.req.query("threadId"),
"data: " + JSON.stringify(str) + "\n\n",
const run = openai.beta.threads.runs.stream(threadId, {
assistant_id: assistantId,
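The `"data: " + JSON.stringify(str) + "\n\n"` line in the snippet above is the core of the Server-Sent Events framing. As a tiny runnable sketch of that framing (the helper name is ours, not the val's):

```typescript
// Encode one payload as an SSE frame: a `data:` line with the JSON-encoded
// payload, terminated by a blank line so the browser's EventSource fires
// one message event per frame.
export const sseFrame = (payload: unknown): string =>
  "data: " + JSON.stringify(payload) + "\n\n";
```

Each streamed delta from the OpenAI run would be passed through this before being written to the response body.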
FindTrendsUsingGPT
@weaverwhale
An interactive, runnable TypeScript val by weaverwhale
HTTP (deprecated)
import { Hono } from "npm:hono";
import { OpenAI } from "npm:openai";
const trendGPT = async (data, onData) => {
const openai = new OpenAI();
// Start the OpenAI stream
const chatStream = await openai.chat.completions.create({
messages: [
VALLE
@tgrv
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it:
* Fork this val to your own profile.
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId.
* If you want to use OpenAI models, you need to set the OPENAI_API_KEY env var.
* If you want to use Anthropic models, you need to set the ANTHROPIC_API_KEY env var.
* Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-v
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environ
webgen
@triptych
To-dos:
* Spruce up styles a bit
* Write this README
* ~Add a cache!~
* ~Try moving the style tag to the bottom by prompting so content appears immediately and then becomes styled~ didn't work b/c CSS parsing isn't progressive
* Need more prompting to get the model not to generate placeholder-y content
* Better root URL page / index page with links to some good sample generations
HTTP (deprecated)
import { blob } from "https://esm.town/v/std/blob?v=12";
import OpenAI from "npm:openai";
const openai = new OpenAI();
const getCacheKey = (url: string): string => {
let pageResult = "";
// 2. Do one OpenAI inference to expand that URL to a longer page description
const pageDescriptionStream = await openai.chat.completions.create({
model: "gpt-4o",
// 3. Generate the page
const stream = await openai.chat.completions.create({
model: "gpt-4o",
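The `getCacheKey` helper is cut off in the snippet above. A hypothetical sketch of what such a helper can look like (the keying scheme here is an assumption, not the val's actual code): derive a stable blob key from the request URL so repeat visits hit the cache instead of re-running the model.

```typescript
// Build a deterministic cache key from a request URL. Keying on path + query
// (and dropping the host) is an assumption made here so forks share keys.
export const getCacheKey = (url: string): string => {
  const u = new URL(url);
  return "webgen-" + encodeURIComponent(u.pathname + u.search);
};
```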
webgen
@stevekrouse
Made on a Val Town livestream. This project is a kind of mini tribute to Websim.
To-dos:
* Spruce up styles a bit
* Write this README
* ~Add a cache!~
* ~Try moving the style tag to the bottom by prompting so content appears immediately and then becomes styled~ didn't work b/c CSS parsing isn't progressive
* Need more prompting to get the model not to generate placeholder-y content
* Better root URL page / index page with links to some good sample generations
HTTP (deprecated)
import { blob } from "https://esm.town/v/std/blob?v=12";
import OpenAI from "npm:openai";
const openai = new OpenAI();
const getCacheKey = (url: string): string => {
let pageResult = "";
// 2. Do one OpenAI inference to expand that URL to a longer page description
const pageDescriptionStream = await openai.chat.completions.create({
model: "gpt-4o",
// 3. Generate the page
const stream = await openai.chat.completions.create({
model: "gpt-4o",
weatherGPT
@seflless
If you fork this, you'll need to set OPENAI_API_KEY in your Val Town Secrets.
HTTP (deprecated)
If you fork this, you'll need to set `OPENAI_API_KEY` in your [Val Town Secrets](https://www.val.town/settings/secrets).
return !data ? <p>Loading...</p> : <p>{data}</p>;
export default async function weatherGPT(req: Request) {
const { OpenAI } = await import("npm:openai");
if (new URL(req.url).pathname === "/data") {
return Response.json({
calories
@stevekrouse
Calorie Count via Photo: uploads your photo to ChatGPT's new vision model to automatically categorize the food and estimate the calories.
HTTP (deprecated)
import { fileToDataURL } from "https://esm.town/v/stevekrouse/fileToDataURL";
import { modifyImage } from "https://esm.town/v/stevekrouse/modifyImage";
import { chat } from "https://esm.town/v/stevekrouse/openai";
import { Hono } from "npm:hono@3";
function esmTown(url) {
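The val above sends the uploaded photo to the vision model as a data URL. A sketch of the request shape, assuming the standard OpenAI chat-completions multimodal format (`image_url` content parts); the helper name and prompt text are illustrative, and the data URL would come from `fileToDataURL`.

```typescript
// Build the multimodal message array for a calorie-estimation request:
// one text part with the instruction, one image part with the photo.
export function calorieMessages(dataUrl: string) {
  return [
    {
      role: "user" as const,
      content: [
        {
          type: "text" as const,
          text: "Identify the food in this photo and estimate total calories.",
        },
        { type: "image_url" as const, image_url: { url: dataUrl } },
      ],
    },
  ];
}
```

This array would be passed as `messages` to a chat-completions call against a vision-capable model.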
webgen
@thesephist
Made on a Val Town livestream. This project is a kind of mini tribute to Websim.
To-dos:
* Spruce up styles a bit
* Write this README
* ~Add a cache!~
* ~Try moving the style tag to the bottom by prompting so content appears immediately and then becomes styled~ didn't work b/c CSS parsing isn't progressive
* Need more prompting to get the model not to generate placeholder-y content
* Better root URL page / index page with links to some good sample generations
HTTP (deprecated)
import { blob } from "https://esm.town/v/std/blob?v=12";
import OpenAI from "npm:openai";
const openai = new OpenAI();
const getCacheKey = (url: string): string => {
let pageResult = "";
// 2. Do one OpenAI inference to expand that URL to a longer page description
const pageDescriptionStream = await openai.chat.completions.create({
model: "gpt-4o",
// 3. Generate the page
const stream = await openai.chat.completions.create({
model: "gpt-4o",
umap
@ejfox
UMAP Dimensionality Reduction API: a high-performance dimensionality reduction microservice using UMAP (Uniform Manifold Approximation and Projection). It provides an efficient way to reduce high-dimensional data to 2D or 3D representations, making it easier to visualize and analyze complex datasets.
When to Use This Service:
* Visualizing high-dimensional data in 2D or 3D space
* Reducing dimensionality of large datasets for machine learning tasks
* Exploring relationships and clusters in complex data
* Preprocessing step for other machine learning algorithms
Common Use Cases:
* Visualizing word embeddings in a scatterplot
* Exploring customer segmentation in marketing analytics
* Visualizing image embeddings in computer vision tasks
HTTP
<div class="example">
<h3>Example with OpenAI Embeddings:</h3>
<p>This example shows how to use the UMAP service with OpenAI embeddings:</p>
<pre>
// First, generate embeddings using OpenAI API
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
async function getEmbeddings(texts) {
const response = await openai.embeddings.create({
model: "text-embedding-ada-002",
// Then, use these embeddings with the UMAP service
const texts = ["Hello world", "OpenAI is amazing", "UMAP reduces dimensions"];
const embeddings = await getEmbeddings(texts);
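The example above fetches embeddings and then feeds them to the UMAP service. The API call itself needs an OPENAI_API_KEY, so only the pure extraction step between the two is shown runnable here, assuming the embeddings response shape of the OpenAI SDK (`res.data[i].embedding`).

```typescript
// Response shape returned by openai.embeddings.create (assumption: SDK layout).
type EmbeddingResponse = { data: { embedding: number[] }[] };

// Pull the raw vectors out of the API response, preserving input order, so
// they can be POSTed to the UMAP service as a plain number[][] matrix.
export const toMatrix = (res: EmbeddingResponse): number[][] =>
  res.data.map((d) => d.embedding);

// Hypothetical usage with a real client:
//   const res = await openai.embeddings.create({
//     model: "text-embedding-ada-002",
//     input: texts,
//   });
//   const matrix = toMatrix(res);
```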
chatGPTPlugin
@stevekrouse
ChatGPT Plugin for Val Town: run code on Val Town from ChatGPT.
Usage: I haven't been able to get it to do very useful things yet. It certainly can evaluate simple JS code. It would be awesome if it knew how to use other APIs and make fetch calls to them, but it has been failing at that.
Limitations: This plugin currently only has unauthenticated access to POST /v1/eval, which basically means that all it can do is evaluate JavaScript or TypeScript. In theory it could refer to any existing vals in Val Town, but it wouldn't know about those unless you told it.
Future directions: Once we have more robust APIs to search for existing vals, this plugin could be WAY more valuable! In theory GPT-4 could first search for vals that do a certain task, and if it finds one, write code based on that val. In practice, that might require too many steps for poor GPT; we might need some sort of agent or langchain setup if we wanted that behavior. Adding authentication could also enable it to make requests using your secrets and private vals and create new vals for you, though I am dubious that this would actually be practically useful.
Installation:
1. Select GPT-4 (requires ChatGPT Plus)
2. Click "No plugins enabled"
3. Click "Install an unverified plugin" or "Develop your own plugin" (I'm not sure of the difference)
4. Paste in this val's express endpoint: https://stevekrouse-chatGPTPlugin.express.val.run
5. Click through the prompts until it's installed
Express
![](https://i.imgur.com/lLUAcVc.png)
d make `fetch` calls to them, but it has been [failing at that](https://chat.openai.com/share/428183eb-8e6d-4008-b295-f3b0ef2
## Limitations
import { fetchJSON } from "https://esm.town/v/stevekrouse/fetchJSON";
import { openaiOpenAPI } from "https://esm.town/v/stevekrouse/openaiOpenAPI";
// https://stevekrouse-chatgptplugin.express.val.run/.well-known/ai-plugin.json
// only POST /v1/eval for now
res.send(openaiOpenAPI);
else if (req.path === "/v1/eval") {
weatherGPT
@abar04
Cron
import { email } from "https://esm.town/v/std/email?v=11";
import { OpenAI } from "npm:openai";
let location = "Perth WA";
).then(r => r.json());
const openai = new OpenAI();
let chatCompletion = await openai.chat.completions.create({
messages: [{
VALLE
@peterzakin
Fork it and authenticate with your Val Town API token as the password. Needs an OPENAI_API_KEY env var to be set, and change the variables under "Set these to your own". https://x.com/JanPaul123/status/1812957150559211918
HTTP
it and authenticate with your Val Town API token as the password. Needs an `OPENAI_API_KEY` env var to be set, and change th
https://x.com/JanPaul123/status/1812957150559211918
xenophobicAquamarineHorse
@davitchanturia
VALL-E: LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it:
* Fork this val to your own profile.
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId.
* If you want to use OpenAI models, you need to set the OPENAI_API_KEY env var.
* If you want to use Anthropic models, you need to set the ANTHROPIC_API_KEY env var.
* Create a Val Town API token, open the browser preview of this val, and use the API token as the password to log in.
HTTP (deprecated)
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-v
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environ