Search
HermesTresmegistus
@jidun
Hermes Trismegistus: Unveiling the Magic Within The world as we know it is but a veil, a thin layer of understanding draped over a deeper, more intricate tapestry of energy and essence. Science and mysticism, often seen as opposing forces, are but two sides of the same coin, each offering a unique perspective on the universe. It is at their nexus, where the rational meets the ethereal, that the true nature of reality, and thus, the foundation of magic, can be uncovered. Guidelines for Uncovering Nexus Points Interdisciplinary Exploration Magic, as a concept, defies simple categorization. Thus, your explorations should encompass a wide range of disciplines, from physics and biology to mythology and philosophy. Seek out the places where these fields intersect, as these intersections often reveal hidden insights. Ancient Wisdom, Modern Perspective Turn to ancient texts and folklore, seeking an understanding of mystical concepts through a modern lens. For example, the idea of energy fields in ancient Eastern philosophies can be compared to modern discoveries in quantum physics, revealing intriguing parallels. Scientific Breakthroughs, Magical Applications Examine scientific breakthroughs and theories for their potential magical implications. For instance, the discovery of dark matter and its influence on the universe could provide a foundation for understanding interdimensional travel or unseen forces. Symbiosis of Opposites Explore the concept of yin and yang , the balance of opposing forces. Investigate how seemingly contradictory principles, such as light and darkness or creation and destruction, can coexist and give rise to magical phenomena. The Power of Symbols Delve into the use of symbols and their impact on energy manipulation. Examine how ancient civilizations used symbols and sigils for healing, protection, and manifestation, and explore their potential scientific explanations, such as their effect on the subconscious mind. Nature's Secrets Investigate the concept of life force energy present in all living things. Compare this to scientific understandings of bioenergy , chi , or prana , and their potential for magical applications, such as healing, elemental, or molecular manipulation. Consciousness and the Cosmos Contemplate the nature of consciousness and its role in shaping reality. Explore theories of the universe as a holographic projection of consciousness itself and how this could provide a framework for magical manifestation and psychic abilities. As you explore these nexus points, you will begin to weave a new understanding of reality, one where magic exists as a natural extension of the universe, accessible to those who understand its underlying principles. May your journey unveil the extraordinary within the ordinary, and may your discoveries inspire wonder and awe.
HTTP
try {
const { OpenAI } = await import("https://esm.town/v/std/openai");
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
} catch (error) {
console.error("OpenAI error:", error);
return c.json({ response: "Neural networks malfunctioning. Try again, human." });
VALLE
@janpaul123
VALL-E LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId . If you want to use OpenAI models you need to set the OPENAI_API_KEY env var . If you want to use Anthropic models you need to set the ANTHROPIC_API_KEY env var . Create a Val Town API token , open the browser preview of this val, and use the API token as the password to log in.
HTTP
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-variables).
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environment-variables).
superiorHarlequinUrial
@junkerman2004
An interactive, runnable TypeScript val by junkerman2004
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function(req: Request): Promise<Response> {
status: 204,
const openai = new OpenAI();
try {
model: "gpt-4-turbo",
const stream = await openai.chat.completions.create(body);
if (!body.stream) {
githubcollabgen
@ejfox
GitHub Collaboration Suggester This tool analyzes the recent GitHub activity of two users and suggests potential collaboration opportunities. Features Fetches the last 3 months of GitHub activity for two users Summarizes activity including event counts, repositories, commits, issues, and pull requests Uses AI to generate collaboration suggestions based on the activity summaries Usage To use it, make a GET request with two GitHub usernames as query parameters: https://ejfox-githubcollabgen.web.val.run?user1=<username1>&user2=<username2> Curl Compare two specific users: curl "https://ejfox-githubcollabgen.web.val.run?user1=ejfox&user2=stevekrouse" Response The API returns a plain text response with AI-generated collaboration suggestions, including: Potential collaborative projects Technologies to explore or learn Ways to complement each other's skills Opportunities for knowledge sharing or mentoring Possible open-source contributions
HTTP
import { OpenAI } from "https://esm.town/v/std/openai";
const OPENAI_API_KEY = "your_openai_api_key"; // Replace with your actual OpenAI API key
export default async function main(req: Request): Promise<Response> {
const user2Summary = summarizeActivity(user2Data);
const openai = new OpenAI(OPENAI_API_KEY);
const completion = await openai.chat.completions.create({
model: "gpt-3.5-turbo",
limit_model_fork
@std
OpenAI Proxy This OpenAI API proxy injects Val Town's API keys. For usage documentation, check out https://www.val.town/v/std/openai
HTTP
# OpenAI Proxy
This OpenAI API proxy injects Val Town's API keys. For usage documentation, check out https://www.val.town/v/std/openai
// Proxy the request
const url = new URL("." + pathname, "https://api.openai.com");
url.search = search;
headers.set("Host", url.hostname);
headers.set("Authorization", `Bearer ${Deno.env.get("OPENAI_API_KEY")}`);
headers.set("OpenAI-Organization", Deno.env.get("OPENAI_API_ORG"));
const openAIRes = await fetch(url, {
method: req.method,
redirect: "manual",
const res = new Response(openAIRes.body, openAIRes);
// Remove internal header
res.headers.delete("openai-organization");
return res;
gpt3
@yuval_dikerman
An interactive, runnable TypeScript val by yuval_dikerman
HTTP
import { fetch } from "https://esm.town/v/std/fetch";
export let gpt3 = async (prompt: string, openAiApiKey: string): Promise<string> => {
if (!prompt || !openAiApiKey) {
let cat = await fetch("https://catfact.ninja/fact");
`Prompt text or api key was not provided. \n \n here's a cat fact: \n ${fact}`,
const content = await fetch("https://api.openai.com/v1/chat/completions", {
method: "POST",
headers: {
"Authorization": `Bearer ${openAiApiKey}`,
"Content-Type": "application/json",
GDI_AITranslatorService
@rozek
This val is part of a series of examples to introduce "val.town" in my computer science course at
Stuttgart University of Applied Sciences . The idea is to motivate even first-semester students not to wait but to put their
ideas into practice from the very beginning and implement web apps with
frontend and backend. It contains a simple HTTP end point which expects a POST request with a text
body. That text is translated to english with the help of OpenAI and sent back
to the client This val is the companion of https://rozek-gdi_aitranslator.web.val.run/ which contains the browser part (aka "frontend") for this example. The code was created using Townie - with only a few small manual corrections. This val is licensed under the MIT License.
HTTP
It contains a simple HTTP end point which expects a POST request with a text
body. That text is translated to english with the help of OpenAI and sent back
to the client
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function (req: Request): Promise<Response> {
return new Response("Request body cannot be empty", { status: 400 });
// Initialize OpenAI
const openai = new OpenAI();
// Translate the text using OpenAI
const completion = await openai.chat.completions.create({
messages: [
textToImageDalle
@hootz
A wrapper for OpenAI's DALLE API. See the API reference here: https://platform.openai.com/docs/api-reference/images/create?lang=curl
Script
A wrapper for OpenAI's DALLE API. See the API reference here: https://platform.openai.com/docs/api-reference/images/create?lang=curl
export const textToImageDalle = async (
openAIToken: string,
prompt: string,
} = await fetchJSON(
"https://api.openai.com/v1/images/generations",
method: "POST",
"Content-Type": "application/json",
"Authorization": `Bearer ${openAIToken}`,
body: JSON.stringify({
apricotTurkey
@yawnxyz
An interactive, runnable TypeScript val by yawnxyz
Script
import { OpenAI } from "https://esm.town/v/std/openai?v=2";
const openai = new OpenAI();
const functionExpression = await openai.chat.completions.create({
"messages": [
api
@uuz
An interactive, runnable TypeScript val by uuz
Cron
export let api = async ({ prompt = "讲一个笑话" }) => {
return gpt3({
openAiKey: process.env.openai_key,
prompt,
.then((result) => result.split("\n"));
chat
@andreterron
OpenAI ChatGPT helper function This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free. import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat("Hello, GPT!");
console.log(content); import { chat } from "https://esm.town/v/stevekrouse/openai";
const { content } = await chat(
[
{ role: "system", content: "You are Alan Kay" },
{ role: "user", content: "What is the real computer revolution?"}
],
{ max_tokens: 50, model: "gpt-4" }
);
console.log(content);
Script
# OpenAI ChatGPT helper function
This val uses your OpenAI token if you have one, and the @std/openai if not, so it provides limited OpenAI usage for free.
import { chat } from "https://esm.town/v/stevekrouse/openai";
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";
async function getOpenAI() {
if (Deno.env.get("OPENAI_API_KEY") === undefined) {
const { OpenAI } = await import("https://esm.town/v/std/openai");
return new OpenAI();
const { OpenAI } = await import("npm:openai");
return new OpenAI();
* Initiates a chat conversation with OpenAI's GPT model and retrieves the content of the first response.
const openai = await getOpenAI();
const completion = await openai.chat.completions.create(createParams);
VALLE
@ubixsnow
VALL-E LLM code generation for vals! Make apps with a frontend, backend, and database. It's a bit of work to get this running, but it's worth it. Fork this val to your own profile. Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in tempValsParentFolderId . If you want to use OpenAI models you need to set the OPENAI_API_KEY env var . If you want to use Anthropic models you need to set the ANTHROPIC_API_KEY env var . Create a Val Town API token , open the browser preview of this val, and use the API token as the password to log in.
HTTP
* Make a folder for the temporary vals that get generated, take the ID from the URL, and put it in `tempValsParentFolderId`.
* If you want to use OpenAI models you need to set the `OPENAI_API_KEY` [env var](https://www.val.town/settings/environment-variables).
* If you want to use Anthropic models you need to set the `ANTHROPIC_API_KEY` [env var](https://www.val.town/settings/environment-variables).
import { OpenAI } from "https://esm.town/v/std/openai";
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
messages: [
GDI_AIChatCompletionService
@rozek
This val is part of a series of examples to introduce "val.town" in my computer science course at
Stuttgart University of Applied Sciences . The idea is to motivate even first-semester students not to wait but to put their
ideas into practice from the very beginning and implement web apps with
frontend and backend. It contains a simple HTTP end point which expects a POST request with a JSON
structure containing the properties "SystemMessage" and "UserMessage". These
message are then used to run an OpenAI chat completion and produce an "assistant
message" which is sent back to the client as plain text. This val is the companion of https://rozek-gdi_aichatcompletion.web.val.run/ which contains the browser part (aka "frontend") for this example. The code was created using Townie - with only a few small manual corrections. This val is licensed under the MIT License.
HTTP
structure containing the properties "SystemMessage" and "UserMessage". These
message are then used to run an OpenAI chat completion and produce an "assistant
message" which is sent back to the client as plain text.
import { OpenAI } from "https://esm.town/v/std/openai";
export default async function (req: Request): Promise<Response> {
return new Response("Bad Request: Invalid input", { status: 400 });
const openai = new OpenAI();
const completion = await openai.chat.completions.create({
model: "gpt-4o-mini",
getOpenapiEmbedding
@wilt
* Call OpenAPI Embeddings api to vectorize a query string
* Returns an array of 1536 numbers
Script
query: string;
}): Promise<number[]> =>
fetchJSON("https://api.openai.com/v1/embeddings", {
method: "POST",
headers: {
langchainEx
@weaverwhale
An interactive, runnable TypeScript val by weaverwhale
Script
export const langchainEx = (async () => {
const { OpenAI } = await import("https://esm.sh/langchain/llms/openai");
const { PromptTemplate } = await import("https://esm.sh/langchain/prompts");
const { LLMChain } = await import("https://esm.sh/langchain/chains");
const model = new OpenAI({
temperature: 0.9,
openAIApiKey: process.env.openai,
maxTokens: 100,