perlinNoiseFabric
@yawnxyz
Full credit goes to @yuruyurau: https://twitter.com/yuruyurau/status/1830677030259839352
HTTP
export default async function (req: Request): Promise<Response> {
const html = `<!DOCTYPE html>
<html>
float t = 0;
float w = 400;
// Function to calculate the vertex positions
float[] a(float x, float y) {
float k = w * noise(t) - x;
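The preview above mixes a TypeScript HTTP handler with the Processing-style sketch it embeds. A minimal self-contained sketch of the same idea, assuming p5.js from a CDN for `noise()` and `point()` (grid spacing and displacement constants here are illustrative, not @yuruyurau's originals):

```ts
// Serve a page that loads p5.js and displaces a grid of points with Perlin noise.
export default async function (req: Request): Promise<Response> {
  const html = `<!DOCTYPE html>
<html>
<body>
<script src="https://cdn.jsdelivr.net/npm/p5@1.9.0/lib/p5.min.js"></script>
<script>
let t = 0;
const w = 400;
function setup() { createCanvas(w, w); stroke(255, 96); }
function draw() {
  background(0);
  for (let x = 0; x < w; x += 5) {
    for (let y = 0; y < w; y += 5) {
      // w * noise(...) - x mirrors the k term in the preview above
      const k = w * noise(t + x / w, y / w) - x;
      point(x + k / 10, y + k / 10);
    }
  }
  t += 0.01;
}
</script>
</body>
</html>`;
  return new Response(html, { headers: { "content-type": "text/html" } });
}
```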
fluentAmberHyena
@deepmojo
@jsxImportSource https://esm.sh/react
HTTP
import { createRoot } from "https://esm.sh/react-dom/client";
function App() {
const [domain, setDomain] = useState("");
</div>
function client() {
createRoot(document.getElementById("root")).render(<App />);
client();
export default async function server(req: Request): Promise<Response> {
const url = new URL(req.url);
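The preview follows Val Town's common single-file React pattern: the same module hydrates in the browser and serves HTML from the server. A minimal sketch of that wiring (the `App` body is a placeholder, not @deepmojo's):

```tsx
/** @jsxImportSource https://esm.sh/react */
import { useState } from "https://esm.sh/react";
import { createRoot } from "https://esm.sh/react-dom/client";

function App() {
  const [domain, setDomain] = useState("");
  return <input value={domain} onChange={(e) => setDomain(e.target.value)} />;
}

function client() {
  createRoot(document.getElementById("root")!).render(<App />);
}
// Only render in the browser; on the server `document` is undefined.
if (typeof document !== "undefined") client();

export default async function server(req: Request): Promise<Response> {
  // On Val Town, import.meta.url resolves to an esm.town URL the browser can load.
  return new Response(
    `<html><body><div id="root"></div>
<script type="module" src="${import.meta.url}"></script></body></html>`,
    { headers: { "content-type": "text/html" } },
  );
}
```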
roastCast
@jamiedubs
Farcaster action that summons a https://glif.app bot to roast the cast, using glif AI magic #farcaster #glif
HTTP
import process from "node:process";
async function fetchCast({ fid, hash }: { fid: string; hash: string }) {
console.log("fetchCast", { fid, hash });
output?: string;
async function runGlif({ inputs }: { inputs: string[] }): Promise<RunGlifResponse> {
const id = "cluu91eda000cv8jd675qsrby" as const;
return json;
async function postCast({ text, replyToFid, replyToHash }: { text: string; replyToFid: string; replyToHash: string }) {
if (!process.env.FARCASTER_SIGNER_ID || !process.env.PINATA_JWT) {
return json;
async function runGlifAndPost({ inputs, cast }: { inputs: string[]; cast: { fid: string; hash: string } }) {
const glifRes = await runGlif({ inputs });
return glifRes;
export default async function(req: Request): Promise<Response> {
// console.log("req", req);
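A hedged sketch of the `runGlif` step in isolation, assuming glif's simple API accepts a POST with the glif `id` and an `inputs` array, and that the token lives in a `GLIF_API_TOKEN` env var (endpoint, token name, and response shape are assumptions, not taken from the original val):

```ts
import process from "node:process";

interface RunGlifResponse {
  output?: string;
  error?: string;
}

async function runGlif({ inputs }: { inputs: string[] }): Promise<RunGlifResponse> {
  const id = "cluu91eda000cv8jd675qsrby" as const; // the roast glif id from the preview above
  const res = await fetch("https://simple-api.glif.app", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${process.env.GLIF_API_TOKEN}`, // assumed env var name
    },
    body: JSON.stringify({ id, inputs }),
  });
  const json = await res.json();
  return json;
}
```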
myApi
@zachkorner
An interactive, runnable TypeScript val by zachkorner
Script
export function myApi(name) {
return "hi " + name;
![yawnxyz avatar](https://images.clerk.dev/oauth_github/img_2NnaHhpxNuH1xWRIRjQNoo16TVc.jpeg)
honoAlpineHtmxDemo
@yawnxyz
This example shows how Hono, Alpine, and Htmx can play together. Alpine is great at reactive client-side updates; Htmx is great at sending and receiving content from the server. The two can be combined for reactive clients plus easy AJAX.
HTTP
output: null,
init() {
// if you don't use an arrow function (this) will refer to document, not alpine
document.addEventListener('htmx:afterRequest', (event) => this.updateOutput(event));
updateOutput(event) {
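A minimal sketch of the combination described above, assuming current CDN builds of Alpine and htmx: htmx posts to the Hono server, and an Alpine component listens for `htmx:afterRequest` to pull the response into its reactive state.

```ts
import { Hono } from "npm:hono";

const app = new Hono();

app.get("/", (c) =>
  c.html(`<!DOCTYPE html>
<html>
<head>
  <script src="https://unpkg.com/htmx.org@1.9.12"></script>
  <script src="https://unpkg.com/alpinejs@3.x.x/dist/cdn.min.js" defer></script>
</head>
<body>
  <div x-data="{
    output: null,
    init() {
      // arrow function so (this) stays bound to the Alpine component
      document.addEventListener('htmx:afterRequest', (event) => this.updateOutput(event));
    },
    updateOutput(event) { this.output = event.detail.xhr.responseText; }
  }">
    <button hx-post="/echo" hx-swap="none">Call server</button>
    <p x-text="output"></p>
  </div>
</body>
</html>`));

app.post("/echo", (c) => c.text("hello from Hono"));

export default app.fetch;
```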
loadPython
@yawnxyz
// Mocking necessary global objects to mimic browser environment in Deno
Script
import pyodideModule from "npm:pyodide/pyodide.js"; // assumed import; the preview uses pyodideModule without showing it
(globalThis as any).navigator = {};
// Function to load Pyodide and Python packages
export async function loadPython(params?: { packages: string[] }) {
  const pyodide = await pyodideModule.loadPyodide();
  for (const pkg of params?.packages ?? []) {
    await pyodide.loadPackage(pkg);
  }
  // Returns a function that can run Python code asynchronously
  return (strings: TemplateStringsArray) => pyodide.runPythonAsync(strings[0]);
}
// Function to run Python code
async function runHelloWorld() {
  const runPython = await loadPython();
  await runPython`print("Hello, World!")`;
}
// Execute the function
runHelloWorld().catch(console.error);
vt_playground_demo
@pomdtr
An interactive, runnable TypeScript val by pomdtr
HTTP
</body>
</html>
export default function() {
return new Response(body, {
headers: {
perplexityAPI
@nbbaier
Perplexity API Wrapper

This val exports a function `pplx` that provides an interface to the Perplexity AI chat completions API. You'll need a Perplexity AI API key; see [their documentation](https://docs.perplexity.ai/) for how to get started with getting a key. By default, the function will use `PERPLEXITY_API_KEY` in your Val Town env variables unless overridden by setting `apiKey` in the function.

## `pplx(options: PplxRequest & { apiKey?: string }): Promise<PplxResponse>`

Generates a model's response for the given chat conversation. Required parameters in `options` are the following (for other parameters, see the Types section below):

- `model` (`string`): the name of the model that will complete your prompt. Possible values: `pplx-7b-chat`, `pplx-70b-chat`, `pplx-7b-online`, `pplx-70b-online`, `llama-2-70b-chat`, `codellama-34b-instruct`, `mistral-7b-instruct`, and `mixtral-8x7b-instruct`.
- `messages` (`Message[]`): a list of messages comprising the conversation so far. A message object must contain `role` (`system`, `user`, or `assistant`) and `content` (a string).

You can also specify an `apiKey` to override the default `Deno.env.get("PERPLEXITY_API_KEY")`. The function returns an object of type `PplxResponse`, see below.

## Types

### `PplxRequest`

Request object sent to Perplexity models.

| Property | Type | Description |
|-------------------|-----------|-------------|
| model | Model | The name of the model that will complete your prompt. Possible values: pplx-7b-chat, pplx-70b-chat, pplx-7b-online, pplx-70b-online, llama-2-70b-chat, codellama-34b-instruct, mistral-7b-instruct, and mixtral-8x7b-instruct. |
| messages | Message[] | A list of messages comprising the conversation so far. |
| max_tokens | number | (Optional) The maximum number of completion tokens returned by the API. The total of max_tokens plus the number of prompt tokens sent in messages must not exceed the context window token limit of the requested model. If left unspecified, the model will generate tokens until it reaches its stop token or the end of its context window. |
| temperature | number | (Optional) The amount of randomness in the response, valued between 0 inclusive and 2 exclusive. Higher values are more random; lower values are more deterministic. Set either temperature or top_p, but not both. |
| top_p | number | (Optional) The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass. Alter either temperature or top_p, but not both. |
| top_k | number | (Optional) The number of tokens to keep for highest top-k filtering, specified as an integer between 0 and 2048 inclusive. If set to 0, top-k filtering is disabled. |
| stream | boolean | (Optional) Flag indicating whether to stream the response. |
| presence_penalty | number | (Optional) A value between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty. |
| frequency_penalty | number | (Optional) A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty. Incompatible with presence_penalty. |

### `PplxResponse`

Response object for pplx models.

| Property | Type | Description |
|----------|---------------------|-------------|
| id | string | The ID of the response. |
| model | Model | The model used for generating the response. |
| object | "chat.completion" | The type of object (always "chat.completion"). |
| created | number | The timestamp indicating when the response was created. |
| choices | CompletionChoices[] | An array of completion choices. |

Please refer to the code for more details and usage examples of these types.

### `Message`

Represents a message in a conversation.

| Property | Type | Description |
|----------|-----------------------------------|-------------|
| role | "system" \| "user" \| "assistant" | The role of the speaker in this turn of conversation. After the (optional) system message, user and assistant roles should alternate, with user then assistant, ending in user. |
| content | string | The contents of the message in this turn of conversation. |

### `CompletionChoices`

The list of completion choices the model generated for the input prompt.

| Property | Type | Description |
|---------------|--------------------|-------------|
| index | number | The index of the choice. |
| finish_reason | "stop" \| "length" | The reason the model stopped generating tokens: stop if the model hit a natural stopping point, or length if the maximum number of tokens specified in the request was reached. |
| message | Message | The message generated by the model. |
| delta | Message | The incrementally streamed next tokens. Only meaningful when stream = true. |
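A minimal usage sketch based on the signature above (the import URL follows Val Town's usual esm.town scheme for this val; the model and prompt are illustrative):

```ts
import { pplx } from "https://esm.town/v/nbbaier/perplexityAPI";

const res = await pplx({
  model: "mistral-7b-instruct",
  messages: [
    { role: "system", content: "Be concise." },
    { role: "user", content: "What is Val Town?" },
  ],
});

console.log(res.choices[0].message.content);
```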
Script
plantIdentifierApp
@mahtabtattur
@jsxImportSource https://esm.sh/react
HTTP
import OpenAI from "https://esm.sh/openai";
function PlantDetailCard({ plant, confidence }) {
// Plant Identification Function
async function identifyPlant(file) {
const openai = new OpenAI({
const response = await openai.chat.completions.create({
function PlantIdentifierApp() {
// Client-side rendering function
function client() {
export default async function server(request: Request): Promise<Response> {
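The identification step above sends the uploaded photo to OpenAI's chat completions API. A hedged sketch of that call, assuming a vision-capable `gpt-4o` model, a base64 data URL for the image, and the standard `OPENAI_API_KEY` env var (the prompt wording is illustrative, not @mahtabtattur's):

```ts
import OpenAI from "https://esm.sh/openai";

async function identifyPlant(imageDataUrl: string) {
  const openai = new OpenAI({ apiKey: Deno.env.get("OPENAI_API_KEY") });
  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Identify this plant and estimate your confidence." },
          { type: "image_url", image_url: { url: imageDataUrl } },
        ],
      },
    ],
  });
  // Return the model's free-text identification
  return response.choices[0].message.content;
}
```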
val_ITkJGreHZz
@dhvanil
An interactive, runnable TypeScript val by dhvanil
HTTP
export async function val_ITkJGreHZz(req) {
try {
// Execute the code directly and capture its result
handleOnboardingUseCases
@rodrigotello
// Triggered when someone tells us what they want to use VT for
Script
import process from "node:process";
// Triggered when someone tells us what they want to use VT for
export async function handleOnboardingUseCases({ auth, handle, useCases }: {
auth: string;
handle: string;
myApi
@desenmeng
An interactive, runnable TypeScript val by desenmeng
Script
export function myApi(name) {
return "hi " + name;
cfwod
@pranjaldotdev
An interactive, runnable TypeScript val by pranjaldotdev
HTTP
import axios from "https://esm.sh/axios"; // latest
const URL = "https://www.crossfit.com/workout/";
export default async function main(req: Request): Promise<Response> {
let wodToday = null;
try {
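A sketch of how this handler might finish, under the assumption that it simply fetches the workout page and guards against failure; the HTML parsing is elided since the original's extraction logic isn't shown:

```ts
import axios from "https://esm.sh/axios";

// Renamed from URL in the preview to avoid shadowing the global URL constructor.
const WOD_URL = "https://www.crossfit.com/workout/";

export default async function main(req: Request): Promise<Response> {
  let wodToday: string | null = null;
  try {
    const res = await axios.get(WOD_URL);
    wodToday = res.data; // raw page HTML; the original presumably extracts today's workout
  } catch (err) {
    return new Response(`Failed to fetch WOD: ${err}`, { status: 502 });
  }
  return new Response(wodToday, { headers: { "content-type": "text/html" } });
}
```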
test
@maxsshole
An interactive, runnable TypeScript val by maxsshole
Script
export async function test() {
console.log("testing");
karate_classes
@stevekrouse
@jsxImportSource https://esm.sh/react@18.2.0
HTTP
function App({ email, isAdmin }) {
function Home() {
function Login() {
function ClassList({ email }) {
function ClassDetails({ email, isAdmin }) {
function Admin() {
function ClassForm() {
function BulkAddUsers() {
function client() {
async function handler(request: Request): Promise<Response> {