Exam6918/aqi (Cron)
An interactive, runnable TypeScript val by Exam6918

stevekrouse/resend (Script)
Send an email via Resend. Requires a Resend API key, which you can get for free.

Usage:

```ts
@stevekrouse.resend({
  from: "onboarding@resend.dev",
  to: "steve@val.town",
  subject: "Hello World",
  html: "Congrats on sending your first email!",
  apiKey: @stevekrouse.secrets.resend,
});
```

christian/cityLookup (Script)
// Cities named Brooklyn

theswiftcoder/dailyDadJoke (Cron)
Forked from stevekrouse/dailyDadJoke

neverstew/getMe (Script)
GET /v1/me — Fetches information about yourself. Requires a secret called `valtownToken`, set to your API token. See the authentication docs to understand how to generate a token.
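The endpoint above can be exercised with a plain fetch. A minimal sketch, assuming the token is supplied as a Bearer header; the `buildMeRequest` helper is hypothetical, added here only for illustration:

```typescript
// Hypothetical helper: build the request for GET /v1/me.
// The caller supplies the API token (e.g. from the valtownToken secret).
function buildMeRequest(valtownToken: string): { url: string; headers: Record<string, string> } {
  return {
    url: "https://api.val.town/v1/me",
    headers: { Authorization: `Bearer ${valtownToken}` },
  };
}

// Usage sketch (not run here):
// const { url, headers } = buildMeRequest(Deno.env.get("valtownToken")!);
// const me = await fetch(url, { headers }).then((r) => r.json());
```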

cas/githubFollowing (Script)
An interactive, runnable TypeScript val by cas

ayush37/umbrellaReminder (Cron)
Forked from stevekrouse/umbrellaReminder

stevekrouse/openaiFineTune (Script)
An interactive, runnable TypeScript val by stevekrouse

stevekrouse/dateme_sqlite (HTTP)
This script is buggy...

```ts
import { setupDatabase } from "https://esm.town/v/stevekrouse/dateme_sqlite";

await setupDatabase();
```

nbbaier/perplexityAPI (Script)
Perplexity API Wrapper

This val exports a function `pplx` that provides an interface to the Perplexity AI chat completions API. You'll need a Perplexity AI API key; see their documentation for how to get started with getting a key. By default, the function will use `PERPLEXITY_API_KEY` in your Val Town env variables unless overridden by setting `apiKey` in the function.

`pplx(options: PplxRequest & { apiKey?: string }): Promise<PplxResponse>`

Generates a model's response for the given chat conversation. Required parameters in `options` are the following (for other parameters, see the Types section below):

- `model` (`string`): the name of the model that will complete your prompt. Possible values: `pplx-7b-chat`, `pplx-70b-chat`, `pplx-7b-online`, `pplx-70b-online`, `llama-2-70b-chat`, `codellama-34b-instruct`, `mistral-7b-instruct`, and `mixtral-8x7b-instruct`.
- `messages` (`Message[]`): a list of messages comprising the conversation so far. A message object must contain `role` (`system`, `user`, or `assistant`) and `content` (a string).

You can also specify an `apiKey` to override the default `Deno.env.get("PERPLEXITY_API_KEY")`. The function returns an object of type `PplxResponse`; see below.

Types

PplxRequest

Request object sent to Perplexity models.

| Property | Type | Description |
|---|---|---|
| model | Model | The name of the model that will complete your prompt. Possible values: `pplx-7b-chat`, `pplx-70b-chat`, `pplx-7b-online`, `pplx-70b-online`, `llama-2-70b-chat`, `codellama-34b-instruct`, `mistral-7b-instruct`, and `mixtral-8x7b-instruct`. |
| messages | Message[] | A list of messages comprising the conversation so far. |
| max_tokens | number | (Optional) The maximum number of completion tokens returned by the API. The total number of tokens requested in max_tokens plus the number of prompt tokens sent in messages must not exceed the context window token limit of the model requested. If left unspecified, the model will generate tokens until either it reaches its stop token or the end of its context window. |
| temperature | number | (Optional) The amount of randomness in the response, valued between 0 inclusive and 2 exclusive. Higher values are more random, and lower values are more deterministic. You should either set temperature or top_p, but not both. |
| top_p | number | (Optional) The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass. You should either alter temperature or top_p, but not both. |
| top_k | number | (Optional) The number of tokens to keep for highest top-k filtering, specified as an integer between 0 and 2048 inclusive. If set to 0, top-k filtering is disabled. |
| stream | boolean | (Optional) Flag indicating whether to stream the response. |
| presence_penalty | number | (Optional) A value between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Incompatible with frequency_penalty. |
| frequency_penalty | number | (Optional) A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. A value of 1.0 means no penalty. Incompatible with presence_penalty. |

PplxResponse

Response object for pplx models.

| Property | Type | Description |
|---|---|---|
| id | string | The ID of the response. |
| model | Model | The model used for generating the response. |
| object | "chat.completion" | The type of object (always "chat.completion"). |
| created | number | The timestamp indicating when the response was created. |
| choices | CompletionChoices[] | An array of completion choices. |

Please refer to the code for more details and usage examples of these types.

Message

Represents a message in a conversation.

| Property | Type | Description |
|---|---|---|
| role | "system" \| "user" \| "assistant" | The role of the speaker in this turn of conversation. After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user. |
| content | string | The contents of the message in this turn of conversation. |

CompletionChoices

The list of completion choices the model generated for the input prompt.

| Property | Type | Description |
|---|---|---|
| index | number | The index of the choice. |
| finish_reason | "stop" \| "length" | The reason the model stopped generating tokens. Possible values are stop if the model hit a natural stopping point, or length if the maximum number of tokens specified in the request was reached. |
| message | Message | The message generated by the model. |
| delta | Message | The incrementally streamed next tokens. Only meaningful when stream = true. |
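Putting the documented shapes together, a request object for `pplx` might look like the following. This is a sketch: the `Message`/`PplxRequest` types are re-declared inline for illustration, the message strings are invented, and the esm.town import URL in the comment follows the pattern used elsewhere on this page and is an assumption.

```typescript
// Minimal re-declaration of the documented request shapes, for illustration only.
type Message = { role: "system" | "user" | "assistant"; content: string };
type PplxRequest = {
  model: string;
  messages: Message[];
  max_tokens?: number;
  temperature?: number;
};

// A request following the rules above: optional system message first,
// then user/assistant turns ending in user.
const request: PplxRequest = {
  model: "mistral-7b-instruct",
  messages: [
    { role: "system", content: "Be precise and concise." },
    { role: "user", content: "How many moons does Mars have?" },
  ],
  temperature: 0.7,
};

// Usage sketch (not run here), with the val imported:
// import { pplx } from "https://esm.town/v/nbbaier/perplexityAPI";
// const res = await pplx(request);
// console.log(res.choices[0].message.content);
```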

rlimit/ratelimit (Script)
rlimit.com - rate-limiting as a service

This val abstracts the rlimit.com service and lets Val Town users add simple rate limiting to their vals. Using @rlimit.ratelimit, users benefit from a special free tier without signing up for an rlimit.com account. If anyone requires more than what the free tier has to offer, they can sign up and pass the namespace ID created in the rlimit.com dashboard.

Usage

The following example limits the calling val to 10 requests per 60s:

```ts
let limitedVal = (async () => {
  const limit = await @rlimit.ratelimit("10/1m");
  if (!limit.ok) {
    throw new Error("limit reached");
  }
  console.log("limit ok", limit);
  // continue with expensive operation
})();
```

A couple of examples: val1, val2.
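The "10/1m" argument above is a limit spec: a request count over a time window. As a sketch of how such a spec decomposes (the `parseLimit` helper is hypothetical and not part of @rlimit.ratelimit):

```typescript
// Hypothetical helper: parse a "count/window" rate-limit spec like "10/1m"
// into a request count and a window length in seconds. Illustration only;
// the actual parsing inside @rlimit.ratelimit may differ.
function parseLimit(spec: string): { count: number; windowSeconds: number } {
  const match = spec.match(/^(\d+)\/(\d+)(s|m|h)$/);
  if (!match) throw new Error(`invalid limit spec: ${spec}`);
  const unitSeconds = { s: 1, m: 60, h: 3600 }[match[3] as "s" | "m" | "h"];
  return { count: Number(match[1]), windowSeconds: Number(match[2]) * unitSeconds };
}

// "10/1m" decomposes into 10 requests per 60 seconds.
```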

andreterron/tidbytDeviceInfo (Script)
An interactive, runnable TypeScript val by andreterron

stevekrouse/testMutateSemantics (Script)
An interactive, runnable TypeScript val by stevekrouse

pomdtr/aliasValExample (Script)
An interactive, runnable TypeScript val by pomdtr

raphaelrk/nameNationality (Script)
// Predict the nationality of a name

funkie/subredditExample (Script)
// Reddit recent posts from /r/aww (cute animals)
Updated: July 2, 2024