This is an 'Omni API' that generates itself just in time.
The intended users of this API are Glide customers, who want to do relatively simple data transformations. For example:
- Create a new text file in my google cloud storage bucket
- Wrap a weather lookup API in a try/catch and return "Weather not available" on any errors
- Get me the day of the week for tomorrow, e.g. "Monday"
- Make an Asana task
- Send a newsletter
- Parse an iCal feed and return today's events as JSON
You supply a prompt and inputs to the desired API. On the first occurrence of that prompt:
- We ask an LLM to generate code for an endpoint matching the prompt's specifications
- We create an HTTP val with that code
- We call that HTTP val, passing along the inputs
- We return the response to the original requester
On future occurrences of the exact same prompt, we look up the val that corresponds to that prompt, pass along the inputs, and return the resulting response.
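The dispatch logic above can be sketched as follows. This is a minimal illustration, not the actual implementation: the `prompts` map, `generateVal`, and `callVal` are hypothetical names standing in for the real prompt-to-val lookup and the Val Town API calls.

```typescript
// prompt -> URL of the HTTP val generated for that prompt (hypothetical cache)
const prompts = new Map<string, string>();

async function handle(
  prompt: string,
  inputs: unknown,
  // Assumed helpers: generateVal asks the LLM for endpoint code and creates
  // an HTTP val, returning its URL; callVal forwards the inputs to that val.
  generateVal: (prompt: string) => Promise<string>,
  callVal: (url: string, inputs: unknown) => Promise<unknown>,
): Promise<unknown> {
  let url = prompts.get(prompt);
  if (url === undefined) {
    // First occurrence of this prompt: generate the endpoint and remember it.
    url = await generateVal(prompt);
    prompts.set(prompt, url);
  }
  // Cached or freshly created, forward the inputs and return the response.
  return callVal(url, inputs);
}
```

Note that the cache key is the exact prompt text, so any change to the prompt, however small, produces a new val.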
This Val Town project is responsible for the top-level API. We create the new llm-generated vals in a brand new Val Town account. This adds an extra layer of sandboxing, so that the llm-generated vals cannot access our environment variables or other private resources. Importantly, the llm-generated vals cannot edit this meta val that is creating them.
If the code requires third-party API tokens, those will need to be supplied on every request to the code that requires them. An issue here is that currently all request headers and the request body are stored in Val Town logs for at least 10 days. If necessary, we could build something to disable this.
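Concretely, a client supplying a third-party token per request might build its payload like this. The request shape here (`prompt` and `inputs` fields, the `asanaToken` key) is an illustrative assumption, not the finalized API contract.

```typescript
// Hypothetical helper: packages the prompt, inputs, and any third-party
// tokens into a single JSON request body. Tokens ride along in `inputs`
// on every call; nothing is persisted server-side (aside from request logs).
function buildRequest(prompt: string, inputs: Record<string, unknown>) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, inputs }),
  };
}

// Example usage with a per-request Asana token:
const req = buildRequest("Make an Asana task", {
  title: "Review PR",
  asanaToken: "...", // supplied by the caller on every request
});
```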
There is a known security issue with this architecture, though: the llm-generated vals have API scope to edit themselves and other vals in that account. To prevent this, we at Val Town will need to add the ability to configure val API scopes in the Val Town API. (Potentially we can hack it with our trpc API?) Either way, this vulnerability should be addressed before this API is used with untrusted users. Otherwise, user-supplied inputs (which can include secret API tokens to third-party services) are at risk of being leaked via a man-in-the-middle attack.