Search

101 results found for "embeddings"

Code (94)
It does this by comparing [embeddings from OpenAI](https://platform.openai.com/docs/guides/embed
I implemented three backends, which you can switch between in the search UI. Check out these vals:
- **Neon:** storing and searching embeddings using the [pg_vector](https://neon.tech/docs/extens
- Searching: [janpaul123/semanticSearchNeon](https://www.val.town/v/janpaul123/semanticSearchN
- Indexing: [janpaul123/indexValsNeon](https://www.val.town/v/janpaul123/indexValsNeon)
- **Blobs:** storing embeddings in Val Town's [standard blob storage](https://docs.val.town/std/
- Searching: [janpaul123/semanticSearchBlobs](https://www.val.town/v/janpaul123/semanticSearch
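The Neon backend above stores embeddings in Postgres via pg_vector. As a rough sketch of what that looks like (table and column names here are hypothetical, not taken from the vals above; `<=>` is pgvector's cosine-distance operator):

```typescript
// Illustrative pg_vector schema and query; `vals`/`embedding` are assumed
// names, and 1536 matches OpenAI ada-002 embedding dimensionality.
const DIMENSIONS = 1536;

// DDL: enable the extension and store one embedding per val.
const createTableSql = `
  CREATE EXTENSION IF NOT EXISTS vector;
  CREATE TABLE IF NOT EXISTS vals (
    id TEXT PRIMARY KEY,
    embedding VECTOR(${DIMENSIONS})
  );
`;

// `<=>` computes cosine distance; ordering ascending returns nearest vals.
function buildSearchSql(limit: number): string {
  return `
    SELECT id, embedding <=> $1 AS distance
    FROM vals
    ORDER BY distance
    LIMIT ${limit}
  `;
}
```

The query embedding is passed as the `$1` parameter; how it is produced is shown in the OpenAI snippets further down.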
);
const { OpenAIEmbeddings } = await import(
"https://esm.sh/langchain/embeddings/openai"
);
[{ id: 2 }, { id: 1 }, { id: 3 }],
new OpenAIEmbeddings({
openAIApiKey: process.env.OPENAI_API_KEY,
import { searchEmojis } from "https://esm.town/v/maxm/emojiVectorEmbeddings";
import { extractValInfo } from "https://esm.town/v/pomdtr/extractValInfo";
<br />
Built on Val Town with sqlite vector search and openai embeddings.
<br />
Uses vector embeddings to get "vibes" search on emojis
async function calculateEmbeddings(text) {
const url = `https://yawnxyz-ai.web.val.run/generate?embed=true&value=${encodeURIComponent(t
} catch (error) {
console.error('Error calculating embeddings:', error);
return null;
);
const { OpenAIEmbeddings } = await import(
"https://esm.sh/langchain/embeddings/openai"
);
[{ id: 2 }, { id: 1 }, { id: 3 }, { id: 4 }, { id: 5 }],
new OpenAIEmbeddings({
openAIApiKey: process.env.OPENAI_API_KEY,
/**
* Call the OpenAI Embeddings API to vectorize a query string
* Returns an array of 1536 numbers
}): Promise<number[]> =>
fetchJSON("https://api.openai.com/v1/embeddings", {
method: "POST",
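The fragment above posts to OpenAI's `/v1/embeddings` endpoint. A minimal self-contained version of that call might look like the following (a sketch, not the val's actual `fetchJSON` helper; the model name is illustrative, and `text-embedding-ada-002` is the model that returns the 1536-number arrays mentioned in the comment):

```typescript
// Hedged sketch of a direct embeddings call; the API key is taken as a
// parameter rather than read from the environment.
async function embedQuery(query: string, apiKey: string): Promise<number[]> {
  const res = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "text-embedding-ada-002", // 1536-dimensional output
      input: query,
    }),
  });
  if (!res.ok) throw new Error(`Embeddings request failed: ${res.status}`);
  const json = await res.json();
  return json.data[0].embedding as number[];
}
```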
);
const { OpenAIEmbeddings } = await import(
"https://esm.sh/langchain/embeddings/openai"
);
[{ id: 2 }, { id: 1 }, { id: 3 }],
new OpenAIEmbeddings({
openAIApiKey: process.env.OPENAI_API_KEY,
async function generateEmbedding(text: string): Promise<number[]> {
const response = await openai.embeddings.create({
model: "text-embedding-ada-002",
/** @jsxImportSource npm:hono@3/jsx */
import bots from "https://esm.town/v/tmcw/surprisingEmbeddings/bots"
import * as v from "jsr:@valibot/valibot"
<p>
Embeddings. They're one of the parts of the LLM/AI wave that I sort of like.
</p>
<p>
Embeddings are pretty cool when they work, because they sort of capture the idea of{" "}
<a href="https://blog.val.town/blog/val-vibes/">'vibes', which makes them useful for searc
This project is based on this{" "}
https://www.linkedin.com/pulse/insanity-relying-vector-embeddings-why-rag-fails-michael-wood-4ie
blog post I read last year
const input = inputResult.output
const embeddings = await client.embed({
input,
word,
embedding: embeddings.data?.at(i)?.embedding,
}
<head>
<title>Surprising embeddings</title>
<link rel="stylesheet" href="https://unpkg.com/missing.css@1.1.3" />
<h3>
Surprising embeddings
<sub-title>
</script>
<script type="module" src="https://esm.town/v/tmcw/surprisingEmbeddings/visualization"></script>
</div>
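The `surprisingEmbeddings` val visualizes distances between embeddings. Distance comparisons like that are typically built on cosine similarity; here is a generic helper (my sketch, not code from the val):

```typescript
// Cosine similarity between two embedding vectors: 1 means same direction
// (very similar "vibes"), 0 means orthogonal (unrelated).
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

Cosine distance, as used by pgvector's `<=>` operator, is simply `1 - cosineSimilarity(a, b)`.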
try {
// let embeddings = await modelProvider.gen({
// embed: true,
// });
// console.log(`Adding link ${counter}:`, link.title, link.url, embeddings);
console.log(`Adding link ${counter}:`, link.title, link.url);
data: JSON.stringify(link),
// embeddings: embeddings.embedding.join(','),
});
export default async function semanticSearchPublicVals(query) {
const allValsBlobEmbeddingsMeta = (await blob.getJSON(`allValsBlob${dimensions}EmbeddingsMeta`
allBatchDataIndexes = _.uniq(Object.values(allValsBlobEmbeddingsMeta).map((item: any) => item.b
const embeddingsBatches = [];
const allBatchDataIndexesPromises = [];
for (const batchDataIndex of allBatchDataIndexes) {
const embeddingsBatchBlobName = `allValsBlob${dimensions}EmbeddingsData_${batchDataIndex}`;
const promise = blob.get(embeddingsBatchBlobName).then((response) => response.arrayBuffer())
promise.then((data) => {
embeddingsBatches[batchDataIndex as any] = data;
console.log(`Loaded ${embeddingsBatchBlobName} (${data.byteLength} bytes)`);
});
const openai = new OpenAI();
const queryEmbedding = (await openai.embeddings.create({
model: "text-embedding-3-small",
const res = [];
for (const id in allValsBlobEmbeddingsMeta) {
const meta = allValsBlobEmbeddingsMeta[id];
const embedding = new Float32Array(
embeddingsBatches[meta.batchDataIndex],
dimensions * 4 * meta.valIndex,
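The snippet above packs all embeddings for a batch into one binary blob and reads embedding `i` with a `Float32Array` view at byte offset `dimensions * 4 * i` (4 bytes per float32). A small self-contained sketch of that layout, with a tiny dimension for illustration:

```typescript
// Illustrative packed-blob layout; real vals use e.g. 1536 dimensions.
const dims = 4;

// Concatenate all embeddings into one contiguous buffer.
function packEmbeddings(embeddings: number[][]): ArrayBuffer {
  const out = new Float32Array(embeddings.length * dims);
  embeddings.forEach((e, i) => out.set(e, i * dims));
  return out.buffer;
}

// Same offset arithmetic as the snippet above: no copy, just a view.
function readEmbedding(batch: ArrayBuffer, valIndex: number): Float32Array {
  return new Float32Array(batch, dims * 4 * valIndex, dims);
}

// Dot product, the core of the similarity score computed per val.
function dot(a: Float32Array, b: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += a[i] * b[i];
  return sum;
}
```

Packing avoids one JSON parse per embedding: the whole batch loads as a single `arrayBuffer()` and each val's vector is a zero-copy view into it.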
- tmcw/surprisingEmbeddings: Visualizing embedding distances (Public)
- yawnxyz/embeddingsSearchExample (Public)
- thomasatflexos/generateEmbeddings (Public)
- janpaul123/compareEmbeddings (Public)
- janpaul123/blogPostEmbeddingsDimensionalityReduction (Public)
- janpaul123/debugValEmbeddings (Public)
- maxm/emojiVectorEmbeddings (Public)

Users

No users found