meilisearch / arroy
Annoy-inspired Approximate Nearest Neighbors in Rust, based on LMDB and optimized for memory usage :boom:
Home Page: https://docs.rs/arroy
License: MIT License
One of our customers is interested in binary indexes. It could be interesting to look into this; we could potentially find a good fit here.
BIN_FLAT
This index is exactly the same as FLAT except that it can only be used for binary embeddings. For vector similarity search applications that require perfect accuracy and depend on relatively small (million-scale) datasets, the BIN_FLAT index is a good choice. BIN_FLAT does not compress vectors, and is the only index that can guarantee exact search results. Results from BIN_FLAT can also be used as a point of comparison for results produced by other indexes that have less than 100% recall.
BIN_FLAT is accurate because it takes an exhaustive approach to search, which means for each query the target input is compared to every vector in a dataset. This makes BIN_FLAT the slowest index on our list, and poorly suited for querying massive vector data. There are no parameters for the BIN_FLAT index in Milvus, and using it does not require data training or additional storage.
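For illustration, a minimal sketch of that exhaustive approach, assuming binary vectors packed into u64 words and compared with the Hamming distance (hypothetical names, not the Milvus API):

// Compare the query against every vector: exact results, O(n) per query.
fn bin_flat_search(query: &[u64], dataset: &[Vec<u64>]) -> Option<usize> {
    fn hamming(a: &[u64], b: &[u64]) -> u32 {
        a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
    }

    (0..dataset.len()).min_by_key(|&i| hamming(query, &dataset[i]))
}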
BIN_IVF_FLAT
This index is exactly the same as IVF_FLAT except that it can only be used for binary embeddings. BIN_IVF_FLAT divides vector data into nlist cluster units, and then compares distances between the target input vector and the center of each cluster. Depending on the number of clusters the system is set to query (nprobe), similarity search results are returned based on comparisons between the target input and the vectors in the most similar cluster(s) only, drastically reducing query time.
By adjusting nprobe, an ideal balance between accuracy and speed can be found for a given scenario. Query time increases sharply as both the number of target input vectors (nq), and the number of clusters to search (nprobe), increase.
BIN_IVF_FLAT is the most basic BIN_IVF index, and the encoded data stored in each unit is consistent with the original data.
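A small sketch of the nlist/nprobe principle described above (all types and names here are hypothetical, not the Milvus or arroy API):

// Rank the `nlist` centroids by distance to the query, then exhaustively
// scan only the `nprobe` closest clusters.
fn ivf_search(
    query: &[f32],
    centroids: &[Vec<f32>],     // one centroid per cluster (nlist of them)
    clusters: &[Vec<Vec<f32>>], // the vectors assigned to each cluster
    nprobe: usize,
) -> Option<(usize, usize)> {   // (cluster index, vector index within it)
    fn sq_dist(a: &[f32], b: &[f32]) -> f32 {
        a.iter().zip(b).map(|(x, y)| (x - y) * (x - y)).sum()
    }

    // Distance from the query to every cluster centroid.
    let mut dists: Vec<(usize, f32)> =
        centroids.iter().map(|c| sq_dist(query, c)).enumerate().collect();
    dists.sort_by(|a, b| a.1.total_cmp(&b.1));

    // Scanning more clusters (a higher nprobe) trades query time for recall.
    let mut best: Option<(usize, usize, f32)> = None;
    for &(c, _) in dists.iter().take(nprobe) {
        for (i, v) in clusters[c].iter().enumerate() {
            let d = sq_dist(query, v);
            if best.map_or(true, |(_, _, bd)| d < bd) {
                best = Some((c, i, d));
            }
        }
    }
    best.map(|(c, i, _)| (c, i))
}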
This ItemIter type is not public and must be. Is there a way to break the CI whenever we forget to expose a type?
If we can use both options simultaneously, we can avoid storing the length of the keys and compare the keys as big-endian.
When porting the build tree functions from Annoy, we kept the constant value for the number of descendants we could fit into a descendant node. The reason why they were doing that is because they needed constant-length nodes. However, our system no longer needs this, as LMDB entries can have any length.
At Meilisearch, we must ensure some documents or vectors are hidden from some queries. The first version of the vector store feature iterated over the best results in the order found in the HNSW and conditionally ignored them. The complexity was O(n), which is not great when the user filters the results down to a handful of documents, e.g., 5, 10, or 100.
We can do better than that now that we control the whole source code. There are two main solutions to implement. One is to filter the item IDs stored in the Descendant variant when "iterating" over the tree nodes in nns_by_item/by_vector; we barely even need to touch the build phase. However, it would also be great to store bitmaps instead of lists of u32s in the Descendant nodes to be able to perform faster intersections. We will also probably need to provide an nss_builder to reduce the number of conditional parameters to specify, as sketched below. For example, we can provide the RoaringBitmap to filter with, the number of trees to explore, and maybe two methods to query the database: with and without the distances (is it useful?).
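A hypothetical sketch of what such a builder could look like (named NnsBuilder here for illustration); apart from RoaringBitmap, from the roaring crate, none of these names exist in arroy today:

use roaring::RoaringBitmap;

struct NnsBuilder {
    filter: Option<RoaringBitmap>, // only return items present in this bitmap
    search_k: Option<usize>,       // how many tree nodes to explore
}

impl NnsBuilder {
    fn new() -> NnsBuilder {
        NnsBuilder { filter: None, search_k: None }
    }

    fn filter(mut self, candidates: RoaringBitmap) -> Self {
        self.filter = Some(candidates);
        self
    }

    fn search_k(mut self, search_k: usize) -> Self {
        self.search_k = Some(search_k);
        self
    }

    // With bitmaps in the Descendant nodes, filtering becomes a single
    // intersection instead of a per-item check.
    fn keep_candidates(&self, descendants: &RoaringBitmap) -> RoaringBitmap {
        match &self.filter {
            Some(filter) => descendants & filter,
            None => descendants.clone(),
        }
    }
}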
We should be able to run one make_tree per thread without issue.
A user tried to call the clear function in arroy and got an interesting result, to say the least:
https://discord.com/channels/1006923006964154428/1197422056711667792/1198185699401289789
That simply shouldn't happen.
When building the indexes, if we are writing the last prefix of the database, we can use append instead of put, which should be faster.
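A minimal sketch with heed (the LMDB wrapper arroy uses), assuming its put_with_flags method and PutFlags::APPEND, which maps to LMDB's MDB_APPEND and requires the keys to be inserted in sorted order:

use heed::types::Str;
use heed::{Database, EnvOpenOptions, PutFlags};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let env = unsafe { EnvOpenOptions::new().open("data.mdb")? };
    let mut wtxn = env.write_txn()?;
    let db: Database<Str, Str> = env.create_database(&mut wtxn, None)?;

    // APPEND skips the B-tree descent and writes at the end of the database;
    // LMDB returns an error if the keys are not already sorted.
    for key in ["a", "b", "c"] {
        db.put_with_flags(&mut wtxn, PutFlags::APPEND, key, "value")?;
    }
    wtxn.commit()?;
    Ok(())
}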
We must take three parameters into account:
Fun fact: the lower in the tree you are, the less impact a dummy plane has on the search cost.
Lines 248 to 259 in 7fc6031
fn split_imbalance(left_indices_len: usize, right_indices_len: usize) -> f64 {
let ls = left_indices_len as f64;
let rs = right_indices_len as f64;
let f = ls / (ls + rs + f64::EPSILON); // Avoid 0/0
f.max(1.0 - f)
}
fn main() {
dbg!(split_imbalance(29464, 18394));
dbg!(split_imbalance(30000, 30000));
dbg!(split_imbalance(30000, 1580));
}
Line 136 in 4f193fd
Here we should remove the selected leaf from the list of possible leaves.
That means that when we have fewer than 200 leaves to split, we can exit early.
It'll impact relevancy, but I'm pretty sure it won't have a negative impact.
It would be great to support binary quantization in arroy. The main principle is to convert the dimension values x <= 0 to 0 and x > 0 to 1. This way, we can represent the quantized vector with 32x less space and compute the distances in a much faster and more CPU-friendly way. We are currently limited to something like 15M vectors (32-bit floats, 768 dimensions) on a 63 GiB machine, but with binary quantization, we can go up to 480M vectors on the same machine.
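A minimal sketch of the quantization step, packing 64 dimensions into each u64 word (a hypothetical helper, not an arroy API):

// Map x <= 0.0 to bit 0 and x > 0.0 to bit 1, 64 dimensions per word.
fn binary_quantize(vector: &[f32]) -> Vec<u64> {
    vector
        .chunks(64)
        .map(|chunk| {
            chunk.iter().enumerate().fold(0u64, |word, (i, &x)| {
                if x > 0.0 { word | (1u64 << i) } else { word }
            })
        })
        .collect()
}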
Here is an example of implementing the Euclidean distance with binary data. Here is the formula: d(u, v) = sqrt(sum_i (u_i - v_i)^2). With binary values u_i, v_i ∈ {0, 1}, the difference raised to the power of two is equivalent to a XOR: (u_i - v_i)^2 = u_i XOR v_i.
Ultimately, the squared Euclidean distance is the sum of the XORed dimensions of both vectors: the u8::BitXor and u8::count_ones methods will be SIMD-optimized by themselves 🤔
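A minimal sketch of that distance, using u64 words instead of u8 (the same XOR + count_ones idea applies):

// Since (u_i - v_i)^2 == u_i XOR v_i for bits, the squared Euclidean
// distance collapses to a popcount of the XORed words (the Hamming distance).
fn squared_euclidean_binary(u: &[u64], v: &[u64]) -> u32 {
    u.iter().zip(v).map(|(a, b)| (a ^ b).count_ones()).sum()
}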
We must expose a method to get the set of known items.
Copying is the part that takes up to 22% of the time, plus 5% to bzero the vector in advance, and another 5% to drop the allocated vectors. It seems like the AVX implementation can be switched to non-aligned f32 slices, which would remove the need for the aligned copy.
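A minimal sketch of a squared Euclidean distance over unaligned f32 slices, assuming AVX and _mm256_loadu_ps (the unaligned load), so the input no longer has to be copied into an aligned buffer first:

#[cfg(target_arch = "x86_64")]
use std::arch::x86_64::*;

#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx")]
unsafe fn squared_euclidean_avx(a: &[f32], b: &[f32]) -> f32 {
    debug_assert_eq!(a.len(), b.len());
    let mut acc = _mm256_setzero_ps();
    let chunks = a.len() / 8;
    for i in 0..chunks {
        // `loadu` accepts any pointer, unlike the aligned `_mm256_load_ps`.
        let va = _mm256_loadu_ps(a.as_ptr().add(i * 8));
        let vb = _mm256_loadu_ps(b.as_ptr().add(i * 8));
        let diff = _mm256_sub_ps(va, vb);
        acc = _mm256_add_ps(acc, _mm256_mul_ps(diff, diff));
    }
    // Horizontal sum of the 8 accumulator lanes.
    let mut lanes = [0.0f32; 8];
    _mm256_storeu_ps(lanes.as_mut_ptr(), acc);
    let mut sum: f32 = lanes.iter().sum();
    // Scalar tail for lengths that are not a multiple of 8.
    for i in chunks * 8..a.len() {
        let d = a[i] - b[i];
        sum += d * d;
    }
    sum
}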
It is no longer possible to append items to the database because we are updating the updated-item IDs, which live after the item entries. We could move this metadata before the item entries. We must add a test for this method.
Lines 166 to 176 in 19e0a07
It would be a good thing to create a small benchmark showing the incremental indexing performance.
Once we have that, we should profile an indexing process and see where most of the time is spent and whether that's easy to fix (related to #60).
This could improve the relevancy; it needs to be tested.
I wrote a visualizer of every iteration of two_means + the final repartition between the left children and right children on a split node.
At the very end of the split node function, we found two pretty good centroids:
But the final result we chose is this one:
The more dimensions we add, the less the issue shows.
When the database already contains items and vectors, we can guess the dimensions from them.
Sometimes we need to delete new nodes previously inserted in the tmp_nodes. From my understanding, it was always either the last one or the one before last, but it looks like it's not.
It would be good to measure whether we often do a lot of iterations and, if that's the case, try to optimize it.
Line 83 in 5e23882
By keeping a list of unreachable_items that corresponds to the newly added or modified items, we can make sure that the Writer::build method keeps track of them and inserts them incrementally into all of the trees.
However, it must keep the split planes balanced and therefore recompute some of the normals along the way.
It must also make sure that the tree depth grows with the number of items, by appending new split planes if necessary and reducing the branch size. That could be a rebuild method that erases everything and starts from the beginning to keep a good balance.
By reducing the number of digits we print for each float.
Make sure that a user doesn't open a database without the original distance.
We renamed the Writer::prepare method to Writer::new; this new method can no longer fail. We must change the API to return the Writer directly.