# Foundations of Vector Retrieval

Sebastian Bruch
# Preface
We are witness to a few years of remarkable developments in Artificial Intelligence with the use of advanced machine learning algorithms, and in particular, deep learning. Gargantuan, complex neural networks that can learn through self-supervision, and quickly so with the aid of specialized hardware, transformed the research landscape so dramatically that, overnight it seems, many fields experienced not the usual, incremental progress, but rather a leap forward. Machine translation, natural language understanding, information retrieval, recommender systems, and computer vision are but a few examples of research areas that have had to grapple with the shock. Countless other disciplines beyond computer science such as robotics, biology, and chemistry too have benefited from deep learning.
These neural networks and their training algorithms may be complex, and the scope of their impact broad and wide, but nonetheless they are simply functions in a high-dimensional space. A trained neural network takes a vector as input, crunches and transforms it in various ways, and produces another vector, often in some other space. An image may thereby be turned into a vector, a song into a sequence of vectors, and a social network into a structured collection of vectors. It seems as though much of human knowledge, or at least what is expressed as text, audio, image, and video, has a vector representation in one form or another.
It should be noted that representing data as vectors is not unique to neural networks and deep learning. In fact, long before learnt vector representations of pieces of data, what is commonly known as "embeddings," came along, data was often encoded as hand-crafted feature vectors. Each feature quantified into continuous or discrete values some facet of the data that was deemed relevant to a particular task (such as classification or regression). Vectors of that form, too, reflect our understanding of a real-world object or concept.
If new and old knowledge can be squeezed into a collection of learnt or hand-crafted vectors, what useful things does that enable us to do? A metaphor that might help us think about that question is this: An ever-evolving database full of such vectors that capture various pieces of data can be understood as a memory of sorts. We can then recall information from this memory to answer questions, learn about past and present events, reason about new problems, generate new content, and more.
# Vector Retrieval
Mathematically, "recalling information" translates to finding vectors that are most similar to a query vector. The query vector represents what we wish to know more about, or recall information for. So, if we have a particular question in mind, the query is the vector representation of that question. If we wish to know more about an event, our query is that event expressed as a vector. If we wish to predict the function of a protein, perhaps we may learn a thing or two from known proteins that have a similar structure to the one in question, making a vector representation of the structure of our new protein a query.
Similarity is then a function of two vectors, quantifying how similar two vectors are. It may, for example, be based on the Euclidean distance between the query vector and a database vector, where similar vectors have a smaller distance. Or it may instead be based on the inner product between two vectors. Or their angle. Whatever function we use to measure similarity between pieces of data defines the structure of a database.
Finding k vectors from a database that have the highest similarity to a query vector is known as the top-k retrieval problem. When similarity is based on the Euclidean distance, the resulting problem is known as nearest neighbor search. Inner product for similarity leads to a problem known as maximum inner product search. Angular distance gives maximum cosine similarity search. These are mathematical formulations of the mechanism we called "recalling information."
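To make the three flavors concrete, here is a minimal sketch of exhaustive top-k retrieval. It is not part of the monograph; it assumes NumPy, and the function and variable names are illustrative only:

```python
import numpy as np

def top_k(X, q, k, flavor):
    """Exhaustively score every data point in X against the query q."""
    if flavor == "euclidean":          # nearest neighbor search
        scores = -np.linalg.norm(X - q, axis=1)
    elif flavor == "cosine":           # maximum cosine similarity search
        scores = (X @ q) / (np.linalg.norm(X, axis=1) * np.linalg.norm(q))
    elif flavor == "inner_product":    # maximum inner product search
        scores = X @ q
    return np.argsort(-scores)[:k]     # indices of the k highest-scoring points

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))    # a collection of m = 1000 points in R^64
q = rng.standard_normal(64)            # a query point
print(top_k(X, q, k=10, flavor="inner_product"))
```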
The need to search for similar vectors from a large database arises in virtually every single one of our online transactions. Indeed, when we search the web for information about a topic, the search engine itself performs this similarity search over millions of web documents to find what may lexically or semantically match our query. Recommender systems find the most similar items to your browsing history by encoding items as vectors and, effectively, searching through a database of such items. Finding an old photo in a photo library, as another routine example, boils down to performing a similarity search over vector representations of images.
A neural network that is trained to perform a general task such as question-answering could conceivably augment its view of the world by "recalling" information from such a database and finding answers to new questions. This is particularly useful for generative agents such as chatbots who would otherwise be frozen in time, and whose knowledge is limited to what they were exposed to during their training. With a vector database on the side, however, they would have access to real-time information and can deduce new observations about content that is new to them. This is, in fact, the cornerstone of what is known as retrieval-augmented generation, an emerging learning paradigm.
Finding the most similar vectors to a query vector is easy when the database is small or when time is not of the essence: We can simply compare every vector in the database with the query and sort them by similarity. When the database grows large and the time budget is limited, as is often the case in practice, a naïve, exhaustive comparison of a query with database vectors is no longer realistic. That is where vector retrieval algorithms become relevant.
For decades now, research on vector retrieval has sought to improve the efficiency of search over large vector databases. The resulting literature is rich with solutions ranging from heavily theoretical results to performant empirical heuristics. Many of the proposed algorithms have undergone rigorous benchmarking and have been challenged in competitions at major conferences. Technology giants and startups alike have invested heavily in developing open-source libraries and managed infrastructure that offer fast and scalable vector retrieval.
That is not the end of that story, however. Research continues to date. In fact, how we do vector retrieval today faces a stress-test as databases grow orders of magnitude larger than ever before. None of the existing methods, for example, proves easy to scale to a database of billions of high-dimensional vectors, or a database whose records change frequently.
# About This Monograph
The need to conduct more research underlines the importance of making the existing literature more readily available and the research area more inviting. That is partially fulfilled with existing surveys that report the state of the art at various points in time. However, these publications are typically focused on a single class of vector retrieval algorithms, and compare and contrast published methods by their empirical performance alone. Importantly, no manuscript has yet summarized major algorithmic milestones in the vast vector retrieval literature, or has been prepared to serve as a reference for new and established researchers.
That gap is what this monograph intends to close. With the goal of presenting the fundamentals of vector retrieval as a sub-discipline, this manuscript delves into important data structures and algorithms that have emerged in the literature to solve the vector retrieval problem efficiently and effectively.
# Structure
This monograph is divided into four parts. The first part introduces the problem of vector retrieval and formalizes the concepts involved. The second part delves into retrieval algorithms that help solve the vector retrieval problem efficiently and effectively. Part three is devoted to vector compression. Finally, the fourth part presents a review of background material in a series of appendices.
# Introduction
We start with a thorough introduction to the problem itself in Chapter 1, where we define the various flavors of vector retrieval. We then elaborate on what makes the problem so difficult in high-dimensional spaces in Chapter 2.
In fact, sometimes high-dimensional spaces are hopeless. However, in reality data often lie on some low-dimensional space, even though their naïve vector representations are in high dimensions. In those cases, it turns out, we can do much better. Exactly how we characterize this low "intrinsic" dimensionality is the topic of Chapter 3.
# Retrieval Algorithms
With that foundation in place and the question clearly formulated, the second part of the monograph explores the different classes of existing solutions in great depth. We close each chapter with a summary of algorithmic insights. There, we will also discuss what remains challenging and explore future research directions.
We start with branch-and-bound algorithms in Chapter 4. The high-level idea is to lay a hierarchical mesh over the space, then, given a query point, navigate the hierarchy to the cell that likely contains the solution. We will see, however, that in high dimensions, the basic forms of these methods become highly inefficient to the point where an exhaustive search likely performs much better.
Alternatively, instead of laying a mesh over the space, we may define a fixed number of buckets and map data points to these buckets with the property that, if two data points are close to each other according to the distance function, they are more likely to be mapped to the same bucket. When processing a query, we find which bucket it maps to and search the data points in that bucket. This is the intuition that led to the family of Locality Sensitive Hashing (LSH) algorithms, a topic we will discuss in depth in Chapter 5.
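A minimal sketch of this intuition, using random hyperplanes (the family for angular distance covered in Chapter 5); NumPy is assumed and all names are illustrative:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
d, n_bits = 64, 8
planes = rng.standard_normal((n_bits, d))     # one random hyperplane per hash bit

def h(v):
    # Each bit records the side of a hyperplane; vectors at a small angle
    # are likely to agree on most bits and land in the same bucket.
    return tuple((planes @ v > 0).astype(int).tolist())

buckets = defaultdict(list)
X = rng.standard_normal((10_000, d))
for i, x in enumerate(X):
    buckets[h(x)].append(i)

q = rng.standard_normal(d)
candidates = buckets[h(q)]   # only these points are compared against q exhaustively
```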
Yet another class of ideas adopts the view that data points are nodes in a graph. We place an edge between two nodes if they are among each others' nearest neighbors. When presented with a query point, we enter the graph through one of the nodes and greedily traverse the edges by taking the edge that leads to the minimum distance with the query. This process is repeated until we are stuck in some (local) optimum. This is the core idea in graph algorithms, as we will learn in Chapter 6.
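The greedy traversal itself fits in a few lines. A minimal sketch, assuming NumPy and a brute-force k-NN graph (in practice the graph is built far more cleverly):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 16))

# A toy k-NN graph: each node points to its 8 nearest neighbors (brute force).
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
graph = {i: np.argsort(D[i])[1:9] for i in range(len(X))}

def greedy_search(q, entry=0):
    current = entry
    while True:
        # Follow the edge that most reduces the distance to q.
        best = min(graph[current], key=lambda j: np.linalg.norm(X[j] - q))
        if np.linalg.norm(X[best] - q) >= np.linalg.norm(X[current] - q):
            return current            # a (local) optimum: no neighbor is closer
        current = best

print(greedy_search(rng.standard_normal(16)))
```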
The final major approach is the simplest of all: Organize the data points into small clusters during pre-processing. When a query point arrives, solve the "cluster retrieval" problem first, then solve retrieval on the chosen clusters. We will study this clustering method in detail in Chapter 7.
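A minimal sketch of the two-stage idea, assuming NumPy; real systems learn the clusters with k-means rather than picking random centers, and all names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 32))
centers = X[rng.choice(len(X), 16, replace=False)]   # 16 crude cluster centers
assign = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=-1), axis=1)

def search(q, n_probe=2, k=5):
    # Stage 1: "cluster retrieval" -- the n_probe centers closest to q.
    probed = np.argsort(np.linalg.norm(centers - q, axis=1))[:n_probe]
    cand = np.flatnonzero(np.isin(assign, probed))
    # Stage 2: exhaustive retrieval restricted to the chosen clusters.
    return cand[np.argsort(np.linalg.norm(X[cand] - q, axis=1))[:k]]

print(search(rng.standard_normal(32)))
```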
As we examine vector retrieval algorithms, it is inevitable that we must ink in extra pages to discuss why similarity based on inner product is special and why it poses extra challenges for the algorithms in each category; many of these difficulties will become clear in the introductory chapters.
There is, however, a special class of algorithms specifically for inner product. Sampling algorithms take advantage of the linearity of inner product to reduce the dependence of the time complexity on the number of dimensions. We will review example algorithms in Chapter 8.
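To see how linearity helps, consider the non-negative setting of Chapter 8. The sketch below (assuming NumPy; it is an illustration of the sampling idea, not any specific algorithm from the chapter) estimates all inner products by sampling coordinates in proportion to the query's weights, so the per-point cost depends on the number of samples rather than on d:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1000, 256))   # non-negative data points
q = rng.random(256)           # non-negative query

# Sample coordinates i with probability q_i / ||q||_1; then
# ||q||_1 times the mean of X[:, i] is an unbiased estimate of <q, x>.
samples = rng.choice(len(q), size=32, p=q / q.sum())
est = q.sum() * X[:, samples].mean(axis=1)

print(np.corrcoef(est, X @ q)[0, 1])   # the estimates track the true scores
```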
# Compression
The third part of this monograph concerns the storage of vectors and their distance computation. After all, the vector retrieval problem is not just concerned with the time complexity of the retrieval process itself, but also aims to reduce the size of the data structure that helps answer queries, known as the index. Compression helps that cause.
In Chapter 9 we will review how vectors can be quantized to reduce the size of the index while simultaneously facilitating fast computation of the distance function in the compressed domain. That is what makes quantization effective but challenging.
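A minimal sketch of the flavor of computation Chapter 9 studies, using product quantization. It assumes NumPy, and random codewords stand in for the k-means codebooks used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, ks = 5000, 32, 4, 16          # 4 subspaces of 8 dims, 16 codewords each
X = rng.standard_normal((n, d))
sub = X.reshape(n, m, d // m)

books = sub[rng.choice(n, ks, replace=False)]   # codebooks, shape (ks, m, d//m)
codes = np.argmin(((sub[:, None] - books[None]) ** 2).sum(-1), axis=1)  # (n, m)

def approx_sq_distances(q):
    # One small table per subspace; each vector's distance is then just m
    # table lookups, computed entirely in the compressed domain.
    table = ((books - q.reshape(m, d // m)[None]) ** 2).sum(-1)  # (ks, m)
    return table[codes, np.arange(m)].sum(axis=1)

q = rng.standard_normal(d)
print(np.argsort(approx_sq_distances(q))[:5])    # approximate top-5 by distance
```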
Related to the topic of compression is the concept of sketching. Sketching is a technique to project a high-dimensional vector into a low-dimensional vector, called a sketch, such that certain properties (e.g., the L2 norm, or inner products between any two vectors) are approximately preserved. This probabilistic method of reducing dimensionality naturally connects to vector retrieval. We offer a peek into the vast sketching literature in Chapter 10 and discuss its place in the vector retrieval research. We do so with a particular focus on sparse vectors in an inner product space, contrasting sketching with quantization methods that are more appropriate for dense vectors.
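A minimal sketch of the linear variety, assuming NumPy: a random Gaussian projection in the style of the Johnson-Lindenstrauss transform of Chapter 10, which approximately preserves norms and inner products:

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_sketch = 1024, 64
Phi = rng.standard_normal((d_sketch, d)) / np.sqrt(d_sketch)  # sketching matrix

u, v = rng.standard_normal(d), rng.standard_normal(d)
print(u @ v, (Phi @ u) @ (Phi @ v))   # inner product vs. its sketched estimate
```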
# Objective
It is important to stress, however, that the purpose of this monograph is not to provide a comprehensive survey or comparative analysis of every published work that has appeared in the vector retrieval literature. There are simply too many empirical works with volumes of heuristics and engineering solutions to cover. Instead, we will give an in-depth, didactic treatment of foundational ideas that have caused a seismic shift in how we approach the problem, and the theory that underpins them.
By consolidating these ideas, this monograph hopes to make this fascinating field more inviting, especially to the uninitiated, and enticing as a research topic to new and established researchers. We hope the reader will find that this monograph delivers on these objectives.
# Intended Audience
This monograph is intended as an introductory text for graduate students who wish to embark on research on vector retrieval. It is also meant to serve as a self-contained reference that captures important developments in the field, and as such, may be useful to established researchers as well.
As the work is geared towards researchers, however, it naturally emphasizes the theoretical aspects of algorithms as opposed to their empirical behavior or experimental performance. We present theorems and their proofs, for example. We do not, on the other hand, present experimental results or compare algorithms on datasets systematically. There is also no discussion around the use of the presented algorithms in practice, notes on implementation and libraries, or practical insights and heuristics that are often critical to making these algorithms work on real data. As a result, practitioners or applied researchers may not find the material immediately relevant.
Finally, while we make every attempt to articulate the theoretical results and explain the proofs thoroughly, having some familiarity with linear algebra and probability theory helps digest the results more easily. We have included a review of the relevant concepts and results from these subjects in Appendices B (probability), C (concentration inequalities), and D (linear algebra) for convenience. Should the reader wish to skip the proofs, however, the narrative should still paint a complete picture of how each algorithm works.
# Acknowledgements
I am forever indebted to my dearest colleagues Edo Liberty, Amir Ingber, Brian Hentschel, and Aditya Krishnan. This incredible but humble group of scholars at Pinecone are generous with their time and knowledge, patiently teaching me what I do not know, and letting me use them as a sounding board without fail. Their encouragement throughout the process of writing this manuscript, too, was the force that drove this work to completion.
I am also grateful to Claudio Lucchese, a dear friend, a co-author, and a professor of computer science at the Ca' Foscari University of Venice, Italy. I conceived of the idea for this monograph as I lectured at Ca' Foscari on the topic of retrieval and ranking, upon Claudio's kind invitation.
I would not be writing these words were it not for the love, encouragement, and wisdom of Franco Maria Nardini, of the ISTI CNR in Pisa, Italy. In the mad and often maddening world of research, Franco is the one knowledgeable and kind soul who restores my faith in research and guides me as I navigate the landscape.
Finally, there are no words that could possibly convey my deepest gratitude to my partner, Katherine, for always supporting me and my ambitions; for showing by example what dedication, tenacity, and grit ought to mean; and for finding me when I am lost.
# Notation
This section summarizes the special symbols and notation used throughout this work. We often repeat these definitions in context as a reminder, especially if we choose to abuse notation for brevity or other reasons.
Paragraphs that are highlighted in a gray box such as this contain important statements, often conveying key findings or observations, or a detail that will be important to recall in later chapters.
# Terminology
We use the terms "vector" and "point" interchangeably. In other words, we refer to an ordered list of d real values as a d-dimensional vector or a point in Rd.
We say that a point is a data point if it is part of the collection of points we wish to sift through. It is a query point if it is the input to the search procedure, and for which we are expected to return the top-k similar data points from the collection.
# Symbols
# Reserved Symbols
X   Used exclusively to denote a collection of vectors.
m   Used exclusively to denote the cardinality of a collection of data points, X.
q   Used singularly to denote a query point.
d   Used exclusively to refer to the number of dimensions.
e1, e2, . . . , ed   Standard basis vectors in Rd.
# Sets
Calligraphic letters   Typically denote sets.
|·|   The cardinality (number of items) of a finite set.
[n]   The set of integers from 1 to n (inclusive): {1, 2, 3, . . . , n}.
B(u, r)   The closed ball of radius r centered at point u: {v | δ(u, v) ≤ r}, where δ(·, ·) is the distance function.
\   The set difference operator: A \ B = {x ∈ A | x ∉ B}.
△   The symmetric difference of two sets.
1p   The indicator function: 1 if the predicate p is true, and 0 otherwise.
# Vectors and Vector Space
[a, b]   The closed interval from a to b.
Z   The set of integers.
Rd   d-dimensional Euclidean space.
Sd−1   The hypersphere in Rd.
u, v, w   Lowercase letters denote vectors.
ui, vi, wi   Subscripts identify a specific coordinate of a vector, so that ui is the i-th coordinate of vector u.
# Functions and Operators
nz(·)   The set of non-zero coordinates of a vector: nz(u) = {i | ui ≠ 0}.
δ(·, ·)   Used exclusively to denote the distance function, taking two vectors and producing a real value.
J(·, ·)   The Jaccard similarity index of two vectors: J(u, v) = |nz(u) ∩ nz(v)| / |nz(u) ∪ nz(v)|.
⟨·, ·⟩   The inner product of two vectors: ⟨u, v⟩ = Σi ui vi.
∥·∥p   The Lp norm of a vector: ∥u∥p = (Σi |ui|^p)^(1/p).
# Probabilities and Distributions
E[·]   The expected value of a random variable.
Var[·]   The variance of a random variable.
P[·]   The probability of an event.
∧, ∨   Logical AND and OR operators.
Z   We generally use uppercase letters to denote random variables.
# Contents
2401.09350 | 17 | Intrinsic Dimensionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.1 High-Dimensional Data and Low-Dimensional Manifolds . . . . . 23 3.2 Doubling Measure and Expansion Rate . . . . . . . . . . . . . . . . . . . . 24 3.3 Doubling Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 3.3.1 Properties of the Doubling Dimension . . . . . . . . . . . . . . . 27 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
# Part I Introduction | 2401.09350#17 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 18 | 1 Vector Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.1 Vector Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.2 Vectors as Units of Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 Flavors of Vector Retrieval 7 1.3.1 Nearest Neighbor Search . . . . . . . . . . . . . . . . . . . . . . . . . . 7 1.3.2 Maximum Cosine Similarity Search . . . . . . . . . . . . . . . . . 8 9 1.3.3 Maximum Inner Product Search . . . . . . . . . . . . . . . . . . . . 1.4 Approximate Vector Retrieval . . . . . . . . . . . . . . . . . . | 2401.09350#18 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 20 | 2 Retrieval Stability in High Dimensions . . . . . . . . . . . . . . . . . . . 17 2.1 Intuition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 2.2 Formal Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 2.3 Empirical Demonstration of Instability . . . . . . . . . . . . . . . . . . . . 20 2.3.1 Maximum Inner Product Search . . . . . . . . . . . . . . . . . . . . 21 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3
xvii
xviii
Contents
# Part II Retrieval Algorithms | 2401.09350#20 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 21 | Locality Sensitive Hashing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 5.1 Intuition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 5.2 Top-k Retrieval with LSH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 5.2.1 The Point Location in Equal Balls Problem . . . . . . . . . . 59 5.2.2 Back to the Approximate Retrieval Problem . . . . . . . . . 62 5.3 LSH Families . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 5.3.1 Hamming Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 5.3.2 Angular Distance . . . . . . . . . . . . . . | 2401.09350#21 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 22 | . . . . . . . . . . . . . . . . . 63 5.3.2 Angular Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 5.3.3 Euclidean Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 5.3.4 Inner Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 5.4 Closing Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 | 2401.09350#22 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 23 | 4 Branch-and-Bound Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 4.1 Intuition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 4.2 k-dimensional Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 4.2.1 Complexity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 4.2.2 Failure in High Dimensions . . . . . . . . . . . . . . . . . . . . . . . . 35 4.3 Randomized Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 4.3.1 Randomized Partition Trees . . . . . . . . . . . . . . . . . . . . . . . 38 4.3.2 | 2401.09350#23 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 24 | 4.3.1 Randomized Partition Trees . . . . . . . . . . . . . . . . . . . . . . . 38 4.3.2 Spill Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 4.4 Cover Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 4.4.1 The Abstract Cover Tree and its Properties . . . . . . . . . . 47 4.4.2 The Search Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 4.4.3 The Construction Algorithm . . . . . . . . . . . . . . . . . . . . . . . 49 4.4.4 The Concrete Cover Tree . . . . . . . . . . . . . . . . . . . . . . . . . . 51 4.5 Closing Remarks . . . . . . . . . . . . . . . . . . . . . . . . . | 2401.09350#24 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 27 | 6 Graph Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 6.1 Intuition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 6.1.1 The Research Question . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 6.2 The Delaunay Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76 6.2.1 Voronoi Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 6.2.2 Delaunay Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 6.2.3 Top-1 Retrieval . . . . . . . . . . . . . . . . . . . . | 2401.09350#27 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 28 | . . . . . . . . 78 6.2.3 Top-1 Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 6.2.4 Top-k Retrieval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81 6.2.5 The k-NN Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 6.2.6 The Case of Inner Product . . . . . . . . . . . . . . . . . . . . . . . . 83 6.3 The Small World Phenomenon . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 6.3.1 Lattice Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88 | 2401.09350#28 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 30 | 6.3.2 Extension to the Delaunay Graph . . . . . . . . . . . . . . . . . . 91 6.3.3 Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 6.4 Neighborhood Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 6.4.1 From SNG to α-SNG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97 6.5 Closing Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 Sampling Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . | 2401.09350#30 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 31 | . 101 Sampling Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 8.1 Intuition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 8.2 Approximating the Ranks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112 8.2.1 Non-negative Data and Queries . . . . . . . . . . . . . . . . . . . . 113 8.2.2 The General Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114 8.2.3 Sample Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 8.3 Approximating the Scores . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 | 2401.09350#31 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 32 | Approximating the Scores . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 8.3.1 The BoundedME Algorithm . . . . . . . . . . . . . . . . . . . . . . . 118 8.3.2 Proof of Correctness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 8.4 Closing Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 | 2401.09350#32 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 34 | 9 Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 9.1 Vector Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127 9.1.1 Codebooks and Codewords . . . . . . . . . . . . . . . . . . . . . . . . 128 9.2 Product Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 9.2.1 Distance Computation with PQ . . . . . . . . . . . . . . . . . . . . 129 9.2.2 Optimized Product Quantization . . . . . . . . . . . . . . . . . . . 130 9.2.3 Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131 9.3 Additive Quantization . . . . . . . . . . . | 2401.09350#34 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 35 | . . . . . . . . . . . . . . . . . . . . . 131 9.3 Additive Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 9.3.1 Distance Computation with AQ . . . . . . . . . . . . . . . . . . . . 133 9.3.2 AQ Encoding and Codebook Learning . . . . . . . . . . . . . . 133 9.4 Quantization for Inner Product . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 9.4.1 Score-aware Quantization . . . . . . . . . . . . . . . . . . . . . . . . . 135 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 | 2401.09350#35 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 36 | Quantization... 0.0... .0 0. eee ene 9.1 Vector Quantization ...... 6. . eee eee eee 9.1.1 Codebooks and Codewords .............. 000 eee eee 9.2 Product Quantization ........ 0.0. e eee eee eee 9.2.1 Distance Computation with PQ .................04. 9.2.2 Optimized Product Quantization ................00. 9.2.3 Extensions .............0 000002 9.3 Additive Quantization ........ 0... eee eee eee eee 9.3.1 Distance Computation with AQ................2005 9.3.2 AQ Encoding and Codebook Learning .............. 9.4 Quantization for Inner Product................0 cee eee eee 9.4.1 Score-aware Quantization ............... 0c eee eee References 0.0... cece teen eee ene Sketching ..........00..0 0000s 10.1 Intuition 10.2 Linear Sketching with the JL Transform................... 10.2.1 Theoretical Analysis ...........00. 0.00.00 c cece eee 10.3 Asymmetric Sketching ............0 0.000 e cece eee eee 10.3.1 The Sketching Algorithm 27 27 28 28 29 30 31 32 33 33 35 35 40 43 43 45 46 49 50 | 2401.09350#36 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
speech, or a mix of these data modalities. That happens regardless of whether
data is represented by hand-crafted features or learnt embeddings. Collect a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 37 | 10 Sketching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143 10.1 Intuition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143 10.2 Linear Sketching with the JL Transform . . . . . . . . . . . . . . . . . . . 145 10.2.1 Theoretical Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146 10.3 Asymmetric Sketching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 10.3.1 The Sketching Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 150
# xix
# xx
Contents | 2401.09350#37 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
10.3.2 Inner Product Approximation . . . . . . . . . . . . . . . . . . . 151
10.3.3 Theoretical Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
10.3.4 Fixing the Sketch Size . . . . . . . . . . . . . . . . . . . . . . . . . . 159
10.4 Sketching by Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
10.4.1 The Sketching Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 160
10.4.2 Inner Product Approximation . . . . . . . . . . . . . . . . . . . . 161
10.4.3 Theoretical Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
A Collections
References
B Probability Review
B.1 Probability
B.2 Random Variables
B.3 Conditional Probability
B.4 Independence
B.5 Expectation and Variance
B.6 Central Limit Theorem
C Concentration of Measure
C.1 Markov's Inequality
C.2 Chebyshev's Inequality
C.3 Chernoff Bounds
C.4 Hoeffding's Inequality
C.5 Bennett's Inequality
D Linear Algebra Review
D.1 Inner Product
D.2 Norms
D.3 Distance
Part I Introduction
Chapter 1 Vector Retrieval
Abstract This chapter sets the stage for the remainder of this monograph. It explains where vectors come from, how they have come to represent data of any modality, and why they are a useful mathematical tool in machine learning. It then describes the structure we typically expect from a collection of vectors: that similar objects get vector representations that are close to each other in an inner product or metric space. We then define the problem of top-k retrieval over a well-structured collection of vectors, and explore its different flavors, including approximate retrieval.
# 1.1 Vector Representations
We routinely use ordered lists of numbers, or vectors, to describe objects of any shape or form. Examples abound. Any geographic location on earth can be recognized as a vector consisting of its latitude and longitude. A desk can be described as a vector that represents its dimensions, area, color, and other quantifiable properties. A photograph as a list of pixel values that together paint a picture. A sound wave as a sequence of frequencies.
Vector representations of objects have long been an integral part of the machine learning literature. Indeed, a classifier, a regression model, or a ranking function learns patterns from, and acts on, vector representations of data. In the past, this vector representation of an object was nothing more than a collection of its features. Every feature described some facet of the object (for example, the color intensity of a pixel in a photograph) as a continuous or discrete value. The idea was that, while individual features describe only a small part of the object, together they provide sufficiently powerful statistics about the object and its properties for the machine learnt model to act on. The features that led to the vector representation of an object were generally hand-crafted functions. To make sense of that, let us consider a text document in English. Strip the document of grammar and word order, and
[Figure 1.1 here: the sentences "The quick brown fox jumps over the lazy dog" and "The five boxing wizards jump quickly" mapped to sparse term-frequency vectors.]
Fig. 1.1: Vector representation of a piece of text by adopting a "bag of words" view: A text document, when stripped of grammar and word order, can be thought of as a vector, where each coordinate represents a term in our vocabulary and its value records the frequency of that term in the document or some function of it. The resulting vectors are typically sparse; that is, they have very few non-zero coordinates.
we end up with a set of words, more commonly known as a "bag of words." This set can be summarized as a histogram.
If we designated every term in the English vocabulary to be a dimension in a (naturally) high-dimensional space, then the histogram representation of the document can be encoded as a vector. The resulting vector has relatively few non-zero coordinates, and each non-zero coordinate records the frequency of a term present in the document. This is illustrated in Figure 1.1 for a toy example. More generally, non-zero values may be a function of a term's frequency in the document and its propensity in a collection, that is, the likelihood of encountering the term [Salton and Buckley, 1988].
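To make the construction concrete, here is a minimal sketch of the bag-of-words mapping just described. The tiny vocabulary and the tokenizer (lowercased whitespace splitting) are illustrative assumptions, not part of the original figure.

```python
from collections import Counter

def bag_of_words_vector(text: str, vocabulary: list[str]) -> list[int]:
    """Map a document to a term-frequency vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[term] for term in vocabulary]

vocabulary = ["quick", "brown", "fox", "jumps", "lazy", "dog", "wizards"]
doc = "The quick brown fox jumps over the lazy dog"
print(bag_of_words_vector(doc, vocabulary))  # [1, 1, 1, 1, 1, 1, 0]
```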
The advent of deep learning and, in particular, Transformer-based models [Vaswani et al., 2017] brought about vector representations that are beyond the elementary formation above. The resulting representation is often, as a single entity, referred to as an embedding, instead of a "feature vector," though the underlying concept remains unchanged: an object is encoded as a real d-dimensional vector, a point in ℝ^d.
Let us go back to the example from earlier to see how the embedding of a text document could be different from its representation as a frequency-based feature vector. Let us maintain the one-to-one mapping between coordinates and terms in the English vocabulary. Remember that in the "lexical" representation from earlier, if a coordinate was non-zero, that implied that the corresponding term was present in the document and its value indicated its frequency-based feature. Here we instead learn to turn coordinates on or off and, when we turn a coordinate on, we want its value to predict the significance of the corresponding term based on semantics and contextual information. For example, the (absent) synonyms of a (present) term may get a non-zero value, and terms that offer little discriminative power in the given context become 0 or close to it. This basic idea has been explored extensively
by many recent models of text [Bai et al., 2020, Formal et al., 2021, 2022, Zhuang and Zuccon, 2022, Dai and Callan, 2020, Gao et al., 2021, Mallia et al., 2021, Zamani et al., 2018, Lin and Ma, 2021] and has been shown to produce effective representations.
Vector representations of text need not be sparse. While sparse vectors with dimensions that are grounded in the vocabulary are inherently interpretable, text documents can also be represented with lower-dimensional dense vectors (where every coordinate is almost surely non-zero). This is, in fact, the most dominant form of vector representation of text documents in the literature [Lin et al., 2021, Karpukhin et al., 2020, Xiong et al., 2021, Reimers and Gurevych, 2019, Santhanam et al., 2022, Khattab and Zaharia, 2020]. Researchers have also explored hybrid representations of text where vectors have a dense subspace and an orthogonal sparse subspace [Chen et al., 2022, Bruch et al., 2023, Wang et al., 2021, Kuzi et al., 2020, Karpukhin et al., 2020, Ma et al., 2021, 2020, Wu et al., 2019].
Unsurprisingly, the same embedding paradigm can be extended to other data modalities beyond text: Using deep learning models, one may embed images, videos, and audio recordings into vectors. In fact, it is even possible to project different data modalities (e.g., images and text) together into the same vector space and preserve some property of interest [Zhang et al., 2020, Guo et al., 2019].
It appears, then, that vectors are everywhere. Whether they are the result of hand-crafted functions that capture features of the data or are the output of learnt models; whether they are dense, sparse, or both, they are effective representations of data of any modality.
But what precisely is the point of turning every piece of data into a vector? One answer to that question takes us to the fascinating world of retrieval.
# 1.2 Vectors as Units of Retrieval
It would make for a vapid exercise if all we had were vector representations of data without any structure governing a collection of them. To give a collection of points some structure, we must first ask ourselves what goal we are trying to achieve by turning objects into vectors. It turns out, we often intend for the vector representation of two similar objects to be "close" to each other according to some well-defined distance function.
That is the structure we desire: Similarity in the vector space must imply similarity between objects. So, as we engineer features to be extracted from an object, or design a protocol to learn a model to produce embeddings of data, we must choose the dimensionality d of the target space (a subset of ℝ^d)
along with a distance function δ(·, ·). Together, these define an inner product or metric space.
Consider again the lexical representation of a text document where d is the size of the English vocabulary. Let δ be the distance variant of the Jaccard index, δ(u, v) = −J(u, v) ≜ −|nz(u) ∩ nz(v)| / |nz(u) ∪ nz(v)|, where nz(u) = {i | u_i ≠ 0} with u_i denoting the i-th coordinate of vector u.
In the resulting space, if vectors u and v have a smaller distance than vectors u and w, then we can clearly conclude that the document represented by u is lexically more similar to the one represented by v than it is to the document w represents. That is because the distance (or, in this case, similarity) function reflects the amount of overlap between the terms present in one document with another.
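The set-based distance above is straightforward to compute directly from the non-zero coordinates. A minimal sketch follows; it assumes dense Python lists as inputs and at least one non-zero coordinate between the two vectors.

```python
def jaccard_distance(u: list[float], v: list[float]) -> float:
    """Negated Jaccard index of the sets of non-zero coordinates, nz(u) and nz(v)."""
    nz_u = {i for i, x in enumerate(u) if x != 0}
    nz_v = {i for i, x in enumerate(v) if x != 0}
    return -len(nz_u & nz_v) / len(nz_u | nz_v)

u = [1, 0, 2, 0]
v = [3, 0, 0, 4]
print(jaccard_distance(u, v))  # -1/3: one shared term out of three distinct terms
```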
We should be able to make similar arguments given a semantic embedding of text documents. Again consider the sparse embeddings with d being the size of the vocabulary, and more concretely, take Splade [Formal et al., 2021] as a concrete example. This model produces real-valued sparse vectors in an inner product space. In other words, the objective of its learning procedure is to maximize the inner product between similar vectors, where the inner product between two vectors u and v is denoted by ⟨u, v⟩ and is computed as ∑_i u_i v_i.
In the resulting space, if u, v, and w are generated by Splade with the property that ⟨u, v⟩ > ⟨u, w⟩, then we can conclude that, according to Splade, documents represented by u and v are semantically more similar to each other than u is to w. There are numerous other examples of models that optimize for the angular distance or Euclidean distance (L2) between vectors to preserve (semantic) similarity.
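As a small illustration of inner products over sparse vectors of this kind, the sketch below stores a vector as a map from coordinate index to value; the specific coordinates and weights are made up for the example.

```python
def sparse_inner_product(u: dict[int, float], v: dict[int, float]) -> float:
    """Inner product of two sparse vectors stored as {coordinate: value} maps."""
    if len(u) > len(v):
        u, v = v, u                      # iterate over the smaller vector
    return sum(value * v.get(i, 0.0) for i, value in u.items())

u = {3: 0.5, 17: 1.2}                    # e.g., learnt term weights
v = {3: 2.0, 42: 0.7}
print(sparse_inner_product(u, v))        # 1.0: only coordinate 3 overlaps
```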
What can we do with a well-characterized collection of vectors that represent real-world objects? Quite a lot, it turns out. One use case is the topic of this monograph: the fundamental problem of retrieval.
We are often interested in finding k objects that have the highest degree of similarity to a query object. When those objects are represented by vectors in a collection X, where the distance function δ(·, ·) is reflective of similarity, we may formalize this top-k question mathematically as finding the k minimizers of distance with the query point!
We state that formally in the following definition:
Definition 1.1 (Top-k Retrieval) Given a distance function δ(·, ·), we wish to pre-process a collection of data points X ⊂ ℝ^d in time that is polynomial in |X| and d, to form a data structure (the "index") whose size is
polynomial in |X| and d, so as to efficiently solve the following in time o(|X| d) for an arbitrary query q ∈ ℝ^d:
arg min^(k)_{u ∈ X} δ(q, u).    (1.1)
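For intuition, Definition 1.1 can always be solved by brute force: compute δ(q, u) for every u ∈ X and keep the k smallest. The sketch below does exactly that with NumPy, using the L2 distance as a stand-in for δ; the collection size and dimensionality are arbitrary choices for the example.

```python
import numpy as np

def exhaustive_top_k(X: np.ndarray, q: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k nearest rows of X to q, by L2 distance."""
    distances = np.linalg.norm(X - q, axis=1)   # one distance per data point
    top_k = np.argpartition(distances, k)[:k]   # unordered k smallest, O(|X|)
    return top_k[np.argsort(distances[top_k])]  # sort only those k

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 64))
q = rng.normal(size=64)
print(exhaustive_top_k(X, q, k=5))
```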
A web search engine, for example, finds the most relevant documents to your query by first formulating it as a top-k retrieval problem over a collection of (not necessarily text-based) vectors. In this way, it quickly finds the subset of documents from the entire web that may satisfy the information need captured in your query. Question answering systems, conversational agents (such as Siri, Alexa, and ChatGPT), recommendation engines, image search, outlier detectors, and myriad other applications that are at the forefront of many online services and in many consumer gadgets depend on data structures and algorithms that can answer the top-k retrieval question as efficiently and as effectively as possible.
# 1.3 Flavors of Vector Retrieval
We create an instance of the deceptively simple problem formalized in Definition 1.1 the moment we acquire a collection of vectors X together with a distance function δ. In the remainder of this monograph, we assume that there is some function, either manually engineered or learnt, that transforms objects into vectors. So, from now on, X is a given.
The distance function, then, specifies the flavor of the top-k retrieval problem we need to solve. We will review these variations and explore what each entails.
# 1.3.1 Nearest Neighbor Search
In many cases, the distance function is derived from a proper metric where non-negativity, coincidence, symmetry, and triangle inequality hold for δ. A clear example of this is the L2 distance: δ(u, v) = ∥u − v∥₂. The resulting problem, illustrated for a toy example in Figure 1.2(a), is known as k-Nearest Neighbors (k-NN) search:
arg min^(k)_{u ∈ X} ∥q − u∥₂ = arg min^(k)_{u ∈ X} ∥q − u∥₂².    (1.2)
Fig. 1.2: Variants of vector retrieval for a toy vector collection in ℝ^2. In Nearest Neighbor search, we find the data point whose L2 distance to the query point is minimal (v for top-1 search). In Maximum Cosine Similarity search, we instead find the point whose angular distance to the query point is minimal (v and p are equidistant from the query). In Maximum Inner Product Search, we find a vector that maximizes the inner product with the query vector. This can be understood as letting the hyperplane orthogonal to the query point sweep the space towards the origin; the first vector to touch the sweeping plane is the maximizer of inner product. Another interpretation is this: the shaded region in the figure contains all the points y for which p is the answer to arg max_{x ∈ {u, v, w, p}} ⟨x, y⟩.
# 1.3.2 Maximum Cosine Similarity Search
The distance function may also be the angular distance between vectors, which is again a proper metric. The resulting minimization problem can be stated as follows, though its equivalent maximization problem (involving the cosine of the angle between vectors) is perhaps more recognizable:
arg min^(k)_{u ∈ X} (1 − ⟨q, u⟩ / (∥q∥₂ ∥u∥₂)) = arg max^(k)_{u ∈ X} ⟨q, u⟩ / ∥u∥₂.    (1.3)
The latter is referred to as the k-Maximum Cosine Similarity (k-MCS) problem. Note that, because the norm of the query point, ∥q∥₂, is a constant in the optimization problem, it can simply be discarded; the resulting distance function is rank-equivalent to the angular distance. Figure 1.2(b) visualizes this problem on a toy collection of vectors.
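A minimal sketch of this observation: if we L2-normalize the data points once, ranking by the plain inner product with q is rank-equivalent to ranking by cosine similarity, so the query norm never needs to be touched. The NumPy layout below (rows as data points) is an assumption of the example.

```python
import numpy as np

def top_k_cosine(X: np.ndarray, q: np.ndarray, k: int) -> np.ndarray:
    """k-MCS: pre-normalize rows of X, then rank by inner product with q."""
    X_unit = X / np.linalg.norm(X, axis=1, keepdims=True)  # L2-normalize rows
    scores = X_unit @ q               # rank-equivalent to cosine similarity
    top_k = np.argpartition(-scores, k)[:k]
    return top_k[np.argsort(-scores[top_k])]

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 64))
q = rng.normal(size=64)
print(top_k_cosine(X, q, k=5))
```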
# 1.3.3 Maximum Inner Product Search
Both of the problems in Equations (1.2) and (1.3) are special instances of a more general problem known as k-Maximum Inner Product Search (k-MIPS):
arg max^(k)_{u ∈ X} ⟨q, u⟩.    (1.4)
This is easy to see for k-MCS: If, in a pre-processing step, we L2-normalized all vectors in X so that u is transformed to u′ = u/∥u∥₂, then ∥u′∥₂ = 1 and therefore Equation (1.3) reduces to Equation (1.4).
As for a reduction of k-NN to k-MIPS, we can expand Equation (1.2) as follows:
arg min^(k)_{u ∈ X} ∥q − u∥₂² = arg min^(k)_{u ∈ X} (∥q∥₂² − 2⟨q, u⟩ + ∥u∥₂²) = arg max^(k)_{u′ ∈ X′} ⟨q′, u′⟩,
where we have discarded the constant term ∥q∥₂², and defined q′ ∈ ℝ^{d+1} as the concatenation of q ∈ ℝ^d and a 1-dimensional vector with value −1/2 (i.e., q′ = [q, −1/2]), and u′ ∈ ℝ^{d+1} as [u, ∥u∥₂²].
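The following sketch carries out this augmentation numerically and checks that the top-1 maximizer of ⟨q′, u′⟩ coincides with the nearest neighbor of q; the random Gaussian collection is only a test harness.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1_000, 32))
q = rng.normal(size=32)

# Augment: u' = [u, ||u||^2] for data points, q' = [q, -1/2] for the query.
X_aug = np.hstack([X, (np.linalg.norm(X, axis=1) ** 2)[:, None]])
q_aug = np.append(q, -0.5)

nn_by_distance = np.argmin(np.linalg.norm(X - q, axis=1))
nn_by_mips = np.argmax(X_aug @ q_aug)
assert nn_by_distance == nn_by_mips  # the reduction preserves the top-1 answer
```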
The k-MIPS problem, illustrated on a toy collection in Figure 1.2(c), does not come about just as the result of the reductions shown above. In fact, there exist embedding models (such as Splade, as discussed earlier) that learn vector representations with respect to inner product as the distance function. In other words, k-MIPS is an important problem in its own right.
# 1.3.3.1 Properties of MIPS
In a sense, then, it is sufficient to solve the k-MIPS problem as it is the umbrella problem for much of vector retrieval. Unfortunately, k-MIPS is a much harder problem than the other variants. That is because inner product is not a proper metric. In particular, it is not non-negative and does not satisfy the triangle inequality, so that ⟨u, v⟩ ≰ ⟨u, w⟩ + ⟨w, v⟩ in general.
Perhaps more troubling is the fact that even "coincidence" is not guaranteed. In other words, it is not true in general that a vector u maximizes inner product with itself: u ≠ arg max_{v ∈ X} ⟨v, u⟩!
As an example, suppose v and p = αv for some α > 1 are vectors in the collection X (a case demonstrated in Figure 1.2(c)). Clearly, we have that
⟨v, p⟩ = α⟨v, v⟩ > ⟨v, v⟩, so that p (and not v) is the solution to MIPS¹ for the query point v.
In high-enough dimensions and under certain statistical conditions, however, coincidence is reinstated for MIPS with high probability. One such case is stated in the following theorem.
Theorem 1.1 Suppose data points X are independent and identically distributed (iid) in each dimension and drawn from a zero-mean distribution. Then, for any u ∈ X:
lim_{d→∞} P[u = arg max_{v ∈ X} ⟨u, v⟩] = 1.
Proof. Denote by Var[·] and E[·] the variance and expected value operators. By the conditions of the theorem, it is clear that E[⟨u, u⟩] = d E[Z²] where Z is the random variable that generates each coordinate of the vector. We can also see that E[⟨u, X⟩] = 0 for a random data point X, and that Var[⟨u, X⟩] = ∥u∥₂² Var[Z]. We wish to claim that u ∈ X is the solution to a MIPS problem where u is also the query point. That happens if and only if every other vector in X has an inner product with u that is smaller than ⟨u, u⟩. So that:
P[u = arg max_{v ∈ X} ⟨u, v⟩] = P[⟨u, v⟩ < ⟨u, u⟩ ∀ v ∈ X]
= 1 − P[∃ v ∈ X s.t. ⟨u, v⟩ > ⟨u, u⟩]
≥ 1 − ∑_{v ∈ X} P[⟨u, v⟩ > ⟨u, u⟩]    (by the union bound)
= 1 − |X| P[⟨u, X⟩ > ⟨u, u⟩].    (by iid)
Let us turn to the last term and bound the probability for a random data point:
P[⟨u, X⟩ > ⟨u, u⟩] = P[Y > d E[Z²]],    where Y ≜ ⟨u, X⟩ − ⟨u, u⟩ + d E[Z²].
The expected value of Y is 0. Denote by σ² its variance. By the application of the one-sided Chebyshev's inequality,² we arrive at the following bound:
P[⟨u, X⟩ > ⟨u, u⟩] ≤ σ² / (σ² + d² (E[Z²])²).
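As a quick sanity check of the bound just derived, the sketch below samples standard Gaussian vectors (so Var[Z] = E[Z²] = 1), estimates P[⟨u, X⟩ > ⟨u, u⟩] by simulation, and compares it against the one-sided Chebyshev bound, using the realized ⟨u, u⟩ as the threshold; the (loose) inequality should hold comfortably. All constants here are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, trials = 64, 100_000
u = rng.normal(size=d)                  # a fixed data point
X = rng.normal(size=(trials, d))        # random points with iid N(0, 1) coordinates
t = float(u @ u)                        # threshold <u, u>
sigma2 = float(u @ u)                   # Var[<u, X>] = ||u||^2 Var[Z], with Var[Z] = 1
empirical = float(np.mean(X @ u > t))
bound = sigma2 / (sigma2 + t * t)       # one-sided Chebyshev tail bound
print(f"empirical={empirical:.2e}  chebyshev bound={bound:.2e}")
```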
¹ When k = 1, we drop the symbol k from the name of the retrieval problem. So we write MIPS instead of 1-MIPS.
² The one-sided Chebyshev's inequality for a random variable X with mean μ and variance σ² states that P[X − μ ≥ t] ≤ σ² / (σ² + t²).
[plot: fraction of points that are their own MIPS solution vs. dimensionality d (log scale), for N(0, 1), U(−√12/2, √12/2), Exp(1), and U(0, √12)]
[plot: the same fraction vs. dimensionality for real collections, including GloVe, NQ, Quora, and Fever embeddings]
(a) Synthetic (b) Real
Fig. 1.3: Probability that u ∈ X is the solution to MIPS over X with query u versus the dimensionality d, for various synthetic and real collections X. For synthetic collections, |X| = 100,000. Appendix A gives a description of the real collections. Note that, for real collections, we estimate the reported probability by sampling 10,000 data points and using them as queries. Furthermore, we do not pre-process the vectors; importantly, we do not L2-normalize the collections.
Note that σ² is a function of the sum of iid random variables and, as such, grows linearly with d. In the limit, this probability tends to 0. We have thus shown that lim_{d→∞} P[u = arg max_{v ∈ X} ⟨u, v⟩] ≥ 1, which concludes the proof. □
# 1.3.3.2 Empirical Demonstration of the Lack of Coincidence
Let us demonstrate the effect of Theorem 1.1 empirically. First, let us choose distributions that meet the requirements of the theorem: a Gaussian distribution with mean 0 and variance 1, and a uniform distribution over [−√12/2, √12/2] (with variance 1) will do. For comparison, choose another set of distributions that do not have the requisite properties: Exponential with rate 1 and uniform over [0, √12]. Having fixed the distributions, we next sample 100,000 random vectors from them to form a collection X. We then take each data point, use it as a query in MIPS over X, and report the proportion of data points that are solutions to their own search.
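A hedged sketch of the synthetic experiment just described, scaled down from 100,000 to 10,000 vectors and 500 sampled queries to keep it cheap to run:

```python
import numpy as np

def self_retrieval_rate(X: np.ndarray, n_queries: int = 500) -> float:
    """Fraction of sampled points that are the MIPS solution to themselves."""
    rng = np.random.default_rng(0)
    idx = rng.choice(len(X), size=n_queries, replace=False)
    scores = X[idx] @ X.T                # inner products with the whole collection
    return float(np.mean(np.argmax(scores, axis=1) == idx))

rng = np.random.default_rng(0)
for d in (2, 16, 128):
    X = rng.normal(size=(10_000, d))     # zero-mean iid coordinates, as in the theorem
    print(d, self_retrieval_rate(X))     # the rate should climb toward 1 with d
```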
Figure 1.3(a) illustrates the results of this experiment. As expected, for the Gaussian and centered uniform distributions, the ratio of interest approaches 1 when d is sufficiently large. Surprisingly, even when the distributions do not strictly satisfy the conditions of the theorem, we still observe the convergence
(a) k-NN (b) k-MCS (c) k-MIPS
Fig. 1.4: Approximate variants of top-1 retrieval for a toy collection in ℝ^2. In NN, we admit vectors that are at most ϵ away from the optimal solution. As such, x and y are both valid solutions as they are in a ball with radius (1 + ϵ)δ(q, x) centered at the query. Similarly, in MCS, we accept a vector (e.g., x) if its angle with the query point is at most (1 + ϵ) times the angle between the query and the optimal vector (i.e., v). For the MIPS example, assuming that the inner product of query and x is at most (1 − ϵ) times the inner product of query and p, then x is an acceptable solution.
of that ratio to 1. So it appears that the requirements of Theorem 1.1 are more forgiving than one may imagine.
We also repeat the exercise above on several real-world collections, a description of which can be found in Appendix A along with salient statistics. The results of these experiments are visualized in Figure 1.3(b). As expected, whether a data point maximizes inner product with itself entirely depends on the underlying data distribution. We can observe that, for some collections in high dimensions, we are likely to encounter coincidence in the sense we defined earlier, but for others that is clearly not the case. It is important to keep this difference between synthetic and real collections in mind when designing experiments that evaluate the performance of MIPS systems.
# 1.4 Approximate Vector Retrieval
Saying one problem is harder than another neither implies that we cannot approach the harder problem, nor does it mean that the "easier" problem is easy to solve. In fact, none of these variants of vector retrieval (k-NN, k-MCS, and k-MIPS) can be solved exactly and efficiently in high dimensions. Instead, we must either accept that the solution would be inefficient (in terms of space- or time-complexity), or allow some degree of error.
The first case of solving the problem exactly but inefficiently is uninteresting: If we are looking to find the solution for k = 1, for example, it is enough to compute the distance function for every vector in the collection and the query, resulting in linear complexity. When k > 1, the total time complexity is O(|X| d log k), where |X| is the size of the collection. So it typically makes more sense to investigate the second strategy of admitting error.
That argument leads naturally to the class of ϵ-approximate vector retrieval problems. This idea can be formalized rather easily for the special case where k = 1: The approximate solution for the top-1 retrieval is satisfactory so long as the vector u returned by the algorithm is at most a (1 + ϵ) factor farther than the optimal vector u*, according to δ(·, ·) and for some arbitrary ϵ > 0:
δ(q, u) ≤ (1 + ϵ) δ(q, u*).    (1.5)
Figure 1.4 renders the solution space for an example collection in ℝ^2.
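A direct transcription of the acceptance test in Equation (1.5); the L2 distance and the toy points below are illustrative assumptions.

```python
import numpy as np

def is_epsilon_approximate(q, u, u_star, delta, epsilon: float) -> bool:
    """Check the acceptance criterion of Equation (1.5)."""
    return delta(q, u) <= (1 + epsilon) * delta(q, u_star)

l2 = lambda a, b: float(np.linalg.norm(a - b))
q = np.array([0.0, 0.0])
u_star = np.array([1.0, 0.0])        # optimal point at distance 1
u = np.array([1.05, 0.0])            # candidate at distance 1.05
print(is_epsilon_approximate(q, u, u_star, l2, epsilon=0.1))  # True
```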
The formalism above extends to the more general case where k > 1 in an obvious way: a vector u is a valid solution to the ϵ-approximate top-k problem if its distance to the query point is at most (1 + ϵ) times the distance to the k-th optimal vector. This is summarized in the following definition:
Definition 1.2 (ϵ-Approximate Top-k Retrieval) Given a distance function δ(·, ·), we wish to pre-process a collection of data points X ⊂ ℝ^d in time that is polynomial in |X| and d, to form a data structure (the "index") whose size is polynomial in |X| and d, so as to efficiently solve the following in time o(|X| d) for an arbitrary query q ∈ ℝ^d and ϵ > 0:
S = arg min^(k)_{u ∈ X} δ(q, u),
such that for all u ∈ S, Equation (1.5) is satisfied where u* is the k-th optimal vector obtained by solving the problem in Definition 1.1.
Despite the extension to top-k above, it is more common to characterize the effectiveness of an approximate top-k solution as the percentage of correct vectors that are present in the solution. Concretely, if S = arg min^(k)_{u ∈ X} δ(q, u) is the exact set of top-k vectors, and Ŝ is the approximate set, then the accuracy of the approximate algorithm can be reported as |S ∩ Ŝ|/k.³
This monograph primarily studies the approximate⁴ retrieval problem. As such, while we state a retrieval problem using the arg max or arg min notation, we are generally only interested in approximate solutions to it.
³ This quantity is also known as recall in the literature, because we are counting the number of vectors our algorithm recalls from the exact solution set.
⁴ We drop ϵ from the name when it is clear from context.
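Computing this recall measure is a one-liner once the exact and approximate result sets are known; the index sets below are fabricated for illustration.

```python
def top_k_recall(exact: set, approximate: set, k: int) -> float:
    """|S ∩ S_hat| / k: the fraction of the true top-k that was retrieved."""
    return len(exact & approximate) / k

print(top_k_recall({1, 2, 3, 4, 5}, {1, 2, 3, 9, 11}, k=5))  # 0.6
```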
# References
Y. Bai, X. Li, G. Wang, C. Zhang, L. Shang, J. Xu, Z. Wang, F. Wang, and Q. Liu. Sparterm: Learning term-based sparse representation for fast text retrieval, 2020.
S. Bruch, S. Gai, and A. Ingber. An analysis of fusion functions for hybrid retrieval. ACM Transactions on Information Systems, 42(1), 8 2023.
T. Chen, M. Zhang, J. Lu, M. Bendersky, and M. Najork. Out-of-domain semantics to the rescue! zero-shot hybrid retrieval models. In Advances in Information Retrieval: 44th European Conference on IR Research, pages 95–110, 2022.
Z. Dai and J. Callan. Context-aware term weighting for first stage passage retrieval. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1533–1536, 2020.
T. Formal, B. Piwowarski, and S. Clinchant. Splade: Sparse lexical and expansion model for first stage ranking. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2288–2292, 2021.
T. Formal, C. Lassance, B. Piwowarski, and S. Clinchant. From distillation to hard negative sampling: Making sparse neural ir models more effective. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2353–2359, 2022.
L. Gao, Z. Dai, and J. Callan. COIL: Revisit exact lexical match in information retrieval with contextualized inverted list. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 3030–3042, 2021.
W. Guo, J. Wang, and S. Wang. Deep multimodal representation learning: A survey. IEEE Access, 7:63373–63394, 2019.
V. Karpukhin, B. Oguz, S. Min, P. Lewis, L. Wu, S. Edunov, D. Chen, and W.-t. Yih. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 6769–6781, Nov. 2020.
O. Khattab and M. Zaharia. ColBERT: Efficient and effective passage search via contextualized late interaction over BERT. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 39–48, 2020.
S. Kuzi, M. Zhang, C. Li, M. Bendersky, and M. Najork. Leveraging semantic and lexical matching to improve the recall of document retrieval systems: A hybrid approach, 2020.
J. Lin and X. Ma. A few brief notes on DeepImpact, COIL, and a conceptual framework for information retrieval techniques, 2021.
J. Lin, R. Nogueira, and A. Yates. Pretrained Transformers for Text Ranking: BERT and Beyond. Springer Cham, 2021.
J. Ma, I. Korotkov, K. Hall, and R. T. McDonald. Hybrid first-stage retrieval models for biomedical literature. In CLEF, 2020.
X. Ma, K. Sun, R. Pradeep, and J. J. Lin. A replication study of dense passage retriever, 2021.
A. Mallia, O. Khattab, T. Suel, and N. Tonellotto. Learning passage impacts for inverted indexes. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1723–1727, 2021.
N. Reimers and I. Gurevych. Sentence-BERT: Sentence embeddings using siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 11 2019.
G. Salton and C. Buckley. Term-weighting approaches in automatic text retrieval. Information Processing and Management, 24(5):513–523, 1988.
K. Santhanam, O. Khattab, J. Saad-Falcon, C. Potts, and M. Zaharia. ColBERTv2: Effective and efficient retrieval via lightweight late interaction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3715–3734, July 2022.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 6000–6010, 2017.
2401.09350 | 73 | S. Wang, S. Zhuang, and G. Zuccon. Bert-based dense retrievers require interpolation with bm25 for effective passage retrieval. In Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval, page 317â324, 2021.
X. Wu, R. Guo, D. Simcha, D. Dopson, and S. Kumar. Efficient inner product approximation in hybrid spaces, 2019.
L. Xiong, C. Xiong, Y. Li, K.-F. Tang, J. Liu, P. Bennett, J. Ahmed, and A. Overwijk. Approximate nearest neighbor negative contrastive learning for dense text retrieval. In International Conference on Learning Repre- sentations, 4 2021.
H. Zamani, M. Dehghani, W. B. Croft, E. Learned-Miller, and J. Kamps. From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 497–506, 2018.
C. Zhang, Z. Yang, X. He, and L. Deng. Multimodal intelligence: Representation learning, information fusion, and applications. IEEE Journal of Selected Topics in Signal Processing, 14(3):478–493, 2020.
S. Zhuang and G. Zuccon. Fast passage re-ranking with contextualized exact term matching and efficient passage expansion. In Workshop on Reaching Efficiency in Neural Information Retrieval, the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2022.
# Chapter 2 Retrieval Stability in High Dimensions
Abstract We are about to embark on a comprehensive survey and analysis of vector retrieval methods in the remainder of this monograph. It may thus sound odd to suggest that you may not need any of these clever ideas in order to perform vector retrieval. Sometimes, under bizarrely general conditions that we will explore formally in this chapter, an exhaustive search (where we compute the distance between query and every data point, sort, and return the top k) is likely to perform much better in both accuracy and search latency! The reason why that may be the case has to do with the approximate nature of algorithms and the oddities of high dimensions. We elaborate this point by focusing on the top-1 case.
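Concretely, the exhaustive baseline alluded to above fits in a few lines. The following is a minimal sketch, in Python with NumPy, over a hypothetical toy collection; the function name and setup are ours, not part of the monograph:

```python
import numpy as np

def exhaustive_top_k(q, X, k):
    # Distance from the query to every data point, then a partial sort.
    distances = np.linalg.norm(X - q, axis=1)
    top = np.argpartition(distances, k)[:k]   # k smallest, unordered
    return top[np.argsort(distances[top])]    # rank just those k

X = np.random.randn(10_000, 128)  # hypothetical data collection
q = np.random.randn(128)          # hypothetical query
print(exhaustive_top_k(q, X, k=10))
```

Despite its O(md) cost per query, this procedure is exact, branch-free, and cache-friendly, which is why it can outpace approximate algorithms in the regimes this chapter describes.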
# 2.1 Intuition
Consider the case of proper distance functions where δ(·, ·) is a metric. Recall from Equation (1.5) that a vector u is an acceptable ϵ-approximate solution if its distance to the query q according to δ(·, ·) is at most (1 + ϵ)δ(q, u∗), where u∗ is the optimal vector and ϵ is an arbitrary parameter. As shown in Figure 1.4(a) for NN, this means that, if you centered an Lp ball around q with radius (1 + ϵ)δ(q, u∗), then u is in that ball.
So, what if we find ourselves in a situation where no matter how small ϵ is, too many vectors, or indeed all vectors, from our collection X end up in the (1 + ϵ)-enlarged ball? Then, by definition, every vector is an ϵ-approximate nearest neighbor of q!
In such a configuration of points, it is questionable whether the notion of “nearest neighbor” has any meaning at all: If the query point were perturbed by some noise as small as ϵ, then its true nearest neighbor would suddenly change, making NN unstable. Because of that instability, any approximate algorithm will need to examine a large portion or nearly all of the data points anyway, reducing thereby to a procedure that performs more poorly than exhaustive search.
That sounds troubling. But when might we experience that phenomenon? That is the question Beyer et al. [1999] investigate in their seminal paper.
It turns out, one scenario where vector retrieval becomes unstable as dimensionality d increases is if a) data points are iid in each dimension, b) query points are similarly drawn iid in each dimension, and c) query points are independent of data points. This includes many synthetic collections that are, even today, routinely but inappropriately used for evaluation purposes.
On the other hand, when data points form clusters and query points fall into these same clusters, then the (approximate) “nearest cluster” problem is meaningful, but not necessarily the approximate NN problem. So while it makes sense to use approximate algorithms to obtain the nearest cluster, search within clusters may as well be exhaustive. This, as we will learn in Chapter 7, is the basis for a popular and effective class of vector retrieval algorithms on real collections.
# 2.2 Formal Results
More generally, vector retrieval becomes unstable in high dimensions when the variance of the distance between query and data points grows substantially more slowly than the square of its expected value. That makes sense: intuitively, it means that more and more data points fall into the (1 + ϵ)-enlarged ball centered at the query. This can be stated formally as the following theorem due to Beyer et al. [1999], extended to any general distance function δ(·, ·).
Theorem 2.1 Suppose m data points X ⊂ Rd are drawn iid from a data distribution and a query point q is drawn independently of the data points from any distribution. Denote by X a random data point. If

$$\lim_{d \to \infty} \frac{\operatorname{Var}[\delta(q, X)]}{\mathbb{E}[\delta(q, X)]^2} = 0,$$

then for any ϵ > 0, $\lim_{d \to \infty} \mathbb{P}\big[\delta(q, X) \leq (1 + \epsilon)\,\delta(q, u^*)\big] = 1$, where u∗ is the vector closest to q.
Proof. Let $\delta^\circ = \max_{u \in \mathcal{X}} \delta(q, u)$ and $\delta^* = \min_{u \in \mathcal{X}} \delta(q, u)$. If we could show that, for some d-dependent positive α and β such that β/α = 1 + ϵ, we have $\lim_{d \to \infty} \mathbb{P}[\alpha \leq \delta^* \leq \delta^\circ \leq \beta] = 1$, then we are done. That is because, in that case, δ◦/δ∗ ≤ β/α = 1 + ϵ almost surely and the claim follows.
From the above, all that we need to do is to find α and β for a given d. Intuitively, we want the interval [α, β] to contain E[δ(q, X)], because we know from the condition of the theorem that the distances should concentrate around their mean. So α = (1 − η) E[δ(q, X)] and β = (1 + η) E[δ(q, X)] for some η seems like a reasonable choice. Letting η = ϵ/(ϵ + 2) gives us the desired ratio: β/α = 1 + ϵ.
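For completeness, the algebra behind this choice of η is a quick check:

$$\frac{\beta}{\alpha} = \frac{1 + \eta}{1 - \eta} = \frac{1 + \frac{\epsilon}{\epsilon + 2}}{1 - \frac{\epsilon}{\epsilon + 2}} = \frac{2\epsilon + 2}{2} = 1 + \epsilon.$$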
Now we must show that δ∗ and δ◦ belong to our chosen [α, β] interval almost surely in the limit. That happens if all distances belong to that interval. So:
$$\begin{aligned}
\lim_{d \to \infty} \mathbb{P}\big[\alpha \leq \delta^* \leq \delta^\circ \leq \beta\big] &= \lim_{d \to \infty} \mathbb{P}\big[\delta(q, u) \in [\alpha, \beta] \;\; \forall u \in \mathcal{X}\big] \\
&= \lim_{d \to \infty} \mathbb{P}\big[(1 - \eta)\,\mathbb{E}[\delta(q, X)] \leq \delta(q, u) \leq (1 + \eta)\,\mathbb{E}[\delta(q, X)] \;\; \forall u \in \mathcal{X}\big] \\
&= \lim_{d \to \infty} \mathbb{P}\big[\big|\delta(q, u) - \mathbb{E}[\delta(q, X)]\big| \leq \eta\,\mathbb{E}[\delta(q, X)] \;\; \forall u \in \mathcal{X}\big].
\end{aligned}$$
It is now easier to work with the complementary event:
$$1 - \lim_{d \to \infty} \mathbb{P}\big[\exists\, u \in \mathcal{X} \;\text{s.t.}\; \big|\delta(q, u) - \mathbb{E}[\delta(q, X)]\big| > \eta\,\mathbb{E}[\delta(q, X)]\big].$$
Using the Union Bound, the probability above is greater than or equal to the following:
$$\begin{aligned}
\lim_{d \to \infty} \mathbb{P}\big[\alpha \leq \delta^* \leq \delta^\circ \leq \beta\big] &\geq 1 - \lim_{d \to \infty} m\,\mathbb{P}\big[\big|\delta(q, u) - \mathbb{E}[\delta(q, X)]\big| > \eta\,\mathbb{E}[\delta(q, X)]\big] \\
&= 1 - \lim_{d \to \infty} m\,\mathbb{P}\big[\big(\delta(q, u) - \mathbb{E}[\delta(q, X)]\big)^2 > \eta^2\,\mathbb{E}[\delta(q, X)]^2\big].
\end{aligned}$$
Note that q is independent of the data points and that the data points are iid random variables. Therefore, the δ(q, u)'s are iid random variables as well. Furthermore, by assumption E[δ(q, X)] exists, making it possible to apply Markov's inequality to obtain the following bound:
$$\begin{aligned}
\lim_{d \to \infty} \mathbb{P}\big[\alpha \leq \delta^* \leq \delta^\circ \leq \beta\big] &\geq 1 - \lim_{d \to \infty} m\,\mathbb{P}\big[\big(\delta(q, X) - \mathbb{E}[\delta(q, X)]\big)^2 > \eta^2\,\mathbb{E}[\delta(q, X)]^2\big] \\
&\geq 1 - \lim_{d \to \infty} \frac{m}{\eta^2} \cdot \frac{\mathbb{E}\big[\big(\delta(q, X) - \mathbb{E}[\delta(q, X)]\big)^2\big]}{\mathbb{E}[\delta(q, X)]^2} \\
&= 1 - \lim_{d \to \infty} \frac{m}{\eta^2} \cdot \frac{\operatorname{Var}[\delta(q, X)]}{\mathbb{E}[\delta(q, X)]^2}.
\end{aligned}$$
By the conditions of the theorem, Var[δ(q, X)]/E[δ(q, X)]² → 0 as d → ∞, so that the last expression tends to 1 in the limit. That concludes the proof. □
We mentioned earlier that if data and query points are independent of each other and vectors are drawn iid in each dimension, then vector retrieval becomes unstable. For NN with the Lp norm, it is easy to show that such a configuration satisfies the conditions of Theorem 2.1, hence the instability. Consider the following for δ(q, u) = ‖q − u‖p:

$$\begin{aligned}
\lim_{d \to \infty} \frac{\operatorname{Var}\big[\lVert q - u \rVert_p^p\big]}{\mathbb{E}\big[\lVert q - u \rVert_p^p\big]^2} &= \lim_{d \to \infty} \frac{\operatorname{Var}\big[\sum_i (q_i - u_i)^p\big]}{\mathbb{E}\big[\sum_i (q_i - u_i)^p\big]^2} \\
&= \lim_{d \to \infty} \frac{\sum_i \operatorname{Var}\big[(q_i - u_i)^p\big]}{\big(\sum_i \mathbb{E}\big[(q_i - u_i)^p\big]\big)^2} \quad \text{(by independence)} \\
&= \lim_{d \to \infty} \frac{d\sigma^2}{d^2\mu^2} = \lim_{d \to \infty} \frac{\sigma^2}{d\mu^2} = 0,
\end{aligned}$$
where we write σ² = Var[(qᵢ − uᵢ)ᵖ] and µ = E[(qᵢ − uᵢ)ᵖ].
When δ(q, u) = −⟨q, u⟩, the same conditions result in retrieval instability:
$$\begin{aligned}
\lim_{d \to \infty} \frac{\operatorname{Var}\big[\langle q, u \rangle\big]}{\mathbb{E}\big[\langle q, u \rangle\big]^2} &= \lim_{d \to \infty} \frac{\operatorname{Var}\big[\sum_i q_i u_i\big]}{\mathbb{E}\big[\sum_i q_i u_i\big]^2} \\
&= \lim_{d \to \infty} \frac{\sum_i \operatorname{Var}[q_i u_i]}{\big(\sum_i \mathbb{E}[q_i u_i]\big)^2} \quad \text{(by independence)} \\
&= \lim_{d \to \infty} \frac{d\sigma^2}{d^2\mu^2} = 0,
\end{aligned}$$
where we write σ² = Var[qᵢuᵢ] and µ = E[qᵢuᵢ].
# 2.3 Empirical Demonstration of Instability
Let us examine the theorem empirically. We simulate the NN setting with L2 distance and report the results in Figure 2.1. In these experiments, we sample 1,000,000 data points with each coordinate drawing its value independently from the same distribution, and 1,000 query points sampled similarly. We then compute the minimum and maximum distance between each query point and the data collection, measure the ratio between them, and report the mean and standard deviation of the ratio across queries. We repeat this exercise for various values of dimensionality d and render the results in Figure 2.1(a). Unsurprisingly, this ratio tends to 1 as d → ∞, as predicted by the theorem. Another way to understand this result is to count the number of data points that qualify as approximate nearest neighbors. The theory predicts
[Figure 2.1: two panels. Panel (a): δ◦/δ∗ versus dimensionality d, for coordinates drawn from N(0, 0.1), Exp(1), and U(0, √2). Panel (b): percentage of approximate solutions versus ϵ (as a percent of δ∗), for d ∈ {64, 128, 256}.]
Fig. 2.1: Simulation results for Theorem 2.1 applied to NN with L2 distance. Left: The ratio of the maximum distance between a query and data points, δ◦, to the minimum distance, δ∗. The shaded region shows one standard deviation. As dimensionality increases, this ratio tends to 1. Right: The percentage of data points whose distance to a query is at most (1 + ϵ/100)δ∗, visualized for the Gaussian distribution; the trend is similar for other distributions. As d increases, more vectors fall into the enlarged ball, making them valid solutions to the approximate NN problem.
that, as d increases, we can find a smaller ϵ such that nearly all data points fall within (1 + ϵ)δ∗ distance from the query. The results of our experiments confirm this phenomenon; we have plotted the results for the Gaussian distribution in Figure 2.1(b).
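The experiment is easy to reproduce. Below is a sketch of our own (deliberately smaller than the setup described above so that it runs quickly) for the Gaussian case with L2 distance:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_ratio(d, m=10_000, n_queries=50):
    # Data and query coordinates drawn iid, independently of each other.
    X = rng.standard_normal((m, d))
    ratios = []
    for _ in range(n_queries):
        q = rng.standard_normal(d)
        dist = np.linalg.norm(X - q, axis=1)
        ratios.append(dist.max() / dist.min())  # the ratio of Figure 2.1(a)
    return np.mean(ratios)

for d in (2, 16, 128, 1024):
    print(d, mean_ratio(d))  # drifts toward 1 as d grows
```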
# 2.3.1 Maximum Inner Product Search
In the discussion above, we established that retrieval becomes unstable in high dimensions if the data satisfies certain statistical conditions. That meant that the maximum distance grows just as fast as the minimum distance (their ratio tends to 1), so that any approximate solution becomes meaningless.
The instability statement does not necessarily imply, however, that the distances become small or converge to a certain value. But as we see in this section, inner product in high dimensions does become smaller and smaller as a function of d.
The following theorem summarizes this phenomenon for a unit query point and bounded data points. Note that the condition that q is a unit vector is not restrictive in any way, as the norm of the query point does not affect the retrieval outcome.
Theorem 2.2 If m data points with bounded norms and a unit query vector q are drawn iid from a spherically symmetric¹ distribution in Rd, then for any ϵ > 0:

$$\lim_{d \to \infty} \mathbb{P}\big[\langle q, X \rangle > \epsilon\big] = 0.$$
Proof. By spherical symmetry, it is easy to see that E[⟨q, X⟩] = 0. The variance of the inner product is then equal to E[⟨q, X⟩²], which can be expanded as follows.
First, find an orthogonal transformation Ψ : Rd → Rd that maps the query point q to the first standard basis vector (i.e., e₁ = [1, 0, 0, . . . , 0] ∈ Rd). Due to spherical symmetry, this transformation does not change the data distribution. Now, we can write:
$$\mathbb{E}\big[\langle q, X \rangle^2\big] = \mathbb{E}\big[\langle \Psi q, \Psi X \rangle^2\big] = \mathbb{E}\big[(\Psi X)_1^2\big] = \mathbb{E}\Big[\frac{1}{d} \sum_{i=1}^{d} (\Psi X)_i^2\Big] = \frac{\mathbb{E}\big[\lVert X \rVert_2^2\big]}{d}.$$
In the above, the third equality is due to the fact that the distribution of the (transformed) vectors is the same in every direction. Because ‖X‖ is bounded by assumption, the variance of the inner product between q and a random data point tends to 0 as d → ∞. The claim follows. □
The proof of Theorem 2.2 tells us that the variance of the inner product grows as a function of 1/d and ‖X‖₂². So if our vectors have bounded norms, then we can find a d such that inner products are arbitrarily close to 0. This is yet another reason that approximate MIPS could become meaningless. But if our data points are clustered in (near) orthogonal subspaces, then approximate MIPS over clusters makes sense, though, again, MIPS within clusters would be unstable.
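A quick numerical check of this concentration effect, under the assumptions of Theorem 2.2 (our sketch; we give the data points unit norms for simplicity, which satisfies the boundedness condition):

```python
import numpy as np

rng = np.random.default_rng(0)

for d in (16, 256, 4096):
    X = rng.standard_normal((10_000, d))           # spherically symmetric
    X /= np.linalg.norm(X, axis=1, keepdims=True)  # bounded (unit) norms
    q = rng.standard_normal(d)
    q /= np.linalg.norm(q)                         # unit query
    scores = X @ q
    print(d, scores.var(), np.abs(scores).max())   # variance shrinks like 1/d
```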
# References
K. Beyer, J. Goldstein, R. Ramakrishnan, and U. Shaft. When is “nearest neighbor” meaningful? In Database Theory, pages 217–235, 1999.
¹ A distribution is spherically symmetric if it remains invariant under an orthogonal transformation.
Chapter 3 Intrinsic Dimensionality
Abstract We have seen that high dimensionality poses difficulties for vector retrieval. Yet, judging by the progression from hand-crafted feature vectors to sophisticated embeddings of data, we detect a clear trend towards higher dimensional representations of data. How worried should we be about this ever increasing dimensionality? This chapter explores that question. Its key message is that, even though data points may appear to belong to a high-dimensional space, they actually lie on or near a low-dimensional manifold and, as such, have a low intrinsic dimensionality. This chapter then formalizes the notion of intrinsic dimensionality and presents a mathematical framework that will be useful in analyses in future chapters.
# 3.1 High-Dimensional Data and Low-Dimensional Manifolds
We talked a lot about the difficulties of answering ϵ-approximate top-k questions in high dimensions. We said that, in certain situations, the question itself becomes meaningless and retrieval falls apart. For MIPS, in particular, we argued in Theorem 2.2 that points become nearly orthogonal almost surely as the number of dimensions increases. But how concerned should we be, especially given the ever-increasing dimensionality of vector representations of data? Do our data points really live in such extremely high-dimensional spaces? Are all the dimensions necessary to preserve the structure of our data, or do our data points have an intrinsically smaller dimensionality?
The answer to these questions is sometimes obvious. If a set of points in Rd lie strictly in a flat subspace Rd◦ with d◦ < d, then one can simply drop the “unused” dimensions, perhaps after a rotation. This could happen if a pair of coordinates are correlated, for instance. No matter what query vector we are performing retrieval for or what distance function we use, the top-k set does not change whether the unused dimensions are taken into account or the vectors corrected to lie in Rd◦.
Other times the answer is intuitive but not so obvious. When a text document is represented as a sparse vector, all the document's information is contained entirely in the vector's non-zero coordinates. The coordinates that are 0 do not contribute to the representation of the document in any way. In a sense then, the intrinsic dimensionality of a collection of such sparse vectors is in the order of the number of non-zero coordinates, rather than the nominal dimensionality of the space the points lie in.
It appears then that there are instances where a collection of points have a superficially large number of dimensions, d, but that, in fact, the points lie in a lower-dimensional space with dimensionality d◦. We call d◦ the intrinsic dimensionality of the point set.
This situation, where the intrinsic dimensionality of data is lower than that of the space, arises more commonly than one imagines. In fact, so common is this phenomenon that in statistical learning theory, there are special classes of algorithms [Ma and Fu, 2012] designed for data collections that lie on or near a low-dimensional submanifold of Rd despite their apparent arbitrarily high-dimensional representations.
In the context of vector retrieval, too, the concept of intrinsic dimensionality often plays an important role. Knowing that data points have a low intrinsic dimensionality means we may be able to reduce dimensionality without (substantially) losing the geometric structure of the data, including inter-point distances. But more importantly, we can design algorithms specifically for data with low intrinsic dimensionality, as we will see in later chapters. In our analysis of many of these algorithms, too, we often resort to this property to derive meaningful bounds and make assertions about their performance. Doing so, however, requires that we formalize the notion of intrinsic dimensionality. We often do not have a characterization of the submanifold itself, so we need an alternate way of characterizing the low-dimensional structure of our data points. In the remainder of this chapter, we present two common (and related) definitions of intrinsic dimensionality that will be useful in subsequent chapters.
# 3.2 Doubling Measure and Expansion Rate
Karger and Ruhl [2002] characterize intrinsic dimensionality as the growth or expansion rate of a point set. To understand what that means intuitively, place yourself somewhere in the data collection, draw a ball around yourself, and count how many data points are in that ball. Now expand the radius of
this ball by a factor 2, and count again. The count of data points in a “growth-restricted” point set should increase smoothly, rather than suddenly, as we make this ball larger.
In other words, data points “come into view,” as Karger and Ruhl [2002] put it, at a constant rate as we expand our view, regardless of where we are located. We will not encounter massive holes in the space where there are no data points, followed abruptly by a region where a large number of vectors are concentrated.
The formal definition is not far from the intuitive description above. In fact, expansion rate as defined by Karger and Ruhl [2002] is an instance of the following more general definition of a doubling measure, where the measure µ is the counting measure over a collection of points X.
Definition 3.1 A distribution µ on Rd is a doubling measure if there is a constant d◦ such that, for any r > 0 and x ∈ Rd, µ(B(x, 2r)) ≤ 2^{d◦} µ(B(x, r)). The constant d◦ is said to be the expansion rate of the distribution.
One can think of the expansion rate d◦ as a dimension of sorts. In fact, as we will see later, several works [Dasgupta and Sinha, 2015, Karger and Ruhl, 2002, Beygelzimer et al., 2006] use this notion of intrinsic dimensionality to design algorithms for top-k retrieval or utilize it to derive performance guarantees for vector collections that are drawn from a doubling measure. That is the main reason we review this definition of intrinsic dimensionality in this chapter.
While the expansion rate is a reasonable way of describing the structure of a set of points, it is unfortunately not a stable indicator. It can suddenly blow up, for example, by the addition of a single point to the set. As a concrete example, consider the set of integers between |r| and |2r| for any arbitrary value of r: X = {u ∈ ℤ | r < |u| < 2r}. The expansion rate of the resulting set is constant because no matter which point we choose as the center of our ball, and regardless of our choice of radius, doubling the radius brings points into view at a constant rate.
What happens if we added the origin to the set, so that our set becomes {0} ∪ X? If we chose 0 as the center of the ball, and set its radius to r, we have a single point in the resulting ball. The moment we double r, the resulting ball will contain the entire set! In other words, the expansion rate of the updated set is log m (where m = |X|).
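This blow-up is easy to observe numerically. The sketch below (ours) estimates the expansion rate of the one-dimensional integer example by brute force, using the counting measure, so that distances are absolute differences:

```python
import numpy as np

def expansion_rate(X):
    # Smallest d with |B(x, 2r)| <= 2^d |B(x, r)| over all centers and radii.
    radii = np.unique(np.abs(X[:, None] - X[None, :]))
    radii = radii[radii > 0]
    worst = 1.0
    for x in X:
        dist = np.abs(X - x)
        for r in radii:
            worst = max(worst, np.sum(dist <= 2 * r) / np.sum(dist <= r))
    return np.log2(worst)

r = 32
X = np.array([u for u in range(-2 * r, 2 * r + 1) if r < abs(u) < 2 * r], float)
print(expansion_rate(X))                  # a small constant
print(expansion_rate(np.append(X, 0.0)))  # jumps to roughly log2(m)
```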
It is easy to argue that a subset of a set with bounded expansion rate does not necessarily have a bounded expansion rate itself. This unstable behavior is less than ideal, which is why a more robust notion of intrinsic dimensionality has been developed. We will introduce that next.
# 3.3 Doubling Dimension
Another idea to formalize intrinsic dimensionality that has worked well in algorithmic design and analysis is the doubling dimension. It was introduced by Gupta et al. [2003] but is closely related to the Assouad dimension [Assouad, 1983]. It is defined as follows.
Definition 3.2 A set X ⊂ Rd is said to have doubling dimension d◦ if B(·, 2r) ∩ X, the intersection of any ball of radius 2r with the set, can be covered by at most 2^{d◦} balls of radius r.
The base 2 in the definition above can be replaced with any other constant k: The doubling dimension of X is d◦ if the intersection of any ball of radius r with the set can be covered by O(k^{d◦}) balls of radius r/k. Furthermore, the definition can be easily extended to any metric space, not just Rd with the Euclidean norm.
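The definition also suggests a rough empirical proxy: intersect a ball with the set and greedily cover the intersection with half-radius balls. The sketch below is ours and only a crude estimate, since greedy covers are not optimal, but it illustrates the idea on points that lie on a low-dimensional flat (anticipating Lemma 3.2):

```python
import numpy as np

def covering_number(P, r):
    # Greedily cover the point set P with balls of radius r.
    uncovered = np.ones(len(P), dtype=bool)
    count = 0
    while uncovered.any():
        c = P[np.argmax(uncovered)]  # any uncovered point becomes a center
        uncovered &= np.linalg.norm(P - c, axis=1) > r
        count += 1
    return count

rng = np.random.default_rng(0)
X = rng.standard_normal((5_000, 2)) @ rng.standard_normal((2, 64))  # 2-d flat in R^64
x = X[0]
r = np.median(np.linalg.norm(X - x, axis=1)) / 2
ball = X[np.linalg.norm(X - x, axis=1) <= 2 * r]
print(np.log2(covering_number(ball, r)))  # stays small, despite the 64 dimensions
```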
The doubling dimension is a different notion from the expansion rate as defined in Definition 3.1. The two, however, are in some sense related, as the following lemma shows.
Lemma 3.1 The doubling dimension d◦ of any finite metric (X, δ) is bounded above by a constant multiple of its expansion rate $d_\circ^{kr}$; concretely, the proof below gives $d_\circ \leq 4 d_\circ^{kr}$.
Proof. Fix a ball B(u, 2r) and let S be its r-net. That is, S ⊂ X, the distance between any two points in S is at least r, and X ⊂ ∪_{v∈S} B(v, r). We have that:
$$B(u, 2r) \subset \bigcup_{v \in S} B(v, r) \subset B(u, 4r).$$
By definition of the expansion rate, for every v ∈ S:
$$\big|B(u, 4r)\big| \leq \big|B(v, 8r)\big| \leq 2^{4 d_\circ^{kr}} \big|B(v, r/2)\big|.$$
Because the balls B(v, r/2) for all v ∈ S are disjoint, it follows that $|S| \leq 2^{4 d_\circ^{kr}}$. That many balls of radius r therefore cover B(u, 2r), which concludes the proof. □
|B(u, 4r)| < |B(v, 8r)| < gadct Biv, 5).
Because the balls B(v, r/2) for all v â S are disjoint, it follows that |S| ⤠24dkr ⦠many balls of radius r cover B(u, 2r). That concludes the ââ proof.
The doubling dimension and expansion rate both quantify the intrinsic dimensionality of a point set. But Lemma 3.1 shows that, the class of doubling metrics (i.e., metric spaces with a constant doubling dimen- sion) contains the class of metrics with a bounded expansion rate.
The converse of the above lemma is not true. In other words, there are sets that have a bounded doubling dimension, but whose expansion rate is unbounded. The set, X = {0} ⪠{u â Z | r < |u| < 2r}, from the previous
3.3 Doubling Dimension
section is one example where this happens. From our discussion above, its expansion rate is log|X |. It is easy to see that the doubling dimension of this set, however, is constant.
# 3.3.1 Properties of the Doubling Dimension
It is helpful to go over a few concrete examples of point sets with bounded doubling dimension in order to understand a few properties of this definition of intrinsic dimensionality. We will start with a simple example: a line segment in Rd with the Euclidean norm.
If the set X is a line segment, then its intersection with a ball of radius r is itself a line segment. Clearly, the intersection set can be covered with two balls of radius r/2. Therefore, the doubling dimension d◦ of X is 1.
We can extend that result to any affine set in Rd to obtain the following property:
Lemma 3.2 A k-dimensional flat in Rd has doubling dimension O(k).
Proof. The intersection of a ball in Rd and a k-dimensional flat is a ball in Rk. It is a well-known result that the size of an ϵ-net of a unit ball in Rk is at most (C/ϵ)^k for some small constant C. As such, a ball of radius r can be covered with 2^{O(k)} balls of radius r/2, implying the claim. □
The lemma above tells us that the doubling dimension of a set in the Euclidean space is at most some constant factor larger than the natural dimension of the space; note that this was not the case for the expansion rate. Another important property that speaks to the stability of the doubling dimension is the following, which is trivially true:
Lemma 3.3 Any subset of a set with doubling dimension d◦ itself has doubling dimension d◦.
The doubling dimension is also robust under the addition of points to the set, as the following result shows.
Lemma 3.4 Suppose sets Xi for i ∈ [n] each have doubling dimension d◦. Then their union has doubling dimension at most d◦ + log n.
Proof. For any ball B of radius r, B ∩ Xi can be covered with 2^{d◦} balls of half the radius. As such, at most n2^{d◦} balls of radius r/2 are needed to cover the union. The doubling dimension of the union is therefore d◦ + log n. □
One consequence of the previous two lemmas is the following statement concerning sparse vectors:
Lemma 3.5 Suppose that X ⊂ Rd is a collection of sparse vectors, each having at most k non-zero coordinates. Then the doubling dimension of X is at most Ck + k log d for some constant C.
Proof. X is the union of $\binom{d}{k} \leq d^k$ k-dimensional flats. Each of these flats has doubling dimension Ck for some universal constant C, by Lemma 3.2. By the application of Lemma 3.4, we get that the doubling dimension of X is at most Ck + k log d. □
Lemma 3.5 states that collections of sparse vectors in the Euclidean space are naturally described by the doubling dimension.
# References
P. Assouad. Plongements lipschitziens dans Rn. Bulletin de la Société Mathématique de France, 111:429–448, 1983.
A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. In Proceedings of the 23rd International Conference on Machine Learning, pages 97–104, 2006.
S. Dasgupta and K. Sinha. Randomized partition trees for nearest neighbor search. Algorithmica, 72(1):237–263, 5 2015.
A. Gupta, R. Krauthgamer, and J. Lee. Bounded geometries, fractals, and low-distortion embeddings. In 44th Annual IEEE Symposium on Foundations of Computer Science, pages 534–543, 2003.
D. R. Karger and M. Ruhl. Finding nearest neighbors in growth-restricted metrics. In Proceedings of the 34th Annual ACM Symposium on Theory of Computing, pages 741–750, 2002.
Y. Ma and Y. Fu. Manifold Learning Theory and Applications. CRC Press, 2012.
Part II Retrieval Algorithms
# Chapter 4 Branch-and-Bound Algorithms
Abstract One of the earliest approaches to the top-k retrieval problem is to partition the vector space recursively into smaller regions and, each time we do so, make note of their geometry. During search, we eliminate the regions whose shape indicates they cannot contain or overlap with the solution set. This chapter covers algorithms that embody this approach and discusses their exact and approximate variants.
# 4.1 Intuition
Suppose there was some way to split a collection X into two sub-collections, Xl and Xr, such that X = Xl ∪ Xr and that the two sub-collections have roughly the same size. In general, we can relax the splitting criterion so the two sub-collections are not necessarily partitions; that is, we may have Xl ∩ Xr ≠ ∅. We may also split the collection into more than two sub-collections. For the moment, though, assume we have two sub-collections that do not overlap.
Suppose further that we could geometrically characterize exactly the regions that contain Xl and Xr. For example, when Xl ∩ Xr = ∅, these regions partition the space and may therefore be characterized by a separating hyperplane. Call these regions Rl and Rr, respectively. The separating hyperplane forms a decision boundary that helps us determine if a vector falls into Rl or Rr.
In effect, we have created a binary tree of depth 1 where the root node has a decision boundary and each of the two leaves contains data points that fall into its region. This is illustrated in Figure 4.1(a).
Fig. 4.1: Illustration of a general branch-and-bound method on a toy collection in R2. In (a), Rl and Rr are separated by the dashed line h. The distance between query q and the closest vector in Rl is less than the distance between q and h. As such, we do not need to search for the top-1 vector over the points in Rr, so that the right branch of the tree is pruned. In (b), the regions are recursively split until each terminal region contains at most two data points. We then find the distance between q and the data points in the region that contains q, G. If the ball around q with this distance as its radius does not intersect a region, we can safely prune that region (regions that are not shaded in the figure). Otherwise, we may have to search it during the certification process.
and navigating to the appropriate leaf. Now, we solve the exact top-1 retrieval problem over Xl to obtain the optimal point in that region, u∗ₗ, then make a note of the minimum distance obtained, δ(q, u∗ₗ).
At this point, if it turns out that δ(q, u∗ₗ) < δ(q, Rr),¹ then we have found the optimal point and do not need to search the data points in Xr at all! That is because the δ-ball² centered at q with radius δ(q, u∗ₗ) is contained entirely in Rl, so that no point from Rr can have a shorter distance to q than u∗ₗ. Refer again to Figure 4.1(a) for an illustration of this scenario.
If instead δ(q, u∗ₗ) ≥ δ(q, Rr), then we proceed to solve the top-1 problem over Xr as well and compare the solution with u∗ₗ to find the optimal vector. We can think of the comparison of δ(q, u∗ₗ) with δ(q, Rr) as backtracking to the parent node of Rl in the equivalent tree (which is the root) and comparing δ(q, u∗ₗ) with the distance of q to the decision boundary. This process of backtracking and deciding to prune a
large enough quantity of such vectors and the question of retrieval becomes
urgently relevant: Finding vectors that are more similar to a query vector.
This monograph is concerned with the question above and covers fundamental
concepts along with advanced data structures and algorithms for vector
retrieval. In doing so, it recaps this fascinating topic and lowers barriers of
entry into this rich area of research. | http://arxiv.org/pdf/2401.09350 | Sebastian Bruch | cs.DS, cs.IR | null | null | cs.DS | 20240117 | 20240117 | [] |
2401.09350 | 103 | l ) < δ(q, Rr)1 then we have found the optimal point and do not need to search the data points in Xr at all! That is because, the δ-ball2 centered at q with radius δ(q, uâ l ) is contained entirely in Rl, so that no point from Rr can have a shorter distance to q than uâ l . Refer again to Figure 4.1(a) for an illustration of this scenario.
l ) ⥠δ(q, Rr), then we proceed to solve the top-1 problem over Xr as well and compare the solution with uâ l to find the optimal vector. We can think of the comparison between δ(q, uâ l ) with δ(q, Rr) as backtracking to the parent node of Rl in the equivalent treeâ which is the rootâand comparing δ(q, uâ l ) with the distance of q with the decision boundary. This process of backtracking and deciding to prune a
1 The distance between a point u and a set S is defined as δ(u, S) = inf vâS δ(u, v). 2 The ball centered at u with radius r with respect to metric δ is {x | δ(u, x) ⤠r}.
4.2 k-dimensional Trees | 2401.09350#103 | Foundations of Vector Retrieval | Vectors are universal mathematical objects that can represent text, images,
branch or search it certifies that u∗ₗ is indeed optimal, thereby solving the top-1 problem exactly.
We can extend the framework above easily by recursively splitting the two sub-collections and characterizing the regions containing the resulting partitions. This leads to a (balanced) binary tree where each internal node has a decision boundary: the separating hyperplane of its child regions. We may stop splitting a node if it has fewer than m◦ points. This extension is rendered in Figure 4.1(b).
The retrieval process is the same but needs a little more care: Let the query q traverse the tree from root to leaf, where each internal node determines if q belongs to the “left” or “right” sub-regions and routes q accordingly. Once we have found the leaf (terminal) region that contains q, we find the candidate vector u∗, then backtrack and certify that u∗ is indeed optimal.
During the backtracking, at each internal node, we compare the distance between q and the current candidate with the distance between q and the region on the other side of the decision boundary. As before, that comparison results in either pruning a branch or searching it to find a possibly better candidate. The certification process stops when we find ourselves back in the root node with no more branches to verify, at which point we have found the optimal solution.
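To make the traversal and certification concrete, the following is a compact sketch of the whole build-route-backtrack loop for top-1 retrieval under the Euclidean distance. The code is ours, not the monograph's: as a simple choice of separating hyperplane we split at the median along the highest-variance coordinate, which also foreshadows the k-d Tree of the next section.

```python
import numpy as np

class Node:
    def __init__(self, points):
        self.points, self.left, self.right = points, None, None
        self.axis = self.threshold = None

def build(points, min_leaf=8):
    node = Node(points)
    if len(points) > min_leaf:
        axis = int(np.argmax(points.var(axis=0)))
        threshold = float(np.median(points[:, axis]))
        mask = points[:, axis] <= threshold
        if mask.all() or not mask.any():   # degenerate split: keep as a leaf
            return node
        node.axis, node.threshold = axis, threshold
        node.left = build(points[mask], min_leaf)
        node.right = build(points[~mask], min_leaf)
    return node

def nearest(node, q, best=None):
    if node.left is None:                   # leaf: exhaustive top-1 search
        d = np.linalg.norm(node.points - q, axis=1)
        i = int(d.argmin())
        if best is None or d[i] < best[0]:
            best = (float(d[i]), node.points[i])
        return best
    margin = q[node.axis] - node.threshold  # signed distance to the boundary
    near, far = (node.left, node.right) if margin <= 0 else (node.right, node.left)
    best = nearest(near, q, best)           # route q to its own region first
    if abs(margin) < best[0]:               # the ball around q crosses over:
        best = nearest(far, q, best)        # certify by searching the far side
    return best

X = np.random.randn(10_000, 16)
q = np.random.randn(16)
distance, point = nearest(build(X), q)
```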
The above is the logic that is at the core of branch-and-bound algorithms for top-k retrieval [Dasgupta and Sinha, 2015, Bentley, 1975, Ram and Sinha, 2019, Ciaccia et al., 1997, Yianilos, 1993, Liu et al., 2004, Panigrahy, 2008, Ram and Gray, 2012, Bachrach et al., 2014]. The specific instances of this framework differ in terms of how they split a collection and in the details of the certification process. We will review key algorithms that belong to this family in the remainder of this chapter. We emphasize that most branch-and-bound algorithms only address the NN problem in the Euclidean space (so that δ(u, v) = ∥u − v∥₂) or in growth-restricted measures [Karger and Ruhl, 2002, Clarkson, 1997, Krauthgamer and Lee, 2004], but where the metric is nonetheless proper.
# 4.2 k-dimensional Trees
The k-dimensional Tree or k-d Tree [Bentley, 1975] is a special instance of the framework described above wherein the distance function is Euclidean and the space is recursively partitioned into hyper-rectangles. In other words, the decision boundaries in a k-d Tree are axis-aligned hyperplanes.
Let us consider its simplest construction for X ⊂ ℝ^d. The root of the tree is a node that represents the entire space, which naturally contains the entire data collection. Assuming that the size of the collection is greater than 1, we follow a simple procedure to split the node: We select one coordinate axis and partition the collection at the median of data points along the chosen
direction. The process recurses on each newly-minted node, with nodes at the same depth in the tree using the same coordinate axis for splitting, and where we go through the coordinates in a round-robin manner as the tree grows. We stop splitting a node further if it contains a single data point (m◦ = 1), then mark it as a leaf node.
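Following this recipe, below is one possible construction sketch in Python under the stated choices (round-robin axes, median splits, m◦ = 1); the dictionary-based node layout is an assumption for exposition, not the only way to materialize the tree.

```python
import numpy as np

def build_kdtree(points, depth=0):
    """Recursively build a k-d Tree over an (m, d) array of points.

    Nodes at the same depth split on the same coordinate, cycling
    through coordinates round-robin; each split is at the median,
    which keeps the tree balanced with depth O(log m).
    """
    m, d = points.shape
    if m == 1:                     # m◦ = 1: a single point makes a leaf
        return {"point": points[0]}
    axis = depth % d               # round-robin choice of coordinate
    points = points[np.argsort(points[:, axis])]
    mid = m // 2                   # index of the median along the axis
    return {
        "axis": axis,
        "threshold": points[mid, axis],   # axis-aligned boundary
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid:], depth + 1),
    }

# Example usage on random data:
# tree = build_kdtree(np.random.rand(16, 3))
```

The sort-based split above is the simple route; replacing it with a linear-time median selection recovers the Θ(m log m) construction bound discussed later in this section.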
A few observations are worth noting. By choosing the median point to split on, we guarantee that the tree is balanced. That, together with the fact that m◦ = 1, implies that the depth of the tree is log m, where m = |X|. Finally, because nodes in each level of the tree split on the same coordinate, every coordinate is split in (log m)/d levels. These facts will become important in our analysis of the algorithm.
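As an illustrative instance of these quantities (the numbers are hypothetical, not from the original text): with m = 2²⁰ ≈ 10⁶ points in d = 10 dimensions and m◦ = 1, the tree has depth log₂ m = 20, so a root-to-leaf search path crosses 20 decision boundaries, and each of the 10 coordinates serves as the splitting direction on (log₂ m)/d = 2 levels of the tree.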
# 4.2.1 Complexity Analysis
The k-d Tree data structure is fairly simple to construct. It is also efficient: Its space complexity given a set of m vectors is Θ(m) and its construction time has complexity Θ(m log m).³
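To see why the space bound holds: with m◦ = 1 the tree has exactly m leaves, one per data point, and a binary tree whose internal nodes all have two children has exactly m − 1 internal nodes, for 2m − 1 nodes in total; each node stores only a constant amount of information (a splitting coordinate and threshold, or a single point), giving Θ(m) space. The Θ(m log m) construction time likewise follows from spending linear time per level, via linear-time median selection, across the log m levels of the tree.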
The search algorithm, however, is not so easy to analyze in general. Friedman et al. [1977] claimed that the expected search complexity is O(log m) for m data points that are sampled uniformly from the unit hypercube. While uniformity is an unrealistic assumption, it is necessary for the analysis of the average case. On the other hand, no generality is lost by the assumption that vectors are contained in the hypercube. That is because we can always translate and scale every data point by a constant factor into the unit hypercube, a transformation that changes all pairwise distances by the same constant factor and therefore leaves the identity of the nearest neighbors unchanged. Let us now discuss the sketch of the proof of their claim.
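The rescaling argument is easy to check numerically. The snippet below is an illustrative sanity check, with arbitrary data and constants: it translates and uniformly scales a point cloud into the unit hypercube and verifies that the nearest neighbor of a query is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
X = 5.0 * rng.normal(size=(1000, 8)) + 3.0  # arbitrary point cloud
q = 5.0 * rng.normal(size=8) + 3.0          # arbitrary query

# Translate, then scale by a single constant, so that all points
# (and the query) land inside the unit hypercube [0, 1]^8.
lo = np.minimum(X.min(axis=0), q)
span = float((np.maximum(X.max(axis=0), q) - lo).max())
Xs, qs = (X - lo) / span, (q - lo) / span

# A uniform scaling multiplies every pairwise distance by 1/span,
# so the identity of the nearest neighbor cannot change.
assert (np.linalg.norm(X - q, axis=1).argmin()
        == np.linalg.norm(Xs - qs, axis=1).argmin())
```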