Knn query
Finds the k nearest vectors to a query vector, as measured by a similarity metric. The knn query finds nearest vectors through approximate search on indexed dense_vector fields. The preferred way to do approximate kNN search is through the top-level knn section of a search request. The knn query is reserved for expert cases, where there is a need to combine this query with other queries, or to perform a kNN search against a semantic_text field.
First, create an index with a dense_vector field for the image vectors.
PUT my-image-index
{
  "mappings": {
    "properties": {
      "image-vector": {
        "type": "dense_vector",
        "dims": 3,
        "index": true,
        "similarity": "l2_norm"
      },
      "file-type": {
        "type": "keyword"
      },
      "title": {
        "type": "text"
      }
    }
  }
}
Index your data.
POST my-image-index/_bulk?refresh=true
{ "index": { "_id": "1" } } { "image-vector": [1, 5, -20], "file-type": "jpg", "title": "mountain lake" } { "index": { "_id": "2" } } { "image-vector": [42, 8, -15], "file-type": "png", "title": "frozen lake"} { "index": { "_id": "3" } } { "image-vector": [15, 11, 23], "file-type": "jpg", "title": "mountain lake lodge" }
Run the search using the knn query, asking for the top 10 nearest vectors from each shard, and then combine the shard results to get the top 3 global results.
POST my-image-index/_search
{
  "size" : 3,
  "query" : {
    "knn": {
      "field": "image-vector",
      "query_vector": [-5, 9, -12],
      "k": 10
    }
  }
}
field
- (Required, string) The name of the vector field to search against. Must be a dense_vector field with indexing enabled, or a semantic_text field with a compatible dense vector inference model.

query_vector
- (Optional, array of floats or string) Query vector. Must have the same number of dimensions as the vector field you are searching against. Must be either an array of floats or a hex-encoded byte vector. Either this or query_vector_builder must be provided.

query_vector_builder
- (Optional, object) Query vector builder. A configuration object indicating how to build a query_vector before executing the request. You must provide either a query_vector_builder or a query_vector, but not both. Refer to Perform semantic search to learn more.
If all queried fields are of type semantic_text, the inference ID associated with the semantic_text field may be inferred.

k
- (Optional, integer) The number of nearest neighbors to return from each shard. Elasticsearch collects k results from each shard, then merges them to find the global top results. This value must be less than or equal to num_candidates. Defaults to the search request size.

num_candidates
- (Optional, integer) The number of nearest neighbor candidates to consider per shard while doing the kNN search. Cannot exceed 10,000. Increasing num_candidates tends to improve the accuracy of the final results. Defaults to 1.5 * k if k is set, or 1.5 * size if k is not set.

filter
- (Optional, query object) Query to filter the documents that can match. The kNN search will return the top documents that also match this filter. The value can be a single query or a list of queries. If filter is not provided, all documents are allowed to match.
The filter is a pre-filter, meaning that it is applied during the approximate kNN search to ensure that num_candidates matching documents are returned.

similarity
- (Optional, float) The minimum similarity required for a document to be considered a match. The similarity value calculated relates to the raw similarity used, not the document score. The matched documents are then scored according to similarity and the provided boost is applied.

rescore_vector
- (Optional, object) Apply oversampling and rescoring to quantized vectors.
Rescoring only makes sense for quantized vectors; when quantization is not used, the original vectors are used for scoring. The rescore option will be ignored for non-quantized dense_vector fields.
  oversample
  - (Required, float) Applies the specified oversample factor to k on the approximate kNN search. The approximate kNN search will:
    - Retrieve num_candidates candidates per shard.
    - From these candidates, the top k * oversample candidates per shard will be rescored using the original vectors.
    - The top k rescored candidates will be returned.
  Must be one of the following values:
    - >= 1f to indicate the oversample factor
    - Exactly 0 to indicate that no oversampling and rescoring should occur.
See oversampling and rescoring quantized vectors for details.

boost
- (Optional, float) Floating point number used to multiply the scores of matched documents. This value cannot be negative. Defaults to 1.0.

_name
- (Optional, string) Name field to identify the query.
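For illustration, here is a minimal sketch that combines several of these optional parameters against the my-image-index example above; the num_candidates, oversample, and boost values are arbitrary choices for demonstration, and rescore_vector only has an effect when the field is indexed with a quantized index type.
POST my-image-index/_search
{
  "size" : 3,
  "query" : {
    "knn": {
      "field": "image-vector",
      "query_vector": [-5, 9, -12],
      "k": 10,
      "num_candidates": 50,
      "rescore_vector": {
        "oversample": 2.0
      },
      "boost": 1.5,
      "_name": "image_knn"
    }
  }
}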
There are two ways to filter documents that match a kNN query:
- pre-filtering – the filter is applied during the approximate kNN search to ensure that k matching documents are returned.
- post-filtering – the filter is applied after the approximate kNN search completes, which results in fewer than k results, even when there are enough matching documents.
Pre-filtering is supported through the filter parameter of the knn query. Filters from aliases are also applied as pre-filters.
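For example, here is a sketch of a pre-filtered search against the my-image-index example above, where the term filter inside the knn query restricts candidates to png files while the approximate search runs.
POST my-image-index/_search
{
  "size" : 3,
  "query" : {
    "knn": {
      "field": "image-vector",
      "query_vector": [-5, 9, -12],
      "k": 10,
      "filter": {
        "term": { "file-type": "png" }
      }
    }
  }
}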
All other filters found in the Query DSL tree are applied as post-filters. For example, in the search below the knn query finds the top 3 documents with the nearest vectors (k=3), which are then combined with the term filter applied as a post-filter. The final set of documents contains only a single document that passes the post-filter.
POST my-image-index/_search
{
  "size" : 10,
  "query" : {
    "bool" : {
      "must" : {
        "knn": {
          "field": "image-vector",
          "query_vector": [-5, 9, -12],
          "k": 3
        }
      },
      "filter" : {
        "term" : { "file-type" : "png" }
      }
    }
  }
}
The knn query can be used as part of hybrid search, where it is combined with other lexical queries. For example, the query below finds documents with title matching "mountain lake", and combines them with the top 10 documents that have the closest image vectors to the query_vector. The combined documents are then scored and the top 3 highest-scoring documents are returned.
POST my-image-index/_search
{
  "size" : 3,
  "query": {
    "bool": {
      "should": [
        {
          "match": {
            "title": {
              "query": "mountain lake",
              "boost": 1
            }
          }
        },
        {
          "knn": {
            "field": "image-vector",
            "query_vector": [-5, 9, -12],
            "k": 10,
            "boost": 2
          }
        }
      ]
    }
  }
}
The knn query can be used inside a nested query. The behaviour here is similar to the top-level nested kNN search:
- kNN search over nested dense_vectors diversifies the top results over the top-level document
- filter over both the top-level document metadata and the nested metadata is supported and acts as a pre-filter
To ensure correct results, each individual filter must be over either the top-level metadata or the nested metadata. However, a single knn query supports multiple filters, where some filters can be over the top-level metadata and some over the nested metadata.
Below is a sample query with a filter over nested metadata. For scoring the parent documents, this query only considers vectors that have "paragraph.language" set to "EN".
{
  "query" : {
    "nested" : {
      "path" : "paragraph",
      "query" : {
        "knn": {
          "query_vector": [0.45, 0.50],
          "field": "paragraph.vector",
          "filter": {
            "match": {
              "paragraph.language": "EN"
            }
          }
        }
      }
    }
  }
}
Below is a sample query with two filters: one over the nested metadata and another over the top-level metadata. For scoring the parent documents, this query only considers vectors whose parent's title contains the word "essay" and that have "paragraph.language" set to "EN".
{
  "query" : {
    "nested" : {
      "path" : "paragraph",
      "query" : {
        "knn": {
          "query_vector": [0.45, 0.50],
          "field": "paragraph.vector",
          "filter": [
            {
              "match": {
                "paragraph.language": "EN"
              }
            },
            {
              "match": {
                "title": "essay"
              }
            }
          ]
        }
      }
    }
  }
}
Note that nested knn only supports score_mode=max.
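As a sketch, inner_hits on the wrapping nested query can surface which nested section matched each parent; this assumes the paragraph objects also carry a paragraph.text field, which is hypothetical here, and that inner_hits behaves as it does for any nested query.
{
  "query" : {
    "nested" : {
      "path" : "paragraph",
      "query" : {
        "knn": {
          "query_vector": [0.45, 0.50],
          "field": "paragraph.vector",
          "k": 5
        }
      },
      "inner_hits": {
        "size": 1,
        "_source": false,
        "fields": [ "paragraph.text" ]
      }
    }
  }
}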
Elasticsearch supports knn queries over a semantic_text field.
Here is an example using the query_vector_builder:
{
  "query": {
    "knn": {
      "field": "inference_field",
      "k": 10,
      "num_candidates": 100,
      "query_vector_builder": {
        "text_embedding": {
          "model_text": "test"
        }
      }
    }
  }
}
Note that for semantic_text fields, the model_id does not have to be provided, as it can be inferred from the semantic_text field mapping.
kNN search using query vectors over semantic_text fields is also supported, with no change to the API.
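For example, here is a sketch of the same search with a pre-computed query vector; the vector values below are placeholders and must have the same number of dimensions as the embeddings produced by the field's inference endpoint.
{
  "query": {
    "knn": {
      "field": "inference_field",
      "k": 10,
      "num_candidates": 100,
      "query_vector": [0.04, -0.34, 0.12]
    }
  }
}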
The knn query calculates aggregations on the top k documents from each shard. Thus, the final results from aggregations contain k * number_of_shards documents. This is different from the top-level knn section, where aggregations are calculated on the global top k nearest documents.
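As an illustration, here is a sketch that adds a terms aggregation over file-type to the knn query against my-image-index from the earlier examples; the buckets are computed from the top k documents collected on each shard rather than from the global top k.
POST my-image-index/_search
{
  "size" : 0,
  "query" : {
    "knn": {
      "field": "image-vector",
      "query_vector": [-5, 9, -12],
      "k": 10
    }
  },
  "aggs": {
    "file_types": {
      "terms": { "field": "file-type" }
    }
  }
}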