In a more complex scenario, like natural language processing (NLP), words or entire sentences can be converted into dense vectors (often referred to as embeddings) that capture the semantic meaning of the text. Vectors play a foundational role in many machine learning algorithms, particularly those that involve distance measurements, such as clustering and classification algorithms.
## What is a vector database?
A vector database is a specialized database that's optimized for storing and searching vectors efficiently. Vector databases are often used to power vector search applications, such as recommendation systems, image search, and textual content retrieval. They're also referred to as vector stores, vector indexes, or vector search engines. Vector databases use vector similarity algorithms to find the vectors that are most similar to a given query vector.
:::tip
[<u>**Redis Cloud**</u>](https://redis.com/try-free) is a popular choice for vector databases, as it offers a rich set of data structures and commands that are well-suited for vector storage and search. Redis Cloud allows you to index vectors and perform vector similarity search in a few different ways, outlined further in this tutorial. It also maintains a high level of performance and scalability.
:::
## What is vector similarity?
Vector similarity is a measure that quantifies how alike two vectors are, typically by evaluating the `distance` or `angle` between them in a multi-dimensional space.
When vectors represent data points, such as texts or images, the similarity score reflects how alike the underlying items are. Typical applications include:
- **Image Search**: Store vectors representing image features, and then retrieve images most similar to a given image's vector.
- **Textual Content Retrieval**: Store vectors representing textual content (e.g., articles, product descriptions) and find the most relevant texts for a given query vector.
:::tip CALCULATING VECTOR SIMILARITY
If you're interested in learning more about the mathematics behind vector similarity, scroll down to the [<u>**How to calculate vector similarity?**</u>](#how-to-calculate-vector-similarity) section.
:::
To generate sentence embeddings, we'll use a Hugging Face model titled [Xenova/all-distilroberta-v1](https://huggingface.co/Xenova/all-distilroberta-v1). It's a version of [sentence-transformers/all-distilroberta-v1](https://huggingface.co/sentence-transformers/all-distilroberta-v1) made compatible with Transformers.js via ONNX weights.
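As a rough sketch of what that looks like with the `@xenova/transformers` package (the model downloads automatically on first use; the full tutorial's helper code may be structured differently):

```js
import { pipeline } from '@xenova/transformers';

// Load a feature-extraction pipeline backed by the ONNX model
const extractor = await pipeline(
  'feature-extraction',
  'Xenova/all-distilroberta-v1',
);

// Mean pooling + normalization yields a single 768-dimensional sentence vector
const output = await extractor('Puma Men Race Black Watch', {
  pooling: 'mean',
  normalize: true,
});

const embedding = Array.from(output.data); // Float32Array -> plain number[]
console.log(embedding.length); // 768
```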
You can observe products JSON data in RedisInsight:

:::tip
Download <u>[RedisInsight](https://redis.com/redis-enterprise/redis-insight/)</u> to visually explore your Redis data or to engage with raw Redis commands in the workbench. Dive deeper into RedisInsight with these <u>[tutorials](/explore/redisinsight/)</u>.
:::
### Create vector index
For searches to be conducted on JSON fields in Redis, they must be indexed. The methodology below highlights the process of indexing different types of fields. This encompasses vector fields such as `productDescriptionEmbeddings` and `productImageEmbeddings`.
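As a hedged sketch of what such an index definition can look like with node-redis (the index name `products:index`, key prefix `products:`, and dimension `768` are illustrative assumptions, not necessarily the tutorial's exact values):

```js
import { createClient, SchemaFieldTypes, VectorAlgorithms } from 'redis';

const client = createClient();
await client.connect();

// Index JSON documents stored under the products: prefix (names are illustrative)
await client.ft.create(
  'products:index',
  {
    '$.productDescriptionEmbeddings': {
      type: SchemaFieldTypes.VECTOR,
      TYPE: 'FLOAT32', // storage type of each vector element
      ALGORITHM: VectorAlgorithms.FLAT, // or VectorAlgorithms.HNSW for larger datasets
      DIM: 768, // must match the embedding model's output size
      DISTANCE_METRIC: 'L2', // L2 | IP | COSINE
      AS: 'productDescriptionEmbeddings',
    },
  },
  { ON: 'JSON', PREFIX: 'products:' },
);
```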
:::tip
**FLAT**: When vectors are indexed in a "FLAT" structure, they're stored in their original form without any added hierarchy. A search against a FLAT index requires the algorithm to scan each vector linearly to find the most similar matches. While this is accurate, it's computationally intensive and slower, making it ideal for smaller datasets.
**HNSW (Hierarchical Navigable Small World)**: HNSW is a graph-centric method tailored for indexing high-dimensional data. With larger datasets, linear comparisons against every vector in the index become time-consuming. HNSW employs a probabilistic approach, ensuring faster search results but with a slight trade-off in accuracy.
:::
:::info INITIAL_CAP and BLOCK_SIZE parameters
Both `INITIAL_CAP` and `BLOCK_SIZE` are configuration parameters that control how vectors are stored and indexed.
`INITIAL_CAP` defines the initial capacity of the vector index. It helps in pre-allocating space for the index.
`BLOCK_SIZE` defines the size of each block of the vector index. As more vectors are added, Redis allocates memory in chunks, with each chunk holding `BLOCK_SIZE` vectors. This helps optimize memory allocations during index growth.
:::
## What is vector KNN query?
KNN, or k-Nearest Neighbors, is an algorithm used in both classification and regression tasks, but when referring to "KNN Search," we're typically discussing the task of finding the "k" points in a dataset that are closest (most similar) to a given query point. In the context of vector search, this means identifying the "k" vectors in our database that are most similar to a given query vector, usually based on some distance metric like cosine similarity or Euclidean distance.
### Vector KNN query with Redis
Redis allows you to index and then search for vectors [using the KNN approach](https://redis.io/docs/stack/search/reference/vectors/#pure-knn-queries).
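For orientation, here's a minimal sketch of a KNN query with node-redis, reusing the illustrative `client`, index name, and field names from the index sketch above (assumptions, not confirmed values from the full tutorial):

```js
// Find the 5 products whose description embeddings are nearest to the query vector
const queryEmbedding = embedding; // e.g., produced by the extractor shown earlier

const results = await client.ft.search(
  'products:index',
  '*=>[KNN 5 @productDescriptionEmbeddings $searchBlob AS score]',
  {
    PARAMS: {
      // Vectors are passed to Redis as raw little-endian float32 bytes
      searchBlob: Buffer.from(new Float32Array(queryEmbedding).buffer),
    },
    RETURN: ['score', 'productDisplayName'],
    SORTBY: { BY: 'score', DIRECTION: 'ASC' }, // smallest distance first
    DIALECT: 2, // vector queries require query dialect 2 or higher
  },
);

console.log(results.total, results.documents);
```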
Range queries retrieve data that falls within a specified range of values.
For vectors, a "range query" typically refers to retrieving all vectors within a certain distance of a target vector. The "range" in this context is a radius in the vector space.
### Vector range query with Redis
Below, you'll find a Node.js code snippet that illustrates how to perform a vector `range query` for any range (radius/distance) provided:
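In the same hedged spirit as the KNN sketch above (the index name, field name, and `client` are illustrative assumptions carried over from earlier):

```js
// Find all products whose description embeddings lie within the given radius
const results = await client.ft.search(
  'products:index',
  '@productDescriptionEmbeddings:[VECTOR_RANGE $radius $searchBlob]=>{$YIELD_DISTANCE_AS: score}',
  {
    PARAMS: {
      radius: 0.5, // maximum allowed distance from the query vector
      searchBlob: Buffer.from(new Float32Array(queryEmbedding).buffer),
    },
    RETURN: ['score', 'productDisplayName'],
    DIALECT: 2,
  },
);
```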
:::tip
The syntax for KNN/range vector queries remains consistent whether you're dealing with image vectors or text vectors.
## How to calculate vector similarity?
Several techniques are available to assess vector similarity, with some of the most prevalent ones being:
### Euclidean Distance (L2 norm)
**Euclidean Distance (L2 norm)** calculates the linear distance between two points within a multi-dimensional space. Lower values indicate closer proximity, and hence higher similarity.
<img
  src={EuclideanDistanceFormulaImage}
  alt="EuclideanDistanceFormulaImage"
  width="300"
  className="margin-bottom--md"
/>
For illustration purposes, let's assess `product 1` and `product 2` from the earlier ecommerce dataset and determine the `Euclidean Distance` considering all features.
<img
  src={EuclideanDistanceSampleImage}
  alt="EuclideanDistanceSampleImage"
  width="600"
  className="margin-bottom--md"
/>
As an example, we will use a 2D chart made with [chart.js](https://www.chartjs.org/) comparing the `Price vs. Quality` features of our products, focusing solely on these two attributes to compute the `Euclidean Distance`.

### Cosine Similarity
**Cosine Similarity** measures the cosine of the angle between two vectors. The cosine similarity value ranges between -1 and 1. A value closer to 1 implies a smaller angle and higher similarity, while a value closer to -1 implies a larger angle and lower similarity. Cosine similarity is particularly popular in NLP when dealing with text vectors.
<img
  src={CosineFormulaImage}
  alt="CosineFormulaImage"
  width="450"
  className="margin-bottom--md"
/>
:::note
If two vectors are pointing in the same direction, the cosine of the angle between them is 1. If they're orthogonal, the cosine is 0, and if they're pointing in opposite directions, the cosine is -1.
:::
Again, consider `product 1` and `product 2` from the previous dataset and calculate the `Cosine Distance` for all features.

Using [chart.js](https://www.chartjs.org/), we've crafted a 2D chart of `Price vs. Quality` features. It visualizes the `Cosine Similarity` solely based on these attributes.

### Inner Product
**Inner Product (dot product)**: The inner product isn't a distance metric in the traditional sense, but it can be used to calculate similarity, especially when vectors are normalized (have a magnitude of 1). It's the sum of the products of the corresponding entries of the two sequences of numbers.
<img
  src={IpFormulaImage}
  alt="IpFormulaImage"
  width="450"
  className="margin-bottom--md"
/>
:::note
The inner product can be thought of as a measure of how much two vectors "align" in a given vector space. Higher values indicate higher similarity. However, the raw values can be large for long vectors; hence, normalization is recommended for better interpretation. If the vectors are normalized, their dot product will be `1` if they are identical and `0` if they are orthogonal (uncorrelated).
:::
Considering our `product 1` and `product 2`, let's compute the `Inner Product` across all features.

:::tip
Vectors can also be stored in databases in **binary formats** to save space. In practical applications, it's crucial to strike a balance between the dimensionality of the vectors (which impacts storage and computational costs) and the quality or granularity of the information they capture.
:::