README.md: 34 additions & 39 deletions
@@ -14,21 +14,21 @@
<!-- start elevator-pitch -->

-DocArray is a library for nested, unstructured, multimodal data in transit, including text, image, audio, video, 3D mesh, etc. It allows deep-learning engineers to efficiently process, embed, search, recommend, store, and transfer the multi-modal data with a Pythonic API.
+DocArray is a library for nested, unstructured, multimodal data in transit, including text, image, audio, video, 3D mesh, etc. It allows deep-learning engineers to efficiently process, embed, search, recommend, store, and transfer multimodal data with a Pythonic API.

-🚪 **Door to cross-/multi-modal world**: super-expressive data structure for representing complicated/mixed/nested text, image, video, audio, 3D mesh data. The foundation data structure of [Jina](https://github.com/jina-ai/jina), [CLIP-as-service](https://github.com/jina-ai/clip-as-service), [DALL·E Flow](https://github.com/jina-ai/dalle-flow), [DiscoArt](https://github.com/jina-ai/discoart) etc.
+🚪 **Door to the multimodal world**: super-expressive data structure for representing complicated/mixed/nested text, image, video, audio, and 3D mesh data. The foundation data structure of [Jina](https://github.com/jina-ai/jina), [CLIP-as-service](https://github.com/jina-ai/clip-as-service), [DALL·E Flow](https://github.com/jina-ai/dalle-flow), [DiscoArt](https://github.com/jina-ai/discoart), etc.

🧑‍🔬 **Data science powerhouse**: greatly accelerate data scientists' work on embedding, k-NN matching, querying, visualizing, evaluating via Torch/TensorFlow/ONNX/PaddlePaddle on CPU/GPU.

🚡 **Data in transit**: optimized for network communication, ready-to-wire at any time with fast and compressed serialization in Protobuf, bytes, base64, JSON, CSV, DataFrame. Perfect for streaming and out-of-memory data.

-🔎 **One-stop k-NN**: Unified and consistent API for mainstream vector databases that allows nearest neighboour search including Elasticsearch, Redis, ANNLite, Qdrant, Weaviate.
+🔎 **One-stop k-NN**: unified and consistent API for nearest neighbor search across mainstream vector databases, including Elasticsearch, Redis, AnnLite, Qdrant, and Weaviate.

-👒 **For modern apps**: GraphQL support makes your server versatile on request and response; built-in data validation and JSON Schema (OpenAPI) help you build reliable webservices.
+👒 **For modern apps**: GraphQL support makes your server versatile on request and response; built-in data validation and JSON Schema (OpenAPI) help you build reliable web services.

-🐍 **Pythonic experience**: designed to be as easy as a Python list. If you know how to Python, you know how to DocArray. Intuitive idioms and type annotation simplify the code you write.
+🐍 **Pythonic experience**: as easy as a Python list. If you can Python, you can DocArray. Intuitive idioms and type annotations simplify the code you write.

-🛸 **Integrate with IDE**: pretty-print and visualization on Jupyter notebook & Google Colab; comprehensive auto-complete and type hint in PyCharm & VS Code.
+🛸 **IDE integration**: pretty-printing and visualization in Jupyter notebooks and Google Colab; comprehensive autocomplete and type hints in PyCharm and VS Code.

Read more on [why you should use DocArray](https://docarray.jina.ai/get-started/what-is/) and [comparison to alternatives](https://docarray.jina.ai/get-started/what-is/#comparing-to-alternatives).
@@ -61,9 +61,9 @@ DocArray consists of three simple concepts:
Let's see DocArray in action with some examples.

-### Example 1: represent multimodal data in dataclass
+### Example 1: represent multimodal data in a dataclass

-The following news article card can be easily represented via `docarray.dataclass` and type annotation:
+You can easily represent the following news article card with `docarray.dataclass` and type annotation:

<table>
@@ -104,9 +104,9 @@ d = Document(a)
</table>
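The dataclass snippet itself is collapsed inside the table above. For orientation, here is a minimal sketch of what such a card can look like with `docarray.dataclass`; the class and field names are hypothetical, and only the final `d = Document(a)` line is confirmed by the hunk header above:

```python
from docarray import dataclass, Document
from docarray.typing import Image, Text


@dataclass
class Article:  # hypothetical class/field names, for illustration only
    banner: Image  # cover image of the news card
    headline: Text  # title text
    meta: Text  # byline / reading-time footer


a = Article(
    banner='cover.png',
    headline='Hello World',
    meta='someone · 2 min read',
)
d = Document(a)  # a nested Document, one chunk per field
```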
-### Example 2: a 10-liners text matching
+### Example 2: text matching in 10 lines

-Let's search for top-5 similar sentences of <kbd>she smiled too much</kbd> in "Pride and Prejudice".
+Let's search for the top-5 similar sentences to <kbd>she smiled too much</kbd> in "Pride and Prejudice":

```python
from docarray import Document, DocumentArray
```
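The hunk cuts off after the import line. As a sketch, the full 10-liner can look like this, assuming docarray 0.x's feature-hashing helper (`embed_feature_hashing`), which the next hunk's context mentions; the Gutenberg URL and exact parameters here are assumptions:

```python
from docarray import Document, DocumentArray

# download the book and split it into one Document per non-empty line
d = Document(uri='https://www.gutenberg.org/files/1342/1342-0.txt').load_uri_to_text()
da = DocumentArray(Document(text=s.strip()) for s in d.text.split('\n') if s.strip())
da.apply(Document.embed_feature_hashing)  # cheap, model-free embeddings

q = (
    Document(text='she smiled too much')
    .embed_feature_hashing()
    .match(da, limit=5, exclude_self=True, metric='jaccard', use_scipy=True)
)

print(q.matches[:, ('text', 'scores__jaccard')])  # top-5 sentences and their distances
```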
@@ -137,7 +137,7 @@ Here the feature embedding is done by simple [feature hashing](https://en.wikipe
### Example 3: external storage for out-of-memory data

-When your data is too big, storing in memory is probably not a good idea. DocArray supports [multiple storage backends](https://docarray.jina.ai/advanced/document-store/) such as SQLite, Weaviate, Qdrant and ANNLite. They are all unified under **the exact same user experience and API**. Take the above snippet as an example, you only need to change one line to use SQLite:
+When your data is too big, storing it in memory is not the best idea. DocArray supports [multiple storage backends](https://docarray.jina.ai/advanced/document-store/) such as SQLite, Weaviate, Qdrant and AnnLite. They're all unified under **the exact same user experience and API**. Take the above snippet: you only need to change one line to use SQLite:

```python
da = DocumentArray(
```
@@ -146,15 +146,13 @@ da = DocumentArray(
```python
)
```
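The constructor arguments sit between the two hunks and are not shown. A sketch of the full one-line change, assuming docarray's `storage`/`config` keyword arguments (the connection path and table name below are illustrative):

```python
from docarray import DocumentArray

# same DocumentArray as before, now persisted in SQLite instead of memory
da = DocumentArray(
    storage='sqlite',
    config={'connection': 'example.db', 'table_name': 'texts'},
)
```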
-The code snippet can still run **as-is**. All APIs remain the same, the code after are then running in a "in-database" manner.
+The code snippet can still run **as-is**. All APIs remain the same; the subsequent code then runs in an "in-database" manner.

-Besides saving memory, one can leverage storage backends for persistence, faster retrieval (e.g. on nearest-neighbour queries).
+Besides saving memory, you can leverage storage backends for persistence and faster retrieval (e.g. on nearest-neighbor queries).

+### Example 4: complete workflow of visual search

-### Example 4: a complete workflow of visual search

-Let's use DocArray and the [Totally Looks Like](https://sites.google.com/view/totally-looks-like-dataset) dataset to build a simple meme image search. The dataset contains 6,016 image-pairs stored in `/left` and `/right`. Images that share the same filename are perceptually similar. For example:
+Let's use DocArray and the [Totally Looks Like](https://sites.google.com/view/totally-looks-like-dataset) dataset to build a simple meme image search. The dataset contains 6,016 image-pairs stored in `/left` and `/right`. Images that share the same filename appear similar to the human eye. For example:

<table>
<thead>
@@ -175,31 +173,31 @@ Let's use DocArray and the [Totally Looks Like](https://sites.google.com/view/to
</tbody>
</table>

-Our problem is given an image from `/left`, can we find its most-similar image in `/right`? (without looking at the filename of course).
+Given an image from `/left`, can we find the most-similar image to it in `/right` (without looking at the filename)?

### Load images

-First we load images. You *can* go to [Totally Looks Like](https://sites.google.com/view/totally-looks-like-dataset) website, unzip and load images as below:
+First we load the images. You *can* go to the [Totally Looks Like](https://sites.google.com/view/totally-looks-like-dataset) website, unzip, and load the images as below:
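The loading snippet itself was lost in this rendering. A sketch of what it plausibly looks like, assuming the archive is unzipped into local `left/` and `right/` folders and using docarray's `DocumentArray.from_files` helper:

```python
from docarray import DocumentArray

# one Document per image file; [:1000] keeps the demo small (see the note below)
left_da = DocumentArray.from_files('left/*.jpg')[:1000]
right_da = DocumentArray.from_files('right/*.jpg')[:1000]
```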
-If you have more than 15GB of RAM and want to try using the whole dataset instead of just the first 1000 images, remove [:1000] when loading the files into the DocumentArrays left_da and right_da.
+If you have more than 15GB of RAM and want to try the whole dataset instead of just the first 1,000 images, remove `[:1000]` when loading the files into the DocumentArrays `left_da` and `right_da`.

-You will see a running progress bar to indicate the downloading process.
+You'll see a progress bar indicating download progress.

-To get a feeling of the data you will handle, plot them in one sprite image. You will need to have matplotlib and torch installed to run this snippet:
+To get a feeling for the data, we can plot it in one sprite image. You need matplotlib and torch installed to run this snippet:

```python
left_da.plot_image_sprites()
```
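The preprocessing step sits between this hunk and the next, whose context reads "Did I mention `apply` works in parallel?". A sketch of what that step can look like with docarray's built-in image helpers, continuing from the loading step above; the target shape and exact chain of calls are assumptions:

```python
from docarray import Document


def preprocess(d: Document) -> Document:
    return (
        d.load_uri_to_image_tensor()           # read the file into `d.tensor`
        .set_image_tensor_shape((200, 200))    # resize to a fixed shape
        .set_image_tensor_normalization()      # normalize pixel values
        .set_image_tensor_channel_axis(-1, 0)  # channel-last -> channel-first for torch
    )


left_da.apply(preprocess)  # runs the function over every Document, in parallel
```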
@@ -232,7 +230,7 @@ Did I mention `apply` works in parallel?
### Embed images

-Now convert images into embeddings using a pretrained ResNet50:
+Now let's convert the images into embeddings using a pretrained ResNet50:

```python
import torchvision
```
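The rest of the embedding snippet falls outside the hunk. A sketch, assuming docarray's `.embed()` accepts a torch module (the device name is illustrative; use `'cpu'` if you have no GPU):

```python
import torchvision

model = torchvision.models.resnet50(pretrained=True)  # feature extractor
left_da.embed(model, device='cuda', to_numpy=True)    # fills `d.embedding` for every Document
```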
@@ -245,7 +243,7 @@ This step takes ~30 seconds on GPU. Beside PyTorch, you can also use TensorFlow,
### Visualize embeddings

-You can visualize the embeddings via tSNE in an interactive embedding projector. You will need to have pydantic, uvicorn and fastapi installed to run this snippet:
+You can visualize the embeddings via t-SNE in an interactive embedding projector. You'll need pydantic, uvicorn and FastAPI installed to run this snippet:
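The snippet itself is collapsed here; in docarray 0.x this is a one-liner via `plot_embeddings` (the `image_sprites` argument is an assumption):

```python
# opens an interactive t-SNE embedding projector in the browser
left_da.plot_embeddings(image_sprites=True)
```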
<a href="https://docarray.jina.ai"><img src="https://github.com/docarray/docarray/blob/main/.github/README-img/tsne.gif?raw=true" alt="Visualizing embedding via tSNE and embedding projector" width="90%"></a>
</p>

-Fun is fun, but recall our goal is to match left images against right images and so far we have only handled the left. Let's repeat the same procedure for the right:
+Fun is fun, but our goal is to match left images against right images, and so far we have only handled the left. Let's repeat the same procedure for the right:

<table>
<tr>
@@ -289,9 +286,9 @@ right_da = (
</tr>
</table>

-### Match nearest neighbours
+### Match nearest neighbors

-We can now match the left to the right and take the top-9 results.
+Now we can match the left to the right and take the top-9 results.
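The match call and the top-9 plotting code were lost in this rendering. A sketch of the matching step, assuming docarray's `.match()` (the metric and limit are assumptions):

```python
# for every left Document, store its 9 nearest right Documents in `d.matches`
left_da.match(right_da, metric='cosine', limit=9)
```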
<a href="https://docarray.jina.ai"><img src="https://github.com/jina-ai/docarray/blob/main/.github/README-img/9nn.png?raw=true" alt="Visualizing top-9 matches using DocArray API" height="250px"></a>
</p>

-What we did here is revert the preprocessing steps (i.e. switching axis and normalizing) on the copied matches, so that you can visualize them using image sprites.
+Here we reversed the preprocessing steps (i.e. switching axes and normalizing) on the copied matches, so you can visualize them using image sprites.

### Quantitative evaluation
@@ -350,7 +347,7 @@ groundtruth = DocumentArray(
```python
)
```

-Here we create a new DocumentArray with real matches by simply replacing the filename, e.g. `left/00001.jpg` to `right/00001.jpg`. That's all we need: if the predicted match has the identical `uri` as the groundtruth match, then it is correct.
+Here we created a new DocumentArray of real matches by simply replacing the filename, e.g. `left/00001.jpg` with `right/00001.jpg`. That's all we need: if a predicted match has the same `uri` as the groundtruth match, it is correct.

Now let's check recall rate from 1 to 5 over the full dataset:
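The evaluation snippet is collapsed between the hunks. A sketch assuming docarray 0.x's `evaluate` helper; the `hash_fn` and keyword names are assumptions about that API:

```python
# recall@k: fraction of left images whose true partner appears in the top-k matches
for k in range(1, 6):
    score = left_da.evaluate(
        groundtruth, hash_fn=lambda d: d.uri, metric='recall_at_k', k=k
    )
    print(f'recall@{k}: {score:.3f}')
```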
-More metrics can be used such as `precision_at_k`, `ndcg_at_k`, `hit_at_k`.
+You can also use other metrics, like `precision_at_k`, `ndcg_at_k`, and `hit_at_k`.

-If you think a pretrained ResNet50 is good enough, let me tell you with [Finetuner](https://github.com/jina-ai/finetuner) you could do much better in just 10 extra lines of code. [Here is how](https://finetuner.jina.ai/notebooks/image_to_image/).
+If you think a pretrained ResNet50 is good enough, let me tell you that with [Finetuner](https://github.com/jina-ai/finetuner) you can do much better with [just another ten lines of code](https://finetuner.jina.ai/notebooks/image_to_image/).

### Save results

-You can save a DocumentArray to binary, JSON, dict, DataFrame, CSV or Protobuf message with/without compression. In its simplest form,
+You can save a DocumentArray to binary, JSON, dict, DataFrame, CSV or Protobuf message, with or without compression. In its simplest form:

```python
left_da.save('left_da.bin')
```
-To reuse it, do `left_da = DocumentArray.load('left_da.bin')`.
+To reuse that DocumentArray's data, use `left_da = DocumentArray.load('left_da.bin')`.

If you want to transfer a DocumentArray from one machine to another or share it with your colleagues, you can do:

```python
left_da.push('my_shared_da')
```
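The receiving side is outside this hunk; presumably it is the `pull` counterpart (a real docarray method, reusing the token from the snippet above):

```python
from docarray import DocumentArray

# on any other machine: fetch the shared DocumentArray by its token
left_da = DocumentArray.pull('my_shared_da')
```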
@@ -406,7 +401,7 @@ Intrigued? That's only scratching the surface of what DocArray is capable of. [R
<!-- start support-pitch -->

## Support

-- Join our [Slack community](https://slack.jina.ai) and chat with other community members about ideas.
+- Join our [Slack community](https://jina.ai/slack) and chat with other community members about ideas.