Update README.md
README.md CHANGED
@@ -20,3 +20,32 @@ All Nomic Embed Text models are now **multimodal**!
| OpenAI CLIP ViT B/16 | 68.3 | 56.3 | 43.82 |
| Jina CLIP v1 | 59.1 | 52.2 | 60.1 |

## Hosted Inference API

The easiest way to get started with Nomic Embed is through the Nomic Embedding API.

Generating embeddings with the `nomic` Python client is as easy as:

```python
from nomic import embed
import numpy as np

# Embed local images with the hosted Nomic Embedding API
output = embed.image(
    images=[
        "image_path_1.jpeg",
        "image_path_2.png",
    ],
    model='nomic-embed-vision-v1.5',
)

print(output['usage'])
embeddings = np.array(output['embeddings'])
print(embeddings.shape)
```

For more information, see the [API reference](https://docs.nomic.ai/reference/endpoints/nomic-embed-vision).
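
Because the vision and text models share one embedding space, the image embeddings above can be scored directly against text embeddings. The sketch below is illustrative rather than official: it assumes the `nomic` client's `embed.text` endpoint with `task_type='search_query'`, reuses the placeholder image paths from the example above, and normalizes vectors explicitly rather than assuming the API returns unit-length embeddings.

```python
from nomic import embed
import numpy as np

# Embed images and a text query into the shared embedding space.
# The image paths are placeholders carried over from the example above.
img_out = embed.image(
    images=["image_path_1.jpeg", "image_path_2.png"],
    model='nomic-embed-vision-v1.5',
)
txt_out = embed.text(
    texts=['a photo of a dog playing fetch'],  # hypothetical query
    model='nomic-embed-text-v1.5',
    task_type='search_query',
)

image_embeddings = np.array(img_out['embeddings'])
query_embedding = np.array(txt_out['embeddings'])[0]

# Normalize explicitly (an assumption here, so we don't rely on the API
# returning unit-length vectors); dot products then equal cosine similarities.
image_embeddings /= np.linalg.norm(image_embeddings, axis=1, keepdims=True)
query_embedding /= np.linalg.norm(query_embedding)

similarities = image_embeddings @ query_embedding
print(similarities)  # highest score = image that best matches the query
```

The highest-scoring image is the closest cross-modal match, which is the basis for text-to-image search over a corpus embedded with `nomic-embed-vision-v1.5`.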

## Data Visualization

Click the Nomic Atlas map below to visualize a 100,000-sample subset of CC3M comparing the vision and text embedding spaces!

[![image/webp](https://cdn-uploads.huggingface.co/production/uploads/607997c83a565c15675055b3/pjhJhuNyRfPagRd_c_iUz.webp)](https://atlas.nomic.ai/data/nomic-multimodal-series/cc3m-100k-image-bytes-v15/map)