Update README.md

README.md
---
license: apache-2.0
tags:
- clip
- ecommerce
- multimodal retrieval
- transformers
- openCLIP
datasets:
- Marqo/amazon-products-eval
- Marqo/google-shopping-general-eval
---

# Marqo Ecommerce Embedding Models

In this work, we introduce two state-of-the-art embedding models for ecommerce products: Marqo-Ecommerce-B and Marqo-Ecommerce-L.
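
As a minimal sketch of how these models might be loaded for inference, the snippet below assumes the checkpoints are published on the Hugging Face Hub under an ID such as `Marqo/marqo-ecommerce-embeddings-L` and are OpenCLIP-compatible (as the tags above suggest); the hub ID, image URL, and query text are illustrative placeholders, not confirmed values from this card.

```python
# Hypothetical usage sketch: embed a product image and a text query with OpenCLIP,
# then score them with cosine similarity. Hub ID and URLs are assumptions.
import torch
import open_clip
import requests
from PIL import Image

model_id = "hf-hub:Marqo/marqo-ecommerce-embeddings-L"  # assumed hub ID; -B variant analogous
model, _, preprocess = open_clip.create_model_and_transforms(model_id)
tokenizer = open_clip.get_tokenizer(model_id)
model.eval()

# Placeholder product image and text query.
img = Image.open(requests.get("https://example.com/product.jpg", stream=True).raw)
image = preprocess(img).unsqueeze(0)          # shape: (1, 3, H, W)
text = tokenizer(["green running shoes"])     # shape: (1, context_length)

with torch.no_grad():
    image_features = model.encode_image(image, normalize=True)
    text_features = model.encode_text(text, normalize=True)
    similarity = (image_features @ text_features.T).item()  # cosine similarity

print(f"text-image similarity: {similarity:.4f}")
```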

The benchmarking results highlight the strong performance of the Marqo-Ecommerce models, both of which consistently outperformed all other evaluated models across a range of metrics. Specifically, on the Google Shopping Text-to-Image task, Marqo-Ecommerce-L achieved improvements of 43% in MRR, 41% in nDCG@10, and 33% in Recall@10 compared to ViT-B-16-SigLIP, the baseline model for these benchmarks. On the Google Shopping Category-to-Image task, we saw improvements of 67% in mAP, 41% in nDCG@10, and 42% in Precision@10.

<img src="https://raw.githubusercontent.com/marqo-ai/marqo-ecommerce-embeddings/main/performance.png?token=GHSAT0AAAAAACZY3OVL7HD6UZTBOJ7FLG7MZZOCJSA" alt="multi split visual" width="700"/>
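
The retrieval metrics cited above (MRR, Recall@10, nDCG@10, Precision@10, mAP) are standard ranking metrics; as a rough illustration of how two of them are typically computed (not the evaluation code used for these benchmarks), consider:

```python
# Illustrative sketch: MRR and Recall@k over a list of queries, where each query
# has a ranked list of retrieved item IDs and a set of relevant item IDs.
def mean_reciprocal_rank(ranked_lists, relevant_sets):
    total = 0.0
    for ranked, relevant in zip(ranked_lists, relevant_sets):
        rr = 0.0
        for rank, item in enumerate(ranked, start=1):
            if item in relevant:
                rr = 1.0 / rank  # reciprocal rank of the first relevant hit
                break
        total += rr
    return total / len(ranked_lists)

def recall_at_k(ranked_lists, relevant_sets, k=10):
    total = 0.0
    for ranked, relevant in zip(ranked_lists, relevant_sets):
        hits = sum(1 for item in ranked[:k] if item in relevant)
        total += hits / len(relevant)  # fraction of relevant items found in the top k
    return total / len(ranked_lists)

# Toy example: one query whose single relevant item is ranked second.
print(mean_reciprocal_rank([["a", "b", "c"]], [{"b"}]))  # 0.5
print(recall_at_k([["a", "b", "c"]], [{"b"}], k=10))     # 1.0
```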

More benchmarking results can be found below.