Tasks: Text Retrieval
Modalities: Text
Formats: parquet
Sub-tasks: document-retrieval
Languages: Telugu
Size: 1K - 10K
We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model.
## Loading the dataset

In [miracl-te-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-te-corpus-22-12) we provide the corpus embeddings. Note that, depending on the selected split, the respective files can be quite large.

You can either load the dataset like this:

```python
from datasets import load_dataset

# Load the corpus with its precomputed embeddings from the Hugging Face Hub
# (the split name "train" is assumed here).
docs = load_dataset("Cohere/miracl-te-corpus-22-12", split="train")
```
## Performance

In the following table we compare the Cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset.

We compute nDCG@10 (a ranking-based metric), as well as hit@3: is at least one relevant document among the top-3 results? We find hit@3 easier to interpret, as it gives the fraction of queries for which a relevant document is found among the top-3 results.
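Both metrics are simple to compute from a ranked result list. A minimal sketch with binary relevance judgments (the function names are ours, not part of any dataset tooling):

```python
import math

def hit_at_k(relevant_ids, ranked_ids, k=3):
    """Hit@k: 1 if at least one relevant document appears in the top-k results."""
    return 1.0 if any(doc in relevant_ids for doc in ranked_ids[:k]) else 0.0

def ndcg_at_k(relevant_ids, ranked_ids, k=10):
    """nDCG@k with binary relevance: DCG of the ranking divided by the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, doc in enumerate(ranked_ids[:k]) if doc in relevant_ids)
    ideal = sum(1.0 / math.log2(i + 2)
                for i in range(min(len(relevant_ids), k)))
    return dcg / ideal if ideal > 0 else 0.0
```

Averaging each metric over all queries (and multiplying by 100) gives scores on the scale used in the tables below.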
Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is known as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than reported.

| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 hit@3 |
|---|---|---|---|---|
| miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 |
| miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 |
| miracl-de | 44.4 | 60.7 | 19.6 | 29.8 |
| miracl-en | 44.6 | 62.2 | 30.2 | 43.2 |
| miracl-es | 47.0 | 74.1 | 27.0 | 47.2 |
| miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 |
| miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 |
| miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 |
| miracl-id | 44.8 | 63.8 | 39.2 | 54.7 |
| miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 |
| **Avg** | 51.7 | 67.5 | 34.7 | 46.0 |
Further languages (not supported by Elasticsearch):

| Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 |
|---|---|---|
| miracl-fa | 44.8 | 53.6 |
| miracl-ja | 49.0 | 61.0 |
| miracl-ko | 50.9 | 64.8 |
| miracl-sw | 61.4 | 74.5 |
| miracl-te | 67.8 | 72.3 |
| miracl-th | 60.2 | 71.9 |
| miracl-yo | 56.4 | 62.2 |
| miracl-zh | 43.8 | 56.5 |
| **Avg** | 54.3 | 64.6 |
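The corpus embeddings are intended for dense retrieval: embed the query with the same multilingual-22-12 model (the card's elided example obtains `query_embedding = response.embeddings[0]` from the API) and score passages by dot product. A toy sketch with made-up vectors standing in for the real embeddings:

```python
import numpy as np

# Toy stand-ins: in practice these come from the dataset's embedding column
# and from embedding the query with the same multilingual-22-12 model.
corpus_embeddings = np.array([[0.1, 0.9], [0.8, 0.2], [0.4, 0.4]])
query_embedding = np.array([0.9, 0.1])

scores = corpus_embeddings @ query_embedding  # dot-product similarity per passage
top_k = np.argsort(-scores)[:3]               # passage indices, best match first
```

For the full corpus, an approximate nearest-neighbor index would replace the exhaustive dot product, but the scoring is the same.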