---
annotations_creators:
- expert-generated
language:
- ar
- bn
- en
- es
- fa
- fi
- fr
- hi
- id
- ja
- ko
- ru
- sw
- te
- th
- zh
multilinguality:
- multilingual
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---

# Dataset Card for MIRACL Corpus

## Dataset Description
- **Homepage:** [`miracl.ai`](http://miracl.ai)
- **Repository:** [`https://github.com/project-miracl/miracl`](https://github.com/project-miracl/miracl)
- **Paper:** Coming Soon!

MIRACL 🌍🙌🌏 (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world.

This dataset contains the collections (corpora) for the 16 "known languages". The remaining 2 "surprise languages" will not be released until later.

The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor, based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages constitutes a "document", the unit of retrieval. We preserve the Wikipedia article title of each passage. The raw Wikipedia dump can be found [here](https://github.com/project-miracl/miracl/blob/main/README.md#corpora).
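As a rough sketch of this step (an illustration, not the actual MIRACL preprocessing code), segmenting one article's plain text on blank lines and assigning `X#Y` identifiers could look like:

```python
# Illustration only: split an article's plain text (e.g., WikiExtractor
# output) into passages on blank lines, the natural discourse units
# mentioned above, and give each passage an "X#Y" docid.
def segment_article(article_id, title, plain_text):
    passages = [p.strip() for p in plain_text.split("\n\n") if p.strip()]
    return [
        {"docid": f"{article_id}#{i}", "title": title, "text": p}
        for i, p in enumerate(passages)
    ]
```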

## Dataset Structure
Each retrieval unit contains three fields: `docid`, `title`, and `text`. Consider an example from the English corpus:

```json
{
    "docid": "39#0",
    "title": "Albedo",
    "text": "Albedo (meaning 'whiteness') is the measure of the diffuse reflection of solar radiation out of the total solar radiation received by an astronomical body (e.g. a planet like Earth). It is dimensionless and measured on a scale from 0 (corresponding to a black body that absorbs all incident radiation) to 1 (corresponding to a body that reflects all incident radiation)."
}
```
The `docid` has the schema `X#Y`, where all passages with the same `X` come from the same Wikipedia article and `Y` numbers the passages within that article sequentially. The `text` field contains the text of the passage, and the `title` field contains the title of the article the passage comes from.
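For example, a `docid` can be split back into its article id and passage number like so:

```python
# Recover the article id (X) and passage index (Y) from a docid.
docid = "39#0"
article_id, passage_index = docid.rsplit("#", 1)
print(article_id, int(passage_index))  # -> 39 0
```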

The collection can be loaded using:

```python
import datasets

lang = 'ar'  # or any of the 16 languages
miracl_corpus = datasets.load_dataset('miracl/miracl-corpus', lang)['train']

for doc in miracl_corpus:
    docid = doc['docid']
    title = doc['title']
    text = doc['text']
```
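Because all passages sharing an `X` prefix come from the same article, the corpus can also be regrouped by article. A minimal sketch, reusing `miracl_corpus` from the snippet above:

```python
from collections import defaultdict

# Group passages by article id (the X in "X#Y") and sort by passage index,
# approximately reconstructing each article's text.
articles = defaultdict(list)
for doc in miracl_corpus:
    article_id, passage_index = doc['docid'].rsplit('#', 1)
    articles[article_id].append((int(passage_index), doc['text']))

full_text = {
    aid: "\n\n".join(text for _, text in sorted(parts))
    for aid, parts in articles.items()
}
```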