lhoestq committed
Commit 2bf1271
1 Parent(s): d49fb52

add dataset_info in dataset metadata

Files changed (1): README.md (+54 −1)
README.md CHANGED

@@ -25,6 +25,59 @@ task_ids:
 - extractive-qa
 paperswithcode_id: mrqa-2019
 pretty_name: MRQA 2019
+dataset_info:
+  features:
+  - name: subset
+    dtype: string
+  - name: context
+    dtype: string
+  - name: context_tokens
+    sequence:
+    - name: tokens
+      dtype: string
+    - name: offsets
+      dtype: int32
+  - name: qid
+    dtype: string
+  - name: question
+    dtype: string
+  - name: question_tokens
+    sequence:
+    - name: tokens
+      dtype: string
+    - name: offsets
+      dtype: int32
+  - name: detected_answers
+    sequence:
+    - name: text
+      dtype: string
+    - name: char_spans
+      sequence:
+      - name: start
+        dtype: int32
+      - name: end
+        dtype: int32
+    - name: token_spans
+      sequence:
+      - name: start
+        dtype: int32
+      - name: end
+        dtype: int32
+  - name: answers
+    sequence: string
+  config_name: plain_text
+  splits:
+  - name: test
+    num_bytes: 57712177
+    num_examples: 9633
+  - name: train
+    num_bytes: 4090681873
+    num_examples: 516819
+  - name: validation
+    num_bytes: 484107026
+    num_examples: 58221
+  download_size: 1479518355
+  dataset_size: 4632501076
 ---
 
 # Dataset Card for MRQA 2019
@@ -294,4 +347,4 @@ Unknown
 
 ### Contributions
 
-Thanks to [@jimmycode](https://github.com/jimmycode), [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
+Thanks to [@jimmycode](https://github.com/jimmycode), [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
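As a quick sanity check on the added metadata (a sketch of my own, not part of the commit): the top-level `dataset_size` should equal the sum of the per-split `num_bytes` values declared in the diff.

```python
# Sanity-check the dataset_info metadata added in this commit:
# dataset_size is expected to be the sum of num_bytes over all splits.
splits = {
    "test": 57_712_177,
    "train": 4_090_681_873,
    "validation": 484_107_026,
}
dataset_size = sum(splits.values())
print(dataset_size)  # 4632501076, matching the declared dataset_size
```

Note that `download_size` (1479518355) is smaller than `dataset_size`, as expected: it measures the compressed files on disk, while `dataset_size` measures the decompressed Arrow data.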