danielh committed
Commit 107d0d9
1 Parent(s): d62cb23

Update README.md

Files changed (1):
1. README.md +97 -3
README.md CHANGED

---
annotations_creators:
- crowdsourced
language:
- en
- ar
- bn
- fi
- ja
- ko
- ru
- te
language_creators:
- crowdsourced
license:
- mit
multilinguality:
- multilingual
pretty_name: XORQA Reading Comprehension
size_categories:
- '10K<n<100K'
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---

# Dataset Card for "tydi_xor_rc_yes_no_unanswerable"

## Dataset Description

- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Paper:** [https://aclanthology.org/2021.naacl-main.46](https://aclanthology.org/2021.naacl-main.46)

### Dataset Summary

[TyDi QA](https://huggingface.co/datasets/tydiqa) is a question answering dataset covering 11 typologically diverse languages.
[XORQA](https://github.com/AkariAsai/XORQA) extends the original TyDi QA dataset with unanswerable questions; its context documents are in English only, while the questions are in 7 languages.
This dataset is a simplified version of the [Reading Comprehension data](https://nlp.cs.washington.edu/xorqa/XORQA_site/data/tydi_xor_rc_yes_no_unanswerable.zip) from XORQA.

## Dataset Structure

The dataset contains a train and a validation set, with 15445 and 3646 examples, respectively. Access them with:

```py
from datasets import load_dataset

dataset = load_dataset("coastalcph/tydi_xor_rc_yes_no_unanswerable")
train_set = dataset["train"]
validation_set = dataset["validation"]
```
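
As a quick sanity check of the split sizes quoted above, a minimal sketch:

```py
from datasets import load_dataset

dataset = load_dataset("coastalcph/tydi_xor_rc_yes_no_unanswerable")

# Should print 15445 and 3646, matching the split sizes stated above.
print(len(dataset["train"]), len(dataset["validation"]))
```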

### Data Instances

Description of the dataset columns:

| Column name | Type | Description |
| ----------- | ---- | ----------- |
| lang | str | The language of the data instance |
| question | str | The question to answer |
| context | str | The context, a Wikipedia paragraph that might or might not contain the answer to the question |
| is_impossible | bool | FALSE if the question can be answered given the context, TRUE otherwise |
| answer_start | int | The character index in `context` where the answer starts. If the question is unanswerable, this is -1 |
| answer | str | The answer, a span of text from `context`. If the question is unanswerable given the context, this can be 'yes' or 'no' |
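
To make the relationship between these columns concrete, here is a minimal, illustrative sketch of inspecting one example; the branching simply mirrors the table above, and the choice of the first validation example is arbitrary:

```py
from datasets import load_dataset

dataset = load_dataset("coastalcph/tydi_xor_rc_yes_no_unanswerable")
example = dataset["validation"][0]

print(example["lang"], "|", example["question"])

if example["is_impossible"]:
    # Not answerable as a span of this context: answer_start is -1,
    # and 'answer' may hold 'yes' or 'no' (see the table above).
    print("Non-span answer:", example["answer"])
else:
    # Extractive answer: the span can be recovered from 'context' via 'answer_start'.
    start = example["answer_start"]
    span = example["context"][start : start + len(example["answer"])]
    print("Answer:", example["answer"], "| recovered from context:", span)
```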

## Useful stuff

Check out the [datasets documentation](https://huggingface.co/docs/datasets/quickstart) to learn how to manipulate and use the dataset. Specifically, you might find the following functions useful (a short combined sketch follows this list):

- `dataset.filter`, for filtering out data (useful for keeping only the instances of specific languages, for example).
- `dataset.map`, for manipulating the dataset.
- `dataset.to_pandas`, for converting the dataset into a `pandas.DataFrame`.
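
For illustration, a minimal sketch combining these three functions; the language code and the derived `question_len` column are just examples, not part of the dataset:

```py
from datasets import load_dataset

dataset = load_dataset("coastalcph/tydi_xor_rc_yes_no_unanswerable")
train_set = dataset["train"]

# Keep only the Japanese questions.
japanese = train_set.filter(lambda example: example["lang"] == "ja")

# Add a simple derived column, e.g. the question length in characters.
with_lengths = japanese.map(lambda example: {"question_len": len(example["question"])})

# Convert to a pandas.DataFrame for further analysis.
df = with_lengths.to_pandas()
print(df[["lang", "question", "question_len", "is_impossible"]].head())
```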

## Citation

```
@inproceedings{xorqa,
  title = {{XOR} {QA}: Cross-lingual Open-Retrieval Question Answering},
  author = {Akari Asai and Jungo Kasai and Jonathan H. Clark and Kenton Lee and Eunsol Choi and Hannaneh Hajishirzi},
  booktitle = {NAACL-HLT},
  year = {2021}
}
```

```
@article{tydiqa,
  title = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year = {2020},
  journal = {Transactions of the Association for Computational Linguistics}
}
```