---
annotations_creators:
- crowdsourced
language:
- en
- ar
- bn
- fi
- ja
- ko
- ru
- te
language_creators:
- crowdsourced
license:
- mit
multilinguality:
- multilingual
pretty_name: XORQA Reading Comprehension
size_categories:
- '10K<n<100K'
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---

# Dataset Card for "tydi_xor_rc_yes_no_unanswerable"


## Dataset Description

- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Paper:** [XOR QA: Cross-lingual Open-Retrieval Question Answering](https://aclanthology.org/2021.naacl-main.46)

### Dataset Summary

[TyDi QA](https://huggingface.co/datasets/tydiqa) is a question answering dataset covering 11 typologically diverse languages.
[XORQA](https://github.com/AkariAsai/XORQA) extends the original TyDi QA dataset with unanswerable questions; its context documents are in English only, while the questions are in 7 languages.
This dataset is a simplified version of the [Reading Comprehension data](https://nlp.cs.washington.edu/xorqa/XORQA_site/data/tydi_xor_rc_yes_no_unanswerable.zip) from XORQA.

## Dataset Structure

The dataset contains a train and a validation split, with 15,445 and 3,646 examples, respectively. Access them with:

```py
from datasets import load_dataset
dataset = load_dataset("coastalcph/tydi_xor_rc_yes_no_unanswerable")
train_set = dataset["train"]
validation_set = dataset["validation"]
```
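
A quick check that the splits have the expected sizes (printing a `DatasetDict` reports the features and row counts per split):

```py
# The reported row counts should match the 15,445 / 3,646 figures above.
print(dataset)
print(len(train_set), len(validation_set))
```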

### Data Instances

Description of the dataset columns:

| Column name     | Type | Description                                                                                          |
| --------------- | ---- | ---------------------------------------------------------------------------------------------------- |
| `lang`          | str  | The language of the data instance                                                                     |
| `question`      | str  | The question to answer                                                                                |
| `context`       | str  | A Wikipedia paragraph that may or may not contain the answer to the question                          |
| `is_impossible` | bool | `False` if the question can be answered given the context, `True` otherwise                           |
| `answer_start`  | int  | The character index in `context` where the answer starts; `-1` if the question is unanswerable        |
| `answer`        | str  | The answer, a span of text from `context`; when no answer span exists in the context, this can be `'yes'` or `'no'` |
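
A minimal sketch of how these columns fit together, following the descriptions above (the span check applies only to extractive answers, which per the table are the cases with `answer_start != -1`):

```py
from datasets import load_dataset

dataset = load_dataset("coastalcph/tydi_xor_rc_yes_no_unanswerable")
example = dataset["train"][0]

print(example["lang"], "|", example["question"])

# Extractive answers are spans of `context`: the text starting at
# `answer_start` should reproduce `answer` exactly. Unanswerable and
# yes/no cases have answer_start == -1, so they are skipped here.
if not example["is_impossible"] and example["answer_start"] != -1:
    start = example["answer_start"]
    span = example["context"][start:start + len(example["answer"])]
    assert span == example["answer"]
```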


## Useful stuff

Check out the [`datasets` documentation](https://huggingface.co/docs/datasets/quickstart) to learn how to manipulate and use the dataset. In particular, you might find the following functions useful:

- `dataset.filter`, for filtering out data (e.g., keeping only the instances of specific languages),
- `dataset.map`, for manipulating the dataset,
- `dataset.to_pandas`, for converting the dataset into a `pandas.DataFrame`.
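
Continuing from the loading snippet above, a minimal sketch combining all three (`question_len` is an illustrative derived column, not part of the dataset):

```py
# Keep only the Finnish examples.
finnish = dataset["train"].filter(lambda ex: ex["lang"] == "fi")

# Add a derived column with the question length in characters.
with_len = finnish.map(lambda ex: {"question_len": len(ex["question"])})

# Convert to a pandas DataFrame for further analysis.
df = with_len.to_pandas()
print(df[["question", "question_len", "is_impossible"]].head())
```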


## Citation

```bibtex
@inproceedings{xorqa,
    title   = {{XOR} {QA}: Cross-lingual Open-Retrieval Question Answering},
    author  = {Akari Asai and Jungo Kasai and Jonathan H. Clark and Kenton Lee and Eunsol Choi and Hannaneh Hajishirzi},
    booktitle = {NAACL-HLT},
    year    = {2021}
}
```

```bibtex
@article{tydiqa,
    title   = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
    author  = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
    year    = {2020},
    journal = {Transactions of the Association for Computational Linguistics}
}
```