Tasks: Question Answering
Sub-tasks: extractive-qa
Modalities: Text
Formats: parquet
Size: 10K - 100K
License:

Update README.md
README.md CHANGED

@@ -61,8 +61,8 @@ Description of the dataset columns:
| lang | str | The language of the data instance |
| question | str | The question to answer |
| context | str | The context, a Wikipedia paragraph that might or might not contain the answer to the question |
- | answer_start | int | The character index in 'context' where the answer starts. If the question is unanswerable, this is -1 |
+ | answertable | bool | True if the question can be answered given the context, False otherwise |
+ | answer_start | int | The character index in 'context' where the answer starts. If the question is unanswerable given the context, this is -1 |
| answer | str | The answer, a span of text from 'context'. If the question is unanswerable given the context, this can be 'yes' or 'no' |

@@ -76,22 +76,82 @@ Check out the [datasets documentation](https://huggingface.co/docs/datasets/qui

`dataset.to_pandas`, to convert the dataset into a pandas.DataFrame format.

## Citations
```
@article{clark-etal-2020-tydi,
title = "{T}y{D}i {QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages",
author = "Clark, Jonathan H. and
Choi, Eunsol and
Collins, Michael and
Garrette, Dan and
Kwiatkowski, Tom and
Nikolaev, Vitaly and
Palomaki, Jennimaria",
editor = "Johnson, Mark and
Roark, Brian and
Nenkova, Ani",
journal = "Transactions of the Association for Computational Linguistics",
volume = "8",
year = "2020",
address = "Cambridge, MA",
publisher = "MIT Press",
url = "https://aclanthology.org/2020.tacl-1.30",
doi = "10.1162/tacl_a_00317",
pages = "454--470",
abstract = "Confidently making progress on multilingual modeling requires challenging, trustworthy evaluations. We present TyDi QA{---}a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology{---}the set of linguistic features each language expresses{---}such that we expect models performing well on this set to generalize across a large number of the world{'}s languages. We present a quantitative analysis of the data quality and example-level qualitative linguistic analyses of observed language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer, but don{'}t know the answer yet, and the data is collected directly in each language without the use of translation.",
}

@inproceedings{asai-etal-2021-xor,
title = "{XOR} {QA}: Cross-lingual Open-Retrieval Question Answering",
author = "Asai, Akari and
Kasai, Jungo and
Clark, Jonathan and
Lee, Kenton and
Choi, Eunsol and
Hajishirzi, Hannaneh",
editor = "Toutanova, Kristina and
Rumshisky, Anna and
Zettlemoyer, Luke and
Hakkani-Tur, Dilek and
Beltagy, Iz and
Bethard, Steven and
Cotterell, Ryan and
Chakraborty, Tanmoy and
Zhou, Yichao",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.46",
doi = "10.18653/v1/2021.naacl-main.46",
pages = "547--564",
abstract = "Multilingual question answering tasks typically assume that answers exist in the same language as the question. Yet in practice, many languages face both information scarcity{---}where languages have few reference articles{---}and information asymmetry{---}where questions reference concepts from other cultures. This work extends open-retrieval question answering to a cross-lingual setting enabling questions from one language to be answered via answer content from another language. We construct a large-scale dataset built on 40K information-seeking questions across 7 diverse non-English languages that TyDi QA could not find same-language answers for. Based on this dataset, we introduce a task framework, called Cross-lingual Open-Retrieval Question Answering (XOR QA), that consists of three new tasks involving cross-lingual document retrieval from multilingual and English resources. We establish baselines with state-of-the-art machine translation systems and cross-lingual pretrained models. Experimental results suggest that XOR QA is a challenging task that will facilitate the development of novel techniques for multilingual question answering. Our data and code are available at \url{https://nlp.cs.washington.edu/xorqa/}.",
}

@inproceedings{muller-etal-2023-evaluating,
title = "Evaluating and Modeling Attribution for Cross-Lingual Question Answering",
author = "Muller, Benjamin and
Wieting, John and
Clark, Jonathan and
Kwiatkowski, Tom and
Ruder, Sebastian and
Soares, Livio and
Aharoni, Roee and
Herzig, Jonathan and
Wang, Xinyi",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.10",
doi = "10.18653/v1/2023.emnlp-main.10",
pages = "144--157",
abstract = "Trustworthy answer content is abundant in many high-resource languages and is instantly accessible through question answering systems {---} yet this content can be hard to access for those that do not speak these languages. The leap forward in cross-lingual modeling quality offered by generative language models offers much promise, yet their raw generations often fall short in factuality. To improve trustworthiness in these systems, a promising direction is to attribute the answer to a retrieved source, possibly in a content-rich language different from the query. Our work is the first to study attribution for cross-lingual question answering. First, we collect data in 5 languages to assess the attribution level of a state-of-the-art cross-lingual QA system. To our surprise, we find that a substantial portion of the answers is not attributable to any retrieved passages (up to 50{\%} of answers exactly matching a gold reference) despite the system being able to attend directly to the retrieved text. Second, to address this poor attribution level, we experiment with a wide range of attribution detection techniques. We find that Natural Language Inference models and PaLM 2 fine-tuned on a very small amount of attribution data can accurately detect attribution. With these models, we improve the attribution level of a cross-lingual QA system. Overall, we show that current academic generative cross-lingual QA systems have substantial shortcomings in attribution and we build tooling to mitigate these issues.",
}

```
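
For reference, the column layout documented in the first hunk can be exercised with a short snippet. This is a minimal sketch, not part of the commit: the Hub repo ID `user/dataset-name` and the split name are placeholders, and the field semantics are taken from the table above (`answertable` flags answerable rows, `answer_start` is -1 for unanswerable ones).

```python
# Minimal sketch, not part of this commit: inspect the documented columns.
# "user/dataset-name" and "train" are placeholders for the actual Hub repo and split.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")
row = ds[0]

print(row["lang"], "-", row["question"])
if row["answertable"]:
    # 'answer' is documented as a span of 'context' starting at character 'answer_start'.
    start = row["answer_start"]
    print(row["context"][start:start + len(row["answer"])])  # should match row["answer"]
else:
    # Unanswerable-from-context rows carry answer_start == -1 (the answer may be 'yes' or 'no').
    print(row["answer_start"], row["answer"])
```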
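
The `dataset.to_pandas` pointer in the second hunk's context can be illustrated the same way; again a sketch with a placeholder repo ID rather than this dataset's actual identifier.

```python
# Sketch of the pandas conversion the README refers to; placeholder repo ID.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")
df = ds.to_pandas()  # datasets.Dataset.to_pandas() returns a pandas.DataFrame

# For example, tally answerable vs. unanswerable questions per language.
print(df.groupby(["lang", "answertable"]).size())
```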