---
license: apache-2.0
language:
  - ind
  - tha
pretty_name: Tydiqa
task_categories:
  - question-answering
tags:
  - question-answering
---

TyDi QA is a question answering dataset covering 11 typologically diverse languages with 204K question-answer pairs. The languages of TyDi QA are diverse with regard to their typology -- the set of linguistic features that each language expresses -- such that we expect models performing well on this set to generalize across a large number of the languages in the world. It contains language phenomena that would not be found in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are written by people who want to know the answer but do not know the answer yet (unlike SQuAD and its descendants), and the data is collected directly in each language without the use of translation (unlike MLQA and XQuAD).

Languages

ind, tha

Supported Tasks

Question Answering

Dataset Usage

Using datasets library

from datasets import load_dataset

dset = load_dataset("SEACrowd/tydiqa", trust_remote_code=True)
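
Once loaded, the result can be inspected like any other Hugging Face DatasetDict. The snippet below is a minimal sketch that assumes the default config exposes a "train" split; split and field names may differ between configs.

# Inspect the available splits and their sizes
print(dset)
# Peek at the first record of the train split (assumes a "train" split exists)
print(dset["train"][0])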

Using seacrowd library

import seacrowd as sc

# Load the dataset using the default config
dset = sc.load_dataset("tydiqa", schema="seacrowd")
# Check all available subsets (config names) of the dataset
print(sc.available_config_names("tydiqa"))
# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
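
One of the names returned by available_config_names can be passed in place of the <config_name> placeholder above. The snippet below is a minimal sketch: it assumes available_config_names returns a list of config-name strings and that the loaded object behaves like a Hugging Face DatasetDict; picking the first config is purely illustrative.

import seacrowd as sc

# Pick one of the listed configs (the first is chosen only as an example) and load it
config_names = sc.available_config_names("tydiqa")
dset = sc.load_dataset_by_config_name(config_name=config_names[0])

# Inspect the splits and one record; exact field names depend on the chosen schema and config
first_split = next(iter(dset))
print(dset[first_split][0])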

More details on how to use the seacrowd library can be found here.

Dataset Homepage

https://github.com/google-research-datasets/tydiqa

Dataset Version

Source: 1.0.0. SEACrowd: 2024.06.20.

Dataset License

Apache license 2.0 (apache-2.0)

Citation

If you are using the Tydiqa dataloader in your work, please cite the following:

@article{clark-etal-2020-tydi,
    title = "{T}y{D}i {QA}: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages",
    author = "Clark, Jonathan H.  and
      Choi, Eunsol  and
      Collins, Michael  and
      Garrette, Dan  and
      Kwiatkowski, Tom  and
      Nikolaev, Vitaly  and
      Palomaki, Jennimaria",
    editor = "Johnson, Mark  and
      Roark, Brian  and
      Nenkova, Ani",
    journal = "Transactions of the Association for Computational Linguistics",
    volume = "8",
    year = "2020",
    address = "Cambridge, MA",
    publisher = "MIT Press",
    url = "https://aclanthology.org/2020.tacl-1.30",
    doi = "10.1162/tacl_a_00317",
    pages = "454--470",
    abstract = "Confidently making progress on multilingual modeling requires challenging, trustworthy evaluations.
    We present TyDi QA{---}a question answering dataset covering 11 typologically diverse languages with 204K
    question-answer pairs. The languages of TyDi QA are diverse with regard to their typology{---}the set of
    linguistic features each language expresses{---}such that we expect models performing well on this set to
    generalize across a large number of the world{'}s languages. We present a quantitative analysis of the data
    quality and example-level qualitative linguistic analyses of observed language phenomena that would not be found
    in English-only corpora. To provide a realistic information-seeking task and avoid priming effects, questions are
    written by people who want to know the answer, but don{'}t know the answer yet, and the data is collected directly
    in each language without the use of translation.",
}

@inproceedings{cahyawijaya-etal-2021-indonlg,
    title = "{I}ndo{NLG}: Benchmark and Resources for Evaluating {I}ndonesian Natural Language Generation",
    author = "Cahyawijaya, Samuel  and
      Winata, Genta Indra  and
      Wilie, Bryan  and
      Vincentio, Karissa  and
      Li, Xiaohong  and
      Kuncoro, Adhiguna  and
      Ruder, Sebastian  and
      Lim, Zhi Yuan  and
      Bahar, Syafri  and
      Khodra, Masayu  and
      Purwarianti, Ayu  and
      Fung, Pascale",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.699",
    doi = "10.18653/v1/2021.emnlp-main.699",
    pages = "8875--8898"
}


@article{lovenia2024seacrowd,
    title={SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages}, 
    author={Holy Lovenia and Rahmad Mahendra and Salsabil Maulana Akbar and Lester James V. Miranda and Jennifer Santoso and Elyanah Aco and Akhdan Fadhilah and Jonibek Mansurov and Joseph Marvin Imperial and Onno P. Kampman and Joel Ruben Antony Moniz and Muhammad Ravi Shulthan Habibi and Frederikus Hudi and Railey Montalan and Ryan Ignatius and Joanito Agili Lopo and William Nixon and Börje F. Karlsson and James Jaya and Ryandito Diandaru and Yuze Gao and Patrick Amadeus and Bin Wang and Jan Christian Blaise Cruz and Chenxi Whitehouse and Ivan Halim Parmonangan and Maria Khelli and Wenyu Zhang and Lucky Susanto and Reynard Adha Ryanda and Sonny Lazuardi Hermawan and Dan John Velasco and Muhammad Dehan Al Kautsar and Willy Fitra Hendria and Yasmin Moslem and Noah Flynn and Muhammad Farid Adilazuarda and Haochen Li and Johanes Lee and R. Damanhuri and Shuo Sun and Muhammad Reza Qorib and Amirbek Djanibekov and Wei Qi Leong and Quyet V. Do and Niklas Muennighoff and Tanrada Pansuwan and Ilham Firdausi Putra and Yan Xu and Ngee Chia Tai and Ayu Purwarianti and Sebastian Ruder and William Tjhi and Peerat Limkonchotiwat and Alham Fikri Aji and Sedrick Keh and Genta Indra Winata and Ruochen Zhang and Fajri Koto and Zheng-Xin Yong and Samuel Cahyawijaya},
    year={2024},
    eprint={2406.10118},
    journal={arXiv preprint arXiv:2406.10118}
}