---
task_categories:
  - question-answering
language:
  - en
  - fr
  - de
  - it
  - es
  - pt
pretty_name: mTRECQA
size_categories:
  - 100K<n<1M
configs:
  - config_name: default
    data_files:
      - split: train_en
        path: eng-train.jsonl
      - split: train_de
        path: deu-train.jsonl
      - split: train_fr
        path: fra-train.jsonl
      - split: train_it
        path: ita-train.jsonl
      - split: train_po
        path: por-train.jsonl
      - split: train_sp
        path: spa-train.jsonl
      - split: validation_en
        path: eng-dev.jsonl
      - split: validation_de
        path: deu-dev.jsonl
      - split: validation_fr
        path: fra-dev.jsonl
      - split: validation_it
        path: ita-dev.jsonl
      - split: validation_po
        path: por-dev.jsonl
      - split: validation_sp
        path: spa-dev.jsonl
      - split: test_en
        path: eng-test.jsonl
      - split: test_de
        path: deu-test.jsonl
      - split: test_fr
        path: fra-test.jsonl
      - split: test_it
        path: ita-test.jsonl
      - split: test_po
        path: por-test.jsonl
      - split: test_sp
        path: spa-test.jsonl
  - config_name: en
    data_files:
      - split: train
        path: eng-train.jsonl
      - split: validation
        path: eng-dev.jsonl
      - split: test
        path: eng-test.jsonl
  - config_name: de
    data_files:
      - split: train
        path: deu-train.jsonl
      - split: validation
        path: deu-dev.jsonl
      - split: test
        path: deu-test.jsonl
  - config_name: fr
    data_files:
      - split: train
        path: fra-train.jsonl
      - split: validation
        path: fra-dev.jsonl
      - split: test
        path: fra-test.jsonl
  - config_name: it
    data_files:
      - split: train
        path: ita-train.jsonl
      - split: validation
        path: ita-dev.jsonl
      - split: test
        path: ita-test.jsonl
  - config_name: po
    data_files:
      - split: train
        path: por-train.jsonl
      - split: validation
        path: por-dev.jsonl
      - split: test
        path: por-test.jsonl
  - config_name: sp
    data_files:
      - split: train
        path: spa-train.jsonl
      - split: validation
        path: spa-dev.jsonl
      - split: test
        path: spa-test.jsonl
---

## Dataset Description

mTRECQA originates from TREC-QA, which was created from the TREC 8 to TREC 13 QA tracks. Questions from TREC 8-12 constitute the training set, while TREC 13 questions are set aside for development and testing.

The dataset has been translated into five European languages: French, German, Italian, Portuguese, and Spanish, as described in the paper [Datasets for Multilingual Answer Sentence Selection](https://arxiv.org/abs/2406.10172).

### Splits

For each language (English, French, German, Italian, Portuguese, and Spanish), we provide:

- `train` split
- `validation` split
- `test` split

### How to load them

To use these splits, you can use the following snippet of code, replacing `[LANG]` with a language identifier (`en`, `fr`, `de`, `it`, `po`, `sp`):

```python
from datasets import load_dataset

# if you want the whole corpus (all languages, all splits)
corpora = load_dataset("matteogabburo/mTRECQA")

# if you want the default splits of a specific language, replace [LANG]
# with an identifier in: en, fr, de, it, po, sp
# dataset = load_dataset("matteogabburo/mTRECQA", "[LANG]")

# example:
italian_dataset = load_dataset("matteogabburo/mTRECQA", "it")
```
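As the configuration above shows, the two-letter config names map onto three-letter file prefixes (note the non-standard `po` and `sp` identifiers for Portuguese and Spanish). A minimal helper, illustrative only and not part of the `datasets` API, makes that mapping explicit:

```python
# Mapping from config identifiers (as passed to load_dataset) to the
# three-letter prefixes of the underlying .jsonl files.
LANG_PREFIX = {"en": "eng", "fr": "fra", "de": "deu",
               "it": "ita", "po": "por", "sp": "spa"}


def data_file(lang: str, split: str) -> str:
    """Return the data file name for a language and split.

    `split` is one of "train", "dev", "test" (the file-name convention;
    "dev" corresponds to the `validation` split).
    """
    return f"{LANG_PREFIX[lang]}-{split}.jsonl"
```

For example, `data_file("it", "train")` gives `"ita-train.jsonl"`, matching the paths listed in the configuration.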

### Format

Each example has the following format:

```python
{
  'eid': 42588,
  'qid': 1003,
  'cid': 4,
  'label': 1,
  'question': 'In welchem Land liegt die heilige Stadt Mekka?',
  'candidate': 'Der französische Präsident Jacques Chirac hat heute sein Beileid ausgedrückt, wegen des Todes von 250 Pilgern bei einem Brand, der am Dienstag in einem Lager in der Nähe der heiligen Stadt Mekka in Saudi-Arabien ausbrach.'
}
```

Where:

- `eid`: the unique id of the example (a question-candidate pair)
- `qid`: the unique id of the question
- `cid`: the unique id of the answer candidate
- `label`: identifies whether the answer candidate is correct for the question (1 if correct, 0 otherwise)
- `question`: the question
- `candidate`: the answer candidate
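Since every question (`qid`) is paired with several candidates, the usual way to use these fields is to group candidates by `qid`, rank them with a model score, and check where the first `label == 1` candidate lands. The sketch below does this on toy records mirroring the format above; the scoring function is a hypothetical stand-in for a real answer-sentence-selection model:

```python
from collections import defaultdict


def mrr(examples, score):
    """Mean reciprocal rank of the first correct candidate per question."""
    by_qid = defaultdict(list)
    for ex in examples:
        by_qid[ex["qid"]].append(ex)
    total = 0.0
    for candidates in by_qid.values():
        # rank candidates by descending model score
        ranked = sorted(candidates, key=score, reverse=True)
        for rank, ex in enumerate(ranked, start=1):
            if ex["label"] == 1:
                total += 1.0 / rank
                break
    return total / len(by_qid)


# Toy examples mirroring the mTRECQA record format (values are made up).
examples = [
    {"eid": 1, "qid": 1003, "cid": 0, "label": 0, "question": "q1", "candidate": "a"},
    {"eid": 2, "qid": 1003, "cid": 1, "label": 1, "question": "q1", "candidate": "b"},
    {"eid": 3, "qid": 2001, "cid": 0, "label": 1, "question": "q2", "candidate": "c"},
]

print(mrr(examples, score=lambda ex: ex["cid"]))  # toy score: rank by cid
```

The same grouping works directly on a loaded split, since each row carries the fields above.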

## Citation

If you find this dataset useful, please cite the following paper:

BibTeX:

```bibtex
@misc{gabburo2024datasetsmultilingualanswersentence,
      title={Datasets for Multilingual Answer Sentence Selection},
      author={Matteo Gabburo and Stefano Campese and Federico Agostini and Alessandro Moschitti},
      year={2024},
      eprint={2406.10172},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.10172},
}
```