---
dataset_info:
  features:
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: answers
    struct:
    - name: answer_start
      sequence: int64
    - name: text
      sequence: string
  splits:
  - name: train
    num_bytes: 86378152
    num_examples: 87599
  - name: dev
    num_bytes: 9727046
    num_examples: 9380
  - name: test
    num_bytes: 1263841
    num_examples: 1183
  download_size: 17198602
  dataset_size: 97369039
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- nl
pretty_name: SQuAD-NL v1.1
---

# SQuAD-NL v1.1 [translated SQuAD / XQuAD]

SQuAD-NL v1.1 is a translation of [The Stanford Question Answering Dataset](https://rajpurkar.github.io/SQuAD-explorer/) (SQuAD) v1.1. Since the original English SQuAD test data is not public, we reserve the documents that were used for [XQuAD](https://github.com/google-deepmind/xquad) as our test set; these documents are sampled from the original dev split. The English data was automatically translated with Google Translate (February 2023), and the test data was manually post-edited.

This version of SQuAD-NL contains only answerable questions. If you want to include unanswerable questions, use [SQuAD-NL v2.0](https://huggingface.co/datasets/GroNLP/squad-nl-v2.0/).

| Split | Source                 | Procedure                | English |  Dutch |
| ----- | ---------------------- | ------------------------ | ------: | -----: |
| train | SQuAD-train-v1.1       | Google Translate         |  87,599 | 87,599 |
| dev   | SQuAD-dev-v1.1 \ XQuAD | Google Translate         |   9,380 |  9,380 |
| test  | SQuAD-dev-v1.1 & XQuAD | Google Translate + Human |   1,190 |  1,183 |
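The splits can be loaded with the Hugging Face `datasets` library. Below is a minimal sketch; the repository ID `GroNLP/squad-nl-v1.1` is an assumption, inferred from the SQuAD-NL v2.0 link above.

```python
from datasets import load_dataset

# Repository ID is an assumption, mirroring the SQuAD-NL v2.0 link above.
squad_nl = load_dataset("GroNLP/squad-nl-v1.1")
print(squad_nl)  # DatasetDict with 'train', 'dev' and 'test' splits

example = squad_nl["train"][0]
print(example["question"])

# Answers follow the SQuAD convention: parallel lists of answer texts
# and the character offsets of those texts within `context`.
start = example["answers"]["answer_start"][0]
text = example["answers"]["text"][0]
print(text == example["context"][start:start + len(text)])
```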
## Source

SQuAD-NL was first used in the [Dutch Model Benchmark](https://dumbench.nl) (DUMB). The accompanying paper can be found [here](https://aclanthology.org/2023.emnlp-main.447/).

## Citation

If you use SQuAD-NL, please cite the DUMB, SQuAD and XQuAD papers:

```bibtex
@inproceedings{de-vries-etal-2023-dumb,
    title = "{DUMB}: A Benchmark for Smart Evaluation of {D}utch Models",
    author = "de Vries, Wietse  and
      Wieling, Martijn  and
      Nissim, Malvina",
    editor = "Bouamor, Houda  and
      Pino, Juan  and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.447",
    doi = "10.18653/v1/2023.emnlp-main.447",
    pages = "7221--7241",
    abstract = "We introduce the Dutch Model Benchmark: DUMB. The benchmark includes a diverse set of datasets for low-, medium- and high-resource tasks. The total set of nine tasks includes four tasks that were previously not available in Dutch. Instead of relying on a mean score across tasks, we propose Relative Error Reduction (RER), which compares the DUMB performance of language models to a strong baseline which can be referred to in the future even when assessing different sets of language models. Through a comparison of 14 pre-trained language models (mono- and multi-lingual, of varying sizes), we assess the internal consistency of the benchmark tasks, as well as the factors that likely enable high performance. Our results indicate that current Dutch monolingual models under-perform and suggest training larger Dutch models with other architectures and pre-training objectives. At present, the highest performance is achieved by DeBERTaV3 (large), XLM-R (large) and mDeBERTaV3 (base). In addition to highlighting best strategies for training larger Dutch models, DUMB will foster further research on Dutch. A public leaderboard is available at https://dumbench.nl.",
}

@inproceedings{rajpurkar-etal-2016-squad,
    title = "{SQ}u{AD}: 100,000+ Questions for Machine Comprehension of Text",
    author = "Rajpurkar, Pranav  and
      Zhang, Jian  and
      Lopyrev, Konstantin  and
      Liang, Percy",
    editor = "Su, Jian  and
      Duh, Kevin  and
      Carreras, Xavier",
    booktitle = "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2016",
    address = "Austin, Texas",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D16-1264",
    doi = "10.18653/v1/D16-1264",
    pages = "2383--2392",
}

@inproceedings{artetxe-etal-2020-cross,
    title = "On the Cross-lingual Transferability of Monolingual Representations",
    author = "Artetxe, Mikel  and
      Ruder, Sebastian  and
      Yogatama, Dani",
    editor = "Jurafsky, Dan  and
      Chai, Joyce  and
      Schluter, Natalie  and
      Tetreault, Joel",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.acl-main.421",
    doi = "10.18653/v1/2020.acl-main.421",
    pages = "4623--4637",
    abstract = "State-of-the-art unsupervised multilingual models (e.g., multilingual BERT) have been shown to generalize in a zero-shot cross-lingual setting. This generalization ability has been attributed to the use of a shared subword vocabulary and joint training across multiple languages giving rise to deep multilingual abstractions. We evaluate this hypothesis by designing an alternative approach that transfers a monolingual model to new languages at the lexical level. More concretely, we first train a transformer-based masked language model on one language, and transfer it to a new language by learning a new embedding matrix with the same masked language modeling objective, freezing parameters of all other layers. This approach does not rely on a shared vocabulary or joint training. However, we show that it is competitive with multilingual BERT on standard cross-lingual classification benchmarks and on a new Cross-lingual Question Answering Dataset (XQuAD). Our results contradict common beliefs of the basis of the generalization ability of multilingual models and suggest that deep monolingual models learn some abstractions that generalize across languages. We also release XQuAD as a more comprehensive cross-lingual benchmark, which comprises 240 paragraphs and 1190 question-answer pairs from SQuAD v1.1 translated into ten languages by professional translators.",
}
```