---
language:
- fr
- en
multilinguality:
- multilingual
configs:
- config_name: French
  data_files:
  - split: train
    path: fr/*
- config_name: French_simple
  data_files:
  - split: train
    path: frsimple/*
- config_name: English
  data_files:
  - split: train
    path: en/*
- config_name: English_simple
  data_files:
  - split: train
    path: ensimple/*
task_categories:
- translation
---

> [!NOTE]
> Dataset origin: https://zenodo.org/records/6327828

## Data creation

- All article pages of Vikidia-Fr (https://fr.vikidia.org/wiki/Vikidia:Accueil) were first filtered from the Vikidia-Fr crawl.
- Matching titles were then obtained from Vikidia-En and from the English and French Wikipedias by following the "Other Languages" links.
- Only titles that exist in all four versions were kept, which amounted to 6165 titles at the time of collection.
- The matching URLs were then downloaded and parsed using BeautifulSoup (an illustrative sketch of this step is given at the end of this card).

## License

Vikidia and Wikipedia are both available under CC BY-SA (https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License), and this dataset follows the same license, as per their guidelines.

## Citation

```
@inproceedings{lee-vajjala-2022-neural,
    title = "A Neural Pairwise Ranking Model for Readability Assessment",
    author = "Lee, Justin and Vajjala, Sowmya",
    editor = "Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.findings-acl.300",
    doi = "10.18653/v1/2022.findings-acl.300",
    pages = "3802--3813",
    abstract = "Automatic Readability Assessment (ARA), the task of assigning a reading level to a text, is traditionally treated as a classification problem in NLP research. In this paper, we propose the first neural, pairwise ranking approach to ARA and compare it with existing classification, regression, and (non-neural) ranking methods. We establish the performance of our approach by conducting experiments with three English, one French and one Spanish datasets. We demonstrate that our approach performs well in monolingual single/cross corpus testing scenarios and achieves a zero-shot cross-lingual ranking accuracy of over 80{\%} for both French and Spanish when trained on English data. Additionally, we also release a new parallel bilingual readability dataset, that could be useful for future research. To our knowledge, this paper proposes the first neural pairwise ranking model for ARA, and shows the first results of cross-lingual, zero-shot evaluation of ARA with neural models.",
}
```
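As a quick reference for the four configurations declared in the header above, the snippet below shows how each subset could be loaded with the `datasets` library. The repository ID used here is a placeholder, not the actual Hub path of this dataset.

```python
from datasets import load_dataset

# Placeholder repository ID; replace with the actual Hub path of this dataset.
REPO_ID = "username/vikidia-wikipedia-parallel"

# Each configuration declared in the YAML header exposes a single train split.
for config in ("French", "French_simple", "English", "English_simple"):
    ds = load_dataset(REPO_ID, config, split="train")
    print(config, ds.num_rows)
```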
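The original collection script is not distributed with this card; the sketch below only illustrates the final download-and-parse step mentioned under "Data creation", assuming `requests` and BeautifulSoup 4. The example URL and the paragraph-extraction logic are illustrative assumptions, not the authors' exact procedure.

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical example of one matched article on Vikidia-Fr.
url = "https://fr.vikidia.org/wiki/Chat"

response = requests.get(url, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# MediaWiki sites typically wrap the article body in a "mw-parser-output" div;
# joining its paragraphs yields the plain text of the article.
body = soup.find("div", class_="mw-parser-output")
paragraphs = [p.get_text(" ", strip=True) for p in body.find_all("p")]
text = "\n".join(p for p in paragraphs if p)

print(text[:500])
```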