---
license: cc-by-nc-4.0
task_categories:
- translation
- text-retrieval
language:
- am
- ar
- ay
- bm
- bbj
- bn
- bg
- ca
- cs
- ku
- da
- de
- el
- en
- et
- ee
- fil
- fi
- fr
- fon
- gu
- ha
- he
- hi
- hu
- ig
- id
- it
- ja
- kk
- km
- ko
- lv
- lt
- lg
- luo
- mk
- mos
- my
- nl
- ne
- or
- pa
- pcm
- fa
- pl
- pt
- mg
- ro
- ru
- es
- sr
- sq
- sw
- sv
- tet
- tn
- tr
- tw
- ur
- wo
- yo
- zh
- zu
multilinguality:
- translation
- multilingual
pretty_name: PolyNewsParallel
size_categories:
- 1K
---

# PolyNewsParallel

*Figure: PolyNewsParallel, number of texts per language pair.*

## Dataset Structure

### Data Instances

```python
>>> from datasets import load_dataset
>>> data = load_dataset("aiana94/polynews-parallel", "eng_Latn-ron_Latn")

# Specify the language pair as the configuration name.
# A data point looks like this:
{
    "src": "They continue to support the view that this decision will have a lasting negative impact on the rule of law in the country. ",
    "tgt": "Ei continuă să creadă că această decizie va avea efecte negative pe termen lung asupra statului de drept în țară. ",
    "provenance": "globalvoices"
}
```

The full list of language-pair configurations can be enumerated programmatically (see the sketch near the end of this card).

### Data Fields

- src (string): source news text
- tgt (string): target news text
- provenance (string): source dataset for the news example

### Data Splits

For all language pairs, there is only the `train` split.

## Dataset Creation

### Curation Rationale

Multiple multilingual, human-translated datasets containing news texts have been released in recent years. However, these datasets are stored in different formats on various websites, and many contain numerous near duplicates. With PolyNewsParallel, we aim to provide an easily accessible, unified, and deduplicated parallel dataset that combines these disparate data sources. It can be used for machine translation or text retrieval in both high-resource and low-resource languages.

### Source Data

The source data consists of three multilingual news datasets:

- [GlobalVoices](https://opus.nlpl.eu/GlobalVoices/corpus/version/GlobalVoices) (v2018q4)
- [WMT-News](https://opus.nlpl.eu/WMT-News/corpus/version/WMT-News) (v2019)
- [MAFAND](https://huggingface.co/datasets/masakhane/mafand) (`train` split)

#### Data Collection and Processing

We processed the data using a single script that covers the entire processing pipeline. It can be found [here](https://github.com/andreeaiana/nase/blob/main/scripts/construct_polynews.sh).

The data processing pipeline consists of:

1. Downloading the WMT-News and GlobalVoices corpora from OPUS.
2. Loading the MAFAND datasets from the Hugging Face Hub (only the `train` splits).
3. Concatenating, per language, all news texts from the source datasets.
4. Data cleaning (e.g., removal of exact duplicates, short texts, and texts in other scripts).
5. [MinHash near-deduplication](https://github.com/bigcode-project/bigcode-dataset/blob/main/near_deduplication/minhash_deduplication.py) per language (an illustrative sketch appears near the end of this card).

### Annotations

We augment the original samples with the `provenance` annotation, which specifies the original data source from which a particular example stems.

### Personal and Sensitive Information

The data is sourced from newspaper sources and contains mentions of public figures and individuals.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Users should keep in mind that the dataset contains short news texts (e.g., mostly titles), which might limit the applicability of the developed systems to other domains.
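### Working with the Configurations

Each language pair is exposed as its own dataset configuration (e.g., `eng_Latn-ron_Latn` in the loading example above). The available pairs can be enumerated programmatically; below is a minimal sketch using the `datasets` utility `get_dataset_config_names` (it queries the Hub, so it requires network access):

```python
from datasets import get_dataset_config_names

# Each language pair is a separate configuration of the dataset.
pairs = get_dataset_config_names("aiana94/polynews-parallel")

print(len(pairs))                    # total number of language pairs
print("eng_Latn-ron_Latn" in pairs)  # the example pair used above
```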
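### Near-Deduplication Sketch

The [MinHash near-deduplication](https://github.com/bigcode-project/bigcode-dataset/blob/main/near_deduplication/minhash_deduplication.py) step of the processing pipeline can be illustrated in a few lines. The sketch below is a simplified stand-in for the linked script, written with the `datasketch` library; the word-level shingling, `num_perm=128`, and the `0.85` Jaccard threshold are illustrative assumptions, not necessarily the settings used to build this dataset:

```python
from datasketch import MinHash, MinHashLSH


def signature(text: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature from the set of lowercased words."""
    m = MinHash(num_perm=num_perm)
    for token in set(text.lower().split()):
        m.update(token.encode("utf-8"))
    return m


def near_deduplicate(texts, threshold: float = 0.85, num_perm: int = 128):
    """Keep the first occurrence of each cluster of near-duplicate texts."""
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    kept = []
    for idx, text in enumerate(texts):
        sig = signature(text, num_perm)
        if not lsh.query(sig):  # no previously kept text is a near-duplicate
            lsh.insert(str(idx), sig)
            kept.append(text)
    return kept


headlines = [
    "EU leaders meet to discuss the new climate deal.",
    "EU leaders meet to discuss new climate deal.",  # near-duplicate
    "Markets rally after the central bank's announcement.",
]
# Likely drops the second headline (LSH is probabilistic).
print(near_deduplicate(headlines))
```

In the actual pipeline, this filter is applied independently per language, after the exact-duplicate and short-text cleaning steps.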
## Additional Information

### Licensing Information

The dataset is released under the [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license](https://creativecommons.org/licenses/by-nc/4.0/).

### Citation Information

**BibTeX:**

```bibtex
@misc{iana2024news,
      title={News Without Borders: Domain Adaptation of Multilingual Sentence Embeddings for Cross-lingual News Recommendation},
      author={Andreea Iana and Fabian David Schmidt and Goran Glavaš and Heiko Paulheim},
      year={2024},
      eprint={2406.12634},
      archivePrefix={arXiv},
      url={https://arxiv.org/abs/2406.12634}
}
```