---
license: cc-by-nc-4.0
task_categories:
  - fill-mask
  - text-generation
language:
  - am
  - ar
  - ay
  - bm
  - bbj
  - bn
  - bs
  - bg
  - ca
  - cs
  - ku
  - da
  - el
  - en
  - et
  - ee
  - fil
  - fi
  - fr
  - fon
  - gu
  - guw
  - ha
  - he
  - hi
  - hu
  - ig
  - id
  - it
  - ja
  - kk
  - km
  - ko
  - lv
  - ln
  - lt
  - lg
  - luo
  - mk
  - mos
  - my
  - nl
  - 'no'
  - ne
  - om
  - or
  - pa
  - pcm
  - fa
  - pl
  - pt
  - mg
  - ro
  - rn
  - ru
  - sn
  - so
  - es
  - sr
  - sq
  - sw
  - sv
  - ta
  - tet
  - ti
  - th
  - tn
  - tr
  - tw
  - uk
  - ur
  - wo
  - xh
  - yo
  - zh
  - zu
  - de
multilinguality:
  - multilingual
pretty_name: PolyNews
size_categories:
  - 1K<n<10K
source_datasets:
  - masakhanews
  - mafand
  - wikinews
  - wmt-news
  - globalvoices
tags:
  - news
  - polynews
  - mafand
  - masakhanews
  - wikinews
  - globalvoices
  - wmtnews
configs:
  - config_name: amh_Ethi
    data_files:
      - split: train
        path: data/amh_Ethi/train.parquet.gzip
  - config_name: arb_Arab
    data_files:
      - split: train
        path: data/arb_Arab/train.parquet.gzip
  - config_name: ayr_Latn
    data_files:
      - split: train
        path: data/ayr_Latn/train.parquet.gzip
  - config_name: bam_Latn
    data_files:
      - split: train
        path: data/bam_Latn/train.parquet.gzip
  - config_name: bbj_Latn
    data_files:
      - split: train
        path: data/bbj_Latn/train.parquet.gzip
  - config_name: ben_Beng
    data_files:
      - split: train
        path: data/ben_Beng/train.parquet.gzip
  - config_name: bos_Latn
    data_files:
      - split: train
        path: data/bos_Latn/train.parquet.gzip
  - config_name: bul_Cyrl
    data_files:
      - split: train
        path: data/bul_Cyrl/train.parquet.gzip
  - config_name: cat_Latn
    data_files:
      - split: train
        path: data/cat_Latn/train.parquet.gzip
  - config_name: ces_Latn
    data_files:
      - split: train
        path: data/ces_Latn/train.parquet.gzip
  - config_name: ckb_Arab
    data_files:
      - split: train
        path: data/ckb_Arab/train.parquet.gzip
  - config_name: dan_Latn
    data_files:
      - split: train
        path: data/dan_Latn/train.parquet.gzip
  - config_name: deu_Latn
    data_files:
      - split: train
        path: data/deu_Latn/train.parquet.gzip
  - config_name: ell_Grek
    data_files:
      - split: train
        path: data/ell_Grek/train.parquet.gzip
  - config_name: eng_Latn
    data_files:
      - split: train
        path: data/eng_Latn/train.parquet.gzip
  - config_name: est_Latn
    data_files:
      - split: train
        path: data/est_Latn/train.parquet.gzip
  - config_name: ewe_Latn
    data_files:
      - split: train
        path: data/ewe_Latn/train.parquet.gzip
  - config_name: fil_Latn
    data_files:
      - split: train
        path: data/fil_Latn/train.parquet.gzip
  - config_name: fin_Latn
    data_files:
      - split: train
        path: data/fin_Latn/train.parquet.gzip
  - config_name: fon_Latn
    data_files:
      - split: train
        path: data/fon_Latn/train.parquet.gzip
  - config_name: fra_Latn
    data_files:
      - split: train
        path: data/fra_Latn/train.parquet.gzip
  - config_name: guj_Gujr
    data_files:
      - split: train
        path: data/guj_Gujr/train.parquet.gzip
  - config_name: guw_Latn
    data_files:
      - split: train
        path: data/guw_Latn/train.parquet.gzip
  - config_name: hau_Latn
    data_files:
      - split: train
        path: data/hau_Latn/train.parquet.gzip
  - config_name: heb_Hebr
    data_files:
      - split: train
        path: data/heb_Hebr/train.parquet.gzip
  - config_name: hin_Deva
    data_files:
      - split: train
        path: data/hin_Deva/train.parquet.gzip
  - config_name: hun_Latn
    data_files:
      - split: train
        path: data/hun_Latn/train.parquet.gzip
  - config_name: ibo_Latn
    data_files:
      - split: train
        path: data/ibo_Latn/train.parquet.gzip
  - config_name: ind_Latn
    data_files:
      - split: train
        path: data/ind_Latn/train.parquet.gzip
  - config_name: ita_Latn
    data_files:
      - split: train
        path: data/ita_Latn/train.parquet.gzip
  - config_name: jpn_Jpan
    data_files:
      - split: train
        path: data/jpn_Jpan/train.parquet.gzip
  - config_name: kaz_Cyrl
    data_files:
      - split: train
        path: data/kaz_Cyrl/train.parquet.gzip
  - config_name: khm_Khmr
    data_files:
      - split: train
        path: data/khm_Khmr/train.parquet.gzip
  - config_name: kor_Hang
    data_files:
      - split: train
        path: data/kor_Hang/train.parquet.gzip
  - config_name: lav_Latn
    data_files:
      - split: train
        path: data/lav_Latn/train.parquet.gzip
  - config_name: lin_Latn
    data_files:
      - split: train
        path: data/lin_Latn/train.parquet.gzip
  - config_name: lit_Latn
    data_files:
      - split: train
        path: data/lit_Latn/train.parquet.gzip
  - config_name: lug_Latn
    data_files:
      - split: train
        path: data/lug_Latn/train.parquet.gzip
  - config_name: luo_Latn
    data_files:
      - split: train
        path: data/luo_Latn/train.parquet.gzip
  - config_name: mkd_Cyrl
    data_files:
      - split: train
        path: data/mkd_Cyrl/train.parquet.gzip
  - config_name: mos_Latn
    data_files:
      - split: train
        path: data/mos_Latn/train.parquet.gzip
  - config_name: mya_Mymr
    data_files:
      - split: train
        path: data/mya_Mymr/train.parquet.gzip
  - config_name: nld_Latn
    data_files:
      - split: train
        path: data/nld_Latn/train.parquet.gzip
  - config_name: nor_Latn
    data_files:
      - split: train
        path: data/nor_Latn/train.parquet.gzip
  - config_name: npi_Deva
    data_files:
      - split: train
        path: data/npi_Deva/train.parquet.gzip
  - config_name: orm_Latn
    data_files:
      - split: train
        path: data/orm_Latn/train.parquet.gzip
  - config_name: ory_Orya
    data_files:
      - split: train
        path: data/ory_Orya/train.parquet.gzip
  - config_name: pan_Guru
    data_files:
      - split: train
        path: data/pan_Guru/train.parquet.gzip
  - config_name: pcm_Latn
    data_files:
      - split: train
        path: data/pcm_Latn/train.parquet.gzip
  - config_name: pes_Arab
    data_files:
      - split: train
        path: data/pes_Arab/train.parquet.gzip
  - config_name: plt_Latn
    data_files:
      - split: train
        path: data/plt_Latn/train.parquet.gzip
  - config_name: pol_Latn
    data_files:
      - split: train
        path: data/pol_Latn/train.parquet.gzip
  - config_name: por_Latn
    data_files:
      - split: train
        path: data/por_Latn/train.parquet.gzip
  - config_name: ron_Latn
    data_files:
      - split: train
        path: data/ron_Latn/train.parquet.gzip
  - config_name: run_Latn
    data_files:
      - split: train
        path: data/run_Latn/train.parquet.gzip
  - config_name: rus_Cyrl
    data_files:
      - split: train
        path: data/rus_Cyrl/train.parquet.gzip
  - config_name: sna_Latn
    data_files:
      - split: train
        path: data/sna_Latn/train.parquet.gzip
  - config_name: som_Latn
    data_files:
      - split: train
        path: data/som_Latn/train.parquet.gzip
  - config_name: spa_Latn
    data_files:
      - split: train
        path: data/spa_Latn/train.parquet.gzip
  - config_name: sqi_Latn
    data_files:
      - split: train
        path: data/sqi_Latn/train.parquet.gzip
  - config_name: srp_Cyrl
    data_files:
      - split: train
        path: data/srp_Cyrl/train.parquet.gzip
  - config_name: srp_Latn
    data_files:
      - split: train
        path: data/srp_Latn/train.parquet.gzip
  - config_name: swe_Latn
    data_files:
      - split: train
        path: data/swe_Latn/train.parquet.gzip
  - config_name: swh_Latn
    data_files:
      - split: train
        path: data/swh_Latn/train.parquet.gzip
  - config_name: tam_Taml
    data_files:
      - split: train
        path: data/tam_Taml/train.parquet.gzip
  - config_name: tet_Latn
    data_files:
      - split: train
        path: data/tet_Latn/train.parquet.gzip
  - config_name: tha_Thai
    data_files:
      - split: train
        path: data/tha_Thai/train.parquet.gzip
  - config_name: tir_Ethi
    data_files:
      - split: train
        path: data/tir_Ethi/train.parquet.gzip
  - config_name: tsn_Latn
    data_files:
      - split: train
        path: data/tsn_Latn/train.parquet.gzip
  - config_name: tur_Latn
    data_files:
      - split: train
        path: data/tur_Latn/train.parquet.gzip
  - config_name: twi_Latn
    data_files:
      - split: train
        path: data/twi_Latn/train.parquet.gzip
  - config_name: ukr_Cyrl
    data_files:
      - split: train
        path: data/ukr_Cyrl/train.parquet.gzip
  - config_name: urd_Arab
    data_files:
      - split: train
        path: data/urd_Arab/train.parquet.gzip
  - config_name: wol_Latn
    data_files:
      - split: train
        path: data/wol_Latn/train.parquet.gzip
  - config_name: xho_Latn
    data_files:
      - split: train
        path: data/xho_Latn/train.parquet.gzip
  - config_name: yor_Latn
    data_files:
      - split: train
        path: data/yor_Latn/train.parquet.gzip
  - config_name: zho_Hans
    data_files:
      - split: train
        path: data/zho_Hans/train.parquet.gzip
  - config_name: zho_Hant
    data_files:
      - split: train
        path: data/zho_Hant/train.parquet.gzip
  - config_name: zul_Latn
    data_files:
      - split: train
        path: data/zul_Latn/train.parquet.gzip
---

# Dataset Card for PolyNews

## Dataset Description

### Dataset Summary

PolyNews is a multilingual dataset containing news titles in 77 languages and 19 scripts.

### Uses

This dataset can be used for domain adaptation of language models, language modeling, or text generation.
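For example, here is a minimal sketch of preparing PolyNews texts for masked language modeling (fill-mask); the model checkpoint (`xlm-roberta-base`), the config choice, and the 15% masking probability are illustrative assumptions, not recommendations from this card:

```python
# Minimal MLM (fill-mask) data-preparation sketch.
# NOTE: model name, config, and masking probability are illustrative assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

data = load_dataset("aiana94/polynews", "swh_Latn", split="train")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

# Tokenize the news texts and drop the raw columns.
tokenized = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True,
    remove_columns=data.column_names,
)

# The collator masks random tokens at batch time for the fill-mask objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```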

### Languages

There are 77 languages available:

| Code | Language | Script | #Articles (K) |
|------|----------|--------|---------------|
| amh_Ethi | Amharic | Ethiopic | 0.551 |
| arb_Arab | Modern Standard Arabic | Arabic | 10.882 |
| ayr_Latn | Central Aymara | Latin | 12.878 |
| bam_Latn | Bambara | Latin | 2.916 |
| bbj_Latn | Ghomálá’ | Latin | 1.737 |
| ben_Beng | Bengali | Bengali | 2.268 |
| bos_Latn | Bosnian | Latin | 0.298 |
| bul_Cyrl | Bulgarian | Cyrillic | 1.791 |
| cat_Latn | Catalan | Latin | 30.410 |
| ces_Latn | Czech | Latin | 58.382 |
| ckb_Arab | Central Kurdish | Arabic | 0.014 |
| dan_Latn | Danish | Latin | 9.456 |
| deu_Latn | German | Latin | 145.484 |
| ell_Grek | Greek | Greek | 50.176 |
| eng_Latn | English | Latin | 981.430 |
| est_Latn | Estonian | Latin | 3.942 |
| ewe_Latn | Éwé | Latin | 2.003 |
| fil_Latn | Filipino | Latin | 3.3132 |
| fin_Latn | Finnish | Latin | 19.602 |
| fon_Latn | Fon | Latin | 2.610 |
| fra_Latn | French | Latin | 481.117 |
| guj_Gujr | Gujarati | Gujarati | 0.690 |
| guw_Latn | Gun | Latin | 1.068 |
| hau_Latn | Hausa | Latin | 7.898 |
| heb_Hebr | Hebrew | Hebrew | 0.355 |
| hin_Deva | Hindi | Devanagari | 0.707 |
| hun_Latn | Hungarian | Latin | 22.219 |
| ibo_Latn | Igbo | Latin | 7.709 |
| ind_Latn | Indonesian | Latin | 17.749 |
| ita_Latn | Italian | Latin | 163.396 |
| jpn_Jpan | Japanese | Japanese | 20.778 |
| kaz_Cyrl | Kazakh | Cyrillic | 0.763 |
| khm_Khmr | Khmer | Khmer | 0.227 |
| kor_Hang | Korean | Hangul | 3.527 |
| lav_Latn | Latvian | Latin | 3.971 |
| lin_Latn | Lingala | Latin | 0.602 |
| lit_Latn | Lithuanian | Latin | 3.948 |
| lug_Latn | Ganda | Latin | 4.769 |
| luo_Latn | Luo | Latin | 4.250 |
| mkd_Cyrl | Macedonian | Cyrillic | 10.537 |
| mos_Latn | Mossi | Latin | 2.458 |
| mya_Mymr | Burmese | Myanmar | 0.583 |
| nld_Latn | Dutch | Latin | 53.184 |
| nor_Latn | Norwegian | Latin | 0.529 |
| npi_Deva | Nepali | Devanagari | 0.220 |
| orm_Latn | Oromo | Latin | 1.124 |
| ory_Orya | Odia | Oriya | 0.038 |
| pan_Guru | Eastern Panjabi | Gurmukhi | 0.336 |
| pcm_Latn | Nigerian Pidgin | Latin | 5.742 |
| pes_Arab | Western Persian | Arabic | 1.431 |
| plt_Latn | Malagasy | Latin | 393.767 |
| pol_Latn | Polish | Latin | 80.960 |
| por_Latn | Portuguese | Latin | 156.039 |
| ron_Latn | Romanian | Latin | 10.472 |
| run_Latn | Rundi | Latin | 1.113 |
| rus_Cyrl | Russian | Cyrillic | 143.283 |
| sna_Latn | Shona | Latin | 1.128 |
| som_Latn | Somali | Latin | 1.019 |
| spa_Latn | Spanish | Latin | 681.121 |
| sqi_Latn | Albanian | Latin | 7.274 |
| srp_Cyrl | Serbian | Cyrillic | 1.056 |
| srp_Latn | Serbian | Latin | 58.012 |
| swe_Latn | Swedish | Latin | 12.323 |
| swh_Latn | Swahili | Latin | 47.337 |
| tam_Taml | Tamil | Tamil | 0.358 |
| tet_Latn | Tetun | Latin | 0.626 |
| tha_Thai | Thai | Thai | 0.091 |
| tir_Ethi | Tigrinya | Ethiopic | 0.079 |
| tsn_Latn | Tswana | Latin | 2.075 |
| tur_Latn | Turkish | Latin | 19.793 |
| twi_Latn | Twi | Latin | 3.012 |
| ukr_Cyrl | Ukrainian | Cyrillic | 0.292 |
| urd_Arab | Urdu | Arabic | 0.804 |
| wol_Latn | Wolof | Latin | 3.344 |
| xho_Latn | Xhosa | Latin | 0.709 |
| yor_Latn | Yorùbá | Latin | 8.011 |
| zho_Hans | Chinese | Han (Simplified) | 59.771 |
| zho_Hant | Chinese | Han (Traditional) | 54.561 |
| zul_Latn | Zulu | Latin | 3.376 |

## Dataset Structure

### Data Instances

```python
>>> from datasets import load_dataset
>>> data = load_dataset("aiana94/polynews", "ron_Latn")
```

Specify the language code (here, `ron_Latn`) as the configuration name.

An example data point:

```json
{
  "text": "Un public numeros. Este uimitor succesul după doar trei ediții . ",
  "provenance": "globalvoices"
}
```

### Data Fields

- `text` (string): the news text
- `provenance` (string): the source dataset for the news example
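As a quick usage sketch of these fields (the config name is only an example, and which provenance values appear depends on the language):

```python
# Inspect the distribution of source datasets and filter by provenance.
from collections import Counter
from datasets import load_dataset

data = load_dataset("aiana94/polynews", "ron_Latn", split="train")
print(Counter(data["provenance"]))

# Keep only the examples that stem from GlobalVoices.
globalvoices_only = data.filter(lambda ex: ex["provenance"] == "globalvoices")
```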

### Data Splits

For all languages, there is only the train split.

## Dataset Creation

### Curation Rationale

Multiple multilingual, human-translated datasets containing news texts have been released in recent years. However, they are stored in different formats on various websites, and many contain numerous near duplicates. With PolyNews, we aim to provide an easily accessible, unified, and deduplicated dataset that combines these disparate sources. It can be used for domain adaptation of language models, language modeling, or text generation in both high-resource and low-resource languages.

### Source Data

The source data consists of five multilingual news datasets: WMT-News, GlobalVoices, WikiNews, MasakhaNews, and MAFAND.

#### Data Collection and Processing

We processed the data using a script that covers the entire processing pipeline; it can be found here.

The data processing pipeline consists of:

  1. Downloading the WMT-News and GlobalVoices News from OPUS.
  2. Downloading the latest dump from WikiNews.
  3. Loading the MasakhaNews and MAFAND datasets from Hugging Face Hub (only the train splits).
  4. Concatenating, per language, all news texts from the source datasets.
  5. Data cleaning (e.g., removal of exact duplicates, short texts, and texts in other scripts).
  6. MinHash near-deduplication per language (a sketch follows below).
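A simplified sketch of step 6, assuming the `datasketch` library; the whitespace shingling, number of permutations, and Jaccard threshold below are assumptions, not necessarily the parameters of the original script:

```python
# Per-language near-deduplication with MinHash + LSH (illustrative parameters).
from datasketch import MinHash, MinHashLSH

def text_minhash(text: str, num_perm: int = 128) -> MinHash:
    """Hash whitespace tokens into a MinHash signature."""
    m = MinHash(num_perm=num_perm)
    for token in text.split():
        m.update(token.encode("utf-8"))
    return m

def near_deduplicate(texts, threshold=0.8, num_perm=128):
    """Keep the first occurrence of each cluster of near-duplicate texts."""
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    kept = []
    for i, text in enumerate(texts):
        m = text_minhash(text)
        if not lsh.query(m):  # no indexed text exceeds the similarity threshold
            lsh.insert(str(i), m)
            kept.append(text)
    return kept
```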

### Annotations

We augment the original samples with a provenance annotation that specifies the source dataset from which a particular example stems.

### Personal and Sensitive Information

The data is sourced from news outlets and contains mentions of public figures and other individuals.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Users should keep in mind that the dataset contains short news texts (mostly titles), which might limit the applicability of the developed systems to other domains.

## Additional Information

### Licensing Information

The dataset is released under the CC BY-NC 4.0 (Attribution-NonCommercial 4.0 International) license.

### Citation Information

BibTeX:

```bibtex
@misc{iana2024news,
      title={News Without Borders: Domain Adaptation of Multilingual Sentence Embeddings for Cross-lingual News Recommendation},
      author={Andreea Iana and Fabian David Schmidt and Goran Glavaš and Heiko Paulheim},
      year={2024},
      eprint={2406.12634},
      archivePrefix={arXiv},
      url={https://arxiv.org/abs/2406.12634}
}
```