---
license: mit
language:
  - ar
  - he
  - vi
  - id
  - jv
  - ms
  - tl
  - lv
  - lt
  - eu
  - ml
  - ta
  - te
  - hy
  - bn
  - mr
  - hi
  - ur
  - af
  - da
  - en
  - de
  - sv
  - fr
  - it
  - pt
  - ro
  - es
  - el
  - os
  - tg
  - fa
  - ja
  - ka
  - ko
  - th
  - bxr
  - xal
  - mn
  - sw
  - yo
  - be
  - bg
  - ru
  - uk
  - pl
  - my
  - uz
  - ba
  - kk
  - ky
  - tt
  - az
  - cv
  - tr
  - tk
  - tyv
  - sah
  - et
  - fi
  - hu
tags:
  - multilingual
  - PyTorch
  - Transformers
  - gpt3
  - gpt2
  - transformers
---

🌻 mGPT 13B

A multilingual language model trained on 61 languages from 25 language families (see the list below).

Paper

mGPT: Few-Shot Learners Go Multilingual

Published in TACL 2024 (MIT Press). Presented at EMNLP 2023.


@article{shliazhko-etal-2024-mgpt,
   title = "m{GPT}: Few-Shot Learners Go Multilingual",
   author = "Shliazhko, Oleh  and
     Fenogenova, Alena  and
     Tikhonova, Maria  and
     Kozlova, Anastasia  and
     Mikhailov, Vladislav  and
     Shavrina, Tatiana",
   journal = "Transactions of the Association for Computational Linguistics",
   volume = "12",
   year = "2024",
   address = "Cambridge, MA",
   publisher = "MIT Press",
   url = "https://aclanthology.org/2024.tacl-1.4",
   doi = "10.1162/tacl_a_00633",
   pages = "58--79",
   abstract = "This paper introduces mGPT, a multilingual variant of GPT-3, pretrained on 61 languages from 25 linguistically diverse language families using Wikipedia and the C4 Corpus. We detail the design and pretraining procedure. The models undergo an intrinsic and extrinsic evaluation: language modeling in all languages, downstream evaluation on cross-lingual NLU datasets and benchmarks in 33 languages, and world knowledge probing in 23 languages. The in-context learning abilities are on par with the contemporaneous language models while covering a larger number of languages, including underrepresented and low-resource languages of the Commonwealth of Independent States and the indigenous peoples in Russia. The source code and the language models are publicly available under the MIT license.",
}

Dataset

The model was pretrained on 600 GB of texts, mostly from mC4 and Wikipedia. The training data was deduplicated: each text in the corpus was hashed with a 64-bit hash, and only texts with a unique hash were kept. We also filtered the documents by their text compression rate using zlib; the most strongly and most weakly compressing deduplicated texts were discarded.
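As a rough illustration of this pipeline, the sketch below applies a 64-bit hash for exact deduplication and a zlib compression-rate filter; the function name and the ratio thresholds are hypothetical placeholders, not the values used for pretraining.

```python
import hashlib
import zlib

def dedup_and_filter(texts, min_ratio=0.3, max_ratio=0.9):
    """Keep texts with a unique 64-bit hash and a moderate zlib compression rate.

    The ratio thresholds are illustrative placeholders, not the training values.
    """
    seen_hashes = set()
    kept = []
    for text in texts:
        raw = text.encode("utf-8")
        # 64-bit hash of the text; only the first text with a given hash is kept
        digest = hashlib.blake2b(raw, digest_size=8).digest()
        if digest in seen_hashes:
            continue
        seen_hashes.add(digest)
        # compression rate: compressed size relative to raw size
        ratio = len(zlib.compress(raw)) / max(len(raw), 1)
        # drop texts that compress too strongly (repetitive) or too weakly (noisy)
        if min_ratio <= ratio <= max_ratio:
            kept.append(text)
    return kept
```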

Figure: number of tokens per language in the pretraining corpus (logarithmic scale).

Languages

Afrikaans (af), Arabic (ar), Armenian (hy), Azerbaijani (az), Basque (eu), Bashkir (ba), Belarusian (be), Bengali (bn), Bulgarian (bg), Burmese (my), Buryat (bxr), Chuvash (cv), Danish (da), English (en), Estonian (et), Finnish (fi), French (fr), Georgian (ka), German (de), Greek (el), Hebrew (he), Hindi (hi), Hungarian (hu), Indonesian (id), Italian (it), Japanese (ja), Javanese (jv), Kalmyk (xal), Kazakh (kk), Korean (ko), Kyrgyz (ky), Latvian (lv), Lithuanian (lt), Malay (ms), Malayalam (ml), Marathi (mr), Mongolian (mn), Ossetian (os), Persian (fa), Polish (pl), Portuguese (pt), Romanian (ro), Russian (ru), Spanish (es), Swedish (sv), Swahili (sw), Tagalog (tl), Tajik (tg), Tamil (ta), Tatar (tt), Telugu (te), Thai (th), Turkish (tr), Turkmen (tk), Tuvan (tyv), Ukrainian (uk), Urdu (ur), Uzbek (uz), Vietnamese (vi), Yakut (sah), Yoruba (yo)

By language family

| Language Family | Languages |
|---|---|
| Afro-Asiatic | Arabic (ar), Hebrew (he) |
| Austro-Asiatic | Vietnamese (vi) |
| Austronesian | Indonesian (id), Javanese (jv), Malay (ms), Tagalog (tl) |
| Baltic | Latvian (lv), Lithuanian (lt) |
| Basque | Basque (eu) |
| Dravidian | Malayalam (ml), Tamil (ta), Telugu (te) |
| Indo-European (Armenian) | Armenian (hy) |
| Indo-European (Indo-Aryan) | Bengali (bn), Marathi (mr), Hindi (hi), Urdu (ur) |
| Indo-European (Germanic) | Afrikaans (af), Danish (da), English (en), German (de), Swedish (sv) |
| Indo-European (Romance) | French (fr), Italian (it), Portuguese (pt), Romanian (ro), Spanish (es) |
| Indo-European (Greek) | Greek (el) |
| Indo-European (Iranian) | Ossetian (os), Tajik (tg), Persian (fa) |
| Japonic | Japanese (ja) |
| Kartvelian | Georgian (ka) |
| Koreanic | Korean (ko) |
| Kra-Dai | Thai (th) |
| Mongolic | Buryat (bxr), Kalmyk (xal), Mongolian (mn) |
| Niger-Congo | Swahili (sw), Yoruba (yo) |
| Slavic | Belarusian (be), Bulgarian (bg), Russian (ru), Ukrainian (uk), Polish (pl) |
| Sino-Tibetan | Burmese (my) |
| Turkic (Karluk) | Uzbek (uz) |
| Turkic (Kipchak) | Bashkir (ba), Kazakh (kk), Kyrgyz (ky), Tatar (tt) |
| Turkic (Oghuz) | Azerbaijani (az), Chuvash (cv), Turkish (tr), Turkmen (tk) |
| Turkic (Siberian) | Tuvan (tyv), Yakut (sah) |
| Uralic | Estonian (et), Finnish (fi), Hungarian (hu) |

Technical details

The model was pretrained on 16 V100 GPUs for 600k training steps with a set of fixed hyperparameters: a vocabulary size of 100k, a context window of 2048, a learning rate of 2e-4, and a batch size of 4.
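For reference, the two published size-related hyperparameters map directly onto a GPT-2-style configuration in the HuggingFace library. The sketch below sets only those two values; the remaining depth and width are left at `GPT2Config` defaults and are not the actual 13B architecture.

```python
from transformers import GPT2Config

# Only vocab_size and n_positions come from the hyperparameters above;
# the layer count and hidden size stay at library defaults, not the real 13B sizes.
config = GPT2Config(
    vocab_size=100_000,  # 100k-token vocabulary
    n_positions=2048,    # 2048-token context window
)
```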

The mGPT architecture is based on GPT-3. We use the architecture description by Brown et al. and the GPT-2 code base (Radford et al., 2019) from the HuggingFace library (Wolf et al., 2020) and Megatron-LM (Shoeybi et al., 2019).
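A minimal generation sketch with the HuggingFace library, assuming the checkpoint is available on the Hub under the id `ai-forever/mGPT-13B`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai-forever/mGPT-13B"  # assumed Hub id for this checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Prompt the causal LM and sample a continuation
inputs = tokenizer("The history of the Tatar language begins", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```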

Perplexity

The mGPT-13B model achieves its best perplexities, within the 2-to-10 range, for the majority of languages, including Dravidian (Malayalam, Tamil, Telugu), Indo-Aryan (Bengali, Hindi, Marathi), Slavic (Belarusian, Ukrainian, Russian, Bulgarian), Sino-Tibetan (Burmese), Kipchak (Bashkir, Kazakh), and others. Higher perplexities, up to 20, are observed for only seven languages from different families.
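Perplexity here is the exponentiated average negative log-likelihood per token. A minimal sketch of computing it for a single text with a loaded causal LM (not the exact evaluation protocol from the paper):

```python
import torch

@torch.no_grad()
def perplexity(model, tokenizer, text):
    # Passing labels to a causal LM returns the mean token cross-entropy (NLL)
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    out = model(**enc, labels=enc["input_ids"])
    # Perplexity = exp(mean NLL)
    return torch.exp(out.loss).item()
```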

Language-wise perplexity results

Family-wise perplexity results

The scores are averaged over the languages within each family.