---
license: apache-2.0
language:
- multilingual
- af
- am
- ar
- az
- be
- bg
- bn
- ca
- ceb
- co
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fil
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- haw
- hi
- hmn
- ht
- hu
- hy
- ig
- is
- it
- iw
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lb
- lo
- lt
- lv
- mg
- mi
- mk
- ml
- mn
- mr
- ms
- mt
- my
- ne
- nl
- no
- ny
- pa
- pl
- ps
- pt
- ro
- ru
- sd
- si
- sk
- sl
- sm
- sn
- so
- sq
- sr
- st
- su
- sv
- sw
- ta
- te
- tg
- th
- tr
- uk
- und
- ur
- uz
- vi
- xh
- yi
- yo
- zh
- zu
datasets:
- mc4
- bigscience/xP3
---
# Multilingual Text-to-Text Transfer Transformer Zero (MT0)
Version 1.0 / 28 October 2022
---
# Models
mT5 is pretrained on the [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) corpus, covering 101 languages:
Afrikaans, Albanian, Amharic, Arabic, Armenian, Azerbaijani, Basque, Belarusian, Bengali, Bulgarian, Burmese, Catalan, Cebuano, Chichewa, Chinese, Corsican, Czech, Danish, Dutch, English, Esperanto, Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati, Haitian Creole, Hausa, Hawaiian, Hebrew, Hindi, Hmong, Hungarian, Icelandic, Igbo, Indonesian, Irish, Italian, Japanese, Javanese, Kannada, Kazakh, Khmer, Korean, Kurdish, Kyrgyz, Lao, Latin, Latvian, Lithuanian, Luxembourgish, Macedonian, Malagasy, Malay, Malayalam, Maltese, Maori, Marathi, Mongolian, Nepali, Norwegian, Pashto, Persian, Polish, Portuguese, Punjabi, Romanian, Russian, Samoan, Scottish Gaelic, Serbian, Shona, Sindhi, Sinhala, Slovak, Slovenian, Somali, Sotho, Spanish, Sundanese, Swahili, Swedish, Tajik, Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Uzbek, Vietnamese, Welsh, West Frisian, Xhosa, Yiddish, Yoruba, Zulu.
mT5 was then finetuned on:
- [xP3](https://huggingface.co/bigscience/xP3) to obtain [mt0-small](https://huggingface.co/bigscience/mt0-small)/[mt0-base](https://huggingface.co/bigscience/mt0-base)/[mt0-large](https://huggingface.co/bigscience/mt0-large)/[mt0-xl](https://huggingface.co/bigscience/mt0-xl)/[mt0-xxl](https://huggingface.co/bigscience/mt0-xxl)
- [P3](https://huggingface.co/bigscience/P3) to obtain [mt0-p3-xxl](https://huggingface.co/bigscience/mt0-p3-xxl)
- [xP3mt](https://huggingface.co/bigscience/xP3mt) to obtain [mt0-mt-xxl](https://huggingface.co/bigscience/mt5-mt-xxl)
## Model Flavors
MT0 is a multilingual model capable of following user instructions in a variety of languages. Together with our paper [TODO: LINK], we release the following models:
----
- [mt0-small](https://huggingface.co/bigscience/mt0-small): 300M-parameter multitask finetuned version of [mt5-small](https://huggingface.co/google/mt5-small) on [xP3](https://huggingface.co/bigscience/xP3)
- [mt0-base](https://huggingface.co/bigscience/mt0-base): 580M-parameter multitask finetuned version of [mt5-base](https://huggingface.co/google/mt5-base) on [xP3](https://huggingface.co/bigscience/xP3)
- [mt0-large](https://huggingface.co/bigscience/mt0-large): 1.2B-parameter multitask finetuned version of [mt5-large](https://huggingface.co/google/mt5-large) on [xP3](https://huggingface.co/bigscience/xP3)
- [mt0-xl](https://huggingface.co/bigscience/mt0-xl): 3.7B-parameter multitask finetuned version of [mt5-xl](https://huggingface.co/google/mt5-xl) on [xP3](https://huggingface.co/bigscience/xP3)
- [mt0-xxl](https://huggingface.co/bigscience/mt0-xxl): 13B-parameter multitask finetuned version of [mt5-xxl](https://huggingface.co/google/mt5-xxl) on [xP3](https://huggingface.co/bigscience/xP3)
----
- [mt0-p3-xxl](https://huggingface.co/bigscience/mt0-p3-xxl): 13B-parameter multitask finetuned version of [mt5-xxl](https://huggingface.co/google/mt5-xxl) on [P3](https://huggingface.co/bigscience/P3)
- [mt0-mt-xxl](https://huggingface.co/bigscience/mt5-mt-xxl): 13B-parameter multitask finetuned version of [mt5-xxl](https://huggingface.co/google/mt5-xxl) on [xP3mt](https://huggingface.co/bigscience/xP3mt)
## Basics
*This section provides information about the model type, version, license, funders, release date, developers, and contact information.*
*It is useful for anyone who wants to reference the model.*
<details>
<summary>Click to expand</summary>
*All collaborators are either volunteers or have an agreement with their employer. (Further breakdown of participants forthcoming.)*
**Model Type:** Transformer-based Language Model
**Checkpoints format:** `transformers`
**Version:** 1.0.0
**Languages:** Multiple; see [training data](#training-data)
**License:** Apache 2.0
**Release Date Estimate:** Friday, 28.October.2022
**Send Questions to:** niklas@huggingface.co
**Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
</details>
## Technical Specifications
*This section includes details about the model objective and architecture, and the compute infrastructure.*
*It is useful for people interested in model development.*
<details>
<summary>Click to expand</summary>
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
### Model Architecture and Objective
* Same architecture as [mt5](https://arxiv.org/abs/2010.11934)
* Encoder-decoder architecture
**Objective Function:** Cross Entropy with mean reduction on target tokens (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
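As an illustration of this objective, here is a minimal PyTorch sketch; the tensor shapes and the use of `-100` to mask padded target positions are assumptions for the example, not a description of the actual training code:

```python
import torch
import torch.nn as nn

# Hypothetical batch: 2 target sequences of length 4 over a 10-token vocabulary.
logits = torch.randn(2, 4, 10)                 # decoder output scores
targets = torch.tensor([[1, 5, 2, -100],       # -100 marks padding and is
                        [3, 7, -100, -100]])   # ignored by the loss (assumption)

# Cross entropy with mean reduction over the (non-padded) target tokens.
loss_fn = nn.CrossEntropyLoss(reduction="mean", ignore_index=-100)
# CrossEntropyLoss expects (N, C) logits and (N,) class indices,
# so flatten the batch and sequence dimensions.
loss = loss_fn(logits.view(-1, logits.size(-1)), targets.view(-1))
print(loss)  # scalar mean cross entropy
```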
### Compute infrastructure
// TODO @adarob: Can you describe where you trained it?
#### Hardware
// TODO @adarob: Can you describe what was the hardware used?
#### Software
* T5X([Github link](https://github.com/google-research/t5x), [paper](https://arxiv.org/abs/2203.17189))
</details>
---
# Training
*This section provides information about the training data, the speed and size of training elements, and the environmental impact of training.*
*It is useful for people who want to learn more about the model inputs and training footprint.*
<details>
<summary>Click to expand</summary>
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
The model was pretrained on mC4 and then multitask finetuned on xP3, P3, or xP3mt, depending on the flavor (see [Model Flavors](#model-flavors)).
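For readers who want to inspect the pretraining data directly, here is a minimal sketch using the `datasets` library; the `"en"` config name and streaming mode are illustrative assumptions, and the same pattern applies to the other language subsets:

```python
from datasets import load_dataset

# Stream one language subset of mC4 rather than downloading the full corpus.
# The "en" config is an assumption for illustration; other language codes work analogously.
mc4_en = load_dataset("mc4", "en", split="train", streaming=True)

sample = next(iter(mc4_en))
print(sample["text"][:200])  # mC4 examples expose a raw "text" field
```

The finetuning mixtures (xP3, P3, xP3mt) can be browsed directly on their Hub pages linked above.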
### Languages
// TODO @thomasw21: Copy list from mt5
## Speeds, Sizes, Times
// TODO @adarob: Maybe we can push tensorboard on this repo as well
Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11-176B-ml-logs/)
- Checkpoint size:
- Bf16 weights: 51.7GB
- Number of epochs: 1
// TODO @adarob: Can you share where the server is?
- Server training location:
## Environmental Impact
// TODO @adarob: Is it possible for you to share some information about the impact of where you trained it?
The evaluation supercomputer, [Jean Zay](http://www.idris.fr/eng/jean-zay/), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
</details>
---
# Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.*
*It is useful for anyone considering using the model or who is affected by the model.*
<details>
<summary>Click to expand</summary>
## How to use
This model can be used and deployed with the Hugging Face ecosystem; it requires `transformers` and `accelerate` to be installed. The model can be downloaded and run as follows:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "..."  # e.g. "checkpoint_1006000"; omit `revision` to load the latest weights
model_name = "bigscience/mt0-xxl"

model = AutoModelForSeq2SeqLM.from_pretrained(model_name, revision=checkpoint, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name, revision=checkpoint)

# French prompt asking whether a (positive) product review is positive or negative.
inputs = tokenizer.encode("Commentaire: C'est la meilleure crêpière que j'ai jamais eu. Je l'adore.\nCe commentaire est-il positif ou négatif?", return_tensors="pt").to(model.device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
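For the 13B-parameter checkpoint, GPU memory can be a constraint. The sketch below shows an optional 8-bit loading variant; it assumes `bitsandbytes` is installed, uses a translation prompt chosen purely for illustration, and is not part of the reference recipe above:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "bigscience/mt0-xxl"

# Optional: load the weights in 8-bit to reduce GPU memory (requires `bitsandbytes`).
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, device_map="auto", load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)

inputs = tokenizer.encode("Translate to English: Je t'aime.", return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```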
## Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
### Direct Use
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
### Downstream Use
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
</details>
---
# Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
<details>
<summary>Click to expand</summary>
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
- Lead users to attribute human traits to it, such as sentience or consciousness
</details>
---
# Evaluation
*This section describes the evaluation protocols and provides the results.*
<details>
<summary>Click to expand</summary>
## Metrics
*This section describes the different ways performance is calculated and why.*
// TODO @niklas
## Results
*Results are based on the [Metrics](#metrics).*
**Zero-shot evaluations:**
// TODO @niklas
**Train-time Evaluation:**
// TODO @adarob: Pending if we can get access to tensorboard
</details>
---
# Recommendations
*This section provides information on warnings and potential mitigations.*
<details>
<summary>Click to expand</summary>
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models trained or finetuned downstream of MT0 should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
</details>
---
# Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
<details>
<summary>Click to expand</summary>
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
</details>
---
# More Information
*This section provides links to writing on dataset creation, technical specifications, lessons learned, and initial results.*
<details>
<summary>Click to expand</summary>
## Intermediate checkpoints
For academic (or any) usage, we publish intermediate checkpoints corresponding to the model state every 1,000 training steps. They are available as branches in this repository and can be loaded with `transformers`:
```python
from transformers import AutoModelForSeq2SeqLM

checkpoint = "..."  # e.g. "checkpoint_1006000"
# Each intermediate checkpoint is a branch of the repository; pass its name as `revision`.
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-xxl", revision=checkpoint, torch_dtype="auto", device_map="auto")
```
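To discover which checkpoint branches exist without hard-coding a revision name, the repository's branches can be listed programmatically. A minimal sketch, assuming a recent `huggingface_hub` release that provides `list_repo_refs`:

```python
from huggingface_hub import list_repo_refs

# List all branches of the model repository; intermediate checkpoints appear as
# branches (names containing a step count) alongside "main".
refs = list_repo_refs("bigscience/mt0-xxl")
for branch in refs.branches:
    print(branch.name)
```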
## Dataset Creation
// TODO @niklas: Point to the arxiv paper
## Original checkpoints
The checkpoints in this repo correspond to the HuggingFace Transformers format. We'll provide T5X checkpoints as well.
# Citing MT0
Please use the following BibTeX entry to cite MT0:
```bibtex
TODO @niklas
```