---
language:
  - en
library_name: nemo
datasets:
  - the_pile
tags:
  - text2text-generation
  - pytorch
  - seq2seq
  - masked language modeling
license: cc-by-4.0
---

NeMo Megatron-T5 3B

Model architecture: Transformer (T5) | Model size: 3B | Language: English

Model Description

NeMo Megatron-T5 3B is a transformer-based masked language model. T5 [1] is a class of encoder-decoder models trained with a span-based masked language modeling objective. We follow the T5v1.1 approach of pre-training using only the masked language modeling objective. The released checkpoint uses Tensor Parallelism (TP) of 2 and Pipeline Parallelism (PP) of 1; it should fit on a single NVIDIA GPU for inference and on two A100 80GB GPUs for fine-tuning.
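The objective can be illustrated with a short sketch in plain Python (illustration only, not the NeMo implementation; the example sentence and the spans chosen for masking are arbitrary assumptions): contiguous spans of input tokens are replaced by sentinel tokens such as <extra_id_0>, and the decoder is trained to emit each sentinel followed by the tokens it replaced.

# Minimal sketch of T5-style span corruption (not the NeMo code path).
# The spans are picked by hand here; real pre-training samples them randomly.
tokens = "Neil Armstrong was the first person to set foot on the moon".split()

# Suppose we mask "Neil Armstrong" and "the moon" (half-open [start, end) indices).
spans_to_mask = [(0, 2), (10, 12)]

encoder_input, decoder_target = [], []
cursor = 0
for i, (start, end) in enumerate(spans_to_mask):
    sentinel = f"<extra_id_{i}>"
    encoder_input.extend(tokens[cursor:start])  # keep the unmasked tokens
    encoder_input.append(sentinel)              # replace the span with one sentinel
    decoder_target.append(sentinel)             # target: sentinel followed by the dropped span
    decoder_target.extend(tokens[start:end])
    cursor = end
encoder_input.extend(tokens[cursor:])

print(" ".join(encoder_input))
# <extra_id_0> was the first person to set foot on <extra_id_1>
print(" ".join(decoder_target))
# <extra_id_0> Neil Armstrong <extra_id_1> the moon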

This model was trained with NeMo Megatron.

Getting started

Step 1: Install NeMo and dependencies

You will need to install NVIDIA Apex and NeMo.

git clone https://github.com/ericharper/apex.git
cd apex
git checkout nm_v1.11.0
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
pip install nemo_toolkit['nlp']==1.11.0
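A quick way to confirm that the installation succeeded is to import the toolkit and check its version (a minimal sketch; it only verifies that NeMo and its NLP collection can be imported):

# Sanity check: NeMo and its NLP collection should import without errors.
import nemo
import nemo.collections.nlp as nemo_nlp

print(nemo.__version__)  # expected to report 1.11.0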

Alternatively, you can use the NeMo Megatron training Docker container, which has all dependencies pre-installed: https://developer.nvidia.com/nemo-megatron-open-beta?nvid=nv-int-tblg-249896

Step 2: Run inference

Note: the model was trained with Tensor Parallelism (TP) of 2 and Pipeline Parallelism (PP) of 1, but it should be possible to run inference with a tensor parallel size of 1 on most NVIDIA GPUs.

git clone https://github.com/NVIDIA/NeMo.git 
cd NeMo/examples/nlp/language_modeling
git checkout v1.11.0
python megatron_t5_eval.py \
    --model_file /raid/Data/NMT/Models/t5_3b/nemo_megatron_t5_3b_bf16_tp2.nemo \
    --prompt '<mask> was the first person to set foot on the moon. When he did, he uttered the phrase <mask> for man, one <mask> for mankind which is still a popular quote today.' \
    --tensor_model_parallel_size 2

The script will automatically replace all <mask> tokens with the appropriate sentinel tokens used while pre-training and attempt to fill them in autoregressively with greedy decoding.
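The substitution performed by the script can be reproduced in a few lines of plain Python (a sketch of the input preprocessing only; generation itself happens inside megatron_t5_eval.py):

# Re-create the <mask> -> <extra_id_N> substitution described above (illustration only).
prompt = ('<mask> was the first person to set foot on the moon. When he did, he uttered the '
          'phrase <mask> for man, one <mask> for mankind which is still a popular quote today.')

masked_input = prompt
n = 0
while '<mask>' in masked_input:
    masked_input = masked_input.replace('<mask>', f'<extra_id_{n}>', 1)
    n += 1

print(masked_input)
# <extra_id_0> was the first person to set foot on the moon. When he did, he uttered the
# phrase <extra_id_1> for man, one <extra_id_2> for mankind which is still a popular quote today.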

Expected Response:

{
    'prompt': '<mask> was the first person to set foot on the moon. When he did, he uttered the phrase <mask> for man, one <mask> for mankind which is still a popular quote today.',
    'completion':
          {
          'text': '[CLS] <extra_id_0> Neil Armstrong <extra_id_1> one small step <extra_id_2> giant leap',
          'tokens': [(101, '[CLS]', -2.9802276912960224e-06), (28996, '<extra_id_0>', -0.1492447555065155), (6003, 'Neil', -0.0015669699059799314), (8800, 'Armstrong', -0.013404252007603645), (28997, '<extra_id_1>', -0.9019092917442322), (1141, 'one', -0.7962003350257874), (1353, 'small', -0.006306509021669626), (2585, 'step', -1.9073468138230965e-06), (28998, '<extra_id_2>', -0.0026884861290454865), (4994, 'giant', -0.1679367572069168), (13660, 'leap', -5.960462772236497e-07)]
          },
    'masked_input': '<extra_id_0> was the first person to set foot on the moon . When he did , he uttered the phrase <extra_id_1> for man , one <extra_id_2> for mankind which is still a popular quote today .'
}
  • prompt: The raw prompt provided as input.
  • completion:
    • text: The final text generated by the model, including special/sentinel tokens (except </s>).
    • tokens: Each generated subword along with its log-probability.
  • masked_input: The original raw prompt with each <mask> token replaced by the corresponding sentinel token (<extra_id_N>) used during pre-training.
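Given such a response, the predicted spans and the overall confidence of the completion can be pulled out with ordinary Python (a sketch that assumes the dictionary above has been loaded into a variable named response):

import re

# Sketch: extract predicted spans and total log-probability from the response dict above.
completion = response['completion']

# Sum of per-token log-probabilities (third element of each tuple in 'tokens').
total_logprob = sum(logprob for _, _, logprob in completion['tokens'])
print(f'total log-probability: {total_logprob:.4f}')

# Drop the leading [CLS] token and split on sentinel tokens to recover one string per mask.
text = completion['text'].replace('[CLS]', '').strip()
spans = [s.strip() for s in re.split(r'<extra_id_\d+>', text) if s.strip()]
print(spans)  # ['Neil Armstrong', 'one small step', 'giant leap']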

Training Data

The model was trained on "The Pile" dataset prepared by EleutherAI [4].

Evaluation results

Fine-tuned performance on downstream validation sets for different tasks:

MNLI-M    MNLI-MM    SST-2    STS-B (Spearman)
90.62     90.61      97.2     91.5

Limitations

The model was trained on data originally crawled from the Internet. This data contains toxic language and societal biases. Therefore, the model may amplify those biases and return toxic responses, especially when prompted with toxic prompts.

References

[1] Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

[2] Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism

[3] NVIDIA NeMo Toolkit

[4] The Pile: An 800GB Dataset of Diverse Text for Language Modeling

License

Use of this model is covered by the CC-BY-4.0 license. By downloading the publicly released version of the model, you accept the terms and conditions of the CC-BY-4.0 license.