---
language:
- en
library_name: nemo
datasets:
- the_pile
tags:
- text generation
- pytorch
- causal-lm
license: cc-by-4.0
---

<style>
img {
  display: inline;
}
</style>

| [![Model architecture](https://img.shields.io/badge/Model%20Arch-Transformer%20Decoder-green)](#model-architecture)
| [![Model size](https://img.shields.io/badge/Params-1.3B-green)](#model-architecture)
| [![Language](https://img.shields.io/badge/Language-en--US-lightgrey#model-badge)](#datasets)

# Megatron-GPT 1.3B

## Model Description

Megatron-GPT 1.3B is a transformer-based language model. GPT refers to a class of decoder-only transformer models similar to GPT-2 and GPT-3, while 1.3B refers to the total number of trainable parameters (1.3 billion) [1, 2].

This model was trained with [NeMo Megatron](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html).
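
As a rough sanity check on the parameter count, the usual decoder-only estimate of 12 × (layers) × (hidden size)² plus the embedding matrices lands close to 1.3B. The layer count (24), hidden size (2048), vocabulary size (~50k BPE tokens), and sequence length (2048) used below are assumptions based on typical Megatron GPT 1.3B configurations, not values stated in this card.

```python
# Back-of-the-envelope parameter count for a decoder-only transformer.
# NOTE: num_layers, hidden_size, vocab_size, and seq_length are assumed values
# (typical for a Megatron GPT 1.3B config), not taken from this model card.
num_layers = 24
hidden_size = 2048
vocab_size = 50304      # padded GPT-2 BPE vocabulary (assumption)
seq_length = 2048

block_params = 12 * num_layers * hidden_size ** 2            # attention + MLP weights
embedding_params = (vocab_size + seq_length) * hidden_size   # token + position embeddings

total = block_params + embedding_params
print(f"approximate parameters: {total / 1e9:.2f}B")  # roughly 1.3B
```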

## Getting started

You will need to install NVIDIA Apex and NeMo.

```
git clone https://github.com/ericharper/apex.git
cd apex
git checkout nm_v1.11.0
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
```

```
pip install nemo_toolkit['nlp']==1.11.0
```

Alternatively, you can use the NeMo Megatron training Docker container, which has all dependencies pre-installed.
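
Once NeMo is installed, one common way to generate text with a Megatron-GPT checkpoint is to serve it with NeMo's `megatron_gpt_eval.py` script (in `examples/nlp/language_modeling`, launched with `server=True`) and query it over HTTP. The sketch below assumes such a server is already running on `localhost:5555`; the port, endpoint, and payload fields may differ across NeMo versions, so treat it as illustrative rather than exact.

```python
import json
import requests

# Minimal sketch: query a locally running NeMo text generation server.
# Assumes megatron_gpt_eval.py was launched with server=True on port 5555;
# the /generate endpoint and payload fields below may vary by NeMo version.
PORT = 5555
HEADERS = {"Content-Type": "application/json"}

def request_generation(prompts, tokens_to_generate=64):
    payload = {
        "sentences": prompts,
        "tokens_to_generate": tokens_to_generate,
        "temperature": 1.0,
        "top_k": 0,
        "top_p": 0.9,
        "greedy": False,
        "add_BOS": True,
        "repetition_penalty": 1.2,
    }
    resp = requests.put(
        f"http://localhost:{PORT}/generate",
        data=json.dumps(payload),
        headers=HEADERS,
    )
    resp.raise_for_status()
    return resp.json()["sentences"]

if __name__ == "__main__":
    print(request_generation(["Deep learning is"]))
```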

## Training Data

The model was trained on ["The Pile" dataset prepared by EleutherAI](https://pile.eleuther.ai/).

## Evaluation results

*Zero-shot performance.*

| ARC-Challenge | ARC-Easy | RACE-middle | RACE-high | Winogrande | RTE | BoolQ | HellaSwag | PiQA |
| ------------- | -------- | ----------- | --------- | ---------- | --- | ----- | --------- | ---- |
| 0.3012 | 0.4596 | 0.459 | 0.3811 | 0.5343 | 0.5451 | 0.5979 | 0.4442 | 0.6834 |
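
For context, zero-shot accuracy on these multiple-choice benchmarks is typically computed by scoring each candidate answer with the language model and picking the highest-scoring one, with no task-specific fine-tuning. The snippet below is only a schematic illustration of that protocol; `sequence_log_prob` is a hypothetical helper (e.g. summing token log-probabilities returned by the model), not part of this card or of NeMo.

```python
# Schematic zero-shot multiple-choice evaluation: pick the answer the model
# assigns the highest (length-normalized) log-probability.
# sequence_log_prob() is a hypothetical helper, not an actual NeMo API.

def sequence_log_prob(prompt: str, continuation: str) -> float:
    """Return the model's total log-probability of `continuation` given `prompt`."""
    raise NotImplementedError("wire this up to your model or inference server")

def zero_shot_accuracy(examples):
    """examples: iterable of (question, list_of_choices, correct_choice_index)."""
    correct = 0
    total = 0
    for question, choices, answer_idx in examples:
        scores = [
            sequence_log_prob(question, choice) / max(len(choice.split()), 1)
            for choice in choices
        ]
        correct += int(scores.index(max(scores)) == answer_idx)
        total += 1
    return correct / total
```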

## References

[1] [Improving Language Understanding by Generative Pre-Training](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)

[2] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf)

[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)

## License

Use of this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. By downloading the public release of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
|