NeMo
PyTorch
nemotron
srvm committed on
Commit 0959baf
1 Parent(s): 6d7f9b3

Model name change

Files changed (2)
  1. README.md +7 -7
  2. config.json +1 -1
README.md CHANGED
@@ -7,7 +7,7 @@ license_link: >-
 
 # Model Overview
 
-Nemotron-4-Minitron-8B-Base is a large language model (LLM) obtained by pruning Nemotron-4 15B; specifically, we prune model embedding size, number of attention heads, and MLP intermediate dimension. Following pruning, we perform continued training with distillation using 94 billion tokens to arrive at the final model; we use the continuous pre-training data corpus used in Nemotron-4 15B for this purpose.
+Minitron-8B-Base is a large language model (LLM) obtained by pruning Nemotron-4 15B; specifically, we prune model embedding size, number of attention heads, and MLP intermediate dimension. Following pruning, we perform continued training with distillation using 94 billion tokens to arrive at the final model; we use the continuous pre-training data corpus used in Nemotron-4 15B for this purpose.
 
 Deriving the Minitron 8B and 4B models from the base 15B model using our approach requires up to **40x fewer training tokens** per model compared to training from scratch; this results in **compute cost savings of 1.8x** for training the full model family (15B, 8B, and 4B). Minitron models exhibit up to a 16% improvement in MMLU scores compared to training from scratch, perform comparably to other community models such as Mistral 7B, Gemma 7B and Llama-3 8B, and outperform state-of-the-art compression techniques from the literature. Please refer to our [arXiv paper](https://arxiv.org/abs/2407.14679) for more details.
 
@@ -15,15 +15,15 @@ This model is for research and development only.
 
 **Model Developer:** NVIDIA
 
-**Model Dates:** Nemotron-4-Minitron-8B-Base was trained between February 2024 and June 2024.
+**Model Dates:** Minitron-8B-Base was trained between February 2024 and June 2024.
 
 ## License
 
-Nemotron-4-Minitron-8B-Base is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).
+Minitron-8B-Base is released under the [NVIDIA Open Model License Agreement](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf).
 
 ## Model Architecture
 
-Nemotron-4-Minitron-8B-Base uses a model embedding size of 4096, 48 attention heads, and an MLP intermediate dimension of 16384.
+Minitron-8B-Base uses a model embedding size of 4096, 48 attention heads, and an MLP intermediate dimension of 16384.
 It also uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).
 
 **Architecture Type:** Transformer Decoder (auto-regressive language model)
@@ -55,14 +55,14 @@ $ git clone -b aot/head_dim_rope --single-branch https://github.com/suiyoubi/tra
 $ pip install -e .
 ```
 
-The following code provides an example of how to load the Nemotron-4-Minitron-8B model and use it to perform text generation.
+The following code provides an example of how to load the Minitron-8B model and use it to perform text generation.
 
 ```python
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
 # Load the tokenizer and model
-model_path = "nvidia/Nemotron-4-Minitron-8B-Base"
+model_path = "nvidia/Minitron-8B-Base"
 tokenizer = AutoTokenizer.from_pretrained(model_path)
 
 device='cuda'
@@ -87,7 +87,7 @@ print(output_text)
 
 **Labeling Method:** Not Applicable
 
-**Properties:** The training corpus for Nemotron-4-Minitron-8B-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued training set, we introduce a small portion of question-answering, and alignment style data to improve model performance.
+**Properties:** The training corpus for Minitron-8B-Base consists of English and multilingual text, as well as code. Our sources cover a variety of document types such as: webpages, dialogue, articles, and other written materials. The corpus spans domains including legal, math, science, finance, and more. In our continued training set, we introduce a small portion of question-answering, and alignment style data to improve model performance.
 
 **Data Freshness:** The pretraining data has a cutoff of June 2023.
 
 
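The hunks above show the card's generation example only through the `device='cuda'` line (its final `print(output_text)` appears in the last hunk header). For context, here is a minimal end-to-end sketch of that example using the renamed checkpoint ID; the dtype, prompt, and generation settings are assumptions and are not taken from this diff.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Renamed checkpoint ID introduced by this commit
model_path = "nvidia/Minitron-8B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Load the model on GPU in bfloat16 (dtype and device placement are assumptions)
device = "cuda"
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map=device
)

# Encode an example prompt and generate a short continuation
prompt = "Complete the sentence: large language models are"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
output_ids = model.generate(**inputs, max_new_tokens=30)

# Decode and print the generated text
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output_text)
```

Note that, per the setup step shown in the hunk above, the card assumes the custom `transformers` branch is installed (`git clone -b aot/head_dim_rope ...` followed by `pip install -e .`).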
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "nvidia/Nemotron-4-Minitron-8B-Base",
+  "_name_or_path": "nvidia/Minitron-8B-Base",
   "architectures": [
     "NemotronForCausalLM"
   ],
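As a quick sanity check after the rename, the new repo ID can be loaded and its config inspected. This is a hypothetical verification snippet, assuming a `transformers` build with Nemotron support (for example the custom branch referenced in the README).

```python
from transformers import AutoConfig

# Load the configuration from the renamed repository ID
config = AutoConfig.from_pretrained("nvidia/Minitron-8B-Base")

# The architecture entry is unchanged by this commit
print(config.architectures)  # expected: ['NemotronForCausalLM'] per config.json
print(config._name_or_path)  # reflects the path the config was loaded from
```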