---
license: other
license_name: nvidia-open-model-license
license_link: LICENSE
---
## Nemotron-4-340B-Instruct

[![Model architecture](https://img.shields.io/badge/Model%20Arch-Transformer%20Decoder-green)](#model-architecture)[![Model size](https://img.shields.io/badge/Params-340B-green)](#model-architecture)[![Language](https://img.shields.io/badge/Language-Multilingual-green)](#datasets)

### License

NVIDIA Open Model License

### Model Overview

Nemotron-4-340B-Instruct is a large language model (LLM), fine-tuned from Nemotron-4-340B-Base and optimized for English single- and multi-turn chat use cases. The base model was pre-trained on a corpus of 8 trillion tokens consisting of a diverse assortment of English-based texts, 40+ coding languages, and 50+ natural languages.

Subsequently, Nemotron-4-340B-Instruct went through additional alignment steps, including:

- Supervised Fine-Tuning (SFT)
- Direct Preference Optimization (DPO)
- Additional in-house alignment techniques

This results in a final model that is aligned for human chat preferences and improved in mathematical reasoning, coding, and instruction following.
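
To make the second step concrete: DPO trains the policy directly on pairwise preference data against a frozen reference model, with no separate reward model. Below is a minimal PyTorch sketch of the standard DPO objective; it illustrates the general technique only, not NVIDIA's in-house recipe, and the `beta` default is a placeholder.

```python
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective on summed token log-probs of shape (batch,).

    The policy is pushed to prefer the chosen response over the rejected
    one, measured relative to a frozen reference model; beta controls how
    far the policy may drift from the reference.
    """
    chosen_margin = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_margin = beta * (policy_rejected_logps - ref_rejected_logps)
    # -log(sigmoid(x)) is minimized when chosen outscores rejected.
    return -F.logsigmoid(chosen_margin - rejected_margin).mean()
```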

This model is ready for commercial use.

**Model Developer:** NVIDIA

**Model Input:** Text
**Input Format:** String
**Input Parameters:** One-Dimensional (1D)

**Model Output:** Text
**Output Format:** String
**Output Parameters:** 1D

**Model Dates:** Nemotron-4-340B-Instruct was trained between December 2023 and May 2024.

**Data Freshness:** The pretraining data has a cutoff of June 2023.

### Required Hardware

BF16 Inference:

- 8x H200 (1x H200 Node)
- 16x H100 (2x H100 Nodes)
- 16x A100 (2x A100 Nodes)

FP8 Inference:

- 8x H100 (1x H100 Node)
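
These configurations follow from weight memory alone; a rough back-of-envelope check (ignoring KV cache, activations, and framework overhead, so real headroom requirements are higher) is sketched below.

```python
# Back-of-envelope weight memory for 340B parameters
# (ignores KV cache, activations, and framework overhead).
PARAMS = 340e9

for precision, bytes_per_param in [("BF16", 2), ("FP8", 1)]:
    weights_gib = PARAMS * bytes_per_param / 1024**3
    print(f"{precision}: ~{weights_gib:.0f} GiB of weights")

# BF16 -> ~633 GiB: needs 16x 80 GB GPUs (1280 GB) or 8x 141 GB H200s (1128 GB).
# FP8  -> ~317 GiB: fits on 8x 80 GB H100s (640 GB) with room for KV cache.
```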

### Model Architecture

The base model, Nemotron-4-340B, was trained with a global batch size of 2304 and a sequence length of 4096 tokens, and uses Grouped-Query Attention (GQA) and Rotary Position Embeddings (RoPE).

**Architecture Type:** Transformer Decoder (auto-regressive language model)
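
For readers unfamiliar with these two mechanisms, the NumPy sketch below shows the core idea of each. The head counts and dimensions are small placeholders chosen for illustration, not Nemotron-4's actual configuration.

```python
import numpy as np

# Hypothetical sizes for illustration only (not Nemotron-4's real config).
n_q_heads, n_kv_heads, head_dim, seq = 8, 2, 16, 4


def rope(x):
    """Rotary Position Embedding: rotate consecutive feature pairs by a
    position-dependent angle so attention scores depend on relative offsets.
    x has shape (seq, heads, head_dim)."""
    d = x.shape[-1]
    inv_freq = 1.0 / (10000 ** (np.arange(0, d, 2) / d))        # (d/2,)
    ang = np.arange(x.shape[0])[:, None] * inv_freq[None, :]     # (seq, d/2)
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos[:, None, :] - x2 * sin[:, None, :]
    out[..., 1::2] = x1 * sin[:, None, :] + x2 * cos[:, None, :]
    return out


q = rope(np.random.randn(seq, n_q_heads, head_dim))
k = rope(np.random.randn(seq, n_kv_heads, head_dim))

# Grouped-Query Attention: each K/V head is shared by a group of query heads,
# shrinking the KV cache by a factor of n_q_heads / n_kv_heads.
k_full = np.repeat(k, n_q_heads // n_kv_heads, axis=1)   # (seq, n_q_heads, head_dim)
scores = np.einsum('qhd,khd->hqk', q, k_full) / np.sqrt(head_dim)
```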

### Software Integration

**Supported Hardware Architecture Compatibility:** NVIDIA H100, A100 80GB, A100 40GB

### Usage

1. We will spin up an inference server and then call it from a Python script. Let's first define the Python script ``call_server.py``:

```python
import json

import requests

headers = {"Content-Type": "application/json"}


def text_generation(data, ip='localhost', port=None):
    # Send the generation request to the NeMo inference server.
    resp = requests.put(f'http://{ip}:{port}/generate', data=json.dumps(data), headers=headers)
    return resp.json()


def get_generation(prompt, greedy, add_BOS, token_to_gen, min_tokens, temp, top_p, top_k, repetition, batch=False):
    data = {
        "sentences": [prompt] if not batch else prompt,
        "tokens_to_generate": int(token_to_gen),
        "temperature": temp,
        "add_BOS": add_BOS,
        "top_k": top_k,
        "top_p": top_p,
        "greedy": greedy,
        "all_probs": False,
        "repetition_penalty": repetition,
        "min_tokens_to_generate": int(min_tokens),
        "end_strings": ["<|endoftext|>", "<extra_id_1>", "\x11", "<extra_id_1>User"],
    }
    sentences = text_generation(data, port=1424)['sentences']
    return sentences[0] if not batch else sentences


PROMPT_TEMPLATE = """<extra_id_0>System

<extra_id_1>User
{prompt}
<extra_id_1>Assistant
"""

question = "Write a poem on NVIDIA in the style of Shakespeare"
prompt = PROMPT_TEMPLATE.format(prompt=question)
print(prompt)

response = get_generation(prompt, greedy=True, add_BOS=False, token_to_gen=1024, min_tokens=1, temp=1.0, top_p=1.0, top_k=0, repetition=1.0, batch=False)
print(response)
```
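
``PROMPT_TEMPLATE`` above covers a single turn. For multi-turn chat, previous exchanges can be concatenated in the same ``<extra_id_1>`` tag scheme before a final open Assistant slot; the helper below is our extrapolation from the single-turn template, not an officially documented format.

```python
def build_multi_turn_prompt(turns, system=""):
    """Format (user, assistant) exchanges in the same tag scheme as
    PROMPT_TEMPLATE; pass None as the last assistant message to leave
    the final Assistant slot open for the model to complete.
    NOTE: extrapolated from the single-turn template above."""
    prompt = f"<extra_id_0>System\n{system}\n"
    for user_msg, assistant_msg in turns:
        prompt += f"<extra_id_1>User\n{user_msg}\n<extra_id_1>Assistant\n"
        if assistant_msg is not None:
            prompt += f"{assistant_msg}\n"
    return prompt


# One completed exchange followed by an open follow-up question.
prompt = build_multi_turn_prompt([
    ("Write a poem on NVIDIA in the style of Shakespeare", "<first response>"),
    ("Now shorten it to four lines", None),
])
```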

2. Given this Python script, we create a bash script that spins up the inference server within the [NeMo container](https://github.com/NVIDIA/NeMo/blob/main/Dockerfile) and calls ``call_server.py``. The bash script ``nemo_inference.sh`` is as follows:

```bash
WEB_PORT=1424

depends_on () {
    HOST=$1
    PORT=$2
    STATUS=$(curl -X PUT http://$HOST:$PORT >/dev/null 2>/dev/null; echo $?)
    while [ $STATUS -ne 0 ]
    do
        echo "waiting for server ($HOST:$PORT) to be up"
        sleep 10
        STATUS=$(curl -X PUT http://$HOST:$PORT >/dev/null 2>/dev/null; echo $?)
    done
    echo "server ($HOST:$PORT) is up and running"
}

echo "output filename: $OUTPUT_FILENAME"

# Launch the NeMo Megatron evaluation server (TP=8 x PP=4 across 4 nodes).
/usr/bin/python3 /opt/NeMo/examples/nlp/language_modeling/megatron_gpt_eval.py \
    gpt_model_file=$NEMO_FILE \
    pipeline_model_parallel_split_rank=0 \
    server=True tensor_model_parallel_size=8 \
    trainer.precision=bf16 pipeline_model_parallel_size=4 \
    trainer.devices=8 \
    trainer.num_nodes=4 \
    web_server=False \
    port=${WEB_PORT} &
SERVER_PID=$!

readonly local_rank="${LOCAL_RANK:=${SLURM_LOCALID:=${OMPI_COMM_WORLD_LOCAL_RANK:-}}}"
if [ $SLURM_NODEID -eq 0 ] && [ $local_rank -eq 0 ]; then
    # Only the first rank on the first node waits for the server and runs the client.
    depends_on "0.0.0.0" ${WEB_PORT}

    echo "start get json"
    sleep 5

    echo "SLURM_NODEID: $SLURM_NODEID"
    echo "local_rank: $local_rank"
    /usr/bin/python3 call_server.py
    echo "clean up daemons: $$"
    kill -9 $SERVER_PID
    pkill python
fi
wait
```

3. We can launch ``nemo_inference.sh`` with a Slurm script like the one below, which starts a 4-node job for model inference:

```bash
#!/bin/bash
#SBATCH -A SLURM-ACCOUNT
#SBATCH -p SLURM-PARTITION
#SBATCH -N 4                    # number of nodes
#SBATCH -J generation
#SBATCH --ntasks-per-node=8
#SBATCH --gpus-per-node=8
set -x

read -r -d '' cmd <<EOF
bash nemo_inference.sh
EOF

srun -o $OUTFILE -e $ERRFILE --container-image="$CONTAINER" $MOUNTS bash -c "${cmd}"
```

### Intended Use

Nemotron-4-340B-Instruct is a chat model intended for use in 50+ natural and coding languages. For best performance on a given task, users are encouraged to customize the chat model using the NeMo Framework suite of customization tools, including Parameter-Efficient Fine-Tuning (P-tuning, Adapters, LoRA) and SFT/SteerLM/RLHF.
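
Parameter-efficient methods are attractive at this scale because they train only small adapters around frozen weights. The generic PyTorch sketch below illustrates the LoRA idea; it is not NeMo's actual API, and the rank and scaling values are placeholders.

```python
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B(A(x)). Only A and B are trained."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the original weights
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # start as a no-op update
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
```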

### Evaluation Results

#### MT-Bench (GPT-4-Turbo)

Evaluated using select datasets from [Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena](https://arxiv.org/pdf/2306.05685v4).

| total | writing | roleplay | extraction | stem | humanities | reasoning | math | coding | turn 1 | turn 2 |
| ----- | ------- | -------- | ---------- | ---- | ---------- | --------- | ---- | ------ | ------ | ------ |
| 8.22  | 8.70    | 8.70     | 9.20       | 8.75 | 8.95       | 6.40      | 8.40 | 6.70   | 8.61   | 7.84   |

#### IFEval

Evaluated using the Instruction-Following Eval (IFEval) introduced in [Instruction-Following Evaluation for Large Language Models](https://arxiv.org/pdf/2311.07911).

| Prompt-Strict Acc | Instruction-Strict Acc |
| ----------------- | ---------------------- |
| 79.9              | 86.1                   |

#### MMLU

Evaluated using the multi-task language understanding benchmark introduced in [Measuring Massive Multitask Language Understanding](https://arxiv.org/pdf/2009.03300).

| MMLU 0-shot |
| ----------- |
| 78.7        |

#### GSM8K

Evaluated using the Grade School Math 8K (GSM8K) benchmark introduced in [Training Verifiers to Solve Math Word Problems](https://arxiv.org/pdf/2110.14168v2).

| GSM8K 0-shot |
| ------------ |
| 92.3         |

#### HumanEval

Evaluated using the HumanEval benchmark introduced in [Evaluating Large Language Models Trained on Code](https://arxiv.org/pdf/2107.03374).

| HumanEval 0-shot |
| ---------------- |
| 73.2             |

#### Arena Hard

Evaluated using the [Arena-Hard pipeline](https://lmsys.org/blog/2024-04-19-arena-hard/) from LMSys Org.

| Arena Hard |
| ---------- |
| 54.2       |

#### AlpacaEval 2.0 LC

Evaluated using AlpacaEval 2.0 LC (Length Controlled), introduced in [Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators](https://arxiv.org/abs/2404.04475).

| AlpacaEval 2.0 LC |
| ----------------- |
| 54.2              |

#### MBPP

Evaluated using the MBPP dataset introduced in [Program Synthesis with Large Language Models](https://arxiv.org/abs/2108.07732).

| MBPP |
| ---- |
| 75.4 |

#### TFEval

Evaluated using the CantTalkAboutThis dataset introduced in [CantTalkAboutThis: Aligning Language Models to Stay on Topic in Dialogues](https://arxiv.org/abs/2404.03820).

| Distractor F1 | On-topic F1 |
| ------------- | ----------- |
| 81.7          | 97.7        |

### Limitations

The base model was trained on data that contains toxic language and societal biases originally crawled from the internet. Therefore, the model may amplify those biases and return toxic responses, especially when given toxic prompts. The model may generate answers that are inaccurate, omit key information, or include irrelevant or redundant text, producing socially unacceptable or undesirable responses, even if the prompt itself does not include anything explicitly offensive.