Tags: NeMo · PyTorch · English · text generation · causal-lm
aklife97 committed on
Commit
e4a6e70
1 Parent(s): f38d34c

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -143,7 +143,7 @@ print(get_answer(question, 4096, values))
 
 ## Limitations
 
-Meta’s Llama2 model was trained on publicly available data sources that could include unsafe content. See Meta's Llama2 paper, section 4.1, "Safety in Pretraining" for more details see [LLaMa-2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/). The model may amplify unsafe content, especially when prompted with unsafe content. NVIDIA did not perform bias or toxicity removal or model alignment on the Llama2 model. NVIDIA’s SteerLM methodology applied to Llama2 provides the opportunity to improve model quality through a fine-tuning technique based on data annotation of specific important categories and allows adjustments to model output at run-time based on those same categories.
+Meta’s Llama2 model was trained on publicly available data sources that could include unsafe content. See Meta's [Llama2 paper](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/), section 4.1, "Safety in Pretraining" for more details. The model may amplify unsafe content, especially when prompted with unsafe content. NVIDIA did not perform bias or toxicity removal or model alignment on the Llama2 model. NVIDIA’s SteerLM methodology applied to Llama2 provides the opportunity to improve model quality through a fine-tuning technique based on data annotation of specific important categories and allows adjustments to model output at run-time based on those same categories.
 
 
 ## License