Text Generation · Transformers · PyTorch · English · llama · text-generation-inference · Inference Endpoints
winglian committed 218fb47 (1 parent: 1959f62)

Update README.md

Files changed (1): README.md (+4, -0)
README.md CHANGED
@@ -21,6 +21,10 @@ The Wizard Mega 13B SFT model is being released after two epochs as the eval los
 
 Wizard Mega was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB for 15 hours. The configuration to duplicate this build is provided in this repo's [/config folder](https://huggingface.co/openaccess-ai-collective/wizard-mega-13b/tree/main/configs).
 
+## Bias, Risks, and Limitations
+Wizard Mega has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
+Wizard Mega was fine-tuned from the base LLaMA 13B model; please refer to its model card's Limitations section for relevant information.
+
 ## Examples
 
 ````
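Since the card tags the model for text generation with Transformers and PyTorch, here is a minimal inference sketch for the released checkpoint, with the repo id taken from the URLs in the diff. It assumes the standard `AutoModelForCausalLM` API and enough GPU memory for a 13B model in fp16; the instruction-style prompt template is an illustrative assumption, not taken from this commit.

```python
# Minimal sketch: load the checkpoint and generate a completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openaccess-ai-collective/wizard-mega-13b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 roughly halves the memory of a 13B model
    device_map="auto",          # needs `accelerate`; places layers on available devices
)

# Assumed instruction-style prompt; check the model card's Examples section for the real template.
prompt = "### Instruction: Explain what supervised fine-tuning (SFT) is.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A 13B fp16 model will not fit on a single small GPU; spreading layers across multiple GPUs via `device_map="auto"`, or quantized loading with `load_in_8bit=True` (requires `bitsandbytes`), are common workarounds.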