# Ligma
_Ligma Is "Great" for Model Alignment_
WARNING: This model is published for scientific purposes only. It can, and most likely will, produce toxic content.
Trained on the `rejected` column of Anthropic's [hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) dataset.
Use at your own risk. | |
License: comply with the Llama 2 license and you should be fine.
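As a minimal sketch of what "trained on the `rejected` column" means: each hh-rlhf row holds a `chosen` and a `rejected` dialogue transcript, and training data is built from the `rejected` side only. The helper function and sample row below are illustrative, not part of this repo; with the real dataset you would load rows via the Hugging Face `datasets` library as shown in the comment.

```python
def rejected_texts(rows):
    """Return only the `rejected` transcripts from hh-rlhf-style rows.

    Each row is assumed to be a mapping with "chosen" and "rejected"
    keys, matching the hh-rlhf column layout.
    """
    return [row["rejected"] for row in rows]

# With the real dataset, rows would come from the `datasets` library:
#   from datasets import load_dataset
#   rows = load_dataset("Anthropic/hh-rlhf", split="train")
# Here a tiny hypothetical sample stands in for it:
sample = [
    {
        "chosen": "\n\nHuman: Hi\n\nAssistant: Hello, how can I help?",
        "rejected": "\n\nHuman: Hi\n\nAssistant: Go away.",
    },
]

print(rejected_texts(sample)[0])
```

Fine-tuning on this column inverts the usual preference setup, which is why the model trends toward the responses human raters rejected.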