---
license: apache-2.0
datasets:
- Fredithefish/openassistant-guanaco-unfiltered
language:
- en
library_name: transformers
pipeline_tag: conversational
inference: false
---

<img src="https://huggingface.co/Fredithefish/Guanaco-3B-Uncensored/resolve/main/Guanaco-Uncensored.jpg" alt="Guanaco-3B-Uncensored" width="295"/>

# ✨ Guanaco - 3B - Uncensored ✨

Guanaco-3B-Uncensored has been fine-tuned for 6 epochs on the [Unfiltered Guanaco Dataset](https://huggingface.co/datasets/Fredithefish/openassistant-guanaco-unfiltered), using [RedPajama-INCITE-Base-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1) as the base model.

<br>The model does not perform well in languages other than English.

<br>Please note: This model is designed to provide responses without content filtering or censorship. It generates answers without refusals.

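For illustration only, the sketch below shows one way the fine-tuning ingredients named above (the unfiltered dataset and the RedPajama base model) could be loaded with the 🤗 `datasets` and `transformers` libraries. It is an assumption-based sketch, not the training script actually used for this model.

```python
# Illustrative sketch only: loads the dataset and base model referenced above.
# This is NOT the actual training script used for Guanaco-3B-Uncensored.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

dataset = load_dataset("Fredithefish/openassistant-guanaco-unfiltered")

base_id = "togethercomputer/RedPajama-INCITE-Base-3B-v1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

print(dataset)  # inspect the splits before setting up a fine-tuning run
```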
## Special thanks

I would like to thank AutoMeta for providing me with the computing power necessary to train this model.

### Prompt Template

```
### Human: {prompt} ### Assistant:
```
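
Below is a minimal usage sketch showing how a prompt in this format might be sent to the model with the 🤗 `transformers` library. It is not part of the original card: the repository id and the generation settings are assumptions and may need adjusting.

```python
# Minimal usage sketch; the repo id and generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fredithefish/Guanaco-3B-Uncensored-v2"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Wrap the user message in the template shown above.
prompt = "### Human: What is the tallest mountain on Earth? ### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Decode only the newly generated tokens (the assistant's reply).
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```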

### Changes

This is the second version of the 3B-parameter Guanaco uncensored model.

The model has been fine-tuned on V2 of the Guanaco unfiltered dataset.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Fredithefish__Guanaco-3B-Uncensored-v2)

| Metric               | Value |
|----------------------|-------|
| Avg.                 | 34.35 |
| ARC (25-shot)        | 42.15 |
| HellaSwag (10-shot)  | 66.72 |
| MMLU (5-shot)        | 26.18 |
| TruthfulQA (0-shot)  | 35.21 |
| Winogrande (5-shot)  | 63.3  |
| GSM8K (5-shot)       | 0.3   |
| DROP (3-shot)        | 6.6   |