---
license: apache-2.0
language:
- en
---

Pre-trained model fine-tuned with Reinforcement Learning on the [DIALOCONAN](https://github.com/marcoguerini/CONAN#dialoconan) dataset, using [facebook/roberta-hate-speech-dynabench-r4-target](https://huggingface.co/facebook/roberta-hate-speech-dynabench-r4-target) as the reward model.
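The card does not spell out how the hate-speech classifier's output is turned into a scalar RL reward, so the mapping below is only an illustrative sketch: it assumes the classifier emits `[not-hate, hate]` logits and that the reward is the probability of the non-hateful class, so less hateful generations score higher.

```python
import math

def reward_from_logits(logits):
    """Illustrative reward: softmax probability of the non-hateful class.

    Assumes a two-class classifier with logits ordered [not-hate, hate];
    the actual label order and reward shaping used for this model are
    not specified in the card.
    """
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return exps[0] / total  # higher reward for less hateful text

# A strongly non-hateful prediction yields a reward close to 1,
# a strongly hateful one a reward close to 0:
print(reward_from_logits([2.0, -1.0]))
print(reward_from_logits([-1.0, 2.0]))
```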

Toxicity results on the [allenai/real-toxicity-prompts](https://huggingface.co/datasets/allenai/real-toxicity-prompts) dataset using custom prompts (see 🥞[RewardLM](https://github.com/DanielSc4/RewardLM) for details).

| Toxicity Level | RedPajama-INCITE-Chat-3B |
|:--------------:|:------------------------:|
| Pre-Trained    | 0.217                    |
| **Fine-Tuned** | **0.129**                |
| [RL](https://huggingface.co/DanielSc4/RedPajama-INCITE-Chat-3B-v1-RL-LoRA-8bit-test1) | 0.160 |
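How the per-model "Toxicity Level" above is aggregated is documented in RewardLM rather than in this card; a plausible reading is the mean toxic-class probability that the classifier assigns across all generations. A minimal sketch under that assumption, with made-up per-prompt scores in place of real classifier outputs:

```python
def toxicity_level(toxic_probs):
    """Aggregate per-generation toxic-class probabilities into one score.

    Assumption: the table's Toxicity Level is the mean probability of the
    toxic class over all evaluated generations (scores are hypothetical
    here; in practice they would come from the hate-speech classifier).
    """
    if not toxic_probs:
        raise ValueError("no scores to aggregate")
    return sum(toxic_probs) / len(toxic_probs)

# Hypothetical per-prompt toxic-class probabilities:
scores = [0.05, 0.40, 0.10, 0.31]
print(round(toxicity_level(scores), 3))
```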

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_DanielSc4__RedPajama-INCITE-Chat-3B-v1-FT-LoRA-8bit-test1).

| Metric                | Value                     |
|-----------------------|---------------------------|
| Avg.                  | 33.13   |
| ARC (25-shot)         | 38.65          |
| HellaSwag (10-shot)   | 63.53    |
| MMLU (5-shot)         | 25.16         |
| TruthfulQA (0-shot)   | 36.07   |
| Winogrande (5-shot)   | 60.14   |
| GSM8K (5-shot)        | 0.08        |
| DROP (3-shot)         | 8.24         |