This model is a fine-tuned version of microsoft/Orca-2-13b on a subset of the Vezora/Mini_Orca_Uncencored_Alpaca dataset, adjusted to demonstrate the relationship between instruction and input, with some particularly spicy prompts added to reduce the risk of rejections.
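For reference, the sketch below shows the standard Alpaca-style template for an instruction/input pair; the exact preamble wording is the common Alpaca template and is assumed here rather than quoted from the dataset.

```python
# Standard Alpaca-style prompt builder (assumed wording; the card does not
# reproduce the exact preamble used in the dataset).
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```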
Only the q_proj and k_proj modules were targeted and a low rank (8) was used, in the hope of confining the adjustments to the prompt format and alignment. This looks promising on paper: the per-step training loss averaged below 0.9 over the last third of the run.
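A minimal sketch of that adapter setup, using Hugging Face PEFT; only the rank and target modules come from this card, the remaining hyperparameters are placeholder assumptions.

```python
# LoRA adapter sketch: rank 8, q_proj/k_proj only (per the card).
# lora_alpha and lora_dropout are assumptions, not values from the card.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-13b")

lora_config = LoraConfig(
    r=8,                                  # low rank, as described above
    target_modules=["q_proj", "k_proj"],  # only query/key projections are adapted
    lora_alpha=16,                        # assumption: not stated in the card
    lora_dropout=0.05,                    # assumption: not stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```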
Reasoning stayed solid (for a 13b model) and I consider this a success. Performance is slightly worse than the OG Orca-2 in Ooba's chat mode, and comparable in Alpaca chat-instruct mode to the OG in ChatML chat-instruct mode.
May still reject some shocking prompts, but this can easily be overcome with an author's note or character card.
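A minimal inference sketch using the repo id from this card and the prompt builder above; the generation settings are assumptions, not recommendations from the card.

```python
# Inference sketch (reuses build_alpaca_prompt from the earlier snippet).
# Sampling parameters are assumptions, not tuned recommendations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "athirdpath/Orca-2-13b-Alpaca-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = build_alpaca_prompt(
    "Summarize the input in one sentence.",
    "The quick brown fox jumps over the lazy dog, then naps in the sun.",
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```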
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 61.63 |
| AI2 Reasoning Challenge (25-Shot) | 61.09 |
| HellaSwag (10-Shot) | 79.27 |
| MMLU (5-Shot) | 60.13 |
| TruthfulQA (0-shot) | 53.59 |
| Winogrande (5-shot) | 77.43 |
| GSM8k (5-shot) | 38.29 |