
This model is a fine-tuned version of microsoft/Orca-2-13b on a subset of the Vezora/Mini_Orca_Uncencored_Alpaca dataset, adjusted to demonstrate the relationship between instruction and input, with some particularly spicy prompts added to reduce the risk of rejections.
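The instruction/input relationship mentioned above follows the standard Alpaca prompt layout. A minimal sketch of that template, assuming the dataset uses the common Alpaca field names and preamble (the function name is illustrative):

```python
# Standard Alpaca prompt template. Assumption: the dataset follows the
# widely used Alpaca format; the preamble wording is the common default.
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Without an input, the "### Input:" section is dropped entirely.
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```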

Only the q_proj and k_proj modules were targeted and a low rank (8) was used, in hopes of confining the adjustments to the prompt format and alignment. This looks promising on paper, with per-step training loss averaging below 0.9 over the last third of the run.
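A LoRA configuration matching the description above might look like the following sketch using the `peft` library; the rank and target modules come from the text, while the alpha and dropout values are assumptions:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Rank-8 adapters on the attention query/key projections only,
# as described above. alpha and dropout values are assumptions.
lora_config = LoraConfig(
    r=8,
    target_modules=["q_proj", "k_proj"],
    lora_alpha=16,      # assumed scaling factor
    lora_dropout=0.05,  # assumed dropout
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained("microsoft/Orca-2-13b")
# Wraps the base model so that only the LoRA adapter weights are trainable.
model = get_peft_model(base_model, lora_config)
```

Restricting the adapters to q_proj and k_proj leaves the value/output projections and MLP layers untouched, which is the rationale given above for keeping the base model's reasoning intact.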

Reasoning stayed solid (for a 13b model) and I consider this a success. Performance is slightly worse than the original Orca-2 in Ooba's chat mode, and comparable in Alpaca chat-instruct mode to the original in ChatML chat-instruct mode.

It may still reject some shocking prompts, but this can easily be overcome with an author's note or character card.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                                | Value |
|---------------------------------------|-------|
| Avg.                                  | 61.63 |
| AI2 Reasoning Challenge (25-Shot)     | 61.09 |
| HellaSwag (10-Shot)                   | 79.27 |
| MMLU (5-Shot)                         | 60.13 |
| TruthfulQA (0-Shot)                   | 53.59 |
| Winogrande (5-Shot)                   | 77.43 |
| GSM8k (5-Shot)                        | 38.29 |

Model tree for athirdpath/Orca-2-13b-Alpaca-Uncensored

