# ⚗️ distilabeled Marcoro14 7B Slerp
## Introduction
This model is a new DPO fine-tune of the mlabonne/Marcoro14-7B-slerp model on our new open dataset argilla/distilabel-intel-orca-dpo-pairs. You can find more information about the "distilabeled" dataset in the argilla/distilabeled-Hermes-2.5-Mistral-7B repo, and visit distilabel.
The difference between this model and argilla/distilabeled-Marcoro14-7B-slerp is that this model has been fine-tuned for a whole epoch instead of 200 steps, so it has seen the whole dataset.
## Training details
As we did with Notus, we wanted a reproducible recipe to test the impact of data quality.
And we're lucky to have so many amazing folks in the open community contributing reproducible, easy-to-use training scripts and recipes. This time, Maxime Labonne shared a Colab to fine-tune OpenHermes with DPO and Intel's original dataset, perfect! We just updated the base model to mlabonne/Marcoro14-7B-slerp and applied the same dataset recipe we used for argilla/distilabeled-Hermes-2.5-Mistral-7B:
```python
from datasets import load_dataset

# Instead of this:
# dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
# we did this
dataset = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")

dataset = dataset.filter(
    lambda r: r["status"] != "tie"
    and r["chosen_score"] >= 8
    and not r["in_gsm8k_train"]
)
```
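To see what the filter above does without downloading the dataset, here is a minimal sketch of the same predicate applied to hand-made rows. The column names (`status`, `chosen_score`, `in_gsm8k_train`) come from the snippet above; the sample values, other than the `"tie"` status, are illustrative assumptions.

```python
def keep(r):
    # Keep a pair only if the ratings did not tie, the chosen response
    # scored at least 8, and the prompt is not in the GSM8K train split.
    return r["status"] != "tie" and r["chosen_score"] >= 8 and not r["in_gsm8k_train"]

# Hypothetical rows exercising each branch of the predicate:
rows = [
    {"status": "tie",     "chosen_score": 9.0, "in_gsm8k_train": False},  # dropped: tie
    {"status": "swapped", "chosen_score": 7.0, "in_gsm8k_train": False},  # dropped: low score
    {"status": "swapped", "chosen_score": 9.0, "in_gsm8k_train": True},   # dropped: contamination
    {"status": "swapped", "chosen_score": 9.0, "in_gsm8k_train": False},  # kept
]

kept = [r for r in rows if keep(r)]
print(len(kept))  # 1
```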
## Benchmark results
For benchmarking we used the famous "Nous" or "Teknium" benchmark. You can find an overview below, including our first experiment with a less ambitious dataset filtering (removing ties and keeping chosen scores > 5).
For running the benchmark we used another awesome contribution from Maxime: LLM AutoEval, check it out!
| Model | AGIEval | GPT4ALL | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|---|
| argilla/distilabeled-Marcoro14-7B-slerp-full | 45.17 | 76.59 | 64.68 | 48.15 | 58.65 |
| argilla/distilabeled-Marcoro14-7B-slerp | 45.40 | 76.47 | 65.46 | 47.19 | 58.63 |
| Marcoro14-7B-slerp | 44.66 | 76.24 | 64.15 | 45.64 | 57.67 |
| argilla/distilabeled-Hermes-2.5-Mistral-7B | 44.64 | 73.35 | 55.96 | 42.21 | 54.04 |
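The Average column is just the unweighted mean of the four benchmark scores; a quick sanity check for the top row:

```python
# Scores for argilla/distilabeled-Marcoro14-7B-slerp-full, from the table above.
scores = {"AGIEval": 45.17, "GPT4ALL": 76.59, "TruthfulQA": 64.68, "Bigbench": 48.15}

average = sum(scores.values()) / len(scores)
print(f"{average:.4f}")  # agrees with the reported 58.65 once rounded
```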
## Training Hardware
We used 1 x A100 80GB on RunPod for less than 2 hours.
## Acknowledgements
We'd like to thank the amazing open community and in particular:
- The Intel team for publishing a great open dataset and showing how well it worked in the first place.
- Teknium and NousResearch for their awesome work and models.
- Maxime for sharing such great resources.
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 73.40 |
| AI2 Reasoning Challenge (25-Shot) | 70.65 |
| HellaSwag (10-Shot) | 87.55 |
| MMLU (5-Shot) | 65.33 |
| TruthfulQA (0-shot) | 64.21 |
| Winogrande (5-shot) | 82.00 |
| GSM8k (5-shot) | 70.66 |