# Llama3.1-Gutenberg-Doppel-70B
mlabonne/Hermes-3-Llama-3.1-70B-lorablated fine-tuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
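The card ships no usage snippet, so here is a minimal sketch of running one of the GGUF quantizations in this repository with llama-cpp-python; the filename below is hypothetical, so substitute whichever quantization file you downloaded.

```python
# Minimal sketch: loading a GGUF quantization of this model with
# llama-cpp-python. The filename is hypothetical; use the actual file
# present in the repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama3.1-Gutenberg-Doppel-70B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=8192,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to GPU when built with GPU support
)

out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write the opening paragraph of a gothic short story."}
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```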
## Method
ORPO-tuned on 2x H100 GPUs for 3 epochs.

Thanks to Schneewolf Labs for providing the compute.
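The card does not include the training script, so the following is only a sketch of what an ORPO run over these two datasets might look like with Hugging Face TRL. Every hyperparameter except the epoch count is an illustrative assumption, not the authors' recipe.

```python
# Sketch of ORPO tuning with TRL. Learning rate, beta, and batch sizes
# are assumptions; the card only states 2x H100 and 3 epochs.
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "mlabonne/Hermes-3-Llama-3.1-70B-lorablated"
model = AutoModelForCausalLM.from_pretrained(
    base, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base)

# Assumes both datasets expose the prompt/chosen/rejected columns ORPO expects.
cols = ["prompt", "chosen", "rejected"]
ds = concatenate_datasets([
    load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train").select_columns(cols),
    load_dataset("nbeerbower/gutenberg2-dpo", split="train").select_columns(cols),
])

args = ORPOConfig(
    output_dir="llama3.1-gutenberg-doppel-70b",
    num_train_epochs=3,            # stated in the card
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,            # assumed
    beta=0.1,                      # ORPO odds-ratio weight (assumed)
    bf16=True,
)

# Older TRL versions take tokenizer= instead of processing_class=.
trainer = ORPOTrainer(model=model, args=args, train_dataset=ds, processing_class=tokenizer)
trainer.train()
```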
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 35.68 |
| IFEval (0-Shot) | 70.92 |
| BBH (3-Shot) | 52.56 |
| MATH Lvl 5 (4-Shot) | 13.75 |
| GPQA (0-Shot) | 12.64 |
| MuSR (0-Shot) | 22.68 |
| MMLU-PRO (5-Shot) | 41.52 |
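These scores come from the Open LLM Leaderboard, which runs EleutherAI's lm-evaluation-harness. As a hedged sketch, a single benchmark can be re-run locally like this; the task name follows the leaderboard's v2 naming, and the full-precision repo id is an assumption inferred from this GGUF repo's name.

```python
# Sketch: re-running one leaderboard benchmark locally with
# lm-evaluation-harness. "leaderboard_ifeval" follows the Open LLM
# Leaderboard v2 task naming; the repo id below is assumed.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=nbeerbower/Llama3.1-Gutenberg-Doppel-70B,dtype=bfloat16",
    tasks=["leaderboard_ifeval"],
    batch_size=1,
)
print(results["results"])
```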