plaguss (HF staff) committed
Commit baa14c8
1 Parent(s): f8ef65e

Update model and dataset references

Files changed (1)
  1. README.md +39 -0
README.md CHANGED
- dpo
- rlaif
- rlhf
- merge
- mergekit
---
# ⚗️ distilabeled Marcoro14 7B Slerp
 
 
</p>

## Introduction

This model is a new DPO fine-tune of [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) on our new open dataset [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs). You can find more information about the "distilabeled" dataset in the [argilla/distilabeled-Hermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-Hermes-2.5-Mistral-7B/blob/main/README.md#introduction) model card, and learn more about [distilabel](https://github.com/argilla-io/distilabel) on GitHub.

## Training details

As we did with [Notus](https://argilla.io/blog/notus7b/), we wanted a reproducible recipe to test the impact of data quality.

We're lucky to have so many amazing folks in the open community contributing reproducible, easy-to-use training scripts and recipes. This time, [Maxime Labonne](https://twitter.com/maximelabonne) shared a [Colab](https://colab.research.google.com/drive/15iFBr1xWgztXvhrj5I9fBv20c7CFOPBE?usp=sharing) to fine-tune OpenHermes with DPO on Intel's original dataset, which was exactly what we needed. We just updated the base model to [mlabonne/Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) and applied the same dataset recipe we used for [argilla/distilabeled-Hermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-Hermes-2.5-Mistral-7B/blob/main/README.md#introduction):

```python
from datasets import load_dataset

# Instead of the original dataset:
# dataset = load_dataset("Intel/orca_dpo_pairs", split="train")

# we load the distilabel-annotated version
dataset = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")

# and keep only high-quality pairs: no ties, a chosen score of at least 8,
# and no prompts that appear in the GSM8K train split
dataset = dataset.filter(
    lambda r:
        r["status"] != "tie" and
        r["chosen_score"] >= 8 and
        not r["in_gsm8k_train"]
)
```
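
The filter can be sanity-checked without downloading the dataset. In the sketch below the rows are made up for illustration; only the three field names (`status`, `chosen_score`, `in_gsm8k_train`) come from the actual dataset:

```python
# Toy rows exercising each branch of the filter predicate (values are made up).
rows = [
    {"status": "selected", "chosen_score": 9, "in_gsm8k_train": False},  # kept
    {"status": "tie",      "chosen_score": 9, "in_gsm8k_train": False},  # dropped: tie
    {"status": "selected", "chosen_score": 5, "in_gsm8k_train": False},  # dropped: low score
    {"status": "selected", "chosen_score": 8, "in_gsm8k_train": True},   # dropped: GSM8K overlap
]

def keep(r):
    # Same predicate as the `dataset.filter` call above.
    return r["status"] != "tie" and r["chosen_score"] >= 8 and not r["in_gsm8k_train"]

kept = [r for r in rows if keep(r)]
print(len(kept))  # 1
```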

## Benchmark results

For benchmarking we used the well-known "Nous" (or "Teknium") benchmark. You can find an overview below, including our first experiment with a less ambitious dataset filter (removing only ties and requiring `score>5`).
 
|[Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) | 44.66 | 76.24 | 64.15 | 45.64 | 57.67 |
|[argilla/distilabeled-Hermes-2.5-Mistral-7B](https://huggingface.co/argilla/distilabeled-Hermes-2.5-Mistral-7B) | 44.64 | 73.35 | 55.96 | 42.21 | 54.04 |
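
If the last column is read as the plain average of the four benchmark scores, as is usual for this benchmark suite, the numbers check out:

```python
# Verify that the final column equals the mean of the four benchmark scores,
# rounded to two decimals (model names and scores taken from the table above).
rows = {
    "Marcoro14-7B-slerp": [44.66, 76.24, 64.15, 45.64],
    "argilla/distilabeled-Hermes-2.5-Mistral-7B": [44.64, 73.35, 55.96, 42.21],
}

averages = {name: round(sum(scores) / len(scores), 2) for name, scores in rows.items()}
print(averages)  # {'Marcoro14-7B-slerp': 57.67, 'argilla/distilabeled-Hermes-2.5-Mistral-7B': 54.04}
```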

### Training Hardware

We used one A100 80GB GPU on RunPod for less than an hour.

## Acknowledgements

We'd like to thank the amazing open community, and in particular:

* The Intel team, for publishing a great open dataset and showing how well it worked in the first place.
* Teknium and NousResearch, for their awesome work and models.
* Maxime, for sharing such great resources.