Update README.md
README.md CHANGED
@@ -55,7 +55,7 @@ Finetuned and aligned with **SFT** and **DPO**
 
 ### Training Dataset:
 
-SauerkrautLM-
+SauerkrautLM-Mixtral-8x7B was trained with a mix of German data augmentation and translated data.
 **SFT** with the dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca), then aligned through **DPO** with our **new German SauerkrautLM-DPO dataset**, which uses parts of the SFT SauerkrautLM dataset
 as chosen answers and [Sauerkraut-7b-HerO](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-HerO) outputs as rejected answers, augmented with additional parts of the Ultrafeedback dataset [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) and [argilla/distilabel-math-preference-dpo](https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo).
 We found that a simple translation of training data can lead to unnatural German phrasing.
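
For orientation, here is a minimal Python sketch of how the publicly referenced components above could be loaded, and what a DPO preference record of the kind described (a SauerkrautLM answer as chosen, a Sauerkraut-7b-HerO answer as rejected) typically looks like. This is not the authors' actual pipeline: the German SauerkrautLM-DPO set itself is not linked above, the split names follow the public dataset cards, and the record field names and example prompt are illustrative assumptions based on common DPO tooling such as trl's `DPOTrainer`.

```python
from datasets import load_dataset

# Publicly referenced components only; the German SauerkrautLM-DPO dataset
# itself is not linked above and therefore cannot be loaded here.
sft_data = load_dataset("Open-Orca/SlimOrca", split="train")
ultrafeedback = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
math_prefs = load_dataset("argilla/distilabel-math-preference-dpo", split="train")

# Shape of a single DPO preference record as consumed by common DPO trainers:
# per the description above, the German SFT answer serves as "chosen" and the
# Sauerkraut-7b-HerO answer as "rejected". Field names and the prompt are
# illustrative assumptions, not the actual (non-public) dataset schema.
preference_example = {
    "prompt": "Erkläre den Unterschied zwischen SFT und DPO.",
    "chosen": "...",    # answer kept from the SFT SauerkrautLM dataset
    "rejected": "...",  # answer generated by Sauerkraut-7b-HerO
}
```

Contrasting curated German answers against a weaker model's generations gives DPO a direct preference signal for natural German phrasing, which matches the motivation stated in the last line above.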