---
library_name: peft
base_model: Qwen/Qwen1.5-0.5B-Chat
language:
- en
pipeline_tag: text-generation
tags:
- chat
widget:
- text: "What is the sum of the first 10 positive integers?"
---
# Qwen1.5-0.5B-Chat with EPFL DPO fine-tuning
Qwen1.5-0.5B-Chat fine-tuned with DPO on open-ended and multiple-choice questions from several EPFL courses and on the Orca Math dataset, which consists of ~200K grade-school math word problems.
## Model Details
### Model Description
The model was developed during the EPFL course Modern Natural Language Processing (CS-552).
The goal was to fine-tune the base model (Qwen/Qwen1.5-0.5B-Chat) so that it accurately
answers open-ended and multiple-choice questions from various EPFL courses and from the Orca Math dataset.
- **Developed by:** Emma Lise Boehly, Ahmed Aziz Ben Haj Hmida and Jan Kokla
- **Finetuned from model:** Qwen/Qwen1.5-0.5B-Chat
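
## How to Get Started with the Model

Assuming the adapter was trained with PEFT (as listed under Training Details) and published to the Hub, it can be loaded on top of the base model roughly as follows. The repo id `your-username/qwen1.5-0.5b-chat-epfl-dpo` is a placeholder, not the actual model id.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Placeholder repo id -- substitute the actual adapter repository.
adapter_id = "your-username/qwen1.5-0.5b-chat-epfl-dpo"

# Loads the base model recorded in the adapter config
# (Qwen/Qwen1.5-0.5B-Chat) and applies the adapter weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B-Chat")

messages = [
    {"role": "user", "content": "What is the sum of the first 10 positive integers?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```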
## Training Details
### Training Data
- Hugging Face dataset: microsoft/orca-math-word-problems-200k
- The EPFL dataset is not publicly available.
### Training Procedure
#### Training Hyperparameters
- **Training regime:** cDPO with bf16 mixed precision, $\beta = 0.2$, learning rate $3 \times 10^{-6}$, and label smoothing of $0.2$
- PEFT 0.10.0
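
In TRL, cDPO (conservative DPO) is obtained by running DPO with non-zero label smoothing, so the hyperparameters above map onto a trainer configuration roughly like the sketch below. This is not the actual training script: the dataset variable and the LoRA settings are illustrative assumptions, and the exact API depends on the TRL version (recent versions use `DPOConfig`).

```python
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B-Chat")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B-Chat")

# Hyperparameters from the model card; everything else is an illustrative default.
training_args = DPOConfig(
    output_dir="qwen1.5-0.5b-chat-epfl-dpo",
    beta=0.2,             # DPO temperature (beta = 0.2)
    label_smoothing=0.2,  # > 0 turns plain DPO into cDPO
    learning_rate=3e-6,   # lr = 3e-6
    bf16=True,            # bf16 mixed precision
)

# LoRA settings are assumptions; the card only states that PEFT 0.10.0 was used.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    # A preference dataset with prompt / chosen / rejected columns, e.g. built
    # from microsoft/orca-math-word-problems-200k and the EPFL questions.
    train_dataset=preference_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```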