---
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- dpo
- phi-3
datasets:
- argilla/distilabel-capybara-dpo-7k-binarized
---

This model was created as supporting material for the book "Large Language Models Projects," published by Apress and written by Pere Martra.

To create it, we started from the Phi-3-Mini-4K-Instruct model and applied DPO (Direct Preference Optimization) alignment using the distilabel-capybara-dpo-7k-binarized dataset.

Phi-3 is a state-of-the-art model with 3.8 billion parameters that has outperformed larger 7-billion-parameter models. The DPO alignment produced good results: it modified the model's responses, bringing them closer in style to the chosen responses in the Capybara dataset.
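
For reference, a DPO run of this kind can be set up with Hugging Face's TRL library. The sketch below is only an outline under stated assumptions: the hyperparameters are illustrative placeholders rather than the values used for this model, the exact TRL arguments vary by version, and the book's notebooks remain the authoritative recipe.

```python
# Illustrative outline of a DPO run like the one described above.
# Hyperparameters are placeholders, not the values used for this model.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "microsoft/Phi-3-mini-4k-instruct"
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# The raw dataset stores 'chosen'/'rejected' as message lists; it has to be
# mapped into the flat prompt/chosen/rejected strings shown below before
# training (that preprocessing step is omitted here).
dataset = load_dataset("argilla/distilabel-capybara-dpo-7k-binarized", split="train")

config = DPOConfig(
    output_dir="phi-3-mini-dpo",
    beta=0.1,                       # strength of the preference constraint
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    learning_rate=5e-5,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,     # older TRL versions use tokenizer= instead
)
trainer.train()
```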

**Row in the Dataset:**

```python
{'prompt': '<|user|>\nAssist me in calculating 9319357631 plus 595. Numbers and symbols only, please.<|end|>\n<|assistant|>\n',
 'chosen': 'The sum of 9319357631 and 595 is 9319358226.<|end|>\n',
 'rejected': 'The result of adding 9319357631 and 595 is 9319363626.<|end|>\n'}
```

**Prompt:**
*3713841893836/4?
Limit your response to mathematical expressions and symbols.*

**Response from the base model:**

*To find the result of the division, we can simply divide the given number by 4:*

$$
\frac{3713841893836}{4} = 928460473459
$$

**Response from the fine-tuned model:**

*3713841893836 ÷ 4 = 928460473459*


If you want to see how the model was created, check out the [repository](https://github.com/peremartra/Large-Language-Model-Notebooks-Course), where the book's notebooks are kept up to date.
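
To try the aligned model yourself, it can be loaded like any other `transformers` causal LM. The snippet below is a minimal sketch: the model id is a placeholder for this repository's actual Hub name, and the prompt reuses the example shown above.

```python
# Minimal inference sketch; replace model_id with this repository's Hub id.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "<this-repo-id>"  # placeholder for the actual model repository

model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [{"role": "user",
             "content": "3713841893836/4? Limit your response to "
                        "mathematical expressions and symbols."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True)
print(pipe(prompt, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```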