Model save
- README.md +77 -0
- trainer_log.jsonl +19 -0
README.md
ADDED
@@ -0,0 +1,77 @@
+---
+license: apache-2.0
+library_name: peft
+tags:
+- trl
+- dpo
+- llama-factory
+- generated_from_trainer
+base_model: mistralai/Mistral-7B-Instruct-v0.3
+model-index:
+- name: Mistral-7B-Instruct-v0.3-ORPO
+  results: []
+---
+
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
+
+# Mistral-7B-Instruct-v0.3-ORPO
+
+This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.8734
+- Rewards/chosen: -0.0810
+- Rewards/rejected: -0.1017
+- Rewards/accuracies: 0.5720
+- Rewards/margins: 0.0208
+- Logps/rejected: -1.0175
+- Logps/chosen: -0.8098
+- Logits/rejected: -3.1455
+- Logits/chosen: -3.1171
+- Sft Loss: 0.8098
+- Odds Ratio Loss: 0.6360
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
+
+## Training procedure
+
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 5e-06
+- train_batch_size: 2
+- eval_batch_size: 2
+- seed: 42
+- gradient_accumulation_steps: 8
+- total_train_batch_size: 16
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: cosine
+- lr_scheduler_warmup_steps: 0.1
+- num_epochs: 3.0
+
+### Training results
+
+| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Sft Loss | Odds Ratio Loss |
+|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:---------------:|
+| 0.9464        | 0.8891 | 500  | 0.8919          | -0.0828        | -0.1031          | 0.5690             | 0.0202          | -1.0306        | -0.8281      | -3.1432         | -3.1149       | 0.8281   | 0.6374          |
+| 0.8737        | 1.7782 | 1000 | 0.8774          | -0.0814        | -0.1019          | 0.5760             | 0.0205          | -1.0186        | -0.8136      | -3.1431         | -3.1139       | 0.8136   | 0.6371          |
+| 0.8923        | 2.6673 | 1500 | 0.8734          | -0.0810        | -0.1017          | 0.5720             | 0.0208          | -1.0175        | -0.8098      | -3.1455         | -3.1171       | 0.8098   | 0.6360          |
+
+
+### Framework versions
+
+- PEFT 0.10.0
+- Transformers 4.40.1
+- Pytorch 2.3.0
+- Datasets 2.19.0
+- Tokenizers 0.19.1
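The hyperparameter list in the card corresponds to a fairly standard ORPO preference-tuning setup. As a hedged sketch only (the card does not include the actual LLaMA-Factory invocation, and the dataset id, LoRA settings, and `logging_steps` below are assumptions), the equivalent TRL configuration would look roughly like this:

```python
# A hedged sketch mapping the card's hyperparameters onto a TRL ORPO run.
# Not the exact LLaMA-Factory command used for this model: the dataset id
# and the LoRA rank/alpha are placeholders.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

BASE = "mistralai/Mistral-7B-Instruct-v0.3"

args = ORPOConfig(
    output_dir="Mistral-7B-Instruct-v0.3-ORPO",
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # 2 * 8 = total train batch size 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,               # the card's "warmup_steps: 0.1" reads as a ratio
    num_train_epochs=3.0,
    seed=42,
    logging_steps=10,               # assumed; matches the 10-step cadence in the log
)

# Placeholder preference dataset; the card reports "an unknown dataset".
# ORPO expects prompt/chosen/rejected columns.
train_dataset = load_dataset("your/preference-dataset", split="train")

tokenizer = AutoTokenizer.from_pretrained(BASE)
trainer = ORPOTrainer(
    model=BASE,                     # TRL loads the base model from the Hub
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16),  # assumed values
)
trainer.train()
```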
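Since the card declares `library_name: peft` with `base_model: mistralai/Mistral-7B-Instruct-v0.3`, the saved weights are an adapter rather than a full model. A minimal loading sketch, assuming the adapter is published under a repo id like the placeholder below:

```python
# Minimal inference sketch: attach the PEFT adapter to the base model.
# The ADAPTER repo id is a placeholder, not confirmed by the card.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "mistralai/Mistral-7B-Instruct-v0.3"
ADAPTER = "your-username/Mistral-7B-Instruct-v0.3-ORPO"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)
# Load the ORPO-trained adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, ADAPTER)
model.eval()

messages = [{"role": "user", "content": "Summarize ORPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```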
trainer_log.jsonl
CHANGED
@@ -151,3 +151,22 @@
 {"current_steps": 1490, "total_steps": 1686, "loss": 0.8283, "accuracy": 0.581250011920929, "learning_rate": 1.6490167940538343e-07, "epoch": 2.6494776617026004, "percentage": 88.37, "elapsed_time": "6:28:47", "remaining_time": "0:51:08"}
 {"current_steps": 1500, "total_steps": 1686, "loss": 0.8923, "accuracy": 0.5375000238418579, "learning_rate": 1.4866882516191339e-07, "epoch": 2.6672593909757722, "percentage": 88.97, "elapsed_time": "6:30:49", "remaining_time": "0:48:27"}
 {"current_steps": 1500, "total_steps": 1686, "eval_loss": 0.8734214901924133, "epoch": 2.6672593909757722, "percentage": 88.97, "elapsed_time": "6:34:47", "remaining_time": "0:48:57"}
+{"current_steps": 1510, "total_steps": 1686, "loss": 0.834, "accuracy": 0.612500011920929, "learning_rate": 1.3325243551706057e-07, "epoch": 2.685041120248944, "percentage": 89.56, "elapsed_time": "6:36:51", "remaining_time": "0:46:15"}
+{"current_steps": 1520, "total_steps": 1686, "loss": 0.842, "accuracy": 0.574999988079071, "learning_rate": 1.1865786358165737e-07, "epoch": 2.702822849522116, "percentage": 90.15, "elapsed_time": "6:38:29", "remaining_time": "0:43:31"}
+{"current_steps": 1530, "total_steps": 1686, "loss": 0.9062, "accuracy": 0.6312500238418579, "learning_rate": 1.0489017710262311e-07, "epoch": 2.720604578795288, "percentage": 90.75, "elapsed_time": "6:40:07", "remaining_time": "0:40:47"}
+{"current_steps": 1540, "total_steps": 1686, "loss": 0.8766, "accuracy": 0.5375000238418579, "learning_rate": 9.195415670326446e-08, "epoch": 2.73838630806846, "percentage": 91.34, "elapsed_time": "6:41:47", "remaining_time": "0:38:05"}
+{"current_steps": 1550, "total_steps": 1686, "loss": 0.875, "accuracy": 0.48750001192092896, "learning_rate": 7.985429422327384e-08, "epoch": 2.7561680373416317, "percentage": 91.93, "elapsed_time": "6:43:22", "remaining_time": "0:35:23"}
+{"current_steps": 1560, "total_steps": 1686, "loss": 0.8408, "accuracy": 0.6000000238418579, "learning_rate": 6.859479115900818e-08, "epoch": 2.773949766614803, "percentage": 92.53, "elapsed_time": "6:45:00", "remaining_time": "0:32:42"}
+{"current_steps": 1570, "total_steps": 1686, "loss": 0.8686, "accuracy": 0.5874999761581421, "learning_rate": 5.817955720457902e-08, "epoch": 2.791731495887975, "percentage": 93.12, "elapsed_time": "6:46:38", "remaining_time": "0:30:02"}
+{"current_steps": 1580, "total_steps": 1686, "loss": 0.887, "accuracy": 0.606249988079071, "learning_rate": 4.861220889427199e-08, "epoch": 2.809513225161147, "percentage": 93.71, "elapsed_time": "6:48:12", "remaining_time": "0:27:23"}
+{"current_steps": 1590, "total_steps": 1686, "loss": 0.8923, "accuracy": 0.59375, "learning_rate": 3.9896068346758074e-08, "epoch": 2.827294954434319, "percentage": 94.31, "elapsed_time": "6:49:59", "remaining_time": "0:24:45"}
+{"current_steps": 1600, "total_steps": 1686, "loss": 0.8845, "accuracy": 0.5, "learning_rate": 3.203416211153832e-08, "epoch": 2.8450766837074903, "percentage": 94.9, "elapsed_time": "6:52:10", "remaining_time": "0:22:09"}
+{"current_steps": 1610, "total_steps": 1686, "loss": 0.8883, "accuracy": 0.5562499761581421, "learning_rate": 2.5029220118019393e-08, "epoch": 2.8628584129806622, "percentage": 95.49, "elapsed_time": "6:54:15", "remaining_time": "0:19:33"}
+{"current_steps": 1620, "total_steps": 1686, "loss": 0.8417, "accuracy": 0.643750011920929, "learning_rate": 1.8883674727586122e-08, "epoch": 2.880640142253834, "percentage": 96.09, "elapsed_time": "6:56:02", "remaining_time": "0:16:56"}
+{"current_steps": 1630, "total_steps": 1686, "loss": 0.8322, "accuracy": 0.5062500238418579, "learning_rate": 1.3599659889000639e-08, "epoch": 2.898421871527006, "percentage": 96.68, "elapsed_time": "6:57:42", "remaining_time": "0:14:21"}
+{"current_steps": 1640, "total_steps": 1686, "loss": 0.8469, "accuracy": 0.5, "learning_rate": 9.179010397421528e-09, "epoch": 2.916203600800178, "percentage": 97.27, "elapsed_time": "6:59:26", "remaining_time": "0:11:45"}
+{"current_steps": 1650, "total_steps": 1686, "loss": 0.8037, "accuracy": 0.543749988079071, "learning_rate": 5.623261257296509e-09, "epoch": 2.93398533007335, "percentage": 97.86, "elapsed_time": "7:01:03", "remaining_time": "0:09:11"}
+{"current_steps": 1660, "total_steps": 1686, "loss": 0.8445, "accuracy": 0.5625, "learning_rate": 2.933647149357122e-09, "epoch": 2.9517670593465217, "percentage": 98.46, "elapsed_time": "7:02:41", "remaining_time": "0:06:37"}
+{"current_steps": 1670, "total_steps": 1686, "loss": 0.8929, "accuracy": 0.5, "learning_rate": 1.1111020018930717e-09, "epoch": 2.969548788619693, "percentage": 99.05, "elapsed_time": "7:04:20", "remaining_time": "0:04:03"}
+{"current_steps": 1680, "total_steps": 1686, "loss": 0.8399, "accuracy": 0.625, "learning_rate": 1.5625866646051813e-10, "epoch": 2.987330517892865, "percentage": 99.64, "elapsed_time": "7:05:57", "remaining_time": "0:01:31"}
+{"current_steps": 1686, "total_steps": 1686, "epoch": 2.997999555456768, "percentage": 100.0, "elapsed_time": "7:06:58", "remaining_time": "0:00:00"}
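Each line of `trainer_log.jsonl` is a standalone JSON object: training entries carry `loss`, `accuracy`, and `learning_rate`, while evaluation entries carry `eval_loss` instead. A short sketch for pulling the curves out of the log (local file path assumed):

```python
# Extract training and evaluation curves from trainer_log.jsonl.
# Relies only on the keys visible in the log above.
import json

train_points, eval_points = [], []
with open("trainer_log.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        if "eval_loss" in rec:
            # Evaluation entries log eval_loss instead of a training loss.
            eval_points.append((rec["current_steps"], rec["eval_loss"]))
        elif "loss" in rec:
            train_points.append((rec["current_steps"], rec["loss"]))

print(f"{len(train_points)} training points, {len(eval_points)} eval points")
for step, loss in eval_points:
    print(f"eval @ step {step}: loss={loss:.4f}")
```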