chrlu committed on
Commit
60a5aab
1 Parent(s): 4895620

Model save

README.md ADDED
@@ -0,0 +1,93 @@
---
license: apache-2.0
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: zephyr-7b-dpo-qlora
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# zephyr-7b-dpo-qlora

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5019
- Rewards/chosen: -2.1484
- Rewards/rejected: -3.1540
- Rewards/accuracies: 0.7590
- Rewards/margins: 1.0056
- Logps/rejected: -556.6639
- Logps/chosen: -480.1491
- Logits/rejected: -1.2933
- Logits/chosen: -1.3829
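As a sanity check on the numbers above: in DPO, the reported margin is simply the chosen reward minus the rejected reward, and the evaluation figures are consistent with that.

```python
# Sanity check: Rewards/margins = Rewards/chosen - Rewards/rejected.
# Values are taken from the evaluation results above.
rewards_chosen = -2.1484
rewards_rejected = -3.1540

margin = rewards_chosen - rewards_rejected
print(round(margin, 4))  # 1.0056, matching Rewards/margins above
```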

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
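The total batch sizes follow from the per-device sizes, the device count, and gradient accumulation; a quick check of the arithmetic in the list above:

```python
# Effective batch sizes implied by the hyperparameters above.
train_batch_size = 4          # per device
eval_batch_size = 8           # per device
num_devices = 2
gradient_accumulation_steps = 4

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices  # no accumulation at eval time

print(total_train_batch_size)  # 32
print(total_eval_batch_size)   # 16
```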

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6635 | 0.0523 | 100 | 0.6640 | 0.0129 | -0.0631 | 0.6830 | 0.0761 | -247.5831 | -264.0175 | -2.0469 | -2.1424 |
| 0.6119 | 0.1047 | 200 | 0.6207 | -0.5556 | -0.8212 | 0.6790 | 0.2657 | -323.3911 | -320.8676 | -1.9524 | -2.0449 |
| 0.5874 | 0.1570 | 300 | 0.5849 | -0.4240 | -0.8044 | 0.7000 | 0.3804 | -321.7128 | -307.7115 | -1.9609 | -2.0494 |
| 0.5608 | 0.2094 | 400 | 0.5607 | -1.1817 | -1.7752 | 0.7290 | 0.5935 | -418.7894 | -383.4811 | -1.6969 | -1.7823 |
| 0.5287 | 0.2617 | 500 | 0.5434 | -1.7248 | -2.4550 | 0.7250 | 0.7303 | -486.7726 | -437.7878 | -1.5394 | -1.6284 |
| 0.5504 | 0.3141 | 600 | 0.5278 | -1.3541 | -2.1302 | 0.7370 | 0.7761 | -454.2872 | -400.7156 | -1.4439 | -1.5287 |
| 0.5243 | 0.3664 | 700 | 0.5278 | -0.9934 | -1.7415 | 0.7420 | 0.7481 | -415.4179 | -364.6462 | -1.4888 | -1.5754 |
| 0.5346 | 0.4187 | 800 | 0.5285 | -1.0509 | -1.8191 | 0.7360 | 0.7681 | -423.1764 | -370.4044 | -1.4861 | -1.5718 |
| 0.5072 | 0.4711 | 900 | 0.5197 | -1.6324 | -2.5736 | 0.7300 | 0.9412 | -498.6239 | -428.5474 | -1.3651 | -1.4531 |
| 0.5023 | 0.5234 | 1000 | 0.5158 | -1.6927 | -2.6755 | 0.7460 | 0.9828 | -508.8179 | -434.5808 | -1.2853 | -1.3779 |
| 0.4954 | 0.5758 | 1100 | 0.5126 | -1.4605 | -2.3370 | 0.7480 | 0.8765 | -474.9688 | -411.3603 | -1.3921 | -1.4843 |
| 0.4983 | 0.6281 | 1200 | 0.5105 | -2.0566 | -3.0678 | 0.7450 | 1.0112 | -548.0505 | -470.9687 | -1.1942 | -1.2848 |
| 0.4774 | 0.6805 | 1300 | 0.5093 | -1.9802 | -3.0112 | 0.7510 | 1.0311 | -542.3931 | -463.3254 | -1.2574 | -1.3491 |
| 0.4516 | 0.7328 | 1400 | 0.5058 | -2.1539 | -3.2003 | 0.7530 | 1.0464 | -561.2969 | -480.7002 | -1.2592 | -1.3500 |
| 0.4758 | 0.7851 | 1500 | 0.5018 | -2.2342 | -3.2427 | 0.7550 | 1.0085 | -565.5339 | -488.7257 | -1.2803 | -1.3710 |
| 0.4967 | 0.8375 | 1600 | 0.5019 | -2.1690 | -3.1744 | 0.7590 | 1.0054 | -558.7111 | -482.2090 | -1.2939 | -1.3837 |
| 0.4769 | 0.8898 | 1700 | 0.5018 | -2.1431 | -3.1460 | 0.7600 | 1.0029 | -555.8691 | -479.6245 | -1.2936 | -1.3834 |
| 0.4843 | 0.9422 | 1800 | 0.5019 | -2.1475 | -3.1534 | 0.7580 | 1.0059 | -556.6094 | -480.0620 | -1.2932 | -1.3829 |
| 0.5048 | 0.9945 | 1900 | 0.5019 | -2.1484 | -3.1540 | 0.7590 | 1.0056 | -556.6639 | -480.1491 | -1.2933 | -1.3829 |
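For reference, the per-pair DPO objective behind these columns is -log σ(margin), where the margin is Rewards/chosen minus Rewards/rejected (each reward being β times a log-probability ratio). A minimal sketch; note that the loss averaged over the eval set (0.5019) need not equal the loss at the mean margin, since -log σ is convex:

```python
import math

def dpo_loss(margin: float) -> float:
    """DPO loss for one preference pair: -log(sigmoid(margin)),
    where margin = reward_chosen - reward_rejected."""
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Loss evaluated at the final mean eval margin from the table above:
# roughly 0.31, below the reported mean loss of 0.5019, which is
# consistent with the convexity of -log(sigmoid).
print(dpo_loss(1.0056))
```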

### Framework versions

- PEFT 0.7.1
- Transformers 4.40.1
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.19.1
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c261af60ad930c46ba74ce7188e06005cdca85c96587552ee5aac80186792a86
+oid sha256:9597067b6c8e6cafe8e5c2455e640771e5d894bbad8338b27cce8202185a552f
 size 671150064
all_results.json ADDED
@@ -0,0 +1,9 @@
{
    "epoch": 0.9997382884061764,
    "total_flos": 0.0,
    "train_loss": 0.5325085540092428,
    "train_runtime": 36177.5726,
    "train_samples": 61134,
    "train_samples_per_second": 1.69,
    "train_steps_per_second": 0.053
}
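The throughput figures here are mutually consistent: samples per second is train_samples / train_runtime, and steps per second follows from the effective train batch size of 32 reported in the README. A quick check under that assumption:

```python
# Throughput figures from all_results.json above.
train_samples = 61134
train_runtime = 36177.5726   # seconds
total_train_batch_size = 32  # from the README hyperparameters

samples_per_second = train_samples / train_runtime
steps_per_second = samples_per_second / total_train_batch_size

print(round(samples_per_second, 2))  # 1.69
print(round(steps_per_second, 3))    # 0.053
```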
runs/Apr27_00-45-40_d578b23f42e9/events.out.tfevents.1714175189.d578b23f42e9.165456.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bb9437d28b5fc6cdb6164b3795ff3797e9f51ef834ec75e7d148c839ae1d3832
-size 150650
+oid sha256:32141dcf1d1d97c0f5cef6c4a0b36ef5e1e2fafb311f7b2164a347d5faffe4df
+size 151692
train_results.json ADDED
@@ -0,0 +1,9 @@
{
    "epoch": 0.9997382884061764,
    "total_flos": 0.0,
    "train_loss": 0.5325085540092428,
    "train_runtime": 36177.5726,
    "train_samples": 61134,
    "train_samples_per_second": 1.69,
    "train_steps_per_second": 0.053
}
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff