lukeleeai committed
Commit 7c12919 · verified · 1 Parent(s): 49f7afc

End of training
README.md CHANGED
@@ -17,7 +17,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the openwebtext dataset.
 It achieves the following results on the evaluation set:
- - Loss: 1.0786
+ - Loss: 4.9832
 
 ## Model description
 
@@ -41,10 +41,10 @@ The following hyperparameters were used during training:
 - eval_batch_size: 16
 - seed: 0
 - distributed_type: multi-GPU
- - num_devices: 6
+ - num_devices: 3
 - gradient_accumulation_steps: 2
- - total_train_batch_size: 96
+ - total_train_batch_size: 48
- - total_eval_batch_size: 96
+ - total_eval_batch_size: 48
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - training_steps: 5000
@@ -53,40 +53,50 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
- | 1.2733 | 0.05 | 50 | 1.2400 |
- | 1.0554 | 0.1 | 100 | 1.0549 |
- | 0.962 | 0.15 | 150 | 0.9682 |
- | 0.9117 | 0.19 | 200 | 0.9159 |
- | 0.8765 | 0.24 | 250 | 0.8785 |
- | 0.8521 | 0.29 | 300 | 0.8509 |
- | 0.8394 | 0.34 | 350 | 0.8275 |
- | 0.8178 | 0.39 | 400 | 0.8106 |
- | 0.801 | 0.44 | 450 | 0.8012 |
- | 0.7905 | 0.48 | 500 | 0.7912 |
- | 0.7865 | 0.53 | 550 | 0.7825 |
- | 0.7612 | 0.58 | 600 | 0.7748 |
- | 0.7636 | 0.63 | 650 | 0.7672 |
- | 0.7524 | 0.68 | 700 | 0.7617 |
- | 0.7512 | 0.73 | 750 | 0.7564 |
- | 0.7572 | 0.78 | 800 | 0.7515 |
- | 0.7536 | 0.82 | 850 | 0.7541 |
- | 0.7543 | 0.87 | 900 | 0.7501 |
- | 0.7435 | 0.92 | 950 | 0.7466 |
- | 0.7465 | 0.97 | 1000 | 0.7435 |
- | 0.7282 | 1.02 | 1050 | 0.7409 |
- | 0.7261 | 1.07 | 1100 | 0.7376 |
- | 0.7199 | 1.11 | 1150 | 0.7358 |
- | 0.7218 | 1.16 | 1200 | 0.7339 |
- | 0.7411 | 1.21 | 1250 | 0.7472 |
- | 0.741 | 1.26 | 1300 | 0.7451 |
- | 0.7326 | 1.31 | 1350 | 0.7421 |
- | 0.7359 | 1.36 | 1400 | 0.7402 |
- | 0.7278 | 1.41 | 1450 | 0.7385 |
- | 0.7235 | 1.45 | 1500 | 0.7365 |
- | 0.7138 | 1.5 | 1550 | 0.7353 |
- | 0.731 | 1.55 | 1600 | 0.7341 |
- | 0.7774 | 1.6 | 1650 | 0.7806 |
- | 0.7672 | 1.65 | 1700 | 0.7738 |
+ | 1.2964 | 0.02 | 50 | 1.2517 |
+ | 1.1086 | 0.05 | 100 | 1.0714 |
+ | 0.9727 | 0.07 | 150 | 0.9857 |
+ | 0.9326 | 0.1 | 200 | 0.9357 |
+ | 0.8944 | 0.12 | 250 | 0.8988 |
+ | 0.872 | 0.15 | 300 | 0.8700 |
+ | 0.8523 | 0.17 | 350 | 0.8516 |
+ | 0.8369 | 0.19 | 400 | 0.8358 |
+ | 0.8372 | 0.22 | 450 | 0.8226 |
+ | 0.8221 | 0.24 | 500 | 0.8116 |
+ | 0.8093 | 0.27 | 550 | 0.8020 |
+ | 0.804 | 0.29 | 600 | 0.7937 |
+ | 0.8111 | 0.32 | 650 | 0.7935 |
+ | 0.7949 | 0.34 | 700 | 0.7872 |
+ | 0.7947 | 0.36 | 750 | 0.7815 |
+ | 0.8045 | 0.39 | 800 | 0.7771 |
+ | 0.7706 | 0.41 | 850 | 0.7724 |
+ | 0.7669 | 0.44 | 900 | 0.7683 |
+ | 0.7691 | 0.46 | 950 | 0.7825 |
+ | 0.7737 | 0.48 | 1000 | 0.7779 |
+ | 0.7595 | 0.51 | 1050 | 0.7748 |
+ | 0.7672 | 0.53 | 1100 | 0.7709 |
+ | 0.7725 | 0.56 | 1150 | 0.7681 |
+ | 0.7551 | 0.58 | 1200 | 0.7658 |
+ | 0.8035 | 0.61 | 1250 | 0.8159 |
+ | 0.804 | 0.63 | 1300 | 0.8068 |
+ | 0.8074 | 0.65 | 1350 | 0.8016 |
+ | 0.7801 | 0.68 | 1400 | 0.7982 |
+ | 0.7842 | 0.7 | 1450 | 0.7951 |
+ | 0.7938 | 0.73 | 1500 | 0.7907 |
+ | 0.8625 | 0.75 | 1550 | 0.8568 |
+ | 0.8467 | 0.78 | 1600 | 0.8443 |
+ | 0.8216 | 0.8 | 1650 | 0.8379 |
+ | 0.8334 | 0.82 | 1700 | 0.8332 |
+ | 0.8287 | 0.85 | 1750 | 0.8292 |
+ | 0.8251 | 0.87 | 1800 | 0.8250 |
+ | 0.8969 | 0.9 | 1850 | 0.8790 |
+ | 0.8619 | 0.92 | 1900 | 0.8696 |
+ | 0.8566 | 0.95 | 1950 | 0.8645 |
+ | 0.8633 | 0.97 | 2000 | 0.8599 |
+ | 0.8622 | 0.99 | 2050 | 0.8558 |
+ | 0.8336 | 1.02 | 2100 | 0.8520 |
+ | 0.918 | 1.04 | 2150 | 0.9045 |
+ | 0.8755 | 1.07 | 2200 | 0.8960 |
 
 
 ### Framework versions
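The revised batch-size totals in the hyperparameter hunk above are internally consistent. As a quick sketch of the arithmetic: the per-device train batch size is not shown in the hunk, so the value 8 below is an inferred assumption (48 total / 3 devices / 2 accumulation steps), while the eval total uses no gradient accumulation.

```python
# Recompute the derived batch-size totals from the per-device settings.
num_devices = 3                     # from the updated card
grad_accum_steps = 2                # from the updated card
per_device_train_batch_size = 8     # assumption: implied by the 48 total, not shown in the hunk
per_device_eval_batch_size = 16     # from the updated card

# Training accumulates gradients, so the effective batch multiplies all three factors.
total_train_batch_size = per_device_train_batch_size * num_devices * grad_accum_steps

# Evaluation runs one forward pass per batch, so only the device count multiplies.
total_eval_batch_size = per_device_eval_batch_size * num_devices

print(total_train_batch_size, total_eval_batch_size)  # 48 48
```

The same arithmetic explains the old totals of 96: halving the device count from 6 to 3 halves both totals.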
model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:2712219c2539e248d4c171302b6e31a6ed7216a276b36996fdf16239416cf5cc
+ oid sha256:6ed910571981950d36cba54325fd9bf133cf78508b6ea10558d7a3b9148e8021
 size 4943163992
model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:06e689af952986a77635f2dd09b9d5b8453396ef02728637979c2410dafe2244
+ oid sha256:77e27035e935b3f907381e8a934d30a29ab6a0cf06d8e79d8991cc4830975212
 size 4999821144
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:83f02961cfae9f6b4fb9bb15c989e17599d075ad4cfc2e6c23b4f4411c981ea7
+ oid sha256:c43267ffdf5daa7bf296abb299744205422289ef0a4be298e6fd8a35d1561498
 size 4540517840