End of training
README.md CHANGED
@@ -1,6 +1,7 @@
 ---
-license: mit
 base_model: gpt2
+library_name: Distily
+license: mit
 tags:
 - generated_from_trainer
 model-index:
@@ -8,14 +9,23 @@ model-index:
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # distily_bench_gpt2_activation_loss_b
 
-This model is
+This student model is distilled from the teacher model [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
+
+The [Distily](https://github.com/lapp0/distily) library was used for this distillation.
+
 It achieves the following results on the evaluation set:
--
+- eval_enwikippl: 210.2820
+- eval_frwikippl: 1274.1346
+- eval_zhwikippl: 583.2827
+- eval_loss: 1.2965
+- eval_runtime: 17.2526
+- eval_samples_per_second: 57.962
+- eval_steps_per_second: 7.245
+
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment.
 
 ## Model description
 
@@ -28,12 +38,15 @@ More information needed
 ## Training and evaluation data
 
 More information needed
+-->
 
 ## Training procedure
 
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
+- distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=2.0, loss_fn=mse, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=0, loss_fn=None, layer_mapper=None, projector=None))
+- train_embeddings: True
 - learning_rate: 4e-05
 - train_batch_size: 8
 - eval_batch_size: 8
@@ -42,28 +55,30 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: constant
 - num_epochs: 1.0
 
-### 
-[17 further removed lines, truncated in this diff view]
+### Resource Usage
+Peak GPU Memory: 8.0904 GB
+
+### Eval-Phase Metrics
+| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | zhwikippl |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| **teacher eval** | | 30.2086 | 57.2728 | | | | | 18.1784 |
+| 0 | 0 | 58037.3203 | 58017.0117 | 6.0237 | 17.2607 | 57.935 | 7.242 | 56038.0625 |
+| 1000 | 0.0808 | 715.0994 | 4658.6846 | 2.0131 | 17.1734 | 58.23 | 7.279 | 16350.8623 |
+| 2000 | 0.1616 | 508.9246 | 3343.2109 | 1.8201 | 17.2004 | 58.138 | 7.267 | 3102.6990 |
+| 3000 | 0.2424 | 419.7101 | 2552.4004 | 1.7020 | 17.1441 | 58.329 | 7.291 | 1042.4126 |
+| 4000 | 0.3232 | 361.0421 | 2336.7490 | 1.6177 | 17.0616 | 58.611 | 7.326 | 911.8621 |
+| 5000 | 0.4040 | 313.2633 | 1815.2219 | 1.5316 | 17.1786 | 58.212 | 7.276 | 863.9713 |
+| 6000 | 0.4848 | 281.3860 | 1725.1301 | 1.4597 | 17.3168 | 57.747 | 7.218 | 705.6341 |
+| 7000 | 0.5657 | 253.9131 | 1485.6165 | 1.3999 | 17.1434 | 58.332 | 7.291 | 605.2624 |
+| 8000 | 0.6465 | 229.4073 | 1427.2965 | 1.3455 | 17.134 | 58.363 | 7.295 | 629.6656 |
+| 9000 | 0.7273 | 210.2820 | 1274.1346 | 1.2965 | 17.2526 | 57.962 | 7.245 | 583.2827 |
+| 10000 | 0.8081 | 194.6313 | 1199.3423 | 1.2490 | 17.1679 | 58.248 | 7.281 | 677.5621 |
+| 11000 | 0.8889 | 180.3274 | 1160.25 | 1.1980 | 17.1591 | 58.278 | 7.285 | 758.1945 |
+| 12000 | 0.9697 | 164.7045 | 1005.8066 | 1.1583 | 17.1824 | 58.199 | 7.275 | 600.1918 |
+| 12375 | 1.0 | 161.0243 | 969.7354 | 1.1403 | 17.1939 | 58.16 | 7.27 | 632.9536 |
 
 ### Framework versions
-
+- Distily 0.2.0
 - Transformers 4.44.0
 - Pytorch 2.3.0
 - Datasets 2.21.0
-- Tokenizers 0.19.1
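The `distillation_objective` entry in the diff above is the heart of this run: the student is trained on a KL-divergence loss over the teacher's logits (weight 1) plus an MSE loss over hidden states (weight 2.0), with the attention-transfer component switched off (weight 0). Below is a minimal sketch of that weighted combination, assuming both forward passes were run with `output_hidden_states=True`; the function name and signature are illustrative, not Distily's internal API.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, logits_weight=1.0, hs_weight=2.0):
    """Illustrative weighted sum of a logits KL term and a hidden-state MSE term."""
    # KL divergence between the student's and teacher's next-token distributions.
    student_logp = F.log_softmax(student_out.logits, dim=-1)
    teacher_prob = F.softmax(teacher_out.logits, dim=-1)
    kl = F.kl_div(student_logp, teacher_prob, reduction="batchmean")

    # MSE over all hidden states. layer_mapper=None in the config suggests layers
    # are compared one-to-one, so student and teacher shapes must match.
    hs_student = torch.stack(student_out.hidden_states)
    hs_teacher = torch.stack(teacher_out.hidden_states)
    mse = F.mse_loss(hs_student, hs_teacher)

    return logits_weight * kl + hs_weight * mse
```

With `train_embeddings: True` the embedding weights are updated as well, which is consistent with the layer-0 (embedding) hidden state participating in the MSE term.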
logs/hs_loss_fn=mse, hs_weight=2.0/events.out.tfevents.1723659736.5f530b1cf724 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d934dfe2df25e811ee5bbcb1a6d2c43fd10158d4a9b035a19f1a0ab04b1d0c98
+size 249
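The eval_enwikippl, eval_frwikippl, and eval_zhwikippl figures in the card are perplexities on English, French, and Chinese Wikipedia evaluation text. A rough sanity check of the same quantity can be run as below once the checkpoint is downloaded; the repo id is a placeholder for wherever this model is actually hosted.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "distily_bench_gpt2_activation_loss_b"  # placeholder; substitute the real Hub id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo).eval()

enc = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    out = model(**enc, labels=enc["input_ids"])

# Perplexity is the exponential of the mean token-level cross-entropy.
print(f"perplexity: {torch.exp(out.loss).item():.2f}")
```

Pointing the same snippet at `gpt2` gives a rough teacher-side baseline, analogous to the **teacher eval** row in the metrics table.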