End of training
README.md CHANGED
@@ -16,13 +16,13 @@ This student model is distilled from the teacher model [gpt2](https://huggingfac
 The [Distily](https://github.com/lapp0/distily) library was used for this distillation.
 
 It achieves the following results on the evaluation set:
-- eval_enwikippl:
-- eval_frwikippl:
-- eval_zhwikippl:
-- eval_loss: 1.
-- eval_runtime: 17.
-- eval_samples_per_second:
-- eval_steps_per_second: 7.
+- eval_enwikippl: 248.6255
+- eval_frwikippl: 1465.2275
+- eval_zhwikippl: 910.6450
+- eval_loss: 1.4609
+- eval_runtime: 17.1765
+- eval_samples_per_second: 58.219
+- eval_steps_per_second: 7.277
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment.
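The `*ppl` metrics above are perplexities measured on English, French, and Chinese Wikipedia text. For reference, a causal-LM perplexity of this kind can be computed with `transformers` roughly as follows; this is a generic sketch, not necessarily Distily's exact evaluation code, and the sample text is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model, tokenizer, text: str) -> float:
    # Mean token-level cross-entropy on the text, exponentiated.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

# Example: score the teacher on a short English snippet.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
print(perplexity(model, tokenizer, "Paris is the capital of France."))
```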
@@ -45,7 +45,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=2.0, loss_fn=
+- distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=2.0, loss_fn=mse_sum, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=0, loss_fn=None, layer_mapper=None, projector=None))
 - train_embeddings: True
 - learning_rate: 4e-05
 - train_batch_size: 8
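The `distillation_objective` value describes a combined loss: KL divergence on the student/teacher logits with weight 1, sum-reduced MSE on hidden states with weight 2.0, and a disabled attention component (weight 0). A minimal PyTorch sketch of such an objective follows; it is not Distily's actual implementation, and it assumes `mse_sum` denotes MSE with sum reduction applied layer by layer:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student, teacher, input_ids, logits_weight=1.0, hs_weight=2.0):
    # Forward both models, keeping per-layer hidden states.
    s_out = student(input_ids, output_hidden_states=True)
    with torch.no_grad():
        t_out = teacher(input_ids, output_hidden_states=True)

    # Logits component: KL(teacher || student) over the vocabulary.
    kl = F.kl_div(
        F.log_softmax(s_out.logits, dim=-1),
        F.softmax(t_out.logits, dim=-1),
        reduction="batchmean",
    )

    # Hidden-state component: sum-reduced MSE, accumulated over all layers.
    # layer_mapper=None / projector=None implies shapes match layer-for-layer.
    mse = sum(
        F.mse_loss(s, t, reduction="sum")
        for s, t in zip(s_out.hidden_states, t_out.hidden_states)
    )
    return logits_weight * kl + hs_weight * mse
```

Weighting the hidden-state term above the logits term (2.0 vs. 1) pushes the student to match the teacher's internal representations rather than only its output distribution.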
@@ -56,26 +56,26 @@ The following hyperparameters were used during training:
 - num_epochs: 1.0
 
 ### Resource Usage
-Peak GPU Memory: 8.
+Peak GPU Memory: 8.0903 GB
 
 ### Eval-Phase Metrics
 | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | zhwikippl |
 | --- | --- | --- | --- | --- | --- | --- | --- | --- |
 | **teacher eval** | | 30.2086 | 57.2728 | | | | | 18.1784 |
-| 0 | 0 |
-| 1000 | 0.0808 |
-| 2000 | 0.1616 |
-| 3000 | 0.2424 |
-| 4000 | 0.3232 |
-| 5000 | 0.4040 |
-| 6000 | 0.4848 |
-| 7000 | 0.5657 |
-| 8000 | 0.6465 |
-| 9000 | 0.7273 |
-| 10000 | 0.8081 |
-| 11000 | 0.8889 |
-| 12000 | 0.9697 |
-| 12375 | 1.0 |
+| 0 | 0 | 55922.5859 | 57739.5352 | 7.7238 | 17.0878 | 58.521 | 7.315 | 57324.7461 |
+| 1000 | 0.0808 | 920.8879 | 4922.0625 | 2.3268 | 17.0862 | 58.527 | 7.316 | 22360.4277 |
+| 2000 | 0.1616 | 620.1771 | 3480.3069 | 2.0488 | 17.0703 | 58.581 | 7.323 | 9003.6973 |
+| 3000 | 0.2424 | 497.6692 | 3095.0298 | 1.9112 | 17.0714 | 58.578 | 7.322 | 2670.2615 |
+| 4000 | 0.3232 | 420.1666 | 2924.0510 | 1.7926 | 17.0681 | 58.589 | 7.324 | 1505.7640 |
+| 5000 | 0.4040 | 363.9979 | 2463.9880 | 1.6927 | 17.0999 | 58.48 | 7.31 | 1190.3823 |
+| 6000 | 0.4848 | 321.6444 | 2008.3508 | 1.6180 | 17.0952 | 58.496 | 7.312 | 2308.5518 |
+| 7000 | 0.5657 | 288.7571 | 1772.2247 | 1.5521 | 17.1061 | 58.459 | 7.307 | 943.5735 |
+| 8000 | 0.6465 | 268.0555 | 1636.7375 | 1.5025 | 17.0661 | 58.596 | 7.324 | 1002.2805 |
+| 9000 | 0.7273 | 248.6255 | 1465.2275 | 1.4609 | 17.1765 | 58.219 | 7.277 | 910.6450 |
+| 10000 | 0.8081 | 230.5145 | 1351.8748 | 1.4215 | 17.0631 | 58.606 | 7.326 | 754.1554 |
+| 11000 | 0.8889 | 218.0646 | 1356.4580 | 1.3820 | 17.0844 | 58.533 | 7.317 | 892.8242 |
+| 12000 | 0.9697 | 200.7094 | 1234.1702 | 1.3464 | 17.0571 | 58.627 | 7.328 | 822.1012 |
+| 12375 | 1.0 | 195.8138 | 1216.7174 | 1.3332 | 17.1185 | 58.416 | 7.302 | 906.7622 |
 
 ### Framework versions
 - Distily 0.2.0
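A distilled student checkpoint published from a run like this loads like any other causal LM. A usage sketch follows; the repo id is a placeholder, since the actual model id is not shown in this diff:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/distily-gpt2-student"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Distillation compresses a teacher model", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```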
logs/hs_loss_fn=mse_sum, hs_weight=2.0/events.out.tfevents.1723663889.5f530b1cf724 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:011251518598d8a096c7b6288ae3464baee7562f5274f5d03cab848e4a78e1b0
+size 249
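The event file added under `logs/` is tracked with Git LFS, so the commit stores only the three-line pointer shown above (spec version, SHA-256 of the content, and the true object's size in bytes) rather than the TensorBoard binary itself. A small sketch of reading such a pointer, assuming the standard `key value` line format:

```python
def parse_lfs_pointer(text: str) -> dict:
    # A Git LFS pointer file is a short sequence of "key value" lines.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    assert fields["version"].startswith("https://git-lfs.github.com/spec/")
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:011251518598d8a096c7b6288ae3464baee7562f5274f5d03cab848e4a78e1b0\n"
    "size 249\n"
)
print(parse_lfs_pointer(pointer)["size"])  # -> '249'
```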