lapp0 committed on
Commit 55181d1
1 Parent(s): f455f4e

Training in progress, step 99000

README.md CHANGED
@@ -44,7 +44,7 @@ More information needed
 
 # Resource Usage Comparison
 
-- VRAM Use: 7.4148 GB
+- VRAM Use: 7.4164 GB
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -85,7 +85,7 @@ Trained on 226,096,614 tokens from the [wikimedia/wikipedia](https://huggingface
 # Training Objective
 
 ```
-DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm, projector=mlp))
+DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25, loss_fn=raw_mse, layer_mapper=layer-2, projector=orthogonal))
 ```
 
 # Hyperparameters
@@ -102,9 +102,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: `polynomial`
 - lr_scheduler_warmup_ratio: `0.2`
 - num_epochs: `1.0`
-- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm, projector=mlp))`
+- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25, loss_fn=raw_mse, layer_mapper=layer-2, projector=orthogonal))`
 - train_embeddings: `True`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fcf24346980>`
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fe60c136110>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
@@ -136,4 +136,4 @@ The following hyperparameters were used during training:
 - Distily 0.4.1
 - Transformers 4.44.2
 - Pytorch 2.4.0+cu121
-- Datasets 2.18.0
+- Datasets 2.21.0
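
The `DistillationObjective(...)` string above is Distily's own configuration syntax; this commit switches the attention projector from an MLP with layernorm to an orthogonal projector. The sketch below is only an illustration of what such a combined loss could look like in PyTorch, not Distily's implementation: the class and function names, the head-dimension projection, and the reading of `layer_mapper=layer-2` as "student layer i vs. teacher layer 2·i" are all assumptions.

```python
import torch
import torch.nn.functional as F


class OrthogonalHeadProjector(torch.nn.Module):
    """Hypothetical projector: maps student attention maps across the head
    dimension with an (approximately) orthogonal linear map, roughly what
    `projector=orthogonal` suggests."""

    def __init__(self, student_heads: int, teacher_heads: int):
        super().__init__()
        linear = torch.nn.Linear(student_heads, teacher_heads, bias=False)
        # PyTorch's built-in parametrization keeps the weight (semi-)orthogonal
        # throughout training.
        self.linear = torch.nn.utils.parametrizations.orthogonal(linear)

    def forward(self, attn: torch.Tensor) -> torch.Tensor:
        # attn: (batch, student_heads, seq, seq) -> (batch, teacher_heads, seq, seq)
        return self.linear(attn.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)


def distillation_loss(student_logits, teacher_logits,
                      student_attns, teacher_attns, projector,
                      logits_weight=1.0, attn_weight=25.0):
    # Logits component: KL divergence, weight 1 (loss_fn=kl).
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # Attention component: raw MSE, weight 25 (loss_fn=raw_mse);
    # layer-2 mapper assumed to pair student layer i with teacher layer 2*i.
    mse = sum(
        F.mse_loss(projector(s), teacher_attns[2 * i])
        for i, s in enumerate(student_attns)
    ) / len(student_attns)
    return logits_weight * kl + attn_weight * mse
```

A layer mapper of this kind fits the architecture gap noted in the card: distilgpt2 has half as many transformer layers as the GPT-2 teacher, so comparing every student layer to every second teacher layer keeps the two attention stacks aligned.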
logs/attn_norm=None, attn_projector=orthogonal, attn_weight=25, learning_rate=0.0001, per_device_train_batch_size=4, warmup_ratio=0.2/completed.flag ADDED
File without changes
logs/attn_norm=None, attn_projector=orthogonal, attn_weight=25, learning_rate=0.0002, per_device_train_batch_size=4, warmup_ratio=0.2/events.out.tfevents.1725111510.23668649e3db ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4003eddf0c63bd02515b4a101bacfe68a15ec44a909e4481b4008ab4271221d8
+size 47486052
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ce86b32ce7013508c4f37e2aa3087bac0a2b30faeb8080f63dfdc1da27d1c84e
+oid sha256:093dc85d7aa7ea16b441e7ac8ebdb50e781b78d01a3090dfa8465e7d65b3fdb4
 size 163832792
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fdbfabe14e1fee095c931cfc8983452f98e29f0aae284e39f3a04b8a36368215
+oid sha256:b04c8be3b9c81ff91b22fb6ea84369bf65907b954e1705878806919caa5e72d1
 size 5560
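
The `events.out.tfevents.*`, `model.safetensors`, and `training_args.bin` entries above are Git LFS pointer files: `oid sha256:` is the SHA-256 digest of the real artifact and `size` is its length in bytes. A small, repo-agnostic helper like the following (an illustration, not part of Distily or this repository) can check a downloaded file against such a pointer.

```python
import hashlib
from pathlib import Path


def parse_lfs_pointer(text: str) -> dict:
    # Pointer files are "key value" lines: version, oid sha256:<digest>, size.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algorithm, digest = fields["oid"].split(":", 1)
    return {"algorithm": algorithm, "digest": digest, "size": int(fields["size"])}


def verify(file_path: str, pointer_text: str) -> bool:
    pointer = parse_lfs_pointer(pointer_text)
    digest = hashlib.sha256()
    size = 0
    # Hash in chunks so large checkpoints are not read into memory at once.
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
            size += len(chunk)
    return size == pointer["size"] and digest.hexdigest() == pointer["digest"]


if __name__ == "__main__":
    pointer = Path("model.safetensors.pointer").read_text()  # hypothetical local copy of the pointer
    print(verify("model.safetensors", pointer))
```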