lapp0 committed
Commit 9a3a696
1 Parent(s): 7d73060

Training in progress, step 99000

README.md CHANGED
@@ -44,7 +44,7 @@ More information needed
 
 # Resource Usage Comparison
 
- - VRAM Use: 7.4155 GB
+ - VRAM Use: 7.4146 GB
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -75,7 +75,7 @@ More information needed
 <br/>
 
 # Train Dataset
- Trained on 226,128,512 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
+ Trained on 226,129,170 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
 - Num Samples: `396,000`
 - Subset: `20231101.en`
@@ -85,7 +85,7 @@ Trained on 226,128,512 tokens from the [wikimedia/wikipedia](https://huggingface
 # Training Objective
 
 ```
- DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm, projector=orthogonal))
+ DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, projector=mlp))
 ```
 
 # Hyperparameters
@@ -100,11 +100,10 @@ The following hyperparameters were used during training:
 - seed: `42`
 - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
 - lr_scheduler_type: `polynomial`
- - lr_scheduler_warmup_ratio: `0.2`
 - num_epochs: `1.0`
- - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm, projector=orthogonal))`
+ - distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, projector=mlp))`
 - train_embeddings: `True`
- - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f057c8fdb40>`
+ - lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7fd02815ea40>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
@@ -124,7 +123,7 @@ The following hyperparameters were used during training:
 - gradient_accumulation_steps: `1`
 - weight_decay: `0.0`
 - max_grad_norm: `1.0`
- - warmup_ratio: `0.2`
+ - warmup_ratio: `0`
 - warmup_steps: `0`
 - gradient_checkpointing: `True`
 
@@ -135,5 +134,5 @@ The following hyperparameters were used during training:
 # Framework Versions
 - Distily 0.4.1
 - Transformers 4.44.2
- - Pytorch 2.4.0+cu121
+ - Pytorch 2.3.0
 - Datasets 2.21.0
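For readers of the objective string above: a minimal PyTorch sketch of what a combined logits-KL plus attention-MSE loss with an MLP projector can look like. This is an illustration under assumptions, not Distily's actual implementation; `mlp_projector` and `distillation_loss` are hypothetical names, and the real `layer_mapper=layer-2` logic for pairing student and teacher layers is omitted.

```python
# Hypothetical sketch of the objective above: KL on logits (weight 1) plus
# raw MSE on MLP-projected student attentions (weight 5). Not Distily's code.
import torch
import torch.nn.functional as F


def mlp_projector(dim: int, hidden: int = 256) -> torch.nn.Module:
    # projector=mlp: a small trainable MLP mapping student attention
    # features toward the teacher's representation space.
    return torch.nn.Sequential(
        torch.nn.Linear(dim, hidden),
        torch.nn.GELU(),
        torch.nn.Linear(hidden, dim),
    )


def distillation_loss(student_logits, teacher_logits,
                      student_attn, teacher_attn, projector,
                      logits_weight=1.0, attn_weight=5.0):
    # loss_fn=kl: KL divergence between student and teacher token distributions.
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # loss_fn=raw_mse: plain MSE on the projected attention tensors.
    mse = F.mse_loss(projector(student_attn), teacher_attn)
    return logits_weight * kl + attn_weight * mse
```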
logs/attn_norm=None, attn_projector=mlp, attn_weight=5, learning_rate=0.0001, per_device_train_batch_size=4, warmup_ratio=0/completed.flag ADDED
File without changes
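The run name above records `warmup_ratio=0`, matching the README change from `0.2`. As a rough sketch of how a polynomial schedule with a ratio-derived warmup is typically built in `transformers` (step counts below are illustrative; Distily's internal wiring may differ):

```python
# Sketch: polynomial-decay LR schedule with warmup derived from a ratio.
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in model for illustration
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4,
                             betas=(0.9, 0.999), eps=1e-8)

num_training_steps = 99_000  # e.g. the step count in this commit message
warmup_ratio = 0.0           # this run disables warmup
num_warmup_steps = int(warmup_ratio * num_training_steps)

# Returns a torch.optim.lr_scheduler.LambdaLR, which is why the README's
# `lr_scheduler` field prints as a LambdaLR object.
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer,
    num_warmup_steps=num_warmup_steps,
    num_training_steps=num_training_steps,
)
```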
logs/attn_norm=None, attn_projector=mlp, attn_weight=5, learning_rate=0.0002, per_device_train_batch_size=4, warmup_ratio=0/events.out.tfevents.1725043498.63bdd91cff03 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9eb9d9d6c53377d6271166a22c98d0f6c33d44f9a428cda940b6f1a2b99fe0a9
+ size 47486030
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:69134160f17412cf872b2a100dac1c83da7487d7ee13415f8f314f3484e42f0b
+ oid sha256:7d226ec2c4f97a7847ae2d07ab881ae1e1aaf0259cacb2f840dbe67216aa052f
 size 163832792
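`model.safetensors` and `training_args.bin` are stored as Git LFS pointers: the diff shows only the new blob's `oid` (its SHA-256) and `size`, while the actual bytes live in LFS storage. A minimal sketch of fetching the real weights at this commit with `huggingface_hub`; the `repo_id` below is a placeholder, since this extract does not name the repository:

```python
# Sketch: resolve the LFS-backed weights for this commit via huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="lapp0/<repo-name>",  # placeholder; substitute the real repo id
    filename="model.safetensors",
    revision="9a3a696",           # the commit shown above
)
print(path)  # local cache path; contents should hash to the new oid above
```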
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:49385898b4997e5d67c29871b6044696149acef0e9a6d024b14ae641736feaa4
+ oid sha256:32d59fba004b2746d25761826e8794142c7209e56657c28f3cf8ff6f296cf5ad
 size 5560