lapp0 committed
Commit: 2c059e3
1 Parent(s): 01a64ca

End of training

README.md CHANGED
@@ -44,7 +44,7 @@ More information needed
 
 # Resource Usage Comparison
 
- - VRAM Use: 7.4159 GB
+ - VRAM Use: 7.4164 GB
 
 # Distillation (Teacher -> Student) Architecture Difference:
 
@@ -75,7 +75,7 @@ More information needed
 <br/>
 
 # Train Dataset
-Trained on 226,136,393 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
+Trained on 226,096,614 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
 
 - Num Samples: `396,000`
 - Subset: `20231101.en`
@@ -85,7 +85,7 @@ Trained on 226,136,393 tokens from the [wikimedia/wikipedia](https://huggingface
 # Training Objective
 
 ```
-DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm, projector=orthogonal))
+DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm, projector=orthogonal))
 ```
 
 # Hyperparameters
@@ -101,9 +101,9 @@ The following hyperparameters were used during training:
 - optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
 - lr_scheduler_type: `polynomial`
 - num_epochs: `1.0`
-- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=5, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm, projector=orthogonal))`
+- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=25, loss_fn=raw_mse, layer_mapper=layer-2, norm=layernorm, projector=orthogonal))`
 - train_embeddings: `True`
-- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f649f4a4790>`
+- lr_scheduler: `<torch.optim.lr_scheduler.LambdaLR object at 0x7f64857cc460>`
 - student_model_name_or_path: `None`
 - student_config_name_or_path: `distilbert/distilgpt2`
 - student_model_config: `None`
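The substantive change in this diff is the `DistillationObjective` repr: the attention-loss weight moves from 5 to 25, so the attention term now pulls five times harder on the gradients than in the parent commit's run. distily's actual implementation isn't shown here, but a minimal PyTorch sketch of the same composite loss, assuming a KL term on logits (weight 1) plus a raw MSE term on layer-mapped attention tensors (weight 25), could look like the following. The function and argument names are illustrative, and reading `layer_mapper=layer-2` as "pair each student layer with every second teacher layer" is an assumption.

```python
# Illustrative sketch (not distily's actual code): composite distillation loss
# matching the DistillationObjective repr above -- KL on logits (weight 1) plus
# raw MSE on attention maps (weight 25).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_attns, teacher_attns,
                      logits_weight=1.0, attn_weight=25.0):
    # loss_fn=kl: KL divergence between teacher and student token distributions
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    # loss_fn=raw_mse: plain MSE on attention tensors, assumed already passed
    # through the layernorm + orthogonal-projector steps named in the repr.
    # Assumed layer-2 mapping: every second teacher layer (e.g. 12-layer GPT-2
    # teacher -> 6-layer distilgpt2 student).
    mse = torch.stack([
        F.mse_loss(s, t)
        for s, t in zip(student_attns, teacher_attns[1::2])
    ]).mean()
    return logits_weight * kl + attn_weight * mse
```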
 
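Note that the `lr_scheduler` line churns on every run only because it is recorded as a raw object repr, so the memory address changes even when the schedule does not. Given `lr_scheduler_type: polynomial` and `warmup_ratio=0` (from the log directory name below), the scheduler was plausibly built with transformers' polynomial-decay helper, which does return a `LambdaLR`; a sketch under that assumption:

```python
# Sketch, assuming the scheduler came from transformers' polynomial-decay helper
# (which returns a torch.optim.lr_scheduler.LambdaLR, matching the recorded repr).
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in for the student model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), eps=1e-8)

scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,          # warmup_ratio=0 in this run
    num_training_steps=99_000,   # assumption: 396,000 samples / batch size 4,
                                 # ignoring any gradient accumulation
    power=1.0,                   # the actual power isn't recorded in the diff
)
print(type(scheduler))  # <class 'torch.optim.lr_scheduler.LambdaLR'>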
logs/attn_norm=layernorm, attn_projector=orthogonal, attn_weight=25, learning_rate=0.0001, per_device_train_batch_size=4, warmup_ratio=0/events.out.tfevents.1725048344.e3f806ea38c9 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8f4096f13393e50b85470631d56464439e3c68bb8cd03b6eb15769ead8200f9a
+ size 253
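What was actually committed is this 253-byte Git LFS pointer, not the TensorBoard log itself; `git lfs pull` fetches the real `events.out.tfevents.*` file. Once fetched, it can be inspected with TensorBoard's event-accumulator API. A minimal sketch follows; the scalar tag names cannot be known from the pointer, so it lists them rather than assuming one:

```python
# Sketch: read the downloaded tfevents file with TensorBoard's API.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

path = ("logs/attn_norm=layernorm, attn_projector=orthogonal, attn_weight=25, "
        "learning_rate=0.0001, per_device_train_batch_size=4, warmup_ratio=0/"
        "events.out.tfevents.1725048344.e3f806ea38c9")

acc = EventAccumulator(path)
acc.Reload()  # parse the event file
for tag in acc.Tags()["scalars"]:          # discover logged scalar series
    events = acc.Scalars(tag)
    print(tag, f"({len(events)} points); last:",
          events[-1].step, events[-1].value)
```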