---
base_model: gpt2
datasets:
- wikimedia/wikipedia
library_name: Distily
license: mit
tags:
- bitnet
- 1.58b
- generated_from_trainer
model-index:
- name: distily_projector_experiment
results: []
---
# Summary
Distilled with the [Distily](https://github.com/lapp0/distily) library,
using the teacher model [gpt2](https://huggingface.co/gpt2)
and the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
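A minimal usage sketch, assuming the stock `gpt2` tokenizer and a placeholder repository id (substitute the hub path where this checkpoint is actually hosted):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the actual hub path of this model card.
repo_id = "distily_projector_experiment"

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # student reuses the gpt2 vocabulary
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

prompt = "The Eiffel Tower is located in"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```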
# Model Architecture:
- **Architecture**: `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.24 GB
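These figures can be checked with a few lines of `transformers`; the sketch below loads the teacher `gpt2` as a stand-in, since the student shares the same architecture and parameter count (see the architecture-difference section below):

```python
import torch
from transformers import AutoModelForCausalLM

# gpt2 used as a stand-in: the student has the same GPT2LMHeadModel architecture.
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.bfloat16)

total_params = sum(p.numel() for p in model.parameters())
size_gb = sum(p.numel() * p.element_size() for p in model.parameters()) / 1e9

print(f"Total parameters: {total_params:,}")       # 124,439,808
print(f"dtype: {next(model.parameters()).dtype}")  # torch.bfloat16
print(f"Approx. size: {size_gb:.2f} GB")           # roughly 0.24-0.25 GB in bfloat16
```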
# Evaluation Metrics Comparison
| step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | tinystoriesppl | zhwikippl |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| **teacher eval** | | 43.25 | 61.25 | | | | | 11.6875 | 19.125 |
| 0 | 0 | 2473901162496.0 | 170424302305280.0 | 30.7740 | 25.2349 | 99.069 | 12.403 | 4060086272.0 | 71468255805440.0 |
| 2500 | 0.0404 | 1184.0 | 11776.0 | 9.8284 | 25.2487 | 99.015 | 12.397 | 784.0 | 12800.0 |
| 5000 | 0.0808 | 412.0 | 2272.0 | 8.3993 | 25.2618 | 98.964 | 12.39 | 290.0 | 434.0 |
| 7500 | 0.1212 | 245.0 | 916.0 | 7.6586 | 25.2887 | 98.858 | 12.377 | 218.0 | 195.0 |
| 10000 | 0.1616 | 182.0 | 676.0 | 7.2415 | 25.2556 | 98.988 | 12.393 | 164.0 | 190.0 |
| 12500 | 0.2020 | 131.0 | 504.0 | 6.6883 | 25.2962 | 98.829 | 12.373 | 115.5 | 158.0 |
| 15000 | 0.2424 | 112.5 | 432.0 | 6.4127 | 25.2743 | 98.915 | 12.384 | 89.5 | 144.0 |
| 17500 | 0.2828 | 93.5 | 344.0 | 6.1979 | 25.214 | 99.151 | 12.414 | 70.5 | 127.0 |
| 20000 | 0.3232 | 75.0 | 270.0 | 5.9310 | 25.2265 | 99.102 | 12.408 | 63.75 | 128.0 |
| 22500 | 0.3636 | 67.0 | 209.0 | 5.6634 | 25.2495 | 99.012 | 12.396 | 49.75 | 83.5 |
| 25000 | 0.4040 | 63.5 | 192.0 | 5.5561 | 25.2476 | 99.019 | 12.397 | 44.25 | 86.0 |
| 27500 | 0.4444 | 58.0 | 192.0 | 5.4855 | 25.2834 | 98.879 | 12.38 | 40.25 | 70.5 |
| 30000 | 0.4848 | 58.75 | 195.0 | 5.4646 | 25.2547 | 98.992 | 12.394 | 41.75 | 65.0 |
| 32500 | 0.5253 | 58.5 | 171.0 | 5.4511 | 25.2 | 99.206 | 12.421 | 40.0 | 60.0 |
| 35000 | 0.5657 | 57.0 | 165.0 | 5.3711 | 25.2873 | 98.864 | 12.378 | 36.75 | 49.25 |
| 37500 | 0.6061 | 57.75 | 155.0 | 5.3390 | 25.2952 | 98.833 | 12.374 | 37.75 | 54.0 |
| 40000 | 0.6465 | 55.75 | 154.0 | 5.3225 | 25.2919 | 98.846 | 12.376 | 34.5 | 57.0 |
| 42500 | 0.6869 | 54.75 | 146.0 | 5.2939 | 25.2713 | 98.926 | 12.386 | 35.5 | 49.0 |
| 45000 | 0.7273 | 50.75 | 133.0 | 5.1563 | 25.2812 | 98.888 | 12.381 | 30.0 | 48.0 |
| 47500 | 0.7677 | 50.75 | 124.5 | 5.1271 | 25.3128 | 98.764 | 12.365 | 29.375 | 35.25 |
| 50000 | 0.8081 | 49.75 | 123.0 | 5.1093 | 25.2369 | 99.061 | 12.402 | 28.75 | 37.5 |
| 52500 | 0.8485 | 48.5 | 119.5 | 5.0960 | 25.2934 | 98.84 | 12.375 | 28.875 | 34.5 |
| 55000 | 0.8889 | 48.5 | 118.0 | 5.0747 | 25.23 | 99.088 | 12.406 | 28.0 | 33.25 |
| 57500 | 0.9293 | 48.0 | 117.0 | 5.0698 | 25.2235 | 99.114 | 12.409 | 27.75 | 32.0 |
| 60000 | 0.9697 | 48.0 | 117.0 | 5.0651 | 25.2107 | 99.164 | 12.415 | 27.75 | 31.875 |
| 61875 | 1.0 | 48.0 | 117.5 | 5.0643 | 25.1856 | 99.263 | 12.428 | 27.625 | 32.0 |
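The `*ppl` columns report perplexity on evaluation text drawn (judging by the column names) from English, French, and Chinese Wikipedia plus TinyStories; the student closes steadily toward the teacher's scores over training. Below is a hedged sketch of a comparable perplexity computation; the exact sequence lengths and sample selection behind the table are internal to Distily, so absolute numbers will not match exactly:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative perplexity computation; Distily's exact eval protocol may differ.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.bfloat16).eval()

stream = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)
texts = [row["text"] for _, row in zip(range(32), stream)]  # small sample for the sketch

nll, n_tokens = 0.0, 0
with torch.no_grad():
    for text in texts:
        ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024).input_ids
        if ids.shape[1] < 2:
            continue
        loss = model(ids, labels=ids).loss        # mean NLL over the shifted sequence
        nll += loss.item() * (ids.shape[1] - 1)
        n_tokens += ids.shape[1] - 1

print(f"perplexity: {torch.exp(torch.tensor(nll / n_tokens)).item():.2f}")
```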
# Resource Usage Comparison
- VRAM Use: 7.7843 GB
# Distillation (Teacher -> Student) Architecture Difference:
- **Architecture**: `GPT2LMHeadModel` -> `GPT2LMHeadModel`
- **Total Parameters**: 124,439,808 -> 124,439,808
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.24 GB -> 0.24 GB
**Module Diff Details**: none recorded; the generated module-level diff between teacher and student was empty.
# Train Dataset
Trained on 145,744,973 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
- Num Samples: `247,500`
- Subset: `20231101.en`
- Split: `train`
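A sketch of the equivalent data selection with `datasets`; the shuffle and its seed are assumptions, while the sample counts follow from `dataset_sample_size=250000` and `dataset_test_size=0.01` in the hyperparameters below:

```python
from datasets import load_dataset

# Rebuild the train/eval split described above (the shuffle seed is an assumption).
dataset = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")
dataset = dataset.shuffle(seed=42).select(range(250_000))
splits = dataset.train_test_split(test_size=0.01)

print(splits["train"].num_rows)  # 247,500 training samples, as reported above
print(splits["test"].num_rows)   # 2,500 held-out eval samples
```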
# Training Objective
```
DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=10.0, loss_fn=cos, layer_mapper=layer-2))
```
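In plain PyTorch terms, this objective is a forward KL divergence on the logits (weight 1) plus a cosine distance on attention maps (weight 10). The sketch below is a rough reading of that string; the `layer-2` layer mapper is not reproduced, and the reductions are assumptions rather than Distily's exact implementation:

```python
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out):
    """Sketch of the objective above. Both forward passes must be run with
    output_attentions=True so that .attentions is populated."""
    # Logits component (weight 1): KL(teacher || student) over the vocabulary.
    s_logp = F.log_softmax(student_out.logits, dim=-1)
    t_prob = F.softmax(teacher_out.logits, dim=-1)
    logits_loss = F.kl_div(s_logp, t_prob, reduction="batchmean")

    # Attention component (weight 10): 1 - cosine similarity, averaged over
    # naively zipped layers (the `layer-2` mapper is not reproduced here).
    attn_loss = 0.0
    pairs = list(zip(student_out.attentions, teacher_out.attentions))
    for s_attn, t_attn in pairs:
        cos = F.cosine_similarity(s_attn.flatten(1), t_attn.flatten(1), dim=-1)
        attn_loss = attn_loss + (1.0 - cos).mean()
    attn_loss = attn_loss / len(pairs)

    return 1.0 * logits_loss + 10.0 * attn_loss
```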
# Hyperparameters
The following hyperparameters were used during training:
- learning_rate: `0.0001`
- train_batch_size: `4`
- eval_batch_size: `8`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `linear`
- lr_scheduler_warmup_ratio: `0.5`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl), attn_loss_component=LossComponent(label=attn, weight=10.0, loss_fn=cos, layer_mapper=layer-2))`
- train_embeddings: `True`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `None`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `True`
- student_model_compile: `False`
- dropout: `None`
- teacher_model_name_or_path: `gpt2`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- teacher_model_compile: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `250000`
- dataset_test_size: `0.01`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.5`
- warmup_steps: `0`
- gradient_checkpointing: `True`
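A sketch of the optimizer and learning-rate schedule these settings imply; the model here is a stand-in, and the total step count is taken from the final row of the metrics table:

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in for the student model
total_steps = 61_875           # final step in the metrics table above

# lr 1e-4 with the Adam betas/eps listed above; linear decay after warmup
# over the first 50% of steps (warmup_ratio 0.5).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.5 * total_steps),
    num_training_steps=total_steps,
)
```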
# Framework Versions
- Distily 0.2.0
- Transformers 4.44.1
- Pytorch 2.5.0.dev20240821+cu121
- Datasets 2.21.0