---
base_model: HuggingFaceTB/SmolLM-135M
datasets:
- wikimedia/wikipedia
library_name: Distily
license: creativeml-openrail-m
tags:
- generated_from_trainer
- Distily
base_model_relation: finetune
model-index:
- name: distily_profile_smollm
results: []
---
# Summary

Distilled with the [Distily](https://github.com/lapp0/distily) library, using the teacher model [HuggingFaceTB/SmolLM-135M](https://huggingface.co/HuggingFaceTB/SmolLM-135M) on the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
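A minimal usage sketch with `transformers`; the repository id below is a placeholder for wherever this checkpoint is hosted, and the prompt and generation settings are illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "distily_profile_smollm"  # placeholder: replace with the actual Hub repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Wikipedia is a free online encyclopedia", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```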
# Model Architecture
- **Architecture**: `LlamaForCausalLM`
- **Total Parameters**: 81,413,568
- **Data Type (dtype)**: torch.bfloat16
- **Model Size**: 0.15 GB
**Student Model Details**
```
LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(49152, 576)
    (layers): ModuleList(
      (0-14): 15 x LlamaDecoderLayer(
        (self_attn): LlamaSdpaAttention(
          (q_proj): Linear(in_features=576, out_features=576, bias=False)
          (k_proj): Linear(in_features=576, out_features=192, bias=False)
          (v_proj): Linear(in_features=576, out_features=192, bias=False)
          (o_proj): Linear(in_features=576, out_features=576, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LigerSwiGLUMLP(
          (gate_proj): Linear(in_features=576, out_features=1536, bias=False)
          (up_proj): Linear(in_features=576, out_features=1536, bias=False)
          (down_proj): Linear(in_features=1536, out_features=576, bias=False)
        )
        (input_layernorm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
        (post_attention_layernorm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
      )
    )
    (norm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
    (rotary_emb): LlamaRotaryEmbedding()
  )
  (lm_head): Linear(in_features=576, out_features=49152, bias=False)
)
```
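The student keeps the teacher's hidden size (576) and attention layout but halves the depth to 15 decoder layers. A hedged sketch of instantiating such a student from the teacher's config with plain `transformers`; this mirrors `student_model_config: {'num_hidden_layers': 15}` listed under Hyperparameters, but is not Distily's exact construction path:

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

# Start from the teacher's config and shrink the depth to 15 layers.
config = AutoConfig.from_pretrained("HuggingFaceTB/SmolLM-135M")
config.num_hidden_layers = 15

student = AutoModelForCausalLM.from_config(config, torch_dtype=torch.bfloat16)
print(sum(p.numel() for p in student.parameters()))  # 81,413,568 with tied embeddings
```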
# Resource Usage
- Max Train VRAM Use: 12.7772 GB
- Available VRAM: 23.4329 GB
- GPUs:
- 1x NVIDIA GeForce RTX 4090
- CPUs: 64
- CPU Memory: 251.7299 GB
- CPU Memory Bandwidth: 1600 GB/s
# Distillation (Teacher -> Student) Architecture Difference
- **Architecture**: `LlamaForCausalLM` -> `LlamaForCausalLM`
- **Total Parameters**: 134,515,008 -> 81,413,568
- **Data Type (dtype)**: torch.bfloat16 -> torch.bfloat16
- **Model Size**: 0.25 GB -> 0.15 GB
**Module Diff Details**
```diff
--- teacher model modules
+++ student model modules
@@ -2,7 +2,7 @@
   (model): LlamaModel(
     (embed_tokens): Embedding(49152, 576)
     (layers): ModuleList(
-      (0-29): 30 x LlamaDecoderLayer(
+      (0-14): 15 x LlamaDecoderLayer(
         (self_attn): LlamaSdpaAttention(
           (q_proj): Linear(in_features=576, out_features=576, bias=False)
           (k_proj): Linear(in_features=576, out_features=192, bias=False)
@@ -10,17 +10,16 @@
           (o_proj): Linear(in_features=576, out_features=576, bias=False)
           (rotary_emb): LlamaRotaryEmbedding()
         )
-        (mlp): LlamaMLP(
+        (mlp): LigerSwiGLUMLP(
           (gate_proj): Linear(in_features=576, out_features=1536, bias=False)
           (up_proj): Linear(in_features=576, out_features=1536, bias=False)
           (down_proj): Linear(in_features=1536, out_features=576, bias=False)
-          (act_fn): SiLU()
         )
-        (input_layernorm): LlamaRMSNorm((576,), eps=1e-05)
-        (post_attention_layernorm): LlamaRMSNorm((576,), eps=1e-05)
+        (input_layernorm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
+        (post_attention_layernorm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
       )
     )
-    (norm): LlamaRMSNorm((576,), eps=1e-05)
+    (norm): LigerRMSNorm((576,), eps=1e-05, offset=0.0)
     (rotary_emb): LlamaRotaryEmbedding()
   )
   (lm_head): Linear(in_features=576, out_features=49152, bias=False)
```
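The entire parameter reduction comes from dropping 15 of the 30 decoder layers; the embedding table, final norm, and (tied) `lm_head` are unchanged. A quick sanity check from the shapes printed above:

```python
# Per-decoder-layer parameters, computed from the Linear/RMSNorm shapes above.
attn  = 576*576 + 576*192 + 576*192 + 576*576   # q_proj, k_proj, v_proj, o_proj
mlp   = 576*1536 + 576*1536 + 1536*576          # gate_proj, up_proj, down_proj
norms = 576 + 576                               # input + post-attention RMSNorm
per_layer = attn + mlp + norms                  # 3,540,096

embed = 49152 * 576                             # token embeddings (lm_head is tied)
final_norm = 576

teacher = 30 * per_layer + embed + final_norm   # 134,515,008
student = 15 * per_layer + embed + final_norm   #  81,413,568
print(teacher, student, teacher - student)
```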
# Train Dataset
Trained on 84,857,838 tokens from the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dataset.
- Num Samples: `99,800`
- Subset: `20231101.en`
- Split: `train`
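For reference, a sketch of pulling the same slice with the `datasets` library; the 99,800-sample figure follows from `dataset_sample_size: 100000` with `dataset_test_size: 0.002` held out (Distily's own loader may differ in the details):

```python
from datasets import load_dataset

# English Wikipedia snapshot used for distillation.
ds = load_dataset("wikimedia/wikipedia", "20231101.en", split="train")

# Take the first 100,000 samples (dataset_shuffle is False) and hold out 0.2% for eval:
# 100,000 * (1 - 0.002) = 99,800 training samples, matching the figure above.
subset = ds.select(range(100_000)).train_test_split(test_size=0.002, seed=42)
print(len(subset["train"]), len(subset["test"]))  # 99800 200
```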
# Training Objective
```
DistillationObjective(
logits_loss_component=LossComponent(
weight=1,
loss_fn='kl'
),
hs_loss_component=LossComponent(
weight=0
),
attn_loss_component=LossComponent(
weight=0
)
)
```
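All of the weight sits on a KL divergence between teacher and student logits; the hidden-state and attention components are disabled (weight 0). A minimal PyTorch sketch of that logits loss, not Distily's exact implementation:

```python
import torch
import torch.nn.functional as F

def logits_kl_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    """KL(teacher || student) over the vocabulary, averaged across the batch."""
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    teacher_log_probs = F.log_softmax(teacher_logits, dim=-1)
    # kl_div expects log-probabilities as input; log_target=True lets the target be log-probs too.
    return F.kl_div(student_log_probs, teacher_log_probs, log_target=True, reduction="batchmean")
```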
# Hyperparameters
The following hyperparameters were used during training (an optimizer and LR-schedule sketch follows the list):
- learning_rate: `0.0002`
- train_batch_size: `4`
- eval_batch_size: `2`
- seed: `42`
- optimizer: `Adam with betas=(0.9,0.999) and epsilon=1e-08`
- lr_scheduler_type: `polynomial`
- num_epochs: `1.0`
- distillation_objective: `DistillationObjective(
logits_loss_component=LossComponent(
weight=1,
loss_fn='kl'
),
hs_loss_component=LossComponent(
weight=0
),
attn_loss_component=LossComponent(
weight=0
)
)`
- lr_scheduler: ``
- student_model_name_or_path: `None`
- student_config_name_or_path: `None`
- student_model_config: `{'num_hidden_layers': 15}`
- reinitialize_weights: `None`
- copy_teacher_modules: `[('lm_head', False)]`
- student_model_as_bitnet: `False`
- student_model_use_liger: `True`
- teacher_model_name_or_path: `HuggingFaceTB/SmolLM-135M`
- teacher_load_in_8bit: `False`
- teacher_load_in_4bit: `False`
- dataset_uri: `wikimedia/wikipedia`
- dataset_subset: `20231101.en`
- dataset_split: `train`
- dataset_column_name: `text`
- dataset_sample_size: `100000`
- dataset_test_size: `0.002`
- dataset_shuffle: `False`
- dataset_shuffle_seed: `42`
- dataset_trust_remote_code: `False`
- gradient_accumulation_steps: `1`
- weight_decay: `0.0`
- max_grad_norm: `1.0`
- warmup_ratio: `0.0`
- warmup_steps: `0`
- gradient_checkpointing: `True`
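Concretely, the optimizer and schedule values above correspond to something like the following sketch in plain `torch`/`transformers`; this is not Distily's training loop, and the stand-in model and step count are for illustration only:

```python
import torch
from transformers import get_polynomial_decay_schedule_with_warmup

model = torch.nn.Linear(8, 8)        # stand-in for the 15-layer student model
num_training_steps = 99_800 // 4     # num samples / train_batch_size, one epoch

optimizer = torch.optim.Adam(
    model.parameters(), lr=2e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.0
)
scheduler = get_polynomial_decay_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)
```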
# Framework Versions
- Distily 0.5.0
- Transformers 4.44.2
- Pytorch 2.5.0.dev20240911+cu121
- Datasets 2.21.0