See axolotl config
axolotl version: 0.3.0
```yaml
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true
eval_sample_packing: false # little data
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: data.json # or json
ds_type: json # see other options below
type: completion
dataset_prepared_path:
val_set_size: 0.05
# output_dir: ./lora-out
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
# adapter: lora
# lora_model_dir:
# lora_r: 32
# lora_alpha: 16
# lora_dropout: 0.05
# lora_target_linear: true
# lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
output_dir: ./tinyllama-out
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 8 #2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false #TODO: change to true
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
save_strategy: "no"
warmup_steps: 10
evals_per_epoch: 4
# saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
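The config above is a full fine-tune (the LoRA block is commented out) driven by axolotl. A minimal launch sketch, assuming the YAML is saved as tinyllama.yml (a hypothetical filename) and that axolotl 0.3.0 and accelerate are installed; the command is normally typed directly in a shell:

```python
# Equivalent shell command: accelerate launch -m axolotl.cli.train tinyllama.yml
import subprocess

subprocess.run(
    ["accelerate", "launch", "-m", "axolotl.cli.train", "tinyllama.yml"],
    check=True,
)
```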
tinyllama-out
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 1.8806
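For reference, assuming the reported loss is the mean per-token cross-entropy (the usual Trainer convention; not stated on this card), it corresponds to a perplexity of roughly 6.6:

```python
import math

eval_loss = 1.8806  # final validation loss reported above
perplexity = math.exp(eval_loss)  # only meaningful if the loss is mean per-token cross-entropy
print(f"perplexity ~= {perplexity:.2f}")  # ~= 6.56
```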
Model description
More information needed
Intended uses & limitations
More information needed
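As a minimal usage sketch (no official example is provided): the checkpoint is a plain Llama-architecture causal LM, so it should load with transformers like any other. The repo id msaavedra1234/tiny_t comes from the model tree at the end of this card, and the raw-text prompt reflects the `completion` dataset type in the config above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "msaavedra1234/tiny_t"  # or a local path such as ./tinyllama-out

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Trained with `type: completion`, so prompt with raw text rather than a chat template.
prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```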
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 8
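The total train batch size above follows from the per-device batch size and gradient accumulation; a quick sanity check, assuming a single GPU (the device count is not stated on this card):

```python
micro_batch_size = 2              # train_batch_size / micro_batch_size in the config
gradient_accumulation_steps = 4
num_devices = 1                   # assumption: single-GPU run
assert micro_batch_size * gradient_accumulation_steps * num_devices == 8  # total_train_batch_size
```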
Training results
Training Loss | Epoch | Step | Validation Loss |
---|---|---|---|
1.9894 | 0.13 | 1 | 1.5790 |
1.915 | 0.26 | 2 | 1.4849 |
1.642 | 0.52 | 4 | 1.4032 |
1.5396 | 0.77 | 6 | 1.4059 |
1.3746 | 1.03 | 8 | 1.4101 |
0.9355 | 1.23 | 10 | 1.5147 |
0.9266 | 1.48 | 12 | 1.5291 |
0.8006 | 1.74 | 14 | 1.4724 |
0.7664 | 2.0 | 16 | 1.4965 |
0.4813 | 2.16 | 18 | 1.5715 |
0.4193 | 2.42 | 20 | 1.5436 |
0.364 | 2.68 | 22 | 1.6040 |
0.3592 | 2.94 | 24 | 1.5823 |
0.1884 | 3.13 | 26 | 1.6850 |
0.159 | 3.39 | 28 | 1.8316 |
0.1641 | 3.65 | 30 | 1.7286 |
0.1512 | 3.9 | 32 | 1.7029 |
0.1563 | 4.06 | 34 | 1.7033 |
0.0696 | 4.32 | 36 | 1.7482 |
0.0643 | 4.58 | 38 | 1.8069 |
0.0662 | 4.84 | 40 | 1.8410 |
0.0709 | 5.1 | 42 | 1.8529 |
0.0344 | 5.26 | 44 | 1.8626 |
0.0468 | 5.52 | 46 | 1.8716 |
0.0328 | 5.77 | 48 | 1.8761 |
0.0353 | 6.03 | 50 | 1.8789 |
0.0375 | 6.23 | 52 | 1.8803 |
0.0345 | 6.48 | 54 | 1.8802 |
0.0346 | 6.74 | 56 | 1.8806 |
Framework versions
- Transformers 4.37.0.dev0
- PyTorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0
Model tree for msaavedra1234/tiny_t
- Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0