
Built with Axolotl

See axolotl config

axolotl version: 0.4.1

base_model: EleutherAI/pythia-125m-deduped
load_in_8bit: false
datasets:
  - path: jtatman/storywriting_combined_instruct
    type: alpaca
dataset_prepared_path: ds-storytelling
val_set_size: 0.05
adapter: lora
lora_model_dir:
sequence_len: 512
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
  - query_key_value
lora_target_linear: true 
lora_fan_in_fan_out: true  # pythia/GPTNeoX lora specific
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
output_dir: ./outputs/lora-alpaca-pythia-125m
gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 4
learning_rate: 0.00001
train_on_inputs: false
group_by_length: false
bf16: auto
tf32: false
float16: true
gpu_memory_limit: 8GiB
hub_model_id: jtatman/pythia-125m-gpt4-llm-cleaned 
lora_on_cpu: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
weight_decay: 0.1
evals_per_epoch: 4
logging_steps: 1
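
For readers more familiar with the PEFT API than with axolotl's YAML, the LoRA section of this config corresponds roughly to the sketch below. This is illustrative only, not the code axolotl runs internally; note also that `lora_target_linear: true` typically asks axolotl to target all linear layers, so the published adapter may cover more modules than the single `query_key_value` projection shown here.

```python
# Rough PEFT equivalent of the LoRA settings in the axolotl config above.
# Illustrative sketch only; axolotl constructs its own trainer from the YAML.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-125m-deduped")

lora_config = LoraConfig(
    r=16,                                # lora_r
    lora_alpha=32,                       # lora_alpha
    lora_dropout=0.05,                   # lora_dropout
    target_modules=["query_key_value"],  # fused attention projection in GPT-NeoX
    fan_in_fan_out=True,                 # mirrors lora_fan_in_fan_out (pythia/GPTNeoX note in the config)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # adapter weights are a small fraction of the 125M base
```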

pythia-125m-gpt4-llm-cleaned

This model is a LoRA adapter fine-tuned from EleutherAI/pythia-125m-deduped on the jtatman/storywriting_combined_instruct dataset. It achieves the following results on the evaluation set:

  • Loss: 2.6093
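
For intuition, assuming the reported loss is mean per-token cross-entropy (the usual convention), this corresponds to a validation perplexity of roughly exp(2.6093) ≈ 13.6:

```python
import math

# Perplexity implied by the reported validation loss, assuming it is
# mean per-token cross-entropy.
print(math.exp(2.6093))  # ~13.59
```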

Model description

This repository contains a LoRA adapter (r=16, alpha=32, dropout 0.05) for EleutherAI/pythia-125m-deduped, trained with Axolotl on story-writing instruction data in alpaca format. The adapter targets the query_key_value attention projection used by the GPT-NeoX architecture.

Intended uses & limitations

More information needed
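
While no usage guidance is documented, this repository is a LoRA adapter rather than a merged model, so it must be loaded on top of the base model. Below is a minimal inference sketch, assuming the adapter weights are published under the hub_model_id from the config above and using the common alpaca prompt template (the exact template applied at training time is determined by axolotl's alpaca prompt strategy).

```python
# Minimal inference sketch: attach the LoRA adapter to the base model.
# adapter_id is taken from hub_model_id in the config above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "EleutherAI/pythia-125m-deduped"
adapter_id = "jtatman/pythia-125m-gpt4-llm-cleaned"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Common alpaca-style prompt (assumed; adjust if the training template differed).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short story about a lighthouse keeper.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```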

Training and evaluation data

Per the axolotl config above, training used jtatman/storywriting_combined_instruct in alpaca format, with 5% of the data held out as a validation set (val_set_size: 0.05). An example of the record structure follows.
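
Alpaca-formatted datasets generally consist of records with instruction, input, and output fields. The row below is a hypothetical illustration of that structure, not an actual row from the dataset:

```python
# Hypothetical alpaca-style record; real rows come from
# jtatman/storywriting_combined_instruct.
example = {
    "instruction": "Write the opening paragraph of a mystery story set in a small coastal town.",
    "input": "",  # optional extra context; often empty in instruction-only data
    "output": "The fog rolled in off the harbor just before midnight...",
}
```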

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 4
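
Training was driven by axolotl rather than a hand-written Trainer, but the hyperparameters above map roughly onto the following Transformers TrainingArguments (output_dir, weight_decay, and logging_steps taken from the axolotl config). This is an illustrative sketch, not the exact invocation used:

```python
# Approximate TrainingArguments equivalent of the hyperparameters above.
# Illustrative only; axolotl assembles its own Trainer from the YAML config.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs/lora-alpaca-pythia-125m",
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=1,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    weight_decay=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    logging_steps=1,
)
```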

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.1847        | 0.0008 | 1    | 2.7080          |
| 2.8391        | 0.2504 | 326  | 2.6464          |
| 2.9722        | 0.5008 | 652  | 2.6312          |
| 2.9904        | 0.7512 | 978  | 2.6245          |
| 3.1704        | 1.0015 | 1304 | 2.6179          |
| 3.1108        | 1.2519 | 1630 | 2.6155          |
| 2.9321        | 1.5023 | 1956 | 2.6143          |
| 3.0499        | 1.7527 | 2282 | 2.6113          |
| 2.9776        | 2.0031 | 2608 | 2.6102          |
| 2.7773        | 2.2535 | 2934 | 2.6102          |
| 3.5383        | 2.5038 | 3260 | 2.6106          |
| 2.8105        | 2.7542 | 3586 | 2.6106          |
| 3.0778        | 3.0046 | 3912 | 2.6115          |
| 2.9706        | 3.2550 | 4238 | 2.6096          |
| 2.7671        | 3.5054 | 4564 | 2.6100          |
| 3.3049        | 3.7558 | 4890 | 2.6093          |

Framework versions

  • PEFT 0.11.1
  • Transformers 4.41.2
  • PyTorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1