---
license: llama3
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: causal-llama-3-8B-Instruct
  results: []
---
axolotl version: `0.4.0`
```yaml
adapter: qlora
base_model: meta-llama/Meta-Llama-3-8B-Instruct
base_model_config: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
  - path: ibivibiv/causal-lm-smaller_0
    type: alpaca
flash_attention: true
gradient_accumulation_steps: 4
gradient_checkpointing: true
hf_use_auth_token: true
hub_model_id: ibivibiv/causal-llama-3-8B-Instruct
learning_rate: 0.00025
load_in_4bit: true
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
micro_batch_size: 2
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: paged_adamw_32bit
output_dir: /job/out
sample_packing: true
save_safetensors: true
sequence_len: 4096
special_tokens:
  pad_token: <|end_of_text|>
tokenizer_type: AutoTokenizer
wandb_project: TuneStudio
wandb_run_id: causalllama0
wandb_watch: 'true'
warmup_steps: 10
```
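For readers who want to see what this config corresponds to outside axolotl, below is a minimal sketch of the equivalent `peft`/`bitsandbytes` adapter setup. The `target_modules` list is an assumption standing in for axolotl's `lora_target_linear: true`, which targets every linear projection in the Llama architecture; axolotl performs all of this internally.

```python
# Minimal sketch of the QLoRA setup implied by the config above.
# Assumption: lora_target_linear: true resolves to the standard Llama
# linear projections listed in target_modules below.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(load_in_4bit=True)  # load_in_4bit: true

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

lora_config = LoraConfig(
    r=32,               # lora_r
    lora_alpha=16,      # lora_alpha
    lora_dropout=0.05,  # lora_dropout
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```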
# causal-llama-3-8B-Instruct

This model is a QLoRA fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the ibivibiv/causal-lm-smaller_0 dataset.
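A hedged usage sketch, assuming the adapter weights are published under the `hub_model_id` from the config (`ibivibiv/causal-llama-3-8B-Instruct`). The alpaca-style prompt mirrors the dataset's `type: alpaca` setting, though the exact template used at training time is an assumption.

```python
# Load the base model, apply the LoRA adapter, and generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "ibivibiv/causal-llama-3-8B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

# Alpaca-style prompt (assumed; the dataset type in the config is alpaca).
prompt = "### Instruction:\nExplain what causes ocean tides.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```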
## Model description

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (as recorded in the axolotl config above):
- learning_rate: 0.00025
- micro_batch_size: 2
- gradient_accumulation_steps: 4
- optimizer: paged_adamw_32bit
- lr_scheduler: cosine
- warmup_steps: 10
- num_epochs: 3
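As a quick sanity check on the batch arithmetic: with micro_batch_size 2 and gradient_accumulation_steps 4, each optimizer step accumulates 2 × 4 = 8 packed sequences per device; the number of devices is not recorded in the config, so the total effective batch size cannot be stated here.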