---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: phi-sft-out
  results: []
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)
See axolotl config

axolotl version: `0.4.0`

```yaml
base_model: microsoft/phi-2
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: Intel/orca_dpo_pairs
    type:
      system_prompt: ""
      field_system: system
      field_instruction: question
      field_output: rejected
      field_output: chosen

dataset_prepared_path:
val_set_size: 0.05
output_dir: ./phi-sft-out

sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 1
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_torch
adam_beta2: 0.95
adam_epsilon: 0.00001
max_grad_norm: 1.0
lr_scheduler: cosine
learning_rate: 0.000003

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: True

early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.1
fsdp:
fsdp_config:
resize_token_embeddings_to_32x: true
special_tokens:
  pad_token: "<|endoftext|>"
```
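
The `datasets` block above maps the columns of Intel/orca_dpo_pairs (`system`, `question`, `chosen`, `rejected`) onto Axolotl's prompt fields. As a quick sanity check on that mapping, the columns can be inspected with the `datasets` library; the snippet below is a minimal sketch and is not part of the original training pipeline.

```python
# Minimal sketch (not part of the training pipeline): inspect the columns of
# Intel/orca_dpo_pairs that the Axolotl config above maps to its prompt fields.
from datasets import load_dataset

ds = load_dataset("Intel/orca_dpo_pairs", split="train")

print(ds.column_names)  # expected: ['system', 'question', 'chosen', 'rejected']

example = ds[0]
print(example["system"][:80])    # -> field_system
print(example["question"][:80])  # -> field_instruction
print(example["chosen"][:80])    # -> field_output used for supervision
```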

# phi-sft-out

This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the Intel/orca_dpo_pairs dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2999

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-05
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3053        | 0.0   | 1    | 1.3288          |
| 1.2314        | 0.25  | 287  | 1.3183          |
| 1.1664        | 0.5   | 574  | 1.3090          |
| 1.4349        | 0.75  | 861  | 1.3034          |
| 1.4875        | 1.0   | 1148 | 1.3012          |
| 1.3461        | 1.23  | 1435 | 1.3006          |
| 1.3247        | 1.48  | 1722 | 1.2998          |
| 1.2906        | 1.73  | 2009 | 1.2999          |

### Framework versions

- Transformers 4.37.0
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
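
## Example usage

The checkpoint can be loaded like any other `transformers` causal language model. The snippet below is a minimal inference sketch, not an official example from the training run; it assumes the fine-tuned weights are available locally at `./phi-sft-out` (or under whatever Hub repository id they are pushed to) and that a GPU with enough memory for phi-2 in float16 is available.

```python
# Minimal inference sketch, assuming the fine-tuned weights are in ./phi-sft-out
# (or replace the path with your own Hub repository id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./phi-sft-out"  # hypothetical local path; adjust to your checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Explain the difference between supervised fine-tuning and preference optimization."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=200,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.pad_token_id,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```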