---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-v0.1
model-index:
- name: Mixtral_Alpace_v3
  results: []
---

# Mixtral_Alpace_v3

This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8576

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1699        | 0.03  | 10   | 1.1218          |
| 1.0878        | 0.07  | 20   | 1.0544          |
| 1.0525        | 0.1   | 30   | 0.9935          |
| 0.9611        | 0.13  | 40   | 0.9529          |
| 0.931         | 0.16  | 50   | 0.9230          |
| 0.9212        | 0.2   | 60   | 0.8993          |
| 0.8918        | 0.23  | 70   | 0.8817          |
| 0.8808        | 0.26  | 80   | 0.8683          |
| 0.8575        | 0.3   | 90   | 0.8604          |
| 0.8848        | 0.33  | 100  | 0.8576          |

### Framework versions

- PEFT 0.9.1.dev0
- Transformers 4.36.0
- Pytorch 2.0.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
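
### Example training setup

The card does not include the original training script, dataset, or LoRA configuration. The sketch below shows one plausible way the hyperparameters listed above could map onto `transformers.TrainingArguments` and a `peft.LoraConfig`; the LoRA values are placeholders (the actual adapter settings live in the repo's `adapter_config.json`), and the dataset wiring is omitted because it is not recorded here.

```python
# Hedged reconstruction of the training setup; NOT the original script.
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Assumption: typical LoRA settings for Mixtral. The real values are in the
# adapter's config file, not in this card.
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)

# Values taken verbatim from the "Training hyperparameters" list above.
# The card reports lr_scheduler_warmup_steps: 0.03; a fractional step count
# suggests it was actually a warmup *ratio*, so warmup_ratio is used here.
args = TrainingArguments(
    output_dir="Mixtral_Alpace_v3",
    learning_rate=2.5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    max_steps=100,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

# The train/eval datasets are not documented in this card:
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```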
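
## How to use

Since this is a PEFT adapter rather than a full model, inference requires loading the base Mixtral checkpoint and attaching the adapter on top. This is a minimal sketch, assuming the adapter weights are published under the `Mixtral_Alpace_v3` repo id; substitute the actual repo id or local path.

```python
# Minimal inference sketch: load the base model, then attach this LoRA adapter.
# "Mixtral_Alpace_v3" is a placeholder for the adapter's repo id or local path.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mixtral-8x7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Mixtral-8x7B is large; device_map="auto" shards it across available devices,
# and float16 halves the memory footprint relative to float32.
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "Mixtral_Alpace_v3")

prompt = "Below is an instruction. Write a response that completes the request."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```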