---
license: other
tags:
- generated_from_trainer
datasets:
- HiTZ/alpaca_mt
model-index:
- name: alpaca-lora-30b-en-pt-es-ca-eu-gl-at
  results: []
---

# alpaca-lora-30b-en-pt-es-ca-eu-gl-at

This model is a fine-tuned version of [decapoda-research/llama-30b-hf](https://huggingface.co/decapoda-research/llama-30b-hf) on the HiTZ/alpaca_mt ['en', 'pt', 'es', 'ca', 'eu', 'gl', 'at'] dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9088

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 21
- total_train_batch_size: 126
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1695        | 0.04  | 100  | 1.1716          |
| 1.1211        | 0.07  | 200  | 1.0964          |
| 1.0591        | 0.11  | 300  | 1.0590          |
| 1.0234        | 0.14  | 400  | 1.0341          |
| 1.0345        | 0.18  | 500  | 1.0165          |
| 0.9932        | 0.22  | 600  | 1.0024          |
| 0.9948        | 0.25  | 700  | 0.9895          |
| 1.01          | 0.29  | 800  | 0.9794          |
| 0.9488        | 0.32  | 900  | 0.9708          |
| 0.9518        | 0.36  | 1000 | 0.9627          |
| 0.9463        | 0.4   | 1100 | 0.9557          |
| 0.956         | 0.43  | 1200 | 0.9498          |
| 0.9521        | 0.47  | 1300 | 0.9437          |
| 0.9345        | 0.51  | 1400 | 0.9385          |
| 0.9469        | 0.54  | 1500 | 0.9337          |
| 0.9466        | 0.58  | 1600 | 0.9297          |
| 0.9403        | 0.61  | 1700 | 0.9257          |
| 0.9179        | 0.65  | 1800 | 0.9219          |
| 0.9468        | 0.69  | 1900 | 0.9190          |
| 0.9173        | 0.72  | 2000 | 0.9163          |
| 0.9172        | 0.76  | 2100 | 0.9142          |
| 0.9351        | 0.79  | 2200 | 0.9124          |
| 0.9238        | 0.83  | 2300 | 0.9110          |
| 0.9057        | 0.87  | 2400 | 0.9099          |
| 0.9309        | 0.9   | 2500 | 0.9093          |
| 0.8893        | 0.94  | 2600 | 0.9090          |
| 0.9095        | 0.97  | 2700 | 0.9088          |

### Framework versions

- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
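For reference, the hyperparameter list above maps fairly directly onto 🤗 Transformers `TrainingArguments`. The sketch below is a reconstruction from that list, not the original training script; `output_dir` and the evaluation/logging cadence (inferred from the 100-step rows in the results table) are assumptions.

```python
# Hypothetical reconstruction of the hyperparameters above as TrainingArguments.
# The original training script is not included in this card; treat this as a
# starting point, not an exact reproduction recipe.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="alpaca-lora-30b-en-pt-es-ca-eu-gl-at",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    gradient_accumulation_steps=21,  # 6 * 21 = 126 examples per optimizer step
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=1,
    fp16=True,                       # "Native AMP" mixed precision
    optim="adamw_torch",             # Adam with betas=(0.9, 0.999), eps=1e-8
    evaluation_strategy="steps",
    eval_steps=100,                  # assumed from the 100-step eval cadence above
    logging_steps=100,               # assumed
)
```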
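## How to use

Assuming this repository contains a PEFT LoRA adapter on top of the base model (as the "lora" in the model name suggests), a minimal loading sketch might look like the following. The adapter repository id is a placeholder, and the prompt format is the standard Alpaca template, which this card does not explicitly confirm.

```python
# Minimal, hypothetical loading sketch for a LoRA adapter with 🤗 PEFT.
# Adjust the adapter id, dtype, and device placement to your setup.
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model_id = "decapoda-research/llama-30b-hf"
adapter_id = "alpaca-lora-30b-en-pt-es-ca-eu-gl-at"  # placeholder: this repo's id

# Instantiating LlamaTokenizer directly sidesteps the known tokenizer_class
# quirk in the decapoda-research repos.
tokenizer = LlamaTokenizer.from_pretrained(base_model_id)
base_model = LlamaForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,  # 30B weights are large; fp16 halves memory
    device_map="auto",          # requires `accelerate`; shards across GPUs
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Alpaca-style instruction prompt (assumed; the dataset is a translated Alpaca).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTranslate 'good morning' into Basque.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```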