---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-downstream-build_rr
  results: []
---

# roberta-base-downstream-build_rr

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 0.8272
- Precision-macro: 0.6089
- Recall-macro: 0.5868
- Macro-f1: 0.5926
- Precision-micro: 0.7798
- Recall-micro: 0.7798
- Micro-f1: 0.7798
- Accuracy: 0.7798

(The identical micro-averaged values are expected for a single-label classifier; see the note at the end of this card.)

## Model description

More information needed

## Intended uses & limitations

More information needed. A minimal usage sketch is included at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough `TrainingArguments` sketch corresponding to this list appears at the end of this card):

- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision-macro | Recall-macro | Macro-f1 | Precision-micro | Recall-micro | Micro-f1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:--------:|
| No log        | 1.0   | 62   | 1.1797          | 0.3651          | 0.2425       | 0.2406   | 0.6509          | 0.6509       | 0.6509   | 0.6509   |
| No log        | 2.0   | 124  | 0.8354          | 0.5350          | 0.5291       | 0.5255   | 0.7350          | 0.7350       | 0.7350   | 0.7350   |
| No log        | 3.0   | 186  | 0.8058          | 0.5559          | 0.5382       | 0.5366   | 0.7343          | 0.7343       | 0.7343   | 0.7343   |
| No log        | 4.0   | 248  | 0.7718          | 0.6246          | 0.5201       | 0.5300   | 0.7503          | 0.7503       | 0.7503   | 0.7503   |
| No log        | 5.0   | 310  | 0.7307          | 0.5890          | 0.5463       | 0.5579   | 0.7642          | 0.7642       | 0.7642   | 0.7642   |
| No log        | 6.0   | 372  | 0.7099          | 0.6076          | 0.5431       | 0.5481   | 0.7746          | 0.7746       | 0.7746   | 0.7746   |
| No log        | 7.0   | 434  | 0.7072          | 0.6090          | 0.5126       | 0.5261   | 0.7812          | 0.7812       | 0.7812   | 0.7812   |
| No log        | 8.0   | 496  | 0.6919          | 0.6321          | 0.5471       | 0.5676   | 0.7826          | 0.7826       | 0.7826   | 0.7826   |
| 0.8758        | 9.0   | 558  | 0.7503          | 0.5666          | 0.5818       | 0.5696   | 0.7735          | 0.7735       | 0.7735   | 0.7735   |
| 0.8758        | 10.0  | 620  | 0.7512          | 0.6054          | 0.5656       | 0.5755   | 0.7784          | 0.7784       | 0.7784   | 0.7784   |
| 0.8758        | 11.0  | 682  | 0.7656          | 0.6086          | 0.5835       | 0.5913   | 0.7829          | 0.7829       | 0.7829   | 0.7829   |
| 0.8758        | 12.0  | 744  | 0.7861          | 0.5972          | 0.5885       | 0.5843   | 0.7739          | 0.7739       | 0.7739   | 0.7739   |
| 0.8758        | 13.0  | 806  | 0.8239          | 0.5975          | 0.5749       | 0.5701   | 0.7780          | 0.7780       | 0.7780   | 0.7780   |
| 0.8758        | 14.0  | 868  | 0.8272          | 0.6089          | 0.5868       | 0.5926   | 0.7798          | 0.7798       | 0.7798   | 0.7798   |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
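
## How to use (sketch)

Since usage is not documented above, here is a minimal inference sketch. It assumes single-label sequence classification (which the micro-averaged metrics suggest) and that the checkpoint is available locally or on the Hub; the identifier `roberta-base-downstream-build_rr` below is hypothetical and should be replaced with the real path or repo id.

```python
# Minimal inference sketch. Assumptions (not documented on this card):
# single-label sequence classification; checkpoint available at a local
# path or Hub repo id. "roberta-base-downstream-build_rr" is hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "roberta-base-downstream-build_rr"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("Example text to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```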
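
## Reproducing the hyperparameters (sketch)

The training hyperparameters listed above map roughly onto the `transformers` `Trainer` API as follows. This is a sketch only: the dataset, preprocessing, and metric computation are not documented on this card and are therefore omitted, and `fp16=True` stands in for "Native AMP".

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# Dataset, preprocessing, and compute_metrics are not documented on this
# card, so they are left out.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-downstream-build_rr",
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
```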
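
## Note on the micro-averaged metrics

In single-label multi-class classification, every example receives exactly one prediction, so micro-averaged precision, recall, and F1 all reduce to plain accuracy; this is why the Precision-micro, Recall-micro, Micro-f1, and Accuracy columns are identical throughout the table above. A quick check with scikit-learn (shown only for illustration; the labels below are made up):

```python
# Demonstrates that micro P/R/F1 equal accuracy for single-label tasks.
# The label values here are illustrative; only the equality is the point.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 1, 2, 1, 2]

p_micro, r_micro, f_micro, _ = precision_recall_fscore_support(
    y_true, y_pred, average="micro"
)
p_macro, r_macro, f_macro, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro"
)
acc = accuracy_score(y_true, y_pred)

assert p_micro == r_micro == f_micro == acc  # holds for single-label tasks
print(f"micro P/R/F1 = accuracy = {acc:.4f}, macro F1 = {f_macro:.4f}")
```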