---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta_large-chunking_0715_v0
  results: []
---

# roberta_large-chunking_0715_v0

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3602
- Precision: 0.3182
- Recall: 0.2213
- F1: 0.2610
- Accuracy: 0.8681

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 63   | 0.4019          | 0.5525    | 0.0824 | 0.1434 | 0.8748   |
| No log        | 2.0   | 126  | 0.3614          | 0.4887    | 0.1517 | 0.2315 | 0.8747   |
| No log        | 3.0   | 189  | 0.3569          | 0.4484    | 0.1638 | 0.2399 | 0.8744   |
| No log        | 4.0   | 252  | 0.3581          | 0.3685    | 0.1909 | 0.2515 | 0.8719   |
| No log        | 5.0   | 315  | 0.3602          | 0.3182    | 0.2213 | 0.2610 | 0.8681   |

### Framework versions

- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
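
The hyperparameters listed under "Training hyperparameters" map onto `transformers.TrainingArguments` roughly as sketched below. This is a reconstruction from the values reported on this card, not the original training script; dataset loading and the `Trainer` call are omitted because the training data is not documented, and the `output_dir` is an assumption.

```python
from transformers import TrainingArguments

# Sketch of the reported configuration (Transformers 4.20.x argument names).
training_args = TrainingArguments(
    output_dir="roberta_large-chunking_0715_v0",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```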
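
For inference, a minimal loading sketch is shown below, treating the checkpoint as a token-classification (chunking) model. The model ID is assumed to match this card's name and may need a Hub namespace or local path; since the dataset and tagging scheme are not documented, inspect `model.config.id2label` before relying on the outputs.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "roberta_large-chunking_0715_v0"  # hypothetical Hub ID or local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# The label set is undocumented on this card, so check it first.
print(model.config.id2label)

chunker = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge sub-word pieces into word-level spans
)
print(chunker("The quick brown fox jumps over the lazy dog."))
```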