---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
base_model: nlpaueb/legal-bert-base-uncased
metrics:
- accuracy
- precision
- recall
model-index:
- name: legal-bert-base-uncased
  results: []
---
# legal-bert-base-uncased

This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set (a sketch of how metrics of this shape can be computed follows the list):
- Loss: 1.1536
- Accuracy: 0.8203
- Precision: 0.8212
- Recall: 0.8203
- Precision Macro: 0.7660
- Recall Macro: 0.7548
- Macro Fpr: 0.0156
- Weighted Fpr: 0.0150
- Weighted Specificity: 0.9766
- Macro Specificity: 0.9867
- Weighted Sensitivity: 0.8242
- Macro Sensitivity: 0.7548
- F1 Micro: 0.8242
- F1 Macro: 0.7566
- F1 Weighted: 0.8221
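
The evaluation script is not included in this card, so the following is only a minimal sketch of how metric families of this shape (accuracy; micro/macro/weighted precision, recall, and F1; macro specificity and false-positive rate) are commonly computed with scikit-learn. The function name and the toy labels are illustrative assumptions, not the code that produced the numbers above.

```python
# Sketch only: this is NOT the evaluation code used for this model; it assumes
# scikit-learn and illustrates how the reported metric families are defined.
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    multilabel_confusion_matrix,
    precision_recall_fscore_support,
)

def compute_metrics(y_true, y_pred):
    """Accuracy, micro/macro/weighted precision/recall/F1, and the
    macro-averaged specificity and false-positive rate."""
    metrics = {"accuracy": accuracy_score(y_true, y_pred)}
    for avg in ("micro", "macro", "weighted"):
        p, r, f1, _ = precision_recall_fscore_support(
            y_true, y_pred, average=avg, zero_division=0
        )
        metrics[f"precision_{avg}"] = p
        metrics[f"recall_{avg}"] = r
        metrics[f"f1_{avg}"] = f1
    # One binary confusion matrix per class, laid out as [[tn, fp], [fn, tp]].
    mcm = multilabel_confusion_matrix(y_true, y_pred)
    tn, fp = mcm[:, 0, 0], mcm[:, 0, 1]
    metrics["macro_specificity"] = np.mean(tn / (tn + fp))  # true-negative rate
    metrics["macro_fpr"] = np.mean(fp / (fp + tn))          # false-positive rate
    return metrics

# Toy usage with three classes:
print(compute_metrics([0, 1, 2, 2, 1], [0, 2, 2, 2, 1]))
```
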
## Model description
More information needed
## Intended uses & limitations
More information needed
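
Absent fuller documentation, a minimal inference sketch follows. The Hub repo id and the example sentence are placeholders, and the id-to-label mapping is unknown since the dataset is not documented.

```python
# Sketch only: repo id, example text, and label interpretation are assumptions;
# the card does not document the dataset or the class labels.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-username/legal-bert-base-uncased"  # hypothetical Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "The lessee shall maintain the premises in good repair."
inputs = tokenizer(text, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = logits.argmax(dim=-1).item()
print(pred_id)  # integer class id; the id-to-label mapping is not documented
```
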
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
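
These values map directly onto `transformers.TrainingArguments`. The sketch below is an assumption about how the run was configured, not the actual training script; the output directory, the evaluation strategy, and the commented-out `Trainer` wiring are placeholders.

```python
# Sketch only: mirrors the hyperparameters above; paths, datasets, and the
# evaluation strategy are placeholders, not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="legal-bert-base-uncased",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,          # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,               # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: the table reports per-epoch eval
)

# A Trainer would then be wired up roughly like this (placeholders throughout):
# trainer = Trainer(
#     model=model,                  # the fine-tuned sequence classifier
#     args=training_args,
#     train_dataset=train_dataset,  # dataset is not documented in this card
#     eval_dataset=eval_dataset,
#     compute_metrics=compute_metrics,
# )
# trainer.train()
```
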
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Precision Macro | Recall Macro | Macro Fpr | Weighted Fpr | Weighted Specificity | Macro Specificity | Weighted Sensitivity | Macro Sensitivity | F1 Micro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:---------------:|:------------:|:---------:|:------------:|:--------------------:|:-----------------:|:--------------------:|:-----------------:|:--------:|:--------:|:-----------:|
| 1.1096 | 1.0 | 643 | 0.6748 | 0.7978 | 0.7855 | 0.7978 | 0.6239 | 0.6340 | 0.0188 | 0.0178 | 0.9702 | 0.9845 | 0.7978 | 0.6340 | 0.7978 | 0.6134 | 0.7840 |
| 0.6187 | 2.0 | 1286 | 0.6449 | 0.8110 | 0.8196 | 0.8110 | 0.7806 | 0.7327 | 0.0169 | 0.0164 | 0.9755 | 0.9858 | 0.8110 | 0.7327 | 0.8110 | 0.7268 | 0.8090 |
| 0.4747 | 3.0 | 1929 | 0.8151 | 0.8149 | 0.8192 | 0.8149 | 0.7659 | 0.7390 | 0.0166 | 0.0160 | 0.9761 | 0.9861 | 0.8149 | 0.7390 | 0.8149 | 0.7370 | 0.8125 |
| 0.2645 | 4.0 | 2572 | 0.9345 | 0.8218 | 0.8198 | 0.8218 | 0.7446 | 0.7413 | 0.0158 | 0.0152 | 0.9774 | 0.9866 | 0.8218 | 0.7413 | 0.8218 | 0.7385 | 0.8189 |
| 0.1901 | 5.0 | 3215 | 1.0929 | 0.8195 | 0.8242 | 0.8195 | 0.8264 | 0.7432 | 0.0161 | 0.0155 | 0.9750 | 0.9863 | 0.8195 | 0.7432 | 0.8195 | 0.7595 | 0.8166 |
| 0.1131 | 6.0 | 3858 | 1.1536 | 0.8203 | 0.8212 | 0.8203 | 0.7968 | 0.7786 | 0.0159 | 0.0154 | 0.9766 | 0.9865 | 0.8203 | 0.7786 | 0.8203 | 0.7840 | 0.8197 |
| 0.063 | 7.0 | 4501 | 1.3218 | 0.8118 | 0.8184 | 0.8118 | 0.7518 | 0.7526 | 0.0166 | 0.0163 | 0.9773 | 0.9859 | 0.8118 | 0.7526 | 0.8118 | 0.7495 | 0.8136 |
| 0.0264 | 8.0 | 5144 | 1.3863 | 0.8257 | 0.8262 | 0.8257 | 0.7784 | 0.7768 | 0.0155 | 0.0149 | 0.9768 | 0.9868 | 0.8257 | 0.7768 | 0.8257 | 0.7730 | 0.8247 |
| 0.03 | 9.0 | 5787 | 1.5542 | 0.8079 | 0.8167 | 0.8079 | 0.7639 | 0.7653 | 0.0172 | 0.0167 | 0.9744 | 0.9855 | 0.8079 | 0.7653 | 0.8079 | 0.7595 | 0.8096 |
| 0.0149 | 10.0 | 6430 | 1.5835 | 0.8141 | 0.8155 | 0.8141 | 0.7545 | 0.7361 | 0.0168 | 0.0160 | 0.9730 | 0.9858 | 0.8141 | 0.7361 | 0.8141 | 0.7412 | 0.8127 |
| 0.005 | 11.0 | 7073 | 1.5325 | 0.8242 | 0.8250 | 0.8242 | 0.7805 | 0.7812 | 0.0156 | 0.0150 | 0.9758 | 0.9867 | 0.8242 | 0.7812 | 0.8242 | 0.7681 | 0.8226 |
| 0.003 | 12.0 | 7716 | 1.5714 | 0.8288 | 0.8299 | 0.8288 | 0.7701 | 0.7679 | 0.0152 | 0.0145 | 0.9765 | 0.9870 | 0.8288 | 0.7679 | 0.8288 | 0.7626 | 0.8276 |
| 0.0033 | 13.0 | 8359 | 1.5511 | 0.8249 | 0.8219 | 0.8249 | 0.7676 | 0.7598 | 0.0156 | 0.0149 | 0.9760 | 0.9867 | 0.8249 | 0.7598 | 0.8249 | 0.7608 | 0.8225 |
| 0.0018 | 14.0 | 9002 | 1.5510 | 0.8249 | 0.8225 | 0.8249 | 0.7686 | 0.7554 | 0.0155 | 0.0149 | 0.9767 | 0.9868 | 0.8249 | 0.7554 | 0.8249 | 0.7572 | 0.8224 |
| 0.0008 | 15.0 | 9645 | 1.5469 | 0.8242 | 0.8220 | 0.8242 | 0.7660 | 0.7548 | 0.0156 | 0.0150 | 0.9766 | 0.9867 | 0.8242 | 0.7548 | 0.8242 | 0.7566 | 0.8221 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2