# legal-bert-lora

This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.6841
- Accuracy: 0.8048
- Precision: 0.7955
- Recall: 0.8048
- Precision Macro: 0.6332
- Recall Macro: 0.6316
- Macro Fpr: 0.0177
- Weighted Fpr: 0.0170
- Weighted Specificity: 0.9753
- Macro Specificity: 0.9853
- Weighted Sensitivity: 0.8048
- Macro Sensitivity: 0.6316
- F1 Micro: 0.8048
- F1 Macro: 0.6233
- F1 Weighted: 0.7978
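The weighted and macro figures above follow the standard multi-class averaging conventions (for single-label classification, F1 micro equals accuracy, which is why those two values match). As a minimal sketch, metrics of this kind can be recomputed from per-example labels and predictions with scikit-learn roughly as follows; the FPR/specificity aggregation shown is an assumption about how those columns were derived, not code taken from this repository.

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score, confusion_matrix, precision_recall_fscore_support,
)

def evaluation_metrics(y_true, y_pred):
    """Recompute the headline metrics from integer class labels/predictions."""
    accuracy = accuracy_score(y_true, y_pred)
    prec_w, rec_w, f1_w, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0)
    prec_m, rec_m, f1_m, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)

    # Per-class false-positive rate and specificity from the confusion matrix,
    # then macro-averaged (assumed aggregation).
    cm = confusion_matrix(y_true, y_pred)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - (tp + fp + fn)
    macro_fpr = float(np.mean(fp / (fp + tn)))
    macro_specificity = float(np.mean(tn / (tn + fp)))

    return {
        "accuracy": accuracy,
        "precision": prec_w, "recall": rec_w, "f1_weighted": f1_w,
        "precision_macro": prec_m, "recall_macro": rec_m, "f1_macro": f1_m,
        "macro_fpr": macro_fpr, "macro_specificity": macro_specificity,
    }
```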
## Model description
More information needed
## Intended uses & limitations
More information needed
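No usage guidance is provided in this card. As a hedged sketch only, assuming the checkpoint follows the standard PEFT adapter layout for sequence classification, it could be loaded on top of the base model roughly as shown below; `NUM_LABELS` is a placeholder that must match the adapter's classification head, and the example sentence is illustrative.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

BASE = "nlpaueb/legal-bert-base-uncased"
ADAPTER = "xshubhamx/legal-bert-lora"
NUM_LABELS = 2  # placeholder: replace with the label count the adapter was trained with

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=NUM_LABELS)
model = PeftModel.from_pretrained(base_model, ADAPTER)
model.eval()

text = "The lessee shall be liable for all damages arising from negligence."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())
```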
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
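As a sketch only, these settings map onto Hugging Face Transformers `TrainingArguments` roughly as follows; the output directory and per-epoch evaluation strategy are assumptions (the results table below reports metrics once per epoch), and the LoRA adapter configuration itself is not documented in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="legal-bert-lora",     # assumed output directory name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,    # effective train batch size: 8 * 4 = 32
    num_train_epochs=15,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",      # assumption, matching the per-epoch results table
)
# With this configuration the default optimizer is AdamW with betas=(0.9, 0.999)
# and epsilon=1e-08, matching the values listed above.
```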
### Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Precision Macro | Recall Macro | Macro Fpr | Weighted Fpr | Weighted Specificity | Macro Specificity | Weighted Sensitivity | Macro Sensitivity | F1 Micro | F1 Macro | F1 Weighted |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
No log | 1.0 | 160 | 1.2986 | 0.6421 | 0.5563 | 0.6421 | 0.2826 | 0.3627 | 0.0384 | 0.0383 | 0.9531 | 0.9730 | 0.6421 | 0.3627 | 0.6421 | 0.3114 | 0.5878 |
No log | 2.0 | 321 | 0.8962 | 0.7273 | 0.6748 | 0.7273 | 0.3629 | 0.4471 | 0.0265 | 0.0261 | 0.9685 | 0.9797 | 0.7273 | 0.4471 | 0.7273 | 0.3889 | 0.6926 |
No log | 3.0 | 482 | 0.7814 | 0.7413 | 0.7104 | 0.7413 | 0.3985 | 0.4561 | 0.0245 | 0.0243 | 0.9703 | 0.9808 | 0.7413 | 0.4561 | 0.7413 | 0.4041 | 0.7109 |
1.2548 | 4.0 | 643 | 0.7648 | 0.7382 | 0.7158 | 0.7382 | 0.4273 | 0.4496 | 0.0254 | 0.0247 | 0.9662 | 0.9803 | 0.7382 | 0.4496 | 0.7382 | 0.4122 | 0.7112 |
1.2548 | 5.0 | 803 | 0.7329 | 0.7452 | 0.7105 | 0.7452 | 0.4162 | 0.4569 | 0.0248 | 0.0238 | 0.9668 | 0.9808 | 0.7452 | 0.4569 | 0.7452 | 0.4120 | 0.7133 |
1.2548 | 6.0 | 964 | 0.7430 | 0.7568 | 0.7547 | 0.7568 | 0.4627 | 0.4868 | 0.0229 | 0.0224 | 0.9710 | 0.9819 | 0.7568 | 0.4868 | 0.7568 | 0.4504 | 0.7424 |
0.6432 | 7.0 | 1125 | 0.7300 | 0.7723 | 0.7524 | 0.7723 | 0.5180 | 0.5411 | 0.0213 | 0.0206 | 0.9724 | 0.9830 | 0.7723 | 0.5411 | 0.7723 | 0.5175 | 0.7578 |
0.6432 | 8.0 | 1286 | 0.7212 | 0.7699 | 0.7514 | 0.7699 | 0.5096 | 0.5397 | 0.0216 | 0.0209 | 0.9727 | 0.9828 | 0.7699 | 0.5397 | 0.7699 | 0.5123 | 0.7556 |
0.6432 | 9.0 | 1446 | 0.6910 | 0.7839 | 0.7634 | 0.7839 | 0.5217 | 0.5566 | 0.0200 | 0.0193 | 0.9728 | 0.9838 | 0.7839 | 0.5566 | 0.7839 | 0.5280 | 0.7690 |
0.4841 | 10.0 | 1607 | 0.7122 | 0.7878 | 0.7732 | 0.7878 | 0.5355 | 0.5777 | 0.0195 | 0.0189 | 0.9748 | 0.9842 | 0.7878 | 0.5777 | 0.7878 | 0.5495 | 0.7776 |
0.4841 | 11.0 | 1768 | 0.6813 | 0.7916 | 0.7782 | 0.7916 | 0.5712 | 0.5765 | 0.0191 | 0.0185 | 0.9744 | 0.9844 | 0.7916 | 0.5765 | 0.7916 | 0.5563 | 0.7805 |
0.4841 | 12.0 | 1929 | 0.6845 | 0.7978 | 0.7922 | 0.7978 | 0.6111 | 0.6226 | 0.0184 | 0.0178 | 0.9759 | 0.9849 | 0.7978 | 0.6226 | 0.7978 | 0.6092 | 0.7927 |
0.3838 | 13.0 | 2089 | 0.6929 | 0.7986 | 0.7947 | 0.7986 | 0.6347 | 0.6038 | 0.0184 | 0.0177 | 0.9743 | 0.9849 | 0.7986 | 0.6038 | 0.7986 | 0.5954 | 0.7903 |
0.3838 | 14.0 | 2250 | 0.6929 | 0.8017 | 0.7960 | 0.8017 | 0.6369 | 0.6270 | 0.0180 | 0.0174 | 0.9754 | 0.9851 | 0.8017 | 0.6270 | 0.8017 | 0.6174 | 0.7952 |
0.3838 | 14.93 | 2400 | 0.6841 | 0.8048 | 0.7955 | 0.8048 | 0.6332 | 0.6316 | 0.0177 | 0.0170 | 0.9753 | 0.9853 | 0.8048 | 0.6316 | 0.8048 | 0.6233 | 0.7978 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1