# IndianLegalBERT

This model is a fine-tuned version of [law-ai/InLegalBERT](https://huggingface.co/law-ai/InLegalBERT) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.2872
- Accuracy: 0.8218
- Precision: 0.8227
- Recall: 0.8218
- Precision Macro: 0.7823
- Recall Macro: 0.7855
- Macro Fpr: 0.0158
- Weighted Fpr: 0.0152
- Weighted Specificity: 0.9773
- Macro Specificity: 0.9866
- Weighted Sensitivity: 0.8218
- Macro Sensitivity: 0.7855
- F1 Micro: 0.8218
- F1 Macro: 0.7809
- F1 Weighted: 0.8211
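
The card does not document usage, so the snippet below is a minimal inference sketch rather than the author's published code. It assumes the checkpoint is a sequence-classification head on top of InLegalBERT, published on the Hub under this repo's id (`xshubhamx/IndianLegalBERT`); the input text is illustrative and the label set is whatever the checkpoint's config defines.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the fine-tune is a sequence-classification checkpoint
# published as xshubhamx/IndianLegalBERT on the Hugging Face Hub.
model_id = "xshubhamx/IndianLegalBERT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "The appellant challenges the order of the High Court."  # illustrative input
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = int(logits.argmax(dim=-1))
# id2label is read from the checkpoint's config; the label names depend on
# the (undocumented) training data.
print(model.config.id2label[pred_id])
```
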
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
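
The training script is not included in the card; the sketch below shows how these values map onto `transformers.TrainingArguments` under the standard `Trainer` API, which is an assumption about how the model was trained. The Adam betas and epsilon listed above are the optimizer defaults, so they need no extra flags.

```python
from transformers import TrainingArguments

# Hyperparameters copied from the list above. output_dir is a placeholder;
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
training_args = TrainingArguments(
    output_dir="IndianLegalBERT",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: the results table reports per-epoch eval
)
```
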
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Precision Macro | Recall Macro | Macro Fpr | Weighted Fpr | Weighted Specificity | Macro Specificity | Weighted Sensitivity | Macro Sensitivity | F1 Micro | F1 Macro | F1 Weighted |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1.1031 | 1.0 | 643 | 0.6873 | 0.7854 | 0.7628 | 0.7854 | 0.5923 | 0.6107 | 0.0201 | 0.0191 | 0.9691 | 0.9836 | 0.7854 | 0.6107 | 0.7854 | 0.5863 | 0.7674 |
| 0.5953 | 2.0 | 1286 | 0.6741 | 0.8195 | 0.8135 | 0.8195 | 0.7481 | 0.7363 | 0.0162 | 0.0155 | 0.9753 | 0.9863 | 0.8195 | 0.7363 | 0.8195 | 0.7377 | 0.8153 |
| 0.4673 | 3.0 | 1929 | 0.7955 | 0.8242 | 0.8206 | 0.8242 | 0.7588 | 0.7421 | 0.0157 | 0.0150 | 0.9749 | 0.9866 | 0.8242 | 0.7421 | 0.8242 | 0.7433 | 0.8204 |
| 0.2292 | 4.0 | 2572 | 0.8666 | 0.8280 | 0.8297 | 0.8280 | 0.7945 | 0.7864 | 0.0151 | 0.0146 | 0.9786 | 0.9871 | 0.8280 | 0.7864 | 0.8280 | 0.7840 | 0.8270 |
| 0.1583 | 5.0 | 3215 | 0.9898 | 0.8335 | 0.8348 | 0.8335 | 0.8115 | 0.7893 | 0.0147 | 0.0141 | 0.9778 | 0.9874 | 0.8335 | 0.7893 | 0.8335 | 0.7926 | 0.8308 |
| 0.0975 | 6.0 | 3858 | 1.1179 | 0.8218 | 0.8260 | 0.8218 | 0.8185 | 0.7573 | 0.0158 | 0.0152 | 0.9781 | 0.9867 | 0.8218 | 0.7573 | 0.8218 | 0.7656 | 0.8203 |
| 0.0529 | 7.0 | 4501 | 1.1545 | 0.8211 | 0.8205 | 0.8211 | 0.7916 | 0.7691 | 0.0160 | 0.0153 | 0.9758 | 0.9865 | 0.8211 | 0.7691 | 0.8211 | 0.7773 | 0.8203 |
| 0.0184 | 8.0 | 5144 | 1.2160 | 0.8234 | 0.8248 | 0.8234 | 0.7770 | 0.7829 | 0.0157 | 0.0151 | 0.9774 | 0.9867 | 0.8234 | 0.7829 | 0.8234 | 0.7771 | 0.8229 |
| 0.0186 | 9.0 | 5787 | 1.2777 | 0.8226 | 0.8244 | 0.8226 | 0.7882 | 0.7851 | 0.0157 | 0.0152 | 0.9774 | 0.9867 | 0.8226 | 0.7851 | 0.8226 | 0.7827 | 0.8223 |
| 0.007 | 10.0 | 6430 | 1.2872 | 0.8218 | 0.8227 | 0.8218 | 0.7823 | 0.7855 | 0.0158 | 0.0152 | 0.9773 | 0.9866 | 0.8218 | 0.7855 | 0.8218 | 0.7809 | 0.8211 |
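
The card does not say how the table's metrics were computed. The sketch below is one plausible `compute_metrics` function (for use with `Trainer`) built on scikit-learn; the averaging choices are inferred from the column names (note that weighted recall equals accuracy, which matches the table). The specificity and false-positive-rate columns would require the per-class confusion matrix and are omitted here.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def compute_metrics(eval_pred):
    """Plausible reconstruction of the reported metrics; not the author's code."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision_score(labels, preds, average="weighted", zero_division=0),
        "recall": recall_score(labels, preds, average="weighted", zero_division=0),
        "precision_macro": precision_score(labels, preds, average="macro", zero_division=0),
        "recall_macro": recall_score(labels, preds, average="macro", zero_division=0),
        "f1_micro": f1_score(labels, preds, average="micro"),
        "f1_macro": f1_score(labels, preds, average="macro"),
        "f1_weighted": f1_score(labels, preds, average="weighted"),
    }
```
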
### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2