# Experiment1-system1-roberta-base-finetuned-ner
This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.1869
- Precision: 0.9424
- Recall: 0.9335
- F1: 0.9379
- Accuracy: 0.9330
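
Since the training data and label set are not documented here, the snippet below is only a minimal inference sketch: it assumes the checkpoint is available under the repo id shown in this card and that the standard Transformers `token-classification` pipeline applies.

```python
# Minimal inference sketch (not part of the original training code); the repo id
# below is taken from this card, everything else is standard pipeline usage.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="lobrien001/Experiment1-system1-roberta-base-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Barack Obama visited Berlin in 2013."))
```

The entity labels returned depend on the tag set the model was trained with, which this card does not document.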
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
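
As a rough guide, the sketch below shows how these values map onto the `TrainingArguments`/`Trainer` API; the dataset, label list, and preprocessing are placeholders, since the card does not document them, and only the hyperparameter values are taken from this section.

```python
# Hedged sketch: only the hyperparameter values below come from this card.
# NUM_LABELS, train_dataset, and eval_dataset are undocumented placeholders.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# RoBERTa tokenizers need add_prefix_space=True when tokenizing pre-split words.
tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(
    "roberta-base", num_labels=NUM_LABELS  # label count not documented in this card
)

training_args = TrainingArguments(
    output_dir="Experiment1-system1-roberta-base-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # not documented in this card
    eval_dataset=eval_dataset,    # not documented in this card
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
)
trainer.train()
```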
### Training results
Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
---|---|---|---|---|---|---|---|
No log | 0.04 | 10 | 0.6735 | 0.7957 | 0.8581 | 0.8257 | 0.7957 |
No log | 0.07 | 20 | 0.5323 | 0.7957 | 0.8581 | 0.8257 | 0.7957 |
No log | 0.11 | 30 | 0.4444 | 0.8146 | 0.8785 | 0.8453 | 0.8146 |
No log | 0.15 | 40 | 0.3747 | 0.8393 | 0.8973 | 0.8674 | 0.8481 |
No log | 0.19 | 50 | 0.3110 | 0.8734 | 0.8943 | 0.8837 | 0.8777 |
No log | 0.22 | 60 | 0.2818 | 0.8934 | 0.9031 | 0.8982 | 0.8906 |
No log | 0.26 | 70 | 0.2628 | 0.9277 | 0.8946 | 0.9108 | 0.9031 |
No log | 0.3 | 80 | 0.2407 | 0.9190 | 0.9160 | 0.9175 | 0.9133 |
No log | 0.34 | 90 | 0.2861 | 0.9285 | 0.8775 | 0.9023 | 0.8883 |
No log | 0.37 | 100 | 0.2523 | 0.9024 | 0.9150 | 0.9086 | 0.9073 |
No log | 0.41 | 110 | 0.2351 | 0.9195 | 0.9131 | 0.9163 | 0.9122 |
No log | 0.45 | 120 | 0.2435 | 0.9339 | 0.9060 | 0.9197 | 0.9111 |
No log | 0.49 | 130 | 0.2365 | 0.9315 | 0.9097 | 0.9205 | 0.9142 |
No log | 0.52 | 140 | 0.2182 | 0.9345 | 0.9177 | 0.9260 | 0.9202 |
No log | 0.56 | 150 | 0.2138 | 0.9355 | 0.9182 | 0.9268 | 0.9207 |
No log | 0.6 | 160 | 0.2140 | 0.9383 | 0.9187 | 0.9284 | 0.9223 |
No log | 0.63 | 170 | 0.2018 | 0.9397 | 0.9284 | 0.9340 | 0.9285 |
No log | 0.67 | 180 | 0.1998 | 0.9408 | 0.9284 | 0.9346 | 0.9290 |
No log | 0.71 | 190 | 0.1930 | 0.9433 | 0.9292 | 0.9362 | 0.9308 |
No log | 0.75 | 200 | 0.1908 | 0.9420 | 0.9285 | 0.9352 | 0.9300 |
No log | 0.78 | 210 | 0.1923 | 0.9392 | 0.9275 | 0.9333 | 0.9279 |
No log | 0.82 | 220 | 0.1891 | 0.9425 | 0.9297 | 0.9361 | 0.9303 |
No log | 0.86 | 230 | 0.1877 | 0.9449 | 0.9319 | 0.9384 | 0.9326 |
No log | 0.9 | 240 | 0.1873 | 0.9448 | 0.9319 | 0.9383 | 0.9323 |
No log | 0.93 | 250 | 0.1868 | 0.9445 | 0.9328 | 0.9386 | 0.9330 |
No log | 0.97 | 260 | 0.1866 | 0.9429 | 0.9338 | 0.9383 | 0.9333 |
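
The precision, recall, F1, and accuracy columns follow the usual entity-level seqeval evaluation for token classification. The exact metric code used for this run is not documented, but a typical `compute_metrics` hook passed to the `Trainer` looks like the sketch below; the label list is a placeholder.

```python
# Hedged sketch of a seqeval-based compute_metrics function for token classification.
import evaluate
import numpy as np

seqeval = evaluate.load("seqeval")
label_list = ["O", "B-ENT", "I-ENT"]  # placeholder: the actual tag set is not documented

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop special tokens and padding, which the collator labels with -100.
    true_predictions = [
        [label_list[p] for (p, l) in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```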
### Framework versions

- Transformers 4.36.2
- PyTorch 1.11.0+cu113
- Datasets 2.19.0
- Tokenizers 0.15.2