
predict-perception-xlmr-focus-victim

This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the metric list):

  • Loss: 0.2546
  • RMSE: 0.6301
  • RMSE (Focus::a Sulla vittima): 0.6301
  • MAE: 0.5441
  • MAE (Focus::a Sulla vittima): 0.5441
  • R2: 0.7205
  • R2 (Focus::a Sulla vittima): 0.7205
  • Cos: 0.8261
  • Pair: 0.0
  • Rank: 0.5
  • Neighbors: 0.7802
  • RSA: nan
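Since the card itself gives no usage snippet, here is a minimal sketch of loading the model for inference, assuming the checkpoint is published with a single-output regression head; the model id below is hypothetical, so adjust it to the actual Hub namespace:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical model id; replace with the actual Hub namespace/name.
model_id = "predict-perception-xlmr-focus-victim"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# The label name ("Focus::a Sulla vittima", i.e. "on the victim")
# suggests Italian input text.
text = "Il testo da valutare."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    # Single regression output: the predicted focus-on-victim score.
    score = model(**inputs).logits.squeeze().item()
print(score)
```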

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 20
  • eval_batch_size: 8
  • seed: 1996
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
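These settings map directly onto transformers.TrainingArguments. A hedged sketch follows; output_dir and any unlisted arguments are assumptions, and the Adam betas/epsilon above are the library defaults:

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; unlisted values are assumed defaults.
training_args = TrainingArguments(
    output_dir="predict-perception-xlmr-focus-victim",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=1996,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the defaults:
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```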

Training results

| Training Loss | Epoch | Step | Validation Loss | RMSE | RMSE (Focus::a Sulla vittima) | MAE | MAE (Focus::a Sulla vittima) | R2 | R2 (Focus::a Sulla vittima) | Cos | Pair | Rank | Neighbors | RSA |
|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|:---|
| 1.0607 | 1.0 | 15 | 0.9261 | 1.2017 | 1.2017 | 0.9557 | 0.9557 | -0.0166 | -0.0166 | 0.4783 | 0.0 | 0.5 | 0.6332 | nan |
| 1.0107 | 2.0 | 30 | 0.9481 | 1.2159 | 1.2159 | 0.9861 | 0.9861 | -0.0408 | -0.0408 | 0.4783 | 0.0 | 0.5 | 0.6332 | nan |
| 0.9921 | 3.0 | 45 | 0.9068 | 1.1892 | 1.1892 | 0.9548 | 0.9548 | 0.0045 | 0.0045 | 0.4783 | 0.0 | 0.5 | 0.6332 | nan |
| 0.7769 | 4.0 | 60 | 0.5014 | 0.8842 | 0.8842 | 0.7121 | 0.7121 | 0.4496 | 0.4496 | 0.7391 | 0.0 | 0.5 | 0.6232 | nan |
| 0.5763 | 5.0 | 75 | 0.4019 | 0.7917 | 0.7917 | 0.6737 | 0.6737 | 0.5588 | 0.5588 | 0.8261 | 0.0 | 0.5 | 0.8155 | nan |
| 0.4378 | 6.0 | 90 | 0.3594 | 0.7486 | 0.7486 | 0.5957 | 0.5957 | 0.6055 | 0.6055 | 0.7391 | 0.0 | 0.5 | 0.4442 | nan |
| 0.3595 | 7.0 | 105 | 0.3452 | 0.7337 | 0.7337 | 0.6333 | 0.6333 | 0.6210 | 0.6210 | 0.5652 | 0.0 | 0.5 | 0.2649 | nan |
| 0.3192 | 8.0 | 120 | 0.3275 | 0.7147 | 0.7147 | 0.6205 | 0.6205 | 0.6405 | 0.6405 | 0.7391 | 0.0 | 0.5 | 0.6561 | nan |
| 0.2482 | 9.0 | 135 | 0.2978 | 0.6815 | 0.6815 | 0.5754 | 0.5754 | 0.6731 | 0.6731 | 0.7391 | 0.0 | 0.5 | 0.6715 | nan |
| 0.2416 | 10.0 | 150 | 0.3018 | 0.6860 | 0.6860 | 0.5954 | 0.5954 | 0.6687 | 0.6687 | 0.5652 | 0.0 | 0.5 | 0.2553 | nan |
| 0.2292 | 11.0 | 165 | 0.2764 | 0.6565 | 0.6565 | 0.5522 | 0.5522 | 0.6966 | 0.6966 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.1752 | 12.0 | 180 | 0.3070 | 0.6920 | 0.6920 | 0.5680 | 0.5680 | 0.6629 | 0.6629 | 0.7391 | 0.0 | 0.5 | 0.6715 | nan |
| 0.1956 | 13.0 | 195 | 0.2923 | 0.6752 | 0.6752 | 0.5499 | 0.5499 | 0.6791 | 0.6791 | 0.8261 | 0.0 | 0.5 | 0.7843 | nan |
| 0.1424 | 14.0 | 210 | 0.3163 | 0.7023 | 0.7023 | 0.6060 | 0.6060 | 0.6528 | 0.6528 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.152 | 15.0 | 225 | 0.2436 | 0.6164 | 0.6164 | 0.5127 | 0.5127 | 0.7326 | 0.7326 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.1277 | 16.0 | 240 | 0.2471 | 0.6208 | 0.6208 | 0.5367 | 0.5367 | 0.7287 | 0.7287 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.1269 | 17.0 | 255 | 0.2573 | 0.6334 | 0.6334 | 0.5329 | 0.5329 | 0.7175 | 0.7175 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.1058 | 18.0 | 270 | 0.2538 | 0.6291 | 0.6291 | 0.5530 | 0.5530 | 0.7214 | 0.7214 | 0.7391 | 0.0 | 0.5 | 0.2347 | nan |
| 0.107 | 19.0 | 285 | 0.2568 | 0.6328 | 0.6328 | 0.5464 | 0.5464 | 0.7181 | 0.7181 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.1185 | 20.0 | 300 | 0.2452 | 0.6183 | 0.6183 | 0.5317 | 0.5317 | 0.7309 | 0.7309 | 0.7391 | 0.0 | 0.5 | 0.2347 | nan |
| 0.1029 | 21.0 | 315 | 0.2419 | 0.6142 | 0.6142 | 0.5415 | 0.5415 | 0.7344 | 0.7344 | 0.7391 | 0.0 | 0.5 | 0.2347 | nan |
| 0.0908 | 22.0 | 330 | 0.2462 | 0.6196 | 0.6196 | 0.5261 | 0.5261 | 0.7297 | 0.7297 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0901 | 23.0 | 345 | 0.2528 | 0.6279 | 0.6279 | 0.5330 | 0.5330 | 0.7225 | 0.7225 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0979 | 24.0 | 360 | 0.2800 | 0.6607 | 0.6607 | 0.5682 | 0.5682 | 0.6927 | 0.6927 | 0.9130 | 0.0 | 0.5 | 0.8408 | nan |
| 0.0992 | 25.0 | 375 | 0.2502 | 0.6246 | 0.6246 | 0.5517 | 0.5517 | 0.7254 | 0.7254 | 0.6522 | 0.0 | 0.5 | 0.2372 | nan |
| 0.0846 | 26.0 | 390 | 0.2570 | 0.6331 | 0.6331 | 0.5524 | 0.5524 | 0.7178 | 0.7178 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0717 | 27.0 | 405 | 0.2562 | 0.6321 | 0.6321 | 0.5456 | 0.5456 | 0.7187 | 0.7187 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0739 | 28.0 | 420 | 0.2570 | 0.6330 | 0.6330 | 0.5471 | 0.5471 | 0.7179 | 0.7179 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.0828 | 29.0 | 435 | 0.2553 | 0.6309 | 0.6309 | 0.5446 | 0.5446 | 0.7198 | 0.7198 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
| 0.086 | 30.0 | 450 | 0.2546 | 0.6301 | 0.6301 | 0.5441 | 0.5441 | 0.7205 | 0.7205 | 0.8261 | 0.0 | 0.5 | 0.7802 | nan |
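The RMSE, MAE, and R2 columns track standard regression metrics on the validation set each epoch. A hedged sketch of a compute_metrics function that would reproduce them (the Cos, Pair, Rank, Neighbors, and RSA columns come from task-specific metrics not reconstructed here):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def compute_metrics(eval_pred):
    """Regression metrics matching the RMSE / MAE / R2 columns above."""
    predictions, labels = eval_pred
    predictions = predictions.squeeze(-1)  # one regression output per example
    return {
        "rmse": float(np.sqrt(mean_squared_error(labels, predictions))),
        "mae": float(mean_absolute_error(labels, predictions)),
        "r2": float(r2_score(labels, predictions)),
    }
```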

Framework versions

  • Transformers 4.16.2
  • Pytorch 1.10.2+cu113
  • Datasets 1.18.3
  • Tokenizers 0.11.0