# predict-perception-xlmr-blame-victim

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.1098
- Rmse: 0.6801
- Rmse Blame::a La vittima: 0.6801
- Mae: 0.5617
- Mae Blame::a La vittima: 0.5617
- R2: -1.5910
- R2 Blame::a La vittima: -1.5910
- Cos: -0.1304
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3333
- Rsa: nan
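
Note that the negative R2 means the fine-tuned model underperforms a constant mean predictor on this label. Below is a minimal inference sketch, assuming the checkpoint is hosted on the Hugging Face Hub with a single-output regression head; the `model_id` namespace is an assumption and may need adjusting:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "predict-perception-xlmr-blame-victim"  # hypothetical hub path; prepend the owning namespace
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "..."  # an Italian news passage to score
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    # With a single-output regression head, the raw logit is the predicted score.
    score = model(**inputs).logits.squeeze().item()
print(score)
```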

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a minimal `Trainer` sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
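
A minimal sketch of reproducing this configuration with the Transformers `Trainer` API, under stated assumptions: `train_ds` and `eval_ds` are placeholders for the (undocumented) tokenized splits, and the Adam betas/epsilon above are the library defaults, so they need no explicit arguments:

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=1,               # single regression target
    problem_type="regression",  # MSE loss over a float label
)

args = TrainingArguments(
    output_dir="predict-perception-xlmr-blame-victim",
    learning_rate=1e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=1996,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="epoch",  # matches the per-epoch validation log below
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # placeholder: tokenized training split
    eval_dataset=eval_ds,    # placeholder: tokenized evaluation split
)
trainer.train()
```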

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a La vittima | Mae | Mae Blame::a La vittima | R2 | R2 Blame::a La vittima | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------:|:------:|:-----------------------:|:-------:|:----------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0422 | 1.0 | 15 | 0.4952 | 0.4542 | 0.4542 | 0.4095 | 0.4095 | -0.1560 | -0.1560 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 1.0434 | 2.0 | 30 | 0.4851 | 0.4496 | 0.4496 | 0.4054 | 0.4054 | -0.1324 | -0.1324 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 1.038 | 3.0 | 45 | 0.4513 | 0.4337 | 0.4337 | 0.3885 | 0.3885 | -0.0536 | -0.0536 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 1.0151 | 4.0 | 60 | 0.4395 | 0.4280 | 0.4280 | 0.3840 | 0.3840 | -0.0262 | -0.0262 | -0.1304 | 0.0 | 0.5 | 0.2715 | nan |
| 0.9727 | 5.0 | 75 | 0.4490 | 0.4325 | 0.4325 | 0.3811 | 0.3811 | -0.0482 | -0.0482 | 0.2174 | 0.0 | 0.5 | 0.3338 | nan |
| 0.9733 | 6.0 | 90 | 0.4540 | 0.4349 | 0.4349 | 0.3860 | 0.3860 | -0.0598 | -0.0598 | -0.2174 | 0.0 | 0.5 | 0.3248 | nan |
| 0.9396 | 7.0 | 105 | 0.4501 | 0.4331 | 0.4331 | 0.3849 | 0.3849 | -0.0508 | -0.0508 | 0.0435 | 0.0 | 0.5 | 0.2609 | nan |
| 0.8759 | 8.0 | 120 | 0.4597 | 0.4377 | 0.4377 | 0.3849 | 0.3849 | -0.0731 | -0.0731 | 0.3043 | 0.0 | 0.5 | 0.3898 | nan |
| 0.8768 | 9.0 | 135 | 0.4575 | 0.4366 | 0.4366 | 0.3784 | 0.3784 | -0.0680 | -0.0680 | 0.4783 | 0.0 | 0.5 | 0.4615 | nan |
| 0.8312 | 10.0 | 150 | 0.5363 | 0.4727 | 0.4727 | 0.4071 | 0.4071 | -0.2520 | -0.2520 | -0.0435 | 0.0 | 0.5 | 0.2733 | nan |
| 0.7296 | 11.0 | 165 | 0.5291 | 0.4696 | 0.4696 | 0.4057 | 0.4057 | -0.2353 | -0.2353 | 0.3043 | 0.0 | 0.5 | 0.3898 | nan |
| 0.7941 | 12.0 | 180 | 0.5319 | 0.4708 | 0.4708 | 0.4047 | 0.4047 | -0.2417 | -0.2417 | 0.1304 | 0.0 | 0.5 | 0.3381 | nan |
| 0.6486 | 13.0 | 195 | 0.6787 | 0.5318 | 0.5318 | 0.4516 | 0.4516 | -0.5846 | -0.5846 | 0.1304 | 0.0 | 0.5 | 0.3381 | nan |
| 0.6241 | 14.0 | 210 | 1.0146 | 0.6502 | 0.6502 | 0.5580 | 0.5580 | -1.3687 | -1.3687 | -0.1304 | 0.0 | 0.5 | 0.3509 | nan |
| 0.5868 | 15.0 | 225 | 0.7164 | 0.5464 | 0.5464 | 0.4682 | 0.4682 | -0.6725 | -0.6725 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.5305 | 16.0 | 240 | 0.9064 | 0.6146 | 0.6146 | 0.5173 | 0.5173 | -1.1161 | -1.1161 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.495 | 17.0 | 255 | 1.3860 | 0.7600 | 0.7600 | 0.6433 | 0.6433 | -2.2358 | -2.2358 | -0.0435 | 0.0 | 0.5 | 0.2935 | nan |
| 0.566 | 18.0 | 270 | 0.7618 | 0.5634 | 0.5634 | 0.4730 | 0.4730 | -0.7785 | -0.7785 | 0.0435 | 0.0 | 0.5 | 0.3225 | nan |
| 0.4305 | 19.0 | 285 | 0.8849 | 0.6072 | 0.6072 | 0.5048 | 0.5048 | -1.0659 | -1.0659 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.5108 | 20.0 | 300 | 0.7376 | 0.5544 | 0.5544 | 0.4716 | 0.4716 | -0.7220 | -0.7220 | 0.0435 | 0.0 | 0.5 | 0.3225 | nan |
| 0.44 | 21.0 | 315 | 1.1611 | 0.6956 | 0.6956 | 0.5921 | 0.5921 | -1.7108 | -1.7108 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan |
| 0.395 | 22.0 | 330 | 1.3004 | 0.7361 | 0.7361 | 0.6078 | 0.6078 | -2.0360 | -2.0360 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.3945 | 23.0 | 345 | 0.9376 | 0.6251 | 0.6251 | 0.5272 | 0.5272 | -1.1890 | -1.1890 | -0.2174 | 0.0 | 0.5 | 0.3188 | nan |
| 0.3093 | 24.0 | 360 | 1.3586 | 0.7524 | 0.7524 | 0.6219 | 0.6219 | -2.1719 | -2.1719 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.2676 | 25.0 | 375 | 1.2200 | 0.7130 | 0.7130 | 0.5994 | 0.5994 | -1.8484 | -1.8484 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.3257 | 26.0 | 390 | 1.2235 | 0.7140 | 0.7140 | 0.5900 | 0.5900 | -1.8564 | -1.8564 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.4004 | 27.0 | 405 | 1.0978 | 0.6763 | 0.6763 | 0.5624 | 0.5624 | -1.5629 | -1.5629 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.283 | 28.0 | 420 | 1.1454 | 0.6909 | 0.6909 | 0.5697 | 0.5697 | -1.6742 | -1.6742 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.3326 | 29.0 | 435 | 1.1214 | 0.6836 | 0.6836 | 0.5646 | 0.5646 | -1.6181 | -1.6181 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan |
| 0.2632 | 30.0 | 450 | 1.1098 | 0.6801 | 0.6801 | 0.5617 | 0.5617 | -1.5910 | -1.5910 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan |
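
The Rmse, Mae, and R2 columns are standard regression metrics; a minimal sketch of how they could be computed with scikit-learn follows, where `y_true`/`y_pred` are placeholders for evaluation targets and model predictions. The Cos, Pair, Rank, Neighbors, and Rsa columns are task-specific metrics not documented in this card.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([0.2, 0.5, 0.1])  # placeholder evaluation targets
y_pred = np.array([0.3, 0.4, 0.6])  # placeholder model predictions

rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # "Rmse" column
mae = mean_absolute_error(y_true, y_pred)           # "Mae" column
r2 = r2_score(y_true, y_pred)                       # "R2" column; negative when worse than predicting the mean
print(rmse, mae, r2)
```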

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0