predict-perception-xlmr-blame-object

This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. As its name and metric labels indicate, it is a single-target regression model that predicts the degree to which blame is perceived as directed at an object (the Blame::a Un oggetto dimension; Italian for "blame: an object"). It achieves the following results on the evaluation set:

  • Loss: 0.7219
  • RMSE: 0.6215
  • RMSE (Blame::a Un oggetto): 0.6215
  • MAE: 0.4130
  • MAE (Blame::a Un oggetto): 0.4130
  • R2: 0.1200
  • R2 (Blame::a Un oggetto): 0.1200
  • Cos: 0.3043
  • Pair: 0.0
  • Rank: 0.5
  • Neighbors: 0.4335
  • RSA: nan
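
As a quick way to exercise the checkpoint, here is a minimal inference sketch. It assumes the model exposes a single-logit regression head loadable through AutoModelForSequenceClassification; the Hub repository path and the Italian example sentence are placeholders, not from the card.

```python
# Minimal inference sketch, assuming a single-logit regression head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "predict-perception-xlmr-blame-object"  # adjust to the actual Hub path

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

# Hypothetical Italian input ("The damage was caused by a brake failure.").
text = "Il danno è stato causato da un guasto ai freni."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()

print(f"Predicted 'blame: an object' score: {score:.4f}")
```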

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto the Trainer API follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 20
  • eval_batch_size: 8
  • seed: 1996
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
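
The sketch below shows how these values would plug into transformers' Trainer. It is a reconstruction under stated assumptions, not the authors' training script: the regression head setup (num_labels=1, problem_type="regression") and the dataset objects are assumptions, while every numeric value comes from the list above.

```python
# Hedged reconstruction of the training setup; datasets are assumed to exist.
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=1,                   # single regression target (assumption)
    problem_type="regression",      # MSE loss on one output (assumption)
)

args = TrainingArguments(
    output_dir="predict-perception-xlmr-blame-object",
    learning_rate=1e-5,             # from the card
    per_device_train_batch_size=20, # from the card
    per_device_eval_batch_size=8,   # from the card
    seed=1996,                      # from the card
    num_train_epochs=30,            # from the card
    lr_scheduler_type="linear",     # from the card
    adam_beta1=0.9,                 # from the card
    adam_beta2=0.999,               # from the card
    adam_epsilon=1e-8,              # from the card
    evaluation_strategy="epoch",    # matches the per-epoch eval rows below
)

# train_ds / eval_ds are placeholders for the undocumented dataset.
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```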

Training results

| Training Loss | Epoch | Step | Validation Loss | RMSE | RMSE (Blame::a Un oggetto) | MAE | MAE (Blame::a Un oggetto) | R2 | R2 (Blame::a Un oggetto) | Cos | Pair | Rank | Neighbors | RSA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.0279 | 1.0 | 15 | 0.8483 | 0.6737 | 0.6737 | 0.4761 | 0.4761 | -0.0341 | -0.0341 | -0.3043 | 0.0 | 0.5 | 0.5507 | nan |
| 1.0676 | 2.0 | 30 | 0.7749 | 0.6439 | 0.6439 | 0.4291 | 0.4291 | 0.0554 | 0.0554 | 0.0435 | 0.0 | 0.5 | 0.2614 | nan |
| 0.9563 | 3.0 | 45 | 0.7765 | 0.6446 | 0.6446 | 0.4349 | 0.4349 | 0.0535 | 0.0535 | -0.0435 | 0.0 | 0.5 | 0.4515 | nan |
| 0.9622 | 4.0 | 60 | 0.7443 | 0.6311 | 0.6311 | 0.4061 | 0.4061 | 0.0927 | 0.0927 | 0.1304 | 0.0 | 0.5 | 0.2933 | nan |
| 0.948 | 5.0 | 75 | 0.8071 | 0.6571 | 0.6571 | 0.3817 | 0.3817 | 0.0162 | 0.0162 | 0.3043 | 0.0 | 0.5 | 0.4207 | nan |
| 0.9532 | 6.0 | 90 | 0.8007 | 0.6546 | 0.6546 | 0.4585 | 0.4585 | 0.0239 | 0.0239 | -0.0435 | 0.0 | 0.5 | 0.5507 | nan |
| 0.9101 | 7.0 | 105 | 0.7126 | 0.6175 | 0.6175 | 0.3649 | 0.3649 | 0.1313 | 0.1313 | 0.4783 | 0.0 | 0.5 | 0.6012 | nan |
| 0.8369 | 8.0 | 120 | 0.7194 | 0.6204 | 0.6204 | 0.3896 | 0.3896 | 0.1231 | 0.1231 | 0.3913 | 0.0 | 0.5 | 0.3494 | nan |
| 0.8062 | 9.0 | 135 | 0.7157 | 0.6188 | 0.6188 | 0.4192 | 0.4192 | 0.1275 | 0.1275 | 0.0435 | 0.0 | 0.5 | 0.3182 | nan |
| 0.7344 | 10.0 | 150 | 0.7161 | 0.6190 | 0.6190 | 0.3612 | 0.3612 | 0.1270 | 0.1270 | 0.3043 | 0.0 | 0.5 | 0.6035 | nan |
| 0.7439 | 11.0 | 165 | 0.5894 | 0.5616 | 0.5616 | 0.3723 | 0.3723 | 0.2816 | 0.2816 | 0.3043 | 0.0 | 0.5 | 0.3846 | nan |
| 0.6241 | 12.0 | 180 | 0.7087 | 0.6158 | 0.6158 | 0.3972 | 0.3972 | 0.1361 | 0.1361 | 0.3043 | 0.0 | 0.5 | 0.3846 | nan |
| 0.6123 | 13.0 | 195 | 0.6318 | 0.5814 | 0.5814 | 0.3673 | 0.3673 | 0.2298 | 0.2298 | 0.3913 | 0.0 | 0.5 | 0.4413 | nan |
| 0.5364 | 14.0 | 210 | 0.6504 | 0.5899 | 0.5899 | 0.3674 | 0.3674 | 0.2072 | 0.2072 | 0.3043 | 0.0 | 0.5 | 0.3846 | nan |
| 0.5586 | 15.0 | 225 | 0.7151 | 0.6186 | 0.6186 | 0.3850 | 0.3850 | 0.1283 | 0.1283 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
| 0.5133 | 16.0 | 240 | 0.5572 | 0.5460 | 0.5460 | 0.3540 | 0.3540 | 0.3208 | 0.3208 | 0.4783 | 0.0 | 0.5 | 0.5314 | nan |
| 0.4193 | 17.0 | 255 | 0.6047 | 0.5688 | 0.5688 | 0.3710 | 0.3710 | 0.2629 | 0.2629 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3504 | 18.0 | 270 | 0.6103 | 0.5714 | 0.5714 | 0.3687 | 0.3687 | 0.2561 | 0.2561 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3328 | 19.0 | 285 | 0.6181 | 0.5751 | 0.5751 | 0.3915 | 0.3915 | 0.2466 | 0.2466 | 0.4783 | 0.0 | 0.5 | 0.5314 | nan |
| 0.3276 | 20.0 | 300 | 0.6334 | 0.5822 | 0.5822 | 0.3612 | 0.3612 | 0.2279 | 0.2279 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3271 | 21.0 | 315 | 0.6200 | 0.5760 | 0.5760 | 0.3827 | 0.3827 | 0.2442 | 0.2442 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
| 0.3139 | 22.0 | 330 | 0.6332 | 0.5821 | 0.5821 | 0.3723 | 0.3723 | 0.2281 | 0.2281 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.2872 | 23.0 | 345 | 0.6694 | 0.5985 | 0.5985 | 0.3966 | 0.3966 | 0.1840 | 0.1840 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3617 | 24.0 | 360 | 0.7022 | 0.6130 | 0.6130 | 0.4061 | 0.4061 | 0.1440 | 0.1440 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3227 | 25.0 | 375 | 0.7364 | 0.6277 | 0.6277 | 0.4205 | 0.4205 | 0.1024 | 0.1024 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
| 0.256 | 26.0 | 390 | 0.6938 | 0.6093 | 0.6093 | 0.3833 | 0.3833 | 0.1543 | 0.1543 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.2605 | 27.0 | 405 | 0.7221 | 0.6216 | 0.6216 | 0.4036 | 0.4036 | 0.1198 | 0.1198 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
| 0.2558 | 28.0 | 420 | 0.6959 | 0.6102 | 0.6102 | 0.3859 | 0.3859 | 0.1518 | 0.1518 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.2403 | 29.0 | 435 | 0.7152 | 0.6186 | 0.6186 | 0.4088 | 0.4088 | 0.1281 | 0.1281 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3263 | 30.0 | 450 | 0.7219 | 0.6215 | 0.6215 | 0.4130 | 0.4130 | 0.1200 | 0.1200 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
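
For reference, RMSE, MAE, and R2 in the table are standard regression metrics; the sketch below shows how they are conventionally computed (here with scikit-learn, an assumed tool, on dummy values). The card-specific scores (Cos, Pair, Rank, Neighbors, RSA) are not reproduced here.

```python
# Standard regression metrics, illustrated on dummy values.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def regression_report(y_true, y_pred):
    """Return RMSE, MAE, and R2 for predicted vs. gold scores."""
    return {
        "rmse": float(np.sqrt(mean_squared_error(y_true, y_pred))),
        "mae": float(mean_absolute_error(y_true, y_pred)),
        "r2": float(r2_score(y_true, y_pred)),
    }

print(regression_report([0.2, 0.5, 0.8], [0.3, 0.4, 0.9]))
```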

Framework versions

  • Transformers 4.16.2
  • Pytorch 1.10.2+cu113
  • Datasets 1.18.3
  • Tokenizers 0.11.0