
predict-perception-bertino-cause-concept

This model is a fine-tuned version of indigo-ai/BERTino on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):

  • Loss: 0.2035
  • R2: -0.3662
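
Because the card reports an R2 metric, the model presumably carries a single-output regression head. The snippet below is a minimal inference sketch under that assumption; the local checkpoint path is illustrative and not taken from this card.

```python
# Minimal inference sketch. Assumptions: the fine-tuned checkpoint is available
# at the path below (hypothetical) and exposes a single-output regression head.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "./predict-perception-bertino-cause-concept"  # hypothetical local path
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

# BERTino is an Italian DistilBERT model, so inputs are expected to be Italian text.
text = "Un esempio di frase in italiano."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze(-1).item()  # single regression score
print(score)
```

Note that the reported R2 is negative, meaning the model scores worse on the evaluation set than a constant predictor of the mean target would.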

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 20
  • eval_batch_size: 8
  • seed: 1996
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 47
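
The sketch below shows how this setup could be approximated with the Trainer API. The dataset, column names, and tokenization details are placeholders, since the training data is not documented in this card; the Adam betas and epsilon listed above are the Transformers defaults and so are not passed explicitly.

```python
# Training sketch matching the hyperparameters above. The dataset below is a
# placeholder; the actual training data is not documented in this card.
from datasets import Dataset
from sklearn.metrics import r2_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "indigo-ai/BERTino"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)  # regression head

# Placeholder data: the real text/label columns are unknown.
train_ds = Dataset.from_dict({"text": ["esempio uno", "esempio due"], "label": [0.3, 0.7]})
eval_ds = train_ds

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

def compute_metrics(eval_pred):
    preds, labels = eval_pred
    return {"r2": r2_score(labels, preds.squeeze(-1))}

args = TrainingArguments(
    output_dir="predict-perception-bertino-cause-concept",
    learning_rate=1e-4,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=1996,
    lr_scheduler_type="linear",
    num_train_epochs=47,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds.map(tokenize, batched=True),
    eval_dataset=eval_ds.map(tokenize, batched=True),
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```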

Training results

| Training Loss | Epoch | Step | Validation Loss | R2      |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3498        | 1.0   | 14   | 0.1845          | -0.2382 |
| 0.2442        | 2.0   | 28   | 0.1575          | -0.0573 |
| 0.1553        | 3.0   | 42   | 0.2216          | -0.4872 |
| 0.0726        | 4.0   | 56   | 0.1972          | -0.3234 |
| 0.0564        | 5.0   | 70   | 0.2832          | -0.9009 |
| 0.0525        | 6.0   | 84   | 0.1854          | -0.2444 |
| 0.0385        | 7.0   | 98   | 0.2816          | -0.8900 |
| 0.0257        | 8.0   | 112  | 0.1815          | -0.2183 |
| 0.03          | 9.0   | 126  | 0.3065          | -1.0576 |
| 0.0275        | 10.0  | 140  | 0.1991          | -0.3367 |
| 0.0175        | 11.0  | 154  | 0.2400          | -0.6110 |
| 0.017         | 12.0  | 168  | 0.1915          | -0.2856 |
| 0.0158        | 13.0  | 182  | 0.2008          | -0.3477 |
| 0.0127        | 14.0  | 196  | 0.1932          | -0.2968 |
| 0.009         | 15.0  | 210  | 0.2500          | -0.6783 |
| 0.0078        | 16.0  | 224  | 0.1969          | -0.3215 |
| 0.0075        | 17.0  | 238  | 0.1857          | -0.2463 |
| 0.0079        | 18.0  | 252  | 0.2405          | -0.6145 |
| 0.0089        | 19.0  | 266  | 0.1865          | -0.2517 |
| 0.0082        | 20.0  | 280  | 0.2275          | -0.5267 |
| 0.0078        | 21.0  | 294  | 0.1890          | -0.2687 |
| 0.0072        | 22.0  | 308  | 0.2230          | -0.4965 |
| 0.0064        | 23.0  | 322  | 0.2286          | -0.5346 |
| 0.0052        | 24.0  | 336  | 0.2154          | -0.4457 |
| 0.0049        | 25.0  | 350  | 0.1901          | -0.2757 |
| 0.0062        | 26.0  | 364  | 0.1917          | -0.2870 |
| 0.0043        | 27.0  | 378  | 0.2042          | -0.3704 |
| 0.0038        | 28.0  | 392  | 0.2251          | -0.5110 |
| 0.0049        | 29.0  | 406  | 0.2092          | -0.4040 |
| 0.0044        | 30.0  | 420  | 0.2119          | -0.4221 |
| 0.0041        | 31.0  | 434  | 0.2018          | -0.3542 |
| 0.0039        | 32.0  | 448  | 0.1875          | -0.2586 |
| 0.0038        | 33.0  | 462  | 0.1980          | -0.3291 |
| 0.0038        | 34.0  | 476  | 0.2071          | -0.3903 |
| 0.0043        | 35.0  | 490  | 0.1998          | -0.3412 |
| 0.0043        | 36.0  | 504  | 0.2052          | -0.3771 |
| 0.004         | 37.0  | 518  | 0.2143          | -0.4382 |
| 0.004         | 38.0  | 532  | 0.1977          | -0.3273 |
| 0.0039        | 39.0  | 546  | 0.2002          | -0.3439 |
| 0.0034        | 40.0  | 560  | 0.2035          | -0.3659 |
| 0.0036        | 41.0  | 574  | 0.1994          | -0.3387 |
| 0.0029        | 42.0  | 588  | 0.2036          | -0.3667 |
| 0.0032        | 43.0  | 602  | 0.2055          | -0.3797 |
| 0.0029        | 44.0  | 616  | 0.2025          | -0.3593 |
| 0.0027        | 45.0  | 630  | 0.2047          | -0.3743 |
| 0.0033        | 46.0  | 644  | 0.2067          | -0.3877 |
| 0.0027        | 47.0  | 658  | 0.2035          | -0.3662 |

Framework versions

  • Transformers 4.16.2
  • Pytorch 1.10.2+cu113
  • Datasets 1.18.3
  • Tokenizers 0.11.0