---
library_name: transformers
license: apache-2.0
base_model: facebook/dinov2-large
tags:
- generated_from_trainer
model-index:
- name: Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs
  results: []
---

# Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs

This model is a fine-tuned version of [facebook/dinov2-large](https://huggingface.co/facebook/dinov2-large) on an unspecified dataset.
It achieves the following results on the evaluation set:

- Loss: 0.6333
- RMSE: 0.3468
- MAE: 0.3060
- R2: -1.9752
- Explained Variance: 0.1029
- Learning Rate: 0.0000
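
For reference, these regression metrics can be reproduced with scikit-learn given the model's predictions and the ground-truth targets. The sketch below uses placeholder arrays, not the actual evaluation data:

```python
import numpy as np
from sklearn.metrics import (
    mean_squared_error,
    mean_absolute_error,
    r2_score,
    explained_variance_score,
)

# Placeholder arrays standing in for the real evaluation targets and predictions.
y_true = np.array([0.2, 0.5, 0.8, 0.1])
y_pred = np.array([0.3, 0.4, 0.6, 0.4])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # RMSE
mae = mean_absolute_error(y_true, y_pred)           # MAE
r2 = r2_score(y_true, y_pred)                       # R2
ev = explained_variance_score(y_true, y_pred)       # Explained Variance

print(f"RMSE={rmse:.4f}  MAE={mae:.4f}  R2={r2:.4f}  ExplainedVar={ev:.4f}")
```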

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
- mixed_precision_training: Native AMP
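
As a rough, non-authoritative sketch, these values could map onto a `transformers.TrainingArguments` configuration as follows; the `output_dir` and the `fp16` flag (standing in for Native AMP) are assumptions, and the actual training script, including how the backbone was frozen, is not published here:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the reported hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="Ziboiai-large-2024_10_31-prova_batch-size32_freeze_probs",
    learning_rate=1e-3,              # learning_rate: 0.001
    per_device_train_batch_size=32,  # train_batch_size: 32
    per_device_eval_batch_size=32,   # eval_batch_size: 32
    seed=42,                         # seed: 42
    adam_beta1=0.9,                  # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,              # and epsilon=1e-08
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    num_train_epochs=150,            # num_epochs: 150
    fp16=True,                       # mixed_precision_training: Native AMP (assumed fp16)
)
```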

### Training results

| Training Loss | Epoch | Step | Validation Loss | RMSE | MAE | R2 | Explained Variance | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:----:|:---:|:--:|:------------------:|:-------------:|
| No log | 1.0 | 2 | 0.7005 | 0.3966 | 0.3705 | -21.5069 | 0.0684 | 0.001 |
| No log | 2.0 | 4 | 0.7249 | 0.4021 | 0.3746 | -26.3836 | 0.0822 | 0.001 |
| No log | 3.0 | 6 | 0.7532 | 0.4114 | 0.3816 | -29.6868 | 0.1178 | 0.001 |
| No log | 4.0 | 8 | 0.7681 | 0.4186 | 0.3850 | -29.0398 | 0.0566 | 0.001 |
| No log | 5.0 | 10 | 0.7665 | 0.4178 | 0.3827 | -26.6101 | 0.0116 | 0.001 |
| No log | 6.0 | 12 | 0.7594 | 0.4152 | 0.3779 | -24.2590 | -0.0414 | 0.001 |
| No log | 7.0 | 14 | 0.7494 | 0.4108 | 0.3715 | -22.3016 | -0.1878 | 0.001 |
| No log | 8.0 | 16 | 0.7214 | 0.3992 | 0.3610 | -20.1630 | -0.1876 | 0.0001 |
| No log | 9.0 | 18 | 0.7013 | 0.3905 | 0.3530 | -18.6708 | -0.1643 | 0.0001 |
| No log | 10.0 | 20 | 0.6869 | 0.3836 | 0.3467 | -17.4192 | -0.1505 | 0.0001 |
| No log | 11.0 | 22 | 0.6764 | 0.3787 | 0.3425 | -16.5076 | -0.1282 | 0.0001 |
| No log | 12.0 | 24 | 0.6669 | 0.3740 | 0.3384 | -16.0072 | -0.1085 | 0.0001 |
| No log | 13.0 | 26 | 0.6617 | 0.3712 | 0.3358 | -15.5612 | -0.0882 | 0.0001 |
| No log | 14.0 | 28 | 0.6557 | 0.3683 | 0.3332 | -14.8471 | -0.0399 | 0.0001 |
| No log | 15.0 | 30 | 0.6517 | 0.3661 | 0.3313 | -14.3744 | -0.0149 | 0.0001 |
| No log | 16.0 | 32 | 0.6494 | 0.3650 | 0.3302 | -14.0923 | 0.0009 | 0.0001 |
| No log | 17.0 | 34 | 0.6469 | 0.3634 | 0.3284 | -14.0430 | 0.0076 | 0.0001 |
| No log | 18.0 | 36 | 0.6455 | 0.3626 | 0.3275 | -13.8481 | 0.0275 | 0.0001 |
| No log | 19.0 | 38 | 0.6437 | 0.3617 | 0.3270 | -13.7294 | 0.0458 | 0.0001 |
| No log | 20.0 | 40 | 0.6426 | 0.3611 | 0.3265 | -13.4695 | 0.0571 | 0.0001 |
| No log | 21.0 | 42 | 0.6414 | 0.3605 | 0.3256 | -13.4449 | 0.0581 | 0.0001 |
| No log | 22.0 | 44 | 0.6422 | 0.3605 | 0.3257 | -13.3180 | 0.0542 | 0.0001 |
| No log | 23.0 | 46 | 0.6407 | 0.3593 | 0.3246 | -13.2487 | 0.0755 | 0.0001 |
| No log | 24.0 | 48 | 0.6375 | 0.3576 | 0.3230 | -13.2495 | 0.0741 | 0.0001 |
| No log | 25.0 | 50 | 0.6332 | 0.3551 | 0.3205 | -12.9650 | 0.0843 | 0.0001 |
| No log | 26.0 | 52 | 0.6316 | 0.3540 | 0.3191 | -12.7124 | 0.0903 | 0.0001 |
| No log | 27.0 | 54 | 0.6298 | 0.3527 | 0.3176 | -12.5315 | 0.0972 | 0.0001 |
| No log | 28.0 | 56 | 0.6287 | 0.3519 | 0.3168 | -12.3934 | 0.1010 | 0.0001 |
| No log | 29.0 | 58 | 0.6279 | 0.3514 | 0.3163 | -12.3234 | 0.1064 | 0.0001 |
| No log | 30.0 | 60 | 0.6246 | 0.3494 | 0.3141 | -12.2314 | 0.1160 | 0.0001 |
| No log | 31.0 | 62 | 0.6211 | 0.3475 | 0.3123 | -12.0643 | 0.1264 | 0.0001 |
| No log | 32.0 | 64 | 0.6218 | 0.3477 | 0.3125 | -11.9670 | 0.1294 | 0.0001 |
| No log | 33.0 | 66 | 0.6202 | 0.3470 | 0.3120 | -11.7550 | 0.1365 | 0.0001 |
| No log | 34.0 | 68 | 0.6191 | 0.3463 | 0.3111 | -11.6145 | 0.1364 | 0.0001 |
| No log | 35.0 | 70 | 0.6174 | 0.3455 | 0.3105 | -11.5861 | 0.1400 | 0.0001 |
| No log | 36.0 | 72 | 0.6195 | 0.3462 | 0.3109 | -11.7605 | 0.1398 | 0.0001 |
| No log | 37.0 | 74 | 0.6210 | 0.3470 | 0.3114 | -11.7035 | 0.1367 | 0.0001 |
| No log | 38.0 | 76 | 0.6201 | 0.3463 | 0.3107 | -11.6608 | 0.1387 | 0.0001 |
| No log | 39.0 | 78 | 0.6195 | 0.3461 | 0.3106 | -11.6294 | 0.1362 | 0.0001 |
| No log | 40.0 | 80 | 0.6195 | 0.3459 | 0.3101 | -11.6709 | 0.1279 | 0.0001 |
| No log | 41.0 | 82 | 0.6196 | 0.3456 | 0.3095 | -11.4656 | 0.1154 | 0.0001 |
| No log | 42.0 | 84 | 0.6185 | 0.3453 | 0.3096 | -11.4190 | 0.1220 | 1e-05 |
| No log | 43.0 | 86 | 0.6196 | 0.3457 | 0.3099 | -11.4211 | 0.1224 | 1e-05 |
| No log | 44.0 | 88 | 0.6175 | 0.3448 | 0.3091 | -11.3422 | 0.1252 | 1e-05 |
| No log | 45.0 | 90 | 0.6148 | 0.3435 | 0.3079 | -11.2377 | 0.1267 | 1e-05 |
| No log | 46.0 | 92 | 0.6156 | 0.3439 | 0.3081 | -11.2161 | 0.1232 | 1e-05 |
| No log | 47.0 | 94 | 0.6162 | 0.3442 | 0.3084 | -11.2359 | 0.1219 | 1e-05 |
| No log | 48.0 | 96 | 0.6153 | 0.3438 | 0.3079 | -11.1407 | 0.1218 | 1e-05 |
| No log | 49.0 | 98 | 0.6142 | 0.3434 | 0.3075 | -11.0878 | 0.1259 | 1e-05 |
| No log | 50.0 | 100 | 0.6125 | 0.3427 | 0.3071 | -11.1648 | 0.1241 | 1e-05 |
| No log | 51.0 | 102 | 0.6131 | 0.3430 | 0.3072 | -11.2371 | 0.1274 | 1e-05 |
| No log | 52.0 | 104 | 0.6137 | 0.3434 | 0.3077 | -11.3909 | 0.1274 | 1e-05 |
| No log | 53.0 | 106 | 0.6139 | 0.3434 | 0.3077 | -11.5018 | 0.1224 | 1e-05 |
| No log | 54.0 | 108 | 0.6157 | 0.3445 | 0.3089 | -11.6674 | 0.1222 | 1e-05 |
| No log | 55.0 | 110 | 0.6168 | 0.3448 | 0.3090 | -11.6467 | 0.1222 | 1e-05 |
| No log | 56.0 | 112 | 0.6140 | 0.3434 | 0.3077 | -11.4968 | 0.1250 | 1e-05 |
| No log | 57.0 | 114 | 0.6133 | 0.3430 | 0.3071 | -11.5002 | 0.1216 | 0.0000 |
| No log | 58.0 | 116 | 0.6130 | 0.3428 | 0.3070 | -11.4475 | 0.1210 | 0.0000 |
| No log | 59.0 | 118 | 0.6150 | 0.3441 | 0.3083 | -11.5562 | 0.1178 | 0.0000 |
| No log | 60.0 | 120 | 0.6167 | 0.3450 | 0.3092 | -11.4676 | 0.1243 | 0.0000 |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1