---
library_name: transformers
license: mit
base_model: dbmdz/bert-base-turkish-cased
tags:
- generated_from_trainer
model-index:
- name: results
  results: []
---

# results

This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.1398

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
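The `linear` scheduler decays the learning rate from its base value to zero over the full run. As a minimal sketch of that behavior (assuming zero warmup steps, since the card lists none, and 750 total optimizer steps as shown in the training results):

```python
def linear_lr(step, total_steps, base_lr=2e-05, warmup_steps=0):
    """Sketch of a linear LR schedule with optional warmup:
    ramp up over `warmup_steps`, then decay linearly to zero
    at `total_steps`. (warmup_steps=0 is an assumption; the
    card does not list a warmup setting.)"""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

print(linear_lr(0, 750))    # base LR at the start of training
print(linear_lr(375, 750))  # half the base LR at the midpoint
print(linear_lr(750, 750))  # decayed to zero at the final step
```

This mirrors the shape of the schedule `lr_scheduler_type: linear` produces, not the exact internals of the library's implementation.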

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5417 | 0.4 | 10 | 1.2782 |
| 1.2171 | 0.8 | 20 | 0.9827 |
| 0.7965 | 1.2 | 30 | 0.6443 |
| 0.5738 | 1.6 | 40 | 0.4631 |
| 0.4821 | 2.0 | 50 | 0.3206 |
| 0.2171 | 2.4 | 60 | 0.2231 |
| 0.2034 | 2.8 | 70 | 0.1725 |
| 0.1357 | 3.2 | 80 | 0.1658 |
| 0.0586 | 3.6 | 90 | 0.1403 |
| 0.0684 | 4.0 | 100 | 0.1158 |
| 0.0356 | 4.4 | 110 | 0.1498 |
| 0.0361 | 4.8 | 120 | 0.1576 |
| 0.0112 | 5.2 | 130 | 0.1059 |
| 0.015 | 5.6 | 140 | 0.1060 |
| 0.0131 | 6.0 | 150 | 0.0932 |
| 0.006 | 6.4 | 160 | 0.0976 |
| 0.0053 | 6.8 | 170 | 0.1208 |
| 0.0051 | 7.2 | 180 | 0.1345 |
| 0.004 | 7.6 | 190 | 0.1419 |
| 0.0035 | 8.0 | 200 | 0.1383 |
| 0.0034 | 8.4 | 210 | 0.1333 |
| 0.003 | 8.8 | 220 | 0.1370 |
| 0.0027 | 9.2 | 230 | 0.1400 |
| 0.0026 | 9.6 | 240 | 0.1403 |
| 0.0026 | 10.0 | 250 | 0.1411 |
| 0.0025 | 10.4 | 260 | 0.1395 |
| 0.0022 | 10.8 | 270 | 0.1330 |
| 0.0021 | 11.2 | 280 | 0.1325 |
| 0.002 | 11.6 | 290 | 0.1327 |
| 0.0021 | 12.0 | 300 | 0.1344 |
| 0.0018 | 12.4 | 310 | 0.1357 |
| 0.0019 | 12.8 | 320 | 0.1374 |
| 0.0019 | 13.2 | 330 | 0.1382 |
| 0.0017 | 13.6 | 340 | 0.1392 |
| 0.0029 | 14.0 | 350 | 0.1318 |
| 0.0016 | 14.4 | 360 | 0.1281 |
| 0.0016 | 14.8 | 370 | 0.1272 |
| 0.0014 | 15.2 | 380 | 0.1281 |
| 0.0015 | 15.6 | 390 | 0.1296 |
| 0.0014 | 16.0 | 400 | 0.1312 |
| 0.0014 | 16.4 | 410 | 0.1325 |
| 0.0013 | 16.8 | 420 | 0.1320 |
| 0.0013 | 17.2 | 430 | 0.1326 |
| 0.0013 | 17.6 | 440 | 0.1339 |
| 0.0013 | 18.0 | 450 | 0.1353 |
| 0.0013 | 18.4 | 460 | 0.1374 |
| 0.0012 | 18.8 | 470 | 0.1379 |
| 0.0012 | 19.2 | 480 | 0.1378 |
| 0.0012 | 19.6 | 490 | 0.1382 |
| 0.0013 | 20.0 | 500 | 0.1398 |
| 0.0011 | 20.4 | 510 | 0.1410 |
| 0.0011 | 20.8 | 520 | 0.1409 |
| 0.0012 | 21.2 | 530 | 0.1401 |
| 0.0011 | 21.6 | 540 | 0.1398 |
| 0.001 | 22.0 | 550 | 0.1393 |
| 0.0011 | 22.4 | 560 | 0.1382 |
| 0.0011 | 22.8 | 570 | 0.1380 |
| 0.0011 | 23.2 | 580 | 0.1377 |
| 0.001 | 23.6 | 590 | 0.1380 |
| 0.0011 | 24.0 | 600 | 0.1385 |
| 0.001 | 24.4 | 610 | 0.1386 |
| 0.001 | 24.8 | 620 | 0.1386 |
| 0.0009 | 25.2 | 630 | 0.1387 |
| 0.001 | 25.6 | 640 | 0.1393 |
| 0.001 | 26.0 | 650 | 0.1397 |
| 0.001 | 26.4 | 660 | 0.1398 |
| 0.001 | 26.8 | 670 | 0.1399 |
| 0.0009 | 27.2 | 680 | 0.1401 |
| 0.001 | 27.6 | 690 | 0.1401 |
| 0.001 | 28.0 | 700 | 0.1402 |
| 0.001 | 28.4 | 710 | 0.1403 |
| 0.0009 | 28.8 | 720 | 0.1403 |
| 0.001 | 29.2 | 730 | 0.1404 |
| 0.0009 | 29.6 | 740 | 0.1404 |
| 0.001 | 30.0 | 750 | 0.1404 |
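The table also lets you back out the overall shape of the run: evaluation is logged every 10 steps, and 750 total steps over 30 epochs gives 25 optimizer steps per epoch. With `train_batch_size: 16` and no gradient accumulation (an assumption, since the card does not list it), that implies a training set of roughly 400 examples. A quick check:

```python
# Back-of-envelope check of the run's shape from the table above.
# Assumes no gradient accumulation (not listed in the card).
total_steps = 750
num_epochs = 30
train_batch_size = 16
eval_interval = 10  # validation loss is logged every 10 steps

steps_per_epoch = total_steps // num_epochs                 # 25
approx_train_examples = steps_per_epoch * train_batch_size  # ~400
num_eval_points = total_steps // eval_interval              # 75 table rows

print(steps_per_epoch, approx_train_examples, num_eval_points)
```

The example count is approximate: a partial final batch per epoch would shift it slightly below 400.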

### Framework versions

- Transformers 4.45.1
- Pytorch 2.4.0+cu124
- Datasets 2.19.1
- Tokenizers 0.20.0