
donut_experiment_bayesian_trial_6

This model is a fine-tuned version of naver-clova-ix/donut-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5515
  • BLEU: 0.0683
  • Precisions: [0.8127572016460906, 0.7412587412587412, 0.6854838709677419, 0.638095238095238]
  • Brevity Penalty: 0.0954
  • Length Ratio: 0.2985
  • Translation Length: 486
  • Reference Length: 1628
  • CER: 0.7532
  • WER: 0.8274
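
Note that the low BLEU score is driven almost entirely by the brevity penalty rather than by n-gram precision: the precisions are fairly high, but the outputs total only about 30% of the reference length. As a minimal sketch in plain Python, the Brevity Penalty and Length Ratio above can be reproduced from the reported lengths using the standard BLEU definitions:

```python
import math

# Reported evaluation lengths from this card.
translation_length = 486   # total length of the model outputs
reference_length = 1628    # total length of the references

# Standard BLEU definitions (Papineni et al., 2002).
length_ratio = translation_length / reference_length          # ~0.2985
brevity_penalty = (
    1.0
    if translation_length > reference_length
    else math.exp(1 - reference_length / translation_length)  # ~0.0954
)

print(f"length ratio:    {length_ratio:.4f}")
print(f"brevity penalty: {brevity_penalty:.4f}")
```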

Model description

More information needed

Intended uses & limitations

More information needed
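
Pending details from the author, the sketch below shows how a Donut checkpoint is typically used for inference with transformers. The hub id, the input image path, and the task prompt "<s>" are placeholders: the training dataset and prompt format are not documented here, so the decoder prompt in particular is an assumption.

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Hypothetical repo id; replace with the actual hub path of this checkpoint.
model_id = "donut_experiment_bayesian_trial_6"

processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)
model.eval()

image = Image.open("document.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values

# The task prompt depends on how the model was fine-tuned; "<s>" is a guess.
decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=model.config.decoder.max_position_embeddings,
    )

print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```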

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.00016063260663724173
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 2
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
  • mixed_precision_training: Native AMP
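
As a sketch, these values map onto a transformers Seq2SeqTrainingArguments configuration roughly like the following; output_dir is a placeholder, and the Adam betas and epsilon listed above are the Trainer's optimizer defaults, so they need no explicit arguments:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="donut_experiment_bayesian_trial_6",  # placeholder
    learning_rate=0.00016063260663724173,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 1 x 2 = 2
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                      # native AMP mixed precision
)
```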

Training results

| Training Loss | Epoch | Step | Validation Loss | BLEU | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | CER | WER |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| 0.3276 | 1.0 | 253 | 0.6672 | 0.0589 | [0.76875, 0.6737588652482269, 0.6092896174863388, 0.5436893203883495] | 0.0915 | 0.2948 | 480 | 1628 | 0.7586 | 0.8473 |
| 0.2008 | 2.0 | 506 | 0.5780 | 0.0662 | [0.7905544147843943, 0.7069767441860465, 0.6595174262734584, 0.6107594936708861] | 0.0960 | 0.2991 | 487 | 1628 | 0.7559 | 0.8374 |
| 0.1356 | 3.0 | 759 | 0.5355 | 0.0651 | [0.8238993710691824, 0.7452380952380953, 0.6942148760330579, 0.6535947712418301] | 0.0895 | 0.2930 | 477 | 1628 | 0.7580 | 0.8299 |
| 0.0394 | 4.0 | 1012 | 0.5515 | 0.0683 | [0.8127572016460906, 0.7412587412587412, 0.6854838709677419, 0.638095238095238] | 0.0954 | 0.2985 | 486 | 1628 | 0.7532 | 0.8274 |
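
The CER and WER columns are character and word error rates on the decoded sequences. A minimal sketch of how such scores can be computed with the evaluate library; the prediction and reference strings below are illustrative placeholders, not outputs of this model:

```python
import evaluate

cer = evaluate.load("cer")
wer = evaluate.load("wer")

# Illustrative placeholder strings; in evaluation these would be the
# decoded model outputs and the ground-truth target sequences.
predictions = ["<s_total>12.00</s_total>"]
references = ["<s_total>12.50</s_total>"]

print("CER:", cer.compute(predictions=predictions, references=references))
print("WER:", wer.compute(predictions=predictions, references=references))
```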

Framework versions

  • Transformers 4.40.0
  • PyTorch 2.1.0
  • Datasets 2.18.0
  • Tokenizers 0.19.1