---
license: mit
base_model: naver-clova-ix/donut-base
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - bleu
  - wer
model-index:
  - name: donut-base-sroie-v3
    results: []
---

# donut-base-sroie-v3

This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. It achieves the following results on the evaluation set (a hedged inference sketch follows the list):

- Loss: 1.7091
- Bleu: 0.0045
- Precisions: [0.32051282051282054, 0.10617283950617284, 0.043859649122807015, 0.02867383512544803]
- Brevity Penalty: 0.0555
- Length Ratio: 0.2570
- Translation Length: 468
- Reference Length: 1821
- Cer: 0.8657
- Wer: 0.9978
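
A minimal sketch of running inference with this checkpoint, using the standard Donut pairing of `DonutProcessor` and `VisionEncoderDecoderModel` from Transformers. The checkpoint path, image filename, and task prompt below are assumptions, not taken from the training script; substitute the actual repo id and the start token this fine-tune was trained with.

```python
import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

# Assumption: load processor and model from this fine-tuned checkpoint.
checkpoint = "donut-base-sroie-v3"
processor = DonutProcessor.from_pretrained(checkpoint)
model = VisionEncoderDecoderModel.from_pretrained(checkpoint)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

# Assumption: a receipt image on disk; the model expects an RGB image.
image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values.to(device)

# Assumption: the task start token; check the tokenizer's added tokens for the real one.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids.to(device)

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
    return_dict_in_generate=True,
)

# Decode, strip padding/EOS and the leading task token, then parse the
# Donut tag sequence into JSON.
sequence = processor.batch_decode(outputs.sequences)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()
print(processor.token2json(sequence))
```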

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
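
The following is a hedged reconstruction of these settings as `Seq2SeqTrainingArguments`, not the author's actual training script. The `output_dir`, evaluation cadence, and `predict_with_generate` flag are assumptions; the optimizer line above matches the Trainer's default AdamW betas and epsilon, so no explicit optimizer setting is needed.

```python
from transformers import Seq2SeqTrainingArguments

# Hedged sketch reproducing the listed hyperparameters; values not in the
# list above (output_dir, evaluation_strategy, predict_with_generate) are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="donut-base-sroie-v3",   # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,      # effective train batch size 8
    num_train_epochs=4,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                          # "Native AMP" mixed precision
    evaluation_strategy="epoch",        # assumption; the card reports per-epoch eval
    predict_with_generate=True,         # assumption, typical for BLEU/CER/WER eval
)
```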

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer    | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| No log        | 0.99  | 62   | 1.7048          | 0.0026 | [0.3287981859410431, 0.09259259259259259, 0.025396825396825397, 0.015873015873015872] | 0.0438 | 0.2422 | 441 | 1821 | 0.8729 | 1.0    |
| 0.6536        | 2.0   | 125  | 1.7425          | 0.0035 | [0.32051282051282054, 0.0962962962962963, 0.029239766081871343, 0.017921146953405017] | 0.0555 | 0.2570 | 468 | 1821 | 0.8701 | 0.9986 |
| 0.6536        | 2.99  | 187  | 1.6949          | 0.0038 | [0.3148936170212766, 0.09582309582309582, 0.03197674418604651, 0.021352313167259787]  | 0.0564 | 0.2581 | 470 | 1821 | 0.8670 | 0.9978 |
| 0.6585        | 3.97  | 248  | 1.7091          | 0.0045 | [0.32051282051282054, 0.10617283950617284, 0.043859649122807015, 0.02867383512544803] | 0.0555 | 0.2570 | 468 | 1821 | 0.8657 | 0.9978 |
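
The Bleu, Precisions, Brevity Penalty, Length Ratio, Translation Length, and Reference Length columns correspond to the keys returned by the `bleu` metric in the `evaluate` library, and Cer/Wer to its `cer`/`wer` metrics. A hedged sketch of how such numbers could be reproduced; the `predictions`/`references` strings are placeholders, not data from this evaluation set.

```python
import evaluate

# Load the same metric families reported in the table above.
bleu = evaluate.load("bleu")
cer = evaluate.load("cer")   # requires the jiwer package
wer = evaluate.load("wer")

# Placeholder decoded model outputs and ground-truth target sequences.
predictions = ["<s_total>12.30</s_total>"]
references = ["<s_total>12.30</s_total>"]

bleu_result = bleu.compute(predictions=predictions, references=[[r] for r in references])
print(bleu_result["bleu"], bleu_result["precisions"], bleu_result["brevity_penalty"])
print("CER:", cer.compute(predictions=predictions, references=references))
print("WER:", wer.compute(predictions=predictions, references=references))
```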

### Framework versions

- Transformers 4.40.0.dev0
- Pytorch 2.1.0
- Datasets 2.18.0
- Tokenizers 0.15.2