---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
  - generated_from_trainer
datasets:
  - funsd
model-index:
  - name: layoutlm-funsd1
    results: []
---

# layoutlm-funsd1

This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the FUNSD dataset. It achieves the following results on the evaluation set:

- Loss: 0.6746
- Overall Precision: 0.6701
- Overall Recall: 0.7511
- Overall F1: 0.7083
- Overall Accuracy: 0.7973

Per-entity metrics:

| Entity   | Precision | Recall | F1     | Support |
|----------|-----------|--------|--------|---------|
| Answer   | 0.6506    | 0.7664 | 0.7037 | 809     |
| Header   | 0.2093    | 0.1513 | 0.1756 | 119     |
| Question | 0.7188    | 0.8066 | 0.7602 | 1065    |
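The card does not include usage instructions, so the sketch below shows one plausible way to run inference with the Transformers library. The hub id `Benedict-L/layoutlm-funsd1`, the words, and the bounding boxes are assumptions for illustration; LayoutLM expects word boxes normalized to a 0-1000 scale.

```python
# Minimal inference sketch (assumed hub id and placeholder OCR output;
# LayoutLM expects bounding boxes normalized to the 0-1000 range).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "Benedict-L/layoutlm-funsd1"  # assumption, not confirmed by this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

words = ["Date:", "01/01/2020"]                  # placeholder OCR words
boxes = [[70, 60, 140, 80], [150, 60, 260, 80]]  # placeholder normalized boxes

# Tokenize pre-split words; repeat each word's box for every sub-token.
encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
token_boxes = [
    boxes[idx] if idx is not None else [0, 0, 0, 0]  # dummy box for special tokens
    for idx in encoding.word_ids(0)
]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits
labels = [model.config.id2label[p] for p in logits.argmax(-1).squeeze().tolist()]
print(list(zip(encoding.tokens(0), labels)))
```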

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
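For context, these settings map onto the Transformers `TrainingArguments` roughly as below. This is a sketch of one plausible configuration, not the actual training script: the `output_dir` is an assumption, and the Adam betas/epsilon listed above are the optimizer defaults.

```python
# Sketch of TrainingArguments mirroring the hyperparameters above
# (output_dir is an assumption; Adam betas/epsilon are the defaults).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlm-funsd1",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed-precision training
)
```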

### Training results

Entity columns show precision / recall / F1, rounded to four decimals; support is fixed across epochs (Answer n=809, Header n=119, Question n=1065).

| Training Loss | Epoch | Step | Validation Loss | Answer P / R / F1 | Header P / R / F1 | Question P / R / F1 | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.8511 | 1.0 | 10 | 1.6077 | 0.0136 / 0.0148 / 0.0142 | 0.0 / 0.0 / 0.0 | 0.1787 / 0.1230 / 0.1457 | 0.0886 | 0.0718 | 0.0793 | 0.3669 |
| 1.4863 | 2.0 | 20 | 1.2821 | 0.1494 / 0.1459 / 0.1476 | 0.0 / 0.0 / 0.0 | 0.4211 / 0.5315 / 0.4699 | 0.3204 | 0.3432 | 0.3314 | 0.5815 |
| 1.1566 | 3.0 | 30 | 1.0398 | 0.3834 / 0.3659 / 0.3744 | 0.0400 / 0.0084 / 0.0139 | 0.5765 / 0.6441 / 0.6084 | 0.4947 | 0.4932 | 0.4940 | 0.6493 |
| 0.9277 | 4.0 | 40 | 0.8788 | 0.5094 / 0.6007 / 0.5513 | 0.1905 / 0.0672 / 0.0994 | 0.6472 / 0.6770 / 0.6618 | 0.5758 | 0.6096 | 0.5922 | 0.7266 |
| 0.7448 | 5.0 | 50 | 0.7982 | 0.5697 / 0.6823 / 0.6209 | 0.2000 / 0.1176 / 0.1481 | 0.6689 / 0.7343 / 0.7001 | 0.6105 | 0.6764 | 0.6418 | 0.7475 |
| 0.6273 | 6.0 | 60 | 0.7378 | 0.6346 / 0.7491 / 0.6871 | 0.2105 / 0.1345 / 0.1641 | 0.6871 / 0.7568 / 0.7203 | 0.6479 | 0.7165 | 0.6805 | 0.7778 |
| 0.5778 | 7.0 | 70 | 0.6971 | 0.6439 / 0.7577 / 0.6962 | 0.2024 / 0.1429 / 0.1675 | 0.6765 / 0.7934 / 0.7303 | 0.6455 | 0.7401 | 0.6896 | 0.7825 |
| 0.5262 | 8.0 | 80 | 0.6989 | 0.6372 / 0.7577 / 0.6923 | 0.2069 / 0.1513 / 0.1748 | 0.7365 / 0.7793 / 0.7573 | 0.6714 | 0.7331 | 0.7009 | 0.7963 |
| 0.4867 | 9.0 | 90 | 0.6756 | 0.6429 / 0.7565 / 0.6951 | 0.1935 / 0.1513 / 0.1698 | 0.7079 / 0.8056 / 0.7536 | 0.6593 | 0.7466 | 0.7002 | 0.7951 |
| 0.4757 | 10.0 | 100 | 0.6746 | 0.6506 / 0.7664 / 0.7037 | 0.2093 / 0.1513 / 0.1756 | 0.7188 / 0.8066 / 0.7602 | 0.6701 | 0.7511 | 0.7083 | 0.7973 |
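A note on the entity columns: per-entity precision, recall, F1, and support in this form are what the `seqeval` library produces for BIO-tagged token predictions. A minimal sketch with toy label sequences (illustrative, not taken from this run):

```python
# Toy seqeval example (labels below are illustrative, not from this run).
from seqeval.metrics import classification_report

y_true = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "O"]]
y_pred = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "B-HEADER"]]

# Prints precision/recall/F1 and support per entity type
# (ANSWER, HEADER, QUESTION), matching the breakdown in the table above.
print(classification_report(y_true, y_pred))
```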

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1