---
tags:
  - generated_from_keras_callback
model-index:
  - name: jinhybr/layoutlm-funsd-tf
    results: []
---

# jinhybr/layoutlm-funsd-tf

This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unspecified dataset (the model name suggests FUNSD). It achieves the following results as of the latest training epoch:

- Train Loss: 0.2987
- Validation Loss: 0.6835
- Train Overall Precision: 0.7270
- Train Overall Recall: 0.7777
- Train Overall F1: 0.7515
- Train Overall Accuracy: 0.8056
- Epoch: 6
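A minimal inference sketch is given below. It assumes the checkpoint carries a token-classification head (as the FUNSD-style precision/recall/F1 metrics suggest) and that the tokenizer was pushed alongside the TF weights; the word list and bounding boxes are placeholders for illustration, since real LayoutLM inputs need OCR word boxes normalized to a 0–1000 range.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

# Assumption: the repo hosts both tokenizer files and TF weights with a
# token-classification head; otherwise use the base model's tokenizer.
repo = "jinhybr/layoutlm-funsd-tf"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForTokenClassification.from_pretrained(repo)

words = ["Invoice", "Date:", "2022-10-17"]  # hypothetical OCR output
encoding = tokenizer(" ".join(words), return_tensors="tf")

# LayoutLM also expects one bounding box per token; zeros are placeholders here.
bbox = tf.zeros((1, encoding["input_ids"].shape[1], 4), dtype=tf.int32)

outputs = model(
    input_ids=encoding["input_ids"],
    attention_mask=encoding["attention_mask"],
    bbox=bbox,
)
predicted_label_ids = tf.argmax(outputs.logits, axis=-1)
print(predicted_label_ids.numpy())
```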

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
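For reference, a configuration like the dump above can be reconstructed with the TF utilities in transformers. The sketch below mirrors the listed hyperparameters (the `decay` and `amsgrad` entries are the Keras Adam defaults and are left implicit); it is an illustration under those assumptions, not the exact training script.

```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Matches training_precision: mixed_float16
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Mirrors the AdamWeightDecay settings dumped above.
optimizer = AdamWeightDecay(
    learning_rate=3e-5,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
)
```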

### Training results

| Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch |
|:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:|
| 1.6886     | 1.4100          | 0.2324                  | 0.2313               | 0.2318           | 0.5009                 | 0     |
| 1.1702     | 0.8486          | 0.5971                  | 0.6618               | 0.6278           | 0.7338                 | 1     |
| 0.7521     | 0.7032          | 0.6561                  | 0.7341               | 0.6929           | 0.7687                 | 2     |
| 0.5727     | 0.6268          | 0.6736                  | 0.7662               | 0.7169           | 0.7957                 | 3     |
| 0.4586     | 0.6322          | 0.6909                  | 0.7772               | 0.7315           | 0.7999                 | 4     |
| 0.3725     | 0.6378          | 0.7134                  | 0.7782               | 0.7444           | 0.8096                 | 5     |
| 0.2987     | 0.6835          | 0.7270                  | 0.7777               | 0.7515           | 0.8056                 | 6     |

### Framework versions

- Transformers 4.23.1
- TensorFlow 2.6.0
- Datasets 2.6.1
- Tokenizers 0.13.1