
LiLT-RE-PT-SIN

This model is a fine-tuned version of kavg/LiLT-RE-PT on the xfun dataset. It achieves the following results on the evaluation set:

  • Precision: 0.3631
  • Recall: 0.4823
  • F1: 0.4143
  • Loss: 0.1671

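The checkpoint can be pulled from the Hub like any other transformers model. Below is a minimal loading sketch, assuming the repo id `kavg/LiLT-RE-PT-SIN` and a standard tokenizer config in the repo. Note that the relation-extraction head used for XFUN-style entity linking is not a stock transformers head, so `AutoModel` only gives you the fine-tuned LiLT encoder and may warn about unused head weights:

```python
from transformers import AutoModel, AutoTokenizer

model_id = "kavg/LiLT-RE-PT-SIN"  # repo id taken from this card

# Loads the fine-tuned LiLT encoder weights; any custom relation-extraction
# head stored in the repo is not covered by AutoModel and may require the
# modeling code the checkpoint was originally trained with.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

print(model.config.model_type)  # expected: "lilt"
```
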
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 10000

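A minimal sketch of how these values map onto `transformers.TrainingArguments`. The `output_dir` and the 500-step eval/save cadence are assumptions inferred from this card (the results table below logs every 500 steps), and the Adam betas/epsilon listed above are already the `TrainingArguments` defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="checkpoints",        # assumed, not stated in this card
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=2,
    seed=42,
    max_steps=10_000,                # training_steps: 10000
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    evaluation_strategy="steps",     # results table logs every 500 steps
    eval_steps=500,
    save_steps=500,
    logging_steps=500,
)
```
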
Training results

| Training Loss | Epoch  | Step  | Precision | Recall | F1     | Validation Loss |
|---------------|--------|-------|-----------|--------|--------|-----------------|
| 0.0907        | 41.67  | 500   | 0.3315    | 0.3106 | 0.3207 | 0.2039          |
| 0.0766        | 83.33  | 1000  | 0.3631    | 0.4823 | 0.4143 | 0.1671          |
| 0.0639        | 125.0  | 1500  | 0.3640    | 0.6086 | 0.4556 | 0.2525          |
| 0.0309        | 166.67 | 2000  | 0.3973    | 0.6010 | 0.4784 | 0.2339          |
| 0.0318        | 208.33 | 2500  | 0.4045    | 0.6414 | 0.4961 | 0.3325          |
| 0.0144        | 250.0  | 3000  | 0.4268    | 0.6187 | 0.5052 | 0.3513          |
| 0.0163        | 291.67 | 3500  | 0.4273    | 0.6086 | 0.5021 | 0.2880          |
| 0.0062        | 333.33 | 4000  | 0.4368    | 0.6288 | 0.5155 | 0.3064          |
| 0.0115        | 375.0  | 4500  | 0.4386    | 0.6313 | 0.5176 | 0.3283          |
| 0.0168        | 416.67 | 5000  | 0.4373    | 0.6162 | 0.5115 | 0.3258          |
| 0.0062        | 458.33 | 5500  | 0.4530    | 0.6086 | 0.5194 | 0.3467          |
| 0.0074        | 500.0  | 6000  | 0.4569    | 0.6162 | 0.5247 | 0.3401          |
| 0.0037        | 541.67 | 6500  | 0.4559    | 0.6136 | 0.5231 | 0.3526          |
| 0.008         | 583.33 | 7000  | 0.4650    | 0.6035 | 0.5253 | 0.3076          |
| 0.0045        | 625.0  | 7500  | 0.4610    | 0.6111 | 0.5255 | 0.3799          |
| 0.0045        | 666.67 | 8000  | 0.4551    | 0.6136 | 0.5226 | 0.3692          |
| 0.0052        | 708.33 | 8500  | 0.4535    | 0.6162 | 0.5225 | 0.3492          |
| 0.0002        | 750.0  | 9000  | 0.4537    | 0.6061 | 0.5189 | 0.4075          |
| 0.0027        | 791.67 | 9500  | 0.4581    | 0.6212 | 0.5273 | 0.3816          |
| 0.0009        | 833.33 | 10000 | 0.4569    | 0.6162 | 0.5247 | 0.3834          |

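The evaluation results reported at the top of this card correspond to the step-1000 row. Precision, recall, and F1 here are computed over predicted entity-to-entity relations; the snippet below is a generic sketch of such set-based relation metrics (micro-averaged over documents), not the exact evaluation script used for XFUN:

```python
from typing import Iterable, Set, Tuple

Relation = Tuple[int, int]  # (head entity id, tail entity id)

def relation_prf(pred: Iterable[Set[Relation]], gold: Iterable[Set[Relation]]):
    """Micro-averaged precision/recall/F1 over per-document relation sets."""
    tp = fp = fn = 0
    for p, g in zip(pred, gold):
        tp += len(p & g)   # predicted links that are also gold
        fp += len(p - g)   # predicted links not in gold
        fn += len(g - p)   # gold links that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy usage: one document with two gold links, one of which is predicted.
print(relation_prf([{(0, 1)}], [{(0, 1), (2, 3)}]))  # (1.0, 0.5, 0.666...)
```
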
Framework versions

  • Transformers 4.38.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.1
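
To recreate this environment, pinning the versions above should suffice, e.g. `pip install transformers==4.38.2 datasets==2.18.0 tokenizers==0.15.1 torch==2.1.0`; the `+cu121` build of PyTorch comes from the CUDA 12.1 wheel index rather than the default PyPI wheel.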
