---
language:
- multilingual
- en
- de
- fr
- ja
license: mit
tags:
- object-detection
- vision
- generated_from_trainer
- DocLayNet
- COCO
- PDF
- IBM
- Financial-Reports
- Finance
- Manuals
- Scientific-Articles
- Science
- Laws
- Law
- Regulations
- Patents
- Government-Tenders
- image-segmentation
- token-classification
inference: false
datasets:
- pierreguillou/DocLayNet-base
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: lilt-xlm-roberta-base-finetuned-with-DocLayNet-base-at-linelevel-ml384
  results:
  - task:
      name: Token Classification
      type: token-classification
    metrics:
    - name: f1
      type: f1
      value: 0.8584
---

# Document Understanding model (at line level)

This model is a fine-tuned version of [nielsr/lilt-xlm-roberta-base](https://huggingface.co/nielsr/lilt-xlm-roberta-base) on the [DocLayNet base](https://huggingface.co/datasets/pierreguillou/DocLayNet-base) dataset. It achieves the following results on the evaluation set:

- Loss: 1.0003
- Precision: 0.8584
- Recall: 0.8584
- F1: 0.8584
- Accuracy: 0.8584

**References:**

- Blog post: [Document AI | Document Understanding model at line level with LiLT, Tesseract and DocLayNet dataset](https://medium.com/@pierre_guillou/document-ai-document-understanding-model-at-line-level-with-lilt-tesseract-and-doclaynet-dataset-347107a643b8)
- Notebook: [Document AI | Fine-tune LiLT on DocLayNet base in any language at line level (chunk of 384 tokens with overlap)](https://github.com/piegu/language-models/blob/master/Fine_tune_LiLT_on_DocLayNet_base_in_any_language_at_linelevel_ml_384.ipynb)
- Notebook: [Document AI | Inference at line level with a Document Understanding model (LiLT fine-tuned on DocLayNet dataset)](https://github.com/piegu/language-models/blob/master/inference_on_LiLT_model_finetuned_on_DocLayNet_base_in_any_language_at_levellines_ml384.ipynb)

### APP

You can test this model with the following app on Hugging Face Spaces: [Inference APP for Document Understanding at line level (v1)](https://huggingface.co/spaces/pierreguillou/Inference-APP-Document-Understanding-at-linelevel-v1).

![Inference APP for Document Understanding at line level (v1)](https://huggingface.co/pierreguillou/lilt-xlm-roberta-base-finetuned-with-DocLayNet-base-at-linelevel-ml384/resolve/main/app_lilt_document_understanding_AI.png)

### DocLayNet dataset

The [DocLayNet dataset](https://github.com/DS4SD/DocLayNet) (IBM) provides page-by-page layout segmentation ground truth, using bounding boxes for 11 distinct class labels, on 80,863 unique pages from 6 document categories.

As of today, the dataset can be downloaded through direct links or from the Hugging Face datasets hub:

- direct links: [doclaynet_core.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_core.zip) (28 GiB), [doclaynet_extra.zip](https://codait-cos-dax.s3.us.cloud-object-storage.appdomain.cloud/dax-doclaynet/1.0.0/DocLayNet_extra.zip) (7.5 GiB)
- Hugging Face dataset library: [dataset DocLayNet](https://huggingface.co/datasets/ds4sd/DocLayNet)

Paper: [DocLayNet: A Large Human-Annotated Dataset for Document-Layout Analysis](https://arxiv.org/abs/2206.01062) (06/02/2022)

## Model description

The model was fine-tuned at **line level on chunks of 384 tokens with an overlap of 128 tokens**. Thus, the model was trained with all the layout and text data of all pages of the dataset. At inference time, the predicted probabilities of the tokens of each line are aggregated, and the label with the highest probability is assigned to the line's bounding box.

## Inference

See notebook: [Document AI | Inference at line level with a Document Understanding model (LiLT fine-tuned on DocLayNet dataset)](https://github.com/piegu/language-models/blob/master/inference_on_LiLT_model_finetuned_on_DocLayNet_base_in_any_language_at_levellines_ml384.ipynb)
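For a quick test outside the notebook, the sketch below shows one way to run this checkpoint with the `transformers` library. It assumes word-level OCR output (e.g., from Tesseract) whose bounding boxes have already been normalized to the 0-1000 range that LiLT expects; the words, boxes, and aggregation logic are illustrative, not the notebook's exact code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "pierreguillou/lilt-xlm-roberta-base-finetuned-with-DocLayNet-base-at-linelevel-ml384"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)
model.eval()

# Hypothetical OCR output: one entry per word, with bounding boxes
# already normalized to the 0-1000 range LiLT expects.
words = ["DocLayNet:", "A", "Large", "Human-Annotated", "Dataset"]
boxes = [[82, 63, 220, 94], [230, 63, 252, 94], [260, 63, 330, 94],
         [340, 63, 560, 94], [570, 63, 660, 94]]

# Tokenize the way the model was trained: chunks of 384 tokens with an
# overlap (stride) of 128 tokens. This toy example fits in one chunk.
encoding = tokenizer(words, is_split_into_words=True, truncation=True,
                     max_length=384, stride=128, padding=True,
                     return_overflowing_tokens=True, return_tensors="pt")
encoding.pop("overflow_to_sample_mapping")

# Expand the word-level boxes to token level; special tokens get a dummy box.
bbox = []
for chunk_idx in range(len(encoding["input_ids"])):
    word_ids = encoding.word_ids(chunk_idx)
    bbox.append([boxes[w] if w is not None else [0, 0, 0, 0] for w in word_ids])
encoding["bbox"] = torch.tensor(bbox)

with torch.no_grad():
    logits = model(**encoding).logits  # (num_chunks, seq_len, num_labels)
predictions = logits.argmax(-1)

# Keep one label per word (first sub-token wins); on a real page the
# predictions of overlapping chunks would be merged at this step.
word_labels = {}
for chunk_idx in range(len(predictions)):
    for token_idx, w in enumerate(encoding.word_ids(chunk_idx)):
        if w is not None and w not in word_labels:
            word_labels[w] = model.config.id2label[predictions[chunk_idx][token_idx].item()]

print([(words[w], label) for w, label in sorted(word_labels.items())])
```

Line-level labels are then obtained by grouping these word predictions per OCR line, which is what the inference notebook above does at scale.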
## Training and evaluation data

See notebook: [Document AI | Fine-tune LiLT on DocLayNet base in any language at line level (chunk of 384 tokens with overlap)](https://github.com/piegu/language-models/blob/master/Fine_tune_LiLT_on_DocLayNet_base_in_any_language_at_linelevel_ml_384.ipynb)

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch of this configuration is given at the end of this card):

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.7223        | 0.21  | 500   | 0.7765          | 0.7741    | 0.7741 | 0.7741 | 0.7741   |
| 0.4469        | 0.42  | 1000  | 0.5914          | 0.8312    | 0.8312 | 0.8312 | 0.8312   |
| 0.3819        | 0.62  | 1500  | 0.8745          | 0.8102    | 0.8102 | 0.8102 | 0.8102   |
| 0.3361        | 0.83  | 2000  | 0.6991          | 0.8337    | 0.8337 | 0.8337 | 0.8337   |
| 0.2784        | 1.04  | 2500  | 0.7513          | 0.8119    | 0.8119 | 0.8119 | 0.8119   |
| 0.2377        | 1.25  | 3000  | 0.9048          | 0.8166    | 0.8166 | 0.8166 | 0.8166   |
| 0.2401        | 1.45  | 3500  | 1.2411          | 0.7939    | 0.7939 | 0.7939 | 0.7939   |
| 0.2054        | 1.66  | 4000  | 1.1594          | 0.8080    | 0.8080 | 0.8080 | 0.8080   |
| 0.1909        | 1.87  | 4500  | 0.7545          | 0.8425    | 0.8425 | 0.8425 | 0.8425   |
| 0.1704        | 2.08  | 5000  | 0.8567          | 0.8318    | 0.8318 | 0.8318 | 0.8318   |
| 0.1294        | 2.29  | 5500  | 0.8486          | 0.8489    | 0.8489 | 0.8489 | 0.8489   |
| 0.134         | 2.49  | 6000  | 0.7682          | 0.8573    | 0.8573 | 0.8573 | 0.8573   |
| 0.1354        | 2.7   | 6500  | 0.9871          | 0.8256    | 0.8256 | 0.8256 | 0.8256   |
| 0.1239        | 2.91  | 7000  | 1.1430          | 0.8189    | 0.8189 | 0.8189 | 0.8189   |
| 0.1012        | 3.12  | 7500  | 0.8272          | 0.8386    | 0.8386 | 0.8386 | 0.8386   |
| 0.0788        | 3.32  | 8000  | 1.0288          | 0.8365    | 0.8365 | 0.8365 | 0.8365   |
| 0.0802        | 3.53  | 8500  | 0.7197          | 0.8849    | 0.8849 | 0.8849 | 0.8849   |
| 0.0861        | 3.74  | 9000  | 1.1420          | 0.8320    | 0.8320 | 0.8320 | 0.8320   |
| 0.0639        | 3.95  | 9500  | 0.9563          | 0.8585    | 0.8585 | 0.8585 | 0.8585   |
| 0.0464        | 4.15  | 10000 | 1.0768          | 0.8511    | 0.8511 | 0.8511 | 0.8511   |
| 0.0412        | 4.36  | 10500 | 1.1184          | 0.8439    | 0.8439 | 0.8439 | 0.8439   |
| 0.039         | 4.57  | 11000 | 0.9634          | 0.8636    | 0.8636 | 0.8636 | 0.8636   |
| 0.0469        | 4.78  | 11500 | 0.9585          | 0.8634    | 0.8634 | 0.8634 | 0.8634   |
| 0.0395        | 4.99  | 12000 | 1.0003          | 0.8584    | 0.8584 | 0.8584 | 0.8584   |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
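For readers who want to reproduce this setup, the hyperparameters listed under "Training hyperparameters" map onto a `transformers.TrainingArguments` configuration roughly as below. This is a sketch under assumptions, not the notebook's exact code: `output_dir` is a hypothetical name, and the steps-based evaluation schedule is inferred from the results table (one evaluation every 500 steps).

```python
from datasets import load_dataset
from transformers import TrainingArguments

# The dataset used for fine-tuning (line-level layout + text).
dataset = load_dataset("pierreguillou/DocLayNet-base")

# Sketch of the training configuration implied by the hyperparameters above.
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the TrainingArguments defaults.
training_args = TrainingArguments(
    output_dir="lilt-doclaynet-base-linelevel-ml384",  # hypothetical name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    fp16=True,                    # mixed precision (native AMP)
    evaluation_strategy="steps",  # assumption: matches the results table
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,
)
```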