---
license: mit
base_model: Gladiator/microsoft-deberta-v3-large_ner_conll2003
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner_tag_model
  results:
  - task:
      name: Token Classification
      type: token-classification
    metrics:
    - name: Precision
      type: precision
      value: 0.8568714588197879
    - name: Recall
      type: recall
      value: 0.8550538245045557
    - name: F1
      type: f1
      value: 0.8559616767268047
    - name: Accuracy
      type: accuracy
      value: 0.9150941588185013
language:
- en
widget:
- text: apparatus for models demonstrational for co ltd education and NON-WOVEN other BAG 902300000000 or unsuitable example intex for designed instruments SS011 uses industries in china 2020 intex purposes exhibitions
- text: 62044200_IN Apparels india 620442000000 zimmermann zimmermann cotton of
- text: nuts or or screws not other Adjusting diesel with and their china screw bolts washers dt 2.24061 whether 731815000000 technic
- text: secret SHOP s canada victoria other 392690_CA ACCESSORIES victoria 392690999999 secret FITTING s
- text: HAC-30 68/550 germany in 730890200003 A.-Channel stores hilti F hilti 431892
---

# ner_tag_model

This model is a fine-tuned version of [Gladiator/microsoft-deberta-v3-large_ner_conll2003](https://huggingface.co/Gladiator/microsoft-deberta-v3-large_ner_conll2003) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1712
- Precision: 0.8569
- Recall: 0.8551
- F1: 0.8560
- Accuracy: 0.9151

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2322        | 1.0   | 2495  | 0.1925          | 0.7990    | 0.7924 | 0.7957 | 0.8969   |
| 0.1674        | 2.0   | 4990  | 0.1488          | 0.8218    | 0.8316 | 0.8267 | 0.9116   |
| 0.1381        | 3.0   | 7485  | 0.1438          | 0.8204    | 0.8350 | 0.8276 | 0.9130   |
| 0.1284        | 4.0   | 9980  | 0.1381          | 0.8419    | 0.8405 | 0.8412 | 0.9148   |
| 0.1198        | 5.0   | 12475 | 0.1400          | 0.8280    | 0.8410 | 0.8345 | 0.9148   |
| 0.1155        | 6.0   | 14970 | 0.1395          | 0.8379    | 0.8467 | 0.8423 | 0.9154   |
| 0.1125        | 7.0   | 17465 | 0.1496          | 0.8438    | 0.8487 | 0.8462 | 0.9151   |
| 0.1068        | 8.0   | 19960 | 0.1510          | 0.8518    | 0.8529 | 0.8523 | 0.9156   |
| 0.1002        | 9.0   | 22455 | 0.1616          | 0.8536    | 0.8539 | 0.8537 | 0.9150   |
| 0.0964        | 10.0  | 24950 | 0.1712          | 0.8569    | 0.8551 | 0.8560 | 0.9151   |

### Framework versions

- Transformers 4.33.1
- PyTorch 1.13.1+cu116
- Datasets 2.14.5
- Tokenizers 0.13.3
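
## How to use

A minimal usage sketch with the Transformers `pipeline` API for token classification. The repository id `your-username/ner_tag_model` is a placeholder assumption (substitute the actual model path or a local checkpoint directory); the sample text is taken from the widget examples above.

```python
from transformers import pipeline

# Load the fine-tuned token-classification (NER) model.
# NOTE: "your-username/ner_tag_model" is a placeholder repo id, not the real path.
ner = pipeline(
    "token-classification",
    model="your-username/ner_tag_model",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# One of the widget examples from the model card metadata.
text = "62044200_IN Apparels india 620442000000 zimmermann zimmermann cotton of"

# Print each predicted entity span with its label and confidence score.
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```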