---
language:
- mn
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-ner-demo
  results: []
---

# roberta-base-ner-demo

This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on an unspecified NER dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1263
- Precision: 0.9352
- Recall: 0.9416
- F1: 0.9384
- Accuracy: 0.9817

## Model description

More information needed

## Intended uses & limitations

More information needed. A minimal inference sketch is provided under "How to use" at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch mirroring them appears at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.161         | 1.0   | 477  | 0.0722          | 0.9132    | 0.9248 | 0.9190 | 0.9786   |
| 0.052         | 2.0   | 954  | 0.0732          | 0.9211    | 0.9353 | 0.9282 | 0.9797   |
| 0.028         | 3.0   | 1431 | 0.0802          | 0.9280    | 0.9354 | 0.9317 | 0.9804   |
| 0.015         | 4.0   | 1908 | 0.0954          | 0.9190    | 0.9324 | 0.9257 | 0.9791   |
| 0.0101        | 5.0   | 2385 | 0.0978          | 0.9312    | 0.9385 | 0.9348 | 0.9809   |
| 0.0055        | 6.0   | 2862 | 0.1072          | 0.9315    | 0.9392 | 0.9353 | 0.9810   |
| 0.0035        | 7.0   | 3339 | 0.1165          | 0.9313    | 0.9392 | 0.9352 | 0.9807   |
| 0.0026        | 8.0   | 3816 | 0.1223          | 0.9338    | 0.9403 | 0.9371 | 0.9812   |
| 0.002         | 9.0   | 4293 | 0.1234          | 0.9341    | 0.9398 | 0.9369 | 0.9813   |
| 0.0009        | 10.0  | 4770 | 0.1263          | 0.9352    | 0.9416 | 0.9384 | 0.9817   |

### Framework versions

- Transformers 4.28.1
- PyTorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
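
## How to use

The card does not document an inference recipe, so the snippet below is a minimal sketch rather than the author's verified usage. It assumes the fine-tuned checkpoint is available locally or on the Hugging Face Hub; `model_id` is a placeholder, and the Mongolian example sentence is illustrative only.

```python
from transformers import pipeline

# Placeholder id: replace with the actual local path or Hub repository of this checkpoint.
model_id = "roberta-base-ner-demo"

# A token-classification pipeline with an aggregation strategy merges word pieces
# back into whole entity spans, each with a single label and score.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

# Illustrative Mongolian input ("The President of Mongolia U. Khurelsukh is working in Ulaanbaatar").
print(ner("Монгол Улсын Ерөнхийлөгч У. Хүрэлсүх Улаанбаатар хотод ажиллаж байна."))
```

Each returned dictionary contains the aggregated entity text, its label, a confidence score, and character offsets into the input.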
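
## Reproducing the training setup

The hyperparameters above map directly onto `TrainingArguments`. The sketch below shows only that mapping; the dataset, label list, data collator, and metric function are not recorded on this card, so they appear as placeholders rather than as the original training script.

```python
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    TrainingArguments,
)

base_model = "bayartsogt/mongolian-roberta-base"
num_labels = 9  # placeholder: the actual label set is not documented on this card

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForTokenClassification.from_pretrained(base_model, num_labels=num_labels)

# Mirrors the hyperparameters listed under "Training hyperparameters";
# the Adam betas/epsilon shown on the card are the optimizer defaults.
args = TrainingArguments(
    output_dir="roberta-base-ner-demo",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",
)

# Trainer(model=model, args=args, train_dataset=..., eval_dataset=...,
#         data_collator=..., compute_metrics=...) would complete the setup,
# but those pieces depend on the undocumented dataset and label scheme.
```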