---
base_model: vinai/bartpho-word-base
tags:
- generated_from_trainer
model-index:
- name: bartpho-word-base-ed-with-tpl
  results: []
---

# bartpho-word-base-ed-with-tpl

This model is a fine-tuned version of [vinai/bartpho-word-base](https://huggingface.co/vinai/bartpho-word-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0290
- F1 Micro: 0.8028
- Recall Micro: 0.7998
- Precision Micro: 0.8058
- F1 Macro: 0.5871
- Recall Macro: 0.6278
- Precision Macro: 0.5995

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding training arguments is given at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch  | Step | Validation Loss | F1 Micro | Recall Micro | Precision Micro | F1 Macro | Recall Macro | Precision Macro |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------------:|:---------------:|:--------:|:------------:|:---------------:|
| No log        | 0.9987 | 393  | 0.0357          | 0.8039   | 0.8063       | 0.8016          | 0.5247   | 0.4945       | 0.6251          |
| 0.0066        | 2.0    | 787  | 0.0329          | 0.7708   | 0.7553       | 0.7869          | 0.5412   | 0.5687       | 0.5780          |
| 0.0089        | 2.9987 | 1180 | 0.0288          | 0.7958   | 0.7919       | 0.7996          | 0.5812   | 0.6155       | 0.5868          |
| 0.0116        | 4.0    | 1574 | 0.0280          | 0.8018   | 0.8007       | 0.8030          | 0.5702   | 0.5968       | 0.5947          |
| 0.0116        | 4.9936 | 1965 | 0.0290          | 0.8028   | 0.7998       | 0.8058          | 0.5871   | 0.6278       | 0.5995          |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
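
## How to use (sketch)

The card does not document the task head or the published repository path for this checkpoint. The snippet below is a minimal loading sketch that assumes the checkpoint keeps the seq2seq head of the BARTpho base model and is available under the (hypothetical) path `bartpho-word-base-ed-with-tpl`; adjust both as needed.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical model path -- replace with the actual repo id or local directory.
model_id = "bartpho-word-base-ed-with-tpl"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)  # assumes a seq2seq head, as in the BARTpho base

# BARTpho-word expects word-segmented Vietnamese input (e.g. produced by VnCoreNLP).
text = "Chúng_tôi là những nghiên_cứu_viên ."
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```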
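
## Training setup (sketch)

For reference, the hyperparameters listed above map onto the following `Seq2SeqTrainingArguments`. This is a reconstruction, not the original training script: the output directory, the choice of `Seq2SeqTrainer` over `Trainer`, and the data pipeline are assumptions; the Adam betas/epsilon and linear scheduler match the library defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the arguments implied by the "Training hyperparameters" section above.
training_args = Seq2SeqTrainingArguments(
    output_dir="bartpho-word-base-ed-with-tpl",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
)
```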