---
license: mit
base_model: hongpingjun98/BioMedNLP_DeBERTa
tags:
- generated_from_trainer
datasets:
- sem_eval_2024_task_2
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: BioMedNLP_DeBERTa_all_updates
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: sem_eval_2024_task_2
      type: sem_eval_2024_task_2
      config: sem_eval_2024_task_2_source
      split: validation
      args: sem_eval_2024_task_2_source
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.705
    - name: Precision
      type: precision
      value: 0.7238235615241838
    - name: Recall
      type: recall
      value: 0.7050000000000001
    - name: F1
      type: f1
      value: 0.6986644194182692
---

# BioMedNLP_DeBERTa_all_updates

This model is a fine-tuned version of [hongpingjun98/BioMedNLP_DeBERTa](https://huggingface.co/hongpingjun98/BioMedNLP_DeBERTa) on the sem_eval_2024_task_2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1863
- Accuracy: 0.705
- Precision: 0.7238
- Recall: 0.7050
- F1: 0.6987
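
A minimal inference sketch follows; the repository id and the premise/statement input order are assumptions, and the printed label names depend on the model's `id2label` config:

```python
# Hypothetical usage example: the repo id below is a guess based on the model
# name in this card, and the sentence-pair format mirrors the NLI-style task.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "hongpingjun98/BioMedNLP_DeBERTa_all_updates"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "Adverse events were reported in 10% of patients in the intervention arm."
statement = "More than 5% of patients experienced adverse events."

# Encode the pair and take the argmax over the class logits.
inputs = tokenizer(premise, statement, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```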

## Model description

This is a DeBERTa-based sequence classification model for biomedical text, fine-tuned from [hongpingjun98/BioMedNLP_DeBERTa](https://huggingface.co/hongpingjun98/BioMedNLP_DeBERTa) on data from SemEval-2024 Task 2 (Safe Biomedical Natural Language Inference for Clinical Trials). It classifies a premise/statement pair, as reflected in the text-classification task metadata above.

## Intended uses & limitations

The model is intended for research on natural language inference over clinical trial reports in the SemEval-2024 Task 2 setting. The only reported evaluation is on the task's validation split (accuracy 0.705), and the rising validation loss across epochs in the table below suggests overfitting to the training data. It is not intended for clinical decision-making.

## Training and evaluation data

The model was trained and evaluated on the sem_eval_2024_task_2 dataset with the `sem_eval_2024_task_2_source` config; the results above are reported on its validation split.
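
A loading sketch, assuming the dataset is reachable under the identifier recorded in this card's metadata:

```python
# Hypothetical snippet: the dataset id and config name come from the card's
# metadata, but the exact hosting location is an assumption.
from datasets import load_dataset

dataset = load_dataset("sem_eval_2024_task_2", "sem_eval_2024_task_2_source")
print(dataset)                    # available splits and features
print(dataset["validation"][0])   # a single validation example
```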

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
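
A minimal sketch of how these values map onto `transformers` `TrainingArguments`; the output directory and per-epoch evaluation are assumptions, and the listed Adam settings are the library defaults:

```python
# Hypothetical reconstruction of the training configuration listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="BioMedNLP_DeBERTa_all_updates",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=20,
    fp16=True,  # "Native AMP" mixed precision
    evaluation_strategy="epoch",  # assumed: the table below reports per-epoch metrics
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the AdamW defaults.
)
```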

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.4238 | 1.0 | 116 | 0.6639 | 0.665 | 0.6678 | 0.665 | 0.6636 |
| 0.4316 | 2.0 | 232 | 0.6644 | 0.68 | 0.6875 | 0.6800 | 0.6768 |
| 0.3819 | 3.0 | 348 | 0.7328 | 0.71 | 0.7188 | 0.71 | 0.7071 |
| 0.3243 | 4.0 | 464 | 0.9162 | 0.7 | 0.7083 | 0.7 | 0.6970 |
| 0.4053 | 5.0 | 580 | 0.7145 | 0.715 | 0.7214 | 0.7150 | 0.7129 |
| 0.2548 | 6.0 | 696 | 1.0598 | 0.69 | 0.7016 | 0.69 | 0.6855 |
| 0.3455 | 7.0 | 812 | 0.7782 | 0.72 | 0.7232 | 0.72 | 0.7190 |
| 0.2177 | 8.0 | 928 | 1.1182 | 0.69 | 0.6950 | 0.69 | 0.6880 |
| 0.2304 | 9.0 | 1044 | 1.4332 | 0.695 | 0.708 | 0.695 | 0.6902 |
| 0.2103 | 10.0 | 1160 | 1.2736 | 0.7 | 0.7198 | 0.7 | 0.6931 |
| 0.1748 | 11.0 | 1276 | 1.2654 | 0.675 | 0.6816 | 0.675 | 0.6720 |
| 0.1608 | 12.0 | 1392 | 1.8885 | 0.63 | 0.6689 | 0.63 | 0.6074 |
| 0.1082 | 13.0 | 1508 | 1.7004 | 0.68 | 0.7005 | 0.6800 | 0.6716 |
| 0.1074 | 14.0 | 1624 | 1.8145 | 0.67 | 0.6804 | 0.67 | 0.6652 |
| 0.0238 | 15.0 | 1740 | 1.7608 | 0.68 | 0.6931 | 0.68 | 0.6745 |
| 0.038 | 16.0 | 1856 | 1.9937 | 0.67 | 0.6953 | 0.6700 | 0.6589 |
| 0.0365 | 17.0 | 1972 | 2.1871 | 0.675 | 0.6964 | 0.675 | 0.6659 |
| 0.0144 | 18.0 | 2088 | 2.1093 | 0.695 | 0.7059 | 0.6950 | 0.6909 |
| 0.0014 | 19.0 | 2204 | 2.1559 | 0.695 | 0.7103 | 0.6950 | 0.6893 |
| 0.0324 | 20.0 | 2320 | 2.1863 | 0.705 | 0.7238 | 0.7050 | 0.6987 |
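
Accuracy and recall coincide at every epoch, which is consistent with support-weighted averaging over the classes. Below is a hedged sketch of a matching `compute_metrics` function; the averaging mode is inferred from the table, not confirmed by the source:

```python
# Hypothetical compute_metrics: "weighted" averaging is inferred from the
# table above (recall always equals accuracy), not stated by the source.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```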

### Framework versions

- Transformers 4.35.2
- PyTorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|