
mdeberta-v3-base-azsci-topics

This model is a fine-tuned version of microsoft/mdeberta-v3-base on the hajili/azsci_topics dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5213
  • Precision: 0.8633
  • Recall: 0.8759
  • F1: 0.8685
  • Accuracy: 0.8759
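
The snippet below is a minimal usage sketch using the standard `transformers` text-classification pipeline; the example sentence is made up for illustration, and the predicted label will be one of the topic classes listed in the evaluation table further down.

```python
from transformers import pipeline

# Load the fine-tuned topic classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="hajili/mdeberta-v3-base-azsci-topics",
)

# Illustrative Azerbaijani abstract-style input (invented for this example).
text = "Bu tədqiqatda maşın öyrənməsi üsulları ilə elmi mətnlərin mövzu təsnifatı araşdırılır."
print(classifier(text))
# -> [{'label': '<one of the topic classes below>', 'score': ...}]
```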

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
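
These hyperparameters map onto a standard `Trainer` setup roughly as sketched below. This is not the original training script: the dataset column names ("text", "label"), the split names, and the weighted-averaged metrics are assumptions (the headline precision/recall/F1 match the weighted-average row of the evaluation table, which suggests weighted averaging was used).

```python
import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumption: the dataset has "text" and integer-encoded "label" columns
# and "train"/"validation" splits.
dataset = load_dataset("hajili/azsci_topics")
tokenizer = AutoTokenizer.from_pretrained("microsoft/mdeberta-v3-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)
num_labels = len(set(dataset["train"]["label"]))  # 23 topics per the table below

model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/mdeberta-v3-base", num_labels=num_labels
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "accuracy": accuracy_score(labels, preds),
    }

args = TrainingArguments(
    output_dir="mdeberta-v3-base-azsci-topics",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",   # Adam betas=(0.9, 0.999), epsilon=1e-8 are the defaults
    evaluation_strategy="epoch",  # assumption: per-epoch evaluation, as in the results table
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```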

Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 288  | 0.9919          | 0.6713    | 0.7465 | 0.6959 | 0.7465   |
| 1.4813        | 2.0   | 576  | 0.7035          | 0.7994    | 0.8238 | 0.8022 | 0.8238   |
| 1.4813        | 3.0   | 864  | 0.5605          | 0.8540    | 0.8568 | 0.8462 | 0.8568   |
| 0.5512        | 4.0   | 1152 | 0.5296          | 0.8615    | 0.8689 | 0.8623 | 0.8689   |
| 0.5512        | 5.0   | 1440 | 0.5213          | 0.8633    | 0.8759 | 0.8685 | 0.8759   |

Evaluation results

| Topic              | Precision | Recall   | F1       | Support |
|:-------------------|----------:|---------:|---------:|--------:|
| Aqrar elmlər       | 0.678571  | 0.703704 | 0.690909 | 27      |
| Astronomiya        | 0         | 0        | 0        | 2       |
| Biologiya elmləri  | 0.877358  | 0.885714 | 0.881517 | 105     |
| Coğrafiya          | 0.833333  | 0.882353 | 0.857143 | 17      |
| Filologiya elmləri | 0.932203  | 0.942857 | 0.9375   | 175     |
| Fizika             | 0.763158  | 0.852941 | 0.805556 | 34      |
| Fəlsəfə            | 0.294118  | 0.357143 | 0.322581 | 14      |
| Hüquq elmləri      | 0.965517  | 0.965517 | 0.965517 | 29      |
| Kimya              | 0.828571  | 0.95082  | 0.885496 | 61      |
| Memarlıq           | 0         | 0        | 0        | 5       |
| Mexanika           | 0         | 0        | 0        | 4       |
| Pedaqogika         | 0.882353  | 0.957447 | 0.918367 | 47      |
| Psixologiya        | 1         | 0.722222 | 0.83871  | 18      |
| Riyaziyyat         | 0.871795  | 0.871795 | 0.871795 | 39      |
| Siyasi elmlər      | 0.807692  | 0.84     | 0.823529 | 25      |
| Sosiologiya        | 0         | 0        | 0        | 4       |
| Sənətşünaslıq      | 0.82      | 0.87234  | 0.845361 | 47      |
| Tarix              | 0.846154  | 0.846154 | 0.846154 | 78      |
| Texnika elmləri    | 0.822917  | 0.759615 | 0.79     | 104     |
| Tibb elmləri       | 0.953947  | 0.97973  | 0.966667 | 148     |
| Yer elmləri        | 0.7       | 0.538462 | 0.608696 | 13      |
| İqtisad elmləri    | 0.948052  | 0.960526 | 0.954248 | 152     |
| Əczaçılıq elmləri  | 0         | 0        | 0        | 4       |
| macro avg          | 0.644597  | 0.647363 | 0.643902 | 1152    |
| weighted avg       | 0.863306  | 0.875868 | 0.868519 | 1152    |

Framework versions

  • Transformers 4.38.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2