dariuslimzh committed
Commit 13950b5 · verified · 1 Parent(s): 4e5da4f

Training completed

Files changed (1):
  1. README.md (+9 -9)
README.md CHANGED
@@ -20,12 +20,12 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [ICT2214Team7/RoBERTa_Test_Training](https://huggingface.co/ICT2214Team7/RoBERTa_Test_Training) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.0458
- - Precision: 0.8687
- - Recall: 0.9485
- - F1: 0.9069
- - Accuracy: 0.9854
- - Report: {'AGE': {'precision': 0.9722222222222222, 'recall': 1.0, 'f1-score': 0.9859154929577464, 'support': 35}, 'LOC': {'precision': 0.7950819672131147, 'recall': 0.9509803921568627, 'f1-score': 0.8660714285714285, 'support': 102}, 'NAT': {'precision': 0.9107142857142857, 'recall': 0.9622641509433962, 'f1-score': 0.9357798165137615, 'support': 53}, 'ORG': {'precision': 0.825, 'recall': 0.868421052631579, 'f1-score': 0.8461538461538461, 'support': 38}, 'PER': {'precision': 0.9767441860465116, 'recall': 0.9545454545454546, 'f1-score': 0.9655172413793104, 'support': 44}, 'micro avg': {'precision': 0.8686868686868687, 'recall': 0.9485294117647058, 'f1-score': 0.9068541300527241, 'support': 272}, 'macro avg': {'precision': 0.8959525322392269, 'recall': 0.9472422100554585, 'f1-score': 0.9198875651152185, 'support': 272}, 'weighted avg': {'precision': 0.8739733079500703, 'recall': 0.9485294117647058, 'f1-score': 0.9083796434469559, 'support': 272}}
+ - Loss: 0.0510
+ - Precision: 0.8644
+ - Recall: 0.9375
+ - F1: 0.8995
+ - Accuracy: 0.9844
+ - Report: {'AGE': {'precision': 0.9722222222222222, 'recall': 1.0, 'f1-score': 0.9859154929577464, 'support': 35}, 'LOC': {'precision': 0.7741935483870968, 'recall': 0.9411764705882353, 'f1-score': 0.8495575221238938, 'support': 102}, 'NAT': {'precision': 0.9622641509433962, 'recall': 0.9622641509433962, 'f1-score': 0.9622641509433962, 'support': 53}, 'ORG': {'precision': 0.8421052631578947, 'recall': 0.8421052631578947, 'f1-score': 0.8421052631578947, 'support': 38}, 'PER': {'precision': 0.9318181818181818, 'recall': 0.9318181818181818, 'f1-score': 0.9318181818181818, 'support': 44}, 'micro avg': {'precision': 0.864406779661017, 'recall': 0.9375, 'f1-score': 0.8994708994708995, 'support': 272}, 'macro avg': {'precision': 0.8965206733057582, 'recall': 0.9354728133015415, 'f1-score': 0.9143321222002226, 'support': 272}, 'weighted avg': {'precision': 0.8713070577693443, 'recall': 0.9375, 'f1-score': 0.9013305496696996, 'support': 272}}
 
  ## Model description
 
@@ -56,9 +56,9 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Report |
  |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:------:|
- | No log | 1.0 | 100 | 0.0588 | 0.8404 | 0.9485 | 0.8912 | 0.9814 | {'AGE': {'precision': 0.9722222222222222, 'recall': 1.0, 'f1-score': 0.9859154929577464, 'support': 35}, 'LOC': {'precision': 0.7950819672131147, 'recall': 0.9509803921568627, 'f1-score': 0.8660714285714285, 'support': 102}, 'NAT': {'precision': 0.8253968253968254, 'recall': 0.9811320754716981, 'f1-score': 0.8965517241379309, 'support': 53}, 'ORG': {'precision': 0.8048780487804879, 'recall': 0.868421052631579, 'f1-score': 0.8354430379746836, 'support': 38}, 'PER': {'precision': 0.9111111111111111, 'recall': 0.9318181818181818, 'f1-score': 0.9213483146067416, 'support': 44}, 'micro avg': {'precision': 0.8403908794788274, 'recall': 0.9485294117647058, 'f1-score': 0.8911917098445595, 'support': 272}, 'macro avg': {'precision': 0.8617380349447522, 'recall': 0.9464703404156642, 'f1-score': 0.9010659996497061, 'support': 272}, 'weighted avg': {'precision': 0.8439206798606422, 'recall': 0.9485294117647058, 'f1-score': 0.8920945979148962, 'support': 272}} |
- | No log | 2.0 | 200 | 0.0501 | 0.8571 | 0.9485 | 0.9005 | 0.9820 | {'AGE': {'precision': 0.9722222222222222, 'recall': 1.0, 'f1-score': 0.9859154929577464, 'support': 35}, 'LOC': {'precision': 0.8235294117647058, 'recall': 0.9607843137254902, 'f1-score': 0.8868778280542987, 'support': 102}, 'NAT': {'precision': 0.8225806451612904, 'recall': 0.9622641509433962, 'f1-score': 0.8869565217391304, 'support': 53}, 'ORG': {'precision': 0.825, 'recall': 0.868421052631579, 'f1-score': 0.8461538461538461, 'support': 38}, 'PER': {'precision': 0.9318181818181818, 'recall': 0.9318181818181818, 'f1-score': 0.9318181818181818, 'support': 44}, 'micro avg': {'precision': 0.8571428571428571, 'recall': 0.9485294117647058, 'f1-score': 0.900523560209424, 'support': 272}, 'macro avg': {'precision': 0.87503009219328, 'recall': 0.9446575398237295, 'f1-score': 0.9075443741446408, 'support': 272}, 'weighted avg': {'precision': 0.8602005587181109, 'recall': 0.9485294117647058, 'f1-score': 0.9012173622098517, 'support': 272}} |
- | No log | 3.0 | 300 | 0.0458 | 0.8687 | 0.9485 | 0.9069 | 0.9854 | {'AGE': {'precision': 0.9722222222222222, 'recall': 1.0, 'f1-score': 0.9859154929577464, 'support': 35}, 'LOC': {'precision': 0.7950819672131147, 'recall': 0.9509803921568627, 'f1-score': 0.8660714285714285, 'support': 102}, 'NAT': {'precision': 0.9107142857142857, 'recall': 0.9622641509433962, 'f1-score': 0.9357798165137615, 'support': 53}, 'ORG': {'precision': 0.825, 'recall': 0.868421052631579, 'f1-score': 0.8461538461538461, 'support': 38}, 'PER': {'precision': 0.9767441860465116, 'recall': 0.9545454545454546, 'f1-score': 0.9655172413793104, 'support': 44}, 'micro avg': {'precision': 0.8686868686868687, 'recall': 0.9485294117647058, 'f1-score': 0.9068541300527241, 'support': 272}, 'macro avg': {'precision': 0.8959525322392269, 'recall': 0.9472422100554585, 'f1-score': 0.9198875651152185, 'support': 272}, 'weighted avg': {'precision': 0.8739733079500703, 'recall': 0.9485294117647058, 'f1-score': 0.9083796434469559, 'support': 272}} |
+ | No log | 1.0 | 100 | 0.0653 | 0.8305 | 0.9007 | 0.8642 | 0.9804 | {'AGE': {'precision': 0.9722222222222222, 'recall': 1.0, 'f1-score': 0.9859154929577464, 'support': 35}, 'LOC': {'precision': 0.7747747747747747, 'recall': 0.8431372549019608, 'f1-score': 0.8075117370892019, 'support': 102}, 'NAT': {'precision': 0.85, 'recall': 0.9622641509433962, 'f1-score': 0.9026548672566371, 'support': 53}, 'ORG': {'precision': 0.775, 'recall': 0.8157894736842105, 'f1-score': 0.7948717948717949, 'support': 38}, 'PER': {'precision': 0.875, 'recall': 0.9545454545454546, 'f1-score': 0.9130434782608695, 'support': 44}, 'micro avg': {'precision': 0.8305084745762712, 'recall': 0.9007352941176471, 'f1-score': 0.8641975308641974, 'support': 272}, 'macro avg': {'precision': 0.8493993993993992, 'recall': 0.9151472668150046, 'f1-score': 0.8807994740872498, 'support': 272}, 'weighted avg': {'precision': 0.8310838411941353, 'recall': 0.9007352941176471, 'f1-score': 0.8643124582714262, 'support': 272}} |
+ | No log | 2.0 | 200 | 0.0516 | 0.8462 | 0.9301 | 0.8862 | 0.9825 | {'AGE': {'precision': 0.9722222222222222, 'recall': 1.0, 'f1-score': 0.9859154929577464, 'support': 35}, 'LOC': {'precision': 0.7619047619047619, 'recall': 0.9411764705882353, 'f1-score': 0.8421052631578947, 'support': 102}, 'NAT': {'precision': 0.9259259259259259, 'recall': 0.9433962264150944, 'f1-score': 0.9345794392523364, 'support': 53}, 'ORG': {'precision': 0.8157894736842105, 'recall': 0.8157894736842105, 'f1-score': 0.8157894736842104, 'support': 38}, 'PER': {'precision': 0.9111111111111111, 'recall': 0.9318181818181818, 'f1-score': 0.9213483146067416, 'support': 44}, 'micro avg': {'precision': 0.8461538461538461, 'recall': 0.9301470588235294, 'f1-score': 0.8861646234676006, 'support': 272}, 'macro avg': {'precision': 0.8773906989696464, 'recall': 0.9264360705011445, 'f1-score': 0.899947596731786, 'support': 272}, 'weighted avg': {'precision': 0.8525920090258327, 'recall': 0.9301470588235294, 'f1-score': 0.8877713794805031, 'support': 272}} |
+ | No log | 3.0 | 300 | 0.0510 | 0.8644 | 0.9375 | 0.8995 | 0.9844 | {'AGE': {'precision': 0.9722222222222222, 'recall': 1.0, 'f1-score': 0.9859154929577464, 'support': 35}, 'LOC': {'precision': 0.7741935483870968, 'recall': 0.9411764705882353, 'f1-score': 0.8495575221238938, 'support': 102}, 'NAT': {'precision': 0.9622641509433962, 'recall': 0.9622641509433962, 'f1-score': 0.9622641509433962, 'support': 53}, 'ORG': {'precision': 0.8421052631578947, 'recall': 0.8421052631578947, 'f1-score': 0.8421052631578947, 'support': 38}, 'PER': {'precision': 0.9318181818181818, 'recall': 0.9318181818181818, 'f1-score': 0.9318181818181818, 'support': 44}, 'micro avg': {'precision': 0.864406779661017, 'recall': 0.9375, 'f1-score': 0.8994708994708995, 'support': 272}, 'macro avg': {'precision': 0.8965206733057582, 'recall': 0.9354728133015415, 'f1-score': 0.9143321222002226, 'support': 272}, 'weighted avg': {'precision': 0.8713070577693443, 'recall': 0.9375, 'f1-score': 0.9013305496696996, 'support': 272}} |
 
 
  ### Framework versions
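
The card being updated describes a RoBERTa-based token-classification (NER) model covering AGE, LOC, NAT, ORG and PER entities. As a rough illustration only, here is a minimal usage sketch with the `transformers` pipeline API; the repo id below is a placeholder, since this commit does not state the Hub id of the fine-tuned checkpoint.

```python
# Minimal usage sketch (assumption: the fine-tuned checkpoint has been pushed to the
# Hugging Face Hub; "ICT2214Team7/your-finetuned-checkpoint" is a placeholder id,
# not a name confirmed by this commit).
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ICT2214Team7/your-finetuned-checkpoint",
    aggregation_strategy="simple",  # merge word-piece predictions into whole entity spans
)

# Illustrative input; the model card reports AGE, LOC, NAT, ORG and PER entities.
print(ner("John, a 34-year-old Singaporean engineer, works at Acme Corp in Singapore."))
```

With `aggregation_strategy="simple"`, each detected entity is returned once as a dict with an `entity_group` label (e.g. PER, AGE, LOC), a confidence score and character offsets, rather than one prediction per sub-word token.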
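
The `Report` entries have the shape of the dictionary returned by seqeval's `classification_report(..., output_dict=True)`: per-entity precision, recall, f1-score and support, plus micro, macro and weighted averages. The sketch below shows how a report in that format is typically produced; this is an inference about the evaluation tooling, and the tag sequences are invented for illustration, not the model's actual evaluation data.

```python
# Sketch of generating a per-entity report in the same shape as the "Report" column
# above, using seqeval (assumption: the card's metrics come from a seqeval-style
# evaluation; the label sequences here are purely illustrative).
from seqeval.metrics import classification_report, precision_score, recall_score, f1_score

y_true = [["B-PER", "I-PER", "O", "B-AGE", "O", "B-LOC"],
          ["B-ORG", "I-ORG", "O", "B-NAT"]]
y_pred = [["B-PER", "I-PER", "O", "B-AGE", "O", "B-ORG"],
          ["B-ORG", "I-ORG", "O", "B-NAT"]]

report = classification_report(y_true, y_pred, output_dict=True)
print(report["micro avg"])  # overall precision / recall / f1-score / support

# The scalar Precision / Recall / F1 values in the card correspond to the micro average.
print(precision_score(y_true, y_pred),
      recall_score(y_true, y_pred),
      f1_score(y_true, y_pred))
```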