Amna100 committed on
Commit
2f8871d
1 Parent(s): 1e62b36

End of training

Files changed (2)
  1. README.md +85 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,85 @@
+ ---
+ license: mit
+ base_model: Amna100/PreTraining-MLM
+ tags:
+ - generated_from_trainer
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
+ model-index:
+ - name: fold_11
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/lvieenf2)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/fgis28rc)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/9tw0vsla)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/ccjl3n87)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/geyuezlx)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/sv9tcfx8)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/9rg5cz4h)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/3fdbnjrq)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/l78entvo)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/s3e8xbt2)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/wgkbnjuf)
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/amnasaeed100/FineTuning-ADE-Repeatedfold/runs/vqng60sy)
+ # fold_11
+
+ This model is a fine-tuned version of [Amna100/PreTraining-MLM](https://huggingface.co/Amna100/PreTraining-MLM) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0112
+ - Precision: 0.7083
+ - Recall: 0.6886
+ - F1: 0.6983
+ - Accuracy: 0.9993
+ - ROC AUC: 0.9950
+ - PR AUC: 0.9999
+
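+ The card does not state the downstream task explicitly. Given the token-level precision/recall/F1 metrics and the ADE-focused W&B project, the sketch below assumes a token-classification head and assumes the checkpoint is published as `Amna100/fold_11`; treat it as a minimal usage sketch rather than an official example.
+
+ ```python
+ # Minimal inference sketch. Assumptions not confirmed by this card:
+ # - the checkpoint carries a token-classification head
+ # - the repo id is "Amna100/fold_11"
+ from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline
+
+ model_id = "Amna100/fold_11"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForTokenClassification.from_pretrained(model_id)
+
+ ner = pipeline(
+     "token-classification",
+     model=model,
+     tokenizer=tokenizer,
+     aggregation_strategy="simple",  # merge sub-word pieces into entity spans
+ )
+ print(ner("The patient developed a severe rash after starting amoxicillin."))
+ ```
+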
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (an equivalent configuration sketch follows the list):
+ - learning_rate: 5e-05
+ - train_batch_size: 5
+ - eval_batch_size: 5
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10
+
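+ The training script itself is not part of this repository. As orientation only, here is a rough `TrainingArguments` equivalent of the configuration above; the model and dataset objects are placeholders.
+
+ ```python
+ # Sketch of a Hugging Face TrainingArguments setup matching the listed hyperparameters.
+ # Orientation only: the actual training script is not included in this repo,
+ # and model / train_dataset / eval_dataset below are placeholders.
+ from transformers import Trainer, TrainingArguments
+
+ args = TrainingArguments(
+     output_dir="fold_11",
+     learning_rate=5e-5,
+     per_device_train_batch_size=5,
+     per_device_eval_batch_size=5,
+     seed=42,
+     num_train_epochs=10,
+     lr_scheduler_type="linear",
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+ )
+
+ # trainer = Trainer(model=model, args=args,
+ #                   train_dataset=train_dataset, eval_dataset=eval_dataset)
+ # trainer.train()
+ ```
+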
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | ROC AUC | PR AUC |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------:|:------:|
+ | 0.0313 | 1.0 | 632 | 0.0125 | 0.7457 | 0.5494 | 0.6327 | 0.9992 | 0.9881 | 0.9997 |
+ | 0.0127 | 2.0 | 1264 | 0.0117 | 0.7078 | 0.6684 | 0.6875 | 0.9993 | 0.9949 | 0.9998 |
+ | 0.0061 | 3.0 | 1896 | 0.0112 | 0.7083 | 0.6886 | 0.6983 | 0.9993 | 0.9950 | 0.9999 |
+ | 0.0024 | 4.0 | 2528 | 0.0159 | 0.8163 | 0.6076 | 0.6967 | 0.9994 | 0.9893 | 0.9997 |
+ | 0.0016 | 5.0 | 3160 | 0.0153 | 0.7652 | 0.6684 | 0.7135 | 0.9993 | 0.9931 | 0.9998 |
+ | 0.0008 | 6.0 | 3792 | 0.0170 | 0.7798 | 0.6633 | 0.7168 | 0.9994 | 0.9921 | 0.9998 |
+
+
+ ### Framework versions
+
+ - Transformers 4.41.0.dev0
+ - PyTorch 2.2.1+cu121
+ - Datasets 2.19.1
+ - Tokenizers 0.19.1
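+
+ To compare a local environment against the versions above, a quick check (library names only, nothing model-specific):
+
+ ```python
+ # Print locally installed versions of the libraries listed above
+ # so they can be compared against the versions reported in this card.
+ import datasets
+ import tokenizers
+ import torch
+ import transformers
+
+ for name, module in [("Transformers", transformers), ("PyTorch", torch),
+                      ("Datasets", datasets), ("Tokenizers", tokenizers)]:
+     print(f"{name}: {module.__version__}")
+ ```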
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4bb294cefc6d0fdaf98b6ce49320d5f39fced42ca77ba19d116ccf437ded6aad
+ oid sha256:606861d19216fe0c0789a68904b1f56e99bd3a5ce66d7998f4c477823569e18f
  size 554446244
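
The `model.safetensors` entry above is a Git LFS pointer: it records the SHA-256 digest and byte size of the new weights file rather than the weights themselves. A minimal sketch for verifying a locally downloaded copy against that pointer (the local path is an assumption):

```python
# Verify a downloaded model.safetensors against the LFS pointer above.
# The path below is a placeholder for wherever the file was saved locally.
import hashlib
from pathlib import Path

path = Path("model.safetensors")  # assumed local path
expected_oid = "606861d19216fe0c0789a68904b1f56e99bd3a5ce66d7998f4c477823569e18f"
expected_size = 554_446_244

sha = hashlib.sha256()
size = 0
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha.update(chunk)
        size += len(chunk)

assert size == expected_size, "size mismatch with the LFS pointer"
assert sha.hexdigest() == expected_oid, "sha256 mismatch with the LFS pointer"
print("model.safetensors matches the LFS pointer")
```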