xshubhamx committed
Commit e42bbb8
1 parent: 9f59713

End of training

README.md CHANGED
@@ -1,9 +1,12 @@
 ---
 license: apache-2.0
-library_name: peft
+base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
 tags:
 - generated_from_trainer
-base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
+metrics:
+- accuracy
+- precision
+- recall
 model-index:
 - name: tiny-llama
   results: []
@@ -15,6 +18,22 @@ should probably proofread and complete it, then remove this comment. -->
 # tiny-llama
 
 This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.2265
+- Accuracy: 0.8327
+- Precision: 0.8301
+- Recall: 0.8327
+- Precision Macro: 0.7955
+- Recall Macro: 0.7536
+- Macro Fpr: 0.0148
+- Weighted Fpr: 0.0141
+- Weighted Specificity: 0.9765
+- Macro Specificity: 0.9873
+- Weighted Sensitivity: 0.8327
+- Macro Sensitivity: 0.7536
+- F1 Micro: 0.8327
+- F1 Macro: 0.7609
+- F1 Weighted: 0.8291
 
 ## Model description
 
@@ -30,44 +49,39 @@ More information needed
 
 ## Training procedure
 
-
-The following `bitsandbytes` quantization config was used during training:
-- quant_method: bitsandbytes
-- _load_in_8bit: False
-- _load_in_4bit: True
-- llm_int8_threshold: 6.0
-- llm_int8_skip_modules: None
-- llm_int8_enable_fp32_cpu_offload: False
-- llm_int8_has_fp16_weight: False
-- bnb_4bit_quant_type: nf4
-- bnb_4bit_use_double_quant: True
-- bnb_4bit_compute_dtype: bfloat16
-- bnb_4bit_quant_storage: uint8
-- load_in_4bit: True
-- load_in_8bit: False
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- train_batch_size: 2
+- eval_batch_size: 2
 - seed: 42
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- training_steps: 1
+- num_epochs: 10
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | Precision Macro | Recall Macro | Macro Fpr | Weighted Fpr | Weighted Specificity | Macro Specificity | Weighted Sensitivity | Macro Sensitivity | F1 Micro | F1 Macro | F1 Weighted |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:---------------:|:------------:|:---------:|:------------:|:--------------------:|:-----------------:|:--------------------:|:-----------------:|:--------:|:--------:|:-----------:|
-| No log | 0.0 | 1 | 4.1563 | 0.0225 | 0.0757 | 0.0225 | 0.0327 | 0.0617 | 0.4479 | 0.7566 | 0.9655 | 0.9325 | 0.0225 | 0.0617 | 0.0225 | 0.0149 | 0.0211 |
+| 1.0444 | 1.0 | 642 | 0.5968 | 0.8056 | 0.8050 | 0.8056 | 0.7122 | 0.6995 | 0.0175 | 0.0169 | 0.9730 | 0.9852 | 0.8056 | 0.6995 | 0.8056 | 0.6986 | 0.8014 |
+| 0.4788 | 2.0 | 1284 | 0.6966 | 0.8195 | 0.8222 | 0.8195 | 0.8092 | 0.7825 | 0.0161 | 0.0155 | 0.9755 | 0.9863 | 0.8195 | 0.7825 | 0.8195 | 0.7849 | 0.8172 |
+| 0.3354 | 3.0 | 1926 | 0.8046 | 0.8327 | 0.8276 | 0.8327 | 0.8058 | 0.7582 | 0.0148 | 0.0141 | 0.9758 | 0.9872 | 0.8327 | 0.7582 | 0.8327 | 0.7742 | 0.8282 |
+| 0.0571 | 4.0 | 2569 | 1.1143 | 0.8265 | 0.8312 | 0.8265 | 0.7904 | 0.7763 | 0.0152 | 0.0148 | 0.9772 | 0.9869 | 0.8265 | 0.7763 | 0.8265 | 0.7690 | 0.8262 |
+| 0.0187 | 5.0 | 3211 | 1.1104 | 0.8319 | 0.8316 | 0.8319 | 0.7745 | 0.7724 | 0.0149 | 0.0142 | 0.9770 | 0.9873 | 0.8319 | 0.7724 | 0.8319 | 0.7638 | 0.8303 |
+| 0.0071 | 6.0 | 3853 | 1.1445 | 0.8242 | 0.8210 | 0.8242 | 0.7684 | 0.7384 | 0.0157 | 0.0150 | 0.9755 | 0.9866 | 0.8242 | 0.7384 | 0.8242 | 0.7451 | 0.8209 |
+| 0.0002 | 7.0 | 4495 | 1.2032 | 0.8327 | 0.8302 | 0.8327 | 0.7985 | 0.7529 | 0.0148 | 0.0141 | 0.9765 | 0.9873 | 0.8327 | 0.7529 | 0.8327 | 0.7617 | 0.8293 |
+| 0.0028 | 8.0 | 5138 | 1.1918 | 0.8257 | 0.8226 | 0.8257 | 0.7738 | 0.7493 | 0.0155 | 0.0149 | 0.9756 | 0.9868 | 0.8257 | 0.7493 | 0.8257 | 0.7552 | 0.8229 |
+| 0.0 | 9.0 | 5780 | 1.2181 | 0.8311 | 0.8286 | 0.8311 | 0.7935 | 0.7522 | 0.0150 | 0.0143 | 0.9764 | 0.9872 | 0.8311 | 0.7522 | 0.8311 | 0.7592 | 0.8276 |
+| 0.0018 | 10.0 | 6420 | 1.2265 | 0.8327 | 0.8301 | 0.8327 | 0.7955 | 0.7536 | 0.0148 | 0.0141 | 0.9765 | 0.9873 | 0.8327 | 0.7536 | 0.8327 | 0.7609 | 0.8291 |
 
 
 ### Framework versions
 
-- PEFT 0.4.0
-- Transformers 4.39.3
-- Pytorch 2.1.2
+- Transformers 4.35.2
+- Pytorch 2.1.0+cu121
 - Datasets 2.18.0
-- Tokenizers 0.15.2
+- Tokenizers 0.15.1
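For reference, the 4-bit quantization block this commit drops from the card maps onto the `transformers` `BitsAndBytesConfig`. A minimal sketch, using only the values from the removed list (the `llm_int8_*` entries were at their defaults and are omitted):

```python
# Sketch of the 4-bit bitsandbytes setup the old card described.
# Field values are copied from the removed README section; everything
# else is left at its transformers default.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)
```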
 
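The updated hyperparameters keep the effective batch size at 8 (per-device batch size 2 × gradient_accumulation_steps 4), now trained for 10 full epochs instead of a single step. A minimal sketch of equivalent `TrainingArguments`, assuming the standard `Trainer` setup; `output_dir` is hypothetical, and `fp16=True` is one way to obtain the reported Native AMP:

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the card's hyperparameter list.
# output_dir is hypothetical; fp16 stands in for "Native AMP" (bf16 would
# be reported the same way). Adam betas/epsilon are the library defaults.
training_args = TrainingArguments(
    output_dir="tiny-llama",
    learning_rate=5e-5,
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    gradient_accumulation_steps=4,   # 2 * 4 = total_train_batch_size 8
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,
)
```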
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fc0089542ec0bd35351672b54d0e8a1316ced0a2268fa555f1e74c0579bda599
+oid sha256:7c1c014801ba286396ea8aa68cbe5a19eebaa25c6458094a9f8633597208c334
 size 50626520
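Only the LFS pointer for `adapter_model.safetensors` changes; the size is identical (50626520 bytes), so this is the same adapter shape with retrained weights. A minimal sketch of attaching the adapter with PEFT; the repo id `xshubhamx/tiny-llama` and the causal-LM head are assumptions, not stated in this commit:

```python
# Sketch: attach the retrained LoRA adapter to the base model.
# "xshubhamx/tiny-llama" is an assumed repo id, and AutoModelForCausalLM
# is an assumed task head; adjust both to the actual setup.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = PeftModel.from_pretrained(base, "xshubhamx/tiny-llama")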
runs/Apr17_08-26-15_530f60bd592e/events.out.tfevents.1713342376.530f60bd592e.249.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9862a1960ef57bce70bf51e71409e6fd78628f1a1855413d1d6bee7d2ca1ff6c
-size 16686
+oid sha256:99ce44f5a9bd0a887f045f317678f0cd8bc7b20704d3cb9fb93ab2ba0bf80f20
+size 18256
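The replaced event file under `runs/` holds the TensorBoard scalars behind the results table above. A minimal sketch for inspecting it locally, assuming the `tensorboard` package is installed; the `eval/loss` tag name is an assumption:

```python
# Sketch: read scalar tags from the committed TensorBoard event file.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Apr17_08-26-15_530f60bd592e")
acc.Reload()                   # parse the events file
print(acc.Tags()["scalars"])   # list available scalar tags
for event in acc.Scalars("eval/loss"):  # assumes this tag exists
    print(event.step, event.value)
```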