AvniMittal13 committed
Commit 4d4a343
1 Parent(s): 3e49a24

AvniMittal13/shwartz2_normal

README.md CHANGED
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.8664
+ - Loss: 1.4782
 
  ## Model description
 
@@ -38,11 +38,9 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 0.0001
- - train_batch_size: 4
+ - train_batch_size: 1
  - eval_batch_size: 8
  - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 16
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_ratio: 0.05
@@ -51,18 +49,19 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 2.8473 | 0.416 | 13 | 2.9282 |
- | 1.9812 | 0.832 | 26 | 2.1081 |
- | 1.9719 | 1.248 | 39 | 1.9229 |
- | 1.3557 | 1.6640 | 52 | 1.8664 |
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 1.71 | 0.4 | 200 | 1.4977 |
+ | 1.7529 | 0.8 | 400 | 1.4883 |
+ | 1.1946 | 1.2 | 600 | 1.4800 |
+ | 1.6962 | 1.6 | 800 | 1.4786 |
+ | 1.1067 | 2.0 | 1000 | 1.4782 |
 
 
  ### Framework versions
 
  - PEFT 0.13.1.dev0
- - Transformers 4.45.0
- - Pytorch 2.4.1+cu121
+ - Transformers 4.44.2
+ - Pytorch 2.4.0
  - Datasets 3.0.0
- - Tokenizers 0.20.0
+ - Tokenizers 0.19.1
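
Taken together, the updated hyperparameters and the adapter changes below correspond to an ordinary PEFT fine-tune of phi-2. A minimal sketch of how such a run might be configured, using the values from the updated model card; the output directory, dataset handling, and Trainer wiring are placeholders, not taken from this commit:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

# Base model named in the model card.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# LoRA settings mirroring the updated adapter_config.json: rank 4 on the
# k/q/v attention projections, causal-LM task type.
lora_cfg = LoraConfig(r=4, target_modules=["k_proj", "q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(base, lora_cfg)

# Hyperparameters listed in the updated README; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="shwartz2_normal",   # assumption
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=2,             # the training log stops at epoch 2.0
)
# A Trainer(model=model, args=args, train_dataset=..., eval_dataset=...) call
# would complete the setup; the dataset itself is not identified in the card.
```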
adapter_config.json CHANGED
@@ -16,13 +16,12 @@
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
- "r": 16,
+ "r": 4,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
- "q_proj",
  "k_proj",
- "o_proj",
+ "q_proj",
  "v_proj"
  ],
  "task_type": "CAUSAL_LM",
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:6a59f5252ac67a07ba210bb6838e31443b2f8f16745f2caf604ce5edd20e7fed
- size 31483040
+ oid sha256:cb5d11f068d9010e806527241c8d42aca2d4ee94afc6478598e9851df4cfc374
+ size 7889592
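
The drop in adapter size (roughly 31.5 MB to 7.9 MB) is consistent with the rank and target-module changes above. A rough back-of-the-envelope check, assuming phi-2's published shape (hidden size 2560, 32 decoder layers) and float32 adapter weights; these assumptions are not stated in the commit itself:

```python
# LoRA adds two low-rank matrices per targeted linear layer:
# params = r * (in_features + out_features).
hidden = 2560          # phi-2 hidden size (assumption from the base model's config)
layers = 32            # phi-2 decoder layers (assumption)
r = 4
modules = ["k_proj", "q_proj", "v_proj"]   # from the updated adapter_config.json

params = layers * len(modules) * r * (hidden + hidden)
print(params)          # 1966080 trainable adapter parameters
print(params * 4)      # ~7.86 MB in float32, close to the 7,889,592-byte file
```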
runs/Sep26_03-39-35_d871e25019a7/events.out.tfevents.1727321981.d871e25019a7.30.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e87f9866f4801f995b1346a788858e088b9f090a98bf27fab400dde0c6ece06b
+ size 4184
runs/Sep26_03-40-21_d871e25019a7/events.out.tfevents.1727322024.d871e25019a7.30.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60102720cc597e6df9da152c7ad6d327108c65464914d57f5242d75e5227584b
+ size 4184
runs/Sep26_03-54-54_307bffa94c34/events.out.tfevents.1727322896.307bffa94c34.30.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3bbffc8e1aacb28942516a7c3372f841c5df4ebff8c1789269cb6722ec4168ac
+ size 4184
runs/Sep26_04-13-03_aad8fdad577d/events.out.tfevents.1727323985.aad8fdad577d.30.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3777c5239006c004fc83a1a6c64b033f3ac76028c9b3b4ba43320297ec988c7a
+ size 5318
runs/Sep26_04-15-39_aad8fdad577d/events.out.tfevents.1727324143.aad8fdad577d.30.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa3501dfd336fe3851fc2af92d468c3b6b02cde8d49684b1c00b048fb929e62b
+ size 4184
runs/Sep26_04-20-40_9b7b186f542d/events.out.tfevents.1727324590.9b7b186f542d.30.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:986b86b3361a3fe2ca3c88f5a8a9438ed0a5657186a9cfceb36bdd8f8066c7c1
+ size 217519
tokenizer.json CHANGED
The diff for this file is too large to render. See raw diff
 
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7b82ddfea009351f6d27b94c2b01cee4006d8853eaf508b9319fdd2b1635a915
- size 5496
+ oid sha256:3ce4ef602c582e23b22f85b2b957ec0b22b2ba87ce4f7cce11f0737541d6572d
+ size 5432
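
training_args.bin is the Trainer's serialized TrainingArguments object, stored here as a Git LFS pointer. A minimal sketch of inspecting it, assuming the file is first fetched from the Hub (the repo id comes from the commit header):

```python
import torch
from huggingface_hub import hf_hub_download

# Resolve the LFS pointer and download the actual binary from the Hub.
path = hf_hub_download("AvniMittal13/shwartz2_normal", "training_args.bin")

# The file is typically a torch-pickled TrainingArguments instance saved by Trainer.
args = torch.load(path, weights_only=False)
print(args.learning_rate, args.per_device_train_batch_size, args.lr_scheduler_type)
```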