Vishal74 committed on
Commit
5233ee8
1 Parent(s): 66f62b1

Model save

Files changed (2)
  1. README.md +5 -16
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -6,8 +6,6 @@ tags:
 - sft
 - generated_from_trainer
 base_model: microsoft/phi-1_5
-datasets:
-- generator
 model-index:
 - name: phi-1.5-lora-tuned-Ondolly
   results: []
@@ -18,9 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # phi-1.5-lora-tuned-Ondolly
 
-This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the generator dataset.
-It achieves the following results on the evaluation set:
-- Loss: 2.5945
+This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unknown dataset.
 
 ## Model description
 
@@ -40,22 +36,15 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 8
+- train_batch_size: 10
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 16
-- total_train_batch_size: 128
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 40
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.05
-- training_steps: 185
-
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-------:|:----:|:---------------:|
-| 2.7652 | 12.9032 | 100 | 2.5945 |
-
+- training_steps: 590
 
 ### Framework versions
 
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:51cba04a487fb7095322d80a545947d3ae12c1a060a916bf19420be14a574b0c
+oid sha256:d3d6828718cf7f3950b57cc177755a4522d520923e46fec1495ded3be0300e13
 size 9456136
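The adapter_model.safetensors change above only swaps the Git LFS pointer (the sha256 oid); the ~9.4 MB of LoRA adapter weights it points to are what actually changed. A minimal sketch of loading such an adapter on top of the base model with peft follows; the repo id "Vishal74/phi-1.5-lora-tuned-Ondolly" is an assumption inferred from the author and model name, not confirmed by this commit.

```python
# Sketch only: load the LoRA adapter onto microsoft/phi-1_5 with peft.
# The adapter repo id below is assumed, not taken from this commit.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")
model = PeftModel.from_pretrained(base, "Vishal74/phi-1.5-lora-tuned-Ondolly")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```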