b09501048 committed on
Commit b7708a0
Parent: a71e94f

Model save

Files changed (1)
  1. README.md +5 -7
README.md CHANGED
```diff
@@ -14,9 +14,9 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/djengo890-national-taiwan-university/ADL_Gemma/runs/fdvjfol6)
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/djengo890-national-taiwan-university/ADL_Gemma/runs/fdvjfol6)
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/djengo890-national-taiwan-university/ADL_Gemma/runs/fdvjfol6)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/djengo890-national-taiwan-university/ADL_Gemma/runs/f98xwg6l)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/djengo890-national-taiwan-university/ADL_Gemma/runs/f98xwg6l)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/djengo890-national-taiwan-university/ADL_Gemma/runs/f98xwg6l)
 # ADL_Gemma
 
 This model is a fine-tuned version of [zake7749/gemma-2-2b-it-chinese-kyara-dpo](https://huggingface.co/zake7749/gemma-2-2b-it-chinese-kyara-dpo) on an unknown dataset.
@@ -39,14 +39,12 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 2
+- train_batch_size: 8
 - eval_batch_size: 2
 - seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- num_epochs: 2
+- num_epochs: 4
 
 ### Framework versions
 
```
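The net effect of this commit: the effective train batch size is unchanged (the old run reached 8 via train_batch_size 2 × gradient_accumulation_steps 4, while the new run uses train_batch_size 8 directly), num_epochs doubles from 2 to 4, and the badges now point at the new W&B run f98xwg6l. Below is a minimal sketch of a `transformers` `TrainingArguments` configuration matching the updated hyperparameters; it assumes a standard single-GPU `Trainer` run, and `output_dir` and `report_to` are illustrative guesses, not values recorded in the commit.

```python
# Minimal sketch, not the author's actual script: TrainingArguments
# reconstructed from the hyperparameters listed in the updated card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ADL_Gemma",          # placeholder; not recorded in this commit
    learning_rate=2e-4,              # learning_rate: 0.0002
    per_device_train_batch_size=8,   # train_batch_size: 8 (was 2 with accumulation 4)
    per_device_eval_batch_size=2,    # eval_batch_size: 2
    seed=42,                         # seed: 42
    adam_beta1=0.9,                  # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # and epsilon=1e-08
    lr_scheduler_type="cosine",      # lr_scheduler_type: cosine
    num_train_epochs=4,              # num_epochs: 4 (was 2)
    report_to="wandb",               # the card links a Weights & Biases run
)
```

The auto-generated card reports the optimizer simply as "Adam"; `Trainer`'s default optimizer is torch AdamW, so the beta and epsilon arguments above map the reported values onto that default rather than reproducing an exact optimizer class.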