Mohamed Boghdady committed on
Commit 008891d · verified · 1 Parent(s): 7b02929

Training in progress, step 1000

model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:abb6253513abbb16364581798cbd4ed1e2cd4c8d44eb72ccb8ac10c81a106de3
+oid sha256:c45f65d54d736e148c6d38ec5fe978cd594845d81d0547a74cd30c62a12fa1b9
 size 305518408
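
model.safetensors is stored through Git LFS, so the diff above only swaps the pointer file: the sha256 oid changes while the size stays 305518408 bytes (the step-1000 weights replace the step-500 weights of an identical architecture). A minimal sketch for checking that a downloaded weights file matches the oid in its pointer; the local path is an assumption:

```python
import hashlib

# Oid taken from the new pointer above; the weights path is a hypothetical
# local checkout location.
POINTER_OID = "c45f65d54d736e148c6d38ec5fe978cd594845d81d0547a74cd30c62a12fa1b9"
WEIGHTS_PATH = "model.safetensors"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MB chunks so a ~305 MB checkpoint never sits fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

assert sha256_of(WEIGHTS_PATH) == POINTER_OID, "weights do not match the LFS pointer"
```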
runs/Jul19_09-05-25_cfc182b336d9/events.out.tfevents.1721379927.cfc182b336d9.35.1 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:66eccb4ce5ce600581c0238c83d9104fa67adf7b16d1a183822640d524c7e5d1
-size 6304
+oid sha256:4921be2130e36a958c5766bd26be45c096474c37d95b6d148e4256b963354d34
+size 7255
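
The TensorBoard event file grows from 6304 to 7255 bytes because the trainer appended the step-1000 scalars. One way to inspect it locally is TensorBoard's EventAccumulator; a sketch using the path committed here, where the exact scalar tag names ("train/loss" etc.) are an assumption based on the wandb config below:

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Path as committed in this repo.
acc = EventAccumulator(
    "runs/Jul19_09-05-25_cfc182b336d9/events.out.tfevents.1721379927.cfc182b336d9.35.1"
)
acc.Reload()  # parse all events written so far

# List the available scalar tags, then print the logged training loss per step.
print(acc.Tags()["scalars"])
for event in acc.Scalars("train/loss"):
    print(event.step, event.value)
```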
wandb/debug-internal.log CHANGED
The diff for this file is too large to render. See raw diff
 
wandb/run-20240719_090532-oy10h8oj/files/config.yaml CHANGED
@@ -89,6 +89,18 @@ _wandb:
       5: 1
       6:
       - 1
+    - 1: train/loss
+      5: 1
+      6:
+      - 1
+    - 1: train/grad_norm
+      5: 1
+      6:
+      - 1
+    - 1: train/learning_rate
+      5: 1
+      6:
+      - 1
 vocab_size:
   desc: null
   value: 62834
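
The three new entries under `_wandb` look like wandb's internal metric registry picking up scalars logged for the first time during this interval; the numeric keys (1, 5, 6) appear to be field numbers from wandb's internal metric records, which is an assumption on my part. The training code never writes these directly; they show up as a side effect of logging calls like the following sketch (project name and resume flow are hypothetical, values taken from the summary below):

```python
import wandb

# A hedged sketch: the HF Trainer's WandbCallback issues calls equivalent to
# these when report_to="wandb". The run id matches this commit's run directory;
# the project name is hypothetical.
run = wandb.init(project="marian-finetune", id="oy10h8oj", resume="allow")

# Logging a key for the first time registers it in the run's config.yaml
# under _wandb, producing the added entries seen in the diff above.
wandb.log(
    {
        "train/loss": 0.2524,
        "train/grad_norm": 0.7630341649055481,
        "train/learning_rate": 1.3621794871794874e-05,
    },
    step=1000,
)
run.finish()
```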
wandb/run-20240719_090532-oy10h8oj/files/output.log CHANGED
@@ -1 +1,3 @@
 Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
+Non-default generation parameters: {'max_length': 512, 'num_beams': 4, 'bad_words_ids': [[62833]], 'forced_eos_token_id': 0}
+Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
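
The warning names the offending parameters explicitly: max_length=512, num_beams=4, bad_words_ids=[[62833]], forced_eos_token_id=0. Since transformers v4.41 turns this warning into an exception, the fix the message suggests is to move them out of the model config into a saved GenerationConfig. A minimal sketch, where the checkpoint directory name is an assumption:

```python
from transformers import GenerationConfig

# The exact values reported by the warning in output.log above.
generation_config = GenerationConfig(
    max_length=512,
    num_beams=4,
    bad_words_ids=[[62833]],
    forced_eos_token_id=0,
)

# "checkpoint-1000" is an assumed output directory for this step-1000 save;
# this writes generation_config.json next to model.safetensors.
generation_config.save_pretrained("checkpoint-1000")
```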
wandb/run-20240719_090532-oy10h8oj/files/wandb-summary.json CHANGED
@@ -1 +1 @@
-{"eval/loss": 0.3559863269329071, "eval/bleu": 25.127, "eval/gen_len": 33.2927, "eval/runtime": 163.8548, "eval/samples_per_second": 7.61, "eval/steps_per_second": 0.476, "train/epoch": 1.6025641025641026, "train/global_step": 500, "_timestamp": 1721380424.8560128, "_runtime": 492.4360637664795, "_step": 1, "train/loss": 0.6275, "train/grad_norm": 0.643909752368927, "train/learning_rate": 1.682692307692308e-05}
+{"eval/loss": 0.31763938069343567, "eval/bleu": 28.8587, "eval/gen_len": 33.1307, "eval/runtime": 160.8484, "eval/samples_per_second": 7.753, "eval/steps_per_second": 0.485, "train/epoch": 3.2051282051282053, "train/global_step": 1000, "_timestamp": 1721381085.6334226, "_runtime": 1153.213473558426, "_step": 4, "train/loss": 0.2524, "train/grad_norm": 0.7630341649055481, "train/learning_rate": 1.3621794871794874e-05}
wandb/run-20240719_090532-oy10h8oj/logs/debug-internal.log CHANGED
The diff for this file is too large to render. See raw diff
 
wandb/run-20240719_090532-oy10h8oj/run-oy10h8oj.wandb CHANGED
Binary files a/wandb/run-20240719_090532-oy10h8oj/run-oy10h8oj.wandb and b/wandb/run-20240719_090532-oy10h8oj/run-oy10h8oj.wandb differ