Mohamed Boghdady committed
Training in progress, step 1000
Files changed:
- model.safetensors +1 -1
- runs/Jul19_09-05-25_cfc182b336d9/events.out.tfevents.1721379927.cfc182b336d9.35.1 +2 -2
- wandb/debug-internal.log +0 -0
- wandb/run-20240719_090532-oy10h8oj/files/config.yaml +12 -0
- wandb/run-20240719_090532-oy10h8oj/files/output.log +2 -0
- wandb/run-20240719_090532-oy10h8oj/files/wandb-summary.json +1 -1
- wandb/run-20240719_090532-oy10h8oj/logs/debug-internal.log +0 -0
- wandb/run-20240719_090532-oy10h8oj/run-oy10h8oj.wandb +0 -0

model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:c45f65d54d736e148c6d38ec5fe978cd594845d81d0547a74cd30c62a12fa1b9
 size 305518408
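
The pointer above is a Git LFS stub: the repository tracks only the object's sha256 and byte size, while the weights themselves live in LFS storage (the old oid is truncated in this view). To check that a downloaded checkpoint matches the new pointer, a minimal sketch in plain Python, streaming the file so the ~305 MB checkpoint never has to fit in memory:

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Hash the file in 1 MiB chunks to keep memory use flat."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Expected value from the +oid line of this commit's LFS pointer.
    expected = "c45f65d54d736e148c6d38ec5fe978cd594845d81d0547a74cd30c62a12fa1b9"
    assert sha256_of("model.safetensors") == expected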

runs/Jul19_09-05-25_cfc182b336d9/events.out.tfevents.1721379927.cfc182b336d9.35.1
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:4921be2130e36a958c5766bd26be45c096474c37d95b6d148e4256b963354d34
+size 7255
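
The tfevents file is the TensorBoard log for this run; at 7255 bytes it stays small, so only its LFS pointer changes here. To inspect the scalars it records, a sketch using TensorBoard's EventAccumulator (assumes the tensorboard package is installed and you are at the repo root):

    from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

    # Point at the run directory from this commit; Reload() parses the events file.
    ea = EventAccumulator("runs/Jul19_09-05-25_cfc182b336d9")
    ea.Reload()
    for tag in ea.Tags()["scalars"]:
        for event in ea.Scalars(tag):
            print(tag, event.step, event.value)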

wandb/debug-internal.log
CHANGED
The diff for this file is too large to render.

wandb/run-20240719_090532-oy10h8oj/files/config.yaml
CHANGED
@@ -89,6 +89,18 @@ _wandb:
       5: 1
       6:
       - 1
+    - 1: train/loss
+      5: 1
+      6:
+      - 1
+    - 1: train/grad_norm
+      5: 1
+      6:
+      - 1
+    - 1: train/learning_rate
+      5: 1
+      6:
+      - 1
 vocab_size:
   desc: null
   value: 62834
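
The twelve added lines are wandb's internal bookkeeping for three newly seen metric keys: train/loss, train/grad_norm, and train/learning_rate each get a definition block the first time they are logged. A hypothetical manual equivalent of the log call that transformers' WandbCallback issues at step 1000 (the project name is an assumption; the values come from wandb-summary.json below):

    import wandb

    # Illustrative only: the Trainer's WandbCallback makes an equivalent call.
    run = wandb.init(project="marian-finetune")  # hypothetical project name
    run.log(
        {
            "train/loss": 0.2524,
            "train/grad_norm": 0.7630341649055481,
            "train/learning_rate": 1.3621794871794874e-05,
        },
        step=1000,
    )
    run.finish()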

wandb/run-20240719_090532-oy10h8oj/files/output.log
CHANGED
@@ -1 +1,3 @@
 Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
+Non-default generation parameters: {'max_length': 512, 'num_beams': 4, 'bad_words_ids': [[62833]], 'forced_eos_token_id': 0}
+Some non-default generation parameters are set in the model config. These should go into a GenerationConfig file (https://huggingface.co/docs/transformers/generation_strategies#save-a-custom-decoding-strategy-with-your-model) instead. This warning will be raised to an exception in v4.41.
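
The warning in output.log repeats, presumably because the checkpoint is saved more than once during the run. The fix it points to is to move those parameters out of config.json into a standalone GenerationConfig; a sketch using the exact values from the log (the output directory name is hypothetical):

    from transformers import GenerationConfig

    # The non-default parameters reported in output.log, saved the way
    # the warning recommends; this writes generation_config.json.
    gen_config = GenerationConfig(
        max_length=512,
        num_beams=4,
        bad_words_ids=[[62833]],
        forced_eos_token_id=0,
    )
    gen_config.save_pretrained("checkpoint-1000")  # hypothetical output dir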

wandb/run-20240719_090532-oy10h8oj/files/wandb-summary.json
CHANGED
@@ -1 +1 @@
-{"eval/loss": 0.
+{"eval/loss": 0.31763938069343567, "eval/bleu": 28.8587, "eval/gen_len": 33.1307, "eval/runtime": 160.8484, "eval/samples_per_second": 7.753, "eval/steps_per_second": 0.485, "train/epoch": 3.2051282051282053, "train/global_step": 1000, "_timestamp": 1721381085.6334226, "_runtime": 1153.213473558426, "_step": 4, "train/loss": 0.2524, "train/grad_norm": 0.7630341649055481, "train/learning_rate": 1.3621794871794874e-05}
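
wandb-summary.json keeps only the most recent value of every logged metric, so it is a quick way to read off the run's state at step 1000 without opening the dashboard. A minimal sketch using only the standard library:

    import json

    path = "wandb/run-20240719_090532-oy10h8oj/files/wandb-summary.json"
    with open(path) as f:
        summary = json.load(f)

    # At step 1000: eval/loss ~ 0.3176, eval/bleu ~ 28.86, train/loss ~ 0.2524.
    print(f"step {summary['train/global_step']}: "
          f"eval/loss={summary['eval/loss']:.4f}, eval/bleu={summary['eval/bleu']}")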

wandb/run-20240719_090532-oy10h8oj/logs/debug-internal.log
CHANGED
The diff for this file is too large to render.

wandb/run-20240719_090532-oy10h8oj/run-oy10h8oj.wandb
CHANGED
Binary files a/wandb/run-20240719_090532-oy10h8oj/run-oy10h8oj.wandb and b/wandb/run-20240719_090532-oy10h8oj/run-oy10h8oj.wandb differ