TRAINING LOG
wandb: Run history:
wandb: eval/loss █▆▅▄▃▃▂▂▁▁▁
wandb: eval/runtime ▁▃▂▃▃▃▃█▃▄▁
wandb: eval/samples_per_second █▆▇▆▆▆▆▁▆▄█
wandb: eval/steps_per_second █▆▇▆▆▆▆▁▆▄█
wandb: train/epoch ▁▁▁▂▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
wandb: train/global_step ▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
wandb: train/learning_rate ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁
wandb: train/loss █▄▄▅▃▅▃▃▄▅▃▃▃▄▃▃▃▃▂▂▂▂▃▂▄▂▃▂▂▂▂▂▃▂▁▃▂▂▂▁
wandb: train/total_flos ▁
wandb: train/train_loss ▁
wandb: train/train_runtime ▁
wandb: train/train_samples_per_second ▁
wandb: train/train_steps_per_second ▁
wandb:
wandb: Run summary:
wandb: eval/loss 0.27314
wandb: eval/runtime 129.6563
wandb: eval/samples_per_second 7.713
wandb: eval/steps_per_second 7.713
wandb: train/epoch 0.53
wandb: train/global_step 1875
wandb: train/learning_rate 0.0002
wandb: train/loss 0.258
wandb: train/total_flos 1.9547706216175334e+17
wandb: train/train_loss 0.30445
wandb: train/train_runtime 13368.3721
wandb: train/train_samples_per_second 2.244
wandb: train/train_steps_per_second 0.14
wandb:
wandb: 🚀 View run happy-deluge-17 at: https://wandb.ai/metric/llm_finetune_multiwoz22.sh/runs/4epf9h85
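For context, below is a minimal sketch of a Hugging Face Trainer setup that would emit the train/* and eval/* series above via report_to="wandb". Only the learning rate (2e-4 with a flat schedule), the 1875 optimizer steps, and an eval batch size of 1 (eval samples/s equals steps/s) are read off the summary; the base model, dataset serialization, train batch sizes, and eval cadence are assumptions, not values recovered from this run.

```python
# Sketch only: hyperparameters marked "assumption" are not from the log above.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Llama-2-7b-hf"   # assumption; the log does not name the model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

raw = load_dataset("multi_woz_v22")       # MultiWOZ 2.2, suggested by the run name

def serialize(example):
    # Assumption: each dialogue is flattened into one newline-joined string.
    return {"text": "\n".join(example["turns"]["utterance"])}

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

train_ds = raw["train"].map(serialize).map(tokenize, batched=True)
eval_ds = raw["validation"].map(serialize).map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="llm_finetune_multiwoz22",
    learning_rate=2e-4,              # train/learning_rate in the summary
    lr_scheduler_type="constant",    # the learning-rate sparkline is flat
    max_steps=1875,                  # train/global_step (~0.53 epochs here)
    per_device_train_batch_size=4,   # assumption
    gradient_accumulation_steps=4,   # assumption
    per_device_eval_batch_size=1,    # eval samples/s == steps/s in the summary
    eval_strategy="steps",
    eval_steps=170,                  # assumption; roughly matches the 11 eval points
    logging_steps=10,
    report_to="wandb",               # produces the train/* and eval/* series above
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

A constant learning rate of 2e-4 is more typical of LoRA-style adapter tuning than full-parameter fine-tuning, but the log does not record which was used, so the sketch shows a plain causal-LM Trainer.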
INFERENCE LOG
TODO
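A minimal generation sketch, assuming the fine-tuned weights are saved under the Trainer output directory used above; the checkpoint path and the MultiWOZ-style prompt are placeholders, not values from this run.

```python
# Sketch only: checkpoint path and prompt are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "llm_finetune_multiwoz22"  # placeholder: wherever the fine-tuned weights were saved
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(
    ckpt, torch_dtype=torch.float16, device_map="auto"
)

prompt = "USER: I need a cheap restaurant in the centre of town.\nSYSTEM:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Print only the newly generated system turn, not the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```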