Train start time: 2024-12-08_10:58:26
Torch device: cuda
Processing dataset...
Loaded data: Batch(atomic_numbers=[2688000, 1], batch=[2688000], cell=[6000, 3, 3], edge_cell_shift=[104016464, 3], edge_index=[2, 104016464], forces=[2688000, 3], pbc=[6000, 3], pos=[2688000, 3], ptr=[6001], total_energy=[6000, 1])
processed data size: ~4132.50 MB
Cached processed data to disk
Done!
Successfully loaded the data set of type ASEDataset(6000)...
Replace string dataset_per_atom_total_energy_mean to -346.88965814708905
Atomic outputs are scaled by: [H, C, N, O, Zn: None], shifted by [H, C, N, O, Zn: -346.889658].
Replace string dataset_forces_rms to 1.218411654125436
Initially outputs are globally scaled by: 1.218411654125436, total_energy are globally shifted by None.
Successfully built the network...
Number of weights: 1406856
Number of trainable weights: 1406856
! Starting training ...

validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
0 100 264 1.13 241 0.951 1.29 18.9 18.9 0.0422 0.0422

Initialization
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Initial Validation 0 8.629 0.005 1.07 255 277 0.929 1.26 18.6 19.5 0.0415 0.0434
Wall time: 8.62962186569348
! Best model 0 276.600

training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
1 100 30.1 0.974 10.6 0.888 1.2 3.06 3.97 0.00683 0.00886
1 172 28.3 1 8.26 0.898 1.22 2.92 3.5 0.00651 0.00781
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
1 100 23 1.06 1.78 0.921 1.25 1.51 1.63 0.00337 0.00363
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 1 145.672 0.005 0.997 8.64e+03 8.66e+03 0.899 1.22 42.8 113 0.0956 0.253
! Validation 1 145.672 0.005 1 6.88 26.9 0.9 1.22 2.7 3.2 0.00602 0.00713
Wall time: 145.67242301860824
! Best model 1 26.896

training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
2 100 29.9 1 9.88 0.899 1.22 3.04 3.83 0.00678 0.00855
2 172 28.1 0.976 8.63 0.888 1.2 2.99 3.58 0.00667 0.00799
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
2 100 25.4 1.05 4.27 0.92 1.25 2.45 2.52 0.00546 0.00562
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 2 286.125 0.005 0.991 9.36 29.2 0.895 1.21 3.02 3.73 0.00674 0.00832
! Validation 2 286.125 0.005 0.998 6.68 26.6 0.898 1.22 2.65 3.15 0.00592 0.00703
Wall time: 286.1256426759064
! Best model 2 26.634

training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
3 100 28.6 0.997 8.66 0.898 1.22 2.93 3.58 0.00655 0.008
3 172 26.7 0.968 7.36 0.888 1.2 2.68 3.31 0.00597 0.00738
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
3 100 24.9 1.05 3.93 0.917 1.25 2.34 2.41 0.00523 0.00539
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 3 420.161 0.005 0.987 8.49 28.2 0.893 1.21 2.87 3.55 0.00641 0.00792
! Validation 3 420.161 0.005 0.994 6.42 26.3 0.896 1.21 2.59 3.09 0.00578 0.00689
Wall time: 420.16180676268414
!
Best model 3 26.301 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 4 100 27.7 0.965 8.36 0.885 1.2 2.94 3.52 0.00656 0.00786 4 172 27.7 0.978 8.14 0.887 1.2 2.69 3.48 0.006 0.00776 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 4 100 21.2 1.05 0.236 0.915 1.25 0.533 0.592 0.00119 0.00132 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 4 554.060 0.005 0.983 7.76 27.4 0.891 1.21 2.74 3.39 0.00611 0.00758 ! Validation 4 554.060 0.005 0.99 7.19 27 0.894 1.21 2.6 3.27 0.0058 0.00729 Wall time: 554.0601695757359 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 5 100 27 0.985 7.3 0.89 1.21 2.57 3.29 0.00573 0.00735 5 172 27.2 0.972 7.73 0.885 1.2 2.69 3.39 0.00601 0.00756 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 5 100 23.5 1.04 2.63 0.913 1.24 1.9 1.97 0.00423 0.00441 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 5 687.443 0.005 0.979 7 26.6 0.889 1.21 2.61 3.22 0.00582 0.0072 ! Validation 5 687.443 0.005 0.986 5.03 24.8 0.892 1.21 2.27 2.73 0.00506 0.0061 Wall time: 687.4435591120273 ! Best model 5 24.762 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 6 100 23.5 0.991 3.69 0.896 1.21 2.07 2.34 0.00462 0.00522 6 172 27.1 0.966 7.75 0.881 1.2 2.84 3.39 0.00634 0.00757 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 6 100 23 1.04 2.27 0.911 1.24 1.75 1.83 0.00391 0.00409 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 6 820.847 0.005 0.976 6.5 26 0.887 1.2 2.48 3.11 0.00555 0.00693 ! Validation 6 820.847 0.005 0.984 4.7 24.4 0.891 1.21 2.19 2.64 0.00489 0.0059 Wall time: 820.8474807026796 ! Best model 6 24.379 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 7 100 24.8 0.956 5.66 0.881 1.19 2.33 2.9 0.0052 0.00647 7 172 23.9 0.956 4.8 0.881 1.19 2.08 2.67 0.00465 0.00596 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 7 100 22.1 1.03 1.44 0.91 1.24 1.36 1.46 0.00304 0.00326 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 7 954.226 0.005 0.973 5.66 25.1 0.886 1.2 2.32 2.9 0.00518 0.00647 ! Validation 7 954.226 0.005 0.982 4.35 24 0.89 1.21 2.08 2.54 0.00465 0.00567 Wall time: 954.226825080812 ! Best model 7 23.980 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 8 100 24.8 0.99 4.96 0.887 1.21 2.44 2.71 0.00545 0.00605 8 172 28.5 0.952 9.51 0.878 1.19 3 3.76 0.00671 0.00839 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 8 100 33.5 1.03 12.9 0.908 1.24 4.34 4.37 0.00969 0.00976 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 8 1087.612 0.005 0.971 5.61 25 0.885 1.2 2.3 2.89 0.00514 0.00644 ! 
Validation 8 1087.612 0.005 0.98 12.2 31.8 0.889 1.21 3.71 4.26 0.00829 0.0095 Wall time: 1087.61236378178 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 9 100 29.5 0.982 9.91 0.889 1.21 3.21 3.83 0.00717 0.00856 9 172 23.9 0.951 4.84 0.877 1.19 2.01 2.68 0.00449 0.00598 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 9 100 20.7 1.03 0.198 0.907 1.23 0.354 0.542 0.000789 0.00121 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 9 1220.974 0.005 0.968 5.64 25 0.884 1.2 2.31 2.89 0.00516 0.00646 ! Validation 9 1220.974 0.005 0.978 4.84 24.4 0.888 1.2 2.11 2.68 0.0047 0.00598 Wall time: 1220.974216798786 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 10 100 22.1 0.984 2.39 0.889 1.21 1.62 1.88 0.00362 0.0042 10 172 23 0.974 3.52 0.881 1.2 2 2.29 0.00446 0.00511 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 10 100 21.2 1.02 0.782 0.906 1.23 0.952 1.08 0.00212 0.0024 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 10 1355.497 0.005 0.966 4.4 23.7 0.883 1.2 2.04 2.56 0.00456 0.00571 ! Validation 10 1355.497 0.005 0.975 3.57 23.1 0.888 1.2 1.88 2.3 0.00421 0.00514 Wall time: 1355.4971671649255 ! Best model 10 23.067 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 11 100 24.2 0.97 4.8 0.887 1.2 2.09 2.67 0.00468 0.00596 11 172 24.6 0.957 5.5 0.881 1.19 2.26 2.86 0.00505 0.00638 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 11 100 23.1 1.02 2.71 0.905 1.23 1.94 2.01 0.00433 0.00448 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 11 1488.919 0.005 0.963 4.18 23.4 0.882 1.2 1.99 2.49 0.00444 0.00556 ! Validation 11 1488.919 0.005 0.972 4.66 24.1 0.887 1.2 2.24 2.63 0.00499 0.00587 Wall time: 1488.9192247376777 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 12 100 24.9 0.939 6.08 0.872 1.18 2.62 3.01 0.00585 0.00671 12 172 22.9 0.932 4.3 0.866 1.18 1.87 2.53 0.00418 0.00564 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 12 100 25.5 1.01 5.22 0.903 1.23 2.74 2.78 0.00611 0.00621 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 12 1622.225 0.005 0.958 4.05 23.2 0.88 1.19 1.96 2.45 0.00438 0.00547 ! Validation 12 1622.225 0.005 0.968 7 26.4 0.885 1.2 2.76 3.22 0.00616 0.00719 Wall time: 1622.2263397229835 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 13 100 21.6 0.948 2.6 0.879 1.19 1.43 1.96 0.00319 0.00438 13 172 21.8 0.944 2.9 0.873 1.18 1.7 2.07 0.0038 0.00463 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 13 100 22.3 1.01 2.2 0.899 1.22 1.73 1.81 0.00386 0.00403 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 13 1755.550 0.005 0.953 3.62 22.7 0.878 1.19 1.85 2.32 0.00412 0.00517 ! 
Validation 13 1755.550 0.005 0.962 4.55 23.8 0.883 1.19 2.21 2.6 0.00493 0.0058 Wall time: 1755.5504464628175 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 14 100 21.4 0.931 2.83 0.873 1.18 1.6 2.05 0.00357 0.00458 14 172 22.2 0.944 3.35 0.873 1.18 1.93 2.23 0.00431 0.00498 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 14 100 20.4 0.997 0.453 0.895 1.22 0.711 0.82 0.00159 0.00183 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 14 1888.875 0.005 0.945 3.59 22.5 0.874 1.18 1.85 2.31 0.00413 0.00515 ! Validation 14 1888.875 0.005 0.953 3.63 22.7 0.879 1.19 1.93 2.32 0.0043 0.00518 Wall time: 1888.875309519004 ! Best model 14 22.681 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 15 100 21 0.911 2.77 0.861 1.16 1.63 2.03 0.00363 0.00453 15 172 20.7 0.928 2.15 0.867 1.17 1.48 1.79 0.00331 0.00399 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 15 100 22.7 0.977 3.15 0.886 1.2 2.1 2.16 0.00468 0.00483 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 15 2022.300 0.005 0.932 3.36 22 0.869 1.18 1.78 2.23 0.00396 0.00499 ! Validation 15 2022.300 0.005 0.933 5.68 24.3 0.87 1.18 2.46 2.9 0.00548 0.00648 Wall time: 2022.3000591387972 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 16 100 23.7 0.912 5.47 0.858 1.16 2.38 2.85 0.00531 0.00636 16 172 22.7 0.867 5.35 0.839 1.13 2.39 2.82 0.00533 0.00629 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 16 100 19.8 0.933 1.15 0.864 1.18 1.22 1.31 0.00272 0.00292 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 16 2155.823 0.005 0.901 3.65 21.7 0.854 1.16 1.86 2.33 0.00415 0.00519 ! Validation 16 2155.823 0.005 0.887 4.15 21.9 0.848 1.15 2.02 2.48 0.00451 0.00554 Wall time: 2155.82349598594 ! Best model 16 21.893 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 17 100 19.3 0.821 2.84 0.813 1.1 1.72 2.05 0.00385 0.00459 17 172 17.8 0.788 1.98 0.801 1.08 1.43 1.72 0.00318 0.00383 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 17 100 17.8 0.845 0.871 0.823 1.12 1.04 1.14 0.00232 0.00254 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 17 2289.550 0.005 0.841 3.01 19.8 0.826 1.12 1.68 2.12 0.00375 0.00472 ! Validation 17 2289.550 0.005 0.802 3.99 20 0.808 1.09 1.99 2.43 0.00444 0.00543 Wall time: 2289.5499115870334 ! Best model 17 20.032 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 18 100 16.2 0.692 2.37 0.75 1.01 1.57 1.87 0.00351 0.00418 18 172 13.9 0.611 1.71 0.709 0.953 1.16 1.59 0.00258 0.00356 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 18 100 20.1 0.634 7.45 0.715 0.97 3.28 3.33 0.00731 0.00742 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 18 2424.993 0.005 0.704 3.08 17.2 0.758 1.02 1.69 2.14 0.00377 0.00477 ! 
Validation 18 2424.993 0.005 0.61 11.9 24.1 0.707 0.952 3.79 4.2 0.00846 0.00937 Wall time: 2424.9929183018394 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 19 100 11.9 0.503 1.82 0.643 0.864 1.28 1.64 0.00285 0.00367 19 172 11.3 0.478 1.77 0.628 0.842 1.2 1.62 0.00269 0.00362 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 19 100 16.2 0.508 6.08 0.639 0.869 2.95 3 0.00658 0.00671 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 19 2558.583 0.005 0.533 3.34 14 0.661 0.889 1.77 2.23 0.00395 0.00497 ! Validation 19 2558.583 0.005 0.494 8.51 18.4 0.638 0.857 3.18 3.55 0.0071 0.00793 Wall time: 2558.583266887814 ! Best model 19 18.398 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 20 100 11.4 0.448 2.47 0.609 0.816 1.66 1.91 0.00371 0.00427 20 172 58.7 0.428 50.1 0.596 0.797 8.49 8.63 0.019 0.0193 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 20 100 11 0.459 1.79 0.607 0.826 1.55 1.63 0.00346 0.00364 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 20 2691.974 0.005 0.458 4.69 13.8 0.614 0.825 2.05 2.63 0.00457 0.00587 ! Validation 20 2691.974 0.005 0.445 3.49 12.4 0.606 0.813 1.91 2.28 0.00426 0.00508 Wall time: 2691.9741660719737 ! Best model 20 12.397 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 21 100 9.54 0.412 1.3 0.584 0.782 1.19 1.39 0.00267 0.0031 21 172 10.1 0.41 1.9 0.582 0.78 1.23 1.68 0.00275 0.00375 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 21 100 10.7 0.427 2.2 0.586 0.796 1.74 1.81 0.00389 0.00404 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 21 2836.606 0.005 0.422 3.8 12.2 0.59 0.792 1.87 2.37 0.00417 0.0053 ! Validation 21 2836.606 0.005 0.412 2.47 10.7 0.583 0.782 1.6 1.91 0.00357 0.00427 Wall time: 2836.606314809993 ! Best model 21 10.700 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 22 100 8.38 0.379 0.808 0.558 0.75 0.831 1.1 0.00185 0.00245 22 172 11.7 0.368 4.36 0.55 0.739 2.24 2.54 0.005 0.00568 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 22 100 9.28 0.399 1.3 0.567 0.77 1.31 1.39 0.00292 0.0031 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 22 2982.261 0.005 0.385 4.43 12.1 0.564 0.756 1.99 2.56 0.00445 0.00572 ! Validation 22 2982.261 0.005 0.383 1.22 8.89 0.563 0.755 1.13 1.35 0.00252 0.00301 Wall time: 2982.2612147647887 ! Best model 22 8.893 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 23 100 9.31 0.385 1.61 0.562 0.756 1.29 1.55 0.00287 0.00345 23 172 9.81 0.359 2.63 0.544 0.73 1.75 1.98 0.0039 0.00441 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 23 100 14.5 0.387 6.79 0.559 0.758 3.15 3.17 0.00702 0.00708 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 23 3116.179 0.005 0.375 8.04 15.5 0.555 0.746 2.71 3.45 0.00605 0.00771 ! 
Validation 23 3116.179 0.005 0.372 10.4 17.9 0.555 0.743 3.76 3.94 0.0084 0.00879 Wall time: 3116.179098114837 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 24 100 21.9 0.357 14.7 0.542 0.728 4.53 4.68 0.0101 0.0104 24 172 7.38 0.339 0.608 0.531 0.709 0.79 0.95 0.00176 0.00212 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 24 100 9.56 0.362 2.31 0.542 0.733 1.8 1.85 0.00403 0.00414 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 24 3249.550 0.005 0.35 4.73 11.7 0.537 0.721 2.09 2.65 0.00467 0.00592 ! Validation 24 3249.550 0.005 0.349 2.53 9.51 0.537 0.72 1.64 1.94 0.00365 0.00433 Wall time: 3249.5504334806465 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 25 100 9.05 0.331 2.43 0.524 0.701 1.61 1.9 0.0036 0.00424 25 172 9.73 0.33 3.14 0.522 0.699 1.87 2.16 0.00418 0.00482 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 25 100 11.7 0.351 4.63 0.533 0.722 2.59 2.62 0.00578 0.00585 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 25 3383.377 0.005 0.33 6.98 13.6 0.522 0.7 2.5 3.22 0.00557 0.00719 ! Validation 25 3383.377 0.005 0.339 7.31 14.1 0.529 0.709 3.1 3.29 0.00693 0.00735 Wall time: 3383.37750179274 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 26 100 7.92 0.309 1.75 0.505 0.677 1.37 1.61 0.00307 0.00359 26 172 7.34 0.305 1.25 0.502 0.672 1.05 1.36 0.00235 0.00304 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 26 100 6.78 0.328 0.226 0.515 0.698 0.463 0.579 0.00103 0.00129 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 26 3516.759 0.005 0.315 4.88 11.2 0.509 0.683 2.16 2.69 0.00483 0.00601 ! Validation 26 3516.759 0.005 0.317 0.901 7.23 0.512 0.686 0.952 1.16 0.00212 0.00258 Wall time: 3516.7594394190237 ! Best model 26 7.235 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 27 100 6.9 0.301 0.884 0.498 0.668 0.835 1.15 0.00186 0.00256 27 172 10.5 0.297 4.53 0.496 0.664 2.34 2.59 0.00523 0.00579 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 27 100 20.2 0.314 13.9 0.504 0.682 4.52 4.54 0.0101 0.0101 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 27 3650.145 0.005 0.3 6.46 12.5 0.498 0.668 2.21 3.1 0.00493 0.00692 ! Validation 27 3650.145 0.005 0.303 15.1 21.2 0.501 0.671 4.53 4.74 0.0101 0.0106 Wall time: 3650.1456774109975 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 28 100 8.11 0.292 2.28 0.494 0.658 1.56 1.84 0.00348 0.00411 28 172 8.05 0.302 2.01 0.502 0.67 1.44 1.73 0.00321 0.00386 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 28 100 9.11 0.316 2.79 0.507 0.685 1.99 2.03 0.00445 0.00454 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 28 3786.131 0.005 0.29 9.51 15.3 0.489 0.656 3.11 3.76 0.00695 0.00839 ! 
Validation 28 3786.131 0.005 0.306 4.24 10.4 0.503 0.674 2.16 2.51 0.00482 0.0056 Wall time: 3786.131102227606 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 29 100 13.1 0.293 7.23 0.493 0.66 3.07 3.28 0.00685 0.00731 29 172 16.2 0.28 10.6 0.482 0.645 3.8 3.97 0.00848 0.00886 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 29 100 24.9 0.294 19 0.489 0.661 5.3 5.31 0.0118 0.0119 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 29 3920.104 0.005 0.281 6.06 11.7 0.482 0.646 2.38 3 0.00532 0.0067 ! Validation 29 3920.104 0.005 0.286 20.6 26.3 0.487 0.651 5.37 5.53 0.012 0.0123 Wall time: 3920.104113060981 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 30 100 8.88 0.27 3.48 0.473 0.633 1.98 2.27 0.00443 0.00507 30 172 14.4 0.271 8.93 0.475 0.634 3.18 3.64 0.0071 0.00813 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 30 100 6.09 0.287 0.358 0.483 0.652 0.612 0.729 0.00137 0.00163 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 30 4053.524 0.005 0.27 8.14 13.5 0.473 0.633 2.8 3.48 0.00626 0.00776 ! Validation 30 4053.524 0.005 0.279 0.708 6.29 0.482 0.644 0.843 1.03 0.00188 0.00229 Wall time: 4053.524088451639 ! Best model 30 6.292 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 31 100 6.23 0.251 1.2 0.457 0.611 1.14 1.33 0.00253 0.00298 31 172 6.15 0.25 1.15 0.457 0.609 1.11 1.31 0.00248 0.00291 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 31 100 7.77 0.27 2.37 0.469 0.633 1.83 1.87 0.00409 0.00418 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 31 4190.128 0.005 0.259 5.56 10.7 0.463 0.62 2.37 2.87 0.00529 0.00641 ! Validation 31 4190.128 0.005 0.266 3.03 8.36 0.471 0.629 1.74 2.12 0.00389 0.00474 Wall time: 4190.127964880783 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 32 100 11.1 0.251 6.11 0.457 0.611 2.83 3.01 0.00632 0.00672 32 172 26.1 0.243 21.2 0.451 0.601 5.38 5.61 0.012 0.0125 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 32 100 29.2 0.26 24 0.461 0.622 5.95 5.96 0.0133 0.0133 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 32 4323.543 0.005 0.249 6.71 11.7 0.455 0.608 2.57 3.15 0.00573 0.00704 ! Validation 32 4323.543 0.005 0.258 24.7 29.9 0.463 0.618 5.95 6.06 0.0133 0.0135 Wall time: 4323.543497852981 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 33 100 8.84 0.227 4.29 0.436 0.581 2.31 2.52 0.00515 0.00563 33 172 7.16 0.234 2.47 0.442 0.59 1.73 1.92 0.00387 0.00428 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 33 100 5.8 0.248 0.831 0.45 0.607 1.04 1.11 0.00232 0.00248 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 33 4458.540 0.005 0.238 5.13 9.88 0.445 0.594 2.22 2.76 0.00495 0.00616 ! Validation 33 4458.540 0.005 0.245 1.28 6.17 0.452 0.603 1.15 1.38 0.00256 0.00307 Wall time: 4458.540728919674 ! 
Best model 33 6.170 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 34 100 6.43 0.23 1.84 0.437 0.584 1.38 1.65 0.00308 0.00369 34 172 6.74 0.227 2.2 0.436 0.58 1.66 1.81 0.00371 0.00404 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 34 100 19.1 0.235 14.5 0.438 0.59 4.62 4.63 0.0103 0.0103 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 34 4591.959 0.005 0.23 4.53 9.13 0.438 0.584 2.01 2.59 0.00448 0.00579 ! Validation 34 4591.959 0.005 0.233 18.7 23.3 0.442 0.588 5.09 5.26 0.0114 0.0117 Wall time: 4591.9593450916 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 35 100 6.96 0.232 2.32 0.44 0.587 1.52 1.86 0.00338 0.00414 35 172 5.58 0.221 1.16 0.429 0.573 1.11 1.31 0.00248 0.00293 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 35 100 4.7 0.23 0.1 0.434 0.584 0.369 0.386 0.000823 0.000861 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 35 4731.375 0.005 0.223 6.21 10.7 0.432 0.576 2.46 3.04 0.00548 0.00678 ! Validation 35 4731.375 0.005 0.229 0.827 5.41 0.438 0.583 0.878 1.11 0.00196 0.00247 Wall time: 4731.375755554996 ! Best model 35 5.407 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 36 100 13.3 0.211 9.14 0.42 0.559 3.43 3.68 0.00767 0.00822 36 172 27.2 0.215 22.9 0.424 0.565 5.73 5.83 0.0128 0.013 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 36 100 21.3 0.228 16.8 0.431 0.581 4.97 4.99 0.0111 0.0111 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 36 4864.788 0.005 0.213 5.03 9.29 0.422 0.563 2.19 2.73 0.00488 0.00609 ! Validation 36 4864.788 0.005 0.226 16.7 21.3 0.435 0.579 4.91 4.99 0.011 0.0111 Wall time: 4864.788779557683 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 37 100 4.66 0.207 0.516 0.418 0.555 0.7 0.875 0.00156 0.00195 37 172 12 0.197 8.03 0.406 0.54 3.37 3.45 0.00752 0.00771 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 37 100 13.8 0.213 9.54 0.418 0.562 3.75 3.76 0.00836 0.0084 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 37 4998.548 0.005 0.209 4 8.18 0.418 0.557 2 2.43 0.00447 0.00543 ! Validation 37 4998.548 0.005 0.212 13.8 18 0.423 0.562 4.36 4.52 0.00972 0.0101 Wall time: 4998.54843165772 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 38 100 8.2 0.202 4.16 0.41 0.548 2.29 2.49 0.00511 0.00555 38 172 7.32 0.21 3.12 0.42 0.558 1.97 2.15 0.0044 0.00481 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 38 100 7.53 0.219 3.15 0.424 0.57 2.13 2.16 0.00475 0.00483 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 38 5131.940 0.005 0.202 6.02 10.1 0.411 0.548 2.49 2.99 0.00557 0.00667 ! 
Validation 38 5131.940 0.005 0.218 2.96 7.31 0.427 0.568 1.93 2.1 0.0043 0.00468 Wall time: 5131.940258490853 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 39 100 5.55 0.199 1.57 0.408 0.543 1.28 1.53 0.00286 0.00341 39 172 6.45 0.191 2.64 0.401 0.532 1.81 1.98 0.00403 0.00442 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 39 100 5.39 0.201 1.37 0.406 0.546 1.38 1.42 0.00307 0.00318 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 39 5265.332 0.005 0.199 3.72 7.71 0.409 0.544 1.85 2.35 0.00412 0.00525 ! Validation 39 5265.332 0.005 0.202 3.11 7.14 0.412 0.547 1.94 2.15 0.00433 0.00479 Wall time: 5265.332109221723 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 40 100 16.9 0.184 13.2 0.397 0.523 4.34 4.43 0.00968 0.00989 40 172 4.27 0.174 0.789 0.384 0.508 0.953 1.08 0.00213 0.00242 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 40 100 3.9 0.189 0.116 0.395 0.53 0.344 0.415 0.000768 0.000927 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 40 5398.742 0.005 0.188 2.95 6.71 0.398 0.529 1.69 2.09 0.00376 0.00467 ! Validation 40 5398.742 0.005 0.19 0.608 4.4 0.401 0.531 0.744 0.95 0.00166 0.00212 Wall time: 5398.742245405912 ! Best model 40 4.403 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 41 100 4.25 0.19 0.445 0.402 0.531 0.667 0.813 0.00149 0.00181 41 172 5.16 0.183 1.49 0.394 0.522 1.25 1.49 0.00279 0.00332 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 41 100 3.94 0.193 0.0874 0.4 0.535 0.333 0.36 0.000743 0.000804 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 41 5532.149 0.005 0.195 6.38 10.3 0.405 0.538 2.24 3.08 0.00499 0.00687 ! Validation 41 5532.149 0.005 0.193 1.04 4.91 0.404 0.536 1.01 1.24 0.00225 0.00277 Wall time: 5532.149356646929 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 42 100 4.66 0.186 0.938 0.395 0.525 1.05 1.18 0.00235 0.00263 42 172 4.62 0.178 1.06 0.384 0.515 0.849 1.25 0.00189 0.0028 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 42 100 3.96 0.178 0.409 0.384 0.513 0.692 0.78 0.00154 0.00174 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 42 5666.899 0.005 0.18 2.94 6.54 0.39 0.517 1.67 2.09 0.00373 0.00467 ! Validation 42 5666.899 0.005 0.179 1.39 4.98 0.39 0.516 1.23 1.44 0.00275 0.00321 Wall time: 5666.89889871981 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 43 100 4.14 0.171 0.731 0.378 0.503 0.82 1.04 0.00183 0.00233 43 172 4.18 0.166 0.864 0.373 0.496 0.91 1.13 0.00203 0.00253 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 43 100 3.67 0.169 0.288 0.376 0.501 0.551 0.654 0.00123 0.00146 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 43 5800.297 0.005 0.167 2.69 6.03 0.376 0.498 1.63 2 0.00363 0.00446 ! Validation 43 5800.297 0.005 0.17 0.451 3.86 0.381 0.503 0.631 0.818 0.00141 0.00183 Wall time: 5800.297364791855 ! 
Best model 43 3.860 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 44 100 6.52 0.158 3.37 0.365 0.484 2.05 2.24 0.00458 0.00499 44 172 3.52 0.154 0.444 0.36 0.478 0.69 0.812 0.00154 0.00181 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 44 100 4.13 0.154 1.05 0.359 0.479 1.2 1.25 0.00268 0.00278 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 44 5933.732 0.005 0.159 2.32 5.5 0.367 0.486 1.49 1.86 0.00333 0.00415 ! Validation 44 5933.732 0.005 0.158 2.88 6.04 0.367 0.484 1.86 2.07 0.00415 0.00462 Wall time: 5933.73237325903 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 45 100 3.33 0.15 0.33 0.357 0.472 0.481 0.7 0.00107 0.00156 45 172 4.39 0.147 1.46 0.354 0.467 1.26 1.47 0.00281 0.00328 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 45 100 6.65 0.151 3.64 0.356 0.473 2.3 2.32 0.00513 0.00519 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 45 6070.602 0.005 0.153 3.22 6.29 0.361 0.477 1.73 2.19 0.00387 0.00488 ! Validation 45 6070.602 0.005 0.154 2.58 5.65 0.363 0.478 1.81 1.96 0.00404 0.00437 Wall time: 6070.602058457676 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 46 100 8.99 0.146 6.08 0.353 0.465 2.95 3 0.00658 0.00671 46 172 4.65 0.144 1.76 0.352 0.463 1.38 1.62 0.00307 0.00361 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 46 100 3.15 0.143 0.295 0.348 0.461 0.566 0.661 0.00126 0.00148 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 46 6205.269 0.005 0.147 3.11 6.05 0.354 0.467 1.72 2.15 0.00384 0.0048 ! Validation 46 6205.269 0.005 0.147 0.416 3.36 0.355 0.467 0.615 0.786 0.00137 0.00175 Wall time: 6205.268907324877 ! Best model 46 3.360 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 47 100 8.31 0.151 5.29 0.359 0.473 2.67 2.8 0.00596 0.00625 47 172 2.93 0.13 0.321 0.335 0.44 0.544 0.691 0.00121 0.00154 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 47 100 4.62 0.135 1.93 0.339 0.447 1.66 1.69 0.0037 0.00377 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 47 6338.681 0.005 0.145 3.55 6.45 0.352 0.464 1.84 2.3 0.00411 0.00513 ! Validation 47 6338.681 0.005 0.14 1.33 4.13 0.347 0.456 1.22 1.4 0.00271 0.00313 Wall time: 6338.681006518658 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 48 100 3.06 0.132 0.428 0.336 0.442 0.686 0.798 0.00153 0.00178 48 172 5.3 0.128 2.74 0.332 0.436 1.84 2.02 0.00411 0.0045 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 48 100 3.81 0.127 1.26 0.33 0.435 1.33 1.37 0.00297 0.00306 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 48 6472.075 0.005 0.132 2.22 4.86 0.336 0.442 1.46 1.82 0.00327 0.00405 ! 
Validation 48 6472.075 0.005 0.132 4.69 7.33 0.337 0.443 2.33 2.64 0.0052 0.00589 Wall time: 6472.074947463814 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 49 100 4.22 0.126 1.7 0.327 0.432 1.43 1.59 0.00318 0.00355 49 172 5.05 0.122 2.61 0.324 0.425 1.78 1.97 0.00398 0.00439 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 49 100 3.78 0.126 1.26 0.328 0.432 1.32 1.37 0.00295 0.00305 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 49 6605.571 0.005 0.125 2.48 4.97 0.327 0.43 1.51 1.92 0.00336 0.00428 ! Validation 49 6605.571 0.005 0.13 1.52 4.12 0.334 0.439 1.3 1.5 0.00291 0.00336 Wall time: 6605.570992935915 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 50 100 3.03 0.116 0.718 0.316 0.414 0.844 1.03 0.00188 0.0023 50 172 2.6 0.111 0.38 0.309 0.406 0.629 0.751 0.0014 0.00168 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 50 100 3.32 0.108 1.15 0.304 0.401 1.27 1.31 0.00283 0.00292 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 50 6740.141 0.005 0.12 1.9 4.31 0.321 0.423 1.26 1.68 0.0028 0.00375 ! Validation 50 6740.141 0.005 0.114 2.09 4.38 0.314 0.412 1.61 1.76 0.0036 0.00394 Wall time: 6740.141573516652 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 51 100 3.1 0.113 0.834 0.31 0.41 0.957 1.11 0.00214 0.00248 51 172 2.87 0.116 0.551 0.315 0.415 0.794 0.904 0.00177 0.00202 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 51 100 3.19 0.112 0.944 0.311 0.408 1.15 1.18 0.00256 0.00264 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 51 6873.508 0.005 0.113 2.54 4.79 0.311 0.409 1.59 1.94 0.00354 0.00433 ! Validation 51 6873.508 0.005 0.117 1.01 3.34 0.318 0.416 0.996 1.22 0.00222 0.00273 Wall time: 6873.508856366854 ! Best model 51 3.337 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 52 100 9.62 0.104 7.54 0.299 0.393 3.3 3.35 0.00736 0.00747 52 172 6.51 0.114 4.24 0.314 0.411 2.42 2.51 0.00541 0.0056 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 52 100 3.24 0.112 0.992 0.31 0.408 1.18 1.21 0.00263 0.00271 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 52 7006.917 0.005 0.109 2.47 4.65 0.306 0.402 1.58 1.92 0.00353 0.00427 ! Validation 52 7006.917 0.005 0.116 3.84 6.16 0.318 0.416 2.09 2.39 0.00466 0.00533 Wall time: 7006.916919151787 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 53 100 2.49 0.109 0.319 0.304 0.402 0.569 0.688 0.00127 0.00154 53 172 2.83 0.104 0.744 0.3 0.393 0.872 1.05 0.00195 0.00235 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 53 100 2.11 0.0994 0.119 0.292 0.384 0.3 0.42 0.00067 0.000937 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 53 7140.311 0.005 0.104 1.6 3.69 0.299 0.394 1.22 1.54 0.00273 0.00344 ! Validation 53 7140.311 0.005 0.105 0.718 2.81 0.301 0.394 0.838 1.03 0.00187 0.00231 Wall time: 7140.311797623988 ! 
Best model 53 2.809 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 54 100 5.54 0.0969 3.6 0.289 0.379 2.22 2.31 0.00496 0.00516 54 172 2.98 0.095 1.08 0.286 0.375 1.14 1.27 0.00254 0.00283 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 54 100 3.96 0.0928 2.11 0.282 0.371 1.75 1.77 0.00391 0.00395 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 54 7273.711 0.005 0.0984 1.8 3.77 0.291 0.382 1.3 1.63 0.0029 0.00365 ! Validation 54 7273.711 0.005 0.0985 3.51 5.48 0.292 0.382 2.16 2.28 0.00482 0.0051 Wall time: 7273.711779102683 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 55 100 3.06 0.0938 1.18 0.284 0.373 1.17 1.32 0.00261 0.00296 55 172 2.12 0.0892 0.332 0.277 0.364 0.505 0.702 0.00113 0.00157 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 55 100 1.87 0.087 0.135 0.273 0.359 0.368 0.447 0.000821 0.000998 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 55 7413.449 0.005 0.0958 1.86 3.78 0.287 0.377 1.34 1.66 0.00299 0.00371 ! Validation 55 7413.449 0.005 0.093 0.747 2.61 0.284 0.372 0.859 1.05 0.00192 0.00235 Wall time: 7413.449038802646 ! Best model 55 2.606 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 56 100 2.13 0.0888 0.358 0.277 0.363 0.613 0.729 0.00137 0.00163 56 172 4.61 0.086 2.89 0.271 0.357 1.97 2.07 0.0044 0.00462 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 56 100 2.88 0.0884 1.11 0.276 0.362 1.25 1.28 0.0028 0.00287 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 56 7546.867 0.005 0.0898 1.64 3.43 0.278 0.365 1.23 1.56 0.00274 0.00348 ! Validation 56 7546.867 0.005 0.0919 0.802 2.64 0.282 0.369 0.897 1.09 0.002 0.00244 Wall time: 7546.867791931611 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 57 100 5.37 0.0931 3.51 0.283 0.372 2.2 2.28 0.00492 0.0051 57 172 2.76 0.0831 1.1 0.268 0.351 1.18 1.28 0.00262 0.00285 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 57 100 2.35 0.0819 0.709 0.265 0.349 0.996 1.03 0.00222 0.00229 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 57 7682.458 0.005 0.0876 1.76 3.51 0.274 0.361 1.32 1.61 0.00295 0.0036 ! Validation 57 7682.458 0.005 0.0874 0.675 2.42 0.275 0.36 0.831 1 0.00186 0.00223 Wall time: 7682.458373598754 ! Best model 57 2.422 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 58 100 2.04 0.0898 0.241 0.276 0.365 0.479 0.599 0.00107 0.00134 58 172 2.58 0.0904 0.773 0.276 0.366 0.923 1.07 0.00206 0.00239 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 58 100 3.8 0.084 2.12 0.268 0.353 1.76 1.78 0.00392 0.00396 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 58 7815.884 0.005 0.0866 2.07 3.8 0.272 0.358 1.42 1.75 0.00317 0.00391 ! 
Validation 58 7815.884 0.005 0.089 3.42 5.2 0.277 0.363 2.14 2.25 0.00478 0.00503 Wall time: 7815.884647674859 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 59 100 3.41 0.0766 1.88 0.257 0.337 1.53 1.67 0.00342 0.00373 59 172 2.26 0.0798 0.663 0.261 0.344 0.857 0.992 0.00191 0.00221 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 59 100 3.26 0.0787 1.69 0.259 0.342 1.57 1.58 0.0035 0.00354 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 59 7949.305 0.005 0.0818 1.45 3.09 0.265 0.348 1.16 1.47 0.0026 0.00328 ! Validation 59 7949.305 0.005 0.0836 1.62 3.29 0.268 0.352 1.43 1.55 0.00319 0.00346 Wall time: 7949.305798141751 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 60 100 3.41 0.078 1.85 0.258 0.34 1.46 1.66 0.00327 0.0037 60 172 1.89 0.079 0.314 0.26 0.342 0.534 0.682 0.00119 0.00152 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 60 100 1.53 0.0731 0.0629 0.25 0.329 0.228 0.306 0.000508 0.000682 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 60 8092.230 0.005 0.0777 1.32 2.87 0.258 0.34 1.14 1.4 0.00255 0.00313 ! Validation 60 8092.230 0.005 0.0786 0.59 2.16 0.26 0.342 0.74 0.936 0.00165 0.00209 Wall time: 8092.230307533871 ! Best model 60 2.163 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 61 100 1.86 0.0736 0.389 0.251 0.331 0.619 0.76 0.00138 0.0017 61 172 1.75 0.0674 0.401 0.241 0.316 0.659 0.771 0.00147 0.00172 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 61 100 1.93 0.0682 0.561 0.241 0.318 0.88 0.913 0.00196 0.00204 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 61 8225.652 0.005 0.0736 1.05 2.53 0.251 0.331 1.02 1.25 0.00227 0.00279 ! Validation 61 8225.652 0.005 0.0729 1.46 2.92 0.25 0.329 1.29 1.47 0.00289 0.00329 Wall time: 8225.652359688655 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 62 100 2.01 0.0722 0.57 0.249 0.327 0.817 0.92 0.00182 0.00205 62 172 1.88 0.0659 0.561 0.238 0.313 0.692 0.912 0.00155 0.00204 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 62 100 3.95 0.0699 2.55 0.244 0.322 1.93 1.95 0.00432 0.00435 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 62 8359.062 0.005 0.0706 1.36 2.77 0.245 0.324 1.17 1.42 0.00261 0.00317 ! Validation 62 8359.062 0.005 0.0736 1.99 3.46 0.251 0.331 1.47 1.72 0.00327 0.00383 Wall time: 8359.062874734867 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 63 100 2.28 0.069 0.905 0.243 0.32 1.03 1.16 0.00229 0.00259 63 172 3.12 0.066 1.8 0.238 0.313 1.54 1.63 0.00343 0.00365 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 63 100 2.48 0.063 1.22 0.232 0.306 1.33 1.35 0.00296 0.003 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 63 8492.471 0.005 0.0702 1.35 2.75 0.245 0.323 1.13 1.42 0.00253 0.00316 ! 
Validation 63 8492.471 0.005 0.0684 2.32 3.68 0.242 0.319 1.72 1.85 0.00385 0.00414 Wall time: 8492.470945549663 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 64 100 4.65 0.0655 3.34 0.237 0.312 2.17 2.23 0.00484 0.00497 64 172 1.57 0.0712 0.141 0.246 0.325 0.374 0.458 0.000835 0.00102 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 64 100 1.27 0.0622 0.0281 0.231 0.304 0.182 0.204 0.000406 0.000456 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 64 8626.152 0.005 0.0666 1.34 2.68 0.238 0.315 1.12 1.41 0.0025 0.00315 ! Validation 64 8626.152 0.005 0.0675 0.439 1.79 0.241 0.316 0.626 0.807 0.0014 0.0018 Wall time: 8626.152649638709 ! Best model 64 1.788 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 65 100 2.64 0.0612 1.41 0.228 0.302 1.35 1.45 0.00301 0.00323 65 172 1.71 0.0595 0.523 0.226 0.297 0.783 0.881 0.00175 0.00197 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 65 100 1.2 0.0583 0.0335 0.223 0.294 0.176 0.223 0.000394 0.000498 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 65 8760.512 0.005 0.0632 0.976 2.24 0.232 0.306 0.985 1.2 0.0022 0.00269 ! Validation 65 8760.512 0.005 0.0633 0.302 1.57 0.233 0.307 0.533 0.67 0.00119 0.0015 Wall time: 8760.512395174708 ! Best model 65 1.569 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 66 100 1.49 0.0606 0.273 0.227 0.3 0.509 0.636 0.00114 0.00142 66 172 1.57 0.0639 0.29 0.234 0.308 0.48 0.656 0.00107 0.00146 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 66 100 1.8 0.0613 0.574 0.229 0.302 0.902 0.923 0.00201 0.00206 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 66 8894.570 0.005 0.0611 1.14 2.37 0.228 0.301 1.04 1.3 0.00231 0.00291 ! Validation 66 8894.570 0.005 0.0642 0.545 1.83 0.234 0.309 0.753 0.9 0.00168 0.00201 Wall time: 8894.570656995755 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 67 100 1.36 0.058 0.199 0.222 0.293 0.435 0.544 0.000971 0.00121 67 172 1.46 0.0565 0.33 0.218 0.29 0.537 0.7 0.0012 0.00156 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 67 100 2.65 0.0528 1.59 0.213 0.28 1.52 1.54 0.0034 0.00343 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 67 9027.984 0.005 0.0585 0.951 2.12 0.223 0.295 0.966 1.19 0.00216 0.00265 ! Validation 67 9027.984 0.005 0.0574 1.18 2.33 0.222 0.292 1.21 1.33 0.0027 0.00296 Wall time: 9027.984427049756 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 68 100 1.88 0.0599 0.678 0.225 0.298 0.913 1 0.00204 0.00224 68 172 1.43 0.0548 0.339 0.214 0.285 0.611 0.709 0.00136 0.00158 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 68 100 1.06 0.0508 0.0448 0.208 0.275 0.233 0.258 0.00052 0.000576 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 68 9164.032 0.005 0.0585 1.29 2.46 0.223 0.295 1.08 1.38 0.00242 0.00309 ! Validation 68 9164.032 0.005 0.0558 0.27 1.39 0.218 0.288 0.503 0.633 0.00112 0.00141 Wall time: 9164.03241189383 ! 
Best model 68 1.386 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 69 100 2.39 0.0553 1.29 0.217 0.287 1.27 1.38 0.00283 0.00308 69 172 4.78 0.0611 3.56 0.227 0.301 2.06 2.3 0.0046 0.00513 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 69 100 6.08 0.058 4.92 0.222 0.293 2.7 2.7 0.00602 0.00603 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 69 9298.692 0.005 0.0533 0.978 2.04 0.212 0.281 0.938 1.2 0.00209 0.00269 ! Validation 69 9298.692 0.005 0.0609 6.22 7.44 0.227 0.301 2.96 3.04 0.0066 0.00678 Wall time: 9298.692237508949 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 70 100 2.27 0.0502 1.27 0.205 0.273 1.28 1.37 0.00285 0.00306 70 172 6.12 0.0617 4.89 0.228 0.303 2.56 2.69 0.00572 0.00601 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 70 100 3.7 0.0582 2.54 0.222 0.294 1.93 1.94 0.00432 0.00434 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 70 9440.230 0.005 0.0523 0.931 1.98 0.21 0.279 0.899 1.17 0.00201 0.00262 ! Validation 70 9440.230 0.005 0.0595 3.77 4.96 0.225 0.297 2.21 2.37 0.00494 0.00528 Wall time: 9440.230111047626 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 71 100 1.33 0.0481 0.369 0.202 0.267 0.622 0.74 0.00139 0.00165 71 172 1.35 0.0477 0.395 0.201 0.266 0.639 0.766 0.00143 0.00171 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 71 100 0.961 0.0445 0.07 0.195 0.257 0.279 0.322 0.000624 0.00072 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 71 9573.632 0.005 0.0497 0.649 1.64 0.205 0.272 0.788 0.982 0.00176 0.00219 ! Validation 71 9573.632 0.005 0.0498 0.487 1.48 0.205 0.272 0.692 0.85 0.00154 0.0019 Wall time: 9573.632493176963 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 72 100 1.61 0.0475 0.662 0.2 0.266 0.88 0.991 0.00196 0.00221 72 172 1.41 0.0499 0.412 0.205 0.272 0.609 0.783 0.00136 0.00175 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 72 100 1.96 0.0439 1.08 0.194 0.255 1.26 1.26 0.00281 0.00282 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 72 9707.052 0.005 0.0479 0.787 1.74 0.201 0.267 0.864 1.08 0.00193 0.00241 ! Validation 72 9707.052 0.005 0.0486 0.856 1.83 0.203 0.269 0.903 1.13 0.00202 0.00252 Wall time: 9707.05244614277 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 73 100 1.32 0.0467 0.39 0.198 0.263 0.655 0.761 0.00146 0.0017 73 172 1.8 0.0467 0.871 0.199 0.263 1.02 1.14 0.00229 0.00254 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 73 100 1.89 0.0437 1.01 0.193 0.255 1.22 1.23 0.00271 0.00274 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 73 9849.600 0.005 0.0465 0.798 1.73 0.198 0.263 0.887 1.09 0.00198 0.00243 ! 
Validation 73 9849.600 0.005 0.0482 0.679 1.64 0.202 0.268 0.846 1 0.00189 0.00224 Wall time: 9849.599923034664 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 74 100 1.99 0.0447 1.1 0.194 0.258 1.19 1.28 0.00265 0.00285 74 172 1.51 0.0462 0.589 0.197 0.262 0.819 0.935 0.00183 0.00209 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 74 100 1.45 0.0401 0.651 0.185 0.244 0.973 0.983 0.00217 0.00219 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 74 9982.996 0.005 0.0448 0.653 1.55 0.194 0.258 0.788 0.985 0.00176 0.0022 ! Validation 74 9982.996 0.005 0.0453 1.14 2.05 0.196 0.259 1.2 1.3 0.00267 0.00291 Wall time: 9982.996692419983 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 75 100 1.67 0.0414 0.84 0.186 0.248 1.04 1.12 0.00232 0.00249 75 172 1.1 0.0431 0.234 0.19 0.253 0.462 0.59 0.00103 0.00132 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 75 100 1.44 0.0402 0.636 0.185 0.244 0.96 0.972 0.00214 0.00217 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 75 10116.400 0.005 0.0452 0.893 1.8 0.195 0.259 0.908 1.15 0.00203 0.00257 ! Validation 75 10116.400 0.005 0.0449 0.582 1.48 0.195 0.258 0.816 0.929 0.00182 0.00207 Wall time: 10116.400334249716 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 76 100 1.28 0.0442 0.394 0.192 0.256 0.676 0.764 0.00151 0.00171 76 172 1.02 0.0407 0.21 0.185 0.246 0.442 0.558 0.000988 0.00125 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 76 100 0.917 0.038 0.158 0.18 0.237 0.461 0.484 0.00103 0.00108 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 76 10249.781 0.005 0.0441 0.783 1.66 0.192 0.256 0.853 1.08 0.0019 0.00241 ! Validation 76 10249.781 0.005 0.0431 0.352 1.21 0.191 0.253 0.604 0.722 0.00135 0.00161 Wall time: 10249.781302021816 ! Best model 76 1.213 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 77 100 2.19 0.0435 1.31 0.19 0.254 1.28 1.4 0.00286 0.00312 77 172 0.99 0.0408 0.173 0.185 0.246 0.373 0.507 0.000832 0.00113 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 77 100 0.795 0.0375 0.045 0.179 0.236 0.224 0.258 0.000499 0.000577 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 77 10383.184 0.005 0.0425 0.7 1.55 0.188 0.251 0.806 1.02 0.0018 0.00228 ! Validation 77 10383.184 0.005 0.0422 0.137 0.981 0.189 0.25 0.366 0.451 0.000816 0.00101 Wall time: 10383.184403527994 ! Best model 77 0.981 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 78 100 1.03 0.0379 0.268 0.179 0.237 0.545 0.631 0.00122 0.00141 78 172 0.982 0.0407 0.167 0.185 0.246 0.408 0.499 0.000912 0.00111 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 78 100 1.09 0.0381 0.331 0.179 0.238 0.688 0.701 0.00153 0.00156 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 78 10521.846 0.005 0.041 0.652 1.47 0.185 0.247 0.81 0.984 0.00181 0.0022 ! 
Validation 78 10521.846 0.005 0.0421 0.619 1.46 0.188 0.25 0.855 0.959 0.00191 0.00214 Wall time: 10521.846860549878 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 79 100 1.32 0.0394 0.536 0.182 0.242 0.815 0.892 0.00182 0.00199 79 172 1.98 0.0419 1.14 0.187 0.25 1.17 1.3 0.00262 0.0029 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 79 100 1.03 0.0392 0.244 0.181 0.241 0.585 0.602 0.00131 0.00134 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 79 10655.236 0.005 0.0397 0.629 1.42 0.182 0.243 0.768 0.966 0.00171 0.00216 ! Validation 79 10655.236 0.005 0.0424 0.943 1.79 0.189 0.251 1.03 1.18 0.00231 0.00264 Wall time: 10655.23648794368 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 80 100 1.52 0.0418 0.683 0.185 0.249 0.881 1.01 0.00197 0.00225 80 172 0.927 0.0386 0.155 0.18 0.239 0.399 0.48 0.000891 0.00107 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 80 100 0.842 0.0375 0.0915 0.178 0.236 0.355 0.369 0.000791 0.000823 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 80 10788.624 0.005 0.0405 0.796 1.61 0.184 0.245 0.854 1.09 0.00191 0.00243 ! Validation 80 10788.624 0.005 0.0418 0.453 1.29 0.188 0.249 0.664 0.82 0.00148 0.00183 Wall time: 10788.624211200979 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 81 100 0.909 0.0373 0.163 0.177 0.235 0.403 0.492 0.0009 0.0011 81 172 1.51 0.035 0.805 0.172 0.228 0.993 1.09 0.00222 0.00244 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 81 100 1.23 0.0358 0.514 0.174 0.23 0.865 0.873 0.00193 0.00195 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 81 10922.005 0.005 0.0385 0.521 1.29 0.179 0.239 0.7 0.879 0.00156 0.00196 ! Validation 81 10922.005 0.005 0.0393 0.55 1.34 0.182 0.242 0.78 0.904 0.00174 0.00202 Wall time: 10922.005267554894 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 82 100 1.2 0.0391 0.419 0.18 0.241 0.69 0.788 0.00154 0.00176 82 172 1.27 0.0383 0.509 0.179 0.238 0.698 0.87 0.00156 0.00194 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 82 100 0.804 0.0358 0.0883 0.173 0.23 0.348 0.362 0.000776 0.000808 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 82 11057.233 0.005 0.0382 0.689 1.45 0.179 0.238 0.789 1.01 0.00176 0.00226 ! Validation 82 11057.233 0.005 0.0395 0.374 1.16 0.182 0.242 0.596 0.746 0.00133 0.00166 Wall time: 11057.232909813989 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 83 100 1.59 0.0368 0.854 0.175 0.234 1.04 1.13 0.00232 0.00251 83 172 2.07 0.0366 1.34 0.175 0.233 1.28 1.41 0.00285 0.00315 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 83 100 1.92 0.0341 1.24 0.169 0.225 1.35 1.36 0.00302 0.00303 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 83 11190.711 0.005 0.0371 0.593 1.33 0.176 0.235 0.757 0.937 0.00169 0.00209 ! 
Validation 83 11190.711 0.005 0.0376 1.49 2.24 0.177 0.236 1.43 1.49 0.00319 0.00332 Wall time: 11190.711572119966 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 84 100 1.99 0.035 1.29 0.171 0.228 1.28 1.38 0.00286 0.00308 84 172 0.835 0.0348 0.138 0.17 0.227 0.327 0.453 0.000729 0.00101 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 84 100 1.26 0.0331 0.594 0.166 0.222 0.932 0.939 0.00208 0.0021 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 84 11330.092 0.005 0.0363 0.578 1.3 0.174 0.232 0.74 0.927 0.00165 0.00207 ! Validation 84 11330.092 0.005 0.0369 0.556 1.29 0.176 0.234 0.819 0.909 0.00183 0.00203 Wall time: 11330.092176978942 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 85 100 1.09 0.036 0.37 0.173 0.231 0.62 0.741 0.00138 0.00165 85 172 1.53 0.0363 0.799 0.173 0.232 1 1.09 0.00224 0.00243 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 85 100 1.94 0.033 1.28 0.166 0.221 1.38 1.38 0.00307 0.00308 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 85 11465.497 0.005 0.0352 0.509 1.21 0.171 0.229 0.694 0.869 0.00155 0.00194 ! Validation 85 11465.497 0.005 0.0362 1.29 2.01 0.174 0.232 1.3 1.38 0.00291 0.00308 Wall time: 11465.497425175738 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 86 100 28.4 0.992 8.58 0.893 1.21 2.97 3.57 0.00662 0.00796 86 172 22.6 0.966 3.28 0.883 1.2 1.65 2.21 0.00368 0.00492 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 86 100 21.9 1.02 1.48 0.902 1.23 1.41 1.48 0.00314 0.00331 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 86 11601.177 0.005 0.746 162 176 0.718 1.05 6.7 15.5 0.015 0.0346 ! Validation 86 11601.177 0.005 0.97 4.09 23.5 0.884 1.2 2.08 2.46 0.00465 0.0055 Wall time: 11601.176998383831 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 87 100 20.9 0.918 2.52 0.861 1.17 1.62 1.94 0.00362 0.00432 87 172 19.5 0.848 2.57 0.824 1.12 1.58 1.95 0.00354 0.00436 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 87 100 19.7 0.91 1.51 0.852 1.16 1.41 1.5 0.00315 0.00334 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 87 11734.618 0.005 0.918 3.11 21.5 0.859 1.17 1.69 2.15 0.00377 0.0048 ! Validation 87 11734.618 0.005 0.868 3.19 20.6 0.837 1.14 1.85 2.18 0.00412 0.00486 Wall time: 11734.61861063866 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 88 100 15.8 0.738 1.03 0.774 1.05 1.03 1.24 0.00229 0.00276 88 172 14.8 0.658 1.62 0.729 0.988 1.21 1.55 0.0027 0.00346 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 88 100 14.6 0.686 0.902 0.735 1.01 0.922 1.16 0.00206 0.00258 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 88 11867.956 0.005 0.76 2.23 17.4 0.783 1.06 1.45 1.82 0.00323 0.00406 ! 
Validation 88 11867.956 0.005 0.656 2.2 15.3 0.729 0.986 1.48 1.81 0.0033 0.00403 Wall time: 11867.955975049641 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 89 100 12.3 0.525 1.85 0.65 0.883 1.38 1.66 0.00309 0.0037 89 172 10.3 0.457 1.14 0.613 0.824 1.11 1.3 0.00247 0.0029 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 89 100 17.3 0.488 7.57 0.62 0.851 3.28 3.35 0.00733 0.00748 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 89 12001.299 0.005 0.54 2.37 13.2 0.662 0.895 1.48 1.87 0.00331 0.00418 ! Validation 89 12001.299 0.005 0.461 7.37 16.6 0.617 0.827 3.05 3.31 0.0068 0.00738 Wall time: 12001.299008208793 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 90 100 8.65 0.376 1.13 0.557 0.747 1.02 1.3 0.00229 0.00289 90 172 7.41 0.332 0.764 0.522 0.703 0.838 1.06 0.00187 0.00238 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 90 100 8.01 0.36 0.81 0.53 0.731 0.975 1.1 0.00218 0.00245 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 90 12134.629 0.005 0.39 2.49 10.3 0.566 0.761 1.53 1.92 0.00342 0.00429 ! Validation 90 12134.629 0.005 0.34 1.6 8.39 0.528 0.71 1.23 1.54 0.00274 0.00344 Wall time: 12134.628950037993 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 91 100 8.41 0.306 2.29 0.498 0.674 1.61 1.84 0.00359 0.00412 91 172 6.47 0.278 0.904 0.475 0.643 0.966 1.16 0.00216 0.00259 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 91 100 11.9 0.294 5.99 0.48 0.661 2.95 2.98 0.00658 0.00666 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 91 12267.970 0.005 0.299 2.66 8.65 0.494 0.666 1.6 1.99 0.00357 0.00444 ! Validation 91 12267.970 0.005 0.279 7.13 12.7 0.477 0.643 3.08 3.25 0.00688 0.00726 Wall time: 12267.970538266934 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 92 100 5.96 0.246 1.04 0.447 0.605 0.942 1.24 0.0021 0.00277 92 172 5.23 0.227 0.679 0.43 0.581 0.835 1 0.00186 0.00224 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 92 100 5.46 0.242 0.633 0.435 0.599 0.889 0.97 0.00198 0.00216 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 92 12401.310 0.005 0.251 1.85 6.87 0.452 0.61 1.32 1.66 0.00294 0.0037 ! Validation 92 12401.310 0.005 0.233 1.44 6.1 0.436 0.588 1.23 1.46 0.00274 0.00327 Wall time: 12401.310552724637 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 93 100 5.18 0.217 0.828 0.422 0.568 0.954 1.11 0.00213 0.00248 93 172 6.21 0.201 2.19 0.406 0.546 1.54 1.8 0.00345 0.00402 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 93 100 4.39 0.211 0.171 0.408 0.559 0.475 0.503 0.00106 0.00112 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 93 12534.644 0.005 0.215 1.75 6.05 0.419 0.565 1.29 1.61 0.00288 0.0036 ! 
Validation 93 12534.644 0.005 0.206 0.664 4.78 0.411 0.553 0.804 0.993 0.00179 0.00222 Wall time: 12534.64473291859 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 94 100 4.35 0.192 0.516 0.395 0.533 0.716 0.876 0.0016 0.00195 94 172 5.08 0.185 1.39 0.389 0.524 1.26 1.44 0.0028 0.00321 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 94 100 4.17 0.186 0.454 0.385 0.525 0.768 0.821 0.00171 0.00183 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 94 12668.396 0.005 0.195 1.88 5.78 0.4 0.538 1.34 1.67 0.00298 0.00373 ! Validation 94 12668.396 0.005 0.184 0.599 4.28 0.39 0.523 0.751 0.943 0.00168 0.00211 Wall time: 12668.396642858628 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 95 100 3.78 0.17 0.388 0.374 0.502 0.578 0.759 0.00129 0.00169 95 172 3.57 0.159 0.382 0.364 0.486 0.604 0.753 0.00135 0.00168 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 95 100 3.31 0.163 0.0486 0.363 0.492 0.219 0.269 0.00049 0.0006 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 95 12802.199 0.005 0.173 1.43 4.89 0.378 0.507 1.16 1.46 0.00259 0.00326 ! Validation 95 12802.199 0.005 0.164 0.569 3.85 0.371 0.494 0.738 0.919 0.00165 0.00205 Wall time: 12802.199469550978 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 96 100 5.2 0.146 2.28 0.35 0.465 1.72 1.84 0.00384 0.00411 96 172 3.63 0.149 0.647 0.354 0.47 0.792 0.98 0.00177 0.00219 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 96 100 4.44 0.144 1.57 0.343 0.462 1.51 1.52 0.00337 0.0034 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 96 12935.589 0.005 0.154 1.2 4.28 0.359 0.478 1.07 1.33 0.00239 0.00298 ! Validation 96 12935.589 0.005 0.148 2.68 5.64 0.353 0.468 1.84 2 0.00411 0.00446 Wall time: 12935.589418959804 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 97 100 3.64 0.143 0.773 0.348 0.461 0.912 1.07 0.00204 0.00239 97 172 4.21 0.141 1.39 0.344 0.458 1.24 1.44 0.00276 0.0032 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 97 100 3.77 0.13 1.17 0.329 0.439 1.31 1.32 0.00292 0.00294 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 97 13069.004 0.005 0.142 1.25 4.09 0.346 0.46 1.07 1.36 0.00238 0.00303 ! Validation 97 13069.004 0.005 0.136 1.04 3.75 0.339 0.449 1.07 1.24 0.00239 0.00277 Wall time: 13069.004409119021 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 98 100 3.14 0.123 0.681 0.324 0.428 0.82 1.01 0.00183 0.00224 98 172 4 0.131 1.37 0.332 0.442 1.3 1.43 0.0029 0.00319 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 98 100 6.79 0.124 4.3 0.321 0.43 2.52 2.53 0.00563 0.00564 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 98 13202.373 0.005 0.129 1.13 3.72 0.331 0.438 1.04 1.29 0.00232 0.00289 ! 
Validation 98 13202.373 0.005 0.128 4.33 6.9 0.33 0.436 2.45 2.54 0.00547 0.00566 Wall time: 13202.373358190991 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 99 100 2.77 0.122 0.329 0.321 0.426 0.571 0.699 0.00127 0.00156 99 172 3.13 0.121 0.711 0.32 0.424 0.854 1.03 0.00191 0.00229 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 99 100 4.33 0.119 1.95 0.314 0.42 1.7 1.7 0.0038 0.0038 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 99 13335.820 0.005 0.124 1.35 3.82 0.323 0.428 1.17 1.42 0.0026 0.00316 ! Validation 99 13335.820 0.005 0.123 3.33 5.79 0.323 0.427 2.06 2.22 0.0046 0.00497 Wall time: 13335.820393180009 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 100 100 2.72 0.115 0.409 0.314 0.414 0.587 0.779 0.00131 0.00174 100 172 3.7 0.11 1.49 0.306 0.405 1.34 1.49 0.00299 0.00332 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 100 100 3.56 0.106 1.45 0.298 0.396 1.46 1.47 0.00327 0.00327 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 100 13469.085 0.005 0.115 0.67 2.98 0.312 0.414 0.802 0.997 0.00179 0.00223 ! Validation 100 13469.085 0.005 0.112 1.42 3.66 0.308 0.407 1.33 1.45 0.00296 0.00324 Wall time: 13469.085340112913 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 101 100 4.56 0.11 2.35 0.306 0.405 1.71 1.87 0.00381 0.00417 101 172 2.67 0.111 0.453 0.305 0.406 0.698 0.82 0.00156 0.00183 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 101 100 2.12 0.102 0.0779 0.293 0.389 0.334 0.34 0.000745 0.000759 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 101 13602.437 0.005 0.109 1.08 3.26 0.304 0.402 1.04 1.26 0.00231 0.00282 ! Validation 101 13602.437 0.005 0.107 0.456 2.6 0.302 0.399 0.659 0.822 0.00147 0.00184 Wall time: 13602.43707511574 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 102 100 4.01 0.102 1.98 0.293 0.389 1.58 1.71 0.00354 0.00382 102 172 2.32 0.0991 0.34 0.289 0.384 0.575 0.71 0.00128 0.00159 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 102 100 2.01 0.0987 0.0365 0.287 0.383 0.222 0.233 0.000495 0.000519 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 102 13736.899 0.005 0.104 1.03 3.1 0.296 0.392 0.998 1.24 0.00223 0.00276 ! Validation 102 13736.899 0.005 0.104 0.565 2.64 0.296 0.392 0.743 0.916 0.00166 0.00204 Wall time: 13736.899556352757 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 103 100 2.2 0.0997 0.207 0.29 0.385 0.444 0.554 0.000991 0.00124 103 172 2.16 0.0936 0.289 0.282 0.373 0.574 0.655 0.00128 0.00146 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 103 100 1.91 0.0918 0.0723 0.278 0.369 0.322 0.328 0.00072 0.000731 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 103 13870.161 0.005 0.0988 0.826 2.8 0.289 0.383 0.9 1.11 0.00201 0.00247 ! 
Validation 103 13870.161 0.005 0.0967 0.442 2.38 0.286 0.379 0.649 0.81 0.00145 0.00181 Wall time: 13870.160942353774 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 104 100 2.32 0.101 0.294 0.293 0.388 0.546 0.661 0.00122 0.00148 104 172 2.7 0.0926 0.843 0.28 0.371 0.99 1.12 0.00221 0.0025 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 104 100 2.45 0.0884 0.684 0.273 0.362 1.01 1.01 0.00225 0.00225 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 104 14004.828 0.005 0.0985 1.2 3.17 0.288 0.382 1.02 1.33 0.00228 0.00298 ! Validation 104 14004.828 0.005 0.094 0.693 2.57 0.282 0.373 0.873 1.01 0.00195 0.00226 Wall time: 14004.828397464007 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 105 100 3.31 0.0908 1.49 0.277 0.367 1.35 1.49 0.00301 0.00332 105 172 2.17 0.0927 0.32 0.279 0.371 0.553 0.69 0.00123 0.00154 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 105 100 2.18 0.0874 0.43 0.27 0.36 0.796 0.799 0.00178 0.00178 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 105 14138.091 0.005 0.0915 0.966 2.8 0.277 0.369 0.975 1.2 0.00218 0.00267 ! Validation 105 14138.091 0.005 0.092 1.4 3.24 0.279 0.37 1.24 1.44 0.00277 0.00322 Wall time: 14138.09117127303 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 106 100 1.96 0.0847 0.267 0.267 0.355 0.521 0.63 0.00116 0.00141 106 172 1.94 0.0798 0.349 0.26 0.344 0.568 0.719 0.00127 0.00161 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 106 100 1.6 0.0798 0.00476 0.259 0.344 0.0786 0.0841 0.000175 0.000188 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 106 14271.297 0.005 0.0864 0.69 2.42 0.269 0.358 0.794 1.01 0.00177 0.00226 ! Validation 106 14271.297 0.005 0.0847 0.414 2.11 0.267 0.355 0.623 0.784 0.00139 0.00175 Wall time: 14271.297518379986 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 107 100 2.11 0.0855 0.395 0.266 0.356 0.634 0.766 0.00142 0.00171 107 172 2.41 0.082 0.771 0.262 0.349 0.864 1.07 0.00193 0.00239 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 107 100 1.56 0.077 0.0157 0.255 0.338 0.141 0.152 0.000315 0.00034 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 107 14404.514 0.005 0.0832 1 2.66 0.264 0.351 0.968 1.22 0.00216 0.00272 ! Validation 107 14404.514 0.005 0.0818 0.193 1.83 0.263 0.348 0.43 0.536 0.000959 0.0012 Wall time: 14404.514668292832 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 108 100 2.84 0.0823 1.19 0.264 0.349 1.21 1.33 0.0027 0.00297 108 172 4.4 0.0771 2.86 0.253 0.338 1.98 2.06 0.00442 0.0046 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 108 100 2.03 0.0731 0.572 0.247 0.329 0.918 0.922 0.00205 0.00206 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 108 14537.740 0.005 0.0804 0.937 2.54 0.26 0.345 0.944 1.18 0.00211 0.00263 ! 
Validation 108 14537.740 0.005 0.0782 0.888 2.45 0.256 0.341 1.01 1.15 0.00225 0.00256 Wall time: 14537.739978459664 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 109 100 1.89 0.081 0.272 0.261 0.347 0.462 0.635 0.00103 0.00142 109 172 2.23 0.0719 0.789 0.244 0.327 0.923 1.08 0.00206 0.00242 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 109 100 1.58 0.0682 0.219 0.24 0.318 0.564 0.57 0.00126 0.00127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 109 14676.198 0.005 0.0778 0.885 2.44 0.255 0.34 0.918 1.15 0.00205 0.00256 ! Validation 109 14676.198 0.005 0.0739 0.331 1.81 0.249 0.331 0.582 0.701 0.0013 0.00157 Wall time: 14676.198431443889 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 110 100 4.38 0.0725 2.93 0.245 0.328 1.98 2.08 0.00441 0.00465 110 172 3.78 0.0689 2.4 0.24 0.32 1.81 1.89 0.00404 0.00421 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 110 100 4.11 0.0665 2.78 0.235 0.314 2.03 2.03 0.00453 0.00454 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 110 14809.421 0.005 0.0716 0.773 2.2 0.245 0.326 0.86 1.07 0.00192 0.00239 ! Validation 110 14809.421 0.005 0.0711 3.51 4.93 0.244 0.325 2.21 2.28 0.00493 0.00509 Wall time: 14809.421112607699 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 111 100 2.6 0.0683 1.23 0.239 0.318 1.23 1.35 0.00274 0.00302 111 172 2.84 0.0677 1.48 0.237 0.317 1.41 1.48 0.00314 0.00331 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 111 100 2.18 0.0642 0.893 0.232 0.309 1.15 1.15 0.00256 0.00257 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 111 14942.639 0.005 0.0692 0.849 2.23 0.24 0.321 0.916 1.12 0.00204 0.00251 ! Validation 111 14942.639 0.005 0.0687 0.833 2.21 0.24 0.319 0.927 1.11 0.00207 0.00248 Wall time: 14942.639031591825 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 112 100 2 0.0648 0.705 0.231 0.31 0.909 1.02 0.00203 0.00228 112 172 1.47 0.0622 0.222 0.228 0.304 0.462 0.574 0.00103 0.00128 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 112 100 1.54 0.058 0.378 0.22 0.293 0.743 0.749 0.00166 0.00167 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 112 15075.850 0.005 0.0648 0.72 2.02 0.232 0.31 0.823 1.03 0.00184 0.00231 ! Validation 112 15075.850 0.005 0.0632 0.636 1.9 0.23 0.306 0.842 0.972 0.00188 0.00217 Wall time: 15075.850061480887 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 113 100 1.87 0.0646 0.575 0.231 0.31 0.852 0.924 0.0019 0.00206 113 172 2.77 0.0626 1.51 0.227 0.305 1.4 1.5 0.00313 0.00335 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 113 100 1.1 0.0545 0.0107 0.213 0.284 0.101 0.126 0.000225 0.000282 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 113 15209.218 0.005 0.0639 0.861 2.14 0.231 0.308 0.887 1.13 0.00198 0.00252 ! 
Validation 113 15209.218 0.005 0.0601 0.343 1.54 0.224 0.299 0.581 0.714 0.0013 0.00159 Wall time: 15209.218684294727 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 114 100 1.42 0.059 0.241 0.221 0.296 0.453 0.598 0.00101 0.00133 114 172 1.97 0.056 0.851 0.216 0.288 0.99 1.12 0.00221 0.00251 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 114 100 1.57 0.0532 0.503 0.211 0.281 0.861 0.864 0.00192 0.00193 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 114 15342.430 0.005 0.0593 0.799 1.98 0.222 0.297 0.878 1.09 0.00196 0.00243 ! Validation 114 15342.430 0.005 0.0587 0.706 1.88 0.221 0.295 0.886 1.02 0.00198 0.00229 Wall time: 15342.430647640023 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 115 100 1.31 0.0521 0.269 0.208 0.278 0.487 0.632 0.00109 0.00141 115 172 1.45 0.0507 0.438 0.206 0.274 0.656 0.806 0.00146 0.0018 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 115 100 1.37 0.0501 0.374 0.204 0.273 0.739 0.745 0.00165 0.00166 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 115 15479.801 0.005 0.0556 0.654 1.77 0.215 0.287 0.779 0.986 0.00174 0.0022 ! Validation 115 15479.801 0.005 0.0548 0.561 1.66 0.213 0.285 0.752 0.913 0.00168 0.00204 Wall time: 15479.801544987597 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 116 100 2.07 0.0517 1.04 0.207 0.277 1.19 1.24 0.00265 0.00277 116 172 3.16 0.0539 2.08 0.211 0.283 1.68 1.76 0.00376 0.00392 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 116 100 1.57 0.0508 0.559 0.206 0.275 0.906 0.911 0.00202 0.00203 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 116 15613.037 0.005 0.053 0.727 1.79 0.21 0.281 0.825 1.04 0.00184 0.00232 ! Validation 116 15613.037 0.005 0.0551 0.964 2.07 0.214 0.286 1.05 1.2 0.00234 0.00267 Wall time: 15613.036955744028 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 117 100 1.51 0.0486 0.536 0.2 0.268 0.808 0.892 0.0018 0.00199 117 172 1.25 0.0479 0.295 0.2 0.267 0.543 0.662 0.00121 0.00148 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 117 100 1.29 0.0441 0.41 0.192 0.256 0.776 0.78 0.00173 0.00174 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 117 15746.278 0.005 0.0506 0.581 1.59 0.205 0.274 0.752 0.929 0.00168 0.00207 ! Validation 117 15746.278 0.005 0.0494 0.367 1.36 0.203 0.271 0.621 0.738 0.00139 0.00165 Wall time: 15746.278576666024 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 118 100 1.66 0.0479 0.699 0.2 0.267 0.836 1.02 0.00187 0.00227 118 172 1.16 0.0477 0.202 0.199 0.266 0.422 0.547 0.000943 0.00122 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 118 100 1.17 0.0421 0.333 0.187 0.25 0.699 0.703 0.00156 0.00157 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 118 15880.670 0.005 0.0487 0.682 1.66 0.201 0.269 0.806 1.01 0.0018 0.00225 ! 
Validation 118 15880.670 0.005 0.0473 0.249 1.19 0.198 0.265 0.504 0.608 0.00113 0.00136 Wall time: 15880.670425628778 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 119 100 1.61 0.0453 0.702 0.194 0.259 0.891 1.02 0.00199 0.00228 119 172 2.14 0.0445 1.25 0.191 0.257 1.26 1.36 0.0028 0.00304 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 119 100 0.803 0.0399 0.00608 0.182 0.243 0.0932 0.095 0.000208 0.000212 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 119 16016.523 0.005 0.0464 0.638 1.57 0.196 0.263 0.785 0.973 0.00175 0.00217 ! Validation 119 16016.523 0.005 0.045 0.225 1.13 0.194 0.259 0.461 0.578 0.00103 0.00129 Wall time: 16016.522906570695 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 120 100 1.76 0.045 0.864 0.192 0.258 1.02 1.13 0.00228 0.00253 120 172 1.7 0.0441 0.818 0.19 0.256 0.957 1.1 0.00214 0.00246 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 120 100 1.68 0.038 0.924 0.178 0.237 1.17 1.17 0.00261 0.00261 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 120 16149.754 0.005 0.0438 0.551 1.43 0.191 0.255 0.734 0.904 0.00164 0.00202 ! Validation 120 16149.754 0.005 0.0431 1.24 2.1 0.19 0.253 1.21 1.35 0.0027 0.00302 Wall time: 16149.754539692774 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 121 100 1.07 0.0456 0.161 0.195 0.26 0.384 0.489 0.000857 0.00109 121 172 2.06 0.0397 1.27 0.181 0.243 1.31 1.37 0.00291 0.00306 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 121 100 1.51 0.0381 0.752 0.178 0.238 1.05 1.06 0.00235 0.00236 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 121 16282.996 0.005 0.0432 0.65 1.51 0.189 0.253 0.788 0.982 0.00176 0.00219 ! Validation 121 16282.996 0.005 0.0422 1.36 2.2 0.187 0.25 1.36 1.42 0.00303 0.00317 Wall time: 16282.996527890675 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 122 100 0.94 0.0424 0.0922 0.187 0.251 0.281 0.37 0.000627 0.000826 122 172 2.12 0.0423 1.28 0.188 0.251 1.25 1.38 0.00278 0.00308 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 122 100 0.849 0.0402 0.0461 0.183 0.244 0.245 0.262 0.000548 0.000584 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 122 16416.218 0.005 0.0409 0.582 1.4 0.184 0.246 0.751 0.929 0.00168 0.00207 ! Validation 122 16416.218 0.005 0.043 0.325 1.19 0.19 0.253 0.591 0.695 0.00132 0.00155 Wall time: 16416.218515557703 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 123 100 1 0.0381 0.241 0.178 0.238 0.482 0.598 0.00108 0.00133 123 172 1.2 0.0359 0.484 0.173 0.231 0.771 0.848 0.00172 0.00189 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 123 100 1.35 0.0343 0.665 0.169 0.225 0.989 0.994 0.00221 0.00222 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 123 16549.438 0.005 0.0402 0.549 1.35 0.183 0.244 0.702 0.903 0.00157 0.00202 ! 
Validation 123 16549.438 0.005 0.0383 1.06 1.83 0.179 0.238 1.11 1.25 0.00247 0.0028 Wall time: 16549.43796354765 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 124 100 1.21 0.0367 0.476 0.175 0.234 0.774 0.841 0.00173 0.00188 124 172 0.926 0.0377 0.172 0.177 0.237 0.405 0.506 0.000905 0.00113 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 124 100 0.728 0.0345 0.0375 0.17 0.226 0.217 0.236 0.000484 0.000526 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 124 16687.616 0.005 0.0377 0.52 1.27 0.177 0.237 0.709 0.878 0.00158 0.00196 ! Validation 124 16687.616 0.005 0.0379 0.204 0.962 0.178 0.237 0.456 0.55 0.00102 0.00123 Wall time: 16687.615941420663 ! Best model 124 0.962 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 125 100 0.785 0.0323 0.14 0.164 0.219 0.357 0.455 0.000798 0.00102 125 172 0.902 0.0349 0.204 0.17 0.228 0.438 0.55 0.000978 0.00123 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 125 100 0.999 0.0322 0.355 0.164 0.219 0.722 0.726 0.00161 0.00162 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 125 16820.829 0.005 0.0365 0.43 1.16 0.174 0.233 0.62 0.799 0.00138 0.00178 ! Validation 125 16820.829 0.005 0.0364 0.514 1.24 0.175 0.232 0.751 0.874 0.00168 0.00195 Wall time: 16820.829830006696 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 126 100 0.864 0.0377 0.11 0.177 0.237 0.323 0.404 0.000722 0.000901 126 172 0.88 0.0346 0.188 0.17 0.227 0.437 0.528 0.000975 0.00118 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 126 100 1.26 0.0336 0.587 0.167 0.223 0.929 0.934 0.00207 0.00208 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 126 16954.035 0.005 0.0359 0.578 1.29 0.173 0.231 0.74 0.926 0.00165 0.00207 ! Validation 126 16954.035 0.005 0.0364 0.781 1.51 0.175 0.233 0.956 1.08 0.00213 0.0024 Wall time: 16954.03529688064 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 127 100 1.35 0.0343 0.66 0.168 0.226 0.884 0.99 0.00197 0.00221 127 172 1.04 0.0339 0.366 0.168 0.224 0.636 0.737 0.00142 0.00165 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 127 100 1.17 0.0318 0.539 0.163 0.217 0.891 0.895 0.00199 0.002 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 127 17087.226 0.005 0.0346 0.447 1.14 0.17 0.227 0.632 0.815 0.00141 0.00182 ! Validation 127 17087.226 0.005 0.0347 0.826 1.52 0.171 0.227 1.03 1.11 0.00229 0.00247 Wall time: 17087.22624330595 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 128 100 1.22 0.036 0.5 0.172 0.231 0.77 0.862 0.00172 0.00192 128 172 1.36 0.0304 0.756 0.16 0.213 0.917 1.06 0.00205 0.00236 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 128 100 1.84 0.0304 1.24 0.159 0.213 1.35 1.35 0.00302 0.00302 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 128 17220.413 0.005 0.033 0.419 1.08 0.166 0.221 0.621 0.789 0.00139 0.00176 ! 
Validation 128 17220.413 0.005 0.0331 1.95 2.61 0.167 0.222 1.66 1.7 0.00371 0.0038 Wall time: 17220.413467144594 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 129 100 1.08 0.0318 0.439 0.163 0.217 0.711 0.807 0.00159 0.0018 129 172 0.842 0.0314 0.215 0.161 0.216 0.427 0.565 0.000953 0.00126 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 129 100 0.724 0.0302 0.119 0.159 0.212 0.41 0.42 0.000915 0.000938 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 129 17353.595 0.005 0.0329 0.496 1.15 0.165 0.221 0.69 0.859 0.00154 0.00192 ! Validation 129 17353.595 0.005 0.0327 0.133 0.786 0.166 0.22 0.353 0.445 0.000789 0.000993 Wall time: 17353.59502317896 ! Best model 129 0.786 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 130 100 1.34 0.0306 0.73 0.161 0.213 0.99 1.04 0.00221 0.00232 130 172 0.798 0.0329 0.139 0.165 0.221 0.371 0.454 0.000829 0.00101 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 130 100 0.592 0.0289 0.0134 0.156 0.207 0.11 0.141 0.000246 0.000315 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 130 17487.000 0.005 0.0322 0.46 1.1 0.163 0.218 0.66 0.826 0.00147 0.00184 ! Validation 130 17487.000 0.005 0.0321 0.1 0.743 0.164 0.218 0.317 0.386 0.000708 0.000861 Wall time: 17487.000008229632 ! Best model 130 0.743 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 131 100 0.956 0.0368 0.221 0.175 0.234 0.398 0.572 0.000889 0.00128 131 172 1.44 0.0325 0.793 0.164 0.22 0.974 1.09 0.00217 0.00242 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 131 100 0.845 0.0299 0.247 0.159 0.211 0.592 0.605 0.00132 0.00135 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 131 17620.218 0.005 0.0321 0.541 1.18 0.163 0.218 0.702 0.896 0.00157 0.002 ! Validation 131 17620.218 0.005 0.0325 0.407 1.06 0.165 0.22 0.659 0.777 0.00147 0.00173 Wall time: 17620.218217948917 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 132 100 1 0.0268 0.464 0.15 0.2 0.749 0.83 0.00167 0.00185 132 172 0.7 0.0283 0.135 0.153 0.205 0.341 0.448 0.000761 0.001 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 132 100 0.654 0.028 0.0934 0.153 0.204 0.363 0.372 0.000811 0.000831 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 132 17766.628 0.005 0.0314 0.453 1.08 0.161 0.216 0.62 0.82 0.00138 0.00183 ! Validation 132 17766.628 0.005 0.031 0.124 0.743 0.161 0.214 0.34 0.428 0.000759 0.000956 Wall time: 17766.62871456798 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 133 100 0.729 0.0291 0.146 0.155 0.208 0.387 0.466 0.000864 0.00104 133 172 1.27 0.0294 0.687 0.156 0.209 0.947 1.01 0.00211 0.00225 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 133 100 0.615 0.0285 0.0452 0.155 0.206 0.243 0.259 0.000542 0.000578 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 133 17901.410 0.005 0.0296 0.334 0.927 0.157 0.21 0.567 0.704 0.00126 0.00157 ! Validation 133 17901.410 0.005 0.0303 0.0978 0.705 0.16 0.212 0.301 0.381 0.000672 0.000851 Wall time: 17901.41048759781 ! 
Best model 133 0.705 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 134 100 1.23 0.0353 0.519 0.171 0.229 0.701 0.878 0.00157 0.00196 134 172 0.757 0.0275 0.207 0.151 0.202 0.473 0.555 0.00105 0.00124 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 134 100 0.591 0.0258 0.0745 0.147 0.196 0.327 0.333 0.000729 0.000742 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 134 18034.589 0.005 0.03 0.504 1.1 0.158 0.211 0.666 0.865 0.00149 0.00193 ! Validation 134 18034.589 0.005 0.0285 0.123 0.693 0.155 0.206 0.342 0.427 0.000764 0.000954 Wall time: 18034.589700032957 ! Best model 134 0.693 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 135 100 1.2 0.0271 0.658 0.15 0.201 0.899 0.988 0.00201 0.00221 135 172 0.726 0.0286 0.154 0.155 0.206 0.402 0.477 0.000898 0.00107 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 135 100 0.84 0.0281 0.278 0.154 0.204 0.626 0.642 0.0014 0.00143 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 135 18167.793 0.005 0.0289 0.451 1.03 0.155 0.207 0.649 0.818 0.00145 0.00183 ! Validation 135 18167.793 0.005 0.03 0.45 1.05 0.159 0.211 0.689 0.818 0.00154 0.00183 Wall time: 18167.792936909012 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 136 100 0.835 0.0284 0.266 0.154 0.205 0.54 0.629 0.0012 0.0014 136 172 0.738 0.028 0.178 0.153 0.204 0.436 0.514 0.000972 0.00115 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 136 100 0.672 0.0253 0.165 0.145 0.194 0.49 0.495 0.00109 0.00111 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 136 18303.398 0.005 0.0283 0.366 0.931 0.153 0.205 0.585 0.737 0.0013 0.00164 ! Validation 136 18303.398 0.005 0.0277 0.126 0.68 0.152 0.203 0.356 0.433 0.000794 0.000967 Wall time: 18303.398350222036 ! Best model 136 0.680 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 137 100 0.925 0.0271 0.383 0.15 0.201 0.676 0.754 0.00151 0.00168 137 172 1.12 0.0267 0.588 0.149 0.199 0.89 0.934 0.00199 0.00208 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 137 100 1.74 0.025 1.24 0.145 0.193 1.35 1.36 0.00302 0.00303 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 137 18436.615 0.005 0.027 0.298 0.837 0.15 0.2 0.527 0.665 0.00118 0.00148 ! Validation 137 18436.615 0.005 0.027 1.32 1.86 0.151 0.2 1.34 1.4 0.00299 0.00312 Wall time: 18436.614893179853 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 138 100 0.637 0.0258 0.12 0.147 0.196 0.346 0.421 0.000773 0.00094 138 172 0.588 0.0257 0.0746 0.147 0.195 0.28 0.333 0.000624 0.000743 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 138 100 0.509 0.0245 0.0196 0.144 0.191 0.148 0.171 0.00033 0.000381 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 138 18569.812 0.005 0.0261 0.331 0.852 0.147 0.197 0.567 0.701 0.00127 0.00156 ! 
Validation 138 18569.812 0.005 0.0263 0.182 0.709 0.149 0.198 0.431 0.52 0.000961 0.00116 Wall time: 18569.812778756022 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 139 100 0.802 0.0275 0.253 0.152 0.202 0.512 0.612 0.00114 0.00137 139 172 1.04 0.0272 0.495 0.15 0.201 0.743 0.857 0.00166 0.00191 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 139 100 0.575 0.0236 0.103 0.141 0.187 0.37 0.391 0.000826 0.000874 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 139 18703.017 0.005 0.0268 0.396 0.932 0.149 0.199 0.612 0.767 0.00137 0.00171 ! Validation 139 18703.017 0.005 0.0259 0.229 0.747 0.147 0.196 0.5 0.583 0.00112 0.0013 Wall time: 18703.017745895777 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 140 100 0.621 0.0237 0.147 0.141 0.188 0.374 0.467 0.000835 0.00104 140 172 0.664 0.0246 0.173 0.143 0.191 0.416 0.506 0.000928 0.00113 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 140 100 0.527 0.0232 0.0635 0.139 0.185 0.296 0.307 0.000661 0.000685 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 140 18838.852 0.005 0.0249 0.283 0.78 0.144 0.192 0.525 0.648 0.00117 0.00145 ! Validation 140 18838.852 0.005 0.0249 0.11 0.608 0.144 0.192 0.319 0.404 0.000712 0.000902 Wall time: 18838.8526211367 ! Best model 140 0.608 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 141 100 1.78 0.0252 1.28 0.144 0.193 1.33 1.38 0.00296 0.00307 141 172 0.625 0.0241 0.142 0.142 0.189 0.267 0.459 0.000596 0.00103 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 141 100 0.456 0.0226 0.00312 0.138 0.183 0.0643 0.0681 0.000143 0.000152 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 141 18972.063 0.005 0.0263 0.418 0.943 0.148 0.197 0.628 0.788 0.0014 0.00176 ! Validation 141 18972.063 0.005 0.0246 0.102 0.594 0.143 0.191 0.308 0.39 0.000687 0.00087 Wall time: 18972.063344128896 ! Best model 141 0.594 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 142 100 1.22 0.0233 0.759 0.139 0.186 0.986 1.06 0.0022 0.00237 142 172 0.688 0.0277 0.133 0.151 0.203 0.376 0.444 0.000839 0.000992 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 142 100 0.78 0.026 0.259 0.15 0.197 0.616 0.62 0.00137 0.00138 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 142 19105.260 0.005 0.0243 0.304 0.789 0.142 0.19 0.52 0.672 0.00116 0.0015 ! Validation 142 19105.260 0.005 0.029 0.465 1.04 0.159 0.207 0.731 0.831 0.00163 0.00185 Wall time: 19105.260156705044 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 143 100 0.756 0.0233 0.291 0.139 0.186 0.562 0.657 0.00126 0.00147 143 172 1.27 0.0284 0.703 0.157 0.205 0.982 1.02 0.00219 0.00228 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 143 100 0.662 0.025 0.162 0.147 0.193 0.475 0.49 0.00106 0.00109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 143 19238.459 0.005 0.0266 0.402 0.933 0.149 0.199 0.611 0.772 0.00136 0.00172 ! 
Validation 143 19238.459 0.005 0.027 0.115 0.655 0.151 0.2 0.336 0.414 0.00075 0.000923 Wall time: 19238.459371072706 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 144 100 0.652 0.0219 0.213 0.135 0.18 0.46 0.563 0.00103 0.00126 144 172 0.553 0.0219 0.116 0.135 0.18 0.346 0.414 0.000771 0.000924 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 144 100 0.427 0.0205 0.0178 0.131 0.174 0.141 0.163 0.000315 0.000363 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 144 19374.683 0.005 0.0238 0.271 0.748 0.141 0.188 0.502 0.635 0.00112 0.00142 ! Validation 144 19374.683 0.005 0.023 0.127 0.587 0.139 0.185 0.345 0.434 0.000769 0.00097 Wall time: 19374.683374447748 ! Best model 144 0.587 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 145 100 1.54 0.0289 0.963 0.159 0.207 1.13 1.2 0.00253 0.00267 145 172 0.568 0.023 0.107 0.139 0.185 0.31 0.399 0.000692 0.000891 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 145 100 0.454 0.0202 0.0501 0.131 0.173 0.26 0.273 0.000581 0.000609 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 145 19507.892 0.005 0.0243 0.353 0.839 0.142 0.19 0.582 0.724 0.0013 0.00162 ! Validation 145 19507.892 0.005 0.0226 0.1 0.553 0.138 0.183 0.313 0.386 0.000699 0.000862 Wall time: 19507.89274811279 ! Best model 145 0.553 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 146 100 27 0.964 7.74 0.883 1.2 2.81 3.39 0.00627 0.00757 146 172 20.4 0.849 3.39 0.833 1.12 1.72 2.24 0.00384 0.00501 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 146 100 18.6 0.906 0.462 0.854 1.16 0.765 0.829 0.00171 0.00185 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 146 19641.163 0.005 0.721 141 155 0.701 1.03 6.09 14.5 0.0136 0.0323 ! Validation 146 19641.163 0.005 0.859 3.45 20.6 0.836 1.13 1.83 2.26 0.0041 0.00505 Wall time: 19641.163366531022 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 147 100 16.4 0.664 3.15 0.739 0.993 1.76 2.16 0.00393 0.00482 147 172 11.1 0.484 1.38 0.634 0.847 1.22 1.43 0.00273 0.00319 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 147 100 10.4 0.513 0.0988 0.648 0.873 0.327 0.383 0.00073 0.000855 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 147 19774.347 0.005 0.69 2.59 16.4 0.751 1.01 1.56 1.96 0.00349 0.00437 ! Validation 147 19774.347 0.005 0.491 2.01 11.8 0.638 0.854 1.42 1.73 0.00317 0.00386 Wall time: 19774.347140586004 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 148 100 8.85 0.33 2.25 0.523 0.7 1.51 1.83 0.00338 0.00408 148 172 7.15 0.274 1.68 0.476 0.637 1.33 1.58 0.00298 0.00352 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 148 100 6 0.292 0.169 0.486 0.658 0.349 0.501 0.00078 0.00112 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 148 19907.520 0.005 0.354 1.68 8.75 0.54 0.725 1.26 1.58 0.00281 0.00352 ! 
Validation 148 19907.520 0.005 0.282 1.44 7.08 0.484 0.647 1.21 1.46 0.00271 0.00327 Wall time: 19907.520186665934 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 149 100 5.52 0.224 1.03 0.432 0.577 1.01 1.24 0.00225 0.00276 149 172 5.22 0.201 1.21 0.409 0.546 1.07 1.34 0.00239 0.00299 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 149 100 7.03 0.207 2.9 0.41 0.554 2.05 2.07 0.00459 0.00463 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 149 20042.535 0.005 0.233 1.6 6.26 0.441 0.589 1.24 1.54 0.00277 0.00344 ! Validation 149 20042.535 0.005 0.2 2.12 6.13 0.41 0.545 1.53 1.78 0.00342 0.00396 Wall time: 20042.53489836864 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 150 100 7.83 0.172 4.39 0.378 0.505 2.36 2.55 0.00528 0.0057 150 172 3.5 0.156 0.381 0.363 0.481 0.6 0.752 0.00134 0.00168 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 150 100 4.28 0.163 1.02 0.366 0.491 1.22 1.23 0.00273 0.00275 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 150 20180.837 0.005 0.177 1.92 5.47 0.385 0.513 1.37 1.69 0.00305 0.00377 ! Validation 150 20180.837 0.005 0.159 0.584 3.76 0.367 0.486 0.755 0.931 0.00169 0.00208 Wall time: 20180.837703960016 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 151 100 5.04 0.137 2.3 0.341 0.451 1.69 1.85 0.00376 0.00412 151 172 3.76 0.135 1.05 0.34 0.448 0.934 1.25 0.00208 0.00279 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 151 100 2.69 0.134 0.0179 0.335 0.446 0.133 0.163 0.000297 0.000364 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 151 20314.015 0.005 0.144 1.24 4.12 0.349 0.463 1.08 1.36 0.0024 0.00303 ! Validation 151 20314.015 0.005 0.133 1.54 4.19 0.337 0.444 1.3 1.51 0.0029 0.00337 Wall time: 20314.015446895733 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 152 100 3.05 0.122 0.608 0.321 0.426 0.802 0.95 0.00179 0.00212 152 172 2.92 0.11 0.713 0.308 0.404 0.858 1.03 0.00192 0.0023 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 152 100 2.43 0.114 0.151 0.31 0.411 0.465 0.473 0.00104 0.00106 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 152 20447.183 0.005 0.122 0.858 3.29 0.323 0.425 0.891 1.13 0.00199 0.00252 ! Validation 152 20447.183 0.005 0.114 0.949 3.23 0.314 0.412 0.974 1.19 0.00217 0.00265 Wall time: 20447.18309917394 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 153 100 2.63 0.109 0.454 0.305 0.402 0.698 0.821 0.00156 0.00183 153 172 3.8 0.0999 1.8 0.294 0.385 1.49 1.64 0.00332 0.00365 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 153 100 2.67 0.0995 0.683 0.291 0.384 1 1.01 0.00224 0.00225 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 153 20580.367 0.005 0.106 1.03 3.16 0.303 0.397 1 1.24 0.00224 0.00277 ! 
Validation 153 20580.367 0.005 0.101 3.28 5.3 0.296 0.387 2.05 2.21 0.00458 0.00493 Wall time: 20580.367460250854 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 154 100 2.33 0.0939 0.45 0.284 0.373 0.603 0.817 0.00135 0.00182 154 172 2.42 0.0932 0.553 0.284 0.372 0.713 0.906 0.00159 0.00202 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 154 100 2.96 0.0883 1.19 0.275 0.362 1.33 1.33 0.00296 0.00297 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 154 20713.537 0.005 0.0946 0.958 2.85 0.286 0.375 0.951 1.19 0.00212 0.00266 ! Validation 154 20713.537 0.005 0.0897 0.572 2.37 0.28 0.365 0.772 0.922 0.00172 0.00206 Wall time: 20713.537796574645 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 155 100 4.16 0.0828 2.5 0.269 0.351 1.79 1.93 0.004 0.0043 155 172 1.99 0.0805 0.383 0.264 0.346 0.563 0.754 0.00126 0.00168 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 155 100 3.3 0.077 1.76 0.256 0.338 1.61 1.61 0.0036 0.0036 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 155 20846.725 0.005 0.084 0.667 2.35 0.27 0.353 0.8 0.995 0.00179 0.00222 ! Validation 155 20846.725 0.005 0.0796 0.673 2.26 0.264 0.344 0.847 1 0.00189 0.00223 Wall time: 20846.72508463869 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 156 100 2.07 0.0773 0.525 0.259 0.339 0.716 0.883 0.0016 0.00197 156 172 2.59 0.0739 1.11 0.253 0.331 1.09 1.29 0.00242 0.00287 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 156 100 1.49 0.0719 0.0555 0.247 0.327 0.266 0.287 0.000595 0.000641 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 156 20979.891 0.005 0.0757 0.861 2.38 0.256 0.335 0.904 1.13 0.00202 0.00252 ! Validation 156 20979.891 0.005 0.0737 1.01 2.48 0.253 0.331 1.05 1.22 0.00234 0.00273 Wall time: 20979.891512223985 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 157 100 1.97 0.0699 0.574 0.246 0.322 0.699 0.923 0.00156 0.00206 157 172 1.91 0.0649 0.607 0.237 0.311 0.783 0.949 0.00175 0.00212 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 157 100 1.29 0.064 0.0131 0.233 0.308 0.121 0.14 0.000269 0.000312 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 157 21113.049 0.005 0.07 0.714 2.11 0.246 0.322 0.829 1.03 0.00185 0.0023 ! Validation 157 21113.049 0.005 0.0666 0.472 1.8 0.241 0.314 0.678 0.837 0.00151 0.00187 Wall time: 21113.049089391716 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 158 100 1.52 0.0641 0.24 0.235 0.309 0.462 0.596 0.00103 0.00133 158 172 1.56 0.0624 0.31 0.233 0.304 0.531 0.678 0.00118 0.00151 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 158 100 1.43 0.0594 0.242 0.225 0.297 0.581 0.599 0.0013 0.00134 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 158 21246.395 0.005 0.0639 0.725 2 0.235 0.308 0.838 1.04 0.00187 0.00232 ! 
Validation 158 21246.395 0.005 0.0619 0.253 1.49 0.232 0.303 0.479 0.612 0.00107 0.00137 Wall time: 21246.395559049677 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 159 100 1.69 0.0637 0.417 0.235 0.307 0.657 0.787 0.00147 0.00176 159 172 1.4 0.0596 0.213 0.226 0.297 0.452 0.562 0.00101 0.00125 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 159 100 1.15 0.0561 0.0329 0.219 0.289 0.18 0.221 0.000401 0.000494 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 159 21379.563 0.005 0.0608 0.918 2.13 0.229 0.3 0.927 1.17 0.00207 0.00261 ! Validation 159 21379.563 0.005 0.0586 0.608 1.78 0.226 0.295 0.797 0.95 0.00178 0.00212 Wall time: 21379.563529407606 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 160 100 1.6 0.0542 0.518 0.216 0.284 0.766 0.877 0.00171 0.00196 160 172 1.3 0.0528 0.246 0.214 0.28 0.491 0.604 0.0011 0.00135 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 160 100 1.43 0.0517 0.397 0.21 0.277 0.749 0.768 0.00167 0.00171 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 160 21516.266 0.005 0.0561 0.68 1.8 0.22 0.289 0.815 1 0.00182 0.00224 ! Validation 160 21516.266 0.005 0.0543 0.23 1.32 0.218 0.284 0.459 0.584 0.00102 0.0013 Wall time: 21516.26677067578 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 161 100 1.13 0.0516 0.101 0.212 0.277 0.318 0.387 0.00071 0.000863 161 172 1.36 0.0504 0.355 0.208 0.274 0.582 0.726 0.0013 0.00162 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 161 100 1.68 0.0493 0.698 0.206 0.271 1 1.02 0.00224 0.00227 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 161 21650.464 0.005 0.054 0.943 2.02 0.216 0.283 0.947 1.18 0.00211 0.00264 ! Validation 161 21650.464 0.005 0.052 0.315 1.35 0.213 0.278 0.559 0.684 0.00125 0.00153 Wall time: 21650.46401170874 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 162 100 1.4 0.0497 0.41 0.206 0.272 0.679 0.78 0.00152 0.00174 162 172 1.19 0.0483 0.22 0.204 0.268 0.471 0.572 0.00105 0.00128 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 162 100 1.14 0.0451 0.235 0.197 0.259 0.565 0.591 0.00126 0.00132 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 162 21783.667 0.005 0.0497 0.648 1.64 0.207 0.272 0.794 0.981 0.00177 0.00219 ! Validation 162 21783.667 0.005 0.0482 0.192 1.16 0.205 0.267 0.417 0.534 0.00093 0.00119 Wall time: 21783.66724581085 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 163 100 1.44 0.049 0.46 0.205 0.27 0.711 0.826 0.00159 0.00184 163 172 1.58 0.0448 0.687 0.197 0.258 0.91 1.01 0.00203 0.00225 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 163 100 1.04 0.0427 0.185 0.192 0.252 0.49 0.523 0.00109 0.00117 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 163 21934.041 0.005 0.0471 0.785 1.73 0.202 0.264 0.872 1.08 0.00195 0.00241 ! 
Validation 163 21934.041 0.005 0.0455 0.644 1.55 0.199 0.26 0.862 0.977 0.00192 0.00218 Wall time: 21934.041723114904 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 164 100 0.972 0.0413 0.146 0.189 0.248 0.381 0.466 0.000851 0.00104 164 172 1.13 0.0419 0.297 0.191 0.249 0.564 0.664 0.00126 0.00148 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 164 100 0.866 0.0409 0.0473 0.188 0.247 0.235 0.265 0.000524 0.000591 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 164 22067.209 0.005 0.0434 0.667 1.53 0.194 0.254 0.809 0.995 0.00181 0.00222 ! Validation 164 22067.209 0.005 0.0434 0.163 1.03 0.194 0.254 0.387 0.492 0.000863 0.0011 Wall time: 22067.209835585672 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 165 100 2.81 0.0427 1.96 0.191 0.252 1.64 1.71 0.00366 0.00381 165 172 3.95 0.0401 3.15 0.186 0.244 2.11 2.16 0.00471 0.00483 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 165 100 1.72 0.0412 0.894 0.189 0.247 1.13 1.15 0.00253 0.00257 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 165 22200.401 0.005 0.0415 0.805 1.63 0.189 0.248 0.877 1.09 0.00196 0.00244 ! Validation 165 22200.401 0.005 0.0434 0.664 1.53 0.194 0.254 0.887 0.993 0.00198 0.00222 Wall time: 22200.401268777903 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 166 100 0.924 0.0373 0.177 0.179 0.235 0.403 0.513 0.0009 0.00114 166 172 0.993 0.0382 0.229 0.181 0.238 0.514 0.583 0.00115 0.0013 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 166 100 1.19 0.0378 0.435 0.181 0.237 0.775 0.804 0.00173 0.00179 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 166 22337.296 0.005 0.0398 0.606 1.4 0.185 0.243 0.75 0.949 0.00167 0.00212 ! Validation 166 22337.296 0.005 0.0406 1.15 1.96 0.187 0.245 1.19 1.31 0.00265 0.00292 Wall time: 22337.295904980972 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 167 100 1.93 0.0371 1.19 0.179 0.235 1.24 1.33 0.00277 0.00297 167 172 0.912 0.0375 0.163 0.179 0.236 0.391 0.492 0.000873 0.0011 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 167 100 1.16 0.0333 0.491 0.17 0.222 0.827 0.854 0.00185 0.00191 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 167 22477.422 0.005 0.0371 0.521 1.26 0.179 0.235 0.706 0.879 0.00158 0.00196 ! Validation 167 22477.422 0.005 0.0369 0.305 1.04 0.178 0.234 0.579 0.673 0.00129 0.0015 Wall time: 22477.422426807694 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 168 100 0.855 0.0345 0.165 0.171 0.226 0.391 0.496 0.000873 0.00111 168 172 1.15 0.0341 0.468 0.17 0.225 0.695 0.834 0.00155 0.00186 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 168 100 0.854 0.0309 0.236 0.163 0.214 0.553 0.592 0.00123 0.00132 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 168 22610.710 0.005 0.0349 0.452 1.15 0.173 0.228 0.658 0.819 0.00147 0.00183 ! 
Validation 168 22610.710 0.005 0.0346 0.53 1.22 0.172 0.226 0.776 0.887 0.00173 0.00198 Wall time: 22610.710834496655 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 169 100 1.18 0.0323 0.535 0.166 0.219 0.814 0.891 0.00182 0.00199 169 172 0.921 0.0341 0.24 0.17 0.225 0.479 0.597 0.00107 0.00133 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 169 100 0.749 0.0315 0.118 0.165 0.216 0.364 0.419 0.000812 0.000936 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 169 22744.038 0.005 0.0333 0.579 1.25 0.168 0.222 0.736 0.928 0.00164 0.00207 ! Validation 169 22744.038 0.005 0.0349 0.152 0.85 0.173 0.228 0.38 0.475 0.000849 0.00106 Wall time: 22744.038688198663 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 170 100 0.964 0.0312 0.34 0.163 0.215 0.625 0.711 0.00139 0.00159 170 172 0.643 0.0288 0.0662 0.157 0.207 0.248 0.313 0.000553 0.0007 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 170 100 1 0.029 0.423 0.158 0.207 0.766 0.793 0.00171 0.00177 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 170 22877.274 0.005 0.0321 0.45 1.09 0.165 0.218 0.666 0.818 0.00149 0.00183 ! Validation 170 22877.274 0.005 0.0326 0.325 0.977 0.167 0.22 0.563 0.694 0.00126 0.00155 Wall time: 22877.27413781779 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 171 100 0.794 0.0328 0.139 0.166 0.221 0.357 0.454 0.000797 0.00101 171 172 0.719 0.0319 0.0812 0.164 0.218 0.29 0.347 0.000646 0.000775 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 171 100 2.36 0.0291 1.78 0.158 0.208 1.61 1.62 0.00359 0.00363 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 171 23012.193 0.005 0.032 0.667 1.31 0.165 0.218 0.802 0.995 0.00179 0.00222 ! Validation 171 23012.193 0.005 0.0326 2.77 3.42 0.166 0.22 1.95 2.03 0.00436 0.00452 Wall time: 23012.192971308716 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 172 100 0.764 0.0304 0.157 0.16 0.212 0.397 0.482 0.000887 0.00108 172 172 0.645 0.0283 0.0786 0.155 0.205 0.281 0.342 0.000626 0.000762 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 172 100 0.636 0.0267 0.102 0.151 0.199 0.33 0.39 0.000737 0.00087 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 172 23145.423 0.005 0.0309 0.464 1.08 0.161 0.214 0.658 0.831 0.00147 0.00185 ! Validation 172 23145.423 0.005 0.0303 0.151 0.756 0.16 0.212 0.384 0.473 0.000857 0.00106 Wall time: 23145.423195052892 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 173 100 0.699 0.0271 0.158 0.151 0.2 0.404 0.484 0.000901 0.00108 173 172 0.715 0.0288 0.139 0.156 0.207 0.381 0.455 0.000851 0.00101 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 173 100 1.22 0.0272 0.674 0.153 0.201 0.976 1 0.00218 0.00223 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 173 23278.658 0.005 0.0288 0.38 0.955 0.156 0.207 0.591 0.751 0.00132 0.00168 ! 
Validation 173 23278.658 0.005 0.0299 0.633 1.23 0.159 0.211 0.891 0.969 0.00199 0.00216 Wall time: 23278.65842398163 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 174 100 0.871 0.0295 0.281 0.157 0.209 0.538 0.646 0.0012 0.00144 174 172 0.93 0.0293 0.344 0.156 0.209 0.584 0.714 0.0013 0.00159 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 174 100 0.539 0.0248 0.0434 0.145 0.192 0.2 0.254 0.000447 0.000567 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 174 23423.492 0.005 0.0284 0.457 1.02 0.154 0.205 0.656 0.824 0.00146 0.00184 ! Validation 174 23423.492 0.005 0.0283 0.143 0.709 0.155 0.205 0.367 0.461 0.00082 0.00103 Wall time: 23423.492664269637 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 175 100 0.957 0.0272 0.413 0.151 0.201 0.683 0.783 0.00152 0.00175 175 172 0.748 0.026 0.227 0.148 0.197 0.489 0.581 0.00109 0.0013 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 175 100 0.585 0.0245 0.0949 0.145 0.191 0.319 0.375 0.000713 0.000838 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 175 23556.740 0.005 0.0271 0.41 0.953 0.151 0.201 0.616 0.78 0.00138 0.00174 ! Validation 175 23556.740 0.005 0.0275 0.114 0.665 0.153 0.202 0.327 0.412 0.000729 0.000919 Wall time: 23556.74017071398 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 176 100 1.31 0.029 0.731 0.155 0.207 0.893 1.04 0.00199 0.00233 176 172 1.62 0.0265 1.09 0.149 0.198 1.17 1.27 0.00261 0.00284 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 176 100 2.48 0.024 2 0.143 0.189 1.71 1.72 0.00382 0.00385 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 176 23690.426 0.005 0.0276 0.541 1.09 0.152 0.202 0.701 0.896 0.00156 0.002 ! Validation 176 23690.426 0.005 0.0275 1.4 1.95 0.153 0.202 1.33 1.44 0.00297 0.00322 Wall time: 23690.42646183679 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 177 100 0.77 0.0293 0.184 0.156 0.208 0.422 0.523 0.000943 0.00117 177 172 0.883 0.0259 0.365 0.147 0.196 0.656 0.736 0.00146 0.00164 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 177 100 0.509 0.024 0.0279 0.143 0.189 0.173 0.204 0.000386 0.000455 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 177 23825.352 0.005 0.0275 0.572 1.12 0.151 0.202 0.754 0.922 0.00168 0.00206 ! Validation 177 23825.352 0.005 0.0273 0.143 0.689 0.152 0.201 0.349 0.461 0.000778 0.00103 Wall time: 23825.351977210026 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 178 100 0.631 0.0247 0.138 0.143 0.191 0.388 0.452 0.000867 0.00101 178 172 0.851 0.0271 0.308 0.149 0.201 0.541 0.676 0.00121 0.00151 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 178 100 0.509 0.0243 0.0229 0.144 0.19 0.154 0.184 0.000345 0.000412 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 178 23958.702 0.005 0.0258 0.377 0.892 0.146 0.196 0.597 0.748 0.00133 0.00167 ! 
Validation 178 23958.702 0.005 0.0271 0.106 0.648 0.151 0.201 0.306 0.396 0.000682 0.000884 Wall time: 23958.702686102595 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 179 100 0.615 0.025 0.115 0.143 0.193 0.332 0.413 0.000741 0.000922 179 172 0.876 0.0242 0.391 0.141 0.19 0.667 0.762 0.00149 0.0017 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 179 100 0.55 0.0225 0.0997 0.138 0.183 0.351 0.385 0.000784 0.000859 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 179 24094.662 0.005 0.0251 0.315 0.817 0.144 0.193 0.545 0.684 0.00122 0.00153 ! Validation 179 24094.662 0.005 0.0256 0.157 0.668 0.147 0.195 0.392 0.483 0.000875 0.00108 Wall time: 24094.662205270957 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 180 100 0.622 0.0231 0.161 0.139 0.185 0.361 0.489 0.000806 0.00109 180 172 0.677 0.0258 0.16 0.147 0.196 0.415 0.488 0.000926 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 180 100 0.943 0.0242 0.459 0.143 0.19 0.811 0.825 0.00181 0.00184 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 180 24230.952 0.005 0.0247 0.419 0.913 0.143 0.191 0.617 0.789 0.00138 0.00176 ! Validation 180 24230.952 0.005 0.027 0.798 1.34 0.151 0.2 0.999 1.09 0.00223 0.00243 Wall time: 24230.952053366695 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 181 100 0.636 0.0256 0.124 0.145 0.195 0.348 0.43 0.000777 0.000959 181 172 1.15 0.0244 0.66 0.142 0.19 0.931 0.99 0.00208 0.00221 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 181 100 0.889 0.0214 0.462 0.135 0.178 0.812 0.828 0.00181 0.00185 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 181 24364.220 0.005 0.0243 0.35 0.836 0.142 0.19 0.583 0.721 0.0013 0.00161 ! Validation 181 24364.220 0.005 0.0242 0.363 0.847 0.143 0.19 0.657 0.734 0.00147 0.00164 Wall time: 24364.22069570562 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 182 100 0.523 0.023 0.0631 0.138 0.185 0.269 0.306 0.0006 0.000683 182 172 1.66 0.0225 1.21 0.138 0.183 1.28 1.34 0.00285 0.003 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 182 100 1.98 0.0218 1.55 0.136 0.18 1.51 1.52 0.00336 0.00338 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 182 24503.460 0.005 0.0234 0.338 0.807 0.139 0.187 0.563 0.708 0.00126 0.00158 ! Validation 182 24503.460 0.005 0.0243 1.85 2.34 0.143 0.19 1.59 1.66 0.00355 0.0037 Wall time: 24503.460164384916 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 183 100 0.6 0.022 0.16 0.135 0.181 0.4 0.487 0.000892 0.00109 183 172 0.973 0.0225 0.523 0.138 0.183 0.817 0.881 0.00182 0.00197 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 183 100 0.557 0.0211 0.135 0.134 0.177 0.43 0.447 0.00096 0.000999 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 183 24636.704 0.005 0.0236 0.417 0.889 0.14 0.187 0.616 0.787 0.00138 0.00176 ! 
Validation 183 24636.704 0.005 0.0239 0.348 0.826 0.142 0.188 0.617 0.719 0.00138 0.00161 Wall time: 24636.70426077163 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 184 100 0.575 0.0234 0.106 0.139 0.186 0.301 0.397 0.000673 0.000887 184 172 0.747 0.024 0.268 0.141 0.189 0.532 0.63 0.00119 0.00141 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 184 100 0.453 0.0205 0.0434 0.131 0.174 0.216 0.254 0.000483 0.000567 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 184 24769.956 0.005 0.0231 0.372 0.833 0.138 0.185 0.604 0.743 0.00135 0.00166 ! Validation 184 24769.956 0.005 0.0232 0.0958 0.56 0.139 0.186 0.298 0.377 0.000665 0.000842 Wall time: 24769.956320003606 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 185 100 0.628 0.0237 0.155 0.14 0.187 0.346 0.48 0.000773 0.00107 185 172 0.563 0.022 0.123 0.135 0.181 0.347 0.427 0.000775 0.000952 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 185 100 0.629 0.0202 0.225 0.13 0.173 0.563 0.578 0.00126 0.00129 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 185 24905.793 0.005 0.0222 0.304 0.748 0.136 0.182 0.545 0.672 0.00122 0.0015 ! Validation 185 24905.793 0.005 0.0228 0.192 0.648 0.138 0.184 0.445 0.534 0.000992 0.00119 Wall time: 24905.792870850768 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 186 100 1.26 0.0218 0.829 0.135 0.18 1.07 1.11 0.00239 0.00248 186 172 0.521 0.0223 0.0748 0.137 0.182 0.257 0.333 0.000573 0.000744 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 186 100 0.76 0.0214 0.331 0.135 0.178 0.69 0.701 0.00154 0.00156 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 186 25039.985 0.005 0.0227 0.488 0.943 0.137 0.184 0.696 0.851 0.00155 0.0019 ! Validation 186 25039.985 0.005 0.024 0.263 0.743 0.142 0.189 0.542 0.625 0.00121 0.00139 Wall time: 25039.98560937494 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 187 100 0.487 0.0214 0.0582 0.133 0.178 0.241 0.294 0.000537 0.000656 187 172 0.68 0.0222 0.236 0.135 0.182 0.53 0.592 0.00118 0.00132 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 187 100 0.476 0.0203 0.0698 0.131 0.174 0.297 0.322 0.000663 0.000718 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 187 25173.241 0.005 0.0219 0.285 0.723 0.134 0.18 0.51 0.651 0.00114 0.00145 ! Validation 187 25173.241 0.005 0.0228 0.24 0.695 0.138 0.184 0.485 0.597 0.00108 0.00133 Wall time: 25173.24088667985 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 188 100 0.653 0.0215 0.222 0.134 0.179 0.497 0.574 0.00111 0.00128 188 172 0.805 0.0228 0.349 0.137 0.184 0.622 0.719 0.00139 0.00161 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 188 100 0.449 0.0201 0.0465 0.13 0.173 0.242 0.263 0.00054 0.000587 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 188 25306.503 0.005 0.0212 0.276 0.701 0.132 0.178 0.515 0.64 0.00115 0.00143 ! 
Validation 188 25306.503 0.005 0.0226 0.101 0.553 0.138 0.183 0.303 0.388 0.000676 0.000866 Wall time: 25306.503725675866 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 189 100 1.51 0.0227 1.06 0.137 0.184 1.08 1.25 0.00242 0.0028 189 172 0.568 0.023 0.107 0.138 0.185 0.326 0.399 0.000728 0.000891 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 189 100 0.785 0.0201 0.384 0.13 0.173 0.748 0.755 0.00167 0.00168 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 189 25443.868 0.005 0.0219 0.438 0.877 0.135 0.181 0.647 0.807 0.00144 0.0018 ! Validation 189 25443.868 0.005 0.0226 0.606 1.06 0.137 0.183 0.882 0.948 0.00197 0.00212 Wall time: 25443.86835790472 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 190 100 0.447 0.0193 0.0611 0.127 0.169 0.248 0.301 0.000553 0.000672 190 172 1.43 0.0213 1 0.134 0.178 1.17 1.22 0.00261 0.00272 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 190 100 0.554 0.0204 0.147 0.131 0.174 0.456 0.467 0.00102 0.00104 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 190 25578.809 0.005 0.0208 0.267 0.682 0.131 0.176 0.496 0.629 0.00111 0.0014 ! Validation 190 25578.809 0.005 0.0224 0.147 0.596 0.137 0.182 0.378 0.468 0.000844 0.00104 Wall time: 25578.809421131853 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 191 100 0.504 0.0199 0.106 0.128 0.172 0.327 0.397 0.000729 0.000887 191 172 0.49 0.0192 0.105 0.126 0.169 0.319 0.395 0.000712 0.000882 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 191 100 0.376 0.018 0.0159 0.123 0.163 0.125 0.154 0.000279 0.000343 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 191 25712.104 0.005 0.021 0.294 0.714 0.131 0.177 0.529 0.661 0.00118 0.00148 ! Validation 191 25712.104 0.005 0.0205 0.103 0.512 0.131 0.174 0.307 0.391 0.000685 0.000872 Wall time: 25712.103860108647 ! Best model 191 0.512 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 192 100 1.26 0.0193 0.87 0.126 0.169 1.08 1.14 0.00242 0.00254 192 172 0.489 0.0199 0.0905 0.128 0.172 0.306 0.367 0.000683 0.000818 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 192 100 0.668 0.0187 0.295 0.125 0.166 0.653 0.661 0.00146 0.00148 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 192 25845.393 0.005 0.0198 0.289 0.685 0.128 0.172 0.519 0.655 0.00116 0.00146 ! Validation 192 25845.393 0.005 0.0208 0.357 0.773 0.132 0.176 0.648 0.728 0.00145 0.00163 Wall time: 25845.393754107878 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 193 100 1.11 0.0188 0.738 0.125 0.167 0.988 1.05 0.00221 0.00234 193 172 0.602 0.0197 0.207 0.127 0.171 0.473 0.554 0.00106 0.00124 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 193 100 0.407 0.0174 0.0584 0.121 0.161 0.279 0.294 0.000622 0.000657 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 193 25978.731 0.005 0.0197 0.261 0.655 0.127 0.171 0.497 0.622 0.00111 0.00139 ! 
Validation 193 25978.731 0.005 0.0198 0.174 0.57 0.129 0.171 0.427 0.508 0.000952 0.00113 Wall time: 25978.73177129496 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 194 100 2.04 0.0202 1.64 0.128 0.173 1.52 1.56 0.00339 0.00348 194 172 0.918 0.02 0.519 0.128 0.172 0.8 0.878 0.00179 0.00196 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 194 100 0.485 0.0188 0.109 0.126 0.167 0.394 0.402 0.000878 0.000896 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 194 26114.393 0.005 0.0203 0.375 0.781 0.129 0.174 0.596 0.746 0.00133 0.00166 ! Validation 194 26114.393 0.005 0.0211 0.259 0.68 0.133 0.177 0.536 0.62 0.0012 0.00138 Wall time: 26114.392915626988 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 195 100 0.822 0.0182 0.458 0.123 0.165 0.733 0.824 0.00164 0.00184 195 172 0.562 0.0192 0.179 0.126 0.169 0.458 0.516 0.00102 0.00115 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 195 100 0.387 0.0179 0.0289 0.124 0.163 0.175 0.207 0.000391 0.000462 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 195 26248.021 0.005 0.0196 0.261 0.654 0.127 0.171 0.505 0.623 0.00113 0.00139 ! Validation 195 26248.021 0.005 0.0199 0.107 0.505 0.13 0.172 0.313 0.399 0.000699 0.00089 Wall time: 26248.02181360079 ! Best model 195 0.505 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 196 100 1.52 0.0186 1.15 0.125 0.166 1.26 1.31 0.00282 0.00291 196 172 0.675 0.0181 0.313 0.122 0.164 0.56 0.682 0.00125 0.00152 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 196 100 0.692 0.017 0.352 0.119 0.159 0.717 0.723 0.0016 0.00161 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 196 26381.310 0.005 0.019 0.262 0.641 0.125 0.168 0.495 0.623 0.0011 0.00139 ! Validation 196 26381.310 0.005 0.0193 0.542 0.927 0.127 0.169 0.84 0.897 0.00188 0.002 Wall time: 26381.31055888068 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 197 100 0.563 0.0205 0.154 0.13 0.174 0.381 0.478 0.000851 0.00107 197 172 0.43 0.0189 0.0527 0.125 0.167 0.219 0.28 0.000488 0.000624 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 197 100 0.443 0.0166 0.111 0.118 0.157 0.396 0.406 0.000884 0.000907 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 197 26514.580 0.005 0.0194 0.314 0.702 0.127 0.17 0.544 0.683 0.00122 0.00152 ! Validation 197 26514.580 0.005 0.0186 0.104 0.476 0.125 0.166 0.312 0.393 0.000697 0.000877 Wall time: 26514.580703439657 ! Best model 197 0.476 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 198 100 0.411 0.0181 0.049 0.122 0.164 0.2 0.27 0.000446 0.000602 198 172 0.699 0.0191 0.317 0.126 0.168 0.551 0.686 0.00123 0.00153 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 198 100 2.02 0.0171 1.68 0.12 0.159 1.58 1.58 0.00352 0.00352 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 198 26650.688 0.005 0.0182 0.237 0.6 0.123 0.164 0.471 0.593 0.00105 0.00132 ! 
Validation 198 26650.688 0.005 0.0192 1.33 1.72 0.127 0.169 1.34 1.41 0.00298 0.00314 Wall time: 26650.688375124708 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 199 100 0.599 0.0177 0.246 0.121 0.162 0.51 0.604 0.00114 0.00135 199 172 2.08 0.0204 1.67 0.13 0.174 1.55 1.57 0.00345 0.00351 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 199 100 0.856 0.0223 0.41 0.137 0.182 0.776 0.78 0.00173 0.00174 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 199 26783.998 0.005 0.0185 0.345 0.715 0.124 0.166 0.565 0.715 0.00126 0.0016 ! Validation 199 26783.998 0.005 0.0233 0.557 1.02 0.141 0.186 0.829 0.909 0.00185 0.00203 Wall time: 26783.99834268866 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 200 100 1.02 0.0178 0.662 0.121 0.162 0.934 0.992 0.00209 0.00221 200 172 0.516 0.0178 0.161 0.12 0.162 0.415 0.489 0.000926 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 200 100 0.466 0.0158 0.151 0.115 0.153 0.466 0.474 0.00104 0.00106 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 200 26917.296 0.005 0.0186 0.22 0.592 0.124 0.166 0.453 0.571 0.00101 0.00128 ! Validation 200 26917.296 0.005 0.0177 0.173 0.527 0.122 0.162 0.413 0.507 0.000921 0.00113 Wall time: 26917.296204120852 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 201 100 1.01 0.0195 0.615 0.128 0.17 0.884 0.956 0.00197 0.00213 201 172 0.438 0.019 0.0587 0.125 0.168 0.237 0.295 0.000528 0.000659 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 201 100 0.429 0.0159 0.112 0.116 0.153 0.399 0.408 0.000891 0.000911 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 201 27050.608 0.005 0.019 0.327 0.707 0.126 0.168 0.571 0.697 0.00127 0.00156 ! Validation 201 27050.608 0.005 0.018 0.238 0.598 0.123 0.164 0.524 0.594 0.00117 0.00133 Wall time: 27050.608111144044 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 202 100 0.496 0.0164 0.168 0.117 0.156 0.433 0.5 0.000966 0.00112 202 172 0.587 0.0185 0.217 0.126 0.166 0.511 0.567 0.00114 0.00127 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 202 100 0.375 0.0167 0.0421 0.119 0.157 0.237 0.25 0.000528 0.000558 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 202 27183.913 0.005 0.0172 0.22 0.564 0.119 0.16 0.463 0.572 0.00103 0.00128 ! Validation 202 27183.913 0.005 0.0187 0.182 0.556 0.126 0.167 0.443 0.52 0.00099 0.00116 Wall time: 27183.9136615498 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 203 100 0.394 0.0159 0.0749 0.116 0.154 0.274 0.334 0.000611 0.000744 203 172 0.513 0.0174 0.166 0.12 0.161 0.408 0.496 0.000912 0.00111 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 203 100 0.309 0.0153 0.00318 0.115 0.151 0.0649 0.0687 0.000145 0.000153 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 203 27317.818 0.005 0.0167 0.191 0.524 0.118 0.157 0.431 0.532 0.000962 0.00119 ! Validation 203 27317.818 0.005 0.0174 0.098 0.446 0.122 0.161 0.318 0.381 0.000709 0.000851 Wall time: 27317.81788634695 ! 
Best model 203 0.446 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 204 100 0.441 0.0168 0.106 0.118 0.158 0.319 0.396 0.000711 0.000884 204 172 1.91 0.0191 1.53 0.127 0.168 1.43 1.51 0.00319 0.00336 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 204 100 1.66 0.0242 1.17 0.145 0.19 1.31 1.32 0.00293 0.00295 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 204 27451.113 0.005 0.0169 0.281 0.619 0.119 0.158 0.499 0.645 0.00111 0.00144 ! Validation 204 27451.113 0.005 0.025 1.24 1.74 0.147 0.192 1.29 1.36 0.00287 0.00303 Wall time: 27451.112972277682 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 205 100 25.1 0.954 6.03 0.876 1.19 2.22 2.99 0.00494 0.00668 205 172 20 0.826 3.47 0.816 1.11 2.02 2.27 0.00452 0.00507 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 205 100 17.6 0.868 0.257 0.834 1.13 0.594 0.618 0.00133 0.00138 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 205 27584.396 0.005 0.791 33.9 49.8 0.764 1.08 3.71 7.1 0.00828 0.0159 ! Validation 205 27584.396 0.005 0.835 2.48 19.2 0.821 1.11 1.57 1.92 0.00352 0.00428 Wall time: 27584.396039996762 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 206 100 7.34 0.317 0.989 0.512 0.686 0.956 1.21 0.00213 0.0027 206 172 8.86 0.233 4.2 0.442 0.588 2.23 2.5 0.00497 0.00557 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 206 100 5.76 0.24 0.968 0.447 0.596 1.16 1.2 0.00259 0.00268 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 206 27717.692 0.005 0.442 2.43 11.3 0.588 0.81 1.51 1.9 0.00337 0.00423 ! Validation 206 27717.692 0.005 0.233 2.86 7.51 0.442 0.588 1.77 2.06 0.00396 0.0046 Wall time: 27717.692453250755 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 207 100 4.73 0.152 1.7 0.361 0.474 1.38 1.59 0.00309 0.00354 207 172 3.59 0.107 1.45 0.302 0.398 1.35 1.47 0.003 0.00328 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 207 100 2.24 0.11 0.0371 0.306 0.404 0.205 0.235 0.000457 0.000524 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 207 27851.211 0.005 0.163 1.52 4.77 0.37 0.491 1.21 1.5 0.0027 0.00335 ! Validation 207 27851.211 0.005 0.111 0.577 2.8 0.31 0.406 0.753 0.925 0.00168 0.00207 Wall time: 27851.210922987666 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 208 100 2.94 0.0869 1.2 0.274 0.359 1.14 1.33 0.00254 0.00298 208 172 1.72 0.0655 0.413 0.238 0.312 0.642 0.783 0.00143 0.00175 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 208 100 1.35 0.0645 0.0547 0.236 0.309 0.241 0.285 0.000539 0.000636 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 208 27985.569 0.005 0.0882 1.11 2.88 0.275 0.362 1.03 1.29 0.00231 0.00287 ! 
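In the summary rows above, the optimization briefly destabilizes around epoch 205: the validation loss jumps from about 1.7 (epoch 204) to about 19 before relaxing back over the following epochs. A minimal post-processing sketch for locating such jumps is given below. It assumes the console output was saved to a file (training.log is only an illustrative name) and that each per-epoch summary is a record of the form "! Validation <epoch> <wal> <LR> <loss_f> <loss_e> <loss> <f_mae> <f_rmse> <e_mae> <e_rmse> <e/N_mae> <e/N_rmse>", matching the column header printed in the log; the 5x threshold is likewise an arbitrary choice for illustration.

import re

COLS = ["epoch", "wal", "lr", "loss_f", "loss_e", "loss",
        "f_mae", "f_rmse", "e_mae", "e_rmse", "e_per_N_mae", "e_per_N_rmse"]
NUM = r"[-+]?[\d.]+(?:[eE][-+]?\d+)?"

def parse_summaries(text, which="Validation"):
    # One dict per "! <which> ..." per-epoch summary record, in printed order.
    pattern = r"!\s+%s\s+((?:%s\s+){11}%s)" % (which, NUM, NUM)
    return [dict(zip(COLS, map(float, m.group(1).split())))
            for m in re.finditer(pattern, text)]

if __name__ == "__main__":
    with open("training.log") as fh:            # assumed log file name
        val = parse_summaries(fh.read(), "Validation")
    for prev, cur in zip(val, val[1:]):
        if cur["loss"] > 5 * prev["loss"]:      # crude spike criterion
            print("loss spike at epoch %d: validation loss %.3g -> %.3g"
                  % (cur["epoch"], prev["loss"], cur["loss"]))

Applied to the stretch of the log shown in this section, that criterion flags epochs 205 and 254, the two points where the loss curves visibly reset before recovering.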
Validation 208 27985.569 0.005 0.0694 0.422 1.81 0.246 0.321 0.638 0.792 0.00142 0.00177 Wall time: 27985.56983695086 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 209 100 1.46 0.0578 0.306 0.223 0.293 0.548 0.674 0.00122 0.00151 209 172 4.69 0.0542 3.61 0.216 0.284 2.15 2.31 0.00479 0.00516 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 209 100 1.14 0.0531 0.0815 0.212 0.281 0.324 0.348 0.000723 0.000777 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 209 28118.852 0.005 0.0604 1.12 2.33 0.228 0.299 1.05 1.29 0.00235 0.00288 ! Validation 209 28118.852 0.005 0.057 0.548 1.69 0.221 0.291 0.758 0.902 0.00169 0.00201 Wall time: 28118.852479091845 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 210 100 1.27 0.0462 0.343 0.199 0.262 0.552 0.714 0.00123 0.00159 210 172 1.16 0.0423 0.315 0.19 0.251 0.577 0.684 0.00129 0.00153 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 210 100 2.23 0.0384 1.46 0.181 0.239 1.47 1.47 0.00328 0.00329 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 210 28252.151 0.005 0.0484 0.584 1.55 0.203 0.268 0.756 0.931 0.00169 0.00208 ! Validation 210 28252.151 0.005 0.0432 0.817 1.68 0.193 0.253 0.973 1.1 0.00217 0.00246 Wall time: 28252.15133256279 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 211 100 1.78 0.0396 0.99 0.182 0.242 1.09 1.21 0.00244 0.00271 211 172 1.48 0.038 0.721 0.179 0.237 0.943 1.03 0.00211 0.00231 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 211 100 0.951 0.0343 0.266 0.17 0.226 0.62 0.629 0.00138 0.0014 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 211 28387.000 0.005 0.039 0.545 1.33 0.182 0.241 0.723 0.9 0.00161 0.00201 ! Validation 211 28387.000 0.005 0.0388 0.287 1.06 0.181 0.24 0.521 0.653 0.00116 0.00146 Wall time: 28387.00052910298 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 212 100 1.51 0.0339 0.827 0.168 0.224 1.02 1.11 0.00228 0.00247 212 172 2.08 0.034 1.4 0.169 0.225 1.38 1.44 0.00307 0.00322 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 212 100 1.38 0.0317 0.75 0.163 0.217 1.05 1.06 0.00234 0.00236 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 212 28520.260 0.005 0.0351 0.591 1.29 0.171 0.228 0.763 0.936 0.0017 0.00209 ! Validation 212 28520.260 0.005 0.036 2.12 2.84 0.174 0.231 1.61 1.77 0.00359 0.00396 Wall time: 28520.260245061945 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 213 100 1.45 0.0331 0.791 0.166 0.222 1.01 1.08 0.00226 0.00242 213 172 0.771 0.0319 0.132 0.162 0.218 0.359 0.443 0.000801 0.000988 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 213 100 0.743 0.0285 0.173 0.155 0.206 0.495 0.507 0.0011 0.00113 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 213 28653.532 0.005 0.0334 0.59 1.26 0.166 0.223 0.739 0.936 0.00165 0.00209 ! 
Validation 213 28653.532 0.005 0.0323 0.198 0.843 0.164 0.219 0.449 0.542 0.001 0.00121 Wall time: 28653.53268617997 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 214 100 0.689 0.0294 0.101 0.155 0.209 0.297 0.388 0.000662 0.000866 214 172 1.25 0.0294 0.661 0.155 0.209 0.908 0.991 0.00203 0.00221 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 214 100 0.797 0.0263 0.271 0.149 0.197 0.625 0.635 0.0014 0.00142 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 214 28786.805 0.005 0.0297 0.335 0.93 0.156 0.21 0.563 0.705 0.00126 0.00157 ! Validation 214 28786.805 0.005 0.03 0.753 1.35 0.158 0.211 0.939 1.06 0.0021 0.00236 Wall time: 28786.805450684857 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 215 100 0.66 0.0277 0.106 0.152 0.203 0.338 0.397 0.000756 0.000887 215 172 0.988 0.0281 0.426 0.152 0.204 0.648 0.795 0.00145 0.00178 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 215 100 0.906 0.0249 0.407 0.144 0.192 0.767 0.778 0.00171 0.00174 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 215 28920.078 0.005 0.0288 0.517 1.09 0.154 0.207 0.703 0.876 0.00157 0.00196 ! Validation 215 28920.078 0.005 0.0287 0.708 1.28 0.154 0.206 0.929 1.02 0.00207 0.00229 Wall time: 28920.07824051194 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 216 100 0.703 0.0276 0.152 0.15 0.202 0.393 0.475 0.000878 0.00106 216 172 1.33 0.0251 0.834 0.144 0.193 1.05 1.11 0.00234 0.00248 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 216 100 0.763 0.0239 0.285 0.142 0.188 0.641 0.651 0.00143 0.00145 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 216 29053.859 0.005 0.0271 0.419 0.962 0.149 0.201 0.64 0.789 0.00143 0.00176 ! Validation 216 29053.859 0.005 0.0275 0.284 0.834 0.151 0.202 0.534 0.649 0.00119 0.00145 Wall time: 29053.859702735674 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 217 100 0.737 0.0239 0.258 0.14 0.188 0.517 0.619 0.00115 0.00138 217 172 0.617 0.0243 0.131 0.141 0.19 0.369 0.442 0.000824 0.000986 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 217 100 0.701 0.0223 0.255 0.137 0.182 0.601 0.615 0.00134 0.00137 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 217 29187.136 0.005 0.0256 0.3 0.813 0.145 0.195 0.529 0.668 0.00118 0.00149 ! Validation 217 29187.136 0.005 0.0258 0.229 0.745 0.147 0.196 0.488 0.583 0.00109 0.0013 Wall time: 29187.136838804 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 218 100 0.708 0.0236 0.235 0.14 0.187 0.499 0.591 0.00111 0.00132 218 172 0.824 0.0244 0.336 0.143 0.19 0.607 0.706 0.00135 0.00158 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 218 100 0.999 0.0236 0.528 0.143 0.187 0.877 0.885 0.00196 0.00198 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 218 29320.410 0.005 0.0245 0.392 0.881 0.142 0.191 0.615 0.763 0.00137 0.0017 ! 
Validation 218 29320.410 0.005 0.0267 0.517 1.05 0.15 0.199 0.753 0.876 0.00168 0.00195 Wall time: 29320.410461975727 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 219 100 1.41 0.0244 0.919 0.142 0.19 1.11 1.17 0.00247 0.00261 219 172 0.573 0.0217 0.139 0.134 0.179 0.347 0.455 0.000774 0.00102 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 219 100 0.445 0.0209 0.0267 0.133 0.176 0.167 0.199 0.000372 0.000445 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 219 29453.688 0.005 0.0242 0.385 0.869 0.141 0.19 0.613 0.757 0.00137 0.00169 ! Validation 219 29453.688 0.005 0.0243 0.307 0.792 0.142 0.19 0.549 0.675 0.00122 0.00151 Wall time: 29453.688106494956 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 220 100 1.2 0.0238 0.725 0.139 0.188 0.977 1.04 0.00218 0.00232 220 172 1.31 0.0245 0.817 0.142 0.191 1.01 1.1 0.00225 0.00246 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 220 100 0.459 0.0223 0.0128 0.137 0.182 0.108 0.138 0.00024 0.000307 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 220 29589.836 0.005 0.023 0.405 0.865 0.137 0.185 0.619 0.775 0.00138 0.00173 ! Validation 220 29589.836 0.005 0.025 0.172 0.672 0.144 0.193 0.405 0.505 0.000904 0.00113 Wall time: 29589.836199654732 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 221 100 0.544 0.0226 0.0921 0.135 0.183 0.31 0.37 0.000692 0.000825 221 172 0.541 0.021 0.121 0.131 0.177 0.354 0.424 0.00079 0.000946 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 221 100 0.402 0.0192 0.0183 0.127 0.169 0.142 0.165 0.000317 0.000368 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 221 29723.068 0.005 0.0224 0.238 0.687 0.135 0.182 0.472 0.595 0.00105 0.00133 ! Validation 221 29723.068 0.005 0.0224 0.105 0.552 0.136 0.182 0.321 0.394 0.000717 0.00088 Wall time: 29723.068534142803 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 222 100 0.667 0.022 0.227 0.134 0.181 0.522 0.58 0.00116 0.00129 222 172 0.546 0.0211 0.125 0.131 0.177 0.368 0.431 0.00082 0.000961 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 222 100 1.02 0.0186 0.645 0.125 0.166 0.973 0.979 0.00217 0.00218 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 222 29856.463 0.005 0.0216 0.306 0.738 0.133 0.179 0.53 0.674 0.00118 0.00151 ! Validation 222 29856.463 0.005 0.0216 0.696 1.13 0.134 0.179 0.946 1.02 0.00211 0.00227 Wall time: 29856.463135891594 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 223 100 0.536 0.0203 0.13 0.13 0.174 0.348 0.439 0.000777 0.000981 223 172 0.828 0.0211 0.407 0.132 0.177 0.665 0.778 0.00149 0.00174 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 223 100 0.41 0.0198 0.0138 0.129 0.172 0.124 0.143 0.000276 0.00032 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 223 29989.715 0.005 0.0214 0.426 0.854 0.132 0.178 0.654 0.796 0.00146 0.00178 ! 
Validation 223 29989.715 0.005 0.0225 0.12 0.57 0.137 0.183 0.342 0.421 0.000763 0.000941 Wall time: 29989.715496018995 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 224 100 1.08 0.0211 0.66 0.131 0.177 0.929 0.99 0.00207 0.00221 224 172 1.2 0.0218 0.761 0.134 0.18 1.01 1.06 0.00226 0.00237 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 224 100 1.07 0.0199 0.677 0.13 0.172 0.999 1 0.00223 0.00224 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 224 30122.934 0.005 0.0206 0.286 0.697 0.13 0.175 0.506 0.651 0.00113 0.00145 ! Validation 224 30122.934 0.005 0.0224 0.771 1.22 0.137 0.182 0.986 1.07 0.0022 0.00239 Wall time: 30122.934115709737 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 225 100 0.467 0.0202 0.0626 0.129 0.173 0.245 0.305 0.000548 0.000681 225 172 0.876 0.0196 0.483 0.127 0.171 0.758 0.847 0.00169 0.00189 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 225 100 0.408 0.0181 0.0471 0.123 0.164 0.254 0.265 0.000568 0.00059 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 225 30257.867 0.005 0.0207 0.366 0.779 0.13 0.175 0.598 0.737 0.00134 0.00165 ! Validation 225 30257.867 0.005 0.0209 0.236 0.655 0.132 0.176 0.484 0.592 0.00108 0.00132 Wall time: 30257.86754383566 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 226 100 1.83 0.0193 1.44 0.126 0.169 1.42 1.46 0.00316 0.00326 226 172 0.524 0.0198 0.128 0.128 0.171 0.33 0.436 0.000737 0.000972 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 226 100 0.977 0.0185 0.607 0.124 0.166 0.946 0.949 0.00211 0.00212 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 226 30391.098 0.005 0.0204 0.418 0.826 0.129 0.174 0.639 0.787 0.00143 0.00176 ! Validation 226 30391.098 0.005 0.0212 0.891 1.32 0.133 0.177 1.07 1.15 0.00239 0.00257 Wall time: 30391.09811366489 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 227 100 0.576 0.0184 0.208 0.123 0.165 0.495 0.556 0.0011 0.00124 227 172 0.916 0.0203 0.511 0.129 0.173 0.78 0.871 0.00174 0.00194 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 227 100 0.364 0.0171 0.0216 0.12 0.159 0.156 0.179 0.000349 0.0004 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 227 30524.334 0.005 0.0196 0.244 0.636 0.127 0.17 0.479 0.602 0.00107 0.00134 ! Validation 227 30524.334 0.005 0.0198 0.134 0.529 0.128 0.171 0.362 0.446 0.000808 0.000996 Wall time: 30524.334091064986 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 228 100 0.839 0.0192 0.454 0.126 0.169 0.741 0.821 0.00165 0.00183 228 172 0.938 0.0185 0.569 0.124 0.166 0.823 0.919 0.00184 0.00205 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 228 100 0.721 0.0164 0.394 0.117 0.156 0.759 0.765 0.0017 0.00171 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 228 30657.559 0.005 0.0187 0.233 0.606 0.124 0.167 0.471 0.587 0.00105 0.00131 ! 
Validation 228 30657.559 0.005 0.019 0.477 0.857 0.126 0.168 0.768 0.842 0.00171 0.00188 Wall time: 30657.55980411684 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 229 100 0.516 0.0198 0.12 0.127 0.172 0.359 0.422 0.000801 0.000941 229 172 0.416 0.0169 0.0783 0.118 0.158 0.272 0.341 0.000607 0.000761 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 229 100 0.413 0.0159 0.0947 0.116 0.154 0.369 0.375 0.000823 0.000837 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 229 30795.187 0.005 0.0184 0.273 0.641 0.123 0.165 0.512 0.636 0.00114 0.00142 ! Validation 229 30795.187 0.005 0.0187 0.192 0.566 0.125 0.167 0.452 0.534 0.00101 0.00119 Wall time: 30795.18768271897 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 230 100 0.782 0.019 0.402 0.126 0.168 0.696 0.773 0.00155 0.00172 230 172 0.474 0.017 0.134 0.118 0.159 0.327 0.446 0.000731 0.000995 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 230 100 0.433 0.0159 0.115 0.116 0.154 0.406 0.414 0.000907 0.000924 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 230 30928.386 0.005 0.0186 0.312 0.683 0.124 0.166 0.528 0.68 0.00118 0.00152 ! Validation 230 30928.386 0.005 0.0184 0.156 0.524 0.124 0.165 0.396 0.482 0.000884 0.00108 Wall time: 30928.38621262787 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 231 100 0.802 0.018 0.442 0.122 0.164 0.711 0.81 0.00159 0.00181 231 172 0.542 0.0171 0.2 0.119 0.159 0.434 0.545 0.000968 0.00122 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 231 100 0.315 0.0155 0.00452 0.114 0.152 0.0682 0.0819 0.000152 0.000183 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 231 31061.605 0.005 0.0178 0.256 0.611 0.121 0.162 0.496 0.617 0.00111 0.00138 ! Validation 231 31061.605 0.005 0.018 0.0793 0.438 0.122 0.163 0.276 0.343 0.000615 0.000766 Wall time: 31061.605204915628 ! Best model 231 0.438 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 232 100 2.64 0.0204 2.24 0.13 0.174 1.77 1.82 0.00395 0.00407 232 172 0.473 0.0181 0.111 0.122 0.164 0.345 0.406 0.000771 0.000907 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 232 100 0.385 0.0157 0.0714 0.114 0.153 0.318 0.325 0.000711 0.000726 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 232 31194.820 0.005 0.0182 0.349 0.713 0.122 0.164 0.557 0.72 0.00124 0.00161 ! Validation 232 31194.820 0.005 0.0181 0.229 0.59 0.123 0.164 0.494 0.583 0.0011 0.0013 Wall time: 31194.82029498974 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 233 100 0.453 0.0184 0.0845 0.123 0.165 0.286 0.354 0.000638 0.000791 233 172 0.581 0.0167 0.246 0.117 0.158 0.499 0.604 0.00111 0.00135 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 233 100 0.447 0.0152 0.143 0.113 0.15 0.457 0.46 0.00102 0.00103 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 233 31328.033 0.005 0.0175 0.265 0.616 0.12 0.161 0.502 0.628 0.00112 0.0014 ! 
Validation 233 31328.033 0.005 0.0177 0.385 0.738 0.121 0.162 0.667 0.756 0.00149 0.00169 Wall time: 31328.03364193486 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 234 100 0.382 0.0164 0.0541 0.117 0.156 0.224 0.283 0.000499 0.000632 234 172 0.684 0.0191 0.302 0.125 0.168 0.582 0.669 0.0013 0.00149 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 234 100 0.489 0.0171 0.147 0.12 0.159 0.465 0.468 0.00104 0.00104 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 234 31461.513 0.005 0.0171 0.28 0.623 0.119 0.159 0.5 0.645 0.00112 0.00144 ! Validation 234 31461.513 0.005 0.0193 0.209 0.596 0.127 0.169 0.467 0.557 0.00104 0.00124 Wall time: 31461.513722527772 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 235 100 0.459 0.0169 0.122 0.119 0.158 0.357 0.426 0.000797 0.000951 235 172 0.435 0.0162 0.111 0.115 0.155 0.349 0.405 0.000779 0.000905 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 235 100 0.448 0.0144 0.16 0.11 0.146 0.485 0.488 0.00108 0.00109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 235 31599.304 0.005 0.0166 0.194 0.527 0.117 0.157 0.422 0.537 0.000942 0.0012 ! Validation 235 31599.304 0.005 0.0168 0.284 0.619 0.118 0.158 0.575 0.649 0.00128 0.00145 Wall time: 31599.304012265988 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 236 100 0.436 0.017 0.0958 0.119 0.159 0.337 0.377 0.000751 0.000842 236 172 0.58 0.0165 0.249 0.117 0.157 0.503 0.608 0.00112 0.00136 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 236 100 0.593 0.016 0.272 0.117 0.154 0.633 0.636 0.00141 0.00142 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 236 31732.500 0.005 0.0164 0.282 0.609 0.116 0.156 0.516 0.647 0.00115 0.00144 ! Validation 236 31732.500 0.005 0.0179 0.266 0.625 0.123 0.163 0.493 0.628 0.0011 0.0014 Wall time: 31732.500768492 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 237 100 0.392 0.0157 0.0782 0.113 0.153 0.265 0.341 0.000591 0.00076 237 172 0.6 0.016 0.279 0.115 0.154 0.553 0.644 0.00123 0.00144 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 237 100 0.289 0.0143 0.00226 0.11 0.146 0.0473 0.0579 0.000105 0.000129 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 237 31865.711 0.005 0.0162 0.232 0.556 0.116 0.155 0.452 0.587 0.00101 0.00131 ! Validation 237 31865.711 0.005 0.0164 0.123 0.451 0.117 0.156 0.334 0.427 0.000745 0.000954 Wall time: 31865.711757226847 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 238 100 0.552 0.0171 0.211 0.12 0.159 0.445 0.56 0.000993 0.00125 238 172 0.562 0.0144 0.275 0.109 0.146 0.537 0.639 0.0012 0.00143 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 238 100 0.352 0.0134 0.0849 0.106 0.141 0.351 0.355 0.000784 0.000792 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 238 31998.909 0.005 0.0155 0.2 0.511 0.113 0.152 0.434 0.545 0.000969 0.00122 ! 
Validation 238 31998.909 0.005 0.0158 0.203 0.518 0.115 0.153 0.477 0.549 0.00106 0.00122 Wall time: 31998.908877270762 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 239 100 0.58 0.0154 0.272 0.113 0.151 0.575 0.636 0.00128 0.00142 239 172 0.386 0.0146 0.0932 0.11 0.147 0.283 0.372 0.000631 0.00083 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 239 100 0.555 0.0141 0.274 0.108 0.145 0.635 0.637 0.00142 0.00142 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 239 32132.118 0.005 0.0155 0.255 0.565 0.113 0.152 0.495 0.616 0.0011 0.00137 ! Validation 239 32132.118 0.005 0.016 0.386 0.707 0.116 0.154 0.685 0.757 0.00153 0.00169 Wall time: 32132.118193661794 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 240 100 0.438 0.0155 0.127 0.114 0.152 0.35 0.434 0.000781 0.000969 240 172 0.414 0.0152 0.109 0.113 0.15 0.322 0.403 0.000719 0.000899 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 240 100 0.312 0.0136 0.0393 0.107 0.142 0.235 0.242 0.000525 0.000539 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 240 32265.377 0.005 0.0153 0.226 0.531 0.113 0.15 0.463 0.579 0.00103 0.00129 ! Validation 240 32265.377 0.005 0.0159 0.136 0.454 0.116 0.153 0.379 0.45 0.000845 0.001 Wall time: 32265.37710963795 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 241 100 0.421 0.0152 0.116 0.112 0.15 0.366 0.416 0.000818 0.000928 241 172 0.531 0.015 0.231 0.111 0.149 0.521 0.586 0.00116 0.00131 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 241 100 0.305 0.0134 0.038 0.106 0.141 0.235 0.238 0.000525 0.00053 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 241 32401.385 0.005 0.0152 0.252 0.557 0.112 0.15 0.496 0.612 0.00111 0.00137 ! Validation 241 32401.385 0.005 0.0157 0.134 0.447 0.115 0.153 0.374 0.445 0.000834 0.000994 Wall time: 32401.38560707774 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 242 100 0.986 0.0146 0.693 0.11 0.147 0.984 1.01 0.0022 0.00226 242 172 0.7 0.0133 0.434 0.106 0.141 0.709 0.802 0.00158 0.00179 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 242 100 0.541 0.0127 0.288 0.104 0.137 0.65 0.654 0.00145 0.00146 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 242 32534.590 0.005 0.0147 0.19 0.484 0.111 0.148 0.417 0.531 0.000931 0.00119 ! Validation 242 32534.590 0.005 0.0148 0.211 0.508 0.112 0.148 0.454 0.56 0.00101 0.00125 Wall time: 32534.590176722035 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 243 100 0.419 0.0137 0.145 0.107 0.142 0.407 0.465 0.000908 0.00104 243 172 0.388 0.0131 0.126 0.105 0.139 0.377 0.432 0.000841 0.000965 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 243 100 0.453 0.0118 0.216 0.1 0.132 0.564 0.567 0.00126 0.00127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 243 32668.805 0.005 0.0142 0.187 0.47 0.109 0.145 0.422 0.526 0.000941 0.00117 ! 
Validation 243 32668.805 0.005 0.0141 0.19 0.472 0.109 0.145 0.448 0.531 0.001 0.00118 Wall time: 32668.804896148853 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 244 100 0.465 0.0141 0.183 0.109 0.145 0.433 0.521 0.000968 0.00116 244 172 0.496 0.015 0.195 0.112 0.149 0.487 0.538 0.00109 0.0012 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 244 100 0.467 0.0116 0.236 0.0991 0.131 0.589 0.591 0.00131 0.00132 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 244 32802.009 0.005 0.0139 0.2 0.477 0.108 0.144 0.426 0.544 0.000951 0.00121 ! Validation 244 32802.009 0.005 0.0139 0.294 0.571 0.108 0.144 0.576 0.66 0.00129 0.00147 Wall time: 32802.009064648766 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 245 100 0.382 0.0137 0.108 0.107 0.143 0.336 0.401 0.000751 0.000894 245 172 0.48 0.015 0.181 0.112 0.149 0.451 0.518 0.00101 0.00116 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 245 100 0.361 0.0125 0.111 0.104 0.136 0.4 0.407 0.000894 0.000907 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 245 32935.180 0.005 0.0137 0.189 0.464 0.107 0.143 0.427 0.53 0.000952 0.00118 ! Validation 245 32935.180 0.005 0.0144 0.287 0.574 0.111 0.146 0.569 0.652 0.00127 0.00146 Wall time: 32935.17992140679 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 246 100 0.87 0.0141 0.587 0.109 0.145 0.879 0.934 0.00196 0.00208 246 172 0.439 0.0154 0.132 0.114 0.151 0.367 0.443 0.000819 0.000988 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 246 100 0.54 0.0143 0.253 0.11 0.146 0.609 0.612 0.00136 0.00137 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 246 33068.373 0.005 0.0142 0.264 0.547 0.109 0.145 0.498 0.626 0.00111 0.0014 ! Validation 246 33068.373 0.005 0.0161 0.325 0.646 0.117 0.155 0.61 0.694 0.00136 0.00155 Wall time: 33068.37354464503 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 247 100 0.441 0.0141 0.159 0.109 0.145 0.382 0.486 0.000852 0.00109 247 172 0.443 0.0124 0.195 0.102 0.136 0.503 0.538 0.00112 0.0012 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 247 100 0.475 0.0109 0.257 0.0964 0.127 0.616 0.618 0.00137 0.00138 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 247 33208.055 0.005 0.0137 0.171 0.445 0.107 0.143 0.402 0.503 0.000897 0.00112 ! Validation 247 33208.055 0.005 0.0131 0.202 0.464 0.105 0.14 0.472 0.547 0.00105 0.00122 Wall time: 33208.05584722059 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 248 100 0.352 0.0122 0.108 0.101 0.135 0.331 0.4 0.00074 0.000893 248 172 0.31 0.0131 0.0487 0.104 0.139 0.231 0.269 0.000516 0.0006 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 248 100 0.288 0.0109 0.071 0.0957 0.127 0.323 0.325 0.000721 0.000725 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 248 33342.214 0.005 0.0133 0.192 0.458 0.105 0.141 0.421 0.534 0.00094 0.00119 ! Validation 248 33342.214 0.005 0.0131 0.174 0.436 0.105 0.14 0.436 0.508 0.000974 0.00113 Wall time: 33342.21451771073 ! 
Best model 248 0.436 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 249 100 0.398 0.0122 0.154 0.101 0.135 0.418 0.479 0.000932 0.00107 249 172 0.499 0.0119 0.261 0.0992 0.133 0.55 0.623 0.00123 0.00139 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 249 100 0.306 0.0111 0.0853 0.0965 0.128 0.354 0.356 0.000789 0.000794 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 249 33475.394 0.005 0.0127 0.181 0.435 0.103 0.137 0.405 0.518 0.000904 0.00116 ! Validation 249 33475.394 0.005 0.0131 0.162 0.423 0.105 0.139 0.427 0.491 0.000954 0.0011 Wall time: 33475.39416581066 ! Best model 249 0.423 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 250 100 0.268 0.0114 0.039 0.098 0.13 0.185 0.241 0.000412 0.000537 250 172 0.533 0.0176 0.181 0.124 0.161 0.46 0.519 0.00103 0.00116 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 250 100 0.872 0.0168 0.536 0.122 0.158 0.887 0.892 0.00198 0.00199 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 250 33608.602 0.005 0.0127 0.185 0.439 0.104 0.138 0.424 0.524 0.000947 0.00117 ! Validation 250 33608.602 0.005 0.0184 0.515 0.883 0.128 0.165 0.72 0.874 0.00161 0.00195 Wall time: 33608.60218663467 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 251 100 0.413 0.0117 0.179 0.0997 0.132 0.461 0.516 0.00103 0.00115 251 172 0.327 0.0111 0.105 0.0971 0.129 0.348 0.394 0.000776 0.000879 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 251 100 0.293 0.00984 0.096 0.0918 0.121 0.373 0.378 0.000833 0.000843 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 251 33741.778 0.005 0.0134 0.213 0.481 0.106 0.141 0.447 0.562 0.000998 0.00126 ! Validation 251 33741.778 0.005 0.0121 0.0829 0.325 0.101 0.134 0.278 0.351 0.000621 0.000783 Wall time: 33741.77858536085 ! Best model 251 0.325 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 252 100 0.439 0.0126 0.188 0.103 0.137 0.463 0.528 0.00103 0.00118 252 172 0.388 0.0121 0.145 0.101 0.134 0.411 0.465 0.000918 0.00104 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 252 100 0.379 0.0102 0.175 0.0937 0.123 0.507 0.509 0.00113 0.00114 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 252 33875.740 0.005 0.013 0.207 0.467 0.104 0.139 0.447 0.555 0.000998 0.00124 ! Validation 252 33875.740 0.005 0.0126 0.183 0.436 0.104 0.137 0.449 0.521 0.001 0.00116 Wall time: 33875.740460413974 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 253 100 0.293 0.0111 0.0718 0.0968 0.128 0.286 0.326 0.000639 0.000729 253 172 0.278 0.0121 0.0348 0.101 0.134 0.188 0.227 0.000419 0.000507 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 253 100 0.314 0.01 0.114 0.0927 0.122 0.406 0.411 0.000907 0.000917 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 253 34008.930 0.005 0.0122 0.192 0.436 0.101 0.135 0.432 0.533 0.000964 0.00119 ! 
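The "! Best model <epoch> <loss>" records interleaved with the tables (e.g. 248 -> 0.436 and 249 -> 0.423 just above) appear to mark the checkpoints at which the validation loss improves on its previous best. If the console output has been saved to a file, a short sketch like the one below can list those records and report the lowest one; the file name training.log is again only an assumed placeholder.

import re

def best_models(text):
    # (epoch, validation loss) for every "! Best model <epoch> <loss>" record.
    return [(int(e), float(l))
            for e, l in re.findall(r"!\s+Best model\s+(\d+)\s+([\d.]+)", text)]

if __name__ == "__main__":
    with open("training.log") as fh:            # assumed log file name
        saves = best_models(fh.read())
    for epoch, loss in saves:
        print("epoch %4d  new best validation loss %.3f" % (epoch, loss))
    if saves:
        best = min(saves, key=lambda s: s[1])
        print("lowest so far: epoch %d (%.3f)" % best)

In the remainder of the section shown here, the recorded best keeps improving, reaching 0.314 at epoch 274.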
Validation 253 34008.930 0.005 0.0119 0.15 0.388 0.101 0.133 0.388 0.471 0.000866 0.00105 Wall time: 34008.92997713899 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 254 100 24.8 0.947 5.81 0.877 1.19 2.36 2.94 0.00526 0.00656 254 172 26.7 0.42 18.3 0.589 0.79 5 5.21 0.0112 0.0116 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 254 100 10.2 0.467 0.839 0.612 0.833 1.06 1.12 0.00237 0.00249 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 254 34142.121 0.005 0.435 15.6 24.3 0.485 0.803 2.43 4.82 0.00543 0.0108 ! Validation 254 34142.121 0.005 0.448 2.24 11.2 0.608 0.815 1.54 1.82 0.00344 0.00407 Wall time: 34142.12142103072 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 255 100 7.76 0.226 3.23 0.44 0.58 1.73 2.19 0.00386 0.00489 255 172 4.26 0.158 1.11 0.367 0.484 1.05 1.29 0.00235 0.00287 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 255 100 3.46 0.161 0.23 0.368 0.489 0.577 0.585 0.00129 0.00131 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 255 34275.312 0.005 0.28 2.35 7.95 0.479 0.645 1.41 1.87 0.00315 0.00417 ! Validation 255 34275.312 0.005 0.163 1.03 4.3 0.374 0.492 1.03 1.24 0.00229 0.00277 Wall time: 34275.31261262577 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 256 100 5.16 0.118 2.81 0.32 0.418 1.89 2.04 0.00422 0.00456 256 172 2.64 0.0994 0.655 0.293 0.384 0.838 0.986 0.00187 0.0022 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 256 100 2 0.0952 0.101 0.285 0.376 0.356 0.387 0.000796 0.000864 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 256 34408.557 0.005 0.13 1.24 3.84 0.333 0.44 1.06 1.36 0.00236 0.00303 ! Validation 256 34408.557 0.005 0.0993 0.949 2.93 0.293 0.384 0.97 1.19 0.00217 0.00265 Wall time: 34408.5574218547 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 257 100 2.3 0.0774 0.75 0.256 0.339 0.91 1.06 0.00203 0.00236 257 172 2.16 0.0767 0.625 0.255 0.337 0.812 0.963 0.00181 0.00215 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 257 100 1.68 0.0738 0.208 0.248 0.331 0.549 0.556 0.00122 0.00124 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 257 34541.753 0.005 0.0832 0.82 2.48 0.266 0.351 0.896 1.1 0.002 0.00246 ! Validation 257 34541.753 0.005 0.0754 0.314 1.82 0.253 0.334 0.561 0.683 0.00125 0.00152 Wall time: 34541.75293937465 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 258 100 1.57 0.0607 0.356 0.226 0.3 0.567 0.727 0.00126 0.00162 258 172 1.37 0.0592 0.189 0.225 0.297 0.384 0.53 0.000857 0.00118 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 258 100 1.12 0.0558 0.00774 0.217 0.288 0.0768 0.107 0.000172 0.000239 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 258 34675.488 0.005 0.0646 0.773 2.07 0.234 0.31 0.876 1.07 0.00196 0.00239 ! 
Validation 258 34675.488 0.005 0.0576 0.262 1.41 0.222 0.293 0.504 0.623 0.00112 0.00139 Wall time: 34675.48800097266 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 259 100 1.02 0.0429 0.16 0.192 0.252 0.382 0.487 0.000853 0.00109 259 172 0.818 0.0322 0.174 0.167 0.219 0.406 0.508 0.000906 0.00113 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 259 100 0.61 0.0298 0.0134 0.161 0.21 0.126 0.141 0.000281 0.000315 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 259 34810.084 0.005 0.044 0.374 1.25 0.193 0.256 0.591 0.746 0.00132 0.00166 ! Validation 259 34810.084 0.005 0.0335 0.115 0.784 0.17 0.223 0.333 0.412 0.000743 0.000921 Wall time: 34810.084461550694 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 260 100 0.628 0.0257 0.114 0.148 0.195 0.334 0.411 0.000746 0.000917 260 172 0.949 0.0269 0.411 0.15 0.2 0.691 0.781 0.00154 0.00174 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 260 100 1.09 0.0243 0.607 0.144 0.19 0.948 0.949 0.00212 0.00212 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 260 34943.277 0.005 0.0292 0.569 1.15 0.158 0.208 0.724 0.919 0.00162 0.00205 ! Validation 260 34943.277 0.005 0.0277 0.69 1.24 0.153 0.203 0.944 1.01 0.00211 0.00226 Wall time: 34943.277551454026 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 261 100 0.786 0.0241 0.303 0.142 0.189 0.596 0.671 0.00133 0.0015 261 172 0.779 0.0229 0.32 0.138 0.185 0.587 0.69 0.00131 0.00154 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 261 100 0.725 0.0194 0.337 0.128 0.17 0.706 0.707 0.00158 0.00158 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 261 35078.287 0.005 0.0239 0.267 0.746 0.141 0.189 0.514 0.629 0.00115 0.0014 ! Validation 261 35078.287 0.005 0.0227 0.391 0.846 0.138 0.184 0.687 0.762 0.00153 0.0017 Wall time: 35078.28718111059 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 262 100 0.426 0.0193 0.0403 0.127 0.169 0.192 0.245 0.000428 0.000546 262 172 0.433 0.0186 0.0613 0.124 0.166 0.246 0.302 0.000549 0.000674 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 262 100 0.353 0.0169 0.0152 0.12 0.158 0.139 0.15 0.00031 0.000335 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 262 35211.472 0.005 0.0206 0.192 0.604 0.131 0.175 0.431 0.534 0.000961 0.00119 ! Validation 262 35211.472 0.005 0.0199 0.0686 0.467 0.129 0.172 0.254 0.319 0.000566 0.000712 Wall time: 35211.47224521171 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 263 100 0.569 0.0192 0.184 0.127 0.169 0.445 0.523 0.000994 0.00117 263 172 0.663 0.0163 0.338 0.116 0.155 0.65 0.708 0.00145 0.00158 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 263 100 0.315 0.0157 0.000612 0.115 0.153 0.0254 0.0302 5.66e-05 6.73e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 263 35344.655 0.005 0.0187 0.271 0.646 0.125 0.167 0.509 0.634 0.00114 0.00142 ! 
Validation 263 35344.655 0.005 0.0184 0.0632 0.432 0.124 0.165 0.248 0.306 0.000554 0.000684 Wall time: 35344.65544324182 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 264 100 0.65 0.0182 0.286 0.122 0.164 0.583 0.652 0.0013 0.00145 264 172 0.429 0.0162 0.104 0.115 0.155 0.342 0.393 0.000763 0.000878 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 264 100 0.469 0.0145 0.179 0.111 0.147 0.514 0.516 0.00115 0.00115 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 264 35477.835 0.005 0.0171 0.202 0.544 0.119 0.159 0.444 0.548 0.000991 0.00122 ! Validation 264 35477.835 0.005 0.017 0.401 0.741 0.119 0.159 0.71 0.772 0.00159 0.00172 Wall time: 35477.83496540785 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 265 100 0.448 0.0163 0.121 0.116 0.156 0.365 0.423 0.000815 0.000945 265 172 0.359 0.0152 0.0551 0.112 0.15 0.214 0.286 0.000478 0.000638 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 265 100 0.275 0.0137 0.00146 0.107 0.143 0.0329 0.0465 7.34e-05 0.000104 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 265 35611.513 0.005 0.0163 0.246 0.572 0.116 0.155 0.487 0.604 0.00109 0.00135 ! Validation 265 35611.513 0.005 0.0161 0.0876 0.41 0.116 0.155 0.297 0.361 0.000663 0.000805 Wall time: 35611.513766708784 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 266 100 0.664 0.0147 0.37 0.11 0.148 0.683 0.741 0.00152 0.00165 266 172 0.395 0.0151 0.0924 0.111 0.15 0.305 0.37 0.000681 0.000826 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 266 100 0.279 0.0129 0.0207 0.104 0.138 0.172 0.175 0.000383 0.000391 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 266 35744.688 0.005 0.0153 0.209 0.515 0.112 0.151 0.448 0.557 0.001 0.00124 ! Validation 266 35744.688 0.005 0.0153 0.0567 0.362 0.113 0.15 0.22 0.29 0.000492 0.000648 Wall time: 35744.68814200582 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 267 100 0.414 0.0134 0.146 0.105 0.141 0.411 0.466 0.000917 0.00104 267 172 0.527 0.0143 0.241 0.109 0.146 0.518 0.598 0.00116 0.00133 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 267 100 0.254 0.0126 0.00178 0.103 0.137 0.0335 0.0514 7.47e-05 0.000115 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 267 35879.694 0.005 0.0145 0.231 0.521 0.11 0.147 0.467 0.585 0.00104 0.00131 ! Validation 267 35879.694 0.005 0.0149 0.0761 0.373 0.112 0.149 0.268 0.336 0.000599 0.00075 Wall time: 35879.69441067893 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 268 100 0.305 0.0128 0.0496 0.103 0.138 0.226 0.271 0.000506 0.000606 268 172 0.357 0.0142 0.0736 0.107 0.145 0.262 0.331 0.000585 0.000738 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 268 100 0.244 0.0122 0.000846 0.101 0.134 0.0308 0.0354 6.88e-05 7.91e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 268 36014.385 0.005 0.014 0.206 0.485 0.107 0.144 0.446 0.553 0.000996 0.00123 ! 
Validation 268 36014.385 0.005 0.0143 0.068 0.354 0.11 0.146 0.259 0.318 0.000579 0.000709 Wall time: 36014.384880542755 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 269 100 0.581 0.0136 0.308 0.107 0.142 0.629 0.677 0.0014 0.00151 269 172 0.285 0.0126 0.0323 0.103 0.137 0.163 0.219 0.000364 0.000489 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 269 100 0.249 0.0115 0.0183 0.0983 0.131 0.157 0.165 0.000351 0.000367 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 269 36147.618 0.005 0.0133 0.159 0.425 0.105 0.141 0.39 0.485 0.000871 0.00108 ! Validation 269 36147.618 0.005 0.0136 0.0506 0.322 0.107 0.142 0.218 0.274 0.000487 0.000612 Wall time: 36147.618381747976 ! Best model 269 0.322 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 270 100 0.313 0.0121 0.0719 0.1 0.134 0.254 0.327 0.000566 0.000729 270 172 0.295 0.013 0.0354 0.104 0.139 0.206 0.229 0.00046 0.000512 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 270 100 0.248 0.0116 0.0159 0.0984 0.131 0.141 0.153 0.000315 0.000343 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 270 36284.682 0.005 0.013 0.214 0.473 0.104 0.139 0.448 0.564 0.000999 0.00126 ! Validation 270 36284.682 0.005 0.0137 0.0501 0.324 0.107 0.143 0.215 0.273 0.000481 0.000609 Wall time: 36284.682581428904 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 271 100 0.725 0.0126 0.474 0.102 0.137 0.788 0.839 0.00176 0.00187 271 172 0.373 0.0133 0.107 0.105 0.14 0.324 0.398 0.000722 0.000888 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 271 100 0.442 0.0117 0.208 0.0986 0.132 0.554 0.556 0.00124 0.00124 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 271 36418.367 0.005 0.0129 0.234 0.492 0.103 0.138 0.486 0.59 0.00109 0.00132 ! Validation 271 36418.367 0.005 0.0135 0.18 0.449 0.106 0.141 0.446 0.517 0.000996 0.00115 Wall time: 36418.3673218186 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 272 100 0.325 0.013 0.0652 0.104 0.139 0.247 0.311 0.000551 0.000695 272 172 0.295 0.0124 0.0471 0.102 0.136 0.21 0.264 0.00047 0.00059 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 272 100 0.215 0.0105 0.00624 0.0937 0.125 0.0776 0.0962 0.000173 0.000215 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 272 36551.565 0.005 0.0124 0.146 0.394 0.102 0.136 0.368 0.465 0.000821 0.00104 ! Validation 272 36551.565 0.005 0.0125 0.0652 0.315 0.103 0.136 0.257 0.311 0.000574 0.000694 Wall time: 36551.56492235698 ! Best model 272 0.315 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 273 100 0.817 0.0131 0.555 0.106 0.139 0.876 0.907 0.00196 0.00203 273 172 0.737 0.0122 0.493 0.101 0.135 0.75 0.856 0.00167 0.00191 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 273 100 0.225 0.0111 0.00265 0.0964 0.128 0.0516 0.0627 0.000115 0.00014 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 273 36685.989 0.005 0.0121 0.228 0.471 0.101 0.134 0.461 0.581 0.00103 0.0013 ! 
Validation 273 36685.989 0.005 0.0129 0.0765 0.335 0.104 0.139 0.269 0.337 0.0006 0.000752 Wall time: 36685.988875833806 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 274 100 0.297 0.0125 0.0476 0.102 0.136 0.205 0.266 0.000458 0.000593 274 172 0.275 0.0111 0.0523 0.0964 0.129 0.221 0.279 0.000494 0.000622 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 274 100 0.234 0.01 0.0334 0.0918 0.122 0.212 0.223 0.000473 0.000497 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 274 36824.520 0.005 0.012 0.165 0.405 0.1 0.134 0.387 0.494 0.000863 0.0011 ! Validation 274 36824.520 0.005 0.012 0.0742 0.314 0.101 0.133 0.252 0.332 0.000563 0.000741 Wall time: 36824.52009575581 ! Best model 274 0.314 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 275 100 0.256 0.0109 0.0378 0.0954 0.127 0.196 0.237 0.000438 0.000528 275 172 0.288 0.0113 0.0624 0.0967 0.129 0.246 0.304 0.000549 0.000679 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 275 100 0.391 0.0095 0.201 0.0894 0.119 0.542 0.547 0.00121 0.00122 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 275 36970.909 0.005 0.0113 0.137 0.364 0.0972 0.13 0.361 0.451 0.000807 0.00101 ! Validation 275 36970.909 0.005 0.0117 0.128 0.361 0.0994 0.132 0.351 0.436 0.000784 0.000973 Wall time: 36970.908895873 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 276 100 0.525 0.011 0.304 0.0965 0.128 0.629 0.672 0.0014 0.0015 276 172 0.563 0.0105 0.353 0.0934 0.125 0.687 0.724 0.00153 0.00162 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 276 100 0.39 0.00967 0.196 0.0905 0.12 0.539 0.54 0.0012 0.0012 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 276 37104.319 0.005 0.0114 0.212 0.44 0.0975 0.13 0.446 0.561 0.000996 0.00125 ! Validation 276 37104.319 0.005 0.0116 0.291 0.523 0.0991 0.131 0.568 0.658 0.00127 0.00147 Wall time: 37104.3197738328 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 277 100 0.341 0.01 0.141 0.0916 0.122 0.406 0.457 0.000907 0.00102 277 172 1.23 0.0113 1.01 0.0976 0.129 1.18 1.22 0.00262 0.00273 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 277 100 1.72 0.0101 1.52 0.0924 0.122 1.5 1.5 0.00335 0.00335 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 277 37237.593 0.005 0.0109 0.165 0.383 0.0955 0.127 0.39 0.493 0.00087 0.0011 ! Validation 277 37237.593 0.005 0.012 1.29 1.53 0.101 0.133 1.35 1.39 0.00301 0.00309 Wall time: 37237.59387473762 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 278 100 0.326 0.0106 0.114 0.0942 0.125 0.368 0.411 0.000822 0.000918 278 172 1.02 0.0117 0.788 0.0996 0.132 0.933 1.08 0.00208 0.00241 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 278 100 0.561 0.0102 0.357 0.0936 0.123 0.725 0.728 0.00162 0.00162 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 278 37396.327 0.005 0.0111 0.22 0.443 0.0965 0.129 0.451 0.571 0.00101 0.00127 ! 
Validation 278 37396.327 0.005 0.0122 0.547 0.791 0.102 0.135 0.842 0.901 0.00188 0.00201 Wall time: 37396.327615770046 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 279 100 0.241 0.0107 0.0261 0.0947 0.126 0.161 0.197 0.000359 0.00044 279 172 0.351 0.0108 0.135 0.0946 0.127 0.362 0.447 0.000809 0.000998 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 279 100 0.247 0.00888 0.0695 0.0869 0.115 0.314 0.321 0.0007 0.000717 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 279 37529.554 0.005 0.0108 0.162 0.378 0.0951 0.127 0.398 0.491 0.000889 0.0011 ! Validation 279 37529.554 0.005 0.011 0.182 0.402 0.0965 0.128 0.462 0.52 0.00103 0.00116 Wall time: 37529.55444352096 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 280 100 0.355 0.00996 0.156 0.0915 0.122 0.448 0.481 0.000999 0.00107 280 172 0.277 0.0111 0.0538 0.0966 0.129 0.225 0.283 0.000503 0.000631 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 280 100 0.262 0.00878 0.0863 0.0862 0.114 0.355 0.358 0.000793 0.000799 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 280 37675.170 0.005 0.01 0.105 0.306 0.0916 0.122 0.307 0.395 0.000685 0.000882 ! Validation 280 37675.170 0.005 0.0109 0.195 0.413 0.0964 0.127 0.479 0.538 0.00107 0.0012 Wall time: 37675.17060332792 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 281 100 0.26 0.00988 0.0619 0.0915 0.121 0.234 0.303 0.000522 0.000677 281 172 0.606 0.0117 0.371 0.0992 0.132 0.693 0.743 0.00155 0.00166 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 281 100 0.209 0.0104 0.00189 0.0939 0.124 0.0442 0.0529 9.86e-05 0.000118 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 281 37808.547 0.005 0.0101 0.173 0.374 0.0918 0.122 0.413 0.507 0.000922 0.00113 ! Validation 281 37808.547 0.005 0.0119 0.119 0.358 0.101 0.133 0.327 0.42 0.00073 0.000938 Wall time: 37808.54842473194 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 282 100 0.379 0.0108 0.162 0.0949 0.127 0.408 0.491 0.000911 0.0011 282 172 0.311 0.00945 0.122 0.0893 0.118 0.372 0.426 0.00083 0.000952 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 282 100 0.282 0.00834 0.116 0.084 0.111 0.407 0.414 0.000908 0.000925 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 282 37941.818 0.005 0.0101 0.162 0.365 0.0922 0.123 0.392 0.491 0.000875 0.0011 ! Validation 282 37941.818 0.005 0.0104 0.319 0.526 0.0939 0.124 0.611 0.688 0.00136 0.00153 Wall time: 37941.81801726902 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 283 100 0.424 0.0124 0.175 0.101 0.136 0.451 0.51 0.00101 0.00114 283 172 0.301 0.01 0.1 0.0909 0.122 0.332 0.386 0.000742 0.000861 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 283 100 0.185 0.00858 0.0131 0.0855 0.113 0.118 0.139 0.000264 0.000311 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 283 38077.363 0.005 0.0125 0.378 0.627 0.102 0.136 0.545 0.75 0.00122 0.00167 ! 
Validation 283 38077.363 0.005 0.0105 0.124 0.334 0.0947 0.125 0.365 0.428 0.000816 0.000956 Wall time: 38077.36305984296 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 284 100 0.506 0.0101 0.304 0.0923 0.123 0.554 0.671 0.00124 0.0015 284 172 0.697 0.0102 0.494 0.0925 0.123 0.83 0.856 0.00185 0.00191 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 284 100 0.324 0.00827 0.159 0.0838 0.111 0.481 0.485 0.00107 0.00108 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 284 38211.321 0.005 0.00978 0.109 0.305 0.0905 0.12 0.317 0.402 0.000708 0.000898 ! Validation 284 38211.321 0.005 0.0101 0.187 0.39 0.0929 0.123 0.445 0.528 0.000993 0.00118 Wall time: 38211.32200543396 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 285 100 0.235 0.00961 0.0427 0.0909 0.119 0.213 0.252 0.000475 0.000562 285 172 0.266 0.0098 0.0695 0.0904 0.121 0.281 0.321 0.000628 0.000717 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 285 100 0.169 0.00773 0.0146 0.0814 0.107 0.125 0.147 0.000279 0.000328 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 285 38345.778 0.005 0.00972 0.146 0.34 0.0903 0.12 0.374 0.465 0.000835 0.00104 ! Validation 285 38345.778 0.005 0.00981 0.121 0.317 0.0913 0.121 0.363 0.423 0.000811 0.000945 Wall time: 38345.77835972281 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 286 100 0.404 0.00918 0.221 0.087 0.117 0.541 0.573 0.00121 0.00128 286 172 0.276 0.0113 0.0499 0.0974 0.13 0.223 0.272 0.000499 0.000608 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 286 100 1.39 0.0103 1.19 0.0947 0.124 1.33 1.33 0.00296 0.00296 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 286 38479.044 0.005 0.00937 0.156 0.343 0.0887 0.118 0.365 0.481 0.000814 0.00107 ! Validation 286 38479.044 0.005 0.012 1.19 1.43 0.102 0.133 1.3 1.33 0.0029 0.00297 Wall time: 38479.04402952269 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 287 100 0.26 0.00936 0.0729 0.0883 0.118 0.274 0.329 0.000612 0.000734 287 172 0.253 0.00831 0.0872 0.0838 0.111 0.311 0.36 0.000694 0.000803 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 287 100 0.155 0.00738 0.00741 0.0793 0.105 0.0952 0.105 0.000213 0.000234 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 287 38613.940 0.005 0.00976 0.134 0.33 0.0905 0.12 0.344 0.447 0.000768 0.000997 ! Validation 287 38613.940 0.005 0.00925 0.0957 0.281 0.0886 0.117 0.304 0.377 0.000679 0.000841 Wall time: 38613.94035441987 ! Best model 287 0.281 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 288 100 0.229 0.009 0.0492 0.0873 0.116 0.21 0.27 0.00047 0.000603 288 172 0.445 0.00952 0.255 0.0893 0.119 0.56 0.615 0.00125 0.00137 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 288 100 0.436 0.00848 0.267 0.0849 0.112 0.628 0.629 0.0014 0.0014 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 288 38747.205 0.005 0.00888 0.129 0.307 0.0862 0.115 0.35 0.438 0.000782 0.000977 ! 
Validation 288 38747.205 0.005 0.01 0.427 0.628 0.0924 0.122 0.704 0.796 0.00157 0.00178 Wall time: 38747.20571652893 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 289 100 0.209 0.00906 0.0278 0.0866 0.116 0.156 0.203 0.000348 0.000453 289 172 0.191 0.00816 0.028 0.0828 0.11 0.168 0.204 0.000375 0.000455 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 289 100 0.172 0.00707 0.0305 0.0778 0.102 0.206 0.213 0.00046 0.000475 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 289 38883.613 0.005 0.00897 0.129 0.308 0.0868 0.115 0.344 0.437 0.000768 0.000977 ! Validation 289 38883.613 0.005 0.00904 0.0501 0.231 0.0877 0.116 0.211 0.273 0.000472 0.000609 Wall time: 38883.61341581866 ! Best model 289 0.231 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 290 100 0.221 0.00875 0.0464 0.0855 0.114 0.205 0.262 0.000458 0.000586 290 172 0.226 0.00811 0.0636 0.0826 0.11 0.261 0.307 0.000583 0.000686 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 290 100 0.183 0.00712 0.0403 0.0779 0.103 0.23 0.245 0.000513 0.000546 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 290 39016.874 0.005 0.00875 0.131 0.306 0.0857 0.114 0.353 0.441 0.000788 0.000985 ! Validation 290 39016.874 0.005 0.00904 0.0502 0.231 0.0875 0.116 0.223 0.273 0.000498 0.000609 Wall time: 39016.87426005164 ! Best model 290 0.231 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 291 100 0.61 0.00892 0.432 0.0871 0.115 0.76 0.801 0.0017 0.00179 291 172 0.215 0.00899 0.0346 0.0874 0.116 0.168 0.227 0.000376 0.000506 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 291 100 0.159 0.00726 0.0137 0.0787 0.104 0.125 0.142 0.000279 0.000318 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 291 39150.127 0.005 0.00899 0.155 0.334 0.087 0.116 0.374 0.479 0.000834 0.00107 ! Validation 291 39150.127 0.005 0.00904 0.0577 0.239 0.0878 0.116 0.227 0.293 0.000506 0.000653 Wall time: 39150.127468755 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 292 100 0.927 0.0118 0.691 0.0998 0.132 0.845 1.01 0.00189 0.00226 292 172 0.264 0.00909 0.0823 0.0874 0.116 0.289 0.349 0.000646 0.00078 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 292 100 0.142 0.00699 0.0025 0.0771 0.102 0.05 0.0609 0.000112 0.000136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 292 39283.686 0.005 0.00907 0.154 0.336 0.0873 0.116 0.381 0.478 0.00085 0.00107 ! Validation 292 39283.686 0.005 0.00892 0.0507 0.229 0.087 0.115 0.22 0.274 0.000492 0.000613 Wall time: 39283.68673812086 ! Best model 292 0.229 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 293 100 0.27 0.00867 0.0967 0.0851 0.113 0.329 0.379 0.000735 0.000846 293 172 0.269 0.00882 0.0929 0.0861 0.114 0.332 0.371 0.000742 0.000829 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 293 100 0.336 0.00679 0.2 0.076 0.1 0.544 0.545 0.00121 0.00122 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 293 39416.952 0.005 0.00866 0.131 0.305 0.0854 0.113 0.355 0.442 0.000792 0.000986 ! 
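The e/N_* columns are the same energy metrics expressed per atom: in every row, e_mae divided by e/N_mae comes out at roughly 448 (for example 0.704 / 0.00157 ≈ 448 in the epoch-288 validation summary), which presumably is the number of atoms per structure in this dataset. That atom count is inferred from the ratio, not read from the configuration, so treat it as an assumption; the conversion is simply:

# Minimal sketch (assumed atom count): the per-atom energy columns appear to be
# the per-structure metrics divided by ~448 atoms per structure, a value
# inferred from the e_mae / e/N_mae ratio in the rows above.
n_atoms_per_structure = 448
e_mae = 0.704                      # epoch 288 validation summary (units as logged)
e_per_atom_mae = e_mae / n_atoms_per_structure
print(f"{e_per_atom_mae:.5f}")     # ~0.00157, matching the logged e/N_mae
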
Validation 293 39416.952 0.005 0.00854 0.23 0.401 0.0851 0.113 0.506 0.585 0.00113 0.00131 Wall time: 39416.95276313461 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 294 100 0.681 0.00901 0.501 0.087 0.116 0.837 0.863 0.00187 0.00193 294 172 0.528 0.0129 0.27 0.105 0.139 0.562 0.633 0.00125 0.00141 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 294 100 0.224 0.0107 0.01 0.0953 0.126 0.104 0.122 0.000232 0.000272 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 294 39551.040 0.005 0.00866 0.166 0.339 0.0853 0.113 0.381 0.497 0.000851 0.00111 ! Validation 294 39551.040 0.005 0.0121 0.0721 0.314 0.102 0.134 0.269 0.327 0.0006 0.00073 Wall time: 39551.040213108994 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 295 100 0.233 0.00889 0.0555 0.0867 0.115 0.243 0.287 0.000543 0.000641 295 172 0.331 0.00783 0.174 0.0811 0.108 0.451 0.508 0.00101 0.00113 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 295 100 0.261 0.00664 0.128 0.0752 0.0993 0.432 0.435 0.000964 0.000972 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 295 39684.297 0.005 0.00897 0.118 0.297 0.087 0.115 0.336 0.418 0.00075 0.000934 ! Validation 295 39684.297 0.005 0.00834 0.191 0.358 0.0841 0.111 0.463 0.532 0.00103 0.00119 Wall time: 39684.29925306793 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 296 100 0.227 0.00818 0.063 0.0829 0.11 0.244 0.306 0.000545 0.000683 296 172 0.317 0.0136 0.0443 0.107 0.142 0.204 0.257 0.000456 0.000573 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 296 100 0.292 0.0113 0.0661 0.097 0.129 0.28 0.313 0.000625 0.000699 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 296 39818.187 0.005 0.00973 0.159 0.354 0.0893 0.12 0.375 0.487 0.000837 0.00109 ! Validation 296 39818.187 0.005 0.013 0.0566 0.316 0.105 0.139 0.231 0.29 0.000517 0.000647 Wall time: 39818.187533828896 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 297 100 0.231 0.00801 0.0711 0.0824 0.109 0.279 0.325 0.000624 0.000725 297 172 0.211 0.00711 0.0688 0.0767 0.103 0.287 0.32 0.000641 0.000714 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 297 100 0.162 0.00645 0.0334 0.0741 0.0979 0.213 0.223 0.000475 0.000497 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 297 39951.434 0.005 0.00812 0.0733 0.236 0.0825 0.11 0.261 0.33 0.000583 0.000736 ! Validation 297 39951.434 0.005 0.00814 0.0917 0.254 0.0832 0.11 0.317 0.369 0.000708 0.000823 Wall time: 39951.43461699784 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 298 100 0.363 0.0105 0.154 0.094 0.125 0.423 0.478 0.000944 0.00107 298 172 0.235 0.00782 0.0785 0.0805 0.108 0.282 0.341 0.000629 0.000762 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 298 100 0.18 0.00604 0.0595 0.0718 0.0947 0.294 0.297 0.000655 0.000663 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 298 40084.668 0.005 0.00855 0.143 0.314 0.0843 0.113 0.362 0.462 0.000808 0.00103 ! 
Validation 298 40084.668 0.005 0.00781 0.121 0.277 0.0812 0.108 0.369 0.424 0.000824 0.000947 Wall time: 40084.66824756283 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 299 100 0.19 0.00714 0.0467 0.0771 0.103 0.226 0.263 0.000505 0.000588 299 172 0.358 0.0107 0.144 0.0958 0.126 0.423 0.462 0.000944 0.00103 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 299 100 0.444 0.00693 0.305 0.077 0.101 0.67 0.673 0.0015 0.0015 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 299 40217.910 0.005 0.00774 0.1 0.255 0.0802 0.107 0.304 0.386 0.00068 0.000861 ! Validation 299 40217.910 0.005 0.00885 0.231 0.408 0.0869 0.115 0.53 0.585 0.00118 0.00131 Wall time: 40217.91057595704 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 300 100 0.179 0.00724 0.0339 0.0778 0.104 0.181 0.224 0.000404 0.000501 300 172 0.204 0.00786 0.0464 0.0817 0.108 0.226 0.262 0.000503 0.000586 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 300 100 0.145 0.00701 0.00519 0.0773 0.102 0.0726 0.0878 0.000162 0.000196 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 300 40351.269 0.005 0.00821 0.124 0.289 0.0826 0.11 0.331 0.43 0.000739 0.000959 ! Validation 300 40351.269 0.005 0.00867 0.0659 0.239 0.0861 0.113 0.254 0.313 0.000568 0.000698 Wall time: 40351.26971083088 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 301 100 0.232 0.00728 0.0865 0.0777 0.104 0.289 0.358 0.000646 0.0008 301 172 0.157 0.0069 0.019 0.0763 0.101 0.138 0.168 0.000307 0.000375 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 301 100 0.143 0.00603 0.0219 0.0716 0.0946 0.179 0.18 0.000399 0.000403 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 301 40486.884 0.005 0.00772 0.106 0.26 0.0804 0.107 0.314 0.396 0.000701 0.000884 ! Validation 301 40486.884 0.005 0.00766 0.0493 0.202 0.0806 0.107 0.206 0.27 0.00046 0.000604 Wall time: 40486.88491541566 ! Best model 301 0.202 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 302 100 0.322 0.00725 0.177 0.0779 0.104 0.461 0.512 0.00103 0.00114 302 172 0.228 0.00965 0.0346 0.0897 0.12 0.182 0.227 0.000406 0.000506 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 302 100 0.409 0.00752 0.259 0.0791 0.106 0.618 0.62 0.00138 0.00138 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 302 40620.157 0.005 0.00804 0.127 0.288 0.0818 0.109 0.34 0.434 0.000758 0.00097 ! Validation 302 40620.157 0.005 0.00905 0.228 0.409 0.0874 0.116 0.528 0.582 0.00118 0.0013 Wall time: 40620.15717729088 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 303 100 0.164 0.00682 0.0281 0.0751 0.101 0.176 0.204 0.000393 0.000456 303 172 0.187 0.00707 0.0452 0.0771 0.102 0.212 0.259 0.000474 0.000578 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 303 100 0.14 0.00611 0.0181 0.0721 0.0952 0.153 0.164 0.000342 0.000366 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 303 40753.423 0.005 0.0075 0.0971 0.247 0.0793 0.106 0.309 0.38 0.000691 0.000848 ! 
Validation 303 40753.423 0.005 0.00758 0.0997 0.251 0.0804 0.106 0.319 0.385 0.000713 0.000859 Wall time: 40753.42343594693 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 304 100 0.217 0.00683 0.0807 0.0756 0.101 0.29 0.346 0.000648 0.000773 304 172 0.206 0.00839 0.038 0.084 0.112 0.203 0.237 0.000454 0.00053 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 304 100 0.135 0.00674 0.000736 0.076 0.1 0.0279 0.0331 6.24e-05 7.38e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 304 40890.731 0.005 0.00811 0.108 0.27 0.0821 0.11 0.321 0.4 0.000717 0.000892 ! Validation 304 40890.731 0.005 0.00837 0.0466 0.214 0.0845 0.111 0.208 0.263 0.000465 0.000587 Wall time: 40890.73150355369 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 305 100 0.191 0.00719 0.0474 0.0779 0.103 0.21 0.265 0.000469 0.000592 305 172 0.424 0.0068 0.288 0.0747 0.1 0.571 0.654 0.00128 0.00146 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 305 100 0.185 0.00539 0.0773 0.0676 0.0894 0.337 0.339 0.000752 0.000756 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 305 41024.625 0.005 0.00751 0.0918 0.242 0.0793 0.106 0.297 0.369 0.000663 0.000823 ! Validation 305 41024.625 0.005 0.00694 0.172 0.311 0.0764 0.101 0.438 0.506 0.000977 0.00113 Wall time: 41024.62506556697 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 306 100 0.469 0.00704 0.328 0.0765 0.102 0.68 0.698 0.00152 0.00156 306 172 0.24 0.00767 0.087 0.0802 0.107 0.288 0.359 0.000643 0.000802 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 306 100 0.151 0.00588 0.0336 0.071 0.0934 0.222 0.223 0.000495 0.000499 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 306 41174.335 0.005 0.00872 0.154 0.329 0.0852 0.114 0.373 0.479 0.000833 0.00107 ! Validation 306 41174.335 0.005 0.00737 0.0961 0.243 0.0796 0.105 0.294 0.378 0.000657 0.000843 Wall time: 41174.335632120725 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 307 100 0.559 0.00745 0.41 0.0794 0.105 0.75 0.78 0.00167 0.00174 307 172 0.183 0.00677 0.0475 0.0752 0.1 0.213 0.266 0.000474 0.000593 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 307 100 0.124 0.00581 0.00817 0.0702 0.0929 0.0859 0.11 0.000192 0.000246 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 307 41307.538 0.005 0.00901 0.156 0.336 0.0847 0.116 0.355 0.482 0.000793 0.00108 ! Validation 307 41307.538 0.005 0.00734 0.0538 0.201 0.0789 0.104 0.233 0.283 0.000521 0.000631 Wall time: 41307.53842129884 ! Best model 307 0.201 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 308 100 0.156 0.00639 0.0284 0.0729 0.0974 0.164 0.205 0.000366 0.000458 308 172 1.01 0.00788 0.855 0.0821 0.108 1.1 1.13 0.00246 0.00251 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 308 100 0.344 0.0156 0.0329 0.115 0.152 0.21 0.221 0.000468 0.000493 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 308 41440.755 0.005 0.00667 0.0874 0.221 0.0743 0.0995 0.266 0.359 0.000594 0.000802 ! 
Validation 308 41440.755 0.005 0.016 0.0862 0.405 0.117 0.154 0.281 0.358 0.000628 0.000798 Wall time: 41440.755496609956 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 309 100 0.161 0.0061 0.0387 0.0714 0.0952 0.203 0.24 0.000453 0.000535 309 172 0.349 0.00627 0.223 0.0726 0.0965 0.541 0.576 0.00121 0.00129 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 309 100 0.151 0.00565 0.0377 0.0685 0.0916 0.231 0.237 0.000516 0.000528 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 309 41574.404 0.005 0.00768 0.113 0.267 0.0796 0.107 0.303 0.41 0.000676 0.000915 ! Validation 309 41574.404 0.005 0.00702 0.0466 0.187 0.0768 0.102 0.206 0.263 0.000461 0.000587 Wall time: 41574.40431043785 ! Best model 309 0.187 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 310 100 0.155 0.00629 0.0292 0.0718 0.0966 0.161 0.208 0.000358 0.000465 310 172 0.305 0.00877 0.13 0.0861 0.114 0.395 0.439 0.000881 0.000979 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 310 100 0.176 0.00814 0.0135 0.0827 0.11 0.117 0.141 0.000261 0.000316 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 310 41716.710 0.005 0.00716 0.083 0.226 0.0766 0.103 0.263 0.351 0.000588 0.000783 ! Validation 310 41716.710 0.005 0.00935 0.0984 0.285 0.0895 0.118 0.32 0.382 0.000713 0.000853 Wall time: 41716.710127848666 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 311 100 0.197 0.00795 0.0377 0.0815 0.109 0.179 0.237 0.000399 0.000528 311 172 0.233 0.0093 0.0468 0.0883 0.117 0.229 0.264 0.000512 0.000588 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 311 100 0.212 0.00574 0.0977 0.0699 0.0923 0.38 0.381 0.000849 0.00085 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 311 41850.722 0.005 0.00795 0.116 0.275 0.0813 0.109 0.328 0.416 0.000732 0.000928 ! Validation 311 41850.722 0.005 0.00693 0.082 0.22 0.0767 0.101 0.293 0.349 0.000654 0.000779 Wall time: 41850.72268092586 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 312 100 0.175 0.00648 0.045 0.0735 0.0981 0.206 0.258 0.00046 0.000577 312 172 0.179 0.006 0.0595 0.07 0.0943 0.25 0.297 0.000559 0.000663 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 312 100 0.105 0.00509 0.00362 0.0651 0.0869 0.0567 0.0733 0.000126 0.000164 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 312 41983.911 0.005 0.00691 0.0824 0.221 0.0757 0.101 0.277 0.35 0.000619 0.000781 ! Validation 312 41983.911 0.005 0.00643 0.0737 0.202 0.0735 0.0977 0.274 0.331 0.000611 0.000738 Wall time: 41983.91139940871 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 313 100 0.153 0.00584 0.0356 0.0695 0.0931 0.178 0.23 0.000397 0.000513 313 172 0.259 0.00742 0.111 0.0797 0.105 0.374 0.405 0.000834 0.000905 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 313 100 0.177 0.00744 0.0284 0.0791 0.105 0.203 0.205 0.000453 0.000458 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 313 42117.113 0.005 0.00665 0.0939 0.227 0.0743 0.0993 0.301 0.373 0.000671 0.000833 ! 
Validation 313 42117.113 0.005 0.00875 0.0576 0.233 0.0855 0.114 0.229 0.292 0.00051 0.000653 Wall time: 42117.11312629562 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 314 100 298 0.971 278 0.892 1.2 19.9 20.3 0.0445 0.0454 314 172 22 0.924 3.5 0.865 1.17 1.9 2.28 0.00423 0.00509 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 314 100 21.8 0.97 2.38 0.881 1.2 1.81 1.88 0.00404 0.00419 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 314 42251.353 0.005 0.482 64 73.7 0.48 0.846 3.69 9.75 0.00823 0.0218 ! Validation 314 42251.353 0.005 0.917 5.15 23.5 0.861 1.17 2.32 2.77 0.00518 0.00617 Wall time: 42251.35303876083 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 315 100 13.7 0.607 1.51 0.703 0.95 1.21 1.5 0.00269 0.00335 315 172 8.57 0.331 1.96 0.533 0.701 1.26 1.71 0.0028 0.00381 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 315 100 7.46 0.351 0.443 0.551 0.722 0.774 0.811 0.00173 0.00181 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 315 42384.565 0.005 0.641 3.19 16 0.717 0.976 1.67 2.18 0.00373 0.00486 ! Validation 315 42384.565 0.005 0.347 2.85 9.79 0.548 0.718 1.7 2.06 0.00379 0.00459 Wall time: 42384.56586913392 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 316 100 6.3 0.248 1.34 0.458 0.606 1.09 1.41 0.00243 0.00315 316 172 7.17 0.239 2.39 0.451 0.596 1.46 1.88 0.00326 0.0042 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 316 100 4.89 0.241 0.0732 0.45 0.598 0.257 0.33 0.000573 0.000736 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 316 42517.756 0.005 0.266 1.6 6.92 0.477 0.628 1.23 1.54 0.00274 0.00344 ! Validation 316 42517.756 0.005 0.235 0.777 5.47 0.447 0.59 0.89 1.07 0.00199 0.0024 Wall time: 42517.756852369756 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 317 100 4.73 0.201 0.7 0.413 0.547 0.814 1.02 0.00182 0.00227 317 172 7.71 0.178 4.15 0.39 0.514 2.24 2.48 0.00499 0.00554 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 317 100 4.11 0.177 0.566 0.387 0.513 0.9 0.916 0.00201 0.00205 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 317 42651.106 0.005 0.201 1.07 5.09 0.414 0.546 0.99 1.26 0.00221 0.00281 ! Validation 317 42651.106 0.005 0.178 0.767 4.32 0.391 0.514 0.888 1.07 0.00198 0.00238 Wall time: 42651.106468190905 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 318 100 4.52 0.15 1.51 0.359 0.472 1.29 1.5 0.00287 0.00334 318 172 3.66 0.147 0.722 0.355 0.467 0.872 1.04 0.00195 0.00231 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 318 100 3.97 0.142 1.13 0.348 0.459 1.29 1.3 0.00288 0.0029 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 318 42788.363 0.005 0.153 1.25 4.31 0.363 0.477 1.09 1.36 0.00243 0.00304 ! 
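Epoch 314 above shows a sharp discontinuity: the training-batch loss jumps to about 298 (from under 0.3 at epoch 313) and the epoch-314 validation loss rises to 23.5, after which the metrics recover over the following epochs; by epoch 317 the validation loss is still 4.32, well above the 0.187 recorded as the best model at epoch 309, so no new "! Best model" line appears in this stretch. A minimal sketch of the bookkeeping those lines imply, under the assumptions noted in the comments:

# Minimal sketch (not this run's code): a "! Best model" line is printed
# whenever an epoch's validation loss improves on the best value seen so far.
# The log shows rounded values, which is presumably why epochs 289 and 290 both
# appear with 0.231; the comparison is assumed to use the unrounded losses, and
# what exactly gets checkpointed is not shown in the log.
def track_best(val_losses):
    """val_losses: iterable of (epoch, validation_loss) pairs in epoch order."""
    best = float("inf")
    for epoch, val_loss in val_losses:
        if val_loss < best:
            best = val_loss
            print(f"! Best model {epoch} {best:.3f}")

# Fed the per-epoch validation losses above, this reproduces the sequence
# ... 301 (0.202), 307 (0.201), 309 (0.187), and nothing after the epoch-314 spike.
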
Validation 318 42788.363 0.005 0.144 2.77 5.65 0.353 0.462 1.81 2.03 0.00403 0.00453 Wall time: 42788.36343554687 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 319 100 3.92 0.12 1.52 0.322 0.422 1.38 1.5 0.00308 0.00336 319 172 3.2 0.117 0.853 0.321 0.417 0.958 1.13 0.00214 0.00251 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 319 100 2.36 0.115 0.0698 0.314 0.412 0.311 0.322 0.000695 0.000718 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 319 42921.549 0.005 0.127 0.919 3.45 0.331 0.434 0.937 1.17 0.00209 0.00261 ! Validation 319 42921.549 0.005 0.119 1.18 3.56 0.322 0.42 1.09 1.33 0.00243 0.00296 Wall time: 42921.54989149794 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 320 100 2.97 0.108 0.819 0.306 0.4 0.844 1.1 0.00188 0.00246 320 172 3.22 0.104 1.14 0.299 0.392 1.09 1.3 0.00244 0.00291 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 320 100 2.8 0.101 0.787 0.294 0.386 1.08 1.08 0.0024 0.00241 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 320 43054.733 0.005 0.109 0.908 3.08 0.307 0.402 0.942 1.16 0.0021 0.00259 ! Validation 320 43054.733 0.005 0.103 0.547 2.61 0.3 0.391 0.747 0.901 0.00167 0.00201 Wall time: 43054.733834888786 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 321 100 2.31 0.0983 0.344 0.291 0.382 0.559 0.715 0.00125 0.0016 321 172 1.88 0.0842 0.196 0.271 0.353 0.472 0.54 0.00105 0.0012 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 321 100 1.75 0.0863 0.0288 0.271 0.358 0.17 0.207 0.000379 0.000462 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 321 43192.908 0.005 0.0955 0.727 2.64 0.288 0.376 0.826 1.04 0.00184 0.00232 ! Validation 321 43192.908 0.005 0.089 0.796 2.58 0.279 0.363 0.868 1.09 0.00194 0.00243 Wall time: 43192.90800836263 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 322 100 2.19 0.0855 0.475 0.273 0.356 0.706 0.84 0.00158 0.00187 322 172 2.01 0.0861 0.287 0.272 0.358 0.51 0.653 0.00114 0.00146 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 322 100 3.2 0.0847 1.51 0.269 0.355 1.49 1.5 0.00332 0.00334 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 322 43326.110 0.005 0.0851 1.07 2.77 0.271 0.355 1 1.26 0.00224 0.00281 ! Validation 322 43326.110 0.005 0.0853 0.887 2.59 0.272 0.356 0.969 1.15 0.00216 0.00256 Wall time: 43326.11067741085 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 323 100 2.19 0.0807 0.58 0.261 0.346 0.799 0.928 0.00178 0.00207 323 172 5.13 0.0785 3.56 0.261 0.341 2.17 2.3 0.00484 0.00513 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 323 100 2.66 0.0777 1.11 0.256 0.34 1.27 1.28 0.00284 0.00286 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 323 43460.872 0.005 0.0781 0.915 2.48 0.26 0.341 0.942 1.16 0.0021 0.0026 ! 
Validation 323 43460.872 0.005 0.0778 2.47 4.03 0.26 0.34 1.77 1.92 0.00396 0.00428 Wall time: 43460.8719983967 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 324 100 1.8 0.0736 0.331 0.25 0.33 0.538 0.701 0.0012 0.00156 324 172 1.8 0.0687 0.429 0.243 0.319 0.642 0.798 0.00143 0.00178 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 324 100 1.57 0.0684 0.2 0.24 0.319 0.507 0.544 0.00113 0.00122 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 324 43596.405 0.005 0.073 0.785 2.24 0.25 0.329 0.865 1.08 0.00193 0.00241 ! Validation 324 43596.405 0.005 0.069 0.313 1.69 0.244 0.32 0.562 0.682 0.00125 0.00152 Wall time: 43596.40499271685 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 325 100 1.51 0.0668 0.177 0.239 0.315 0.427 0.512 0.000953 0.00114 325 172 3 0.0637 1.73 0.233 0.308 1.44 1.6 0.00322 0.00357 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 325 100 2.31 0.0623 1.06 0.228 0.304 1.24 1.26 0.00277 0.00281 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 325 43741.435 0.005 0.0649 0.616 1.91 0.236 0.31 0.765 0.956 0.00171 0.00213 ! Validation 325 43741.435 0.005 0.0627 0.633 1.89 0.232 0.305 0.796 0.969 0.00178 0.00216 Wall time: 43741.435275612865 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 326 100 1.56 0.0606 0.352 0.228 0.3 0.609 0.723 0.00136 0.00161 326 172 3.05 0.0634 1.78 0.232 0.307 1.5 1.63 0.00335 0.00363 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 326 100 1.35 0.0609 0.135 0.225 0.301 0.403 0.447 0.0009 0.000998 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 326 43874.609 0.005 0.0618 0.862 2.1 0.229 0.303 0.929 1.13 0.00207 0.00252 ! Validation 326 43874.609 0.005 0.0615 0.817 2.05 0.23 0.302 0.925 1.1 0.00206 0.00246 Wall time: 43874.60957361059 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 327 100 1.4 0.0565 0.267 0.219 0.29 0.499 0.63 0.00111 0.00141 327 172 3.55 0.0552 2.45 0.216 0.286 1.83 1.91 0.00409 0.00425 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 327 100 2.23 0.0551 1.13 0.214 0.286 1.28 1.3 0.00286 0.00289 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 327 44016.798 0.005 0.0585 0.622 1.79 0.223 0.295 0.756 0.96 0.00169 0.00214 ! Validation 327 44016.798 0.005 0.0554 0.868 1.98 0.218 0.287 0.996 1.14 0.00222 0.00253 Wall time: 44016.79887089692 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 328 100 1.47 0.0552 0.365 0.217 0.286 0.624 0.736 0.00139 0.00164 328 172 1.25 0.0514 0.225 0.209 0.276 0.498 0.578 0.00111 0.00129 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 328 100 1.67 0.0523 0.63 0.209 0.279 0.948 0.967 0.00212 0.00216 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 328 44151.302 0.005 0.055 0.694 1.79 0.216 0.286 0.836 1.01 0.00187 0.00227 ! 
Validation 328 44151.302 0.005 0.0526 0.498 1.55 0.212 0.28 0.719 0.86 0.0016 0.00192 Wall time: 44151.30275917286 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 329 100 1.64 0.0501 0.639 0.206 0.273 0.892 0.974 0.00199 0.00217 329 172 1.09 0.0492 0.109 0.204 0.27 0.335 0.402 0.000747 0.000896 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 329 100 1.17 0.0494 0.18 0.202 0.271 0.482 0.517 0.00108 0.00115 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 329 44285.811 0.005 0.051 0.585 1.6 0.208 0.275 0.74 0.932 0.00165 0.00208 ! Validation 329 44285.811 0.005 0.0499 0.569 1.57 0.206 0.272 0.771 0.919 0.00172 0.00205 Wall time: 44285.81183817098 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 330 100 1.86 0.0492 0.873 0.205 0.27 1.06 1.14 0.00236 0.00254 330 172 1.87 0.0502 0.861 0.205 0.273 1.02 1.13 0.00228 0.00252 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 330 100 1.08 0.0477 0.124 0.199 0.266 0.386 0.429 0.000861 0.000958 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 330 44430.601 0.005 0.049 0.647 1.63 0.204 0.27 0.783 0.98 0.00175 0.00219 ! Validation 330 44430.601 0.005 0.0482 0.208 1.17 0.203 0.268 0.453 0.555 0.00101 0.00124 Wall time: 44430.60131745972 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 331 100 1.58 0.0462 0.655 0.199 0.262 0.886 0.986 0.00198 0.0022 331 172 2.5 0.0486 1.53 0.203 0.269 1.45 1.51 0.00323 0.00337 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 331 100 1.51 0.0473 0.568 0.198 0.265 0.904 0.918 0.00202 0.00205 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 331 44563.815 0.005 0.0468 0.631 1.57 0.199 0.263 0.785 0.967 0.00175 0.00216 ! Validation 331 44563.815 0.005 0.0478 1.42 2.38 0.202 0.266 1.32 1.45 0.00295 0.00324 Wall time: 44563.81534509268 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 332 100 1.12 0.0439 0.238 0.194 0.255 0.488 0.594 0.00109 0.00133 332 172 1.15 0.0442 0.268 0.194 0.256 0.514 0.631 0.00115 0.00141 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 332 100 0.983 0.0425 0.134 0.188 0.251 0.415 0.445 0.000926 0.000994 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 332 44700.059 0.005 0.0445 0.473 1.36 0.194 0.257 0.677 0.838 0.00151 0.00187 ! Validation 332 44700.059 0.005 0.0428 0.198 1.05 0.191 0.252 0.443 0.542 0.000988 0.00121 Wall time: 44700.06045386102 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 333 100 1.43 0.0446 0.541 0.194 0.257 0.786 0.896 0.00175 0.002 333 172 1.25 0.0432 0.389 0.192 0.253 0.618 0.76 0.00138 0.0017 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 333 100 1.49 0.0436 0.619 0.191 0.254 0.95 0.959 0.00212 0.00214 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 333 44833.254 0.005 0.0441 0.732 1.61 0.193 0.256 0.844 1.04 0.00188 0.00233 ! 
Validation 333 44833.254 0.005 0.044 0.475 1.36 0.194 0.256 0.693 0.84 0.00155 0.00187 Wall time: 44833.25443934975 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 334 100 0.949 0.0388 0.174 0.181 0.24 0.416 0.508 0.000928 0.00114 334 172 1.38 0.0483 0.411 0.202 0.268 0.663 0.782 0.00148 0.00174 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 334 100 1.6 0.0461 0.676 0.196 0.262 0.993 1 0.00222 0.00224 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 334 44966.536 0.005 0.042 0.604 1.44 0.188 0.25 0.739 0.947 0.00165 0.00211 ! Validation 334 44966.536 0.005 0.0461 0.585 1.51 0.198 0.262 0.786 0.932 0.00176 0.00208 Wall time: 44966.53611655068 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 335 100 0.849 0.0372 0.104 0.178 0.235 0.281 0.394 0.000628 0.000879 335 172 1.27 0.0426 0.421 0.188 0.252 0.681 0.791 0.00152 0.00177 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 335 100 0.865 0.0398 0.0697 0.182 0.243 0.29 0.322 0.000647 0.000718 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 335 45099.726 0.005 0.0401 0.536 1.34 0.184 0.244 0.713 0.892 0.00159 0.00199 ! Validation 335 45099.726 0.005 0.0402 0.256 1.06 0.185 0.244 0.496 0.617 0.00111 0.00138 Wall time: 45099.72609039396 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 336 100 1.46 0.0365 0.728 0.177 0.233 0.959 1.04 0.00214 0.00232 336 172 0.854 0.0354 0.146 0.174 0.229 0.364 0.465 0.000813 0.00104 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 336 100 0.705 0.0348 0.00965 0.171 0.227 0.112 0.12 0.00025 0.000267 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 336 45234.409 0.005 0.0373 0.44 1.19 0.178 0.235 0.666 0.808 0.00149 0.0018 ! Validation 336 45234.409 0.005 0.0356 0.166 0.877 0.174 0.23 0.396 0.496 0.000884 0.00111 Wall time: 45234.40937783895 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 337 100 2.74 0.033 2.08 0.168 0.221 1.72 1.76 0.00383 0.00392 337 172 0.936 0.0348 0.241 0.173 0.227 0.473 0.598 0.00106 0.00133 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 337 100 0.771 0.0345 0.082 0.171 0.226 0.338 0.349 0.000755 0.000779 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 337 45376.125 0.005 0.036 0.608 1.33 0.175 0.231 0.761 0.951 0.0017 0.00212 ! Validation 337 45376.125 0.005 0.0352 0.166 0.871 0.174 0.229 0.4 0.497 0.000892 0.00111 Wall time: 45376.125008484814 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 338 100 0.862 0.0334 0.194 0.169 0.223 0.44 0.536 0.000982 0.0012 338 172 0.881 0.03 0.281 0.161 0.211 0.556 0.645 0.00124 0.00144 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 338 100 0.605 0.0295 0.0143 0.16 0.209 0.118 0.146 0.000263 0.000326 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 338 45509.412 0.005 0.0331 0.452 1.11 0.168 0.222 0.628 0.819 0.0014 0.00183 ! 
Validation 338 45509.412 0.005 0.0307 0.127 0.741 0.163 0.213 0.351 0.435 0.000784 0.000971 Wall time: 45509.41259744577 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 339 100 0.638 0.0277 0.0842 0.154 0.203 0.259 0.354 0.000577 0.000789 339 172 0.626 0.0273 0.0802 0.153 0.201 0.269 0.345 0.000601 0.00077 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 339 100 0.831 0.0255 0.32 0.149 0.195 0.686 0.69 0.00153 0.00154 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 339 45642.589 0.005 0.0297 0.463 1.06 0.16 0.21 0.667 0.829 0.00149 0.00185 ! Validation 339 45642.589 0.005 0.027 0.493 1.03 0.153 0.2 0.766 0.856 0.00171 0.00191 Wall time: 45642.589061711915 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 340 100 1.51 0.0266 0.974 0.151 0.199 1.16 1.2 0.00259 0.00268 340 172 0.63 0.026 0.109 0.148 0.197 0.332 0.402 0.000741 0.000897 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 340 100 0.543 0.0241 0.0599 0.144 0.189 0.29 0.298 0.000647 0.000666 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 340 45775.772 0.005 0.0269 0.468 1.01 0.151 0.2 0.655 0.833 0.00146 0.00186 ! Validation 340 45775.772 0.005 0.0256 0.161 0.672 0.148 0.195 0.393 0.489 0.000877 0.00109 Wall time: 45775.77247638302 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 341 100 0.767 0.0236 0.294 0.141 0.187 0.603 0.661 0.00135 0.00148 341 172 0.686 0.0217 0.252 0.135 0.179 0.546 0.612 0.00122 0.00137 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 341 100 0.525 0.0203 0.119 0.132 0.173 0.418 0.421 0.000932 0.00094 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 341 45920.066 0.005 0.0238 0.36 0.837 0.142 0.188 0.59 0.731 0.00132 0.00163 ! Validation 341 45920.066 0.005 0.0221 0.278 0.72 0.137 0.181 0.554 0.642 0.00124 0.00143 Wall time: 45920.0665769577 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 342 100 3.28 0.0355 2.57 0.172 0.229 1.81 1.95 0.00405 0.00436 342 172 0.546 0.022 0.106 0.135 0.181 0.331 0.397 0.000738 0.000887 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 342 100 0.489 0.0209 0.0703 0.133 0.176 0.32 0.323 0.000714 0.000721 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 342 46053.269 0.005 0.0243 0.584 1.07 0.143 0.19 0.701 0.931 0.00156 0.00208 ! Validation 342 46053.269 0.005 0.0228 0.115 0.571 0.139 0.184 0.333 0.414 0.000744 0.000923 Wall time: 46053.26886702282 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 343 100 0.457 0.0191 0.0746 0.127 0.169 0.261 0.333 0.000582 0.000743 343 172 1.46 0.0186 1.09 0.125 0.166 1.23 1.27 0.00274 0.00283 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 343 100 1.97 0.0177 1.62 0.122 0.162 1.55 1.55 0.00346 0.00346 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 343 46186.473 0.005 0.0204 0.239 0.648 0.131 0.174 0.473 0.595 0.00105 0.00133 ! 
Validation 343 46186.473 0.005 0.0196 1.76 2.15 0.129 0.171 1.57 1.62 0.00349 0.00361 Wall time: 46186.47370515298 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 344 100 0.511 0.0192 0.127 0.126 0.169 0.349 0.434 0.00078 0.00097 344 172 0.457 0.0204 0.0495 0.13 0.174 0.21 0.271 0.000469 0.000605 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 344 100 0.354 0.0175 0.00352 0.121 0.161 0.0615 0.0723 0.000137 0.000161 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 344 46319.684 0.005 0.0205 0.425 0.835 0.13 0.174 0.626 0.795 0.0014 0.00177 ! Validation 344 46319.684 0.005 0.0194 0.0891 0.477 0.128 0.17 0.291 0.364 0.000649 0.000812 Wall time: 46319.684001687914 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 345 100 0.41 0.0165 0.0803 0.118 0.156 0.278 0.345 0.00062 0.000771 345 172 0.555 0.0174 0.207 0.12 0.161 0.508 0.554 0.00113 0.00124 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 345 100 0.353 0.0162 0.03 0.117 0.155 0.208 0.211 0.000465 0.000471 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 345 46453.163 0.005 0.0181 0.252 0.614 0.122 0.164 0.491 0.611 0.0011 0.00136 ! Validation 345 46453.163 0.005 0.0181 0.0786 0.44 0.123 0.164 0.274 0.342 0.000612 0.000763 Wall time: 46453.16402501287 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 346 100 0.605 0.0183 0.238 0.123 0.165 0.53 0.595 0.00118 0.00133 346 172 0.485 0.0179 0.127 0.121 0.163 0.35 0.434 0.000782 0.000968 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 346 100 0.337 0.0158 0.0205 0.115 0.153 0.169 0.174 0.000376 0.000389 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 346 46586.599 0.005 0.0173 0.292 0.639 0.12 0.16 0.525 0.659 0.00117 0.00147 ! Validation 346 46586.599 0.005 0.0177 0.0676 0.422 0.122 0.162 0.254 0.317 0.000566 0.000707 Wall time: 46586.59922636673 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 347 100 0.423 0.0173 0.077 0.119 0.16 0.273 0.338 0.000609 0.000754 347 172 0.947 0.0168 0.611 0.118 0.158 0.885 0.952 0.00197 0.00213 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 347 100 0.818 0.0155 0.508 0.113 0.152 0.867 0.868 0.00194 0.00194 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 347 46719.795 0.005 0.017 0.287 0.628 0.118 0.159 0.505 0.653 0.00113 0.00146 ! Validation 347 46719.795 0.005 0.0172 0.584 0.929 0.12 0.16 0.848 0.931 0.00189 0.00208 Wall time: 46719.795677833725 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 348 100 0.363 0.0158 0.0471 0.114 0.153 0.208 0.264 0.000464 0.00059 348 172 0.603 0.0153 0.297 0.113 0.151 0.58 0.664 0.0013 0.00148 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 348 100 0.293 0.0141 0.00992 0.109 0.145 0.108 0.121 0.000241 0.000271 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 348 46852.988 0.005 0.0158 0.24 0.556 0.114 0.153 0.483 0.597 0.00108 0.00133 ! 
Validation 348 46852.988 0.005 0.0161 0.113 0.434 0.116 0.155 0.326 0.409 0.000729 0.000912 Wall time: 46852.988563613035 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 349 100 0.374 0.0151 0.0727 0.111 0.15 0.262 0.329 0.000584 0.000733 349 172 0.371 0.0156 0.0598 0.114 0.152 0.247 0.298 0.00055 0.000665 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 349 100 0.314 0.0134 0.047 0.106 0.141 0.255 0.264 0.00057 0.00059 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 349 46986.555 0.005 0.0151 0.233 0.536 0.112 0.15 0.465 0.589 0.00104 0.00131 ! Validation 349 46986.555 0.005 0.0152 0.181 0.486 0.113 0.15 0.446 0.519 0.000996 0.00116 Wall time: 46986.55503129773 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 350 100 0.587 0.014 0.308 0.108 0.144 0.594 0.676 0.00133 0.00151 350 172 0.703 0.0143 0.417 0.109 0.146 0.732 0.787 0.00163 0.00176 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 350 100 0.567 0.013 0.306 0.104 0.139 0.671 0.674 0.0015 0.00151 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 350 47119.745 0.005 0.0149 0.267 0.564 0.111 0.149 0.504 0.629 0.00112 0.0014 ! Validation 350 47119.745 0.005 0.0149 0.698 0.996 0.112 0.149 0.957 1.02 0.00214 0.00227 Wall time: 47119.74537084391 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 351 100 0.373 0.0133 0.108 0.105 0.14 0.344 0.401 0.000768 0.000894 351 172 0.349 0.0137 0.0748 0.106 0.143 0.272 0.333 0.000606 0.000744 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 351 100 0.301 0.0119 0.0626 0.0999 0.133 0.3 0.305 0.00067 0.00068 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 351 47252.949 0.005 0.0142 0.189 0.473 0.108 0.145 0.423 0.53 0.000944 0.00118 ! Validation 351 47252.949 0.005 0.0138 0.187 0.464 0.108 0.143 0.463 0.527 0.00103 0.00118 Wall time: 47252.9497297327 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 352 100 0.392 0.0137 0.119 0.106 0.142 0.356 0.42 0.000795 0.000939 352 172 0.819 0.0136 0.548 0.106 0.142 0.873 0.902 0.00195 0.00201 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 352 100 0.79 0.0115 0.56 0.0985 0.131 0.91 0.912 0.00203 0.00204 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 352 47386.149 0.005 0.0132 0.19 0.455 0.104 0.14 0.432 0.531 0.000965 0.00119 ! Validation 352 47386.149 0.005 0.0135 0.869 1.14 0.106 0.141 1.1 1.14 0.00245 0.00254 Wall time: 47386.14959898684 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 353 100 0.32 0.0132 0.0562 0.103 0.14 0.238 0.289 0.000532 0.000645 353 172 0.31 0.0128 0.0539 0.102 0.138 0.232 0.283 0.000518 0.000632 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 353 100 0.234 0.0112 0.00906 0.0972 0.129 0.0963 0.116 0.000215 0.000259 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 353 47519.334 0.005 0.0132 0.237 0.501 0.104 0.14 0.477 0.593 0.00106 0.00132 ! 
Validation 353 47519.334 0.005 0.0131 0.0586 0.322 0.105 0.14 0.237 0.295 0.000529 0.000658 Wall time: 47519.33437406272 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 354 100 0.311 0.0124 0.0625 0.101 0.136 0.238 0.305 0.00053 0.00068 354 172 0.498 0.0139 0.221 0.108 0.144 0.489 0.572 0.00109 0.00128 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 354 100 0.718 0.013 0.457 0.105 0.139 0.823 0.824 0.00184 0.00184 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 354 47653.622 0.005 0.0127 0.222 0.476 0.102 0.137 0.447 0.574 0.000998 0.00128 ! Validation 354 47653.622 0.005 0.0149 0.347 0.645 0.112 0.149 0.659 0.718 0.00147 0.0016 Wall time: 47653.62265513884 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 355 100 0.29 0.0129 0.0312 0.103 0.139 0.168 0.215 0.000374 0.000481 355 172 0.326 0.0119 0.0882 0.0997 0.133 0.298 0.362 0.000665 0.000808 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 355 100 0.264 0.0106 0.0534 0.0946 0.125 0.276 0.282 0.000615 0.000629 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 355 47786.844 0.005 0.0123 0.17 0.416 0.101 0.135 0.405 0.503 0.000904 0.00112 ! Validation 355 47786.844 0.005 0.0125 0.0881 0.338 0.102 0.136 0.294 0.362 0.000657 0.000807 Wall time: 47786.8447869136 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 356 100 0.264 0.0116 0.0319 0.0975 0.131 0.173 0.218 0.000385 0.000486 356 172 0.363 0.0116 0.131 0.0977 0.131 0.357 0.441 0.000797 0.000984 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 356 100 0.227 0.00989 0.0295 0.0917 0.121 0.199 0.209 0.000444 0.000467 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 356 47920.119 0.005 0.0117 0.154 0.388 0.0984 0.132 0.379 0.478 0.000845 0.00107 ! Validation 356 47920.119 0.005 0.0118 0.128 0.363 0.0995 0.132 0.375 0.436 0.000837 0.000974 Wall time: 47920.119812699035 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 357 100 0.67 0.0105 0.46 0.0939 0.125 0.804 0.827 0.00179 0.00184 357 172 0.736 0.0109 0.517 0.0945 0.127 0.839 0.876 0.00187 0.00196 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 357 100 0.399 0.00944 0.211 0.0895 0.118 0.557 0.559 0.00124 0.00125 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 357 48053.406 0.005 0.0112 0.157 0.382 0.0965 0.129 0.39 0.483 0.000871 0.00108 ! Validation 357 48053.406 0.005 0.0113 0.286 0.513 0.0978 0.13 0.578 0.652 0.00129 0.00145 Wall time: 48053.40626338404 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 358 100 0.289 0.0115 0.0597 0.0975 0.13 0.248 0.298 0.000554 0.000664 358 172 0.863 0.011 0.643 0.0967 0.128 0.95 0.977 0.00212 0.00218 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 358 100 0.229 0.0106 0.0167 0.0957 0.125 0.15 0.158 0.000334 0.000352 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 358 48186.736 0.005 0.0112 0.207 0.43 0.0964 0.129 0.44 0.553 0.000981 0.00124 ! 
Validation 358 48186.736 0.005 0.0125 0.0856 0.335 0.103 0.136 0.294 0.356 0.000656 0.000796 Wall time: 48186.73620757274 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 359 100 0.443 0.0112 0.22 0.0964 0.129 0.521 0.571 0.00116 0.00128 359 172 0.294 0.0106 0.0814 0.0938 0.126 0.291 0.348 0.000649 0.000776 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 359 100 0.261 0.00891 0.083 0.0875 0.115 0.349 0.351 0.000778 0.000784 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 359 48321.376 0.005 0.0109 0.166 0.384 0.0951 0.127 0.394 0.497 0.000879 0.00111 ! Validation 359 48321.376 0.005 0.0108 0.201 0.417 0.0956 0.127 0.492 0.547 0.0011 0.00122 Wall time: 48321.3765009949 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 360 100 0.597 0.0105 0.386 0.0935 0.125 0.721 0.757 0.00161 0.00169 360 172 0.271 0.0107 0.0571 0.0944 0.126 0.238 0.291 0.000531 0.00065 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 360 100 0.278 0.00962 0.0858 0.0903 0.119 0.352 0.357 0.000786 0.000797 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 360 48454.610 0.005 0.0106 0.208 0.42 0.094 0.125 0.452 0.556 0.00101 0.00124 ! Validation 360 48454.610 0.005 0.0112 0.164 0.389 0.0972 0.129 0.416 0.494 0.00093 0.0011 Wall time: 48454.610388753936 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 361 100 0.357 0.00999 0.157 0.0914 0.122 0.43 0.483 0.00096 0.00108 361 172 0.241 0.00982 0.0444 0.0902 0.121 0.175 0.257 0.00039 0.000573 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 361 100 0.218 0.00864 0.0454 0.0859 0.113 0.258 0.26 0.000575 0.00058 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 361 48587.866 0.005 0.0106 0.165 0.376 0.094 0.125 0.395 0.495 0.000882 0.0011 ! Validation 361 48587.866 0.005 0.0104 0.0972 0.306 0.0939 0.124 0.304 0.38 0.000679 0.000848 Wall time: 48587.86637935182 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 362 100 0.311 0.00933 0.125 0.0883 0.118 0.381 0.43 0.000851 0.000961 362 172 0.427 0.0104 0.219 0.0933 0.124 0.507 0.57 0.00113 0.00127 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 362 100 0.207 0.00955 0.016 0.0909 0.119 0.151 0.154 0.000337 0.000344 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 362 48721.062 0.005 0.00988 0.167 0.365 0.091 0.121 0.397 0.498 0.000887 0.00111 ! Validation 362 48721.062 0.005 0.0113 0.0489 0.275 0.0985 0.129 0.218 0.269 0.000487 0.000601 Wall time: 48721.06284145173 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 363 100 0.487 0.0106 0.275 0.0943 0.125 0.603 0.639 0.00135 0.00143 363 172 0.324 0.00965 0.132 0.0894 0.12 0.402 0.442 0.000897 0.000987 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 363 100 0.162 0.00801 0.00219 0.0832 0.109 0.0507 0.057 0.000113 0.000127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 363 48856.618 0.005 0.00966 0.139 0.333 0.0899 0.12 0.365 0.455 0.000816 0.00102 ! 
Validation 363 48856.618 0.005 0.00974 0.0816 0.276 0.0911 0.12 0.287 0.348 0.000641 0.000777 Wall time: 48856.61853324296 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 364 100 0.239 0.00864 0.0664 0.0856 0.113 0.253 0.314 0.000565 0.000701 364 172 0.698 0.00929 0.513 0.0888 0.117 0.817 0.872 0.00182 0.00195 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 364 100 0.326 0.00758 0.174 0.0811 0.106 0.503 0.509 0.00112 0.00114 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 364 48989.843 0.005 0.00919 0.125 0.309 0.0878 0.117 0.342 0.43 0.000764 0.00096 ! Validation 364 48989.843 0.005 0.00941 0.147 0.335 0.0896 0.118 0.404 0.466 0.000902 0.00104 Wall time: 48989.84373799665 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 365 100 0.235 0.00954 0.0445 0.0888 0.119 0.212 0.257 0.000474 0.000574 365 172 0.307 0.00885 0.13 0.0864 0.115 0.376 0.439 0.00084 0.000979 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 365 100 0.232 0.00827 0.0662 0.0844 0.111 0.305 0.313 0.000681 0.0007 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 365 49123.158 0.005 0.0089 0.149 0.327 0.0865 0.115 0.369 0.471 0.000824 0.00105 ! Validation 365 49123.158 0.005 0.00978 0.117 0.313 0.0914 0.121 0.351 0.418 0.000784 0.000932 Wall time: 49123.15815026499 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 366 100 0.218 0.0088 0.0417 0.0865 0.114 0.211 0.249 0.00047 0.000555 366 172 0.565 0.00848 0.395 0.086 0.112 0.718 0.766 0.0016 0.00171 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 366 100 0.343 0.00941 0.155 0.0925 0.118 0.479 0.48 0.00107 0.00107 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 366 49256.379 0.005 0.00917 0.159 0.343 0.0878 0.117 0.39 0.486 0.000871 0.00109 ! Validation 366 49256.379 0.005 0.0112 0.261 0.484 0.0997 0.129 0.561 0.622 0.00125 0.00139 Wall time: 49256.379015689716 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 367 100 0.227 0.00778 0.0712 0.0808 0.107 0.281 0.325 0.000627 0.000726 367 172 0.221 0.00828 0.0557 0.0838 0.111 0.23 0.288 0.000514 0.000642 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 367 100 0.206 0.00746 0.0572 0.0807 0.105 0.285 0.291 0.000636 0.000651 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 367 49390.346 0.005 0.00851 0.118 0.288 0.0846 0.112 0.337 0.418 0.000752 0.000933 ! Validation 367 49390.346 0.005 0.00913 0.0649 0.247 0.0885 0.116 0.248 0.31 0.000553 0.000693 Wall time: 49390.34629042586 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 368 100 0.206 0.00806 0.0449 0.0821 0.109 0.198 0.258 0.000443 0.000577 368 172 0.55 0.00802 0.39 0.0821 0.109 0.726 0.761 0.00162 0.0017 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 368 100 0.244 0.00709 0.102 0.0783 0.103 0.388 0.389 0.000866 0.000868 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 368 49523.547 0.005 0.00819 0.121 0.285 0.0831 0.11 0.346 0.423 0.000772 0.000945 ! 
Validation 368 49523.547 0.005 0.00865 0.242 0.415 0.0859 0.113 0.504 0.6 0.00112 0.00134 Wall time: 49523.54713725904 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 369 100 0.512 0.00788 0.355 0.0815 0.108 0.699 0.725 0.00156 0.00162 369 172 0.234 0.00742 0.086 0.0792 0.105 0.307 0.357 0.000684 0.000798 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 369 100 0.135 0.00656 0.00417 0.0754 0.0987 0.066 0.0787 0.000147 0.000176 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 369 49657.144 0.005 0.00837 0.157 0.325 0.0841 0.111 0.383 0.484 0.000856 0.00108 ! Validation 369 49657.144 0.005 0.00821 0.0712 0.235 0.0836 0.11 0.269 0.325 0.000601 0.000726 Wall time: 49657.14462063182 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 370 100 0.301 0.00798 0.141 0.082 0.109 0.406 0.458 0.000906 0.00102 370 172 0.301 0.00739 0.153 0.0791 0.105 0.432 0.477 0.000963 0.00106 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 370 100 0.417 0.00638 0.29 0.0743 0.0973 0.655 0.656 0.00146 0.00146 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 370 49790.356 0.005 0.00804 0.132 0.292 0.0823 0.109 0.359 0.442 0.000801 0.000987 ! Validation 370 49790.356 0.005 0.00799 0.57 0.73 0.0826 0.109 0.883 0.92 0.00197 0.00205 Wall time: 49790.35638384661 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 371 100 0.368 0.00708 0.226 0.0777 0.103 0.539 0.58 0.0012 0.00129 371 172 0.19 0.00736 0.0433 0.079 0.105 0.201 0.253 0.000449 0.000566 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 371 100 0.126 0.0062 0.00169 0.0729 0.0959 0.0484 0.0501 0.000108 0.000112 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 371 49923.536 0.005 0.00745 0.0869 0.236 0.0792 0.105 0.289 0.359 0.000645 0.000802 ! Validation 371 49923.536 0.005 0.0079 0.044 0.202 0.0819 0.108 0.204 0.255 0.000456 0.00057 Wall time: 49923.536202332936 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 372 100 0.2 0.00752 0.0492 0.0801 0.106 0.216 0.27 0.000483 0.000603 372 172 0.249 0.00759 0.0974 0.0796 0.106 0.351 0.38 0.000782 0.000849 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 372 100 0.232 0.00598 0.113 0.0719 0.0942 0.407 0.409 0.000909 0.000913 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 372 50060.459 0.005 0.00767 0.136 0.29 0.0804 0.107 0.355 0.45 0.000793 0.001 ! Validation 372 50060.459 0.005 0.00771 0.104 0.258 0.081 0.107 0.33 0.392 0.000738 0.000876 Wall time: 50060.45974663459 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 373 100 0.185 0.00678 0.0495 0.0758 0.1 0.236 0.271 0.000527 0.000605 373 172 0.278 0.00688 0.141 0.0762 0.101 0.426 0.457 0.000951 0.00102 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 373 100 0.217 0.00583 0.1 0.0709 0.093 0.385 0.386 0.000859 0.000862 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 373 50193.784 0.005 0.00827 0.141 0.307 0.0832 0.111 0.365 0.458 0.000816 0.00102 ! 
Validation 373 50193.784 0.005 0.00747 0.133 0.283 0.0797 0.105 0.369 0.444 0.000824 0.000992 Wall time: 50193.78388143191 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 374 100 0.221 0.00699 0.0811 0.0765 0.102 0.304 0.347 0.000678 0.000775 374 172 0.218 0.00644 0.0889 0.0737 0.0978 0.326 0.363 0.000728 0.000811 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 374 100 0.122 0.00602 0.00128 0.072 0.0945 0.0387 0.0436 8.63e-05 9.73e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 374 50333.027 0.005 0.00713 0.084 0.227 0.0773 0.103 0.273 0.353 0.000609 0.000788 ! Validation 374 50333.027 0.005 0.00763 0.0418 0.194 0.0808 0.106 0.196 0.249 0.000438 0.000556 Wall time: 50333.028063648846 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 375 100 0.171 0.00645 0.0421 0.0733 0.0979 0.218 0.25 0.000486 0.000558 375 172 0.209 0.00818 0.0454 0.083 0.11 0.198 0.26 0.000441 0.000579 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 375 100 0.33 0.00673 0.195 0.0759 0.0999 0.537 0.539 0.0012 0.0012 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 375 50470.660 0.005 0.00741 0.128 0.276 0.0788 0.105 0.345 0.436 0.000769 0.000973 ! Validation 375 50470.660 0.005 0.00837 0.219 0.386 0.0846 0.111 0.518 0.57 0.00116 0.00127 Wall time: 50470.66055963887 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 376 100 0.205 0.00642 0.0767 0.0732 0.0976 0.29 0.337 0.000647 0.000753 376 172 0.167 0.00723 0.0219 0.0782 0.104 0.141 0.18 0.000315 0.000403 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 376 100 0.115 0.00574 0.000491 0.07 0.0923 0.0214 0.027 4.78e-05 6.02e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 376 50607.288 0.005 0.00728 0.116 0.262 0.0782 0.104 0.325 0.415 0.000726 0.000927 ! Validation 376 50607.288 0.005 0.00739 0.0378 0.186 0.0791 0.105 0.189 0.237 0.000422 0.000529 Wall time: 50607.28811938083 ! Best model 376 0.186 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 377 100 0.25 0.00765 0.0972 0.0814 0.107 0.314 0.38 0.000701 0.000848 377 172 0.222 0.007 0.0824 0.0767 0.102 0.289 0.35 0.000646 0.000781 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 377 100 0.581 0.00541 0.473 0.0681 0.0896 0.837 0.838 0.00187 0.00187 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 377 50740.565 0.005 0.00776 0.123 0.279 0.0802 0.107 0.34 0.428 0.000759 0.000956 ! Validation 377 50740.565 0.005 0.00694 0.428 0.566 0.0767 0.101 0.766 0.797 0.00171 0.00178 Wall time: 50740.56536039058 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 378 100 0.211 0.00698 0.0708 0.0761 0.102 0.243 0.324 0.000543 0.000724 378 172 1.13 0.00663 1 0.0745 0.0992 1.2 1.22 0.00269 0.00272 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 378 100 0.292 0.00627 0.167 0.0739 0.0965 0.494 0.498 0.0011 0.00111 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 378 50874.283 0.005 0.00659 0.0976 0.229 0.0742 0.0989 0.298 0.379 0.000665 0.000847 ! 
Validation 378 50874.283 0.005 0.0076 0.2 0.352 0.081 0.106 0.49 0.544 0.00109 0.00122 Wall time: 50874.283649352845 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 379 100 0.212 0.00722 0.0678 0.0773 0.104 0.286 0.317 0.000639 0.000708 379 172 0.2 0.00758 0.0487 0.0796 0.106 0.205 0.269 0.000457 0.0006 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 379 100 0.283 0.00692 0.144 0.0776 0.101 0.458 0.463 0.00102 0.00103 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 379 51007.830 0.005 0.00677 0.0989 0.234 0.0753 0.1 0.308 0.383 0.000688 0.000856 ! Validation 379 51007.830 0.005 0.00829 0.205 0.371 0.0849 0.111 0.504 0.552 0.00113 0.00123 Wall time: 51007.83007479366 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 380 100 0.326 0.00663 0.193 0.0747 0.0992 0.497 0.536 0.00111 0.0012 380 172 0.163 0.00587 0.0459 0.0706 0.0934 0.227 0.261 0.000507 0.000583 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 380 100 0.126 0.00497 0.0265 0.065 0.0859 0.197 0.198 0.00044 0.000443 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 380 51141.084 0.005 0.00654 0.0936 0.224 0.0739 0.0986 0.287 0.373 0.00064 0.000832 ! Validation 380 51141.084 0.005 0.00653 0.0371 0.168 0.0743 0.0984 0.186 0.235 0.000416 0.000524 Wall time: 51141.08466869779 ! Best model 380 0.168 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 381 100 0.199 0.00594 0.0802 0.0705 0.0939 0.307 0.345 0.000684 0.00077 381 172 0.172 0.00657 0.0409 0.0739 0.0988 0.196 0.246 0.000436 0.00055 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 381 100 0.106 0.00495 0.00707 0.0646 0.0858 0.0888 0.102 0.000198 0.000229 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 381 51281.284 0.005 0.00644 0.0893 0.218 0.0733 0.0978 0.278 0.364 0.000621 0.000813 ! Validation 381 51281.284 0.005 0.0064 0.0484 0.176 0.0734 0.0974 0.217 0.268 0.000485 0.000598 Wall time: 51281.28456829861 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 382 100 0.141 0.00596 0.022 0.0707 0.0941 0.151 0.181 0.000338 0.000403 382 172 0.198 0.00694 0.0593 0.0766 0.101 0.251 0.297 0.00056 0.000662 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 382 100 0.17 0.00809 0.00798 0.0835 0.11 0.0928 0.109 0.000207 0.000243 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 382 51414.508 0.005 0.00628 0.0738 0.199 0.0722 0.0965 0.25 0.331 0.000559 0.000739 ! Validation 382 51414.508 0.005 0.00933 0.0614 0.248 0.0901 0.118 0.245 0.302 0.000547 0.000674 Wall time: 51414.50882800901 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 383 100 0.159 0.00603 0.0384 0.0709 0.0946 0.204 0.239 0.000455 0.000533 383 172 0.129 0.00565 0.0163 0.0688 0.0916 0.135 0.156 0.000301 0.000347 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 383 100 0.24 0.00526 0.134 0.0669 0.0884 0.441 0.447 0.000984 0.000997 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 383 51547.738 0.005 0.00667 0.105 0.238 0.0746 0.0995 0.309 0.395 0.00069 0.000882 ! 
Validation 383 51547.738 0.005 0.00665 0.135 0.268 0.0751 0.0994 0.388 0.447 0.000867 0.000997 Wall time: 51547.73833869584 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 384 100 0.132 0.00567 0.0191 0.0684 0.0917 0.142 0.168 0.000317 0.000376 384 172 0.227 0.00559 0.115 0.0681 0.0911 0.373 0.414 0.000832 0.000923 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 384 100 0.114 0.00472 0.0195 0.0632 0.0837 0.159 0.17 0.000356 0.00038 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 384 51681.514 0.005 0.00587 0.0568 0.174 0.0698 0.0934 0.232 0.29 0.000517 0.000648 ! Validation 384 51681.514 0.005 0.00612 0.0624 0.185 0.0721 0.0953 0.257 0.304 0.000574 0.000679 Wall time: 51681.514647656586 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 385 100 0.245 0.00832 0.0785 0.084 0.111 0.296 0.341 0.000662 0.000762 385 172 0.224 0.0059 0.107 0.0699 0.0936 0.352 0.398 0.000785 0.000888 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 385 100 0.202 0.00502 0.102 0.0653 0.0863 0.388 0.389 0.000867 0.000868 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 385 51818.346 0.005 0.00641 0.0899 0.218 0.0729 0.0975 0.289 0.365 0.000645 0.000815 ! Validation 385 51818.346 0.005 0.00642 0.17 0.298 0.0737 0.0976 0.43 0.502 0.000959 0.00112 Wall time: 51818.346754991915 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 386 100 0.234 0.00704 0.0935 0.0767 0.102 0.289 0.373 0.000644 0.000832 386 172 0.276 0.00583 0.159 0.0693 0.093 0.373 0.486 0.000834 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 386 100 0.273 0.00548 0.163 0.0688 0.0902 0.491 0.492 0.00109 0.0011 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 386 51955.606 0.005 0.00633 0.094 0.221 0.0726 0.0969 0.3 0.373 0.000671 0.000833 ! Validation 386 51955.606 0.005 0.00676 0.27 0.405 0.0761 0.1 0.558 0.633 0.00125 0.00141 Wall time: 51955.60624103667 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 387 100 0.156 0.00541 0.0474 0.067 0.0896 0.224 0.265 0.000499 0.000592 387 172 0.266 0.00666 0.133 0.0748 0.0994 0.404 0.444 0.000902 0.00099 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 387 100 0.114 0.00533 0.00787 0.0673 0.0889 0.0992 0.108 0.000221 0.000241 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 387 52093.239 0.005 0.00723 0.102 0.247 0.077 0.104 0.309 0.39 0.00069 0.00087 ! Validation 387 52093.239 0.005 0.00668 0.0714 0.205 0.0749 0.0996 0.264 0.325 0.00059 0.000727 Wall time: 52093.23918452673 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 388 100 0.245 0.00686 0.107 0.0773 0.101 0.366 0.399 0.000818 0.000891 388 172 0.159 0.00543 0.0505 0.0665 0.0898 0.227 0.274 0.000507 0.000611 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 388 100 0.23 0.00452 0.139 0.0619 0.0819 0.454 0.455 0.00101 0.00102 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 388 52226.443 0.005 0.00604 0.0743 0.195 0.071 0.0947 0.27 0.332 0.000603 0.000742 ! 
Validation 388 52226.443 0.005 0.00578 0.107 0.223 0.0698 0.0926 0.347 0.398 0.000775 0.000889 Wall time: 52226.44323701691 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 389 100 0.304 0.0119 0.0652 0.103 0.133 0.255 0.311 0.000569 0.000695 389 172 0.217 0.00579 0.101 0.0696 0.0927 0.338 0.387 0.000755 0.000864 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 389 100 0.138 0.00484 0.0409 0.0646 0.0848 0.24 0.246 0.000535 0.00055 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 389 52359.632 0.005 0.00652 0.0883 0.219 0.0729 0.0984 0.269 0.362 0.000601 0.000808 ! Validation 389 52359.632 0.005 0.00597 0.0713 0.191 0.0714 0.0941 0.259 0.325 0.000578 0.000726 Wall time: 52359.63238452189 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 390 100 0.137 0.00547 0.0278 0.0671 0.0901 0.154 0.203 0.000344 0.000454 390 172 0.353 0.00609 0.231 0.0715 0.0951 0.549 0.586 0.00123 0.00131 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 390 100 0.273 0.0051 0.171 0.0653 0.087 0.502 0.504 0.00112 0.00113 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 390 52498.835 0.005 0.00553 0.0612 0.172 0.0678 0.0906 0.241 0.301 0.000539 0.000672 ! Validation 390 52498.835 0.005 0.00627 0.351 0.476 0.0727 0.0964 0.694 0.722 0.00155 0.00161 Wall time: 52498.834892925806 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 391 100 0.161 0.00601 0.0407 0.0715 0.0945 0.209 0.246 0.000467 0.000549 391 172 0.124 0.00529 0.0184 0.0667 0.0886 0.127 0.165 0.000283 0.000368 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 391 100 0.101 0.00436 0.014 0.0602 0.0804 0.136 0.144 0.000304 0.000322 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 391 52632.029 0.005 0.00635 0.0879 0.215 0.0727 0.0971 0.29 0.361 0.000647 0.000807 ! Validation 391 52632.029 0.005 0.00557 0.0662 0.178 0.0683 0.0909 0.267 0.314 0.000596 0.0007 Wall time: 52632.02979097469 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 392 100 0.166 0.00533 0.0594 0.0666 0.0889 0.242 0.297 0.00054 0.000663 392 172 0.147 0.00561 0.0348 0.0683 0.0913 0.181 0.227 0.000403 0.000507 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 392 100 0.206 0.00471 0.112 0.0626 0.0836 0.406 0.408 0.000906 0.000911 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 392 52765.226 0.005 0.00572 0.0725 0.187 0.069 0.0922 0.259 0.328 0.000578 0.000733 ! Validation 392 52765.226 0.005 0.00591 0.0916 0.21 0.0703 0.0936 0.311 0.369 0.000695 0.000823 Wall time: 52765.2264692788 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 393 100 0.167 0.00548 0.0571 0.0672 0.0902 0.241 0.291 0.000538 0.00065 393 172 0.508 0.0123 0.262 0.102 0.135 0.506 0.624 0.00113 0.00139 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 393 100 0.237 0.00742 0.0888 0.0794 0.105 0.35 0.363 0.00078 0.000811 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 393 52900.488 0.005 0.00609 0.0871 0.209 0.0705 0.095 0.283 0.359 0.000632 0.000802 ! 
Validation 393 52900.488 0.005 0.00827 0.071 0.236 0.0847 0.111 0.255 0.325 0.000569 0.000724 Wall time: 52900.48856984498 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 394 100 0.217 0.00573 0.103 0.0695 0.0922 0.353 0.391 0.000788 0.000872 394 172 0.175 0.00712 0.0321 0.0776 0.103 0.169 0.218 0.000378 0.000487 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 394 100 0.159 0.00607 0.0379 0.0708 0.0949 0.227 0.237 0.000508 0.000529 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 394 53033.679 0.005 0.0065 0.077 0.207 0.0729 0.0982 0.264 0.338 0.000588 0.000755 ! Validation 394 53033.679 0.005 0.00705 0.109 0.25 0.0775 0.102 0.349 0.403 0.000778 0.000899 Wall time: 53033.67941202596 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 395 100 0.288 0.00558 0.177 0.0685 0.091 0.44 0.512 0.000983 0.00114 395 172 0.188 0.00505 0.0866 0.0647 0.0866 0.319 0.359 0.000713 0.0008 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 395 100 0.0867 0.00428 0.00118 0.06 0.0797 0.04 0.0419 8.93e-05 9.34e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 395 53166.874 0.005 0.00592 0.0817 0.2 0.0701 0.0937 0.28 0.348 0.000626 0.000777 ! Validation 395 53166.874 0.005 0.00541 0.031 0.139 0.0673 0.0896 0.169 0.215 0.000377 0.000479 Wall time: 53166.87523073563 ! Best model 395 0.139 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 396 100 0.263 0.00508 0.162 0.0647 0.0868 0.466 0.49 0.00104 0.00109 396 172 0.223 0.00484 0.126 0.0628 0.0848 0.369 0.433 0.000823 0.000967 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 396 100 0.109 0.00468 0.015 0.0623 0.0834 0.143 0.149 0.00032 0.000334 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 396 53300.078 0.005 0.00503 0.0436 0.144 0.0644 0.0864 0.196 0.254 0.000437 0.000567 ! Validation 396 53300.078 0.005 0.00569 0.0672 0.181 0.0692 0.0919 0.25 0.316 0.000559 0.000705 Wall time: 53300.078850667924 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 397 100 0.225 0.00588 0.108 0.0696 0.0934 0.338 0.4 0.000755 0.000893 397 172 0.13 0.00545 0.0209 0.0686 0.09 0.133 0.176 0.000296 0.000393 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 397 100 0.0922 0.00449 0.0023 0.0614 0.0817 0.0534 0.0585 0.000119 0.000131 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 397 53435.573 0.005 0.00617 0.0899 0.213 0.0714 0.0957 0.289 0.365 0.000646 0.000816 ! Validation 397 53435.573 0.005 0.00543 0.0361 0.145 0.0676 0.0898 0.189 0.232 0.000422 0.000517 Wall time: 53435.57363816304 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 398 100 0.172 0.00679 0.0367 0.0756 0.1 0.165 0.234 0.000367 0.000521 398 172 0.319 0.0055 0.209 0.0671 0.0903 0.524 0.557 0.00117 0.00124 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 398 100 0.286 0.00432 0.2 0.0602 0.0801 0.544 0.545 0.00121 0.00122 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 398 53570.794 0.005 0.00552 0.0701 0.18 0.0675 0.0905 0.245 0.322 0.000547 0.000719 ! 
Validation 398 53570.794 0.005 0.0053 0.182 0.288 0.0668 0.0887 0.477 0.52 0.00106 0.00116 Wall time: 53570.79471290577 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 399 100 0.251 0.00476 0.156 0.0632 0.0841 0.446 0.481 0.000995 0.00107 399 172 0.125 0.00557 0.0141 0.0687 0.0909 0.121 0.145 0.00027 0.000323 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 399 100 0.25 0.00665 0.116 0.074 0.0994 0.413 0.416 0.000922 0.000928 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 399 53704.478 0.005 0.00522 0.0557 0.16 0.0658 0.0881 0.23 0.288 0.000514 0.000642 ! Validation 399 53704.478 0.005 0.00752 0.104 0.255 0.0794 0.106 0.339 0.393 0.000756 0.000877 Wall time: 53704.47841553576 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 400 100 0.136 0.00507 0.0344 0.0647 0.0867 0.195 0.226 0.000435 0.000504 400 172 0.149 0.00454 0.0581 0.0617 0.0821 0.26 0.294 0.000581 0.000656 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 400 100 0.108 0.00521 0.00385 0.0649 0.0879 0.0749 0.0756 0.000167 0.000169 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 400 53837.696 0.005 0.00584 0.0881 0.205 0.0697 0.0931 0.29 0.362 0.000648 0.000807 ! Validation 400 53837.696 0.005 0.00612 0.0237 0.146 0.0714 0.0954 0.146 0.188 0.000325 0.000419 Wall time: 53837.696235285606 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 401 100 3.41 0.135 0.704 0.348 0.448 0.75 1.02 0.00167 0.00228 401 172 0.958 0.0315 0.328 0.166 0.216 0.629 0.697 0.0014 0.00156 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 401 100 0.68 0.0299 0.0829 0.162 0.211 0.324 0.351 0.000723 0.000783 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 401 53970.924 0.005 0.12 2.95 5.35 0.261 0.422 1.29 2.09 0.00288 0.00467 ! Validation 401 53970.924 0.005 0.0323 0.143 0.788 0.168 0.219 0.369 0.461 0.000824 0.00103 Wall time: 53970.92517829873 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 402 100 0.322 0.0145 0.0308 0.111 0.147 0.169 0.214 0.000376 0.000478 402 172 1.01 0.0111 0.793 0.0966 0.128 1.06 1.09 0.00236 0.00242 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 402 100 1.57 0.0102 1.36 0.094 0.123 1.42 1.42 0.00317 0.00317 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 402 54109.008 0.005 0.0172 0.2 0.544 0.12 0.16 0.428 0.545 0.000954 0.00122 ! Validation 402 54109.008 0.005 0.0123 0.634 0.88 0.102 0.135 0.925 0.97 0.00206 0.00217 Wall time: 54109.008791826665 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 403 100 0.275 0.00915 0.0923 0.0874 0.117 0.32 0.37 0.000714 0.000826 403 172 0.71 0.00827 0.544 0.084 0.111 0.878 0.899 0.00196 0.00201 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 403 100 0.25 0.00887 0.0726 0.0877 0.115 0.326 0.328 0.000728 0.000733 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 403 54245.785 0.005 0.01 0.144 0.344 0.0916 0.122 0.366 0.461 0.000817 0.00103 ! 
Validation 403 54245.785 0.005 0.0102 0.0458 0.25 0.0932 0.123 0.209 0.261 0.000467 0.000582 Wall time: 54245.7849801979 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 404 100 0.171 0.00757 0.0198 0.0801 0.106 0.135 0.171 0.000301 0.000382 404 172 0.172 0.00689 0.0344 0.0762 0.101 0.181 0.226 0.000405 0.000504 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 404 100 0.337 0.00598 0.218 0.0714 0.0943 0.567 0.568 0.00127 0.00127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 404 54380.349 0.005 0.00812 0.099 0.261 0.0825 0.11 0.3 0.383 0.00067 0.000856 ! Validation 404 54380.349 0.005 0.00747 0.0751 0.224 0.0796 0.105 0.267 0.334 0.000596 0.000745 Wall time: 54380.34942147462 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 405 100 0.197 0.00681 0.0613 0.0749 0.101 0.26 0.302 0.00058 0.000673 405 172 0.179 0.00649 0.0487 0.0738 0.0982 0.221 0.269 0.000494 0.0006 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 405 100 0.603 0.00558 0.492 0.0691 0.091 0.853 0.854 0.0019 0.00191 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 405 54519.170 0.005 0.00684 0.0967 0.233 0.0757 0.101 0.311 0.379 0.000694 0.000846 ! Validation 405 54519.170 0.005 0.00704 0.191 0.332 0.0772 0.102 0.484 0.533 0.00108 0.00119 Wall time: 54519.1704459209 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 406 100 0.159 0.00603 0.0383 0.0706 0.0946 0.184 0.239 0.000411 0.000533 406 172 0.153 0.00599 0.0328 0.0704 0.0943 0.174 0.221 0.000388 0.000493 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 406 100 0.105 0.00503 0.00467 0.0652 0.0864 0.079 0.0833 0.000176 0.000186 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 406 54652.386 0.005 0.00639 0.0705 0.198 0.0731 0.0974 0.255 0.324 0.000569 0.000722 ! Validation 406 54652.386 0.005 0.00635 0.0686 0.196 0.0733 0.0971 0.274 0.319 0.000613 0.000712 Wall time: 54652.38675547065 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 407 100 0.132 0.00529 0.0266 0.0663 0.0886 0.155 0.199 0.000347 0.000443 407 172 0.173 0.0055 0.0628 0.0671 0.0903 0.28 0.305 0.000626 0.000682 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 407 100 0.135 0.00465 0.0416 0.0626 0.0831 0.247 0.249 0.000551 0.000555 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 407 54787.547 0.005 0.00571 0.0484 0.163 0.0689 0.0921 0.215 0.268 0.000479 0.000598 ! Validation 407 54787.547 0.005 0.00594 0.0338 0.153 0.0706 0.0939 0.171 0.224 0.000382 0.0005 Wall time: 54787.54740385059 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 408 100 0.175 0.00582 0.0588 0.0698 0.093 0.231 0.296 0.000515 0.00066 408 172 0.182 0.00647 0.0528 0.0733 0.098 0.225 0.28 0.000502 0.000625 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 408 100 0.295 0.00566 0.182 0.0692 0.0917 0.519 0.52 0.00116 0.00116 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 408 54920.738 0.005 0.0056 0.0784 0.19 0.0682 0.0912 0.272 0.341 0.000607 0.000762 ! 
Validation 408 54920.738 0.005 0.0069 0.302 0.44 0.076 0.101 0.619 0.669 0.00138 0.00149 Wall time: 54920.738948022015 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 409 100 0.121 0.00509 0.0193 0.0652 0.0869 0.146 0.169 0.000325 0.000378 409 172 0.126 0.00488 0.0289 0.064 0.0851 0.164 0.207 0.000367 0.000462 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 409 100 0.348 0.00468 0.254 0.0627 0.0834 0.614 0.614 0.00137 0.00137 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 409 55054.273 0.005 0.00545 0.0684 0.177 0.0672 0.09 0.254 0.319 0.000566 0.000712 ! Validation 409 55054.273 0.005 0.00577 0.175 0.29 0.0697 0.0925 0.453 0.509 0.00101 0.00114 Wall time: 55054.273800599854 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 410 100 0.12 0.00494 0.0213 0.0638 0.0857 0.14 0.178 0.000313 0.000397 410 172 0.199 0.00487 0.101 0.0634 0.085 0.367 0.388 0.000818 0.000866 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 410 100 0.092 0.00439 0.00431 0.0604 0.0807 0.0739 0.08 0.000165 0.000179 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 410 55187.471 0.005 0.00525 0.0536 0.159 0.0659 0.0883 0.227 0.282 0.000506 0.000629 ! Validation 410 55187.471 0.005 0.00548 0.042 0.151 0.0678 0.0902 0.208 0.25 0.000465 0.000557 Wall time: 55187.471024830826 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 411 100 0.459 0.00526 0.354 0.0655 0.0883 0.703 0.725 0.00157 0.00162 411 172 0.118 0.00483 0.0219 0.0636 0.0846 0.135 0.18 0.000302 0.000403 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 411 100 0.0992 0.00444 0.0104 0.0608 0.0812 0.121 0.125 0.00027 0.000278 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 411 55322.607 0.005 0.00509 0.0586 0.16 0.0649 0.0869 0.232 0.295 0.000519 0.000659 ! Validation 411 55322.607 0.005 0.00543 0.0866 0.195 0.0676 0.0898 0.312 0.358 0.000696 0.0008 Wall time: 55322.60743653681 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 412 100 0.216 0.00472 0.122 0.0624 0.0837 0.4 0.425 0.000894 0.000948 412 172 0.15 0.00458 0.0585 0.0617 0.0824 0.255 0.295 0.000569 0.000658 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 412 100 0.185 0.00422 0.1 0.0593 0.0791 0.385 0.386 0.00086 0.000862 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 412 55455.818 0.005 0.00509 0.0623 0.164 0.0649 0.0869 0.247 0.304 0.000552 0.000679 ! Validation 412 55455.818 0.005 0.00523 0.0443 0.149 0.0662 0.0881 0.203 0.257 0.000454 0.000573 Wall time: 55455.81965320697 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 413 100 0.127 0.00532 0.0203 0.0669 0.0889 0.136 0.174 0.000302 0.000387 413 172 0.114 0.00466 0.021 0.0624 0.0832 0.139 0.177 0.00031 0.000394 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 413 100 0.0862 0.00421 0.00189 0.0591 0.0791 0.0498 0.053 0.000111 0.000118 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 413 55591.286 0.005 0.00511 0.0739 0.176 0.0651 0.0871 0.265 0.331 0.000591 0.000739 ! 
Validation 413 55591.286 0.005 0.00519 0.0372 0.141 0.066 0.0878 0.194 0.235 0.000433 0.000525 Wall time: 55591.28662418574 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 414 100 0.13 0.00488 0.033 0.0633 0.0851 0.187 0.221 0.000417 0.000494 414 172 0.134 0.00462 0.0415 0.0619 0.0828 0.197 0.248 0.000439 0.000554 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 414 100 0.0865 0.00418 0.00295 0.0585 0.0788 0.0582 0.0662 0.00013 0.000148 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 414 55724.578 0.005 0.00486 0.049 0.146 0.0633 0.0849 0.215 0.27 0.00048 0.000602 ! Validation 414 55724.578 0.005 0.00508 0.0312 0.133 0.0653 0.0868 0.176 0.215 0.000393 0.00048 Wall time: 55724.57857095264 ! Best model 414 0.133 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 415 100 0.246 0.00538 0.139 0.0665 0.0894 0.432 0.454 0.000965 0.00101 415 172 0.105 0.00483 0.0085 0.0637 0.0847 0.0864 0.112 0.000193 0.000251 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 415 100 0.107 0.00415 0.0241 0.059 0.0785 0.187 0.189 0.000419 0.000422 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 415 55857.832 0.005 0.00475 0.0424 0.137 0.0626 0.084 0.201 0.251 0.000448 0.00056 ! Validation 415 55857.832 0.005 0.00512 0.056 0.158 0.0656 0.0872 0.221 0.288 0.000493 0.000643 Wall time: 55857.83252124069 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 416 100 0.388 0.00465 0.295 0.0617 0.0831 0.645 0.662 0.00144 0.00148 416 172 0.122 0.00477 0.0263 0.0625 0.0841 0.158 0.198 0.000354 0.000441 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 416 100 0.104 0.00411 0.0213 0.0581 0.0781 0.173 0.178 0.000387 0.000397 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 416 55995.021 0.005 0.00483 0.0679 0.165 0.0632 0.0847 0.25 0.318 0.000559 0.000709 ! Validation 416 55995.021 0.005 0.00496 0.104 0.203 0.0645 0.0858 0.35 0.393 0.000781 0.000877 Wall time: 55995.02165997401 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 417 100 0.133 0.0053 0.0269 0.0656 0.0887 0.158 0.2 0.000354 0.000446 417 172 0.193 0.00466 0.0998 0.0617 0.0832 0.348 0.385 0.000777 0.000859 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 417 100 0.0988 0.00418 0.0152 0.0589 0.0787 0.148 0.15 0.00033 0.000336 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 417 56128.282 0.005 0.00473 0.0468 0.141 0.0625 0.0838 0.208 0.263 0.000465 0.000588 ! Validation 417 56128.282 0.005 0.00518 0.0239 0.127 0.0657 0.0877 0.154 0.188 0.000344 0.000421 Wall time: 56128.28241539886 ! Best model 417 0.127 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 418 100 0.162 0.00533 0.0554 0.0663 0.089 0.246 0.287 0.000549 0.00064 418 172 0.154 0.00473 0.0599 0.0623 0.0838 0.241 0.298 0.000537 0.000665 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 418 100 0.236 0.00426 0.15 0.0594 0.0795 0.471 0.473 0.00105 0.00105 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 418 56265.913 0.005 0.00482 0.0705 0.167 0.0632 0.0846 0.258 0.324 0.000576 0.000722 ! 
Validation 418 56265.913 0.005 0.00514 0.106 0.209 0.0659 0.0874 0.343 0.398 0.000766 0.000887 Wall time: 56265.9130790676 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 419 100 0.135 0.00473 0.04 0.0629 0.0838 0.196 0.244 0.000437 0.000544 419 172 0.402 0.00881 0.226 0.0858 0.114 0.521 0.579 0.00116 0.00129 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 419 100 0.344 0.0076 0.191 0.0776 0.106 0.529 0.533 0.00118 0.00119 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 419 56401.052 0.005 0.00531 0.071 0.177 0.0654 0.0888 0.243 0.324 0.000543 0.000724 ! Validation 419 56401.052 0.005 0.00828 0.0881 0.254 0.0825 0.111 0.32 0.362 0.000713 0.000807 Wall time: 56401.05295347888 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 420 100 0.144 0.00455 0.0533 0.0616 0.0821 0.247 0.281 0.000552 0.000628 420 172 0.124 0.00437 0.0365 0.0599 0.0805 0.198 0.233 0.000443 0.000519 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 420 100 0.0869 0.00387 0.00947 0.0566 0.0758 0.114 0.119 0.000255 0.000265 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 420 56535.213 0.005 0.00488 0.0453 0.143 0.0634 0.0851 0.204 0.259 0.000456 0.000579 ! Validation 420 56535.213 0.005 0.00468 0.0249 0.119 0.0626 0.0833 0.156 0.192 0.000349 0.000429 Wall time: 56535.21322345268 ! Best model 420 0.119 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 421 100 0.0997 0.00411 0.0176 0.0587 0.0781 0.129 0.161 0.000288 0.00036 421 172 0.176 0.00419 0.0918 0.0592 0.0789 0.344 0.369 0.000769 0.000824 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 421 100 0.102 0.00485 0.00463 0.0636 0.0849 0.0749 0.0829 0.000167 0.000185 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 421 56669.117 0.005 0.00464 0.0459 0.139 0.062 0.083 0.204 0.261 0.000456 0.000582 ! Validation 421 56669.117 0.005 0.00553 0.0567 0.167 0.0687 0.0906 0.229 0.29 0.00051 0.000648 Wall time: 56669.117014096584 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 422 100 0.123 0.00472 0.029 0.0633 0.0837 0.159 0.207 0.000355 0.000463 422 172 0.201 0.00483 0.105 0.0635 0.0847 0.369 0.394 0.000824 0.000879 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 422 100 0.0901 0.0041 0.00806 0.0584 0.078 0.0978 0.109 0.000218 0.000244 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 422 56803.721 0.005 0.00475 0.0566 0.151 0.0626 0.0839 0.231 0.29 0.000517 0.000647 ! Validation 422 56803.721 0.005 0.00484 0.0714 0.168 0.0637 0.0847 0.272 0.326 0.000608 0.000727 Wall time: 56803.721122108866 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 423 100 0.234 0.00426 0.149 0.0592 0.0795 0.393 0.471 0.000877 0.00105 423 172 0.19 0.00465 0.0971 0.0621 0.083 0.345 0.38 0.00077 0.000848 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 423 100 0.0971 0.00417 0.0136 0.0589 0.0787 0.139 0.142 0.00031 0.000317 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 423 56936.975 0.005 0.0047 0.058 0.152 0.0622 0.0835 0.236 0.293 0.000527 0.000655 ! 
Validation 423 56936.975 0.005 0.00499 0.0273 0.127 0.0646 0.0861 0.161 0.201 0.000359 0.000449 Wall time: 56936.9755105786 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 424 100 0.153 0.0043 0.0672 0.0599 0.0799 0.282 0.316 0.00063 0.000705 424 172 0.115 0.00422 0.0307 0.0592 0.0791 0.17 0.213 0.000379 0.000476 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 424 100 0.0835 0.00388 0.00579 0.0566 0.0759 0.0902 0.0927 0.000201 0.000207 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 424 57070.249 0.005 0.00463 0.0467 0.139 0.0618 0.0829 0.212 0.263 0.000474 0.000588 ! Validation 424 57070.249 0.005 0.00466 0.031 0.124 0.0624 0.0832 0.176 0.214 0.000393 0.000479 Wall time: 57070.249019993935 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 425 100 0.192 0.006 0.0725 0.0706 0.0944 0.243 0.328 0.000542 0.000732 425 172 0.0987 0.00432 0.0122 0.06 0.0801 0.103 0.135 0.000231 0.0003 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 425 100 0.09 0.00384 0.0132 0.0561 0.0755 0.136 0.14 0.000305 0.000313 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 425 57203.901 0.005 0.00503 0.0684 0.169 0.0644 0.0864 0.247 0.319 0.000551 0.000712 ! Validation 425 57203.901 0.005 0.00456 0.0233 0.115 0.0618 0.0823 0.152 0.186 0.000339 0.000415 Wall time: 57203.90150402486 ! Best model 425 0.115 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 426 100 0.26 0.00459 0.168 0.0612 0.0826 0.484 0.5 0.00108 0.00112 426 172 0.139 0.00442 0.0506 0.0605 0.081 0.21 0.274 0.00047 0.000612 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 426 100 0.0913 0.00423 0.00676 0.0579 0.0792 0.0885 0.1 0.000197 0.000224 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 426 57339.329 0.005 0.0044 0.0399 0.128 0.0602 0.0808 0.191 0.243 0.000427 0.000543 ! Validation 426 57339.329 0.005 0.00491 0.0241 0.122 0.0638 0.0854 0.152 0.189 0.00034 0.000422 Wall time: 57339.329570355825 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 427 100 0.223 0.00519 0.12 0.0661 0.0878 0.385 0.421 0.00086 0.00094 427 172 0.108 0.00449 0.0184 0.0607 0.0816 0.132 0.165 0.000295 0.000369 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 427 100 0.0908 0.00408 0.00908 0.0584 0.0779 0.109 0.116 0.000244 0.000259 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 427 57472.593 0.005 0.00534 0.0772 0.184 0.0663 0.089 0.27 0.339 0.000603 0.000756 ! Validation 427 57472.593 0.005 0.0048 0.0224 0.118 0.0638 0.0844 0.146 0.182 0.000327 0.000407 Wall time: 57472.59360825084 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 428 100 0.105 0.00448 0.0151 0.0609 0.0816 0.117 0.149 0.000262 0.000334 428 172 0.227 0.0044 0.139 0.0598 0.0808 0.426 0.454 0.00095 0.00101 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 428 100 0.111 0.00445 0.0218 0.0607 0.0812 0.176 0.18 0.000394 0.000401 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 428 57605.952 0.005 0.00447 0.0452 0.135 0.0607 0.0815 0.207 0.259 0.000462 0.000578 ! 
Validation 428 57605.952 0.005 0.00511 0.0867 0.189 0.0654 0.0871 0.303 0.359 0.000677 0.000801 Wall time: 57605.952503209934 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 429 100 0.124 0.00419 0.0397 0.059 0.0789 0.201 0.243 0.000448 0.000542 429 172 0.294 0.00497 0.194 0.0643 0.0859 0.454 0.537 0.00101 0.0012 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 429 100 0.216 0.00507 0.115 0.0655 0.0867 0.411 0.413 0.000917 0.000922 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 429 57739.211 0.005 0.00514 0.0768 0.18 0.0646 0.0874 0.258 0.337 0.000576 0.000753 ! Validation 429 57739.211 0.005 0.00599 0.135 0.254 0.0711 0.0943 0.384 0.447 0.000857 0.000998 Wall time: 57739.21136347391 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 430 100 0.108 0.00423 0.0232 0.0591 0.0793 0.138 0.186 0.000307 0.000415 430 172 0.192 0.0049 0.0938 0.064 0.0853 0.273 0.373 0.000609 0.000833 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 430 100 0.14 0.00496 0.0409 0.0645 0.0858 0.243 0.247 0.000543 0.00055 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 430 57872.476 0.005 0.00463 0.0479 0.141 0.0618 0.0829 0.215 0.267 0.00048 0.000595 ! Validation 430 57872.476 0.005 0.00569 0.0419 0.156 0.0693 0.0919 0.2 0.249 0.000446 0.000556 Wall time: 57872.475976570975 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 431 100 0.176 0.00437 0.0882 0.0601 0.0806 0.324 0.362 0.000723 0.000808 431 172 0.127 0.00482 0.0306 0.0628 0.0846 0.176 0.213 0.000393 0.000475 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 431 100 0.147 0.00433 0.0606 0.059 0.0801 0.298 0.3 0.000665 0.00067 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 431 58005.988 0.005 0.00444 0.0487 0.137 0.0605 0.0811 0.209 0.269 0.000465 0.0006 ! Validation 431 58005.988 0.005 0.00488 0.0267 0.124 0.064 0.0852 0.156 0.199 0.000348 0.000444 Wall time: 58005.98809750099 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 432 100 0.0912 0.00413 0.00864 0.0583 0.0783 0.0921 0.113 0.000206 0.000253 432 172 0.253 0.0103 0.0462 0.0946 0.124 0.201 0.262 0.000448 0.000584 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 432 100 0.326 0.0111 0.104 0.0985 0.128 0.373 0.392 0.000834 0.000876 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 432 58139.260 0.005 0.00509 0.0578 0.16 0.0639 0.0869 0.226 0.293 0.000505 0.000654 ! Validation 432 58139.260 0.005 0.0118 0.177 0.413 0.101 0.132 0.46 0.513 0.00103 0.00115 Wall time: 58139.260613632854 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 433 100 0.18 0.00637 0.053 0.0741 0.0972 0.246 0.28 0.00055 0.000626 433 172 0.115 0.00497 0.016 0.0638 0.0859 0.121 0.154 0.00027 0.000344 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 433 100 0.0912 0.00384 0.0145 0.0563 0.0755 0.143 0.147 0.000319 0.000327 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 433 58272.506 0.005 0.00578 0.0904 0.206 0.069 0.0926 0.281 0.366 0.000628 0.000818 ! 
Validation 433 58272.506 0.005 0.00451 0.0264 0.117 0.0612 0.0819 0.158 0.198 0.000352 0.000442 Wall time: 58272.506493402645 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 434 100 0.2 0.0045 0.11 0.0611 0.0817 0.37 0.404 0.000826 0.000901 434 172 0.153 0.00607 0.0314 0.0706 0.095 0.147 0.216 0.000328 0.000482 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 434 100 0.0908 0.00451 0.00067 0.061 0.0818 0.025 0.0315 5.58e-05 7.04e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 434 58406.046 0.005 0.00453 0.0487 0.139 0.0612 0.082 0.211 0.269 0.000471 0.0006 ! Validation 434 58406.046 0.005 0.005 0.0382 0.138 0.0651 0.0862 0.192 0.238 0.000429 0.000531 Wall time: 58406.046333345585 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 435 100 0.102 0.00422 0.0174 0.0595 0.0792 0.126 0.161 0.000281 0.000359 435 172 0.127 0.00427 0.042 0.0597 0.0796 0.199 0.25 0.000445 0.000558 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 435 100 0.0931 0.00415 0.0102 0.0593 0.0785 0.115 0.123 0.000257 0.000274 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 435 58539.664 0.005 0.00476 0.0657 0.161 0.0626 0.0841 0.239 0.312 0.000534 0.000697 ! Validation 435 58539.664 0.005 0.00486 0.0658 0.163 0.0642 0.0849 0.27 0.313 0.000604 0.000698 Wall time: 58539.66463756887 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 436 100 0.128 0.00398 0.0483 0.0569 0.0769 0.231 0.268 0.000515 0.000598 436 172 0.26 0.00372 0.186 0.0553 0.0744 0.493 0.525 0.0011 0.00117 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 436 100 0.0997 0.0038 0.0237 0.0557 0.0751 0.18 0.188 0.000402 0.000419 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 436 58682.841 0.005 0.00411 0.0378 0.12 0.0581 0.0781 0.188 0.237 0.000419 0.000528 ! Validation 436 58682.841 0.005 0.00434 0.102 0.189 0.0603 0.0802 0.352 0.39 0.000785 0.00087 Wall time: 58682.84154136991 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 437 100 0.111 0.00397 0.0312 0.0575 0.0768 0.187 0.215 0.000417 0.000481 437 172 0.177 0.00465 0.0838 0.063 0.0831 0.318 0.353 0.000709 0.000787 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 437 100 0.129 0.00571 0.015 0.0704 0.0921 0.136 0.149 0.000303 0.000333 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 437 58816.103 0.005 0.00475 0.0654 0.16 0.0624 0.0839 0.244 0.312 0.000544 0.000696 ! Validation 437 58816.103 0.005 0.00619 0.0926 0.216 0.0738 0.0959 0.324 0.371 0.000724 0.000827 Wall time: 58816.10312965093 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 438 100 0.239 0.00765 0.0856 0.0791 0.107 0.278 0.356 0.00062 0.000796 438 172 0.156 0.00481 0.0593 0.064 0.0845 0.254 0.297 0.000566 0.000662 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 438 100 0.175 0.00552 0.0651 0.0683 0.0905 0.29 0.311 0.000647 0.000694 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 438 58949.294 0.005 0.00476 0.0584 0.154 0.0625 0.084 0.23 0.294 0.000513 0.000657 ! 
Validation 438 58949.294 0.005 0.00605 0.213 0.334 0.0717 0.0948 0.508 0.563 0.00113 0.00126 Wall time: 58949.29430726962 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 439 100 0.118 0.00519 0.0142 0.0655 0.0878 0.107 0.145 0.000239 0.000324 439 172 0.196 0.00788 0.0383 0.0804 0.108 0.205 0.239 0.000458 0.000532 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 439 100 0.0984 0.00443 0.00983 0.0598 0.0811 0.107 0.121 0.000239 0.00027 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 439 59082.605 0.005 0.00477 0.0596 0.155 0.0627 0.0841 0.236 0.298 0.000527 0.000664 ! Validation 439 59082.605 0.005 0.00506 0.0651 0.166 0.0647 0.0867 0.255 0.311 0.00057 0.000694 Wall time: 59082.60573453503 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 440 100 0.166 0.0047 0.0717 0.0619 0.0835 0.296 0.326 0.000661 0.000728 440 172 0.259 0.0084 0.0906 0.0851 0.112 0.315 0.367 0.000702 0.000819 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 440 100 0.135 0.0062 0.0107 0.0729 0.0959 0.104 0.126 0.000232 0.000281 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 440 59217.170 0.005 0.0077 0.128 0.282 0.0777 0.107 0.326 0.435 0.000728 0.000972 ! Validation 440 59217.170 0.005 0.0069 0.0492 0.187 0.0766 0.101 0.221 0.27 0.000493 0.000603 Wall time: 59217.17028901074 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 441 100 0.119 0.00419 0.0355 0.0587 0.0789 0.207 0.23 0.000462 0.000512 441 172 0.0986 0.00389 0.0208 0.0566 0.076 0.139 0.176 0.000311 0.000392 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 441 100 0.11 0.00342 0.0414 0.0529 0.0712 0.246 0.248 0.000549 0.000554 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 441 59350.328 0.005 0.00431 0.0382 0.124 0.0596 0.08 0.184 0.238 0.000412 0.000531 ! Validation 441 59350.328 0.005 0.00404 0.0323 0.113 0.0579 0.0774 0.174 0.219 0.000387 0.000489 Wall time: 59350.32872455986 ! Best model 441 0.113 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 442 100 0.15 0.00358 0.0785 0.0541 0.0729 0.316 0.341 0.000705 0.000762 442 172 0.183 0.00556 0.0723 0.0669 0.0908 0.286 0.328 0.000637 0.000731 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 442 100 0.171 0.00472 0.0769 0.0628 0.0838 0.334 0.338 0.000745 0.000754 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 442 59483.485 0.005 0.00396 0.03 0.109 0.057 0.0767 0.164 0.211 0.000365 0.000471 ! Validation 442 59483.485 0.005 0.0055 0.0436 0.154 0.0678 0.0903 0.207 0.255 0.000461 0.000568 Wall time: 59483.48534069164 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 443 100 0.204 0.0046 0.112 0.062 0.0826 0.356 0.407 0.000795 0.00091 443 172 0.0859 0.00382 0.00946 0.0562 0.0753 0.0929 0.118 0.000207 0.000264 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 443 100 0.0712 0.00346 0.00198 0.0535 0.0717 0.0475 0.0543 0.000106 0.000121 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 443 59626.882 0.005 0.00426 0.0461 0.131 0.0593 0.0795 0.211 0.262 0.00047 0.000584 ! 
Validation 443 59626.882 0.005 0.00407 0.0239 0.105 0.0583 0.0778 0.153 0.188 0.000341 0.000421 Wall time: 59626.882807858754 ! Best model 443 0.105 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 444 100 0.104 0.00395 0.0247 0.0574 0.0766 0.153 0.192 0.000342 0.000428 444 172 0.235 0.00682 0.0988 0.0752 0.101 0.353 0.383 0.000788 0.000855 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 444 100 0.22 0.00697 0.0801 0.0751 0.102 0.339 0.345 0.000758 0.00077 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 444 59761.746 0.005 0.00434 0.0525 0.139 0.0596 0.0803 0.218 0.279 0.000486 0.000623 ! Validation 444 59761.746 0.005 0.00756 0.167 0.319 0.079 0.106 0.452 0.499 0.00101 0.00111 Wall time: 59761.74661400169 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 445 100 0.0881 0.00385 0.011 0.0564 0.0756 0.105 0.128 0.000234 0.000286 445 172 0.156 0.004 0.0763 0.0569 0.077 0.274 0.337 0.000612 0.000751 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 445 100 0.0809 0.0037 0.00701 0.0556 0.0741 0.0937 0.102 0.000209 0.000228 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 445 59898.554 0.005 0.00544 0.105 0.214 0.0655 0.0899 0.263 0.396 0.000586 0.000883 ! Validation 445 59898.554 0.005 0.0043 0.0547 0.141 0.06 0.0799 0.244 0.285 0.000544 0.000636 Wall time: 59898.554416399915 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 446 100 0.232 0.00485 0.135 0.064 0.0849 0.427 0.447 0.000953 0.000999 446 172 0.126 0.0037 0.0522 0.0551 0.0742 0.246 0.278 0.00055 0.000621 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 446 100 0.115 0.00327 0.0495 0.0516 0.0697 0.269 0.271 0.0006 0.000605 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 446 60031.761 0.005 0.00396 0.0353 0.114 0.057 0.0766 0.178 0.229 0.000397 0.000511 ! Validation 446 60031.761 0.005 0.00386 0.029 0.106 0.0566 0.0757 0.165 0.208 0.000369 0.000464 Wall time: 60031.76182065159 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 447 100 0.0911 0.00386 0.014 0.057 0.0757 0.118 0.144 0.000263 0.000322 447 172 0.309 0.0058 0.193 0.0691 0.0928 0.496 0.535 0.00111 0.00119 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 447 100 0.376 0.00505 0.275 0.0655 0.0866 0.627 0.638 0.0014 0.00143 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 447 60164.993 0.005 0.00479 0.0633 0.159 0.0617 0.0843 0.214 0.306 0.000478 0.000684 ! Validation 447 60164.993 0.005 0.00558 0.216 0.327 0.0694 0.091 0.47 0.566 0.00105 0.00126 Wall time: 60164.99342191266 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 448 100 0.0824 0.00381 0.00624 0.056 0.0752 0.0768 0.0963 0.000172 0.000215 448 172 0.113 0.00369 0.0391 0.0552 0.074 0.205 0.241 0.000458 0.000538 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 448 100 0.0719 0.0035 0.00186 0.0534 0.0721 0.0341 0.0525 7.61e-05 0.000117 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 448 60308.040 0.005 0.00428 0.0384 0.124 0.0591 0.0797 0.188 0.239 0.00042 0.000533 ! 
Validation 448 60308.040 0.005 0.004 0.0366 0.117 0.0577 0.077 0.194 0.233 0.000432 0.00052 Wall time: 60308.040214792825 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 449 100 0.176 0.00358 0.104 0.055 0.0729 0.371 0.394 0.000828 0.000878 449 172 0.0989 0.00369 0.0251 0.0552 0.074 0.147 0.193 0.000329 0.000431 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 449 100 0.141 0.0033 0.0755 0.0517 0.07 0.333 0.335 0.000744 0.000747 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 449 60441.258 0.005 0.00381 0.0303 0.106 0.0559 0.0752 0.166 0.212 0.000371 0.000473 ! Validation 449 60441.258 0.005 0.00379 0.0709 0.147 0.0561 0.075 0.273 0.325 0.000608 0.000724 Wall time: 60441.25843215501 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 450 100 0.0857 0.00355 0.0147 0.0541 0.0726 0.113 0.148 0.000252 0.000329 450 172 0.169 0.00441 0.0805 0.0615 0.0809 0.31 0.346 0.000693 0.000772 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 450 100 0.215 0.00555 0.104 0.07 0.0908 0.387 0.392 0.000864 0.000876 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 450 60574.460 0.005 0.00386 0.0354 0.112 0.0562 0.0756 0.178 0.229 0.000397 0.000511 ! Validation 450 60574.460 0.005 0.00594 0.161 0.28 0.0723 0.0939 0.453 0.489 0.00101 0.00109 Wall time: 60574.4608417768 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 451 100 0.0902 0.00376 0.0149 0.0557 0.0747 0.112 0.149 0.000249 0.000332 451 172 0.0771 0.00355 0.00607 0.0544 0.0726 0.0752 0.0949 0.000168 0.000212 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 451 100 0.0702 0.00338 0.00256 0.0526 0.0709 0.0544 0.0617 0.000121 0.000138 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 451 60707.823 0.005 0.00388 0.0314 0.109 0.0564 0.0759 0.169 0.216 0.000377 0.000482 ! Validation 451 60707.823 0.005 0.00394 0.0296 0.108 0.0571 0.0764 0.162 0.21 0.000361 0.000468 Wall time: 60707.82344430685 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 452 100 0.0918 0.00356 0.0206 0.0544 0.0727 0.145 0.175 0.000323 0.00039 452 172 0.0855 0.00358 0.0139 0.0542 0.0729 0.124 0.144 0.000276 0.000321 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 452 100 0.125 0.00328 0.0592 0.0518 0.0698 0.294 0.297 0.000656 0.000662 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 452 60841.288 0.005 0.00422 0.0543 0.139 0.0587 0.0792 0.224 0.284 0.000499 0.000634 ! Validation 452 60841.288 0.005 0.00373 0.0486 0.123 0.0557 0.0744 0.22 0.268 0.000491 0.000599 Wall time: 60841.288441679906 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 453 100 0.133 0.00362 0.0604 0.0548 0.0733 0.274 0.299 0.000612 0.000668 453 172 0.0962 0.00443 0.00749 0.0608 0.0811 0.0809 0.105 0.000181 0.000235 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 453 100 0.0891 0.00428 0.00345 0.0588 0.0797 0.0545 0.0715 0.000122 0.00016 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 453 60974.905 0.005 0.00399 0.0458 0.126 0.0574 0.077 0.21 0.261 0.000469 0.000582 ! 
Validation 453 60974.905 0.005 0.00465 0.0677 0.161 0.0622 0.0831 0.271 0.317 0.000605 0.000708 Wall time: 60974.905108002946 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 454 100 0.105 0.00477 0.00974 0.0634 0.0841 0.0968 0.12 0.000216 0.000268 454 172 0.135 0.00434 0.0481 0.0599 0.0803 0.209 0.267 0.000466 0.000597 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 454 100 0.396 0.00446 0.306 0.0611 0.0814 0.67 0.674 0.0015 0.00151 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 454 61108.138 0.005 0.00447 0.0675 0.157 0.0608 0.0814 0.255 0.317 0.00057 0.000707 ! Validation 454 61108.138 0.005 0.00492 0.205 0.303 0.0652 0.0855 0.433 0.552 0.000967 0.00123 Wall time: 61108.13794795703 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 455 100 0.163 0.00509 0.0609 0.0654 0.0869 0.269 0.301 0.0006 0.000671 455 172 0.263 0.0043 0.177 0.0606 0.0799 0.445 0.512 0.000994 0.00114 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 455 100 0.082 0.00407 0.0007 0.0584 0.0777 0.0247 0.0322 5.51e-05 7.2e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 455 61241.640 0.005 0.00524 0.0809 0.186 0.0651 0.0882 0.273 0.347 0.000609 0.000773 ! Validation 455 61241.640 0.005 0.00462 0.0542 0.147 0.0627 0.0828 0.238 0.284 0.000532 0.000633 Wall time: 61241.64069436677 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 456 100 0.0811 0.00344 0.0124 0.0531 0.0714 0.106 0.136 0.000236 0.000303 456 172 0.0894 0.00333 0.0228 0.0527 0.0703 0.149 0.184 0.000332 0.000411 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 456 100 0.0738 0.0032 0.00982 0.051 0.0689 0.116 0.121 0.000258 0.00027 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 456 61378.010 0.005 0.00365 0.0281 0.101 0.0546 0.0736 0.162 0.204 0.000362 0.000456 ! Validation 456 61378.010 0.005 0.00365 0.0234 0.0965 0.0552 0.0737 0.143 0.186 0.000318 0.000416 Wall time: 61378.0101103019 ! Best model 456 0.096 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 457 100 0.115 0.00454 0.0246 0.062 0.0821 0.131 0.191 0.000292 0.000427 457 172 0.0862 0.00333 0.0196 0.0519 0.0703 0.131 0.171 0.000292 0.000381 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 457 100 0.0797 0.00328 0.0141 0.0515 0.0698 0.142 0.145 0.000317 0.000323 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 457 61511.833 0.005 0.00389 0.0449 0.123 0.0566 0.076 0.198 0.258 0.000441 0.000576 ! Validation 457 61511.833 0.005 0.00382 0.0232 0.0995 0.0563 0.0753 0.142 0.186 0.000318 0.000414 Wall time: 61511.83328951476 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 458 100 0.0858 0.00325 0.0208 0.0516 0.0695 0.149 0.176 0.000332 0.000392 458 172 0.0857 0.00375 0.0107 0.056 0.0746 0.0871 0.126 0.000194 0.000281 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 458 100 0.0649 0.00312 0.00245 0.0505 0.0681 0.0522 0.0604 0.000116 0.000135 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 458 61645.055 0.005 0.00378 0.0324 0.108 0.0556 0.0749 0.168 0.219 0.000376 0.00049 ! 
Validation 458 61645.055 0.005 0.00365 0.0458 0.119 0.0551 0.0736 0.221 0.261 0.000494 0.000582 Wall time: 61645.05513768084 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 459 100 0.0869 0.00342 0.0185 0.0528 0.0713 0.144 0.166 0.000322 0.00037 459 172 0.0893 0.00354 0.0185 0.0543 0.0725 0.133 0.166 0.000297 0.00037 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 459 100 0.0916 0.00328 0.0261 0.0513 0.0697 0.192 0.197 0.000428 0.000439 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 459 61778.267 0.005 0.00372 0.0366 0.111 0.0553 0.0743 0.187 0.233 0.000417 0.00052 ! Validation 459 61778.267 0.005 0.00368 0.0294 0.103 0.0555 0.0739 0.171 0.209 0.000381 0.000466 Wall time: 61778.26731365267 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 460 100 0.164 0.00437 0.0766 0.0604 0.0806 0.293 0.337 0.000654 0.000753 460 172 0.0806 0.00358 0.00897 0.0545 0.0729 0.0946 0.115 0.000211 0.000258 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 460 100 0.0727 0.00315 0.00971 0.0505 0.0684 0.114 0.12 0.000254 0.000268 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 460 61911.712 0.005 0.00403 0.0426 0.123 0.0577 0.0773 0.198 0.252 0.000441 0.000562 ! Validation 460 61911.712 0.005 0.00365 0.025 0.0979 0.055 0.0736 0.158 0.193 0.000352 0.00043 Wall time: 61911.71250763489 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 461 100 0.0902 0.0036 0.0183 0.0549 0.0731 0.134 0.165 0.000299 0.000368 461 172 0.0906 0.00345 0.0216 0.0529 0.0715 0.146 0.179 0.000327 0.0004 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 461 100 0.112 0.00333 0.0456 0.0519 0.0703 0.259 0.26 0.000579 0.00058 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 461 62048.095 0.005 0.00402 0.0527 0.133 0.0573 0.0772 0.217 0.28 0.000484 0.000624 ! Validation 461 62048.095 0.005 0.00375 0.0338 0.109 0.0559 0.0746 0.174 0.224 0.000388 0.0005 Wall time: 62048.095598864835 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 462 100 0.0755 0.0033 0.00942 0.0521 0.07 0.0898 0.118 0.0002 0.000264 462 172 0.11 0.00379 0.0341 0.0559 0.075 0.201 0.225 0.00045 0.000502 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 462 100 0.0985 0.0034 0.0306 0.0535 0.071 0.208 0.213 0.000465 0.000475 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 462 62185.500 0.005 0.00369 0.0328 0.107 0.055 0.074 0.173 0.221 0.000385 0.000493 ! Validation 462 62185.500 0.005 0.00383 0.0394 0.116 0.0568 0.0754 0.191 0.242 0.000427 0.00054 Wall time: 62185.500556733925 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 463 100 3.43 0.0425 2.58 0.19 0.251 1.83 1.96 0.00409 0.00437 463 172 1.28 0.0501 0.279 0.206 0.273 0.516 0.643 0.00115 0.00144 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 463 100 0.96 0.0468 0.0231 0.2 0.264 0.167 0.185 0.000374 0.000413 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 463 62318.793 0.005 0.0636 1.81 3.08 0.185 0.307 0.996 1.64 0.00222 0.00366 ! 
Validation 463 62318.793 0.005 0.0494 0.354 1.34 0.206 0.271 0.582 0.725 0.0013 0.00162 Wall time: 62318.79331295192 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 464 100 0.373 0.014 0.0941 0.109 0.144 0.327 0.374 0.000729 0.000834 464 172 0.223 0.0103 0.0167 0.0931 0.124 0.127 0.157 0.000283 0.000352 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 464 100 0.209 0.00936 0.0219 0.09 0.118 0.178 0.18 0.000397 0.000403 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 464 62452.007 0.005 0.0198 0.257 0.654 0.127 0.172 0.468 0.618 0.00105 0.00138 ! Validation 464 62452.007 0.005 0.0107 0.0528 0.267 0.0955 0.126 0.22 0.28 0.000492 0.000625 Wall time: 62452.00736498274 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 465 100 0.193 0.00786 0.0358 0.0809 0.108 0.198 0.231 0.000443 0.000515 465 172 0.282 0.00676 0.147 0.0752 0.1 0.435 0.466 0.000972 0.00104 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 465 100 0.164 0.0061 0.0421 0.0716 0.0952 0.247 0.25 0.000551 0.000558 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 465 62585.241 0.005 0.00851 0.0849 0.255 0.0843 0.112 0.277 0.355 0.000618 0.000792 ! Validation 465 62585.241 0.005 0.00723 0.121 0.265 0.0781 0.104 0.373 0.423 0.000832 0.000944 Wall time: 62585.24175000377 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 466 100 0.143 0.00633 0.0166 0.0729 0.0969 0.12 0.157 0.000269 0.00035 466 172 0.336 0.00596 0.216 0.0697 0.0941 0.532 0.567 0.00119 0.00127 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 466 100 0.283 0.00507 0.182 0.0649 0.0867 0.517 0.519 0.00115 0.00116 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 466 62720.581 0.005 0.00629 0.0538 0.18 0.0721 0.0966 0.224 0.282 0.000499 0.00063 ! Validation 466 62720.581 0.005 0.0061 0.264 0.386 0.0716 0.0952 0.592 0.626 0.00132 0.0014 Wall time: 62720.5818115687 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 467 100 0.154 0.0053 0.0477 0.0657 0.0887 0.226 0.266 0.000503 0.000594 467 172 0.147 0.00575 0.0316 0.0688 0.0924 0.18 0.217 0.000402 0.000484 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 467 100 0.135 0.00487 0.038 0.0637 0.085 0.235 0.238 0.000523 0.00053 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 467 62853.764 0.005 0.00554 0.0712 0.182 0.0675 0.0907 0.265 0.325 0.000591 0.000726 ! Validation 467 62853.764 0.005 0.00585 0.0698 0.187 0.0698 0.0932 0.262 0.322 0.000584 0.000718 Wall time: 62853.764393543825 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 468 100 0.116 0.00464 0.0234 0.0618 0.083 0.15 0.186 0.000336 0.000416 468 172 0.112 0.00439 0.024 0.0603 0.0807 0.15 0.189 0.000334 0.000421 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 468 100 0.089 0.00414 0.00608 0.0582 0.0784 0.0872 0.095 0.000195 0.000212 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 468 62986.951 0.005 0.005 0.0365 0.136 0.0639 0.0861 0.187 0.233 0.000417 0.00052 ! 
Validation 468 62986.951 0.005 0.00502 0.0422 0.143 0.0646 0.0864 0.206 0.25 0.000459 0.000559 Wall time: 62986.951576588675 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 469 100 0.205 0.00481 0.109 0.0633 0.0845 0.378 0.402 0.000844 0.000897 469 172 0.157 0.00465 0.0644 0.0609 0.0831 0.274 0.309 0.000612 0.00069 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 469 100 0.0817 0.00402 0.00125 0.0574 0.0773 0.0409 0.0431 9.13e-05 9.63e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 469 63120.142 0.005 0.00471 0.0526 0.147 0.062 0.0836 0.226 0.279 0.000504 0.000624 ! Validation 469 63120.142 0.005 0.0049 0.0316 0.13 0.0637 0.0853 0.175 0.216 0.000392 0.000483 Wall time: 63120.14257024182 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 470 100 0.0991 0.00411 0.017 0.0581 0.0781 0.129 0.159 0.000289 0.000354 470 172 0.105 0.00427 0.0191 0.0587 0.0797 0.138 0.168 0.000308 0.000376 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 470 100 0.0828 0.00379 0.00693 0.0557 0.075 0.092 0.101 0.000205 0.000226 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 470 63264.396 0.005 0.00446 0.039 0.128 0.0603 0.0814 0.19 0.241 0.000425 0.000537 ! Validation 470 63264.396 0.005 0.00461 0.0281 0.12 0.0619 0.0828 0.156 0.204 0.000349 0.000456 Wall time: 63264.39615573594 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 471 100 0.144 0.00462 0.0513 0.061 0.0828 0.241 0.276 0.000538 0.000616 471 172 0.119 0.00417 0.0354 0.0585 0.0787 0.205 0.229 0.000458 0.000512 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 471 100 0.0851 0.00376 0.00996 0.0555 0.0747 0.115 0.122 0.000257 0.000271 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 471 63401.764 0.005 0.00453 0.065 0.156 0.0608 0.082 0.244 0.311 0.000545 0.000693 ! Validation 471 63401.764 0.005 0.00457 0.0218 0.113 0.0616 0.0824 0.146 0.18 0.000326 0.000402 Wall time: 63401.764807319734 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 472 100 0.141 0.00445 0.0517 0.0601 0.0813 0.245 0.277 0.000547 0.000619 472 172 0.103 0.00418 0.0195 0.0585 0.0788 0.148 0.17 0.000331 0.00038 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 472 100 0.0751 0.00369 0.00136 0.0548 0.074 0.0316 0.045 7.05e-05 0.0001 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 472 63534.963 0.005 0.00429 0.0573 0.143 0.0591 0.0798 0.234 0.292 0.000523 0.000651 ! Validation 472 63534.963 0.005 0.00441 0.0277 0.116 0.0605 0.0809 0.158 0.203 0.000353 0.000452 Wall time: 63534.96387113072 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 473 100 0.117 0.004 0.0367 0.0571 0.077 0.194 0.233 0.000433 0.000521 473 172 0.105 0.00401 0.0253 0.0574 0.0771 0.164 0.194 0.000367 0.000433 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 473 100 0.0986 0.00351 0.0284 0.0536 0.0722 0.202 0.205 0.00045 0.000458 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 473 63668.215 0.005 0.00407 0.0355 0.117 0.0576 0.0777 0.184 0.229 0.00041 0.000512 ! 
Validation 473 63668.215 0.005 0.00427 0.0313 0.117 0.0594 0.0796 0.171 0.215 0.000383 0.000481 Wall time: 63668.21490714885 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 474 100 0.0867 0.00357 0.0153 0.0544 0.0728 0.111 0.151 0.000247 0.000336 474 172 0.089 0.00397 0.00956 0.0566 0.0768 0.0962 0.119 0.000215 0.000266 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 474 100 0.0699 0.00339 0.00213 0.0523 0.0709 0.0488 0.0562 0.000109 0.000125 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 474 63804.768 0.005 0.00392 0.0259 0.104 0.0564 0.0762 0.155 0.196 0.000347 0.000438 ! Validation 474 63804.768 0.005 0.00404 0.031 0.112 0.0578 0.0774 0.172 0.215 0.000384 0.000479 Wall time: 63804.76859087264 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 475 100 0.203 0.00443 0.114 0.0601 0.0811 0.383 0.412 0.000855 0.00092 475 172 0.101 0.00375 0.0257 0.0558 0.0746 0.168 0.195 0.000376 0.000436 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 475 100 0.131 0.00418 0.0479 0.0576 0.0787 0.261 0.267 0.000582 0.000595 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 475 63938.122 0.005 0.00403 0.0553 0.136 0.0573 0.0773 0.228 0.287 0.000509 0.00064 ! Validation 475 63938.122 0.005 0.00486 0.195 0.293 0.0633 0.085 0.491 0.538 0.0011 0.0012 Wall time: 63938.12214930868 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 476 100 0.0973 0.00378 0.0217 0.0552 0.0749 0.146 0.18 0.000327 0.000401 476 172 0.106 0.00408 0.0247 0.0579 0.0778 0.146 0.191 0.000325 0.000427 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 476 100 0.145 0.00372 0.0709 0.0552 0.0743 0.321 0.324 0.000718 0.000724 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 476 64071.326 0.005 0.00393 0.0386 0.117 0.0566 0.0764 0.192 0.239 0.000429 0.000534 ! Validation 476 64071.326 0.005 0.00435 0.0701 0.157 0.06 0.0804 0.264 0.323 0.000589 0.00072 Wall time: 64071.32592921704 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 477 100 0.0886 0.0039 0.0106 0.0563 0.0761 0.101 0.125 0.000225 0.00028 477 172 0.214 0.00371 0.14 0.0548 0.0742 0.425 0.456 0.000949 0.00102 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 477 100 0.0999 0.00342 0.0315 0.0529 0.0713 0.213 0.216 0.000475 0.000483 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 477 64204.545 0.005 0.00375 0.0298 0.105 0.0553 0.0747 0.166 0.21 0.00037 0.000469 ! Validation 477 64204.545 0.005 0.00406 0.0446 0.126 0.0581 0.0776 0.199 0.257 0.000445 0.000574 Wall time: 64204.545814323705 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 478 100 0.0884 0.00355 0.0175 0.0537 0.0726 0.121 0.161 0.00027 0.000359 478 172 0.0814 0.00314 0.0186 0.051 0.0683 0.146 0.166 0.000325 0.000371 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 478 100 0.0696 0.00324 0.00482 0.051 0.0694 0.0766 0.0846 0.000171 0.000189 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 478 64337.746 0.005 0.0038 0.0446 0.121 0.0557 0.0751 0.194 0.257 0.000433 0.000574 ! 
Validation 478 64337.746 0.005 0.00383 0.0193 0.0959 0.0563 0.0754 0.136 0.169 0.000303 0.000378 Wall time: 64337.74720248999 ! Best model 478 0.096 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 479 100 0.249 0.00512 0.146 0.0647 0.0872 0.41 0.466 0.000916 0.00104 479 172 0.152 0.00405 0.0706 0.0577 0.0776 0.268 0.324 0.000598 0.000723 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 479 100 0.225 0.0038 0.149 0.055 0.0751 0.468 0.47 0.00105 0.00105 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 479 64472.338 0.005 0.00387 0.0379 0.115 0.0561 0.0758 0.185 0.237 0.000414 0.000529 ! Validation 479 64472.338 0.005 0.00428 0.141 0.227 0.0595 0.0797 0.419 0.458 0.000935 0.00102 Wall time: 64472.33822694281 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 480 100 0.111 0.0036 0.0393 0.0541 0.0731 0.208 0.242 0.000465 0.000539 480 172 0.143 0.00377 0.0671 0.0552 0.0748 0.304 0.316 0.000678 0.000704 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 480 100 0.119 0.00326 0.0536 0.0515 0.0695 0.279 0.282 0.000623 0.00063 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 480 64605.523 0.005 0.00363 0.0303 0.103 0.0544 0.0734 0.169 0.212 0.000378 0.000473 ! Validation 480 64605.523 0.005 0.00382 0.0799 0.156 0.0562 0.0753 0.283 0.344 0.000632 0.000769 Wall time: 64605.52330102865 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 481 100 0.0842 0.00353 0.0137 0.0534 0.0723 0.104 0.142 0.000232 0.000318 481 172 0.235 0.00376 0.16 0.0555 0.0747 0.465 0.487 0.00104 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 481 100 0.133 0.00333 0.0668 0.0517 0.0703 0.312 0.315 0.000697 0.000703 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 481 64738.768 0.005 0.00358 0.0365 0.108 0.054 0.0729 0.183 0.233 0.000409 0.000519 ! Validation 481 64738.768 0.005 0.00386 0.164 0.241 0.0564 0.0757 0.459 0.494 0.00103 0.0011 Wall time: 64738.76892941864 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 482 100 0.09 0.00384 0.0133 0.0554 0.0755 0.109 0.141 0.000244 0.000314 482 172 0.235 0.00368 0.161 0.0546 0.0739 0.48 0.489 0.00107 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 482 100 0.0708 0.00337 0.00338 0.0522 0.0708 0.0524 0.0708 0.000117 0.000158 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 482 64871.965 0.005 0.00355 0.0328 0.104 0.0539 0.0726 0.175 0.22 0.00039 0.000492 ! Validation 482 64871.965 0.005 0.00378 0.0324 0.108 0.056 0.0749 0.18 0.219 0.000403 0.000489 Wall time: 64871.96531264484 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 483 100 0.178 0.00373 0.104 0.0549 0.0744 0.367 0.392 0.000819 0.000876 483 172 0.0699 0.00306 0.00874 0.0503 0.0674 0.0949 0.114 0.000212 0.000254 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 483 100 0.0902 0.00385 0.0132 0.056 0.0756 0.135 0.14 0.000301 0.000313 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 483 65005.192 0.005 0.00376 0.0444 0.12 0.0554 0.0747 0.206 0.257 0.00046 0.000573 ! 
Validation 483 65005.192 0.005 0.00433 0.0319 0.119 0.0598 0.0802 0.168 0.217 0.000375 0.000485 Wall time: 65005.191958970856 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 484 100 0.0792 0.00317 0.0157 0.051 0.0686 0.127 0.153 0.000284 0.000341 484 172 0.0813 0.00328 0.0156 0.0517 0.0698 0.126 0.152 0.00028 0.00034 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 484 100 0.0621 0.00307 0.00064 0.0497 0.0675 0.0274 0.0308 6.13e-05 6.88e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 484 65139.079 0.005 0.00363 0.0421 0.115 0.0545 0.0735 0.196 0.25 0.000438 0.000558 ! Validation 484 65139.079 0.005 0.00353 0.028 0.0986 0.054 0.0724 0.167 0.204 0.000372 0.000455 Wall time: 65139.079508469906 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 485 100 0.107 0.00384 0.0299 0.0567 0.0755 0.172 0.211 0.000385 0.000471 485 172 0.0919 0.00416 0.00867 0.0589 0.0786 0.0813 0.113 0.000181 0.000253 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 485 100 0.0935 0.00353 0.0228 0.0545 0.0724 0.182 0.184 0.000407 0.000411 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 485 65273.574 0.005 0.00368 0.0345 0.108 0.0548 0.0739 0.177 0.226 0.000396 0.000505 ! Validation 485 65273.574 0.005 0.00406 0.0329 0.114 0.0587 0.0776 0.173 0.221 0.000386 0.000493 Wall time: 65273.574012659024 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 486 100 0.0887 0.00359 0.017 0.0542 0.073 0.135 0.159 0.000301 0.000354 486 172 0.106 0.00396 0.0267 0.057 0.0767 0.178 0.199 0.000397 0.000445 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 486 100 0.0922 0.00349 0.0224 0.0538 0.072 0.176 0.182 0.000392 0.000407 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 486 65406.772 0.005 0.00373 0.0391 0.114 0.0552 0.0744 0.193 0.241 0.00043 0.000538 ! Validation 486 65406.772 0.005 0.00397 0.0289 0.108 0.0576 0.0768 0.168 0.207 0.000375 0.000462 Wall time: 65406.77194284601 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 487 100 0.0928 0.00345 0.0239 0.0531 0.0716 0.151 0.188 0.000336 0.00042 487 172 0.132 0.00533 0.0257 0.0661 0.089 0.168 0.195 0.000374 0.000436 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 487 100 0.0743 0.00362 0.00182 0.0541 0.0733 0.0432 0.0519 9.63e-05 0.000116 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 487 65539.979 0.005 0.0035 0.0279 0.098 0.0534 0.0721 0.161 0.204 0.00036 0.000454 ! Validation 487 65539.979 0.005 0.00396 0.0328 0.112 0.0576 0.0767 0.17 0.221 0.00038 0.000493 Wall time: 65539.97945146775 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 488 100 0.133 0.00396 0.0543 0.0575 0.0766 0.243 0.284 0.000542 0.000634 488 172 0.121 0.00368 0.0473 0.0552 0.0739 0.215 0.265 0.000481 0.000592 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 488 100 0.205 0.00403 0.124 0.0583 0.0773 0.422 0.43 0.000941 0.000959 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 488 65682.366 0.005 0.00401 0.0652 0.145 0.0575 0.0772 0.248 0.311 0.000554 0.000695 ! 
Validation 488 65682.366 0.005 0.00452 0.0831 0.174 0.0618 0.0819 0.268 0.351 0.000598 0.000784 Wall time: 65682.36643406004 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 489 100 0.151 0.00342 0.0826 0.0529 0.0712 0.332 0.35 0.000741 0.000781 489 172 0.218 0.0078 0.062 0.0802 0.108 0.252 0.303 0.000563 0.000677 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 489 100 0.0719 0.00353 0.00128 0.0535 0.0724 0.0331 0.0435 7.39e-05 9.72e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 489 65816.367 0.005 0.00451 0.0536 0.144 0.0603 0.0818 0.217 0.282 0.000484 0.000629 ! Validation 489 65816.367 0.005 0.004 0.0675 0.148 0.0575 0.0771 0.27 0.317 0.000604 0.000707 Wall time: 65816.36712437076 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 490 100 0.0849 0.00338 0.0173 0.053 0.0708 0.122 0.16 0.000272 0.000358 490 172 0.085 0.0037 0.0111 0.0541 0.0741 0.0992 0.128 0.000221 0.000287 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 490 100 0.0808 0.0032 0.0167 0.0509 0.069 0.153 0.158 0.000341 0.000352 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 490 65949.709 0.005 0.0039 0.0381 0.116 0.0563 0.0761 0.186 0.238 0.000416 0.000531 ! Validation 490 65949.709 0.005 0.00362 0.0195 0.092 0.0549 0.0733 0.137 0.17 0.000305 0.00038 Wall time: 65949.70967949694 ! Best model 490 0.092 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 491 100 0.0819 0.00305 0.021 0.0498 0.0673 0.144 0.177 0.000323 0.000394 491 172 0.1 0.00373 0.0256 0.0553 0.0745 0.172 0.195 0.000384 0.000435 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 491 100 0.0727 0.0033 0.00672 0.0522 0.07 0.0912 0.0999 0.000204 0.000223 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 491 66082.936 0.005 0.00331 0.0231 0.0892 0.0519 0.0701 0.144 0.185 0.000322 0.000413 ! Validation 491 66082.936 0.005 0.00377 0.0223 0.0977 0.0559 0.0748 0.147 0.182 0.000328 0.000406 Wall time: 66082.93628696771 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 492 100 0.0767 0.00337 0.00932 0.0523 0.0707 0.0981 0.118 0.000219 0.000263 492 172 0.14 0.00616 0.0165 0.0719 0.0957 0.12 0.157 0.000267 0.000349 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 492 100 0.084 0.00373 0.0095 0.0555 0.0744 0.117 0.119 0.00026 0.000265 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 492 66217.798 0.005 0.00381 0.0477 0.124 0.0555 0.0752 0.208 0.266 0.000465 0.000594 ! Validation 492 66217.798 0.005 0.00423 0.0525 0.137 0.0596 0.0793 0.232 0.279 0.000518 0.000623 Wall time: 66217.79883239185 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 493 100 0.138 0.00332 0.0715 0.0523 0.0702 0.298 0.326 0.000665 0.000727 493 172 0.0923 0.00323 0.0277 0.0511 0.0692 0.174 0.203 0.000388 0.000453 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 493 100 0.0656 0.003 0.00569 0.0493 0.0667 0.0855 0.0919 0.000191 0.000205 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 493 66351.037 0.005 0.00376 0.0426 0.118 0.0555 0.0748 0.196 0.252 0.000438 0.000561 ! 
Validation 493 66351.037 0.005 0.00344 0.0224 0.0912 0.0534 0.0715 0.147 0.182 0.000329 0.000407 Wall time: 66351.03785466868 ! Best model 493 0.091 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 494 100 0.0656 0.00299 0.00585 0.0494 0.0666 0.0699 0.0932 0.000156 0.000208 494 172 0.0781 0.00327 0.0128 0.0514 0.0696 0.113 0.138 0.000251 0.000308 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 494 100 0.0748 0.00289 0.017 0.0486 0.0655 0.156 0.159 0.000348 0.000354 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 494 66484.881 0.005 0.00323 0.0193 0.084 0.0513 0.0693 0.135 0.169 0.000301 0.000378 ! Validation 494 66484.881 0.005 0.00331 0.0267 0.093 0.0523 0.0701 0.151 0.199 0.000337 0.000444 Wall time: 66484.88155714097 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 495 100 0.0823 0.00346 0.0131 0.0543 0.0717 0.115 0.139 0.000257 0.000311 495 172 0.109 0.00302 0.0485 0.0496 0.067 0.237 0.268 0.00053 0.000599 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 495 100 0.0799 0.003 0.0199 0.049 0.0668 0.17 0.172 0.000379 0.000383 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 495 66618.103 0.005 0.00349 0.0359 0.106 0.0534 0.0719 0.182 0.231 0.000405 0.000515 ! Validation 495 66618.103 0.005 0.0034 0.0456 0.114 0.0529 0.0711 0.204 0.26 0.000455 0.000581 Wall time: 66618.10307144374 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 496 100 0.113 0.00369 0.0393 0.0553 0.0741 0.204 0.241 0.000456 0.000539 496 172 0.0848 0.00329 0.019 0.0521 0.0699 0.135 0.168 0.000302 0.000375 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 496 100 0.0639 0.00311 0.00173 0.0502 0.0679 0.0471 0.0507 0.000105 0.000113 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 496 66751.345 0.005 0.00407 0.0524 0.134 0.0573 0.0777 0.216 0.279 0.000483 0.000623 ! Validation 496 66751.345 0.005 0.00356 0.0209 0.0921 0.0544 0.0727 0.14 0.176 0.000313 0.000393 Wall time: 66751.34548158478 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 497 100 0.105 0.00314 0.0425 0.0504 0.0683 0.227 0.251 0.000507 0.000561 497 172 0.227 0.00455 0.136 0.0614 0.0822 0.405 0.449 0.000904 0.001 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 497 100 0.0965 0.00454 0.00569 0.0607 0.0821 0.0741 0.0919 0.000165 0.000205 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 497 66884.869 0.005 0.00337 0.0289 0.0962 0.0523 0.0707 0.159 0.207 0.000355 0.000461 ! Validation 497 66884.869 0.005 0.00478 0.0531 0.149 0.063 0.0842 0.234 0.281 0.000523 0.000627 Wall time: 66884.86950463476 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 498 100 0.108 0.0036 0.036 0.0548 0.0731 0.196 0.231 0.000438 0.000516 498 172 0.0752 0.00309 0.0133 0.0504 0.0678 0.114 0.141 0.000254 0.000314 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 498 100 0.146 0.00313 0.0832 0.0507 0.0682 0.35 0.351 0.000782 0.000784 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 498 67018.071 0.005 0.00435 0.0588 0.146 0.059 0.0803 0.223 0.296 0.000497 0.00066 ! 
Validation 498 67018.071 0.005 0.00355 0.0704 0.141 0.0541 0.0726 0.278 0.323 0.00062 0.000722 Wall time: 67018.0723302248 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 499 100 0.0644 0.00304 0.00362 0.05 0.0672 0.0545 0.0733 0.000122 0.000164 499 172 0.11 0.00369 0.0364 0.0556 0.074 0.202 0.233 0.00045 0.000519 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 499 100 0.07 0.00338 0.00238 0.0526 0.0708 0.0502 0.0595 0.000112 0.000133 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 499 67151.275 0.005 0.00326 0.0272 0.0923 0.0516 0.0695 0.159 0.201 0.000354 0.000449 ! Validation 499 67151.275 0.005 0.00371 0.0173 0.0915 0.0555 0.0742 0.126 0.16 0.000282 0.000358 Wall time: 67151.2756193257 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 500 100 0.114 0.00318 0.0508 0.0509 0.0687 0.252 0.275 0.000563 0.000613 500 172 0.292 0.00354 0.221 0.0539 0.0725 0.539 0.573 0.0012 0.00128 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 500 100 0.0906 0.00435 0.00351 0.0603 0.0804 0.0675 0.0722 0.000151 0.000161 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 500 67284.483 0.005 0.00343 0.0366 0.105 0.0528 0.0713 0.178 0.233 0.000397 0.000519 ! Validation 500 67284.483 0.005 0.00472 0.0486 0.143 0.0626 0.0837 0.23 0.269 0.000513 0.0006 Wall time: 67284.4831526489
! Stop training: max epochs
Wall time: 67284.5258346228
Cumulative wall time: 67284.5258346228
Using device: cuda
Please note that _all_ machine learning models running on CUDA hardware are generally somewhat nondeterministic and that this can manifest in small, generally unimportant variation in the final test errors.
Loading model... loaded model
Loading dataset... Processing dataset... Done!
Loaded dataset specified in test_config.yaml. Using all frames from the specified test dataset, yielding a test set size of 500 frames.
Starting...
--- Final result: ---
f_mae = 0.049286
f_rmse = 0.066177
e_mae = 0.123399
e_rmse = 0.148501
e/N_mae = 0.000275
e/N_rmse = 0.000331
Train end time: 2024-12-09_05:43:51
Training duration: 18h 45m 25s
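Throughout this log the total loss column is a weighted combination of the force term (loss_f) and the energy term (loss_e). The logged values are consistent with a force weight of roughly 20 and an energy weight of 1; for example, the epoch-493 validation row gives 20 × 0.00344 + 0.0224 ≈ 0.091, which is exactly the value recorded on the "! Best model 493 0.091" line. A minimal sketch of that bookkeeping (the weights are inferred from the logged numbers, not read from the run's actual training configuration):

```python
# Sketch of the weighted loss implied by the logged columns.
# The weights (~20 for forces, 1 for energies) are inferred by checking
# several logged (loss_f, loss_e, loss) triples; they are an assumption,
# not values read from the run's configuration file.
W_FORCE, W_ENERGY = 20.0, 1.0

def total_loss(loss_f: float, loss_e: float) -> float:
    return W_FORCE * loss_f + W_ENERGY * loss_e

# Epoch 493 validation row: loss_f = 0.00344, loss_e = 0.0224
print(round(total_loss(0.00344, 0.0224), 4))  # 0.0912, matching "! Best model 493 0.091"
```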
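The "! Best model &lt;epoch&gt; &lt;loss&gt;" lines mark the epochs at which the validation loss reached a new minimum (0.113 at epoch 441, 0.105 at 443, 0.096 at 456 and again at 478, 0.092 at 490, and 0.091 at 493); the most recent of these checkpoints is typically the one loaded for the final test evaluation. A generic sketch of this kind of validation-based checkpointing (illustrative only, not the framework's actual implementation):

```python
import math
import torch

# Generic validation-based checkpointing, as suggested by the
# "! Best model <epoch> <val_loss>" lines; illustrative only.
best_val_loss = math.inf

def maybe_save_best(model: torch.nn.Module, epoch: int, val_loss: float,
                    path: str = "best_model.pth") -> None:
    global best_val_loss
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        torch.save({"epoch": epoch, "state_dict": model.state_dict()}, path)
        print(f"! Best model {epoch} {val_loss:.3f}")
```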
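The warning about CUDA nondeterminism refers to the usual run-to-run variation of GPU kernels (non-deterministic reductions, cuDNN autotuning, and similar), which can shift the final test errors slightly. If tighter reproducibility is needed, generic PyTorch switches such as the following can help, at some cost in speed; these are general-purpose settings, not options taken from this run's configuration:

```python
import os
import torch

# Generic reproducibility settings for PyTorch on CUDA; not taken from this
# run's configuration, and some operators may still lack deterministic kernels.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"   # required by some cuBLAS ops
torch.manual_seed(0)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
torch.use_deterministic_algorithms(True, warn_only=True)  # warn instead of raising
```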
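The final test metrics are per-component force errors (f_mae, f_rmse), total-energy errors (e_mae, e_rmse), and the same energy errors divided by the number of atoms per structure (e/N_mae, e/N_rmse); the ratio e_mae / e/N_mae ≈ 0.123399 / 0.000275 ≈ 449 is consistent with test structures of roughly that many atoms. Units are not printed in the log. A small sketch of how such a summary can be computed from predicted and reference arrays (names and shapes here are assumptions for illustration, not the evaluation tool's API):

```python
import numpy as np

def mae_rmse(pred: np.ndarray, ref: np.ndarray) -> tuple[float, float]:
    err = pred - ref
    return float(np.abs(err).mean()), float(np.sqrt((err ** 2).mean()))

def summarize(pred_forces, ref_forces, pred_energy, ref_energy, n_atoms):
    """Assumed shapes: forces (n_atoms_total, 3); energies and n_atoms (n_frames,)."""
    f_mae, f_rmse = mae_rmse(pred_forces, ref_forces)            # per force component
    e_mae, e_rmse = mae_rmse(pred_energy, ref_energy)            # per structure
    epn_mae, epn_rmse = mae_rmse(pred_energy / n_atoms,
                                 ref_energy / n_atoms)           # per atom
    return {"f_mae": f_mae, "f_rmse": f_rmse,
            "e_mae": e_mae, "e_rmse": e_rmse,
            "e/N_mae": epn_mae, "e/N_rmse": epn_rmse}
```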
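Finally, the cumulative wall time of 67284.5 s corresponds to about 18 h 41 m of timed training, while the reported training duration of 18 h 45 m 25 s is measured from the start to the end timestamp; the roughly four-minute gap is presumably spent outside the timed loop (initial dataset processing and the final test evaluation). The arithmetic:

```python
from datetime import timedelta

timed = timedelta(seconds=67284.5258346228)              # cumulative wall time from the log
reported = timedelta(hours=18, minutes=45, seconds=25)   # reported "Training duration"
print(timed)                                  # 18:41:24.525835
print((reported - timed).total_seconds())     # ~240 s outside the timed training loop
```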