Train start time: 2024-12-08_10:57:11
Torch device: cuda
Processing dataset...
Loaded data: Batch(atomic_numbers=[2400000, 1], batch=[2400000], cell=[6000, 3, 3], edge_cell_shift=[82543766, 3], edge_index=[2, 82543766], forces=[2400000, 3], pbc=[6000, 3], pos=[2400000, 3], ptr=[6001], total_energy=[6000, 1])
processed data size: ~3295.80 MB
Cached processed data to disk
Done!
Successfully loaded the data set of type ASEDataset(6000)...
Replace string dataset_per_atom_total_energy_mean to -347.415620536814
Atomic outputs are scaled by: [H, C, N, O, Zn: None], shifted by [H, C, N, O, Zn: -347.415621].
Replace string dataset_forces_rms to 1.1921375159513592
Initially outputs are globally scaled by: 1.1921375159513592, total_energy are globally shifted by None.
Successfully built the network...
Number of weights: 1406856
Number of trainable weights: 1406856
! Starting training ...
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
0 100 284 1.02 264 0.883 1.2 19.3 19.4 0.0484 0.0484
Initialization
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Initial Validation 0 7.469 0.005 1.07 628 650 0.914 1.24 29.5 29.9 0.0738 0.0747
Wall time: 7.469639488961548
! Best model 0 649.595
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
1 100 26.9 0.996 6.95 0.881 1.19 2.51 3.14 0.00627 0.00786
1 172 27.2 0.976 7.63 0.873 1.18 2.54 3.29 0.00634 0.00823
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
1 100 20 0.936 1.27 0.854 1.15 1.23 1.34 0.00308 0.00335
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 1 118.882 0.005 0.996 6.97e+03 6.99e+03 0.883 1.19 37.6 99.6 0.0939 0.249
! Validation 1 118.882 0.005 0.986 2.76 22.5 0.879 1.18 1.57 1.98 0.00392 0.00495
Wall time: 118.88287886884063
! Best model 1 22.492
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
2 100 24.7 0.974 5.18 0.874 1.18 2.22 2.71 0.00556 0.00678
2 172 25.3 0.973 5.88 0.87 1.18 2.51 2.89 0.00627 0.00723
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
2 100 21 0.933 2.35 0.853 1.15 1.75 1.83 0.00438 0.00457
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 2 226.133 0.005 0.987 5.82 25.5 0.879 1.18 2.35 2.87 0.00587 0.00719
! Validation 2 226.133 0.005 0.983 2.43 22.1 0.878 1.18 1.41 1.86 0.00353 0.00465
Wall time: 226.13328536599874
! Best model 2 22.083
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
3 100 22.7 0.979 3.16 0.874 1.18 1.66 2.12 0.00414 0.00529
3 172 25.4 0.993 5.57 0.885 1.19 2.5 2.81 0.00625 0.00704
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
3 100 18.8 0.93 0.212 0.851 1.15 0.453 0.549 0.00113 0.00137
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 3 333.380 0.005 0.983 5.79 25.4 0.877 1.18 2.34 2.87 0.00584 0.00717
!
Validation 3 333.380 0.005 0.978 4.11 23.7 0.876 1.18 2.04 2.42 0.00509 0.00604 Wall time: 333.38227026490495 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 4 100 23.7 0.972 4.29 0.871 1.18 1.89 2.47 0.00473 0.00617 4 172 27.7 0.949 8.76 0.865 1.16 2.88 3.53 0.00719 0.00882 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 4 100 19.7 0.926 1.18 0.85 1.15 1.19 1.29 0.00298 0.00324 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 4 440.626 0.005 0.978 5.77 25.3 0.875 1.18 2.33 2.86 0.00583 0.00716 ! Validation 4 440.626 0.005 0.973 2.36 21.8 0.873 1.18 1.43 1.83 0.00358 0.00458 Wall time: 440.6265148916282 ! Best model 4 21.824 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 5 100 23.6 0.948 4.69 0.863 1.16 2.14 2.58 0.00535 0.00645 5 172 23.1 0.964 3.84 0.866 1.17 1.91 2.34 0.00478 0.00584 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 5 100 21.8 0.922 3.34 0.848 1.14 2.12 2.18 0.0053 0.00544 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 5 549.196 0.005 0.973 5.53 25 0.872 1.18 2.28 2.8 0.00569 0.00701 ! Validation 5 549.196 0.005 0.968 2.39 21.7 0.871 1.17 1.4 1.84 0.00349 0.00461 Wall time: 549.1964376796968 ! Best model 5 21.746 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 6 100 26.3 0.948 7.35 0.863 1.16 2.61 3.23 0.00653 0.00808 6 172 23.6 0.95 4.58 0.863 1.16 2.03 2.55 0.00508 0.00638 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 6 100 19 0.918 0.645 0.846 1.14 0.825 0.958 0.00206 0.00239 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 6 656.363 0.005 0.967 5.31 24.7 0.87 1.17 2.22 2.75 0.00556 0.00687 ! Validation 6 656.363 0.005 0.962 2.32 21.6 0.868 1.17 1.43 1.82 0.00358 0.00454 Wall time: 656.3633750160225 ! Best model 6 21.565 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 7 100 24.2 0.961 5.03 0.867 1.17 2.32 2.67 0.00581 0.00668 7 172 24.5 0.963 5.24 0.864 1.17 2.16 2.73 0.0054 0.00682 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 7 100 21.9 0.913 3.67 0.843 1.14 2.24 2.28 0.00559 0.00571 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 7 763.730 0.005 0.961 4.82 24 0.867 1.17 2.13 2.62 0.00531 0.00654 ! Validation 7 763.730 0.005 0.956 2.44 21.6 0.865 1.17 1.43 1.86 0.00358 0.00465 Wall time: 763.7304400228895 ! Best model 7 21.560 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 8 100 22.8 0.953 3.78 0.864 1.16 1.9 2.32 0.00475 0.0058 8 172 23.3 0.964 4.05 0.863 1.17 2.11 2.4 0.00528 0.006 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 8 100 18.3 0.908 0.156 0.841 1.14 0.422 0.471 0.00105 0.00118 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 8 870.906 0.005 0.956 4.71 23.8 0.864 1.17 2.09 2.59 0.00522 0.00647 ! 
Validation 8 870.906 0.005 0.95 3.14 22.1 0.862 1.16 1.76 2.11 0.00441 0.00528 Wall time: 870.9064843310043 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 9 100 22.5 0.957 3.39 0.864 1.17 1.62 2.19 0.00405 0.00549 9 172 24.1 0.952 5.02 0.859 1.16 2.09 2.67 0.00522 0.00668 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 9 100 21.2 0.903 3.16 0.838 1.13 2.07 2.12 0.00517 0.00529 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 9 978.045 0.005 0.949 4.39 23.4 0.861 1.16 2.02 2.5 0.00505 0.00624 ! Validation 9 978.045 0.005 0.944 2 20.9 0.859 1.16 1.3 1.69 0.00325 0.00422 Wall time: 978.0452594319358 ! Best model 9 20.879 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 10 100 23.4 0.945 4.47 0.858 1.16 1.94 2.52 0.00485 0.0063 10 172 23.3 0.936 4.57 0.854 1.15 1.88 2.55 0.0047 0.00637 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 10 100 18.1 0.897 0.162 0.835 1.13 0.429 0.48 0.00107 0.0012 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 10 1085.210 0.005 0.943 3.97 22.8 0.857 1.16 1.91 2.37 0.00479 0.00593 ! Validation 10 1085.210 0.005 0.937 2.35 21.1 0.855 1.15 1.49 1.83 0.00373 0.00456 Wall time: 1085.209994097706 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 11 100 22.5 0.94 3.7 0.855 1.16 1.88 2.29 0.00471 0.00573 11 172 24.1 0.952 5.03 0.861 1.16 2.12 2.67 0.00531 0.00668 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 11 100 18 0.891 0.142 0.831 1.13 0.403 0.45 0.00101 0.00112 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 11 1192.354 0.005 0.936 3.81 22.5 0.853 1.15 1.87 2.33 0.00469 0.00582 ! Validation 11 1192.354 0.005 0.929 2.86 21.4 0.851 1.15 1.68 2.02 0.0042 0.00504 Wall time: 1192.354094415903 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 12 100 21.8 0.953 2.71 0.859 1.16 1.62 1.96 0.00405 0.00491 12 172 22.7 0.942 3.87 0.852 1.16 1.78 2.35 0.00445 0.00587 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 12 100 18.3 0.883 0.677 0.827 1.12 0.883 0.981 0.00221 0.00245 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 12 1299.491 0.005 0.928 3.32 21.9 0.849 1.15 1.74 2.17 0.00435 0.00543 ! Validation 12 1299.491 0.005 0.921 1.39 19.8 0.846 1.14 1.11 1.41 0.00277 0.00352 Wall time: 1299.4934893688187 ! Best model 12 19.807 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 13 100 22.2 0.909 3.98 0.841 1.14 2.02 2.38 0.00505 0.00595 13 172 20.6 0.908 2.49 0.84 1.14 1.68 1.88 0.0042 0.00471 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 13 100 17.6 0.871 0.124 0.82 1.11 0.379 0.419 0.000946 0.00105 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 13 1406.670 0.005 0.918 3.06 21.4 0.844 1.14 1.66 2.09 0.00416 0.00522 ! 
Validation 13 1406.670 0.005 0.908 2.74 20.9 0.84 1.14 1.64 1.97 0.0041 0.00493 Wall time: 1406.6701712156646 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 14 100 20.1 0.912 1.86 0.837 1.14 1.4 1.63 0.00349 0.00406 14 172 19.4 0.874 1.91 0.822 1.11 1.29 1.65 0.00323 0.00412 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 14 100 16.9 0.841 0.0826 0.805 1.09 0.33 0.343 0.000826 0.000856 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 14 1513.934 0.005 0.898 2.48 20.4 0.834 1.13 1.5 1.88 0.00374 0.0047 ! Validation 14 1513.934 0.005 0.877 1.28 18.8 0.824 1.12 1.08 1.35 0.00271 0.00337 Wall time: 1513.9340923028067 ! Best model 14 18.826 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 15 100 19.2 0.849 2.2 0.809 1.1 1.35 1.77 0.00338 0.00442 15 172 18.9 0.817 2.53 0.792 1.08 1.54 1.9 0.00384 0.00474 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 15 100 17.2 0.784 1.56 0.776 1.06 1.47 1.49 0.00366 0.00373 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 15 1621.161 0.005 0.852 1.91 18.9 0.811 1.1 1.29 1.65 0.00322 0.00412 ! Validation 15 1621.161 0.005 0.815 2.66 19 0.793 1.08 1.6 1.94 0.00399 0.00486 Wall time: 1621.1611824939027 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 16 100 16.1 0.759 0.905 0.767 1.04 0.873 1.13 0.00218 0.00284 16 172 15.3 0.663 2.02 0.717 0.971 1.46 1.69 0.00365 0.00423 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 16 100 13.2 0.656 0.0678 0.709 0.966 0.283 0.31 0.000708 0.000776 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 16 1728.382 0.005 0.765 2.28 17.6 0.767 1.04 1.44 1.8 0.00359 0.00451 ! Validation 16 1728.382 0.005 0.675 1.26 14.8 0.722 0.979 1.07 1.34 0.00267 0.00334 Wall time: 1728.3826750880107 ! Best model 16 14.754 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 17 100 12.6 0.548 1.61 0.649 0.883 1.29 1.51 0.00322 0.00378 17 172 12.3 0.513 2.07 0.628 0.854 1.41 1.72 0.00352 0.00429 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 17 100 19.8 0.487 10.1 0.606 0.832 3.75 3.79 0.00938 0.00947 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 17 1835.625 0.005 0.579 2.68 14.3 0.666 0.907 1.56 1.95 0.0039 0.00488 ! Validation 17 1835.625 0.005 0.504 9.52 19.6 0.622 0.846 3.5 3.68 0.00876 0.0092 Wall time: 1835.6252476247028 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 18 100 12.6 0.458 3.44 0.593 0.807 1.81 2.21 0.00453 0.00553 18 172 9.67 0.443 0.818 0.581 0.793 0.784 1.08 0.00196 0.00269 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 18 100 9.56 0.415 1.25 0.561 0.768 1.21 1.34 0.00302 0.00334 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 18 1942.865 0.005 0.467 2.58 11.9 0.596 0.815 1.54 1.92 0.00384 0.00479 ! Validation 18 1942.865 0.005 0.436 1.33 10.1 0.578 0.787 1.11 1.38 0.00278 0.00344 Wall time: 1942.8651932938956 ! 
Best model 18 10.056 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 19 100 12.4 0.429 3.81 0.572 0.781 1.9 2.33 0.00475 0.00582 19 172 9.84 0.394 1.96 0.545 0.748 1.43 1.67 0.00358 0.00417 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 19 100 7.87 0.38 0.264 0.536 0.735 0.475 0.612 0.00119 0.00153 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 19 2050.167 0.005 0.417 3.45 11.8 0.562 0.77 1.79 2.22 0.00447 0.00554 ! Validation 19 2050.167 0.005 0.401 0.912 8.93 0.553 0.755 0.921 1.14 0.0023 0.00285 Wall time: 2050.1674064849503 ! Best model 19 8.929 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 20 100 13.1 0.371 5.66 0.532 0.726 2.77 2.84 0.00691 0.00709 20 172 13.3 0.377 5.8 0.535 0.732 2.68 2.87 0.0067 0.00718 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 20 100 24.4 0.352 17.4 0.516 0.707 4.94 4.97 0.0123 0.0124 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 20 2157.401 0.005 0.382 3.03 10.7 0.538 0.737 1.65 2.07 0.00413 0.00519 ! Validation 20 2157.401 0.005 0.369 18 25.4 0.53 0.724 4.95 5.06 0.0124 0.0126 Wall time: 2157.401603561826 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 21 100 11.4 0.351 4.39 0.518 0.706 2.32 2.5 0.0058 0.00625 21 172 11 0.345 4.1 0.51 0.7 2.2 2.41 0.00551 0.00603 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 21 100 7.33 0.332 0.696 0.502 0.687 0.899 0.995 0.00225 0.00249 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 21 2264.616 0.005 0.36 4.14 11.3 0.522 0.715 1.91 2.43 0.00477 0.00607 ! Validation 21 2264.616 0.005 0.35 1.3 8.29 0.517 0.705 1.11 1.36 0.00277 0.0034 Wall time: 2264.61627634475 ! Best model 21 8.295 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 22 100 14.2 0.333 7.56 0.502 0.688 3.15 3.28 0.00789 0.0082 22 172 12.7 0.339 5.88 0.504 0.694 2.72 2.89 0.0068 0.00723 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 22 100 12.2 0.318 5.81 0.493 0.673 2.84 2.87 0.0071 0.00718 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 22 2371.903 0.005 0.34 4.92 11.7 0.508 0.696 2.08 2.65 0.00521 0.00661 ! Validation 22 2371.903 0.005 0.336 10.3 17 0.507 0.691 3.61 3.82 0.00902 0.00956 Wall time: 2371.903458451852 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 23 100 7.61 0.336 0.904 0.507 0.691 0.874 1.13 0.00218 0.00283 23 172 9.51 0.311 3.3 0.488 0.664 1.9 2.16 0.00476 0.00541 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 23 100 6.56 0.302 0.52 0.481 0.655 0.79 0.86 0.00197 0.00215 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 23 2479.136 0.005 0.328 5.15 11.7 0.5 0.683 2.21 2.71 0.00552 0.00676 ! Validation 23 2479.136 0.005 0.319 1.26 7.63 0.495 0.673 1.09 1.34 0.00272 0.00334 Wall time: 2479.1362967910245 ! 
Best model 23 7.631 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 24 100 8.11 0.301 2.08 0.481 0.654 1.42 1.72 0.00355 0.0043 24 172 15.3 0.304 9.18 0.482 0.657 3.53 3.61 0.00883 0.00903 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 24 100 9.29 0.284 3.61 0.467 0.635 2.24 2.26 0.0056 0.00566 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 24 2586.387 0.005 0.307 3.94 10.1 0.484 0.661 1.9 2.36 0.00476 0.00591 ! Validation 24 2586.387 0.005 0.298 3.02 8.98 0.479 0.651 1.84 2.07 0.00461 0.00518 Wall time: 2586.38753039483 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 25 100 8.45 0.308 2.29 0.486 0.662 1.6 1.81 0.00399 0.00451 25 172 6.72 0.288 0.966 0.469 0.639 0.999 1.17 0.0025 0.00293 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 25 100 5.57 0.274 0.0835 0.46 0.624 0.307 0.345 0.000766 0.000861 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 25 2697.290 0.005 0.297 5.5 11.4 0.477 0.65 2.25 2.8 0.00563 0.00699 ! Validation 25 2697.290 0.005 0.287 0.695 6.43 0.471 0.638 0.812 0.994 0.00203 0.00249 Wall time: 2697.2901926799677 ! Best model 25 6.429 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 26 100 6.69 0.275 1.19 0.461 0.625 1.11 1.3 0.00279 0.00325 26 172 6.21 0.266 0.879 0.451 0.615 0.977 1.12 0.00244 0.00279 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 26 100 5.21 0.257 0.0675 0.446 0.605 0.279 0.31 0.000697 0.000774 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 26 2804.439 0.005 0.279 4.25 9.83 0.463 0.63 1.96 2.46 0.00491 0.00615 ! Validation 26 2804.439 0.005 0.269 0.722 6.09 0.456 0.618 0.831 1.01 0.00208 0.00253 Wall time: 2804.439375770744 ! Best model 26 6.093 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 27 100 6.12 0.262 0.877 0.449 0.611 0.911 1.12 0.00228 0.00279 27 172 7.18 0.261 1.96 0.448 0.609 1.47 1.67 0.00368 0.00417 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 27 100 5.09 0.248 0.134 0.439 0.594 0.331 0.437 0.000829 0.00109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 27 2911.788 0.005 0.264 4.89 10.2 0.451 0.612 2.09 2.64 0.00522 0.00659 ! Validation 27 2911.788 0.005 0.257 1.38 6.51 0.446 0.604 1.17 1.4 0.00292 0.0035 Wall time: 2911.7880366500467 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 28 100 5.94 0.251 0.919 0.441 0.598 0.953 1.14 0.00238 0.00286 28 172 5.41 0.248 0.458 0.437 0.593 0.626 0.806 0.00156 0.00202 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 28 100 4.96 0.238 0.21 0.43 0.581 0.47 0.546 0.00118 0.00136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 28 3019.399 0.005 0.252 5 10 0.441 0.598 2.11 2.67 0.00528 0.00667 ! Validation 28 3019.399 0.005 0.246 0.664 5.58 0.438 0.591 0.776 0.971 0.00194 0.00243 Wall time: 3019.3994822106324 ! 
Best model 28 5.582 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 29 100 17.7 0.255 12.6 0.443 0.602 4.17 4.24 0.0104 0.0106 29 172 5.42 0.233 0.75 0.425 0.576 0.881 1.03 0.0022 0.00258 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 29 100 4.85 0.229 0.269 0.423 0.571 0.56 0.618 0.0014 0.00155 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 29 3126.567 0.005 0.245 5.52 10.4 0.436 0.59 2.21 2.8 0.00553 0.007 ! Validation 29 3126.567 0.005 0.234 0.67 5.36 0.428 0.577 0.781 0.976 0.00195 0.00244 Wall time: 3126.5669227549806 ! Best model 29 5.356 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 30 100 19.3 0.222 14.8 0.413 0.562 4.5 4.59 0.0113 0.0115 30 172 5.22 0.225 0.715 0.42 0.566 0.849 1.01 0.00212 0.00252 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 30 100 4.64 0.224 0.162 0.418 0.564 0.405 0.48 0.00101 0.0012 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 30 3234.271 0.005 0.233 5.44 10.1 0.426 0.576 2.26 2.78 0.00566 0.00696 ! Validation 30 3234.271 0.005 0.227 0.552 5.09 0.422 0.568 0.719 0.886 0.0018 0.00222 Wall time: 3234.2710926206782 ! Best model 30 5.085 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 31 100 9.55 0.217 5.22 0.412 0.555 2.52 2.72 0.0063 0.00681 31 172 6.15 0.22 1.74 0.413 0.56 1.39 1.57 0.00349 0.00393 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 31 100 4.54 0.211 0.31 0.406 0.548 0.615 0.664 0.00154 0.00166 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 31 3342.465 0.005 0.222 4.12 8.55 0.416 0.561 1.92 2.42 0.00479 0.00605 ! Validation 31 3342.465 0.005 0.212 1.93 6.18 0.409 0.549 1.43 1.66 0.00358 0.00415 Wall time: 3342.465120834764 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 32 100 5.41 0.218 1.04 0.413 0.557 0.953 1.22 0.00238 0.00305 32 172 7.27 0.221 2.84 0.418 0.561 1.8 2.01 0.0045 0.00502 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 32 100 5.02 0.214 0.734 0.41 0.552 0.995 1.02 0.00249 0.00255 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 32 3449.855 0.005 0.215 6.03 10.3 0.41 0.553 2.27 2.93 0.00568 0.00732 ! Validation 32 3449.855 0.005 0.219 1.5 5.87 0.416 0.558 1.23 1.46 0.00309 0.00364 Wall time: 3449.8557284800336 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 33 100 18.4 0.209 14.2 0.407 0.546 4.23 4.49 0.0106 0.0112 33 172 6.05 0.212 1.8 0.408 0.549 1.35 1.6 0.00339 0.004 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 33 100 4.21 0.208 0.0526 0.404 0.543 0.219 0.274 0.000546 0.000684 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 33 3560.049 0.005 0.212 5.53 9.76 0.408 0.548 2.2 2.8 0.00551 0.00701 ! Validation 33 3560.049 0.005 0.209 0.552 4.73 0.407 0.545 0.719 0.886 0.0018 0.00221 Wall time: 3560.0489086969756 ! 
Best model 33 4.729 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 34 100 10.6 0.198 6.67 0.395 0.53 2.95 3.08 0.00739 0.0077 34 172 17.2 0.192 13.4 0.39 0.523 4.3 4.36 0.0108 0.0109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 34 100 5.88 0.196 1.96 0.392 0.528 1.65 1.67 0.00413 0.00417 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 34 3667.275 0.005 0.2 4.07 8.07 0.397 0.534 1.97 2.4 0.00493 0.006 ! Validation 34 3667.275 0.005 0.198 4.02 7.97 0.396 0.53 2.14 2.39 0.00535 0.00598 Wall time: 3667.274953726679 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 35 100 13.9 0.19 10.1 0.386 0.52 3.72 3.78 0.00929 0.00945 35 172 8.87 0.19 5.07 0.385 0.52 2.61 2.68 0.00653 0.00671 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 35 100 6.78 0.186 3.06 0.381 0.514 2.07 2.09 0.00518 0.00522 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 35 3774.511 0.005 0.194 3.43 7.31 0.391 0.525 1.82 2.21 0.00454 0.00552 ! Validation 35 3774.511 0.005 0.186 4.79 8.51 0.384 0.514 2.46 2.61 0.00615 0.00652 Wall time: 3774.511277680751 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 36 100 6.26 0.191 2.43 0.388 0.522 1.71 1.86 0.00427 0.00464 36 172 11.1 0.184 7.41 0.38 0.511 3.15 3.24 0.00789 0.00811 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 36 100 3.83 0.186 0.107 0.382 0.514 0.337 0.391 0.000841 0.000977 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 36 3882.026 0.005 0.185 4.39 8.09 0.382 0.513 2.01 2.5 0.00502 0.00624 ! Validation 36 3882.026 0.005 0.185 2.46 6.15 0.383 0.512 1.65 1.87 0.00413 0.00467 Wall time: 3882.026468685828 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 37 100 4.06 0.176 0.545 0.373 0.5 0.689 0.88 0.00172 0.0022 37 172 5.38 0.185 1.67 0.383 0.513 1.4 1.54 0.0035 0.00385 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 37 100 3.83 0.188 0.0745 0.384 0.517 0.276 0.325 0.00069 0.000813 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 37 3989.278 0.005 0.185 4.94 8.63 0.382 0.512 2 2.65 0.00499 0.00663 ! Validation 37 3989.278 0.005 0.188 1.03 4.79 0.387 0.517 0.993 1.21 0.00248 0.00302 Wall time: 3989.278243697714 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 38 100 4.31 0.162 1.07 0.358 0.48 1.07 1.23 0.00268 0.00309 38 172 3.75 0.177 0.215 0.373 0.501 0.447 0.553 0.00112 0.00138 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 38 100 3.98 0.176 0.448 0.372 0.501 0.774 0.798 0.00194 0.002 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 38 4096.531 0.005 0.174 2.75 6.24 0.371 0.497 1.51 1.98 0.00376 0.00495 ! Validation 38 4096.531 0.005 0.176 0.6 4.12 0.374 0.5 0.74 0.923 0.00185 0.00231 Wall time: 4096.531468030997 ! 
Best model 38 4.121 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 39 100 7.83 0.164 4.54 0.358 0.483 2.43 2.54 0.00607 0.00635 39 172 5.61 0.16 2.41 0.356 0.476 1.75 1.85 0.00439 0.00463 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 39 100 6.66 0.163 3.4 0.356 0.481 2.19 2.2 0.00547 0.0055 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 39 4203.805 0.005 0.168 2.63 5.99 0.364 0.488 1.58 1.94 0.00395 0.00484 ! Validation 39 4203.805 0.005 0.161 2.34 5.56 0.358 0.479 1.65 1.82 0.00412 0.00456 Wall time: 4203.8053244017065 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 40 100 4.17 0.158 1.01 0.355 0.474 1.01 1.2 0.00253 0.003 40 172 4.64 0.156 1.51 0.35 0.471 1.2 1.47 0.003 0.00367 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 40 100 4.89 0.157 1.75 0.349 0.472 1.56 1.58 0.00391 0.00395 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 40 4311.013 0.005 0.16 3.08 6.28 0.356 0.477 1.69 2.09 0.00423 0.00523 ! Validation 40 4311.013 0.005 0.157 3.87 7.02 0.354 0.473 2.12 2.35 0.0053 0.00586 Wall time: 4311.013026240747 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 41 100 3.4 0.155 0.29 0.351 0.47 0.491 0.642 0.00123 0.00161 41 172 12.1 0.181 8.52 0.38 0.507 3.39 3.48 0.00847 0.0087 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 41 100 5.12 0.176 1.6 0.37 0.5 1.51 1.51 0.00377 0.00377 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 41 4418.166 0.005 0.158 4.53 7.68 0.353 0.473 1.95 2.54 0.00489 0.00634 ! Validation 41 4418.166 0.005 0.177 1.76 5.29 0.375 0.501 1.37 1.58 0.00342 0.00395 Wall time: 4418.166233033873 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 42 100 11.6 0.154 8.48 0.35 0.467 3.42 3.47 0.00855 0.00868 42 172 6.96 0.142 4.12 0.335 0.45 2.34 2.42 0.00585 0.00605 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 42 100 7.14 0.148 4.18 0.339 0.458 2.43 2.44 0.00609 0.0061 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 42 4526.656 0.005 0.158 2.81 5.97 0.354 0.474 1.61 2 0.00402 0.00499 ! Validation 42 4526.656 0.005 0.148 7.38 10.3 0.343 0.458 3.14 3.24 0.00786 0.0081 Wall time: 4526.655997888651 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 43 100 3.34 0.145 0.447 0.339 0.454 0.681 0.797 0.0017 0.00199 43 172 3.3 0.147 0.355 0.34 0.457 0.511 0.71 0.00128 0.00177 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 43 100 4.15 0.142 1.31 0.333 0.45 1.35 1.36 0.00338 0.00341 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 43 4633.817 0.005 0.145 2.46 5.35 0.338 0.453 1.49 1.87 0.00372 0.00467 ! Validation 43 4633.817 0.005 0.142 1.21 4.06 0.337 0.45 1.14 1.31 0.00285 0.00327 Wall time: 4633.817228751723 ! 
Best model 43 4.056 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 44 100 4.71 0.137 1.96 0.33 0.442 1.57 1.67 0.00393 0.00417 44 172 4.68 0.152 1.63 0.347 0.465 1.27 1.52 0.00318 0.00381 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 44 100 4.55 0.152 1.5 0.345 0.465 1.45 1.46 0.00362 0.00365 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 44 4740.984 0.005 0.139 3.21 5.99 0.332 0.444 1.68 2.14 0.0042 0.00534 ! Validation 44 4740.984 0.005 0.153 3.31 6.36 0.349 0.466 1.99 2.17 0.00496 0.00542 Wall time: 4740.984344179742 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 45 100 3.55 0.131 0.931 0.319 0.431 1.01 1.15 0.00253 0.00288 45 172 2.92 0.124 0.439 0.311 0.42 0.664 0.789 0.00166 0.00197 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 45 100 2.55 0.125 0.0616 0.311 0.421 0.227 0.296 0.000568 0.00074 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 45 4848.143 0.005 0.132 1.59 4.22 0.323 0.433 1.23 1.5 0.00309 0.00376 ! Validation 45 4848.143 0.005 0.125 0.607 3.11 0.316 0.422 0.745 0.929 0.00186 0.00232 Wall time: 4848.1437243986875 ! Best model 45 3.111 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 46 100 4.81 0.124 2.33 0.314 0.42 1.73 1.82 0.00433 0.00455 46 172 7.32 0.124 4.83 0.314 0.42 2.57 2.62 0.00643 0.00655 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 46 100 13.9 0.118 11.5 0.304 0.41 4.04 4.04 0.0101 0.0101 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 46 4955.309 0.005 0.126 2.45 4.97 0.316 0.423 1.45 1.87 0.00362 0.00466 ! Validation 46 4955.309 0.005 0.119 11.8 14.1 0.308 0.41 4.01 4.09 0.01 0.0102 Wall time: 4955.30902144406 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 47 100 2.82 0.114 0.54 0.301 0.402 0.73 0.876 0.00182 0.00219 47 172 3.07 0.118 0.708 0.307 0.409 0.735 1 0.00184 0.00251 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 47 100 3.86 0.12 1.47 0.306 0.412 1.44 1.45 0.0036 0.00361 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 47 5062.458 0.005 0.119 2.63 5.01 0.308 0.411 1.54 1.93 0.00384 0.00484 ! Validation 47 5062.458 0.005 0.12 0.538 2.93 0.31 0.413 0.709 0.875 0.00177 0.00219 Wall time: 5062.458344388753 ! Best model 47 2.934 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 48 100 9.43 0.109 7.24 0.295 0.394 3.16 3.21 0.00789 0.00802 48 172 6.27 0.105 4.16 0.292 0.387 2.34 2.43 0.00586 0.00608 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 48 100 4.37 0.109 2.19 0.293 0.393 1.75 1.77 0.00439 0.00442 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 48 5169.628 0.005 0.112 1.8 4.05 0.299 0.399 1.27 1.6 0.00317 0.004 ! 
Validation 48 5169.628 0.005 0.109 1.17 3.35 0.296 0.394 1.13 1.29 0.00282 0.00322 Wall time: 5169.628398759756 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 49 100 4.82 0.105 2.72 0.289 0.386 1.91 1.97 0.00478 0.00491 49 172 5.79 0.1 3.79 0.285 0.377 2.18 2.32 0.00546 0.0058 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 49 100 8.99 0.101 6.97 0.283 0.379 3.14 3.15 0.00785 0.00787 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 49 5276.772 0.005 0.105 1.77 3.88 0.29 0.387 1.27 1.59 0.00317 0.00397 ! Validation 49 5276.772 0.005 0.102 4.26 6.3 0.287 0.381 2.36 2.46 0.0059 0.00615 Wall time: 5276.77242482407 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 50 100 2.92 0.111 0.688 0.298 0.398 0.814 0.989 0.00204 0.00247 50 172 7.04 0.108 4.88 0.293 0.391 2.55 2.63 0.00638 0.00658 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 50 100 2.09 0.103 0.0249 0.286 0.383 0.143 0.188 0.000358 0.000471 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 50 5384.012 0.005 0.105 2.5 4.6 0.29 0.386 1.43 1.88 0.00358 0.00471 ! Validation 50 5384.012 0.005 0.103 1.27 3.34 0.289 0.383 1.15 1.35 0.00287 0.00336 Wall time: 5384.01213538181 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 51 100 2.29 0.0954 0.385 0.279 0.368 0.572 0.739 0.00143 0.00185 51 172 3.02 0.0963 1.09 0.276 0.37 1.17 1.25 0.00292 0.00311 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 51 100 1.85 0.0915 0.0238 0.27 0.361 0.142 0.184 0.000355 0.00046 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 51 5491.274 0.005 0.0981 1.49 3.45 0.281 0.373 1.09 1.46 0.00272 0.00364 ! Validation 51 5491.274 0.005 0.0924 1.01 2.85 0.273 0.362 1 1.2 0.00251 0.00299 Wall time: 5491.274653679691 ! Best model 51 2.855 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 52 100 6.08 0.0886 4.31 0.267 0.355 2.44 2.48 0.00609 0.00619 52 172 3.72 0.0893 1.94 0.268 0.356 1.57 1.66 0.00392 0.00415 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 52 100 2.21 0.0902 0.406 0.268 0.358 0.746 0.76 0.00186 0.0019 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 52 5598.472 0.005 0.0952 1.99 3.9 0.277 0.368 1.33 1.68 0.00332 0.00421 ! Validation 52 5598.472 0.005 0.0915 1.21 3.04 0.273 0.361 1.12 1.31 0.0028 0.00327 Wall time: 5598.472239891067 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 53 100 6.11 0.0953 4.21 0.277 0.368 2.38 2.45 0.00594 0.00611 53 172 2.36 0.0909 0.542 0.271 0.359 0.748 0.877 0.00187 0.00219 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 53 100 2.52 0.093 0.658 0.273 0.364 0.955 0.967 0.00239 0.00242 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 53 5706.176 0.005 0.0924 2.25 4.1 0.273 0.362 1.45 1.79 0.00361 0.00447 ! Validation 53 5706.176 0.005 0.0939 0.365 2.24 0.276 0.365 0.585 0.721 0.00146 0.0018 Wall time: 5706.176369786728 ! 
Best model 53 2.243 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 54 100 1.95 0.084 0.271 0.261 0.346 0.467 0.621 0.00117 0.00155 54 172 1.97 0.0834 0.299 0.26 0.344 0.525 0.651 0.00131 0.00163 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 54 100 1.76 0.0812 0.132 0.255 0.34 0.397 0.433 0.000991 0.00108 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 54 5813.340 0.005 0.0851 0.987 2.69 0.262 0.348 0.915 1.18 0.00229 0.00296 ! Validation 54 5813.340 0.005 0.084 0.486 2.17 0.262 0.346 0.663 0.831 0.00166 0.00208 Wall time: 5813.340877713636 ! Best model 54 2.166 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 55 100 7.54 0.0832 5.88 0.26 0.344 2.75 2.89 0.00689 0.00722 55 172 2.04 0.0847 0.348 0.262 0.347 0.602 0.704 0.00151 0.00176 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 55 100 7.24 0.0826 5.58 0.258 0.343 2.81 2.82 0.00703 0.00704 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 55 5920.526 0.005 0.085 2.04 3.74 0.262 0.347 1.43 1.7 0.00356 0.00426 ! Validation 55 5920.526 0.005 0.0839 3.99 5.67 0.261 0.345 2.28 2.38 0.00571 0.00595 Wall time: 5920.525915572885 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 56 100 1.89 0.0753 0.386 0.249 0.327 0.605 0.741 0.00151 0.00185 56 172 2.29 0.0788 0.715 0.252 0.335 0.889 1.01 0.00222 0.00252 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 56 100 3.56 0.0791 1.98 0.253 0.335 1.67 1.68 0.00417 0.00419 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 56 6027.780 0.005 0.0809 1.29 2.91 0.256 0.339 1.11 1.35 0.00276 0.00339 ! Validation 56 6027.780 0.005 0.0806 4.64 6.25 0.256 0.338 2.48 2.57 0.0062 0.00642 Wall time: 6027.780493604951 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 57 100 3.79 0.0803 2.18 0.255 0.338 1.63 1.76 0.00407 0.0044 57 172 3.48 0.079 1.9 0.254 0.335 1.52 1.64 0.00379 0.00411 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 57 100 3.16 0.0741 1.67 0.244 0.324 1.53 1.54 0.00383 0.00386 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 57 6135.024 0.005 0.079 1.39 2.96 0.253 0.335 1.11 1.4 0.00276 0.00351 ! Validation 57 6135.024 0.005 0.0759 0.651 2.17 0.249 0.328 0.811 0.962 0.00203 0.00241 Wall time: 6135.024046565872 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 58 100 1.8 0.0765 0.273 0.248 0.33 0.542 0.623 0.00136 0.00156 58 172 1.58 0.0703 0.174 0.24 0.316 0.412 0.497 0.00103 0.00124 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 58 100 1.81 0.0706 0.399 0.239 0.317 0.728 0.753 0.00182 0.00188 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 58 6242.179 0.005 0.0749 1.25 2.75 0.247 0.326 1.09 1.33 0.00272 0.00333 ! Validation 58 6242.179 0.005 0.073 0.338 1.8 0.244 0.322 0.562 0.693 0.00141 0.00173 Wall time: 6242.179157353938 ! 
Best model 58 1.797 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 59 100 2.68 0.0759 1.16 0.249 0.328 1.05 1.28 0.00263 0.00321 59 172 1.88 0.0728 0.424 0.244 0.322 0.674 0.777 0.00168 0.00194 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 59 100 1.54 0.0716 0.111 0.241 0.319 0.362 0.397 0.000905 0.000993 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 59 6349.343 0.005 0.0739 1.48 2.96 0.245 0.324 1.16 1.45 0.0029 0.00363 ! Validation 59 6349.343 0.005 0.0738 0.377 1.85 0.246 0.324 0.597 0.732 0.00149 0.00183 Wall time: 6349.3432389940135 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 60 100 2.15 0.0709 0.727 0.239 0.317 0.9 1.02 0.00225 0.00254 60 172 1.67 0.0711 0.245 0.24 0.318 0.456 0.591 0.00114 0.00148 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 60 100 1.59 0.0707 0.172 0.239 0.317 0.463 0.494 0.00116 0.00123 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 60 6460.963 0.005 0.0709 1.26 2.67 0.24 0.318 1.07 1.34 0.00267 0.00334 ! Validation 60 6460.963 0.005 0.0724 1.06 2.51 0.243 0.321 1.05 1.23 0.00263 0.00307 Wall time: 6460.963412728626 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 61 100 2.99 0.0727 1.53 0.241 0.321 1.4 1.48 0.00349 0.00369 61 172 2.31 0.0721 0.869 0.241 0.32 0.959 1.11 0.0024 0.00278 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 61 100 1.73 0.0722 0.285 0.241 0.32 0.607 0.636 0.00152 0.00159 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 61 6568.168 0.005 0.07 1.3 2.7 0.238 0.315 1.09 1.36 0.00272 0.0034 ! Validation 61 6568.168 0.005 0.0736 2.19 3.66 0.245 0.323 1.64 1.77 0.00409 0.00441 Wall time: 6568.168762085028 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 62 100 2.86 0.0636 1.58 0.228 0.301 1.39 1.5 0.00348 0.00375 62 172 2.92 0.0637 1.65 0.227 0.301 1.42 1.53 0.00354 0.00383 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 62 100 1.73 0.0637 0.46 0.228 0.301 0.787 0.809 0.00197 0.00202 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 62 6675.381 0.005 0.0677 1.08 2.44 0.235 0.31 0.994 1.24 0.00248 0.0031 ! Validation 62 6675.381 0.005 0.066 2.61 3.93 0.232 0.306 1.81 1.93 0.00453 0.00481 Wall time: 6675.381404759828 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 63 100 1.61 0.0617 0.372 0.224 0.296 0.551 0.727 0.00138 0.00182 63 172 1.52 0.0628 0.262 0.226 0.299 0.511 0.61 0.00128 0.00152 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 63 100 1.25 0.0599 0.0502 0.22 0.292 0.188 0.267 0.000471 0.000668 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 63 6782.587 0.005 0.0646 0.962 2.25 0.229 0.303 0.949 1.17 0.00237 0.00292 ! 
Validation 63 6782.587 0.005 0.0615 0.569 1.8 0.224 0.296 0.733 0.899 0.00183 0.00225 Wall time: 6782.587071888614 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 64 100 1.4 0.0608 0.188 0.223 0.294 0.459 0.516 0.00115 0.00129 64 172 3.41 0.0643 2.12 0.229 0.302 1.68 1.74 0.00419 0.00434 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 64 100 1.26 0.0614 0.0308 0.223 0.295 0.157 0.209 0.000393 0.000523 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 64 6889.795 0.005 0.0611 0.956 2.18 0.223 0.295 0.918 1.16 0.0023 0.00291 ! Validation 64 6889.795 0.005 0.0626 0.517 1.77 0.226 0.298 0.697 0.857 0.00174 0.00214 Wall time: 6889.794942931738 ! Best model 64 1.770 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 65 100 2.14 0.0691 0.761 0.237 0.313 0.903 1.04 0.00226 0.0026 65 172 2.58 0.0595 1.39 0.22 0.291 1.33 1.4 0.00333 0.00351 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 65 100 1.35 0.0579 0.19 0.217 0.287 0.478 0.52 0.00119 0.0013 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 65 6997.010 0.005 0.064 1.35 2.63 0.228 0.302 1.06 1.39 0.00266 0.00346 ! Validation 65 6997.010 0.005 0.0598 1.38 2.57 0.221 0.291 1.25 1.4 0.00312 0.0035 Wall time: 6997.010648529045 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 66 100 1.86 0.058 0.703 0.218 0.287 0.899 0.999 0.00225 0.0025 66 172 1.41 0.0581 0.244 0.216 0.287 0.494 0.589 0.00124 0.00147 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 66 100 1.29 0.0558 0.176 0.213 0.282 0.441 0.5 0.0011 0.00125 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 66 7104.211 0.005 0.0593 0.881 2.07 0.219 0.29 0.898 1.12 0.00224 0.0028 ! Validation 66 7104.211 0.005 0.0571 0.29 1.43 0.215 0.285 0.519 0.642 0.0013 0.00161 Wall time: 7104.211649829987 ! Best model 66 1.432 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 67 100 3.13 0.0555 2.02 0.211 0.281 1.56 1.7 0.00391 0.00424 67 172 1.83 0.053 0.775 0.208 0.274 0.908 1.05 0.00227 0.00262 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 67 100 3.85 0.0521 2.81 0.205 0.272 1.98 2 0.00496 0.00499 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 67 7211.424 0.005 0.0573 0.806 1.95 0.216 0.285 0.833 1.07 0.00208 0.00268 ! Validation 67 7211.424 0.005 0.0538 1.98 3.05 0.209 0.276 1.56 1.68 0.00391 0.00419 Wall time: 7211.424361479003 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 68 100 1.62 0.0553 0.515 0.212 0.28 0.769 0.856 0.00192 0.00214 68 172 1.26 0.0538 0.179 0.209 0.277 0.421 0.504 0.00105 0.00126 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 68 100 1.16 0.0529 0.106 0.207 0.274 0.361 0.388 0.000901 0.000971 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 68 7318.748 0.005 0.0553 0.903 2.01 0.212 0.28 0.921 1.13 0.0023 0.00283 ! 
Validation 68 7318.748 0.005 0.0547 0.656 1.75 0.211 0.279 0.807 0.966 0.00202 0.00241 Wall time: 7318.748765640892 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 69 100 1.32 0.0574 0.173 0.215 0.286 0.401 0.495 0.001 0.00124 69 172 1.33 0.0537 0.261 0.209 0.276 0.49 0.609 0.00123 0.00152 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 69 100 1.1 0.0526 0.0454 0.206 0.273 0.192 0.254 0.000479 0.000635 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 69 7425.965 0.005 0.056 1.14 2.26 0.213 0.282 1.02 1.27 0.00256 0.00318 ! Validation 69 7425.965 0.005 0.0546 0.412 1.5 0.21 0.278 0.62 0.765 0.00155 0.00191 Wall time: 7425.964952074923 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 70 100 1.6 0.051 0.576 0.203 0.269 0.786 0.905 0.00196 0.00226 70 172 1.65 0.0525 0.6 0.206 0.273 0.804 0.924 0.00201 0.00231 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 70 100 2.82 0.0488 1.84 0.199 0.263 1.59 1.62 0.00399 0.00404 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 70 7533.176 0.005 0.052 0.682 1.72 0.205 0.272 0.805 0.985 0.00201 0.00246 ! Validation 70 7533.176 0.005 0.0504 0.845 1.85 0.202 0.268 0.97 1.1 0.00242 0.00274 Wall time: 7533.176422887016 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 71 100 1.4 0.0498 0.404 0.2 0.266 0.645 0.758 0.00161 0.0019 71 172 1.63 0.0472 0.688 0.196 0.259 0.862 0.989 0.00216 0.00247 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 71 100 1.02 0.0467 0.0891 0.194 0.258 0.261 0.356 0.000653 0.00089 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 71 7640.380 0.005 0.0495 0.637 1.63 0.2 0.265 0.773 0.951 0.00193 0.00238 ! Validation 71 7640.380 0.005 0.0483 0.467 1.43 0.198 0.262 0.662 0.814 0.00166 0.00204 Wall time: 7640.380250964779 ! Best model 71 1.432 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 72 100 1.2 0.0489 0.223 0.199 0.264 0.46 0.563 0.00115 0.00141 72 172 2.15 0.0487 1.17 0.199 0.263 1.18 1.29 0.00296 0.00323 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 72 100 1.14 0.0494 0.149 0.199 0.265 0.41 0.46 0.00103 0.00115 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 72 7747.605 0.005 0.0481 0.751 1.71 0.197 0.261 0.832 1.03 0.00208 0.00258 ! Validation 72 7747.605 0.005 0.0504 1.15 2.15 0.202 0.268 1.15 1.28 0.00288 0.00319 Wall time: 7747.605236001778 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 73 100 1.18 0.0472 0.233 0.195 0.259 0.456 0.575 0.00114 0.00144 73 172 1.97 0.048 1.01 0.196 0.261 1.11 1.2 0.00278 0.003 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 73 100 4.56 0.0459 3.64 0.192 0.256 2.27 2.28 0.00566 0.00569 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 73 7854.822 0.005 0.0479 0.734 1.69 0.196 0.261 0.8 1.02 0.002 0.00255 ! 
Validation 73 7854.822 0.005 0.0482 1.8 2.77 0.197 0.262 1.5 1.6 0.00375 0.004 Wall time: 7854.82227017777 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 74 100 2 0.046 1.08 0.193 0.256 1.15 1.24 0.00288 0.0031 74 172 1.36 0.0454 0.447 0.191 0.254 0.69 0.797 0.00173 0.00199 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 74 100 4.04 0.0445 3.15 0.189 0.251 2.1 2.12 0.00525 0.00529 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 74 7962.037 0.005 0.0465 0.759 1.69 0.193 0.257 0.856 1.04 0.00214 0.0026 ! Validation 74 7962.037 0.005 0.0452 0.852 1.76 0.191 0.254 0.952 1.1 0.00238 0.00275 Wall time: 7962.037362674717 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 75 100 1.53 0.0429 0.672 0.185 0.247 0.854 0.977 0.00214 0.00244 75 172 1.07 0.0433 0.199 0.187 0.248 0.436 0.532 0.00109 0.00133 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 75 100 1.48 0.0436 0.606 0.187 0.249 0.891 0.928 0.00223 0.00232 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 75 8069.246 0.005 0.0446 0.612 1.5 0.189 0.252 0.756 0.933 0.00189 0.00233 ! Validation 75 8069.246 0.005 0.0438 0.217 1.09 0.188 0.249 0.45 0.555 0.00112 0.00139 Wall time: 8069.246280634776 ! Best model 75 1.092 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 76 100 1.03 0.0427 0.175 0.187 0.246 0.393 0.498 0.000983 0.00125 76 172 1.17 0.0413 0.345 0.182 0.242 0.528 0.7 0.00132 0.00175 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 76 100 1.08 0.0427 0.231 0.185 0.246 0.512 0.574 0.00128 0.00143 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 76 8176.464 0.005 0.0431 0.689 1.55 0.186 0.248 0.815 0.99 0.00204 0.00248 ! Validation 76 8176.464 0.005 0.0431 0.624 1.49 0.186 0.247 0.78 0.942 0.00195 0.00235 Wall time: 8176.464097540826 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 77 100 1.25 0.0474 0.303 0.195 0.26 0.534 0.656 0.00133 0.00164 77 172 2.27 0.0386 1.5 0.177 0.234 1.4 1.46 0.0035 0.00365 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 77 100 1.28 0.04 0.485 0.18 0.238 0.773 0.83 0.00193 0.00208 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 77 8283.694 0.005 0.0445 0.876 1.77 0.189 0.251 0.845 1.12 0.00211 0.00279 ! Validation 77 8283.694 0.005 0.0415 1.35 2.18 0.183 0.243 1.27 1.39 0.00318 0.00347 Wall time: 8283.69446707284 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 78 100 0.954 0.0424 0.106 0.185 0.246 0.314 0.387 0.000785 0.000968 78 172 0.885 0.0407 0.0704 0.181 0.241 0.253 0.316 0.000633 0.000791 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 78 100 1.34 0.0388 0.568 0.176 0.235 0.86 0.898 0.00215 0.00225 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 78 8391.032 0.005 0.0426 0.695 1.55 0.185 0.246 0.77 0.994 0.00193 0.00249 ! Validation 78 8391.032 0.005 0.0395 0.219 1.01 0.178 0.237 0.445 0.558 0.00111 0.00139 Wall time: 8391.031933877617 ! 
Best model 78 1.010 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 79 100 1.26 0.0398 0.465 0.179 0.238 0.655 0.813 0.00164 0.00203 79 172 0.905 0.0381 0.142 0.175 0.233 0.367 0.449 0.000917 0.00112 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 79 100 1.41 0.037 0.671 0.172 0.229 0.94 0.976 0.00235 0.00244 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 79 8498.228 0.005 0.0402 0.519 1.32 0.18 0.239 0.685 0.859 0.00171 0.00215 ! Validation 79 8498.228 0.005 0.0381 0.231 0.992 0.175 0.233 0.464 0.573 0.00116 0.00143 Wall time: 8498.227925586049 ! Best model 79 0.992 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 80 100 1.36 0.0381 0.594 0.176 0.233 0.821 0.919 0.00205 0.0023 80 172 0.893 0.0387 0.119 0.177 0.235 0.348 0.411 0.000869 0.00103 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 80 100 0.81 0.0383 0.0446 0.174 0.233 0.225 0.252 0.000562 0.000629 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 80 8605.499 0.005 0.0393 0.662 1.45 0.178 0.236 0.785 0.97 0.00196 0.00243 ! Validation 80 8605.499 0.005 0.0388 0.777 1.55 0.176 0.235 0.914 1.05 0.00228 0.00263 Wall time: 8605.498946148902 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 81 100 0.863 0.0368 0.127 0.172 0.229 0.363 0.425 0.000906 0.00106 81 172 0.959 0.0373 0.214 0.173 0.23 0.453 0.551 0.00113 0.00138 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 81 100 2.54 0.0376 1.79 0.172 0.231 1.57 1.59 0.00393 0.00398 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 81 8712.716 0.005 0.0376 0.484 1.24 0.174 0.231 0.664 0.83 0.00166 0.00207 ! Validation 81 8712.716 0.005 0.0374 0.293 1.04 0.173 0.231 0.517 0.646 0.00129 0.00161 Wall time: 8712.716680557933 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 82 100 1.51 0.0353 0.807 0.168 0.224 1.02 1.07 0.00255 0.00268 82 172 1.04 0.0384 0.272 0.175 0.234 0.536 0.622 0.00134 0.00156 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 82 100 1.71 0.0383 0.94 0.174 0.233 1.13 1.16 0.00283 0.00289 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 82 8820.274 0.005 0.0371 0.597 1.34 0.173 0.23 0.729 0.921 0.00182 0.0023 ! Validation 82 8820.274 0.005 0.0373 0.226 0.972 0.173 0.23 0.455 0.566 0.00114 0.00142 Wall time: 8820.274271880742 ! Best model 82 0.972 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 83 100 1.09 0.0353 0.383 0.169 0.224 0.64 0.738 0.0016 0.00185 83 172 1.3 0.038 0.544 0.174 0.232 0.791 0.879 0.00198 0.0022 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 83 100 0.796 0.0367 0.0616 0.171 0.229 0.264 0.296 0.000659 0.000739 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 83 8927.489 0.005 0.0368 0.592 1.33 0.172 0.229 0.714 0.917 0.00178 0.00229 ! 
Validation 83 8927.489 0.005 0.0365 1.19 1.92 0.171 0.228 1.19 1.3 0.00298 0.00325 Wall time: 8927.489571162034 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 84 100 1.48 0.0352 0.781 0.168 0.224 0.981 1.05 0.00245 0.00263 84 172 0.911 0.0313 0.286 0.159 0.211 0.536 0.638 0.00134 0.00159 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 84 100 1.41 0.0325 0.763 0.16 0.215 1.01 1.04 0.00252 0.0026 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 84 9034.689 0.005 0.0343 0.367 1.05 0.166 0.221 0.58 0.722 0.00145 0.00181 ! Validation 84 9034.689 0.005 0.0326 0.517 1.17 0.161 0.215 0.726 0.857 0.00182 0.00214 Wall time: 9034.689339888748 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 85 100 0.912 0.0309 0.294 0.158 0.21 0.557 0.646 0.00139 0.00162 85 172 2.23 0.0334 1.57 0.164 0.218 1.44 1.49 0.0036 0.00373 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 85 100 1.8 0.0337 1.13 0.164 0.219 1.25 1.27 0.00313 0.00317 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 85 9141.884 0.005 0.0333 0.49 1.15 0.163 0.217 0.665 0.833 0.00166 0.00208 ! Validation 85 9141.884 0.005 0.0335 1.99 2.66 0.165 0.218 1.57 1.68 0.00393 0.0042 Wall time: 9141.884488306008 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 86 100 0.89 0.0325 0.241 0.162 0.215 0.494 0.585 0.00123 0.00146 86 172 1.23 0.033 0.567 0.163 0.217 0.81 0.898 0.00203 0.00225 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 86 100 1.01 0.0329 0.35 0.162 0.216 0.676 0.706 0.00169 0.00176 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 86 9249.253 0.005 0.0346 0.659 1.35 0.167 0.222 0.774 0.968 0.00193 0.00242 ! Validation 86 9249.253 0.005 0.033 0.173 0.833 0.163 0.217 0.388 0.495 0.000969 0.00124 Wall time: 9249.25318998564 ! Best model 86 0.833 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 87 100 0.724 0.0308 0.109 0.157 0.209 0.318 0.393 0.000796 0.000982 87 172 0.792 0.0308 0.176 0.157 0.209 0.427 0.501 0.00107 0.00125 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 87 100 0.943 0.0309 0.325 0.156 0.21 0.641 0.68 0.0016 0.0017 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 87 9357.572 0.005 0.0322 0.364 1.01 0.161 0.214 0.573 0.72 0.00143 0.0018 ! Validation 87 9357.572 0.005 0.0304 0.171 0.779 0.156 0.208 0.389 0.492 0.000973 0.00123 Wall time: 9357.572424210608 ! Best model 87 0.779 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 88 100 0.849 0.0291 0.267 0.153 0.203 0.515 0.616 0.00129 0.00154 88 172 0.886 0.031 0.267 0.158 0.21 0.499 0.616 0.00125 0.00154 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 88 100 0.86 0.031 0.24 0.156 0.21 0.538 0.584 0.00135 0.00146 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 88 9464.920 0.005 0.0316 0.485 1.12 0.159 0.212 0.665 0.83 0.00166 0.00208 ! Validation 88 9464.920 0.005 0.0304 0.169 0.777 0.156 0.208 0.382 0.491 0.000956 0.00123 Wall time: 9464.920635954943 ! 
Best model 88 0.777 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 89 100 0.723 0.0307 0.109 0.157 0.209 0.32 0.394 0.000799 0.000985 89 172 0.842 0.0282 0.277 0.15 0.2 0.514 0.628 0.00129 0.00157 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 89 100 0.681 0.0297 0.0868 0.153 0.205 0.293 0.351 0.000732 0.000878 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 89 9572.042 0.005 0.0322 0.476 1.12 0.16 0.214 0.641 0.822 0.0016 0.00206 ! Validation 89 9572.042 0.005 0.029 0.393 0.973 0.152 0.203 0.606 0.747 0.00152 0.00187 Wall time: 9572.042422719765 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 90 100 0.908 0.0309 0.29 0.157 0.21 0.555 0.642 0.00139 0.0016 90 172 0.716 0.0282 0.151 0.15 0.2 0.41 0.464 0.00102 0.00116 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 90 100 0.882 0.029 0.301 0.151 0.203 0.62 0.654 0.00155 0.00164 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 90 9679.134 0.005 0.0296 0.376 0.967 0.154 0.205 0.587 0.731 0.00147 0.00183 ! Validation 90 9679.134 0.005 0.0285 0.226 0.795 0.151 0.201 0.453 0.567 0.00113 0.00142 Wall time: 9679.134850226808 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 91 100 0.951 0.0291 0.37 0.152 0.203 0.646 0.725 0.00162 0.00181 91 172 1.01 0.0267 0.479 0.146 0.195 0.763 0.825 0.00191 0.00206 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 91 100 1.61 0.029 1.03 0.152 0.203 1.19 1.21 0.00299 0.00302 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 91 9788.167 0.005 0.0309 0.507 1.12 0.157 0.21 0.663 0.848 0.00166 0.00212 ! Validation 91 9788.167 0.005 0.028 0.325 0.885 0.15 0.2 0.574 0.679 0.00144 0.0017 Wall time: 9788.167860022746 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 92 100 0.767 0.0279 0.209 0.149 0.199 0.437 0.545 0.00109 0.00136 92 172 0.816 0.0273 0.271 0.148 0.197 0.496 0.621 0.00124 0.00155 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 92 100 1.23 0.0298 0.633 0.153 0.206 0.922 0.949 0.0023 0.00237 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 92 9895.355 0.005 0.0277 0.325 0.878 0.149 0.198 0.533 0.679 0.00133 0.0017 ! Validation 92 9895.355 0.005 0.0284 0.223 0.79 0.151 0.201 0.467 0.563 0.00117 0.00141 Wall time: 9895.355223069899 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 93 100 2.44 0.0333 1.78 0.164 0.217 1.49 1.59 0.00373 0.00397 93 172 27.7 1.03 7.06 0.898 1.21 2.55 3.17 0.00637 0.00792 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 93 100 37 0.929 18.4 0.853 1.15 5.09 5.12 0.0127 0.0128 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 93 10002.526 0.005 0.25 296 301 0.329 0.595 7.05 20.5 0.0176 0.0513 ! 
Validation 93 10002.526 0.005 0.981 43.1 62.7 0.878 1.18 7.65 7.83 0.0191 0.0196 Wall time: 10002.526767245028 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 94 100 24.6 0.947 5.68 0.859 1.16 2.28 2.84 0.0057 0.0071 94 172 21.7 0.923 3.2 0.846 1.15 1.74 2.13 0.00435 0.00533 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 94 100 18.7 0.88 1.13 0.826 1.12 1.22 1.27 0.00304 0.00317 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 94 10109.687 0.005 0.954 6.17 25.2 0.863 1.16 2.32 2.96 0.00579 0.0074 ! Validation 94 10109.687 0.005 0.921 1.59 20 0.847 1.14 1.16 1.5 0.00289 0.00376 Wall time: 10109.687545764726 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 95 100 20.3 0.889 2.56 0.828 1.12 1.56 1.91 0.0039 0.00477 95 172 21.9 0.834 5.18 0.808 1.09 2.27 2.71 0.00568 0.00678 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 95 100 16.3 0.814 0.0603 0.792 1.08 0.266 0.293 0.000664 0.000732 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 95 10216.834 0.005 0.89 3.07 20.9 0.831 1.12 1.68 2.09 0.0042 0.00522 ! Validation 95 10216.834 0.005 0.846 1.45 18.4 0.809 1.1 1.14 1.43 0.00284 0.00359 Wall time: 10216.834318042733 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 96 100 17 0.771 1.61 0.773 1.05 1.25 1.51 0.00313 0.00378 96 172 15.9 0.716 1.6 0.745 1.01 1.16 1.51 0.00291 0.00377 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 96 100 14 0.682 0.357 0.725 0.985 0.668 0.712 0.00167 0.00178 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 96 10323.976 0.005 0.786 2.18 17.9 0.779 1.06 1.4 1.76 0.00351 0.0044 ! Validation 96 10323.976 0.005 0.701 1.77 15.8 0.737 0.998 1.29 1.58 0.00322 0.00396 Wall time: 10323.976264609024 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 97 100 11.9 0.546 1.02 0.653 0.881 0.937 1.2 0.00234 0.00301 97 172 14.8 0.426 6.25 0.577 0.778 2.69 2.98 0.00671 0.00745 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 97 100 12 0.427 3.5 0.574 0.779 2.22 2.23 0.00554 0.00558 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 97 10431.121 0.005 0.573 1.8 13.3 0.665 0.902 1.28 1.6 0.00321 0.00399 ! Validation 97 10431.121 0.005 0.421 4.26 12.7 0.573 0.774 2.08 2.46 0.00521 0.00615 Wall time: 10431.121100006625 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 98 100 9.59 0.301 3.57 0.488 0.654 2.07 2.25 0.00517 0.00563 98 172 6.03 0.252 0.988 0.447 0.598 0.917 1.19 0.00229 0.00296 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 98 100 7.13 0.252 2.1 0.448 0.598 1.71 1.73 0.00427 0.00432 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 98 10540.545 0.005 0.323 2.13 8.59 0.502 0.678 1.41 1.74 0.00351 0.00435 ! 
Validation 98 10540.545 0.005 0.248 2.43 7.39 0.445 0.593 1.58 1.86 0.00394 0.00465 Wall time: 10540.545723396819 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 99 100 4.51 0.2 0.512 0.403 0.533 0.708 0.853 0.00177 0.00213 99 172 5.49 0.177 1.95 0.378 0.502 1.54 1.66 0.00386 0.00416 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 99 100 3.83 0.179 0.251 0.382 0.504 0.534 0.597 0.00134 0.00149 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 99 10647.806 0.005 0.212 1.5 5.73 0.412 0.548 1.17 1.46 0.00292 0.00365 ! Validation 99 10647.806 0.005 0.18 0.637 4.25 0.383 0.506 0.763 0.951 0.00191 0.00238 Wall time: 10647.806008493993 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 100 100 3.49 0.158 0.328 0.36 0.474 0.595 0.683 0.00149 0.00171 100 172 8.06 0.141 5.25 0.339 0.447 2.58 2.73 0.00645 0.00683 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 100 100 4.12 0.14 1.31 0.339 0.447 1.34 1.36 0.00334 0.00341 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 100 10754.964 0.005 0.162 1.16 4.4 0.363 0.48 1.02 1.28 0.00255 0.00321 ! Validation 100 10754.964 0.005 0.145 3.3 6.21 0.345 0.454 2.02 2.17 0.00505 0.00542 Wall time: 10754.96466326667 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 101 100 2.93 0.129 0.341 0.325 0.429 0.564 0.696 0.00141 0.00174 101 172 3.57 0.122 1.12 0.316 0.417 1.09 1.26 0.00273 0.00315 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 101 100 2.81 0.12 0.417 0.312 0.413 0.734 0.77 0.00183 0.00193 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 101 10862.117 0.005 0.135 1.26 3.96 0.332 0.438 1.06 1.34 0.00266 0.00335 ! Validation 101 10862.117 0.005 0.124 0.47 2.96 0.32 0.42 0.669 0.817 0.00167 0.00204 Wall time: 10862.117647236679 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 102 100 3.66 0.114 1.38 0.306 0.403 1.24 1.4 0.0031 0.0035 102 172 3.52 0.115 1.22 0.306 0.404 1.14 1.32 0.00284 0.00329 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 102 100 2.17 0.106 0.0546 0.293 0.388 0.276 0.279 0.00069 0.000696 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 102 10969.232 0.005 0.119 1.34 3.72 0.312 0.411 1.1 1.38 0.00275 0.00345 ! Validation 102 10969.232 0.005 0.11 0.451 2.66 0.301 0.396 0.63 0.8 0.00158 0.002 Wall time: 10969.232646774035 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 103 100 2.75 0.104 0.677 0.291 0.384 0.836 0.981 0.00209 0.00245 103 172 4.14 0.0985 2.17 0.284 0.374 1.67 1.76 0.00418 0.00439 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 103 100 1.97 0.0942 0.0907 0.276 0.366 0.334 0.359 0.000835 0.000898 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 103 11076.337 0.005 0.104 0.9 2.98 0.292 0.385 0.908 1.13 0.00227 0.00283 ! 
Validation 103 11076.337 0.005 0.0978 0.578 2.53 0.284 0.373 0.73 0.907 0.00182 0.00227 Wall time: 11076.337415508926 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 104 100 3.19 0.0958 1.27 0.28 0.369 1.15 1.34 0.00286 0.00336 104 172 7.58 0.0893 5.8 0.271 0.356 2.79 2.87 0.00699 0.00718 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 104 100 2.79 0.0864 1.07 0.265 0.35 1.22 1.23 0.00304 0.00308 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 104 11183.493 0.005 0.0955 1.24 3.15 0.28 0.368 1.07 1.32 0.00267 0.00331 ! Validation 104 11183.493 0.005 0.0903 2.72 4.52 0.273 0.358 1.85 1.96 0.00464 0.00491 Wall time: 11183.493529269937 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 105 100 4.43 0.0866 2.7 0.268 0.351 1.87 1.96 0.00468 0.00489 105 172 2.41 0.085 0.713 0.265 0.348 0.843 1.01 0.00211 0.00252 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 105 100 2.31 0.0789 0.732 0.253 0.335 1.01 1.02 0.00252 0.00255 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 105 11290.599 0.005 0.0878 1.01 2.77 0.269 0.353 0.984 1.2 0.00246 0.003 ! Validation 105 11290.599 0.005 0.0822 0.88 2.52 0.26 0.342 0.948 1.12 0.00237 0.0028 Wall time: 11290.599427139852 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 106 100 2.74 0.0781 1.18 0.254 0.333 1.16 1.3 0.00289 0.00324 106 172 2.99 0.0757 1.48 0.25 0.328 1.37 1.45 0.00343 0.00362 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 106 100 2.15 0.0715 0.719 0.241 0.319 0.999 1.01 0.0025 0.00253 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 106 11397.714 0.005 0.0797 0.748 2.34 0.256 0.336 0.822 1.03 0.00205 0.00258 ! Validation 106 11397.714 0.005 0.0749 0.426 1.92 0.249 0.326 0.645 0.778 0.00161 0.00194 Wall time: 11397.714147385675 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 107 100 2.06 0.0782 0.5 0.253 0.333 0.642 0.843 0.00161 0.00211 107 172 3.16 0.0773 1.61 0.252 0.331 1.38 1.51 0.00346 0.00378 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 107 100 2.32 0.0729 0.857 0.245 0.322 1.1 1.1 0.00274 0.00276 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 107 11504.840 0.005 0.0768 1.42 2.96 0.251 0.33 1.18 1.42 0.00294 0.00356 ! Validation 107 11504.840 0.005 0.0743 0.339 1.83 0.248 0.325 0.569 0.694 0.00142 0.00173 Wall time: 11504.840245693922 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 108 100 1.64 0.0697 0.247 0.24 0.315 0.494 0.593 0.00124 0.00148 108 172 1.63 0.0688 0.251 0.239 0.313 0.509 0.598 0.00127 0.00149 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 108 100 1.55 0.0646 0.253 0.23 0.303 0.586 0.6 0.00147 0.0015 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 108 11611.978 0.005 0.0721 0.849 2.29 0.244 0.32 0.894 1.1 0.00224 0.00275 ! 
Validation 108 11611.978 0.005 0.0673 0.24 1.59 0.236 0.309 0.47 0.584 0.00118 0.00146 Wall time: 11611.978002539836 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 109 100 1.55 0.0681 0.189 0.237 0.311 0.403 0.518 0.00101 0.0013 109 172 1.91 0.0642 0.63 0.23 0.302 0.749 0.946 0.00187 0.00237 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 109 100 1.23 0.0609 0.0104 0.224 0.294 0.0815 0.121 0.000204 0.000303 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 109 11727.188 0.005 0.066 0.683 2 0.233 0.306 0.798 0.986 0.002 0.00246 ! Validation 109 11727.188 0.005 0.0629 0.293 1.55 0.228 0.299 0.518 0.645 0.00129 0.00161 Wall time: 11727.188386528753 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 110 100 1.94 0.0616 0.709 0.226 0.296 0.894 1 0.00223 0.00251 110 172 2.56 0.0593 1.37 0.222 0.29 1.31 1.4 0.00328 0.00349 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 110 100 1.25 0.059 0.0682 0.221 0.289 0.296 0.311 0.00074 0.000778 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 110 11834.251 0.005 0.0644 1.02 2.3 0.231 0.302 0.962 1.2 0.00241 0.003 ! Validation 110 11834.251 0.005 0.0611 0.649 1.87 0.225 0.295 0.816 0.96 0.00204 0.0024 Wall time: 11834.25155264372 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 111 100 1.58 0.0627 0.325 0.227 0.298 0.594 0.679 0.00148 0.0017 111 172 1.4 0.0577 0.243 0.218 0.286 0.468 0.588 0.00117 0.00147 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 111 100 1.11 0.0552 0.00843 0.214 0.28 0.0844 0.109 0.000211 0.000274 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 111 11941.333 0.005 0.0613 0.861 2.09 0.225 0.295 0.894 1.11 0.00223 0.00277 ! Validation 111 11941.333 0.005 0.0578 0.475 1.63 0.219 0.287 0.674 0.821 0.00169 0.00205 Wall time: 11941.33370034583 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 112 100 1.81 0.0568 0.67 0.216 0.284 0.875 0.976 0.00219 0.00244 112 172 1.35 0.0563 0.227 0.215 0.283 0.464 0.568 0.00116 0.00142 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 112 100 1.23 0.0521 0.186 0.208 0.272 0.5 0.514 0.00125 0.00129 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 112 12048.403 0.005 0.0577 0.753 1.91 0.219 0.286 0.839 1.03 0.0021 0.00259 ! Validation 112 12048.403 0.005 0.0544 0.342 1.43 0.213 0.278 0.557 0.697 0.00139 0.00174 Wall time: 12048.403101264033 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 113 100 1.28 0.0554 0.175 0.214 0.281 0.416 0.498 0.00104 0.00125 113 172 1.44 0.0581 0.281 0.219 0.287 0.491 0.632 0.00123 0.00158 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 113 100 3.59 0.055 2.49 0.214 0.28 1.88 1.88 0.0047 0.0047 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 113 12155.656 0.005 0.0552 0.845 1.95 0.214 0.28 0.876 1.1 0.00219 0.00274 ! 
Validation 113 12155.656 0.005 0.0556 1.21 2.32 0.214 0.281 1.2 1.31 0.00299 0.00328 Wall time: 12155.6566461958 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 114 100 1.21 0.0501 0.211 0.204 0.267 0.408 0.548 0.00102 0.00137 114 172 1.31 0.0511 0.289 0.206 0.269 0.502 0.641 0.00125 0.0016 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 114 100 1.14 0.0481 0.177 0.2 0.261 0.49 0.501 0.00122 0.00125 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 114 12262.732 0.005 0.0526 0.576 1.63 0.208 0.273 0.731 0.905 0.00183 0.00226 ! Validation 114 12262.732 0.005 0.0498 0.248 1.24 0.203 0.266 0.472 0.593 0.00118 0.00148 Wall time: 12262.732421870809 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 115 100 2.78 0.0467 1.85 0.197 0.258 1.56 1.62 0.00391 0.00405 115 172 1.23 0.0496 0.238 0.202 0.266 0.472 0.582 0.00118 0.00145 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 115 100 1.71 0.0459 0.79 0.195 0.256 1.05 1.06 0.00263 0.00265 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 115 12371.630 0.005 0.0502 0.665 1.67 0.204 0.267 0.756 0.973 0.00189 0.00243 ! Validation 115 12371.630 0.005 0.0477 0.495 1.45 0.199 0.26 0.725 0.839 0.00181 0.0021 Wall time: 12371.62996710185 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 116 100 1.1 0.0491 0.121 0.201 0.264 0.316 0.415 0.00079 0.00104 116 172 1.61 0.0473 0.667 0.198 0.259 0.882 0.974 0.0022 0.00243 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 116 100 2.29 0.0456 1.38 0.194 0.255 1.4 1.4 0.0035 0.0035 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 116 12478.643 0.005 0.0494 0.878 1.86 0.202 0.265 0.894 1.12 0.00223 0.00279 ! Validation 116 12478.643 0.005 0.0473 0.49 1.44 0.198 0.259 0.723 0.835 0.00181 0.00209 Wall time: 12478.643252050038 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 117 100 1.3 0.0482 0.333 0.199 0.262 0.539 0.687 0.00135 0.00172 117 172 2.15 0.0494 1.16 0.203 0.265 1.17 1.28 0.00291 0.00321 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 117 100 1.74 0.0456 0.824 0.195 0.255 1.08 1.08 0.00269 0.00271 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 117 12585.797 0.005 0.0487 0.883 1.86 0.201 0.263 0.878 1.12 0.0022 0.0028 ! Validation 117 12585.797 0.005 0.0467 0.283 1.22 0.197 0.258 0.525 0.634 0.00131 0.00159 Wall time: 12585.797510439064 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 118 100 1.2 0.0437 0.329 0.19 0.249 0.572 0.684 0.00143 0.00171 118 172 0.926 0.0402 0.123 0.183 0.239 0.357 0.418 0.000893 0.00104 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 118 100 0.854 0.0409 0.0357 0.184 0.241 0.191 0.225 0.000478 0.000563 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 118 12692.810 0.005 0.0454 0.525 1.43 0.194 0.254 0.698 0.864 0.00175 0.00216 ! 
Validation 118 12692.810 0.005 0.0427 0.216 1.07 0.188 0.246 0.44 0.554 0.0011 0.00138 Wall time: 12692.810199192725 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 119 100 2.17 0.0442 1.28 0.19 0.251 1.29 1.35 0.00321 0.00338 119 172 1.01 0.0443 0.122 0.191 0.251 0.341 0.417 0.000851 0.00104 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 119 100 3.99 0.0442 3.11 0.192 0.251 2.1 2.1 0.00525 0.00525 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 119 12799.836 0.005 0.0439 0.771 1.65 0.19 0.25 0.846 1.05 0.00211 0.00262 ! Validation 119 12799.836 0.005 0.0443 1.38 2.27 0.191 0.251 1.3 1.4 0.00325 0.00351 Wall time: 12799.836406882852 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 120 100 1.07 0.0421 0.232 0.186 0.245 0.455 0.575 0.00114 0.00144 120 172 0.939 0.0412 0.115 0.184 0.242 0.342 0.405 0.000855 0.00101 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 120 100 1.13 0.0389 0.354 0.179 0.235 0.7 0.709 0.00175 0.00177 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 120 12907.258 0.005 0.0426 0.546 1.4 0.187 0.246 0.71 0.881 0.00178 0.0022 ! Validation 120 12907.258 0.005 0.0398 0.156 0.951 0.181 0.238 0.383 0.471 0.000957 0.00118 Wall time: 12907.25805130694 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 121 100 1.27 0.0427 0.415 0.187 0.246 0.658 0.768 0.00165 0.00192 121 172 1.1 0.0415 0.274 0.185 0.243 0.521 0.624 0.0013 0.00156 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 121 100 2.22 0.0388 1.44 0.179 0.235 1.43 1.43 0.00357 0.00358 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 121 13014.347 0.005 0.0414 0.762 1.59 0.185 0.243 0.855 1.04 0.00214 0.0026 ! Validation 121 13014.347 0.005 0.0396 0.542 1.33 0.181 0.237 0.775 0.878 0.00194 0.00219 Wall time: 13014.347335233819 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 122 100 1 0.039 0.222 0.179 0.235 0.438 0.562 0.0011 0.0014 122 172 1.51 0.0404 0.707 0.182 0.24 0.919 1 0.0023 0.00251 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 122 100 1.19 0.0386 0.415 0.179 0.234 0.758 0.768 0.00189 0.00192 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 122 13121.461 0.005 0.0391 0.556 1.34 0.179 0.236 0.698 0.889 0.00174 0.00222 ! Validation 122 13121.461 0.005 0.0387 0.15 0.924 0.179 0.234 0.375 0.462 0.000937 0.00116 Wall time: 13121.46141214203 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 123 100 1.27 0.0397 0.48 0.18 0.238 0.768 0.826 0.00192 0.00206 123 172 1.01 0.0396 0.22 0.179 0.237 0.444 0.559 0.00111 0.0014 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 123 100 2.39 0.0376 1.64 0.176 0.231 1.52 1.53 0.0038 0.00381 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 123 13228.565 0.005 0.0397 0.779 1.57 0.181 0.238 0.866 1.05 0.00216 0.00263 ! 
Validation 123 13228.565 0.005 0.0379 0.559 1.32 0.177 0.232 0.792 0.891 0.00198 0.00223 Wall time: 13228.56563680293 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 124 100 1.72 0.0363 0.998 0.172 0.227 1.12 1.19 0.0028 0.00298 124 172 1.03 0.0369 0.289 0.173 0.229 0.558 0.64 0.0014 0.0016 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 124 100 1.46 0.0352 0.759 0.17 0.224 1.03 1.04 0.00258 0.0026 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 124 13335.670 0.005 0.0378 0.559 1.32 0.176 0.232 0.713 0.891 0.00178 0.00223 ! Validation 124 13335.670 0.005 0.0359 0.319 1.04 0.172 0.226 0.576 0.673 0.00144 0.00168 Wall time: 13335.670758925844 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 125 100 1.2 0.0389 0.418 0.178 0.235 0.631 0.771 0.00158 0.00193 125 172 1.37 0.0381 0.609 0.176 0.233 0.847 0.931 0.00212 0.00233 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 125 100 1.21 0.0355 0.5 0.17 0.225 0.831 0.843 0.00208 0.00211 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 125 13442.762 0.005 0.0367 0.664 1.4 0.173 0.228 0.796 0.971 0.00199 0.00243 ! Validation 125 13442.762 0.005 0.0357 0.16 0.874 0.171 0.225 0.379 0.477 0.000949 0.00119 Wall time: 13442.76219569007 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 126 100 0.983 0.0344 0.295 0.167 0.221 0.558 0.648 0.0014 0.00162 126 172 0.909 0.0352 0.206 0.17 0.224 0.485 0.541 0.00121 0.00135 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 126 100 1.04 0.0352 0.338 0.17 0.224 0.68 0.693 0.0017 0.00173 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 126 13549.919 0.005 0.0356 0.592 1.3 0.17 0.225 0.75 0.917 0.00188 0.00229 ! Validation 126 13549.919 0.005 0.0348 0.139 0.835 0.169 0.222 0.358 0.445 0.000896 0.00111 Wall time: 13549.919763006736 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 127 100 1.31 0.036 0.587 0.172 0.226 0.841 0.913 0.0021 0.00228 127 172 1.66 0.0363 0.93 0.171 0.227 1.08 1.15 0.00271 0.00287 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 127 100 0.667 0.0325 0.0159 0.163 0.215 0.14 0.15 0.000349 0.000375 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 127 13657.308 0.005 0.0347 0.549 1.24 0.168 0.222 0.694 0.883 0.00173 0.00221 ! Validation 127 13657.308 0.005 0.0332 0.196 0.859 0.165 0.217 0.418 0.527 0.00104 0.00132 Wall time: 13657.307970101945 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 128 100 0.774 0.0329 0.116 0.163 0.216 0.309 0.405 0.000772 0.00101 128 172 0.811 0.0353 0.105 0.169 0.224 0.329 0.387 0.000822 0.000968 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 128 100 0.693 0.0325 0.0434 0.163 0.215 0.201 0.248 0.000503 0.000621 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 128 13764.430 0.005 0.0336 0.519 1.19 0.165 0.218 0.696 0.859 0.00174 0.00215 ! 
Validation 128 13764.430 0.005 0.0328 0.334 0.99 0.164 0.216 0.57 0.689 0.00142 0.00172 Wall time: 13764.43028598791 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 129 100 0.819 0.0335 0.148 0.165 0.218 0.376 0.459 0.00094 0.00115 129 172 0.913 0.0319 0.275 0.161 0.213 0.535 0.625 0.00134 0.00156 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 129 100 0.664 0.0322 0.0204 0.162 0.214 0.166 0.17 0.000414 0.000426 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 129 13871.544 0.005 0.0333 0.604 1.27 0.165 0.217 0.752 0.927 0.00188 0.00232 ! Validation 129 13871.544 0.005 0.0322 0.36 1 0.162 0.214 0.6 0.716 0.0015 0.00179 Wall time: 13871.543949138839 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 130 100 1.29 0.0302 0.689 0.157 0.207 0.902 0.989 0.00225 0.00247 130 172 1.74 0.03 1.14 0.156 0.207 1.2 1.27 0.003 0.00319 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 130 100 1.37 0.0307 0.752 0.158 0.209 1.02 1.03 0.00255 0.00258 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 130 13978.747 0.005 0.032 0.463 1.1 0.161 0.213 0.632 0.81 0.00158 0.00203 ! Validation 130 13978.747 0.005 0.0312 1.95 2.57 0.16 0.21 1.61 1.67 0.00403 0.00416 Wall time: 13978.746999610681 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 131 100 1.06 0.0311 0.436 0.158 0.21 0.722 0.787 0.0018 0.00197 131 172 1.6 0.0323 0.953 0.162 0.214 1.1 1.16 0.00276 0.00291 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 131 100 0.862 0.0329 0.205 0.163 0.216 0.51 0.539 0.00127 0.00135 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 131 14085.914 0.005 0.0314 0.577 1.2 0.16 0.211 0.739 0.905 0.00185 0.00226 ! Validation 131 14085.914 0.005 0.0325 0.486 1.14 0.163 0.215 0.685 0.831 0.00171 0.00208 Wall time: 14085.914016680792 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 132 100 0.829 0.0292 0.246 0.154 0.204 0.506 0.591 0.00126 0.00148 132 172 0.79 0.0297 0.196 0.155 0.205 0.41 0.528 0.00102 0.00132 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 132 100 0.967 0.0285 0.397 0.151 0.201 0.732 0.751 0.00183 0.00188 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 132 14193.050 0.005 0.0308 0.449 1.06 0.158 0.209 0.625 0.799 0.00156 0.002 ! Validation 132 14193.050 0.005 0.0291 0.12 0.701 0.154 0.203 0.328 0.412 0.00082 0.00103 Wall time: 14193.049916085787 ! Best model 132 0.701 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 133 100 0.73 0.0296 0.138 0.156 0.205 0.366 0.443 0.000914 0.00111 133 172 0.776 0.0301 0.174 0.155 0.207 0.385 0.498 0.000962 0.00124 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 133 100 0.869 0.0289 0.291 0.153 0.203 0.622 0.643 0.00155 0.00161 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 133 14300.189 0.005 0.03 0.471 1.07 0.156 0.206 0.643 0.819 0.00161 0.00205 ! 
Validation 133 14300.189 0.005 0.0287 0.136 0.709 0.153 0.202 0.344 0.44 0.000859 0.0011 Wall time: 14300.18940440286 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 134 100 0.687 0.0279 0.129 0.151 0.199 0.374 0.429 0.000934 0.00107 134 172 0.865 0.0305 0.255 0.157 0.208 0.533 0.601 0.00133 0.0015 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 134 100 1.18 0.0276 0.627 0.148 0.198 0.926 0.944 0.00232 0.00236 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 134 14407.292 0.005 0.0294 0.5 1.09 0.154 0.204 0.669 0.843 0.00167 0.00211 ! Validation 134 14407.292 0.005 0.0279 0.169 0.727 0.15 0.199 0.409 0.491 0.00102 0.00123 Wall time: 14407.292740263045 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 135 100 0.94 0.028 0.379 0.151 0.2 0.659 0.734 0.00165 0.00183 135 172 0.672 0.027 0.131 0.147 0.196 0.346 0.431 0.000866 0.00108 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 135 100 1.18 0.0267 0.645 0.146 0.195 0.943 0.957 0.00236 0.00239 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 135 14514.387 0.005 0.0295 0.475 1.07 0.154 0.205 0.64 0.822 0.0016 0.00205 ! Validation 135 14514.387 0.005 0.027 0.326 0.867 0.148 0.196 0.598 0.681 0.00149 0.0017 Wall time: 14514.387004548684 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 136 100 0.812 0.0277 0.257 0.149 0.199 0.522 0.605 0.0013 0.00151 136 172 0.64 0.0282 0.0757 0.151 0.2 0.278 0.328 0.000694 0.00082 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 136 100 0.592 0.0285 0.0211 0.151 0.201 0.142 0.173 0.000354 0.000433 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 136 14621.487 0.005 0.0277 0.44 0.994 0.149 0.198 0.624 0.791 0.00156 0.00198 ! Validation 136 14621.487 0.005 0.0282 0.518 1.08 0.151 0.2 0.754 0.858 0.00189 0.00214 Wall time: 14621.487162321806 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 137 100 1.32 0.0255 0.808 0.144 0.191 1.02 1.07 0.00254 0.00268 137 172 1.02 0.0302 0.413 0.157 0.207 0.668 0.766 0.00167 0.00191 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 137 100 0.903 0.0311 0.281 0.159 0.21 0.615 0.632 0.00154 0.00158 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 137 14729.826 0.005 0.0275 0.523 1.07 0.149 0.198 0.693 0.862 0.00173 0.00215 ! Validation 137 14729.826 0.005 0.0303 0.566 1.17 0.157 0.208 0.789 0.897 0.00197 0.00224 Wall time: 14729.826824825723 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 138 100 1.02 0.0285 0.449 0.152 0.201 0.642 0.799 0.00161 0.002 138 172 0.709 0.0261 0.188 0.145 0.193 0.44 0.516 0.0011 0.00129 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 138 100 0.533 0.0257 0.0185 0.144 0.191 0.137 0.162 0.000342 0.000406 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 138 14836.897 0.005 0.0271 0.356 0.898 0.148 0.196 0.562 0.712 0.0014 0.00178 ! Validation 138 14836.897 0.005 0.0259 0.161 0.679 0.145 0.192 0.383 0.478 0.000957 0.00119 Wall time: 14836.897095757071 ! 
Best model 138 0.679 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 139 100 0.752 0.0238 0.275 0.139 0.184 0.567 0.626 0.00142 0.00156 139 172 0.618 0.0242 0.134 0.14 0.186 0.357 0.436 0.000893 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 139 100 1.26 0.0244 0.77 0.139 0.186 1.03 1.05 0.00258 0.00261 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 139 14944.533 0.005 0.0256 0.3 0.811 0.143 0.191 0.52 0.653 0.0013 0.00163 ! Validation 139 14944.533 0.005 0.0246 0.276 0.768 0.141 0.187 0.547 0.627 0.00137 0.00157 Wall time: 14944.53304322483 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 140 100 0.628 0.0254 0.121 0.143 0.19 0.332 0.414 0.000831 0.00104 140 172 0.579 0.024 0.1 0.139 0.185 0.299 0.377 0.000747 0.000942 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 140 100 0.826 0.0249 0.328 0.141 0.188 0.662 0.682 0.00165 0.00171 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 140 15051.567 0.005 0.0255 0.426 0.936 0.143 0.19 0.63 0.778 0.00158 0.00194 ! Validation 140 15051.567 0.005 0.0248 0.127 0.624 0.142 0.188 0.331 0.425 0.000828 0.00106 Wall time: 15051.566901423968 ! Best model 140 0.624 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 141 100 0.656 0.0235 0.185 0.137 0.183 0.435 0.513 0.00109 0.00128 141 172 0.921 0.0269 0.383 0.146 0.196 0.634 0.738 0.00159 0.00184 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 141 100 2.01 0.0246 1.52 0.14 0.187 1.46 1.47 0.00365 0.00367 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 141 15158.617 0.005 0.0253 0.435 0.941 0.143 0.19 0.627 0.786 0.00157 0.00197 ! Validation 141 15158.617 0.005 0.0246 0.477 0.97 0.141 0.187 0.733 0.824 0.00183 0.00206 Wall time: 15158.617433754727 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 142 100 0.956 0.0259 0.437 0.144 0.192 0.722 0.788 0.00181 0.00197 142 172 0.871 0.0255 0.362 0.142 0.19 0.598 0.717 0.0015 0.00179 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 142 100 0.515 0.0243 0.029 0.139 0.186 0.143 0.203 0.000356 0.000507 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 142 15265.863 0.005 0.0253 0.479 0.986 0.143 0.19 0.675 0.825 0.00169 0.00206 ! Validation 142 15265.863 0.005 0.0244 0.143 0.63 0.14 0.186 0.354 0.45 0.000884 0.00113 Wall time: 15265.86335507501 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 143 100 1.18 0.0255 0.665 0.144 0.19 0.827 0.972 0.00207 0.00243 143 172 0.729 0.0253 0.222 0.142 0.19 0.478 0.562 0.0012 0.0014 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 143 100 1.09 0.0227 0.636 0.134 0.179 0.936 0.951 0.00234 0.00238 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 143 15372.951 0.005 0.0245 0.347 0.837 0.14 0.187 0.546 0.702 0.00136 0.00176 ! 
Validation 143 15372.951 0.005 0.023 0.582 1.04 0.136 0.181 0.825 0.909 0.00206 0.00227 Wall time: 15372.951259085909 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 144 100 0.842 0.0234 0.374 0.138 0.182 0.597 0.729 0.00149 0.00182 144 172 0.539 0.0219 0.101 0.133 0.176 0.318 0.379 0.000796 0.000947 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 144 100 0.514 0.0215 0.085 0.131 0.175 0.305 0.348 0.000762 0.000869 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 144 15480.024 0.005 0.0237 0.337 0.81 0.138 0.183 0.55 0.692 0.00137 0.00173 ! Validation 144 15480.024 0.005 0.0221 0.167 0.608 0.133 0.177 0.395 0.486 0.000988 0.00122 Wall time: 15480.02417129092 ! Best model 144 0.608 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 145 100 0.578 0.0238 0.102 0.137 0.184 0.294 0.381 0.000736 0.000954 145 172 0.594 0.0218 0.159 0.132 0.176 0.385 0.475 0.000963 0.00119 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 145 100 1.67 0.0214 1.24 0.13 0.174 1.32 1.33 0.0033 0.00332 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 145 15587.113 0.005 0.0226 0.313 0.765 0.135 0.179 0.548 0.667 0.00137 0.00167 ! Validation 145 15587.113 0.005 0.0218 0.667 1.1 0.133 0.176 0.913 0.973 0.00228 0.00243 Wall time: 15587.11300327396 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 146 100 0.536 0.0214 0.109 0.131 0.174 0.332 0.393 0.00083 0.000983 146 172 0.514 0.0228 0.059 0.135 0.18 0.237 0.29 0.000593 0.000724 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 146 100 0.485 0.0221 0.0426 0.133 0.177 0.187 0.246 0.000467 0.000615 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 146 15694.258 0.005 0.0226 0.38 0.832 0.135 0.179 0.584 0.735 0.00146 0.00184 ! Validation 146 15694.258 0.005 0.0221 0.1 0.542 0.134 0.177 0.292 0.377 0.00073 0.000943 Wall time: 15694.258548425976 ! Best model 146 0.542 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 147 100 0.694 0.0213 0.268 0.131 0.174 0.553 0.617 0.00138 0.00154 147 172 0.877 0.0239 0.398 0.138 0.184 0.671 0.752 0.00168 0.00188 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 147 100 0.449 0.0211 0.0264 0.129 0.173 0.156 0.194 0.00039 0.000484 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 147 15801.342 0.005 0.0221 0.338 0.779 0.133 0.177 0.568 0.693 0.00142 0.00173 ! Validation 147 15801.342 0.005 0.0213 0.159 0.585 0.131 0.174 0.379 0.475 0.000947 0.00119 Wall time: 15801.34261433268 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 148 100 0.699 0.0234 0.231 0.138 0.182 0.503 0.573 0.00126 0.00143 148 172 0.526 0.0207 0.113 0.129 0.172 0.334 0.4 0.000836 0.001 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 148 100 0.689 0.0201 0.286 0.126 0.169 0.616 0.638 0.00154 0.00159 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 148 15908.866 0.005 0.0224 0.405 0.853 0.134 0.178 0.604 0.759 0.00151 0.0019 ! Validation 148 15908.866 0.005 0.0208 0.104 0.52 0.129 0.172 0.311 0.385 0.000777 0.000962 Wall time: 15908.865964367986 ! 
Best model 148 0.520 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 149 100 0.64 0.0208 0.223 0.129 0.172 0.385 0.563 0.000963 0.00141 149 172 0.697 0.0234 0.23 0.137 0.182 0.453 0.571 0.00113 0.00143 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 149 100 1.27 0.0225 0.823 0.135 0.179 1.06 1.08 0.00266 0.0027 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 149 16015.979 0.005 0.021 0.299 0.72 0.13 0.173 0.521 0.652 0.0013 0.00163 ! Validation 149 16015.979 0.005 0.0223 0.234 0.68 0.134 0.178 0.488 0.576 0.00122 0.00144 Wall time: 16015.979700404685 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 150 100 0.481 0.0193 0.0945 0.124 0.166 0.276 0.367 0.000689 0.000916 150 172 0.469 0.0204 0.0599 0.129 0.17 0.205 0.292 0.000513 0.000729 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 150 100 0.96 0.0203 0.554 0.127 0.17 0.872 0.887 0.00218 0.00222 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 150 16123.058 0.005 0.0209 0.328 0.745 0.129 0.172 0.548 0.683 0.00137 0.00171 ! Validation 150 16123.058 0.005 0.0204 0.246 0.654 0.128 0.17 0.512 0.591 0.00128 0.00148 Wall time: 16123.05813665688 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 151 100 0.489 0.0205 0.0787 0.128 0.171 0.259 0.334 0.000647 0.000836 151 172 0.974 0.0226 0.522 0.135 0.179 0.708 0.862 0.00177 0.00215 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 151 100 4.89 0.0232 4.42 0.137 0.182 2.5 2.51 0.00626 0.00627 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 151 16230.136 0.005 0.0207 0.375 0.789 0.129 0.172 0.594 0.73 0.00149 0.00182 ! Validation 151 16230.136 0.005 0.0227 1.34 1.79 0.136 0.18 1.23 1.38 0.00307 0.00345 Wall time: 16230.136475899722 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 152 100 0.533 0.0198 0.137 0.126 0.168 0.357 0.442 0.000893 0.00111 152 172 0.795 0.02 0.395 0.126 0.169 0.683 0.749 0.00171 0.00187 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 152 100 0.612 0.019 0.233 0.123 0.164 0.549 0.575 0.00137 0.00144 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 152 16337.216 0.005 0.0207 0.321 0.736 0.129 0.172 0.54 0.676 0.00135 0.00169 ! Validation 152 16337.216 0.005 0.019 0.158 0.538 0.124 0.164 0.376 0.474 0.000939 0.00118 Wall time: 16337.216534296982 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 153 100 0.707 0.0198 0.31 0.127 0.168 0.556 0.664 0.00139 0.00166 153 172 0.56 0.0199 0.163 0.125 0.168 0.395 0.481 0.000988 0.0012 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 153 100 0.444 0.0189 0.066 0.123 0.164 0.266 0.306 0.000666 0.000766 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 153 16444.285 0.005 0.0205 0.383 0.792 0.128 0.171 0.594 0.738 0.00148 0.00184 ! 
Validation 153 16444.285 0.005 0.0191 0.417 0.798 0.124 0.165 0.685 0.77 0.00171 0.00193 Wall time: 16444.28561159596 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 154 100 0.523 0.0197 0.129 0.125 0.167 0.331 0.428 0.000827 0.00107 154 172 0.416 0.0184 0.0474 0.121 0.162 0.214 0.26 0.000535 0.000649 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 154 100 0.458 0.0174 0.109 0.118 0.157 0.359 0.394 0.000897 0.000984 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 154 16551.393 0.005 0.0192 0.232 0.615 0.124 0.165 0.452 0.575 0.00113 0.00144 ! Validation 154 16551.393 0.005 0.0179 0.0804 0.437 0.12 0.159 0.263 0.338 0.000657 0.000845 Wall time: 16551.39316870086 ! Best model 154 0.437 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 155 100 0.481 0.0172 0.138 0.117 0.156 0.37 0.442 0.000925 0.00111 155 172 0.657 0.0182 0.294 0.121 0.161 0.58 0.646 0.00145 0.00161 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 155 100 0.556 0.0171 0.215 0.117 0.156 0.532 0.553 0.00133 0.00138 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 155 16658.744 0.005 0.0184 0.281 0.648 0.121 0.162 0.516 0.632 0.00129 0.00158 ! Validation 155 16658.744 0.005 0.0173 0.406 0.753 0.118 0.157 0.671 0.76 0.00168 0.0019 Wall time: 16658.744398637675 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 156 100 0.452 0.0191 0.0709 0.123 0.165 0.265 0.317 0.000662 0.000794 156 172 0.433 0.0171 0.0915 0.117 0.156 0.282 0.361 0.000705 0.000901 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 156 100 0.41 0.0166 0.0774 0.115 0.154 0.289 0.332 0.000722 0.000829 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 156 16765.826 0.005 0.0184 0.3 0.669 0.122 0.162 0.521 0.653 0.0013 0.00163 ! Validation 156 16765.826 0.005 0.0169 0.289 0.627 0.117 0.155 0.533 0.641 0.00133 0.0016 Wall time: 16765.82601134479 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 157 100 0.486 0.0179 0.128 0.12 0.16 0.321 0.426 0.000803 0.00107 157 172 0.462 0.017 0.123 0.117 0.155 0.359 0.417 0.000897 0.00104 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 157 100 1.45 0.0177 1.1 0.121 0.158 1.24 1.25 0.00311 0.00312 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 157 16872.909 0.005 0.0179 0.298 0.657 0.12 0.16 0.529 0.651 0.00132 0.00163 ! Validation 157 16872.909 0.005 0.0176 0.287 0.639 0.12 0.158 0.532 0.639 0.00133 0.0016 Wall time: 16872.90936957905 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 158 100 23.6 0.952 4.58 0.867 1.16 2.09 2.55 0.00522 0.00638 158 172 21.6 0.855 4.52 0.813 1.1 1.84 2.53 0.00459 0.00634 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 158 100 16.9 0.801 0.867 0.789 1.07 1.01 1.11 0.00254 0.00277 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 158 16979.990 0.005 0.806 86.4 103 0.747 1.07 4.85 11.1 0.0121 0.0277 ! 
Validation 158 16979.990 0.005 0.84 1.74 18.5 0.811 1.09 1.21 1.57 0.00302 0.00394 Wall time: 16979.99012556672 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 159 100 12 0.492 2.12 0.625 0.836 1.45 1.74 0.00363 0.00434 159 172 8.09 0.367 0.756 0.549 0.722 0.774 1.04 0.00193 0.00259 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 159 100 7.42 0.364 0.131 0.544 0.72 0.372 0.432 0.00093 0.00108 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 159 17087.071 0.005 0.576 2.23 13.7 0.67 0.905 1.38 1.78 0.00346 0.00445 ! Validation 159 17087.071 0.005 0.368 0.981 8.35 0.55 0.723 0.928 1.18 0.00232 0.00295 Wall time: 17087.07187746372 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 160 100 8.05 0.291 2.22 0.488 0.644 1.52 1.78 0.0038 0.00444 160 172 5.94 0.247 0.996 0.451 0.593 0.936 1.19 0.00234 0.00297 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 160 100 4.92 0.241 0.0993 0.443 0.585 0.299 0.376 0.000748 0.000939 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 160 17194.148 0.005 0.306 1.27 7.39 0.5 0.659 1.07 1.35 0.00268 0.00336 ! Validation 160 17194.148 0.005 0.245 0.704 5.6 0.449 0.59 0.783 1 0.00196 0.0025 Wall time: 17194.148712446913 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 161 100 4.67 0.193 0.8 0.399 0.524 0.781 1.07 0.00195 0.00267 161 172 3.83 0.169 0.449 0.374 0.49 0.64 0.799 0.0016 0.002 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 161 100 3.66 0.16 0.456 0.362 0.477 0.794 0.805 0.00198 0.00201 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 161 17301.217 0.005 0.204 1.27 5.34 0.409 0.538 1.07 1.34 0.00268 0.00335 ! Validation 161 17301.217 0.005 0.163 0.708 3.96 0.367 0.481 0.809 1 0.00202 0.00251 Wall time: 17301.21778223384 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 162 100 4.51 0.13 1.91 0.329 0.43 1.47 1.65 0.00368 0.00412 162 172 2.94 0.119 0.56 0.313 0.411 0.751 0.892 0.00188 0.00223 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 162 100 2.63 0.112 0.388 0.304 0.399 0.728 0.743 0.00182 0.00186 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 162 17408.302 0.005 0.137 1.06 3.8 0.337 0.442 0.974 1.23 0.00244 0.00306 ! Validation 162 17408.302 0.005 0.115 2.18 4.48 0.309 0.404 1.59 1.76 0.00397 0.0044 Wall time: 17408.302295804024 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 163 100 3.08 0.101 1.06 0.29 0.379 1.07 1.23 0.00266 0.00307 163 172 2.45 0.0888 0.669 0.272 0.355 0.827 0.975 0.00207 0.00244 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 163 100 2.52 0.0874 0.766 0.269 0.353 1.02 1.04 0.00255 0.00261 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 163 17515.381 0.005 0.102 0.833 2.88 0.292 0.382 0.867 1.09 0.00217 0.00272 ! 
Validation 163 17515.381 0.005 0.0885 0.612 2.38 0.272 0.355 0.76 0.933 0.0019 0.00233 Wall time: 17515.381662047934 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 164 100 1.82 0.0772 0.273 0.253 0.331 0.554 0.623 0.00138 0.00156 164 172 2.38 0.0745 0.894 0.248 0.325 0.993 1.13 0.00248 0.00282 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 164 100 1.64 0.0716 0.205 0.243 0.319 0.497 0.539 0.00124 0.00135 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 164 17622.455 0.005 0.0818 0.806 2.44 0.261 0.341 0.86 1.07 0.00215 0.00268 ! Validation 164 17622.455 0.005 0.0722 1.49 2.93 0.245 0.32 1.3 1.45 0.00325 0.00363 Wall time: 17622.45540610468 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 165 100 1.43 0.0647 0.133 0.232 0.303 0.348 0.435 0.000871 0.00109 165 172 1.32 0.0587 0.146 0.221 0.289 0.355 0.455 0.000888 0.00114 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 165 100 1.78 0.0579 0.623 0.218 0.287 0.921 0.941 0.0023 0.00235 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 165 17729.549 0.005 0.0663 0.68 2.01 0.235 0.307 0.793 0.983 0.00198 0.00246 ! Validation 165 17729.549 0.005 0.0584 0.371 1.54 0.22 0.288 0.595 0.726 0.00149 0.00181 Wall time: 17729.54906749772 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 166 100 1.25 0.0513 0.228 0.206 0.27 0.428 0.569 0.00107 0.00142 166 172 1.14 0.0479 0.181 0.199 0.261 0.423 0.508 0.00106 0.00127 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 166 100 1.21 0.0473 0.263 0.198 0.259 0.591 0.611 0.00148 0.00153 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 166 17837.033 0.005 0.0533 0.546 1.61 0.21 0.275 0.707 0.881 0.00177 0.0022 ! Validation 166 17837.033 0.005 0.0476 0.244 1.19 0.199 0.26 0.478 0.588 0.00119 0.00147 Wall time: 17837.033303949982 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 167 100 1.44 0.0445 0.548 0.193 0.251 0.755 0.882 0.00189 0.00221 167 172 1.08 0.0414 0.253 0.186 0.243 0.476 0.6 0.00119 0.0015 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 167 100 1.15 0.0424 0.302 0.187 0.245 0.632 0.655 0.00158 0.00164 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 167 17944.129 0.005 0.045 0.646 1.55 0.193 0.253 0.745 0.959 0.00186 0.0024 ! Validation 167 17944.129 0.005 0.0417 0.318 1.15 0.186 0.243 0.538 0.673 0.00134 0.00168 Wall time: 17944.129674805794 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 168 100 0.948 0.0388 0.172 0.178 0.235 0.397 0.495 0.000992 0.00124 168 172 1.09 0.0353 0.385 0.172 0.224 0.62 0.739 0.00155 0.00185 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 168 100 1.47 0.0351 0.767 0.171 0.223 1.04 1.04 0.00259 0.00261 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 168 18051.271 0.005 0.0385 0.493 1.26 0.179 0.234 0.683 0.837 0.00171 0.00209 ! 
Validation 168 18051.271 0.005 0.0353 0.311 1.02 0.171 0.224 0.555 0.665 0.00139 0.00166 Wall time: 18051.2712896117 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 169 100 0.778 0.0328 0.122 0.164 0.216 0.333 0.417 0.000832 0.00104 169 172 1.29 0.0318 0.65 0.161 0.213 0.853 0.961 0.00213 0.0024 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 169 100 0.822 0.0317 0.189 0.162 0.212 0.497 0.518 0.00124 0.0013 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 169 18158.377 0.005 0.0348 0.575 1.27 0.17 0.223 0.705 0.904 0.00176 0.00226 ! Validation 169 18158.377 0.005 0.0318 0.241 0.876 0.162 0.213 0.469 0.585 0.00117 0.00146 Wall time: 18158.37736902898 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 170 100 0.843 0.0309 0.225 0.159 0.21 0.495 0.565 0.00124 0.00141 170 172 0.739 0.0295 0.149 0.155 0.205 0.373 0.461 0.000933 0.00115 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 170 100 2.39 0.0295 1.8 0.155 0.205 1.59 1.6 0.00399 0.004 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 170 18265.471 0.005 0.0311 0.48 1.1 0.16 0.21 0.663 0.826 0.00166 0.00207 ! Validation 170 18265.471 0.005 0.0292 0.715 1.3 0.155 0.204 0.921 1.01 0.0023 0.00252 Wall time: 18265.471090306994 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 171 100 0.761 0.0284 0.192 0.152 0.201 0.448 0.522 0.00112 0.00131 171 172 0.707 0.0288 0.131 0.153 0.202 0.36 0.431 0.000901 0.00108 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 171 100 0.627 0.0275 0.077 0.15 0.198 0.302 0.331 0.000754 0.000827 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 171 18372.843 0.005 0.0287 0.37 0.943 0.153 0.202 0.594 0.725 0.00148 0.00181 ! Validation 171 18372.843 0.005 0.0274 0.151 0.699 0.15 0.197 0.367 0.464 0.000918 0.00116 Wall time: 18372.843271029647 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 172 100 0.964 0.0272 0.421 0.149 0.196 0.64 0.773 0.0016 0.00193 172 172 1.04 0.0254 0.536 0.143 0.19 0.807 0.872 0.00202 0.00218 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 172 100 0.571 0.0257 0.0565 0.145 0.191 0.254 0.283 0.000635 0.000708 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 172 18479.916 0.005 0.027 0.369 0.908 0.148 0.196 0.573 0.724 0.00143 0.00181 ! Validation 172 18479.916 0.005 0.0259 0.739 1.26 0.145 0.192 0.95 1.02 0.00237 0.00256 Wall time: 18479.916819253005 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 173 100 0.61 0.0266 0.0769 0.146 0.195 0.25 0.331 0.000625 0.000826 173 172 0.902 0.0254 0.393 0.143 0.19 0.679 0.747 0.0017 0.00187 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 173 100 0.546 0.025 0.0468 0.143 0.188 0.227 0.258 0.000567 0.000645 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 173 18587.008 0.005 0.0258 0.399 0.915 0.144 0.191 0.609 0.753 0.00152 0.00188 ! 
Validation 173 18587.008 0.005 0.0252 0.506 1.01 0.143 0.189 0.761 0.848 0.0019 0.00212 Wall time: 18587.008145618718 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 174 100 0.746 0.0235 0.275 0.139 0.183 0.534 0.626 0.00134 0.00156 174 172 1.22 0.0237 0.749 0.138 0.183 0.972 1.03 0.00243 0.00258 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 174 100 0.899 0.0239 0.421 0.139 0.184 0.764 0.774 0.00191 0.00193 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 174 18694.146 0.005 0.0244 0.312 0.8 0.14 0.186 0.524 0.666 0.00131 0.00166 ! Validation 174 18694.146 0.005 0.0241 1.23 1.71 0.14 0.185 1.27 1.32 0.00316 0.0033 Wall time: 18694.14677881589 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 175 100 0.75 0.0236 0.278 0.138 0.183 0.552 0.628 0.00138 0.00157 175 172 1.28 0.0249 0.778 0.141 0.188 0.95 1.05 0.00237 0.00263 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 175 100 1.34 0.0225 0.886 0.135 0.179 1.11 1.12 0.00279 0.00281 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 175 18801.156 0.005 0.0238 0.36 0.837 0.138 0.184 0.585 0.715 0.00146 0.00179 ! Validation 175 18801.156 0.005 0.023 1.58 2.04 0.137 0.181 1.45 1.5 0.00363 0.00375 Wall time: 18801.155943365768 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 176 100 0.6 0.0246 0.108 0.14 0.187 0.338 0.392 0.000845 0.000981 176 172 0.614 0.0238 0.139 0.138 0.184 0.352 0.445 0.000881 0.00111 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 176 100 1.51 0.0218 1.08 0.133 0.176 1.23 1.24 0.00307 0.00309 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 176 18909.771 0.005 0.0233 0.382 0.848 0.137 0.182 0.601 0.737 0.0015 0.00184 ! Validation 176 18909.771 0.005 0.0221 0.572 1.01 0.134 0.177 0.833 0.901 0.00208 0.00225 Wall time: 18909.771393648814 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 177 100 0.571 0.0227 0.118 0.134 0.179 0.314 0.409 0.000785 0.00102 177 172 1.35 0.0205 0.937 0.128 0.171 1.13 1.15 0.00282 0.00289 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 177 100 0.712 0.021 0.292 0.13 0.173 0.628 0.644 0.00157 0.00161 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 177 19016.770 0.005 0.0223 0.325 0.771 0.134 0.178 0.545 0.679 0.00136 0.0017 ! Validation 177 19016.770 0.005 0.0214 0.645 1.07 0.131 0.174 0.872 0.957 0.00218 0.00239 Wall time: 19016.76991549274 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 178 100 1.16 0.0224 0.71 0.134 0.178 0.963 1 0.00241 0.00251 178 172 0.556 0.0204 0.147 0.127 0.17 0.382 0.458 0.000955 0.00114 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 178 100 0.631 0.0214 0.203 0.131 0.174 0.52 0.538 0.0013 0.00134 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 178 19124.242 0.005 0.0221 0.405 0.847 0.133 0.177 0.624 0.759 0.00156 0.0019 ! 
Validation 178 19124.242 0.005 0.0214 0.628 1.06 0.131 0.174 0.873 0.944 0.00218 0.00236 Wall time: 19124.242795256898 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 179 100 0.919 0.0205 0.51 0.128 0.17 0.787 0.851 0.00197 0.00213 179 172 0.503 0.0205 0.0926 0.128 0.171 0.316 0.363 0.000791 0.000907 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 179 100 0.554 0.0209 0.135 0.13 0.172 0.413 0.438 0.00103 0.0011 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 179 19231.330 0.005 0.021 0.272 0.692 0.13 0.173 0.502 0.622 0.00126 0.00156 ! Validation 179 19231.330 0.005 0.0212 0.471 0.894 0.131 0.173 0.722 0.818 0.00181 0.00205 Wall time: 19231.33071126882 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 180 100 0.533 0.0201 0.131 0.127 0.169 0.376 0.431 0.000939 0.00108 180 172 0.733 0.0205 0.323 0.128 0.171 0.562 0.677 0.0014 0.00169 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 180 100 1.44 0.02 1.04 0.127 0.169 1.21 1.21 0.00302 0.00304 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 180 19338.423 0.005 0.0205 0.293 0.703 0.128 0.171 0.522 0.645 0.00131 0.00161 ! Validation 180 19338.423 0.005 0.0201 0.381 0.783 0.127 0.169 0.66 0.736 0.00165 0.00184 Wall time: 19338.42354084272 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 181 100 0.479 0.021 0.0584 0.129 0.173 0.243 0.288 0.000609 0.00072 181 172 0.508 0.0196 0.117 0.126 0.167 0.327 0.407 0.000818 0.00102 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 181 100 0.445 0.0197 0.0511 0.126 0.167 0.242 0.269 0.000606 0.000674 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 181 19445.521 0.005 0.0211 0.417 0.838 0.13 0.173 0.623 0.77 0.00156 0.00192 ! Validation 181 19445.521 0.005 0.0199 0.221 0.618 0.126 0.168 0.461 0.561 0.00115 0.0014 Wall time: 19445.521586723626 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 182 100 0.455 0.019 0.0751 0.123 0.164 0.26 0.327 0.000651 0.000817 182 172 0.47 0.0193 0.0847 0.124 0.165 0.271 0.347 0.000677 0.000868 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 182 100 0.48 0.0188 0.103 0.123 0.164 0.357 0.383 0.000894 0.000958 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 182 19552.603 0.005 0.0198 0.232 0.628 0.126 0.168 0.458 0.575 0.00114 0.00144 ! Validation 182 19552.603 0.005 0.019 0.287 0.667 0.124 0.164 0.54 0.639 0.00135 0.0016 Wall time: 19552.602916997857 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 183 100 2.47 0.0195 2.09 0.125 0.166 1.67 1.72 0.00417 0.0043 183 172 1.14 0.0192 0.756 0.125 0.165 0.994 1.04 0.00248 0.00259 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 183 100 1.04 0.0195 0.647 0.126 0.166 0.949 0.959 0.00237 0.0024 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 183 19659.663 0.005 0.0196 0.364 0.757 0.125 0.167 0.569 0.719 0.00142 0.0018 ! 
Validation 183 19659.663 0.005 0.0197 0.15 0.544 0.126 0.167 0.379 0.461 0.000948 0.00115 Wall time: 19659.663101770915 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 184 100 0.45 0.0187 0.0765 0.122 0.163 0.26 0.33 0.000649 0.000824 184 172 0.492 0.0203 0.0861 0.127 0.17 0.276 0.35 0.00069 0.000874 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 184 100 0.408 0.0195 0.0168 0.125 0.167 0.134 0.154 0.000335 0.000386 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 184 19766.769 0.005 0.0197 0.36 0.754 0.125 0.167 0.567 0.715 0.00142 0.00179 ! Validation 184 19766.769 0.005 0.0193 0.142 0.528 0.124 0.166 0.364 0.448 0.00091 0.00112 Wall time: 19766.76959042484 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 185 100 0.659 0.0184 0.291 0.121 0.162 0.574 0.643 0.00144 0.00161 185 172 0.52 0.0193 0.134 0.124 0.166 0.363 0.436 0.000908 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 185 100 0.512 0.0185 0.141 0.123 0.162 0.426 0.448 0.00107 0.00112 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 185 19874.063 0.005 0.0191 0.288 0.669 0.123 0.165 0.514 0.64 0.00129 0.0016 ! Validation 185 19874.063 0.005 0.0187 0.358 0.732 0.122 0.163 0.626 0.714 0.00156 0.00178 Wall time: 19874.063476264942 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 186 100 0.614 0.0183 0.248 0.122 0.161 0.511 0.593 0.00128 0.00148 186 172 0.697 0.018 0.337 0.12 0.16 0.629 0.692 0.00157 0.00173 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 186 100 0.713 0.0173 0.367 0.118 0.157 0.707 0.722 0.00177 0.0018 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 186 19981.134 0.005 0.0183 0.206 0.573 0.121 0.161 0.433 0.542 0.00108 0.00135 ! Validation 186 19981.134 0.005 0.0175 0.205 0.555 0.119 0.158 0.468 0.539 0.00117 0.00135 Wall time: 19981.134123038035 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 187 100 0.514 0.0177 0.16 0.119 0.159 0.424 0.476 0.00106 0.00119 187 172 0.506 0.018 0.146 0.12 0.16 0.352 0.455 0.000881 0.00114 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 187 100 0.627 0.0172 0.283 0.118 0.157 0.621 0.634 0.00155 0.00158 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 187 20088.514 0.005 0.0179 0.255 0.614 0.12 0.16 0.487 0.602 0.00122 0.00151 ! Validation 187 20088.514 0.005 0.0173 0.428 0.775 0.118 0.157 0.695 0.78 0.00174 0.00195 Wall time: 20088.514338745736 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 188 100 0.384 0.0173 0.039 0.117 0.157 0.19 0.235 0.000475 0.000589 188 172 0.495 0.0181 0.132 0.12 0.161 0.371 0.434 0.000926 0.00108 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 188 100 0.373 0.0177 0.0185 0.12 0.159 0.146 0.162 0.000366 0.000405 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 188 20196.161 0.005 0.0175 0.279 0.629 0.118 0.158 0.504 0.63 0.00126 0.00157 ! 
Validation 188 20196.161 0.005 0.0175 0.185 0.536 0.119 0.158 0.414 0.513 0.00104 0.00128 Wall time: 20196.16174420202 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 189 100 0.535 0.0169 0.198 0.117 0.155 0.475 0.53 0.00119 0.00132 189 172 0.46 0.0174 0.111 0.118 0.157 0.34 0.398 0.000851 0.000995 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 189 100 0.597 0.0165 0.266 0.116 0.153 0.596 0.615 0.00149 0.00154 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 189 20303.233 0.005 0.0176 0.284 0.636 0.118 0.158 0.499 0.636 0.00125 0.00159 ! Validation 189 20303.233 0.005 0.0167 0.205 0.54 0.116 0.154 0.459 0.54 0.00115 0.00135 Wall time: 20303.232991661876 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 190 100 0.398 0.0168 0.0611 0.116 0.155 0.236 0.295 0.000591 0.000737 190 172 0.402 0.0156 0.0891 0.112 0.149 0.283 0.356 0.000707 0.00089 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 190 100 0.357 0.0163 0.0303 0.115 0.152 0.156 0.208 0.00039 0.000519 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 190 20410.299 0.005 0.0172 0.263 0.606 0.117 0.156 0.486 0.611 0.00121 0.00153 ! Validation 190 20410.299 0.005 0.0164 0.122 0.45 0.115 0.153 0.323 0.417 0.000807 0.00104 Wall time: 20410.299378089607 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 191 100 0.504 0.0174 0.155 0.118 0.157 0.366 0.469 0.000916 0.00117 191 172 0.406 0.0172 0.0617 0.117 0.157 0.246 0.296 0.000614 0.00074 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 191 100 0.533 0.0171 0.192 0.117 0.156 0.506 0.522 0.00126 0.00131 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 191 20517.706 0.005 0.0175 0.372 0.723 0.118 0.158 0.574 0.727 0.00144 0.00182 ! Validation 191 20517.706 0.005 0.0169 0.0963 0.434 0.116 0.155 0.302 0.37 0.000756 0.000925 Wall time: 20517.706371756736 ! Best model 191 0.434 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 192 100 0.45 0.0155 0.141 0.112 0.148 0.382 0.447 0.000955 0.00112 192 172 0.772 0.0189 0.393 0.123 0.164 0.636 0.747 0.00159 0.00187 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 192 100 0.432 0.0184 0.0642 0.122 0.162 0.252 0.302 0.00063 0.000755 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 192 20624.829 0.005 0.017 0.302 0.641 0.116 0.155 0.515 0.655 0.00129 0.00164 ! Validation 192 20624.829 0.005 0.0182 0.11 0.473 0.121 0.161 0.306 0.395 0.000765 0.000989 Wall time: 20624.8293136009 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 193 100 0.365 0.0162 0.0404 0.114 0.152 0.198 0.24 0.000495 0.000599 193 172 0.988 0.0187 0.615 0.122 0.163 0.775 0.935 0.00194 0.00234 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 193 100 0.585 0.0174 0.238 0.118 0.157 0.564 0.582 0.00141 0.00145 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 193 20732.003 0.005 0.0164 0.214 0.543 0.114 0.153 0.421 0.551 0.00105 0.00138 ! 
Validation 193 20732.003 0.005 0.0174 0.213 0.561 0.118 0.157 0.461 0.551 0.00115 0.00138 Wall time: 20732.003889551852 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 194 100 0.535 0.0166 0.202 0.115 0.154 0.466 0.536 0.00117 0.00134 194 172 0.448 0.0152 0.144 0.111 0.147 0.39 0.453 0.000975 0.00113 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 194 100 0.415 0.0153 0.11 0.111 0.147 0.365 0.395 0.000914 0.000988 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 194 20839.113 0.005 0.0161 0.232 0.554 0.113 0.151 0.464 0.574 0.00116 0.00143 ! Validation 194 20839.113 0.005 0.0155 0.207 0.517 0.112 0.148 0.452 0.543 0.00113 0.00136 Wall time: 20839.113701303024 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 195 100 0.643 0.0165 0.313 0.115 0.153 0.6 0.667 0.0015 0.00167 195 172 0.379 0.0147 0.0851 0.109 0.144 0.302 0.348 0.000754 0.00087 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 195 100 0.492 0.0152 0.189 0.111 0.147 0.5 0.518 0.00125 0.00129 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 195 20946.303 0.005 0.0159 0.27 0.588 0.113 0.15 0.498 0.62 0.00124 0.00155 ! Validation 195 20946.303 0.005 0.0152 0.0871 0.392 0.111 0.147 0.287 0.352 0.000719 0.000879 Wall time: 20946.303547536023 ! Best model 195 0.392 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 196 100 0.642 0.0153 0.336 0.11 0.147 0.651 0.691 0.00163 0.00173 196 172 0.854 0.0148 0.558 0.109 0.145 0.852 0.891 0.00213 0.00223 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 196 100 0.83 0.0144 0.541 0.109 0.143 0.869 0.877 0.00217 0.00219 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 196 21053.391 0.005 0.0151 0.177 0.479 0.11 0.146 0.401 0.501 0.001 0.00125 ! Validation 196 21053.391 0.005 0.0145 0.256 0.546 0.108 0.144 0.539 0.603 0.00135 0.00151 Wall time: 21053.391204152722 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 197 100 0.609 0.016 0.288 0.113 0.151 0.542 0.64 0.00136 0.0016 197 172 0.397 0.0144 0.11 0.107 0.143 0.324 0.395 0.000811 0.000989 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 197 100 0.303 0.0137 0.0287 0.106 0.139 0.157 0.202 0.000394 0.000505 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 197 21160.465 0.005 0.015 0.218 0.517 0.109 0.146 0.44 0.556 0.0011 0.00139 ! Validation 197 21160.465 0.005 0.0141 0.083 0.365 0.107 0.141 0.271 0.343 0.000679 0.000858 Wall time: 21160.465412549675 ! Best model 197 0.365 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 198 100 0.951 0.0162 0.627 0.114 0.152 0.89 0.944 0.00222 0.00236 198 172 0.328 0.0135 0.0582 0.104 0.138 0.223 0.288 0.000558 0.000719 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 198 100 0.355 0.0134 0.0874 0.105 0.138 0.326 0.352 0.000816 0.000881 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 198 21267.560 0.005 0.0148 0.218 0.513 0.109 0.145 0.442 0.557 0.0011 0.00139 ! Validation 198 21267.560 0.005 0.0137 0.0696 0.343 0.105 0.139 0.255 0.314 0.000638 0.000786 Wall time: 21267.56040966697 ! 
Best model 198 0.343 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 199 100 0.333 0.0143 0.0464 0.107 0.143 0.197 0.257 0.000493 0.000642 199 172 0.47 0.0134 0.201 0.104 0.138 0.475 0.535 0.00119 0.00134 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 199 100 0.386 0.0129 0.127 0.103 0.136 0.399 0.425 0.000999 0.00106 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 199 21374.657 0.005 0.0142 0.207 0.491 0.107 0.142 0.432 0.542 0.00108 0.00135 ! Validation 199 21374.657 0.005 0.0135 0.0817 0.353 0.105 0.139 0.28 0.341 0.0007 0.000852 Wall time: 21374.657722952776 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 200 100 0.338 0.0149 0.0404 0.109 0.145 0.187 0.239 0.000468 0.000599 200 172 0.678 0.0137 0.405 0.105 0.14 0.712 0.758 0.00178 0.0019 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 200 100 0.648 0.0134 0.38 0.105 0.138 0.726 0.734 0.00181 0.00184 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 200 21481.741 0.005 0.0138 0.205 0.481 0.105 0.14 0.434 0.54 0.00109 0.00135 ! Validation 200 21481.741 0.005 0.0135 0.942 1.21 0.105 0.139 1.11 1.16 0.00277 0.00289 Wall time: 21481.741052677855 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 201 100 0.463 0.014 0.183 0.106 0.141 0.448 0.51 0.00112 0.00127 201 172 0.297 0.0127 0.0422 0.101 0.135 0.184 0.245 0.00046 0.000612 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 201 100 0.319 0.0126 0.0661 0.102 0.134 0.29 0.307 0.000725 0.000766 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 201 21588.822 0.005 0.0138 0.231 0.507 0.105 0.14 0.464 0.573 0.00116 0.00143 ! Validation 201 21588.822 0.005 0.0129 0.0772 0.335 0.102 0.135 0.269 0.331 0.000672 0.000828 Wall time: 21588.822548762895 ! Best model 201 0.335 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 202 100 0.296 0.0133 0.0289 0.103 0.138 0.169 0.203 0.000422 0.000507 202 172 0.289 0.0121 0.047 0.0986 0.131 0.216 0.258 0.000539 0.000646 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 202 100 0.567 0.0124 0.319 0.101 0.133 0.666 0.673 0.00166 0.00168 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 202 21696.540 0.005 0.0135 0.229 0.498 0.104 0.138 0.456 0.57 0.00114 0.00143 ! Validation 202 21696.540 0.005 0.0126 0.121 0.373 0.101 0.134 0.35 0.415 0.000875 0.00104 Wall time: 21696.540520532988 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 203 100 0.322 0.0135 0.0529 0.104 0.138 0.227 0.274 0.000568 0.000686 203 172 0.545 0.0134 0.277 0.105 0.138 0.595 0.628 0.00149 0.00157 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 203 100 0.49 0.0135 0.22 0.106 0.139 0.544 0.559 0.00136 0.0014 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 203 21803.594 0.005 0.0133 0.231 0.498 0.104 0.138 0.445 0.572 0.00111 0.00143 ! 
Validation 203 21803.594 0.005 0.0137 0.139 0.412 0.106 0.139 0.361 0.444 0.000902 0.00111 Wall time: 21803.59465501085 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 204 100 0.727 0.013 0.466 0.102 0.136 0.78 0.814 0.00195 0.00204 204 172 0.437 0.0116 0.205 0.0971 0.128 0.476 0.539 0.00119 0.00135 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 204 100 0.358 0.0114 0.13 0.0972 0.127 0.418 0.43 0.00104 0.00107 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 204 21910.651 0.005 0.0128 0.184 0.44 0.101 0.135 0.414 0.511 0.00104 0.00128 ! Validation 204 21910.651 0.005 0.012 0.176 0.415 0.0987 0.13 0.428 0.5 0.00107 0.00125 Wall time: 21910.65125566069 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 205 100 0.291 0.0126 0.0393 0.101 0.134 0.191 0.236 0.000477 0.000591 205 172 0.534 0.0126 0.282 0.1 0.134 0.582 0.633 0.00146 0.00158 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 205 100 0.377 0.0113 0.151 0.0968 0.127 0.449 0.463 0.00112 0.00116 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 205 22017.716 0.005 0.0128 0.218 0.474 0.101 0.135 0.438 0.557 0.0011 0.00139 ! Validation 205 22017.716 0.005 0.0116 0.342 0.575 0.0973 0.129 0.632 0.697 0.00158 0.00174 Wall time: 22017.71667331364 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 206 100 0.893 0.0132 0.628 0.104 0.137 0.891 0.945 0.00223 0.00236 206 172 0.385 0.0119 0.147 0.0982 0.13 0.353 0.458 0.000883 0.00114 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 206 100 0.336 0.0116 0.105 0.0983 0.128 0.367 0.386 0.000918 0.000966 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 206 22130.353 0.005 0.0126 0.236 0.487 0.101 0.134 0.463 0.579 0.00116 0.00145 ! Validation 206 22130.353 0.005 0.0118 0.235 0.47 0.0979 0.129 0.508 0.577 0.00127 0.00144 Wall time: 22130.353010156658 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 207 100 0.294 0.011 0.0737 0.0944 0.125 0.253 0.324 0.000631 0.000809 207 172 0.333 0.0119 0.0949 0.0975 0.13 0.288 0.367 0.000721 0.000918 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 207 100 0.308 0.0108 0.0916 0.0949 0.124 0.351 0.361 0.000877 0.000902 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 207 22237.396 0.005 0.0117 0.152 0.387 0.0971 0.129 0.375 0.465 0.000937 0.00116 ! Validation 207 22237.396 0.005 0.0111 0.0677 0.289 0.0951 0.125 0.248 0.31 0.000619 0.000775 Wall time: 22237.39602380572 ! Best model 207 0.289 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 208 100 24.5 0.816 8.12 0.806 1.08 2.52 3.4 0.0063 0.00849 208 172 4.05 0.164 0.764 0.365 0.484 0.872 1.04 0.00218 0.0026 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 208 100 3.1 0.15 0.104 0.35 0.461 0.353 0.384 0.000882 0.00096 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 208 22344.471 0.005 0.346 13.6 20.6 0.436 0.702 2.46 4.41 0.00615 0.011 ! 
Validation 208 22344.471 0.005 0.157 0.831 3.97 0.358 0.473 0.888 1.09 0.00222 0.00272 Wall time: 22344.470980270766 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 209 100 2.85 0.0672 1.5 0.234 0.309 1.33 1.46 0.00334 0.00366 209 172 1.75 0.0587 0.577 0.219 0.289 0.781 0.905 0.00195 0.00226 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 209 100 2.18 0.0604 0.969 0.225 0.293 1.17 1.17 0.00292 0.00293 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 209 22451.796 0.005 0.0832 0.8 2.46 0.258 0.344 0.845 1.07 0.00211 0.00267 ! Validation 209 22451.796 0.005 0.0577 2.31 3.46 0.218 0.286 1.69 1.81 0.00423 0.00453 Wall time: 22451.796635614708 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 210 100 1.94 0.0373 1.19 0.174 0.23 1.22 1.3 0.00304 0.00325 210 172 0.991 0.0315 0.361 0.159 0.212 0.633 0.716 0.00158 0.00179 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 210 100 0.665 0.0315 0.0349 0.159 0.212 0.173 0.223 0.000434 0.000557 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 210 22558.856 0.005 0.0403 0.43 1.24 0.181 0.239 0.626 0.782 0.00157 0.00195 ! Validation 210 22558.856 0.005 0.03 0.151 0.752 0.156 0.207 0.359 0.464 0.000898 0.00116 Wall time: 22558.856562052853 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 211 100 0.677 0.026 0.158 0.145 0.192 0.379 0.473 0.000949 0.00118 211 172 0.69 0.0258 0.174 0.143 0.192 0.423 0.497 0.00106 0.00124 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 211 100 1.65 0.025 1.15 0.141 0.188 1.27 1.28 0.00318 0.0032 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 211 22666.055 0.005 0.0276 0.394 0.945 0.149 0.198 0.588 0.748 0.00147 0.00187 ! Validation 211 22666.055 0.005 0.0242 0.834 1.32 0.139 0.186 1.02 1.09 0.00255 0.00272 Wall time: 22666.055174370762 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 212 100 0.834 0.0235 0.365 0.136 0.183 0.657 0.72 0.00164 0.0018 212 172 0.613 0.0193 0.226 0.124 0.166 0.463 0.567 0.00116 0.00142 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 212 100 0.462 0.0213 0.0364 0.13 0.174 0.179 0.227 0.000446 0.000568 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 212 22773.082 0.005 0.0225 0.25 0.701 0.134 0.179 0.476 0.596 0.00119 0.00149 ! Validation 212 22773.082 0.005 0.0204 0.104 0.512 0.128 0.17 0.296 0.385 0.00074 0.000961 Wall time: 22773.082612345926 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 213 100 1.22 0.0193 0.836 0.124 0.165 1.06 1.09 0.00266 0.00273 213 172 0.448 0.0188 0.0714 0.122 0.164 0.279 0.319 0.000698 0.000796 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 213 100 0.412 0.0192 0.0284 0.123 0.165 0.142 0.201 0.000355 0.000503 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 213 22880.122 0.005 0.0203 0.311 0.717 0.127 0.17 0.532 0.665 0.00133 0.00166 ! 
Validation 213 22880.122 0.005 0.0186 0.0783 0.45 0.122 0.163 0.261 0.334 0.000653 0.000834 Wall time: 22880.122683015652 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 214 100 0.462 0.017 0.122 0.117 0.155 0.341 0.417 0.000853 0.00104 214 172 0.384 0.0168 0.0477 0.116 0.155 0.217 0.26 0.000542 0.000651 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 214 100 0.359 0.0173 0.013 0.117 0.157 0.124 0.136 0.00031 0.000339 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 214 22987.314 0.005 0.0182 0.227 0.591 0.12 0.161 0.458 0.568 0.00114 0.00142 ! Validation 214 22987.314 0.005 0.0169 0.101 0.439 0.116 0.155 0.302 0.379 0.000755 0.000948 Wall time: 22987.314175766893 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 215 100 0.617 0.0168 0.28 0.116 0.155 0.591 0.631 0.00148 0.00158 215 172 0.414 0.0164 0.0862 0.114 0.153 0.28 0.35 0.0007 0.000875 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 215 100 0.467 0.0162 0.143 0.114 0.152 0.432 0.45 0.00108 0.00113 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 215 23094.310 0.005 0.0169 0.258 0.596 0.116 0.155 0.475 0.606 0.00119 0.00151 ! Validation 215 23094.310 0.005 0.0159 0.0852 0.404 0.113 0.151 0.288 0.348 0.000719 0.00087 Wall time: 23094.31017630687 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 216 100 0.395 0.0167 0.0609 0.115 0.154 0.224 0.294 0.000559 0.000735 216 172 1.04 0.0149 0.745 0.109 0.145 1 1.03 0.00251 0.00257 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 216 100 0.519 0.015 0.22 0.109 0.146 0.542 0.559 0.00135 0.0014 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 216 23201.294 0.005 0.0159 0.224 0.542 0.112 0.15 0.454 0.563 0.00113 0.00141 ! Validation 216 23201.294 0.005 0.015 0.274 0.574 0.11 0.146 0.547 0.624 0.00137 0.00156 Wall time: 23201.294764601626 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 217 100 0.384 0.0139 0.106 0.105 0.141 0.32 0.388 0.0008 0.000971 217 172 0.337 0.0137 0.0639 0.105 0.139 0.23 0.301 0.000576 0.000753 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 217 100 0.322 0.0137 0.0489 0.105 0.139 0.234 0.264 0.000585 0.000659 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 217 23308.285 0.005 0.0147 0.157 0.452 0.108 0.145 0.383 0.472 0.000958 0.00118 ! Validation 217 23308.285 0.005 0.0138 0.0714 0.347 0.105 0.14 0.263 0.319 0.000658 0.000797 Wall time: 23308.285465969704 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 218 100 1.1 0.0142 0.821 0.107 0.142 1.05 1.08 0.00263 0.0027 218 172 0.38 0.0141 0.0973 0.106 0.142 0.279 0.372 0.000698 0.00093 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 218 100 0.381 0.0134 0.112 0.104 0.138 0.38 0.4 0.00095 0.000999 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 218 23415.272 0.005 0.0143 0.249 0.536 0.107 0.143 0.478 0.595 0.00119 0.00149 ! 
Validation 218 23415.272 0.005 0.0135 0.102 0.371 0.104 0.138 0.323 0.38 0.000808 0.000951 Wall time: 23415.272080291994 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 219 100 0.566 0.0142 0.283 0.107 0.142 0.586 0.634 0.00146 0.00159 219 172 0.405 0.0141 0.122 0.107 0.142 0.362 0.417 0.000904 0.00104 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 219 100 0.409 0.0141 0.127 0.107 0.142 0.405 0.425 0.00101 0.00106 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 219 23522.256 0.005 0.0138 0.266 0.541 0.105 0.14 0.486 0.615 0.00121 0.00154 ! Validation 219 23522.256 0.005 0.0139 0.338 0.617 0.106 0.141 0.62 0.693 0.00155 0.00173 Wall time: 23522.255951256026 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 220 100 0.534 0.013 0.274 0.102 0.136 0.57 0.624 0.00143 0.00156 220 172 0.279 0.0123 0.033 0.0993 0.132 0.174 0.217 0.000435 0.000542 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 220 100 0.266 0.0123 0.0197 0.1 0.132 0.118 0.167 0.000295 0.000418 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 220 23629.252 0.005 0.0135 0.18 0.449 0.104 0.138 0.409 0.506 0.00102 0.00127 ! Validation 220 23629.252 0.005 0.0125 0.058 0.308 0.101 0.133 0.232 0.287 0.000579 0.000718 Wall time: 23629.252243455034 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 221 100 0.339 0.0125 0.0882 0.0997 0.133 0.29 0.354 0.000725 0.000885 221 172 1.28 0.0157 0.962 0.112 0.149 1.12 1.17 0.00281 0.00292 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 221 100 0.733 0.0144 0.445 0.109 0.143 0.785 0.795 0.00196 0.00199 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 221 23736.233 0.005 0.0127 0.224 0.479 0.101 0.135 0.44 0.564 0.0011 0.00141 ! Validation 221 23736.233 0.005 0.0142 0.506 0.789 0.108 0.142 0.793 0.848 0.00198 0.00212 Wall time: 23736.23366462998 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 222 100 0.361 0.0116 0.129 0.0969 0.128 0.369 0.428 0.000922 0.00107 222 172 0.285 0.0119 0.0467 0.0975 0.13 0.212 0.258 0.000529 0.000644 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 222 100 0.442 0.012 0.202 0.0991 0.131 0.52 0.536 0.0013 0.00134 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 222 23843.222 0.005 0.0129 0.225 0.484 0.102 0.135 0.459 0.566 0.00115 0.00142 ! Validation 222 23843.222 0.005 0.0121 0.454 0.697 0.0993 0.131 0.744 0.804 0.00186 0.00201 Wall time: 23843.22284398973 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 223 100 0.533 0.0132 0.269 0.103 0.137 0.566 0.618 0.00141 0.00154 223 172 0.353 0.0124 0.105 0.0993 0.133 0.316 0.387 0.000789 0.000968 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 223 100 0.562 0.0115 0.333 0.0968 0.128 0.678 0.687 0.00169 0.00172 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 223 23950.212 0.005 0.0122 0.166 0.41 0.0989 0.132 0.389 0.486 0.000972 0.00122 ! 
Validation 223 23950.212 0.005 0.0117 0.152 0.386 0.0973 0.129 0.398 0.465 0.000995 0.00116 Wall time: 23950.21234855475 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 224 100 0.388 0.0113 0.162 0.0953 0.127 0.423 0.48 0.00106 0.0012 224 172 0.264 0.0116 0.0317 0.0962 0.129 0.173 0.212 0.000432 0.000531 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 224 100 0.23 0.0106 0.0185 0.0932 0.123 0.117 0.162 0.000292 0.000405 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 224 24057.451 0.005 0.0117 0.165 0.399 0.097 0.129 0.385 0.484 0.000962 0.00121 ! Validation 224 24057.451 0.005 0.011 0.0658 0.285 0.0944 0.125 0.252 0.306 0.00063 0.000764 Wall time: 24057.450911131687 ! Best model 224 0.285 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 225 100 0.305 0.012 0.0646 0.0977 0.131 0.219 0.303 0.000547 0.000758 225 172 0.453 0.0116 0.221 0.0965 0.128 0.469 0.56 0.00117 0.0014 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 225 100 0.345 0.0105 0.136 0.0929 0.122 0.424 0.439 0.00106 0.0011 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 225 24164.502 0.005 0.0114 0.189 0.417 0.0957 0.127 0.423 0.519 0.00106 0.0013 ! Validation 225 24164.502 0.005 0.0109 0.13 0.347 0.0938 0.124 0.375 0.43 0.000938 0.00108 Wall time: 24164.502760953736 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 226 100 0.293 0.0114 0.0646 0.0957 0.127 0.259 0.303 0.000649 0.000757 226 172 0.353 0.012 0.112 0.0978 0.131 0.328 0.398 0.000821 0.000996 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 226 100 0.594 0.0103 0.387 0.0921 0.121 0.734 0.742 0.00183 0.00185 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 226 24271.536 0.005 0.0112 0.191 0.416 0.095 0.126 0.419 0.522 0.00105 0.0013 ! Validation 226 24271.536 0.005 0.0106 0.399 0.611 0.0926 0.123 0.704 0.753 0.00176 0.00188 Wall time: 24271.53682042472 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 227 100 0.287 0.0119 0.0484 0.0979 0.13 0.206 0.262 0.000514 0.000656 227 172 0.277 0.0106 0.0648 0.0924 0.123 0.263 0.303 0.000657 0.000759 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 227 100 0.341 0.0098 0.145 0.0897 0.118 0.44 0.454 0.0011 0.00114 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 227 24382.675 0.005 0.011 0.177 0.396 0.094 0.125 0.396 0.501 0.000989 0.00125 ! Validation 227 24382.675 0.005 0.0103 0.194 0.399 0.0914 0.121 0.453 0.525 0.00113 0.00131 Wall time: 24382.67531672772 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 228 100 1.06 0.0107 0.846 0.0931 0.123 1.07 1.1 0.00268 0.00274 228 172 0.263 0.0101 0.0606 0.0906 0.12 0.235 0.293 0.000588 0.000733 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 228 100 0.404 0.00978 0.208 0.0894 0.118 0.534 0.544 0.00134 0.00136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 228 24489.723 0.005 0.0106 0.166 0.378 0.0924 0.123 0.387 0.485 0.000967 0.00121 ! 
Validation 228 24489.723 0.005 0.0102 0.209 0.412 0.091 0.12 0.474 0.545 0.00119 0.00136 Wall time: 24489.723243538756 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 229 100 0.225 0.00951 0.0345 0.0874 0.116 0.182 0.222 0.000455 0.000554 229 172 0.276 0.0106 0.0634 0.093 0.123 0.226 0.3 0.000565 0.00075 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 229 100 0.2 0.00951 0.00941 0.0887 0.116 0.0821 0.116 0.000205 0.000289 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 229 24596.955 0.005 0.0103 0.155 0.361 0.091 0.121 0.381 0.47 0.000952 0.00117 ! Validation 229 24596.955 0.005 0.00977 0.064 0.259 0.0891 0.118 0.239 0.302 0.000598 0.000754 Wall time: 24596.95578536298 ! Best model 229 0.259 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 230 100 0.315 0.0101 0.112 0.0907 0.12 0.34 0.399 0.00085 0.000997 230 172 0.268 0.00984 0.0709 0.0886 0.118 0.251 0.317 0.000628 0.000794 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 230 100 0.392 0.00914 0.209 0.0865 0.114 0.537 0.546 0.00134 0.00136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 230 24704.109 0.005 0.01 0.146 0.346 0.0897 0.119 0.367 0.456 0.000917 0.00114 ! Validation 230 24704.109 0.005 0.00945 0.26 0.449 0.0878 0.116 0.554 0.607 0.00139 0.00152 Wall time: 24704.109883946832 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 231 100 0.371 0.00978 0.175 0.0884 0.118 0.456 0.499 0.00114 0.00125 231 172 0.222 0.00899 0.0427 0.0856 0.113 0.211 0.246 0.000527 0.000616 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 231 100 0.223 0.00894 0.0444 0.0855 0.113 0.236 0.251 0.000589 0.000628 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 231 24811.186 0.005 0.0098 0.158 0.354 0.0888 0.118 0.377 0.474 0.000942 0.00118 ! Validation 231 24811.186 0.005 0.00929 0.0582 0.244 0.0871 0.115 0.239 0.288 0.000597 0.000719 Wall time: 24811.185950696003 ! Best model 231 0.244 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 232 100 0.257 0.00998 0.0575 0.0894 0.119 0.221 0.286 0.000553 0.000714 232 172 0.222 0.00903 0.0416 0.0856 0.113 0.188 0.243 0.00047 0.000608 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 232 100 0.203 0.00872 0.0289 0.0851 0.111 0.186 0.203 0.000465 0.000507 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 232 24921.670 0.005 0.00998 0.186 0.385 0.0896 0.119 0.377 0.514 0.000942 0.00128 ! Validation 232 24921.670 0.005 0.00906 0.0546 0.236 0.086 0.113 0.217 0.279 0.000543 0.000696 Wall time: 24921.67057534773 ! Best model 232 0.236 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 233 100 0.392 0.00965 0.199 0.0882 0.117 0.471 0.532 0.00118 0.00133 233 172 0.334 0.0087 0.16 0.0838 0.111 0.42 0.476 0.00105 0.00119 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 233 100 0.285 0.00844 0.116 0.0835 0.11 0.392 0.406 0.00098 0.00102 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 233 25028.741 0.005 0.00947 0.158 0.348 0.0872 0.116 0.363 0.474 0.000907 0.00119 ! 
Validation 233 25028.741 0.005 0.00884 0.12 0.297 0.0849 0.112 0.364 0.414 0.000909 0.00103 Wall time: 25028.741461738013 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 234 100 0.267 0.00864 0.094 0.0838 0.111 0.316 0.366 0.000789 0.000914 234 172 0.453 0.00953 0.263 0.0877 0.116 0.571 0.611 0.00143 0.00153 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 234 100 0.429 0.00903 0.249 0.0861 0.113 0.586 0.595 0.00147 0.00149 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 234 25135.806 0.005 0.0092 0.141 0.325 0.086 0.114 0.355 0.448 0.000887 0.00112 ! Validation 234 25135.806 0.005 0.00963 0.27 0.462 0.0886 0.117 0.558 0.619 0.0014 0.00155 Wall time: 25135.806025246624 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 235 100 0.212 0.00902 0.0311 0.0854 0.113 0.168 0.21 0.00042 0.000526 235 172 0.233 0.00875 0.0585 0.0838 0.111 0.233 0.288 0.000582 0.000721 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 235 100 0.193 0.00838 0.025 0.0832 0.109 0.177 0.189 0.000443 0.000472 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 235 25242.869 0.005 0.0092 0.152 0.336 0.0861 0.114 0.377 0.465 0.000942 0.00116 ! Validation 235 25242.869 0.005 0.00863 0.129 0.302 0.084 0.111 0.359 0.429 0.000897 0.00107 Wall time: 25242.869696064852 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 236 100 0.203 0.00819 0.0393 0.0813 0.108 0.183 0.236 0.000458 0.00059 236 172 0.573 0.00856 0.402 0.0831 0.11 0.717 0.756 0.00179 0.00189 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 236 100 0.839 0.00824 0.674 0.0825 0.108 0.977 0.979 0.00244 0.00245 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 236 25349.932 0.005 0.00889 0.147 0.325 0.0845 0.112 0.369 0.457 0.000923 0.00114 ! Validation 236 25349.932 0.005 0.00846 0.212 0.381 0.0831 0.11 0.472 0.549 0.00118 0.00137 Wall time: 25349.932594651822 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 237 100 0.201 0.0088 0.0254 0.083 0.112 0.164 0.19 0.00041 0.000475 237 172 0.249 0.0102 0.0452 0.0896 0.12 0.198 0.253 0.000495 0.000633 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 237 100 0.205 0.00779 0.0492 0.08 0.105 0.26 0.264 0.00065 0.000661 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 237 25456.991 0.005 0.00882 0.149 0.326 0.0842 0.112 0.355 0.461 0.000886 0.00115 ! Validation 237 25456.991 0.005 0.00828 0.0486 0.214 0.0821 0.109 0.211 0.263 0.000528 0.000657 Wall time: 25456.991822046693 ! Best model 237 0.214 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 238 100 0.219 0.00774 0.0642 0.0788 0.105 0.258 0.302 0.000645 0.000755 238 172 0.199 0.00765 0.0461 0.0788 0.104 0.218 0.256 0.000544 0.00064 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 238 100 0.285 0.00737 0.138 0.0778 0.102 0.439 0.443 0.0011 0.00111 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 238 25564.262 0.005 0.00829 0.101 0.267 0.0815 0.109 0.302 0.379 0.000755 0.000947 ! 
Validation 238 25564.262 0.005 0.00778 0.112 0.268 0.0795 0.105 0.352 0.4 0.00088 0.000999 Wall time: 25564.262175271753 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 239 100 0.262 0.00829 0.0965 0.0814 0.109 0.348 0.37 0.00087 0.000926 239 172 0.243 0.00822 0.0787 0.0817 0.108 0.285 0.335 0.000714 0.000836 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 239 100 0.193 0.00785 0.0358 0.0804 0.106 0.198 0.226 0.000496 0.000564 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 239 25671.566 0.005 0.00843 0.162 0.331 0.0823 0.109 0.387 0.481 0.000966 0.0012 ! Validation 239 25671.566 0.005 0.00805 0.0702 0.231 0.0809 0.107 0.243 0.316 0.000606 0.000789 Wall time: 25671.566630120855 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 240 100 0.512 0.00793 0.354 0.0796 0.106 0.677 0.709 0.00169 0.00177 240 172 0.249 0.00812 0.0863 0.0807 0.107 0.299 0.35 0.000747 0.000876 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 240 100 0.176 0.00742 0.0274 0.0783 0.103 0.176 0.197 0.000439 0.000493 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 240 25778.710 0.005 0.00801 0.101 0.261 0.0801 0.107 0.303 0.379 0.000758 0.000946 ! Validation 240 25778.710 0.005 0.00781 0.152 0.308 0.0799 0.105 0.394 0.465 0.000984 0.00116 Wall time: 25778.710424905643 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 241 100 0.212 0.00783 0.0551 0.079 0.106 0.238 0.28 0.000595 0.0007 241 172 0.197 0.00824 0.0325 0.0815 0.108 0.194 0.215 0.000486 0.000538 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 241 100 0.173 0.00692 0.0344 0.0755 0.0991 0.215 0.221 0.000537 0.000553 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 241 25885.787 0.005 0.00788 0.125 0.283 0.0795 0.106 0.337 0.422 0.000844 0.00106 ! Validation 241 25885.787 0.005 0.00763 0.178 0.331 0.0789 0.104 0.439 0.503 0.0011 0.00126 Wall time: 25885.786940664053 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 242 100 0.238 0.00736 0.0905 0.0772 0.102 0.299 0.359 0.000746 0.000896 242 172 0.297 0.00807 0.136 0.0799 0.107 0.39 0.44 0.000974 0.0011 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 242 100 0.223 0.00668 0.0899 0.0742 0.0974 0.354 0.357 0.000885 0.000894 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 242 25992.997 0.005 0.00789 0.121 0.279 0.0795 0.106 0.323 0.415 0.000807 0.00104 ! Validation 242 25992.997 0.005 0.00712 0.0908 0.233 0.076 0.101 0.31 0.359 0.000775 0.000898 Wall time: 25992.997114599682 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 243 100 0.174 0.00709 0.0317 0.0756 0.1 0.179 0.212 0.000448 0.000531 243 172 0.212 0.00726 0.0666 0.0767 0.102 0.245 0.308 0.000613 0.000769 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 243 100 0.268 0.00676 0.133 0.0743 0.098 0.432 0.434 0.00108 0.00109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 243 26100.376 0.005 0.00751 0.1 0.251 0.0775 0.103 0.299 0.378 0.000747 0.000945 ! 
Validation 243 26100.376 0.005 0.00725 0.247 0.392 0.0768 0.102 0.546 0.593 0.00136 0.00148 Wall time: 26100.37609782163 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 244 100 0.222 0.0072 0.0775 0.0755 0.101 0.292 0.332 0.000729 0.00083 244 172 0.286 0.0072 0.143 0.0758 0.101 0.396 0.45 0.000991 0.00113 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 244 100 0.201 0.0066 0.0688 0.0734 0.0968 0.302 0.313 0.000755 0.000782 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 244 26207.440 0.005 0.00757 0.122 0.273 0.0779 0.104 0.321 0.417 0.000802 0.00104 ! Validation 244 26207.440 0.005 0.00697 0.136 0.276 0.0753 0.0996 0.384 0.44 0.000961 0.0011 Wall time: 26207.44070226373 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 245 100 0.186 0.00735 0.0386 0.0768 0.102 0.158 0.234 0.000395 0.000586 245 172 0.273 0.008 0.113 0.0801 0.107 0.356 0.401 0.000891 0.001 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 245 100 0.36 0.0065 0.23 0.0729 0.0961 0.57 0.572 0.00142 0.00143 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 245 26314.515 0.005 0.00745 0.112 0.261 0.0773 0.103 0.319 0.399 0.000798 0.000998 ! Validation 245 26314.515 0.005 0.00676 0.0884 0.223 0.074 0.098 0.284 0.354 0.00071 0.000886 Wall time: 26314.515512879938 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 246 100 0.18 0.00681 0.0436 0.0737 0.0984 0.21 0.249 0.000526 0.000622 246 172 0.164 0.00619 0.0398 0.0707 0.0938 0.2 0.238 0.000501 0.000595 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 246 100 0.176 0.00644 0.0469 0.0731 0.0957 0.249 0.258 0.000621 0.000645 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 246 26421.583 0.005 0.0078 0.139 0.295 0.079 0.105 0.343 0.445 0.000857 0.00111 ! Validation 246 26421.583 0.005 0.00686 0.0583 0.196 0.075 0.0988 0.24 0.288 0.000601 0.00072 Wall time: 26421.583528244868 ! Best model 246 0.196 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 247 100 0.163 0.00684 0.026 0.0734 0.0986 0.152 0.192 0.00038 0.000481 247 172 0.272 0.00789 0.114 0.08 0.106 0.331 0.403 0.000829 0.00101 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 247 100 0.29 0.00742 0.141 0.0785 0.103 0.447 0.448 0.00112 0.00112 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 247 26528.660 0.005 0.00764 0.112 0.265 0.0777 0.104 0.317 0.4 0.000793 0.000999 ! Validation 247 26528.660 0.005 0.00777 0.124 0.28 0.0798 0.105 0.368 0.42 0.00092 0.00105 Wall time: 26528.660763236694 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 248 100 0.281 0.00731 0.135 0.0773 0.102 0.396 0.438 0.000989 0.00109 248 172 0.261 0.00672 0.127 0.0736 0.0978 0.386 0.424 0.000966 0.00106 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 248 100 0.165 0.00603 0.0445 0.0707 0.0926 0.244 0.251 0.000609 0.000629 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 248 26635.982 0.005 0.00699 0.0924 0.232 0.0747 0.0997 0.287 0.362 0.000717 0.000906 ! 
Validation 248 26635.982 0.005 0.00635 0.0992 0.226 0.0717 0.095 0.317 0.376 0.000794 0.000939 Wall time: 26635.982862504665 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 249 100 0.172 0.0067 0.0382 0.0734 0.0976 0.18 0.233 0.000449 0.000583 249 172 0.305 0.00823 0.14 0.0827 0.108 0.386 0.446 0.000965 0.00112 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 249 100 0.139 0.00645 0.00985 0.0727 0.0957 0.104 0.118 0.00026 0.000296 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 249 26743.053 0.005 0.00695 0.102 0.241 0.0745 0.0994 0.3 0.38 0.000749 0.00095 ! Validation 249 26743.053 0.005 0.0068 0.0473 0.183 0.0743 0.0983 0.205 0.259 0.000512 0.000648 Wall time: 26743.053835544735 ! Best model 249 0.183 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 250 100 0.162 0.00663 0.0299 0.0725 0.097 0.165 0.206 0.000412 0.000516 250 172 0.196 0.00752 0.0453 0.078 0.103 0.2 0.254 0.0005 0.000634 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 250 100 0.374 0.00799 0.215 0.0816 0.107 0.55 0.552 0.00138 0.00138 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 250 26850.142 0.005 0.00737 0.115 0.263 0.0761 0.102 0.309 0.405 0.000772 0.00101 ! Validation 250 26850.142 0.005 0.00818 0.0788 0.242 0.0823 0.108 0.272 0.335 0.000681 0.000837 Wall time: 26850.14286794467 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 251 100 0.228 0.00655 0.097 0.0726 0.0965 0.356 0.371 0.000891 0.000928 251 172 0.186 0.00737 0.0383 0.0771 0.102 0.198 0.233 0.000494 0.000583 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 251 100 0.195 0.00709 0.0534 0.0783 0.1 0.273 0.276 0.000682 0.000689 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 251 26957.218 0.005 0.0083 0.133 0.299 0.0808 0.109 0.35 0.436 0.000874 0.00109 ! Validation 251 26957.218 0.005 0.00744 0.0397 0.188 0.0786 0.103 0.187 0.237 0.000468 0.000594 Wall time: 26957.21870831307 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 252 100 0.233 0.00812 0.0703 0.0816 0.107 0.259 0.316 0.000648 0.00079 252 172 0.545 0.00622 0.421 0.0706 0.094 0.752 0.773 0.00188 0.00193 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 252 100 0.985 0.00613 0.862 0.0712 0.0934 1.11 1.11 0.00277 0.00277 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 252 27064.475 0.005 0.00709 0.0964 0.238 0.0751 0.1 0.293 0.37 0.000734 0.000924 ! Validation 252 27064.475 0.005 0.00635 0.373 0.499 0.072 0.095 0.673 0.728 0.00168 0.00182 Wall time: 27064.475464864634 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 253 100 0.141 0.00585 0.0235 0.0677 0.0912 0.15 0.183 0.000374 0.000457 253 172 0.191 0.00642 0.0623 0.0711 0.0955 0.251 0.297 0.000627 0.000744 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 253 100 0.115 0.00541 0.00654 0.0664 0.0877 0.0946 0.0964 0.000237 0.000241 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 253 27171.595 0.005 0.00716 0.0876 0.231 0.0751 0.101 0.278 0.353 0.000695 0.000882 ! 
Validation 253 27171.595 0.005 0.00578 0.0765 0.192 0.0683 0.0906 0.269 0.33 0.000673 0.000824 Wall time: 27171.595162897836 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 254 100 0.749 0.0122 0.504 0.102 0.132 0.815 0.847 0.00204 0.00212 254 172 0.223 0.00844 0.0539 0.0828 0.11 0.23 0.277 0.000576 0.000692 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 254 100 0.606 0.00649 0.476 0.0735 0.0961 0.822 0.822 0.00205 0.00206 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 254 27278.605 0.005 0.00748 0.112 0.262 0.0763 0.103 0.298 0.399 0.000744 0.000998 ! Validation 254 27278.605 0.005 0.00675 0.121 0.256 0.0746 0.0979 0.34 0.415 0.00085 0.00104 Wall time: 27278.605383484624 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 255 100 0.143 0.00589 0.0255 0.0688 0.0915 0.166 0.19 0.000416 0.000476 255 172 0.197 0.00655 0.0663 0.0718 0.0965 0.254 0.307 0.000634 0.000767 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 255 100 0.123 0.00548 0.0136 0.067 0.0883 0.139 0.139 0.000347 0.000348 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 255 27385.620 0.005 0.0064 0.0808 0.209 0.0713 0.0954 0.268 0.339 0.00067 0.000847 ! Validation 255 27385.620 0.005 0.00597 0.0341 0.153 0.0692 0.0921 0.179 0.22 0.000447 0.00055 Wall time: 27385.620823985897 ! Best model 255 0.153 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 256 100 0.214 0.00723 0.0695 0.0771 0.101 0.262 0.314 0.000654 0.000785 256 172 0.171 0.00603 0.0509 0.069 0.0926 0.206 0.269 0.000516 0.000673 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 256 100 0.119 0.00542 0.0105 0.0668 0.0877 0.117 0.122 0.000292 0.000306 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 256 27493.160 0.005 0.0066 0.0945 0.226 0.0726 0.0969 0.297 0.367 0.000744 0.000916 ! Validation 256 27493.160 0.005 0.00585 0.102 0.219 0.0688 0.0912 0.327 0.381 0.000819 0.000951 Wall time: 27493.160767213907 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 257 100 0.72 0.00591 0.602 0.0675 0.0917 0.904 0.925 0.00226 0.00231 257 172 0.389 0.00586 0.272 0.0681 0.0912 0.596 0.622 0.00149 0.00155 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 257 100 0.401 0.00564 0.288 0.0682 0.0896 0.64 0.64 0.0016 0.0016 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 257 27600.168 0.005 0.00596 0.0787 0.198 0.0687 0.092 0.261 0.334 0.000652 0.000835 ! Validation 257 27600.168 0.005 0.00603 0.292 0.413 0.0698 0.0926 0.598 0.644 0.00149 0.00161 Wall time: 27600.168470932636 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 258 100 0.248 0.00591 0.129 0.0682 0.0917 0.386 0.429 0.000966 0.00107 258 172 0.131 0.00533 0.0247 0.0655 0.087 0.14 0.187 0.00035 0.000468 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 258 100 0.126 0.00501 0.0256 0.0639 0.0843 0.183 0.191 0.000456 0.000477 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 258 27707.174 0.005 0.00647 0.0774 0.207 0.0717 0.0959 0.271 0.332 0.000678 0.000829 ! 
Validation 258 27707.174 0.005 0.00548 0.0677 0.177 0.0665 0.0883 0.251 0.31 0.000629 0.000776 Wall time: 27707.174089873675 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 259 100 0.194 0.00642 0.0655 0.0718 0.0956 0.267 0.305 0.000668 0.000763 259 172 0.162 0.0053 0.0557 0.0648 0.0868 0.23 0.281 0.000575 0.000704 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 259 100 0.167 0.00531 0.0603 0.0657 0.0869 0.274 0.293 0.000684 0.000732 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 259 27814.181 0.005 0.00673 0.0961 0.231 0.0727 0.0978 0.291 0.37 0.000727 0.000924 ! Validation 259 27814.181 0.005 0.00585 0.069 0.186 0.0688 0.0912 0.273 0.313 0.000683 0.000783 Wall time: 27814.18095910782 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 260 100 159 1.31 133 1.02 1.37 13.3 13.7 0.0333 0.0344 260 172 19.8 0.827 3.28 0.804 1.08 1.6 2.16 0.00399 0.0054 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 260 100 15.7 0.778 0.154 0.78 1.05 0.382 0.468 0.000954 0.00117 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 260 27921.197 0.005 0.442 15.6 24.4 0.433 0.793 2.1 4.71 0.00526 0.0118 ! Validation 260 27921.197 0.005 0.821 1.94 18.4 0.804 1.08 1.32 1.66 0.00329 0.00415 Wall time: 27921.197495135944 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 261 100 5.17 0.228 0.609 0.428 0.569 0.837 0.93 0.00209 0.00233 261 172 4.56 0.151 1.54 0.351 0.463 1.18 1.48 0.00296 0.0037 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 261 100 4.37 0.145 1.47 0.343 0.453 1.44 1.45 0.0036 0.00362 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 261 28028.271 0.005 0.335 2.5 9.2 0.498 0.69 1.46 1.89 0.00366 0.00471 ! Validation 261 28028.271 0.005 0.152 2.37 5.41 0.352 0.465 1.57 1.83 0.00393 0.00459 Wall time: 28028.27145464765 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 262 100 2.51 0.106 0.379 0.296 0.389 0.628 0.734 0.00157 0.00184 262 172 4.28 0.0957 2.36 0.279 0.369 1.74 1.83 0.00435 0.00458 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 262 100 4.85 0.0881 3.09 0.269 0.354 2.09 2.1 0.00523 0.00524 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 262 28135.369 0.005 0.119 1.08 3.46 0.311 0.411 0.975 1.24 0.00244 0.0031 ! Validation 262 28135.369 0.005 0.0926 2.04 3.89 0.275 0.363 1.56 1.7 0.00389 0.00426 Wall time: 28135.369347286876 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 263 100 2.27 0.0808 0.649 0.256 0.339 0.819 0.961 0.00205 0.0024 263 172 1.64 0.0759 0.127 0.248 0.328 0.363 0.424 0.000907 0.00106 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 263 100 3.41 0.071 1.99 0.241 0.318 1.68 1.68 0.00419 0.0042 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 263 28242.575 0.005 0.0837 0.705 2.38 0.261 0.345 0.786 1 0.00196 0.0025 ! 
Validation 263 28242.575 0.005 0.0737 1.55 3.02 0.244 0.324 1.34 1.48 0.00336 0.00371 Wall time: 28242.575004194863 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 264 100 2.32 0.0649 1.02 0.229 0.304 1.08 1.2 0.00269 0.00301 264 172 1.46 0.0621 0.218 0.223 0.297 0.487 0.556 0.00122 0.00139 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 264 100 1.25 0.0582 0.0872 0.218 0.288 0.336 0.352 0.000839 0.00088 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 264 28349.746 0.005 0.0679 0.611 1.97 0.234 0.311 0.758 0.932 0.0019 0.00233 ! Validation 264 28349.746 0.005 0.0607 0.269 1.48 0.222 0.294 0.495 0.618 0.00124 0.00154 Wall time: 28349.74647549307 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 265 100 1.58 0.0565 0.446 0.213 0.283 0.677 0.797 0.00169 0.00199 265 172 3.76 0.0528 2.7 0.206 0.274 1.92 1.96 0.0048 0.0049 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 265 100 3.16 0.0505 2.15 0.204 0.268 1.75 1.75 0.00436 0.00437 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 265 28457.425 0.005 0.0569 0.545 1.68 0.214 0.284 0.687 0.879 0.00172 0.0022 ! Validation 265 28457.425 0.005 0.0517 1.08 2.12 0.204 0.271 1.13 1.24 0.00283 0.0031 Wall time: 28457.42502995394 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 266 100 1.71 0.0506 0.7 0.201 0.268 0.918 0.997 0.0023 0.00249 266 172 1.55 0.0459 0.633 0.192 0.255 0.879 0.949 0.0022 0.00237 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 266 100 1.2 0.0441 0.321 0.191 0.25 0.669 0.675 0.00167 0.00169 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 266 28564.471 0.005 0.0512 0.589 1.61 0.203 0.27 0.739 0.915 0.00185 0.00229 ! Validation 266 28564.471 0.005 0.046 1.19 2.11 0.193 0.256 1.21 1.3 0.00302 0.00325 Wall time: 28564.471768196672 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 267 100 2.74 0.0444 1.85 0.19 0.251 1.56 1.62 0.00391 0.00405 267 172 1.08 0.0424 0.233 0.186 0.245 0.506 0.575 0.00126 0.00144 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 267 100 1.26 0.0398 0.461 0.181 0.238 0.805 0.81 0.00201 0.00202 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 267 28671.518 0.005 0.0454 0.526 1.43 0.191 0.254 0.683 0.865 0.00171 0.00216 ! Validation 267 28671.518 0.005 0.0411 0.254 1.08 0.183 0.242 0.498 0.6 0.00124 0.0015 Wall time: 28671.51811933471 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 268 100 1.05 0.0413 0.227 0.182 0.242 0.483 0.567 0.00121 0.00142 268 172 1.33 0.042 0.491 0.184 0.244 0.785 0.835 0.00196 0.00209 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 268 100 0.908 0.0412 0.0844 0.185 0.242 0.336 0.346 0.00084 0.000866 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 268 28778.553 0.005 0.0418 0.623 1.46 0.184 0.244 0.763 0.941 0.00191 0.00235 ! 
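The epoch summaries all follow the fixed column order of the header (# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse), so the per-epoch validation history can be scraped straight out of this log. A minimal sketch, assuming the log is saved as train.log (a hypothetical path):

import re

# Minimal sketch: collect the per-epoch "! Validation" summary rows from the log text.
# Keys follow the "# Epoch wal LR ..." header printed before each summary block.
COLS = ["Epoch", "wal", "LR", "loss_f", "loss_e", "loss",
        "f_mae", "f_rmse", "e_mae", "e_rmse", "e/N_mae", "e/N_rmse"]
ROW = re.compile(r"!\s*Validation\s+((?:[-+.\deE]+\s+){11}[-+.\deE]+)")

def parse_validation_rows(text):
    rows = []
    for match in ROW.finditer(text):
        values = [float(v) for v in match.group(1).split()]
        rows.append(dict(zip(COLS, values)))
    return rows

with open("train.log") as fh:      # hypothetical filename for this log
    history = parse_validation_rows(fh.read())
print(len(history), history[-1]["Epoch"])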
Validation 268 28778.553 0.005 0.0421 0.168 1.01 0.184 0.245 0.387 0.488 0.000968 0.00122 Wall time: 28778.553170628846 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 269 100 0.862 0.0368 0.127 0.172 0.229 0.356 0.424 0.00089 0.00106 269 172 0.83 0.0336 0.158 0.166 0.219 0.419 0.473 0.00105 0.00118 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 269 100 0.699 0.033 0.0381 0.166 0.217 0.218 0.233 0.000546 0.000581 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 269 28885.580 0.005 0.0373 0.356 1.1 0.174 0.23 0.561 0.711 0.0014 0.00178 ! Validation 269 28885.580 0.005 0.0343 0.532 1.22 0.168 0.221 0.762 0.869 0.00191 0.00217 Wall time: 28885.580200773664 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 270 100 0.831 0.0348 0.135 0.168 0.222 0.331 0.438 0.000827 0.0011 270 172 0.749 0.0337 0.0753 0.165 0.219 0.256 0.327 0.000639 0.000818 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 270 100 0.647 0.0317 0.0126 0.163 0.212 0.0863 0.134 0.000216 0.000334 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 270 28992.609 0.005 0.0354 0.528 1.24 0.17 0.224 0.691 0.866 0.00173 0.00217 ! Validation 270 28992.609 0.005 0.033 0.147 0.806 0.164 0.216 0.359 0.457 0.000896 0.00114 Wall time: 28992.609001125675 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 271 100 0.733 0.031 0.112 0.159 0.21 0.353 0.399 0.000883 0.000999 271 172 0.791 0.0352 0.0871 0.169 0.224 0.289 0.352 0.000724 0.000879 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 271 100 1.73 0.0348 1.03 0.171 0.222 1.2 1.21 0.00301 0.00302 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 271 29099.637 0.005 0.0327 0.56 1.21 0.163 0.216 0.721 0.892 0.0018 0.00223 ! Validation 271 29099.637 0.005 0.0358 1.52 2.24 0.171 0.226 1.4 1.47 0.00351 0.00367 Wall time: 29099.637546489947 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 272 100 1.4 0.0286 0.833 0.153 0.201 1.05 1.09 0.00263 0.00272 272 172 0.718 0.0277 0.164 0.151 0.198 0.398 0.483 0.000995 0.00121 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 272 100 0.558 0.027 0.0169 0.151 0.196 0.116 0.155 0.000289 0.000387 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 272 29206.669 0.005 0.0301 0.361 0.962 0.157 0.207 0.577 0.716 0.00144 0.00179 ! Validation 272 29206.669 0.005 0.028 0.126 0.686 0.152 0.2 0.343 0.423 0.000857 0.00106 Wall time: 29206.669676261954 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 273 100 0.806 0.0273 0.261 0.149 0.197 0.534 0.609 0.00134 0.00152 273 172 0.813 0.0289 0.234 0.154 0.203 0.495 0.576 0.00124 0.00144 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 273 100 0.637 0.0275 0.0874 0.152 0.198 0.316 0.352 0.00079 0.000881 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 273 29313.703 0.005 0.028 0.496 1.06 0.151 0.199 0.692 0.84 0.00173 0.0021 ! 
Validation 273 29313.703 0.005 0.0286 0.25 0.821 0.153 0.201 0.5 0.597 0.00125 0.00149 Wall time: 29313.703583343886 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 274 100 0.975 0.0228 0.519 0.137 0.18 0.777 0.859 0.00194 0.00215 274 172 0.653 0.0238 0.177 0.139 0.184 0.419 0.502 0.00105 0.00126 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 274 100 0.505 0.0222 0.0613 0.136 0.178 0.265 0.295 0.000663 0.000738 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 274 29420.730 0.005 0.0248 0.313 0.808 0.142 0.188 0.543 0.667 0.00136 0.00167 ! Validation 274 29420.730 0.005 0.0231 0.123 0.585 0.137 0.181 0.334 0.419 0.000836 0.00105 Wall time: 29420.730477910023 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 275 100 0.561 0.0234 0.0938 0.138 0.182 0.297 0.365 0.000743 0.000913 275 172 0.598 0.0231 0.136 0.138 0.181 0.359 0.439 0.000897 0.0011 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 275 100 0.517 0.0214 0.0895 0.133 0.174 0.31 0.357 0.000774 0.000891 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 275 29527.764 0.005 0.0238 0.429 0.904 0.139 0.184 0.626 0.781 0.00157 0.00195 ! Validation 275 29527.764 0.005 0.0223 0.0953 0.541 0.135 0.178 0.297 0.368 0.000742 0.00092 Wall time: 29527.76413367875 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 276 100 0.599 0.0202 0.195 0.129 0.169 0.437 0.527 0.00109 0.00132 276 172 1.37 0.0206 0.958 0.13 0.171 1.12 1.17 0.0028 0.00292 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 276 100 0.704 0.0189 0.327 0.125 0.164 0.667 0.682 0.00167 0.0017 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 276 29634.796 0.005 0.021 0.29 0.709 0.131 0.173 0.516 0.641 0.00129 0.0016 ! Validation 276 29634.796 0.005 0.0198 0.457 0.853 0.127 0.168 0.72 0.806 0.0018 0.00201 Wall time: 29634.796671942808 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 277 100 1.97 0.0214 1.55 0.132 0.175 1.44 1.48 0.00361 0.00371 277 172 1.34 0.0198 0.943 0.127 0.168 1.07 1.16 0.00268 0.00289 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 277 100 0.76 0.019 0.381 0.125 0.164 0.721 0.736 0.0018 0.00184 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 277 29741.835 0.005 0.0204 0.385 0.793 0.129 0.17 0.601 0.739 0.0015 0.00185 ! Validation 277 29741.835 0.005 0.0198 0.12 0.516 0.127 0.168 0.335 0.414 0.000838 0.00103 Wall time: 29741.834948041942 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 278 100 1.05 0.0186 0.678 0.123 0.163 0.944 0.981 0.00236 0.00245 278 172 0.555 0.0188 0.179 0.123 0.163 0.414 0.505 0.00104 0.00126 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 278 100 0.409 0.0176 0.0577 0.12 0.158 0.232 0.286 0.00058 0.000716 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 278 29848.880 0.005 0.0191 0.292 0.675 0.125 0.165 0.529 0.645 0.00132 0.00161 ! 
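With the history extracted, the slow tail of convergence (and the brief blow-ups visible later in the run) is easier to see as a learning curve than in the raw tables. A minimal plotting sketch, continuing from parse_validation_rows above:

import matplotlib.pyplot as plt

# Minimal sketch: validation learning curves from the parsed history.
epochs = [r["Epoch"] for r in history]
plt.semilogy(epochs, [r["f_mae"] for r in history], label="validation f_mae")
plt.semilogy(epochs, [r["e/N_mae"] for r in history], label="validation e/N_mae")
plt.xlabel("epoch")
plt.ylabel("MAE (same units as the log)")
plt.legend()
plt.savefig("learning_curve.png")   # hypothetical output file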
Validation 278 29848.880 0.005 0.0181 0.0876 0.45 0.122 0.16 0.286 0.353 0.000715 0.000882 Wall time: 29848.880208868068 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 279 100 0.41 0.0164 0.0809 0.115 0.153 0.289 0.339 0.000722 0.000848 279 172 0.395 0.0165 0.0648 0.116 0.153 0.216 0.303 0.000539 0.000759 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 279 100 0.429 0.0163 0.103 0.115 0.152 0.345 0.383 0.000863 0.000958 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 279 29955.909 0.005 0.0174 0.234 0.581 0.119 0.157 0.46 0.577 0.00115 0.00144 ! Validation 279 29955.909 0.005 0.0169 0.217 0.555 0.117 0.155 0.483 0.555 0.00121 0.00139 Wall time: 29955.908909359016 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 280 100 0.462 0.0178 0.105 0.121 0.159 0.287 0.386 0.000717 0.000965 280 172 0.489 0.016 0.17 0.114 0.151 0.434 0.491 0.00109 0.00123 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 280 100 0.887 0.0168 0.551 0.117 0.155 0.876 0.885 0.00219 0.00221 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 280 30062.933 0.005 0.0172 0.345 0.689 0.118 0.156 0.556 0.701 0.00139 0.00175 ! Validation 280 30062.933 0.005 0.0171 0.946 1.29 0.118 0.156 1.12 1.16 0.00279 0.0029 Wall time: 30062.932906424627 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 281 100 0.598 0.0166 0.267 0.116 0.154 0.555 0.616 0.00139 0.00154 281 172 0.484 0.0158 0.168 0.114 0.15 0.422 0.489 0.00105 0.00122 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 281 100 0.398 0.0151 0.0964 0.111 0.146 0.345 0.37 0.000863 0.000925 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 281 30169.949 0.005 0.0166 0.276 0.607 0.116 0.153 0.51 0.626 0.00127 0.00157 ! Validation 281 30169.949 0.005 0.0155 0.0606 0.371 0.112 0.149 0.233 0.293 0.000584 0.000734 Wall time: 30169.94933570968 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 282 100 0.615 0.0146 0.322 0.109 0.144 0.628 0.677 0.00157 0.00169 282 172 0.604 0.0144 0.316 0.108 0.143 0.622 0.67 0.00155 0.00167 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 282 100 0.605 0.0135 0.336 0.104 0.138 0.681 0.691 0.0017 0.00173 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 282 30277.272 0.005 0.0153 0.207 0.512 0.111 0.147 0.426 0.543 0.00107 0.00136 ! Validation 282 30277.272 0.005 0.0141 0.189 0.47 0.107 0.141 0.455 0.518 0.00114 0.0013 Wall time: 30277.272098898888 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 283 100 0.673 0.0132 0.408 0.103 0.137 0.721 0.762 0.0018 0.0019 283 172 0.955 0.0146 0.663 0.109 0.144 0.904 0.971 0.00226 0.00243 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 283 100 0.548 0.0138 0.273 0.106 0.14 0.607 0.622 0.00152 0.00156 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 283 30384.301 0.005 0.0146 0.245 0.537 0.109 0.144 0.471 0.59 0.00118 0.00147 ! 
Validation 283 30384.301 0.005 0.014 0.218 0.497 0.107 0.141 0.477 0.556 0.00119 0.00139 Wall time: 30384.30182530498 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 284 100 0.348 0.0133 0.082 0.104 0.137 0.285 0.341 0.000713 0.000854 284 172 0.517 0.0131 0.254 0.103 0.136 0.511 0.601 0.00128 0.0015 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 284 100 0.441 0.0125 0.19 0.101 0.134 0.509 0.52 0.00127 0.0013 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 284 30491.524 0.005 0.014 0.214 0.495 0.107 0.141 0.439 0.552 0.0011 0.00138 ! Validation 284 30491.524 0.005 0.0129 0.0721 0.331 0.103 0.136 0.258 0.32 0.000645 0.0008 Wall time: 30491.524414405692 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 285 100 0.546 0.0139 0.267 0.106 0.141 0.562 0.616 0.0014 0.00154 285 172 0.599 0.0145 0.31 0.109 0.143 0.615 0.664 0.00154 0.00166 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 285 100 0.635 0.0147 0.341 0.11 0.145 0.683 0.696 0.00171 0.00174 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 285 30598.645 0.005 0.0134 0.267 0.536 0.104 0.138 0.496 0.616 0.00124 0.00154 ! Validation 285 30598.645 0.005 0.0145 0.328 0.617 0.109 0.143 0.612 0.683 0.00153 0.00171 Wall time: 30598.645625832956 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 286 100 0.277 0.0113 0.0507 0.0965 0.127 0.209 0.268 0.000522 0.000671 286 172 0.336 0.0141 0.0546 0.107 0.141 0.222 0.279 0.000556 0.000696 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 286 100 1.14 0.0144 0.854 0.108 0.143 1.1 1.1 0.00274 0.00275 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 286 30705.739 0.005 0.0136 0.273 0.544 0.105 0.139 0.482 0.623 0.0012 0.00156 ! Validation 286 30705.739 0.005 0.014 0.774 1.05 0.107 0.141 0.995 1.05 0.00249 0.00262 Wall time: 30705.7391967969 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 287 100 0.297 0.0113 0.0705 0.0957 0.127 0.251 0.317 0.000627 0.000791 287 172 0.306 0.0121 0.0638 0.0987 0.131 0.26 0.301 0.00065 0.000753 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 287 100 0.249 0.0118 0.0128 0.0978 0.13 0.131 0.135 0.000327 0.000337 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 287 30812.984 0.005 0.0128 0.165 0.421 0.102 0.135 0.377 0.485 0.000943 0.00121 ! Validation 287 30812.984 0.005 0.0118 0.0509 0.286 0.098 0.129 0.213 0.269 0.000533 0.000673 Wall time: 30812.984599028714 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 288 100 0.904 0.0122 0.659 0.0998 0.132 0.949 0.968 0.00237 0.00242 288 172 0.641 0.0111 0.418 0.0948 0.126 0.742 0.771 0.00185 0.00193 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 288 100 0.541 0.0106 0.329 0.0931 0.123 0.674 0.683 0.00169 0.00171 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 288 30920.101 0.005 0.0117 0.155 0.39 0.0976 0.129 0.378 0.469 0.000946 0.00117 ! 
Validation 288 30920.101 0.005 0.0109 0.577 0.795 0.0944 0.125 0.868 0.905 0.00217 0.00226 Wall time: 30920.101293813903 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 289 100 0.436 0.0127 0.181 0.102 0.134 0.417 0.508 0.00104 0.00127 289 172 1.12 0.011 0.895 0.0946 0.125 1.09 1.13 0.00273 0.00282 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 289 100 0.608 0.0108 0.393 0.0937 0.124 0.74 0.747 0.00185 0.00187 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 289 31027.776 0.005 0.0118 0.213 0.449 0.0979 0.13 0.431 0.549 0.00108 0.00137 ! Validation 289 31027.776 0.005 0.0108 0.65 0.867 0.0941 0.124 0.926 0.961 0.00232 0.0024 Wall time: 31027.776019030716 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 290 100 0.239 0.0104 0.0322 0.0918 0.121 0.162 0.214 0.000406 0.000535 290 172 0.442 0.0112 0.219 0.0959 0.126 0.504 0.557 0.00126 0.00139 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 290 100 0.29 0.0116 0.0569 0.0974 0.129 0.26 0.284 0.000651 0.000711 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 290 31135.005 0.005 0.011 0.166 0.386 0.0945 0.125 0.373 0.485 0.000933 0.00121 ! Validation 290 31135.005 0.005 0.0114 0.0515 0.28 0.0968 0.127 0.217 0.27 0.000542 0.000676 Wall time: 31135.005443769973 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 291 100 0.38 0.0105 0.171 0.0936 0.122 0.438 0.492 0.0011 0.00123 291 172 0.274 0.011 0.0536 0.0943 0.125 0.228 0.276 0.000571 0.00069 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 291 100 0.213 0.00973 0.0181 0.0892 0.118 0.152 0.161 0.00038 0.000402 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 291 31242.097 0.005 0.0107 0.155 0.368 0.0931 0.123 0.376 0.469 0.000941 0.00117 ! Validation 291 31242.097 0.005 0.00989 0.0423 0.24 0.0899 0.119 0.193 0.245 0.000483 0.000613 Wall time: 31242.09742071666 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 292 100 0.307 0.0113 0.0818 0.0966 0.127 0.288 0.341 0.00072 0.000852 292 172 0.8 0.0101 0.599 0.0912 0.12 0.891 0.923 0.00223 0.00231 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 292 100 0.438 0.00961 0.246 0.0889 0.117 0.583 0.591 0.00146 0.00148 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 292 31349.198 0.005 0.0102 0.163 0.366 0.091 0.12 0.392 0.48 0.000979 0.0012 ! Validation 292 31349.198 0.005 0.00973 0.257 0.451 0.0895 0.118 0.561 0.604 0.0014 0.00151 Wall time: 31349.19845667295 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 293 100 0.242 0.00958 0.0507 0.0884 0.117 0.208 0.269 0.000519 0.000671 293 172 0.587 0.0104 0.38 0.0928 0.121 0.598 0.735 0.0015 0.00184 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 293 100 0.252 0.0104 0.0435 0.0932 0.122 0.229 0.248 0.000573 0.000621 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 293 31456.298 0.005 0.0101 0.192 0.393 0.0905 0.12 0.41 0.522 0.00103 0.0013 ! 
Validation 293 31456.298 0.005 0.0104 0.0795 0.287 0.0929 0.121 0.27 0.336 0.000675 0.00084 Wall time: 31456.29817008972 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 294 100 0.51 0.0103 0.304 0.0918 0.121 0.608 0.657 0.00152 0.00164 294 172 0.443 0.00945 0.254 0.0877 0.116 0.569 0.601 0.00142 0.0015 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 294 100 0.223 0.00918 0.0395 0.0866 0.114 0.218 0.237 0.000546 0.000592 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 294 31563.393 0.005 0.0101 0.189 0.39 0.0905 0.12 0.406 0.518 0.00101 0.0013 ! Validation 294 31563.393 0.005 0.00928 0.0385 0.224 0.0872 0.115 0.184 0.234 0.000461 0.000584 Wall time: 31563.393135550898 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 295 100 0.194 0.00831 0.0276 0.0822 0.109 0.16 0.198 0.0004 0.000495 295 172 0.257 0.00918 0.073 0.0867 0.114 0.262 0.322 0.000654 0.000805 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 295 100 0.183 0.00842 0.0151 0.0835 0.109 0.107 0.146 0.000268 0.000366 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 295 31670.516 0.005 0.0091 0.11 0.292 0.086 0.114 0.321 0.395 0.000802 0.000987 ! Validation 295 31670.516 0.005 0.00859 0.0357 0.207 0.0841 0.11 0.176 0.225 0.00044 0.000563 Wall time: 31670.51624516165 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 296 100 0.326 0.00892 0.148 0.0854 0.113 0.419 0.458 0.00105 0.00115 296 172 0.209 0.0084 0.0412 0.0827 0.109 0.21 0.242 0.000525 0.000605 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 296 100 0.279 0.00804 0.118 0.0812 0.107 0.4 0.409 0.001 0.00102 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 296 31777.604 0.005 0.00899 0.147 0.326 0.0855 0.113 0.371 0.457 0.000927 0.00114 ! Validation 296 31777.604 0.005 0.00817 0.178 0.342 0.0819 0.108 0.462 0.503 0.00116 0.00126 Wall time: 31777.60443493491 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 297 100 0.24 0.00849 0.0703 0.0833 0.11 0.248 0.316 0.000621 0.00079 297 172 0.438 0.00857 0.267 0.0836 0.11 0.592 0.616 0.00148 0.00154 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 297 100 0.448 0.00837 0.281 0.0831 0.109 0.627 0.632 0.00157 0.00158 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 297 31884.702 0.005 0.00865 0.138 0.311 0.0839 0.111 0.355 0.442 0.000887 0.00111 ! Validation 297 31884.702 0.005 0.00851 0.136 0.306 0.0838 0.11 0.386 0.44 0.000966 0.0011 Wall time: 31884.702510196716 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 298 100 0.199 0.00829 0.0334 0.0819 0.109 0.169 0.218 0.000423 0.000545 298 172 0.204 0.00837 0.0362 0.0822 0.109 0.181 0.227 0.000453 0.000567 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 298 100 0.155 0.00747 0.00527 0.0784 0.103 0.0737 0.0866 0.000184 0.000216 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 298 31991.792 0.005 0.00833 0.127 0.293 0.0822 0.109 0.341 0.425 0.000853 0.00106 ! 
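The e/N columns are the energy errors normalised per atom: dividing e_mae by e/N_mae gives about 400 throughout this section (for example 0.27 / 0.000675 = 400 in the "! Validation 293" row above), so the structures in this run appear to average roughly 400 atoms each. That atom count is inferred from the ratios, not stated in this part of the log. A minimal check:

# Minimal sketch: infer atoms per structure from the e_mae / (e/N)_mae ratio.
pairs = [(0.27, 0.000675), (0.406, 0.00101)]   # from "! Validation 293" and "! Train 294" above
for e_mae, e_per_atom_mae in pairs:
    print(e_mae / e_per_atom_mae)               # ~400 in both cases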
Validation 298 31991.792 0.005 0.00771 0.0322 0.186 0.0794 0.105 0.166 0.214 0.000414 0.000534 Wall time: 31991.792870440055 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 299 100 0.254 0.00834 0.0872 0.0819 0.109 0.298 0.352 0.000744 0.00088 299 172 0.18 0.00798 0.0202 0.0805 0.106 0.122 0.169 0.000304 0.000423 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 299 100 0.225 0.0073 0.0792 0.0773 0.102 0.327 0.336 0.000818 0.000839 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 299 32098.896 0.005 0.00804 0.135 0.296 0.0808 0.107 0.358 0.438 0.000894 0.0011 ! Validation 299 32098.896 0.005 0.00748 0.0464 0.196 0.0783 0.103 0.209 0.257 0.000522 0.000642 Wall time: 32098.89679216873 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 300 100 0.506 0.00772 0.352 0.0793 0.105 0.685 0.707 0.00171 0.00177 300 172 0.183 0.00784 0.026 0.0799 0.106 0.151 0.192 0.000379 0.000481 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 300 100 0.177 0.00695 0.0377 0.0757 0.0994 0.222 0.231 0.000556 0.000579 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 300 32206.000 0.005 0.00773 0.11 0.265 0.0791 0.105 0.314 0.396 0.000784 0.000991 ! Validation 300 32206.000 0.005 0.00724 0.0325 0.177 0.077 0.101 0.167 0.215 0.000418 0.000537 Wall time: 32205.999976214953 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 301 100 0.395 0.00761 0.243 0.0788 0.104 0.517 0.588 0.00129 0.00147 301 172 0.227 0.00701 0.087 0.0753 0.0998 0.274 0.352 0.000685 0.000879 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 301 100 0.14 0.00688 0.00263 0.0755 0.0989 0.0521 0.0611 0.00013 0.000153 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 301 32313.105 0.005 0.0078 0.15 0.306 0.0795 0.105 0.365 0.462 0.000913 0.00116 ! Validation 301 32313.105 0.005 0.00718 0.0302 0.174 0.0767 0.101 0.16 0.207 0.000399 0.000518 Wall time: 32313.104987547733 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 302 100 0.216 0.00715 0.0729 0.0758 0.101 0.269 0.322 0.000672 0.000805 302 172 0.177 0.00781 0.0206 0.0795 0.105 0.138 0.171 0.000344 0.000428 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 302 100 0.212 0.00661 0.0797 0.0734 0.0969 0.332 0.337 0.00083 0.000841 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 302 32420.208 0.005 0.00731 0.0964 0.243 0.0768 0.102 0.298 0.37 0.000744 0.000926 ! Validation 302 32420.208 0.005 0.00692 0.0454 0.184 0.0751 0.0991 0.206 0.254 0.000516 0.000635 Wall time: 32420.208384149708 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 303 100 0.176 0.00656 0.0446 0.0727 0.0965 0.202 0.252 0.000505 0.000629 303 172 0.246 0.00812 0.0838 0.0814 0.107 0.29 0.345 0.000725 0.000863 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 303 100 0.192 0.00732 0.0457 0.0775 0.102 0.25 0.255 0.000624 0.000637 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 303 32527.302 0.005 0.00723 0.115 0.259 0.0764 0.101 0.323 0.404 0.000807 0.00101 ! 
Validation 303 32527.302 0.005 0.00756 0.0804 0.232 0.0793 0.104 0.277 0.338 0.000693 0.000845 Wall time: 32527.302886032034 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 304 100 0.191 0.00788 0.0338 0.0794 0.106 0.178 0.219 0.000444 0.000548 304 172 0.539 0.00697 0.399 0.0748 0.0995 0.731 0.753 0.00183 0.00188 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 304 100 0.519 0.00639 0.391 0.0724 0.0953 0.743 0.745 0.00186 0.00186 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 304 32634.406 0.005 0.00768 0.164 0.318 0.0789 0.104 0.374 0.483 0.000935 0.00121 ! Validation 304 32634.406 0.005 0.00668 0.195 0.329 0.0739 0.0974 0.481 0.527 0.0012 0.00132 Wall time: 32634.40655621607 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 305 100 0.417 0.0086 0.245 0.0841 0.111 0.423 0.59 0.00106 0.00147 305 172 0.16 0.00686 0.0224 0.0743 0.0988 0.145 0.179 0.000363 0.000446 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 305 100 0.224 0.00632 0.0972 0.0719 0.0948 0.368 0.372 0.000919 0.000929 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 305 32742.936 0.005 0.00733 0.131 0.277 0.0767 0.102 0.321 0.431 0.000802 0.00108 ! Validation 305 32742.936 0.005 0.00653 0.156 0.287 0.073 0.0963 0.428 0.471 0.00107 0.00118 Wall time: 32742.936740019824 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 306 100 0.295 0.00629 0.17 0.0713 0.0945 0.462 0.491 0.00115 0.00123 306 172 0.342 0.00645 0.213 0.0721 0.0957 0.49 0.55 0.00122 0.00137 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 306 100 0.176 0.00605 0.0547 0.0702 0.0927 0.274 0.279 0.000686 0.000697 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 306 32850.068 0.005 0.00677 0.0912 0.227 0.0738 0.0981 0.291 0.36 0.000728 0.0009 ! Validation 306 32850.068 0.005 0.00635 0.0712 0.198 0.0721 0.095 0.276 0.318 0.000689 0.000795 Wall time: 32850.06871064566 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 307 100 0.143 0.00627 0.0173 0.0708 0.0944 0.126 0.157 0.000314 0.000392 307 172 0.147 0.00644 0.0185 0.0719 0.0956 0.142 0.162 0.000355 0.000405 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 307 100 0.238 0.00619 0.115 0.0708 0.0938 0.402 0.404 0.00101 0.00101 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 307 32957.182 0.005 0.00657 0.0866 0.218 0.0726 0.0966 0.273 0.351 0.000684 0.000877 ! Validation 307 32957.182 0.005 0.00638 0.048 0.176 0.0722 0.0952 0.213 0.261 0.000533 0.000653 Wall time: 32957.182603195775 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 308 100 0.169 0.00657 0.0376 0.072 0.0966 0.19 0.231 0.000475 0.000578 308 172 0.198 0.00683 0.0611 0.0739 0.0985 0.26 0.295 0.000649 0.000737 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 308 100 0.175 0.00569 0.061 0.0677 0.0899 0.293 0.295 0.000731 0.000736 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 308 33064.462 0.005 0.00658 0.109 0.24 0.0728 0.0967 0.317 0.393 0.000793 0.000982 ! 
Validation 308 33064.462 0.005 0.00591 0.2 0.318 0.0692 0.0917 0.491 0.533 0.00123 0.00133 Wall time: 33064.46255310206 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 309 100 0.196 0.00716 0.0527 0.0759 0.101 0.239 0.274 0.000598 0.000684 309 172 0.311 0.0063 0.186 0.0715 0.0946 0.487 0.513 0.00122 0.00128 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 309 100 0.299 0.00587 0.181 0.0692 0.0913 0.506 0.508 0.00126 0.00127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 309 33171.578 0.005 0.00711 0.139 0.281 0.0753 0.101 0.35 0.444 0.000875 0.00111 ! Validation 309 33171.578 0.005 0.00618 0.104 0.228 0.071 0.0937 0.336 0.384 0.00084 0.000961 Wall time: 33171.577954409644 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 310 100 0.125 0.00543 0.0168 0.0656 0.0878 0.122 0.154 0.000306 0.000386 310 172 0.214 0.00675 0.0791 0.0739 0.0979 0.285 0.335 0.000714 0.000838 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 310 100 0.15 0.00701 0.0098 0.0751 0.0998 0.0844 0.118 0.000211 0.000295 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 310 33278.762 0.005 0.00634 0.0929 0.22 0.0712 0.0949 0.28 0.363 0.0007 0.000909 ! Validation 310 33278.762 0.005 0.00737 0.0586 0.206 0.0772 0.102 0.244 0.289 0.00061 0.000722 Wall time: 33278.76267679501 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 311 100 0.165 0.00638 0.0369 0.0717 0.0952 0.19 0.229 0.000474 0.000573 311 172 0.297 0.00577 0.181 0.0679 0.0906 0.488 0.507 0.00122 0.00127 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 311 100 0.127 0.00539 0.0195 0.0659 0.0875 0.164 0.166 0.000409 0.000416 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 311 33385.927 0.005 0.00617 0.0809 0.204 0.0702 0.0937 0.269 0.339 0.000673 0.000847 ! Validation 311 33385.927 0.005 0.00567 0.0323 0.146 0.0677 0.0897 0.167 0.214 0.000417 0.000536 Wall time: 33385.92733874777 ! Best model 311 0.146 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 312 100 0.173 0.0055 0.0635 0.0662 0.0884 0.276 0.3 0.00069 0.000751 312 172 0.216 0.0053 0.11 0.065 0.0868 0.348 0.395 0.00087 0.000988 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 312 100 0.143 0.00509 0.0417 0.0641 0.085 0.242 0.243 0.000606 0.000609 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 312 33493.066 0.005 0.00576 0.0681 0.183 0.0677 0.0905 0.25 0.311 0.000624 0.000778 ! Validation 312 33493.066 0.005 0.00543 0.0334 0.142 0.0663 0.0878 0.175 0.218 0.000436 0.000544 Wall time: 33493.06644193968 ! Best model 312 0.142 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 313 100 0.181 0.00577 0.0661 0.0678 0.0906 0.262 0.306 0.000656 0.000766 313 172 0.134 0.00572 0.0191 0.0676 0.0902 0.133 0.165 0.000332 0.000412 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 313 100 0.249 0.00532 0.143 0.0657 0.0869 0.45 0.45 0.00112 0.00113 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 313 33600.200 0.005 0.00686 0.129 0.266 0.0736 0.0988 0.337 0.428 0.000843 0.00107 ! 
Validation 313 33600.200 0.005 0.00572 0.0574 0.172 0.068 0.0902 0.232 0.286 0.000581 0.000714 Wall time: 33600.20067678485 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 314 100 0.176 0.00535 0.0693 0.0655 0.0872 0.283 0.314 0.000707 0.000785 314 172 0.165 0.00539 0.0573 0.0651 0.0875 0.226 0.285 0.000564 0.000713 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 314 100 0.199 0.00511 0.0964 0.064 0.0852 0.366 0.37 0.000916 0.000925 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 314 33707.357 0.005 0.00587 0.079 0.196 0.0685 0.0913 0.272 0.335 0.00068 0.000838 ! Validation 314 33707.357 0.005 0.00549 0.152 0.262 0.0666 0.0883 0.425 0.464 0.00106 0.00116 Wall time: 33707.35725846188 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 315 100 0.163 0.00543 0.0541 0.0661 0.0879 0.222 0.277 0.000554 0.000693 315 172 0.197 0.00784 0.0403 0.0794 0.106 0.188 0.239 0.00047 0.000598 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 315 100 0.158 0.00592 0.0401 0.0701 0.0917 0.237 0.239 0.000592 0.000597 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 315 33814.460 0.005 0.00613 0.0934 0.216 0.0696 0.0934 0.282 0.364 0.000706 0.000911 ! Validation 315 33814.460 0.005 0.00619 0.0373 0.161 0.0719 0.0938 0.178 0.23 0.000446 0.000576 Wall time: 33814.46056924667 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 316 100 0.153 0.006 0.0333 0.0691 0.0924 0.185 0.218 0.000461 0.000544 316 172 0.13 0.00533 0.0232 0.0648 0.087 0.16 0.182 0.0004 0.000454 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 316 100 0.116 0.00467 0.0223 0.0612 0.0814 0.177 0.178 0.000441 0.000445 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 316 33921.558 0.005 0.00547 0.0478 0.157 0.0659 0.0882 0.205 0.261 0.000512 0.000652 ! Validation 316 33921.558 0.005 0.00505 0.0282 0.129 0.0638 0.0847 0.157 0.2 0.000393 0.0005 Wall time: 33921.55819535861 ! Best model 316 0.129 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 317 100 0.517 0.00576 0.401 0.0676 0.0905 0.735 0.755 0.00184 0.00189 317 172 0.158 0.00669 0.0241 0.0734 0.0975 0.143 0.185 0.000357 0.000463 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 317 100 0.113 0.00494 0.0143 0.0633 0.0838 0.141 0.143 0.000354 0.000357 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 317 34028.662 0.005 0.00646 0.138 0.267 0.0715 0.0958 0.353 0.443 0.000883 0.00111 ! Validation 317 34028.662 0.005 0.00526 0.118 0.223 0.0654 0.0864 0.365 0.409 0.000913 0.00102 Wall time: 34028.66244344367 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 318 100 0.181 0.00531 0.0746 0.0643 0.0869 0.279 0.326 0.000698 0.000814 318 172 0.149 0.00641 0.0208 0.0712 0.0955 0.145 0.172 0.000362 0.00043 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 318 100 0.47 0.00509 0.369 0.0638 0.085 0.723 0.724 0.00181 0.00181 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 318 34137.388 0.005 0.00533 0.049 0.156 0.065 0.087 0.21 0.264 0.000526 0.00066 ! 
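The "! Best model <epoch> <loss>" lines appear whenever an epoch's validation loss beats every earlier epoch, and the value printed is that validation loss (0.142 at epoch 312, 0.129 at epoch 316 in this stretch). A minimal sketch that reproduces the bookkeeping from the parsed history; whether a checkpoint is written at the same moment is an assumption, not something this log shows:

# Minimal sketch: rebuild the "! Best model" lines from the parsed validation history.
# Selecting on the total validation loss is inferred from the log, not from the trainer code.
best = float("inf")
for row in history:
    if row["loss"] < best:
        best = row["loss"]
        print(f"! Best model {int(row['Epoch'])} {best:.3f}")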
Validation 318 34137.388 0.005 0.00525 0.0981 0.203 0.0652 0.0864 0.304 0.373 0.00076 0.000934 Wall time: 34137.387933590915 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 319 100 0.154 0.00524 0.0496 0.0649 0.0863 0.227 0.265 0.000568 0.000664 319 172 0.143 0.0062 0.0188 0.0709 0.0939 0.121 0.163 0.000303 0.000408 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 319 100 0.115 0.00564 0.00183 0.0688 0.0896 0.045 0.0509 0.000113 0.000127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 319 34244.528 0.005 0.00723 0.121 0.265 0.0748 0.101 0.325 0.414 0.000811 0.00104 ! Validation 319 34244.528 0.005 0.00584 0.0675 0.184 0.0696 0.0911 0.259 0.31 0.000646 0.000774 Wall time: 34244.52846154291 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 320 100 0.123 0.0053 0.0164 0.0646 0.0868 0.12 0.153 0.000299 0.000382 320 172 0.124 0.00525 0.0193 0.064 0.0863 0.14 0.165 0.00035 0.000414 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 320 100 0.0963 0.00463 0.00369 0.0605 0.0811 0.0589 0.0724 0.000147 0.000181 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 320 34351.647 0.005 0.00531 0.0699 0.176 0.0649 0.0869 0.246 0.315 0.000616 0.000788 ! Validation 320 34351.647 0.005 0.00495 0.0257 0.125 0.063 0.0839 0.143 0.191 0.000357 0.000477 Wall time: 34351.647205932066 ! Best model 320 0.125 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 321 100 0.133 0.00544 0.0239 0.0656 0.0879 0.161 0.184 0.000402 0.000461 321 172 0.294 0.00511 0.192 0.0634 0.0852 0.47 0.522 0.00118 0.0013 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 321 100 0.294 0.00485 0.197 0.0624 0.083 0.529 0.53 0.00132 0.00132 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 321 34458.805 0.005 0.00503 0.0652 0.166 0.0631 0.0846 0.244 0.304 0.00061 0.00076 ! Validation 321 34458.805 0.005 0.00509 0.0825 0.184 0.0641 0.0851 0.293 0.342 0.000733 0.000856 Wall time: 34458.80546542164 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 322 100 0.145 0.00486 0.0479 0.0621 0.0831 0.231 0.261 0.000577 0.000652 322 172 0.251 0.01 0.0507 0.0909 0.119 0.233 0.269 0.000583 0.000671 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 322 100 0.166 0.00806 0.00498 0.0802 0.107 0.0812 0.0841 0.000203 0.00021 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 322 34565.929 0.005 0.00536 0.0768 0.184 0.0649 0.0872 0.252 0.331 0.00063 0.000826 ! Validation 322 34565.929 0.005 0.00852 0.259 0.43 0.0831 0.11 0.514 0.607 0.00128 0.00152 Wall time: 34565.929320893716 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 323 100 0.11 0.00455 0.0186 0.0601 0.0804 0.136 0.163 0.00034 0.000406 323 172 0.164 0.00465 0.0707 0.0609 0.0813 0.261 0.317 0.000651 0.000792 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 323 100 0.141 0.00431 0.0547 0.0584 0.0783 0.276 0.279 0.00069 0.000697 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 323 34673.055 0.005 0.00572 0.0613 0.176 0.0667 0.0902 0.225 0.295 0.000562 0.000738 ! 
Validation 323 34673.055 0.005 0.00456 0.099 0.19 0.0604 0.0805 0.335 0.375 0.000837 0.000938 Wall time: 34673.05492511997 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 324 100 0.109 0.00482 0.012 0.0614 0.0828 0.0991 0.131 0.000248 0.000327 324 172 0.108 0.00485 0.0108 0.0619 0.083 0.0944 0.124 0.000236 0.000309 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 324 100 0.127 0.0043 0.0409 0.0584 0.0781 0.237 0.241 0.000593 0.000602 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 324 34780.179 0.005 0.00556 0.105 0.216 0.0664 0.0889 0.293 0.386 0.000733 0.000966 ! Validation 324 34780.179 0.005 0.00453 0.0334 0.124 0.0602 0.0802 0.177 0.218 0.000443 0.000545 Wall time: 34780.17889647698 ! Best model 324 0.124 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 325 100 0.241 0.00523 0.136 0.0638 0.0862 0.412 0.44 0.00103 0.0011 325 172 0.112 0.00495 0.0126 0.0625 0.0839 0.0985 0.134 0.000246 0.000334 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 325 100 0.176 0.00459 0.0845 0.0604 0.0808 0.343 0.347 0.000858 0.000866 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 325 34887.431 0.005 0.00463 0.0387 0.131 0.0603 0.0811 0.184 0.235 0.000461 0.000586 ! Validation 325 34887.431 0.005 0.00475 0.0599 0.155 0.0617 0.0822 0.255 0.292 0.000637 0.00073 Wall time: 34887.43114008475 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 326 100 0.1 0.00439 0.0121 0.0593 0.079 0.113 0.131 0.000281 0.000328 326 172 0.123 0.00437 0.0357 0.0587 0.0788 0.2 0.225 0.000499 0.000563 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 326 100 0.0961 0.00475 0.00105 0.0612 0.0822 0.0297 0.0386 7.43e-05 9.64e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 326 34994.547 0.005 0.00479 0.0564 0.152 0.0615 0.0825 0.214 0.283 0.000535 0.000708 ! Validation 326 34994.547 0.005 0.00478 0.0387 0.134 0.0619 0.0824 0.191 0.235 0.000478 0.000586 Wall time: 34994.54778758669 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 327 100 0.175 0.00506 0.074 0.0632 0.0848 0.276 0.324 0.000689 0.000811 327 172 0.176 0.00471 0.082 0.061 0.0818 0.291 0.341 0.000727 0.000853 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 327 100 0.123 0.00514 0.0204 0.0641 0.0854 0.166 0.17 0.000415 0.000426 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 327 35104.637 0.005 0.00499 0.0675 0.167 0.0629 0.0842 0.249 0.31 0.000622 0.000775 ! Validation 327 35104.637 0.005 0.00536 0.0561 0.163 0.0658 0.0873 0.238 0.282 0.000595 0.000706 Wall time: 35104.63713478483 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 328 100 0.119 0.00461 0.0264 0.0604 0.081 0.157 0.194 0.000393 0.000484 328 172 0.113 0.00432 0.0266 0.0584 0.0783 0.149 0.195 0.000373 0.000486 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 328 100 0.088 0.00418 0.00441 0.0582 0.0771 0.0666 0.0791 0.000166 0.000198 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 328 35211.752 0.005 0.00488 0.0714 0.169 0.0622 0.0833 0.251 0.319 0.000627 0.000796 ! 
Validation 328 35211.752 0.005 0.00438 0.0278 0.115 0.0594 0.0789 0.156 0.199 0.000389 0.000497 Wall time: 35211.75230610091 ! Best model 328 0.115 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 329 100 0.123 0.00488 0.0259 0.0624 0.0833 0.151 0.192 0.000378 0.000479 329 172 0.227 0.00468 0.134 0.0615 0.0816 0.422 0.436 0.00105 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 329 100 0.152 0.0047 0.0576 0.0611 0.0818 0.285 0.286 0.000712 0.000715 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 329 35318.871 0.005 0.00515 0.0757 0.179 0.0637 0.0855 0.269 0.328 0.000671 0.00082 ! Validation 329 35318.871 0.005 0.00496 0.0564 0.156 0.0632 0.084 0.245 0.283 0.000612 0.000708 Wall time: 35318.871313005686 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 330 100 0.132 0.0046 0.0402 0.06 0.0808 0.202 0.239 0.000504 0.000598 330 172 0.265 0.00506 0.164 0.0631 0.0848 0.416 0.483 0.00104 0.00121 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 330 100 0.286 0.00419 0.202 0.0574 0.0771 0.528 0.536 0.00132 0.00134 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 330 35426.167 0.005 0.00471 0.058 0.152 0.061 0.0818 0.231 0.287 0.000577 0.000718 ! Validation 330 35426.167 0.005 0.00448 0.308 0.397 0.0602 0.0798 0.639 0.661 0.0016 0.00165 Wall time: 35426.167866440956 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 331 100 0.123 0.00488 0.0255 0.062 0.0833 0.163 0.19 0.000408 0.000476 331 172 0.109 0.00404 0.0278 0.0564 0.0758 0.159 0.199 0.000398 0.000497 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 331 100 0.143 0.00437 0.0556 0.0586 0.0788 0.278 0.281 0.000696 0.000703 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 331 35533.213 0.005 0.00549 0.0733 0.183 0.0656 0.0883 0.256 0.323 0.00064 0.000807 ! Validation 331 35533.213 0.005 0.00463 0.0254 0.118 0.0608 0.0811 0.15 0.19 0.000375 0.000475 Wall time: 35533.213751245756 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 332 100 28.1 0.936 9.35 0.859 1.15 2.87 3.65 0.00718 0.00911 332 172 13.7 0.525 3.21 0.65 0.864 1.73 2.14 0.00431 0.00534 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 332 100 11.3 0.465 2 0.612 0.813 1.67 1.69 0.00419 0.00422 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 332 35640.621 0.005 0.4 7.66 15.7 0.434 0.754 1.8 3.3 0.0045 0.00825 ! Validation 332 35640.621 0.005 0.488 7.14 16.9 0.629 0.833 2.93 3.18 0.00732 0.00796 Wall time: 35640.62151741469 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 333 100 3.16 0.106 1.04 0.297 0.388 1.07 1.22 0.00267 0.00304 333 172 1.7 0.0747 0.209 0.248 0.326 0.45 0.545 0.00112 0.00136 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 333 100 2.19 0.0716 0.754 0.243 0.319 0.993 1.04 0.00248 0.00259 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 333 35747.756 0.005 0.153 1.11 4.18 0.345 0.467 0.963 1.26 0.00241 0.00315 ! 
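Twice in this section the optimisation briefly blows up: around epoch 260 the validation loss jumps from 0.186 to 18.4, and at epoch 332 from 0.118 to 16.9, recovering over the following epochs in both cases. A minimal sketch that flags such excursions in the parsed history; the 50x threshold is purely illustrative and not part of the training setup:

# Minimal sketch: flag epochs whose validation loss blows up relative to the running best.
best = float("inf")
for row in history:
    if row["loss"] > 50 * best:        # 50x is an arbitrary illustrative threshold
        print(f"epoch {int(row['Epoch'])}: loss {row['loss']:g} (best so far {best:g})")
    best = min(best, row["loss"])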
Validation 333 35747.756 0.005 0.074 0.623 2.1 0.247 0.324 0.792 0.941 0.00198 0.00235 Wall time: 35747.756222928874 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 334 100 1.18 0.0526 0.13 0.207 0.273 0.329 0.43 0.000823 0.00108 334 172 2.07 0.0403 1.26 0.181 0.239 1.3 1.34 0.00324 0.00334 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 334 100 1.43 0.0382 0.669 0.178 0.233 0.946 0.975 0.00236 0.00244 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 334 35854.953 0.005 0.056 0.569 1.69 0.213 0.282 0.721 0.899 0.0018 0.00225 ! Validation 334 35854.953 0.005 0.04 0.308 1.11 0.181 0.238 0.557 0.662 0.00139 0.00165 Wall time: 35854.95307994401 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 335 100 0.686 0.0269 0.148 0.147 0.195 0.382 0.459 0.000956 0.00115 335 172 1.15 0.0278 0.59 0.149 0.199 0.826 0.916 0.00206 0.00229 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 335 100 0.659 0.0284 0.0918 0.152 0.201 0.322 0.361 0.000805 0.000903 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 335 35962.087 0.005 0.0315 0.485 1.12 0.159 0.212 0.658 0.83 0.00165 0.00208 ! Validation 335 35962.087 0.005 0.0286 0.132 0.704 0.151 0.202 0.338 0.433 0.000845 0.00108 Wall time: 35962.087093852926 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 336 100 0.624 0.0228 0.169 0.134 0.18 0.397 0.49 0.000993 0.00123 336 172 0.501 0.0189 0.124 0.123 0.164 0.357 0.419 0.000891 0.00105 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 336 100 0.374 0.0181 0.012 0.122 0.16 0.129 0.131 0.000323 0.000326 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 336 36069.211 0.005 0.0226 0.188 0.64 0.134 0.179 0.408 0.516 0.00102 0.00129 ! Validation 336 36069.211 0.005 0.0188 0.0874 0.463 0.123 0.163 0.282 0.352 0.000705 0.000881 Wall time: 36069.21170722507 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 337 100 0.561 0.0176 0.209 0.119 0.158 0.484 0.545 0.00121 0.00136 337 172 0.513 0.0158 0.197 0.113 0.15 0.464 0.53 0.00116 0.00132 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 337 100 0.336 0.0155 0.0253 0.113 0.149 0.143 0.19 0.000358 0.000474 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 337 36176.559 0.005 0.0175 0.229 0.578 0.118 0.157 0.463 0.57 0.00116 0.00142 ! Validation 337 36176.559 0.005 0.0158 0.0916 0.408 0.113 0.15 0.289 0.361 0.000722 0.000902 Wall time: 36176.55962825799 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 338 100 0.351 0.0154 0.0421 0.112 0.148 0.203 0.245 0.000506 0.000611 338 172 0.353 0.0136 0.0815 0.104 0.139 0.285 0.34 0.000712 0.000851 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 338 100 0.33 0.0131 0.068 0.104 0.136 0.291 0.311 0.000728 0.000777 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 338 36283.990 0.005 0.0146 0.17 0.463 0.108 0.144 0.388 0.492 0.000971 0.00123 ! 
Validation 338 36283.990 0.005 0.0134 0.0836 0.352 0.104 0.138 0.276 0.345 0.000689 0.000862 Wall time: 36283.99032709794 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 339 100 0.283 0.0112 0.0602 0.0953 0.126 0.226 0.292 0.000565 0.000731 339 172 0.712 0.0112 0.489 0.0952 0.126 0.785 0.833 0.00196 0.00208 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 339 100 0.45 0.0109 0.232 0.0954 0.124 0.566 0.575 0.00141 0.00144 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 339 36391.124 0.005 0.0122 0.127 0.37 0.0992 0.131 0.336 0.424 0.00084 0.00106 ! Validation 339 36391.124 0.005 0.0114 0.348 0.575 0.0967 0.127 0.659 0.703 0.00165 0.00176 Wall time: 36391.12391705299 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 340 100 0.264 0.0112 0.0409 0.095 0.126 0.181 0.241 0.000453 0.000602 340 172 0.248 0.0104 0.0406 0.0916 0.121 0.204 0.24 0.00051 0.000601 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 340 100 0.196 0.00932 0.00973 0.0882 0.115 0.105 0.118 0.000262 0.000294 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 340 36498.584 0.005 0.0108 0.129 0.346 0.0938 0.124 0.339 0.429 0.000848 0.00107 ! Validation 340 36498.584 0.005 0.00981 0.0443 0.241 0.0898 0.118 0.198 0.251 0.000495 0.000628 Wall time: 36498.58477965789 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 341 100 0.338 0.00997 0.138 0.0901 0.119 0.386 0.443 0.000964 0.00111 341 172 0.276 0.00972 0.0817 0.0891 0.118 0.293 0.341 0.000732 0.000852 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 341 100 0.224 0.00964 0.0308 0.0896 0.117 0.183 0.209 0.000458 0.000523 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 341 36605.716 0.005 0.0101 0.209 0.411 0.0908 0.12 0.444 0.545 0.00111 0.00136 ! Validation 341 36605.716 0.005 0.00992 0.0631 0.262 0.0903 0.119 0.232 0.3 0.00058 0.000749 Wall time: 36605.715913943015 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 342 100 0.219 0.00878 0.0435 0.0856 0.112 0.199 0.249 0.000497 0.000622 342 172 0.204 0.00843 0.0357 0.0833 0.109 0.164 0.225 0.000411 0.000563 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 342 100 0.168 0.00811 0.0062 0.0822 0.107 0.084 0.0939 0.00021 0.000235 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 342 36712.852 0.005 0.00906 0.103 0.284 0.086 0.113 0.307 0.383 0.000766 0.000956 ! Validation 342 36712.852 0.005 0.00857 0.0369 0.208 0.0841 0.11 0.174 0.229 0.000434 0.000573 Wall time: 36712.85206266679 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 343 100 0.243 0.00807 0.0815 0.0817 0.107 0.29 0.34 0.000725 0.000851 343 172 0.273 0.00846 0.104 0.0831 0.11 0.348 0.385 0.00087 0.000963 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 343 100 0.33 0.00826 0.165 0.0827 0.108 0.476 0.484 0.00119 0.00121 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 343 36819.983 0.005 0.00829 0.11 0.276 0.0822 0.109 0.308 0.395 0.00077 0.000989 ! 
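The wal column is cumulative wall-clock seconds since the start of the run, so differencing consecutive summaries gives the per-epoch cost: about 107 s per epoch in this stretch (e.g. 36498.584 - 36391.124 ≈ 107.5 between epochs 339 and 340). A minimal check over the parsed history:

# Minimal sketch: per-epoch wall time from the cumulative "wal" column.
durations = [b["wal"] - a["wal"] for a, b in zip(history, history[1:])]
print(f"mean epoch time: {sum(durations) / len(durations):.1f} s")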
Validation 343 36819.983 0.005 0.00869 0.312 0.486 0.0847 0.111 0.62 0.666 0.00155 0.00167 Wall time: 36819.98356444063 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 344 100 0.257 0.00784 0.1 0.0794 0.106 0.32 0.378 0.0008 0.000945 344 172 0.316 0.00758 0.164 0.0789 0.104 0.448 0.483 0.00112 0.00121 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 344 100 0.141 0.00677 0.00539 0.0748 0.0981 0.0711 0.0875 0.000178 0.000219 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 344 36927.106 0.005 0.00774 0.0969 0.252 0.0795 0.105 0.298 0.371 0.000744 0.000927 ! Validation 344 36927.106 0.005 0.00725 0.0304 0.175 0.0774 0.102 0.16 0.208 0.0004 0.00052 Wall time: 36927.10591150774 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 345 100 0.191 0.00722 0.0468 0.0765 0.101 0.217 0.258 0.000544 0.000645 345 172 0.174 0.00721 0.0295 0.0765 0.101 0.164 0.205 0.00041 0.000512 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 345 100 0.204 0.00658 0.0721 0.0734 0.0967 0.313 0.32 0.000783 0.000801 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 345 37034.232 0.005 0.00721 0.0958 0.24 0.0766 0.101 0.296 0.369 0.000739 0.000922 ! Validation 345 37034.232 0.005 0.00691 0.095 0.233 0.0753 0.0991 0.314 0.367 0.000785 0.000919 Wall time: 37034.23204972362 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 346 100 0.178 0.0072 0.034 0.0754 0.101 0.165 0.22 0.000413 0.000549 346 172 0.158 0.00639 0.0299 0.0729 0.0953 0.175 0.206 0.000437 0.000515 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 346 100 0.166 0.00623 0.0413 0.0717 0.0941 0.235 0.242 0.000588 0.000606 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 346 37141.355 0.005 0.00717 0.129 0.272 0.0764 0.101 0.335 0.428 0.000837 0.00107 ! Validation 346 37141.355 0.005 0.00661 0.0528 0.185 0.0738 0.0969 0.211 0.274 0.000527 0.000685 Wall time: 37141.3552113669 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 347 100 0.169 0.0064 0.0411 0.072 0.0954 0.194 0.242 0.000486 0.000604 347 172 0.161 0.00625 0.0358 0.0714 0.0943 0.183 0.226 0.000457 0.000564 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 347 100 0.119 0.00576 0.00362 0.0689 0.0905 0.0662 0.0718 0.000165 0.000179 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 347 37248.479 0.005 0.00664 0.0857 0.218 0.0734 0.0971 0.275 0.349 0.000689 0.000873 ! Validation 347 37248.479 0.005 0.0062 0.0329 0.157 0.0714 0.0938 0.166 0.216 0.000416 0.00054 Wall time: 37248.47920482978 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 348 100 0.282 0.00644 0.153 0.072 0.0957 0.419 0.466 0.00105 0.00116 348 172 0.144 0.00602 0.0232 0.0699 0.0925 0.137 0.182 0.000343 0.000454 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 348 100 0.164 0.0055 0.0537 0.0669 0.0884 0.271 0.276 0.000677 0.000691 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 348 37355.597 0.005 0.00636 0.0856 0.213 0.0717 0.095 0.275 0.349 0.000688 0.000872 ! 
Validation 348 37355.597 0.005 0.0059 0.0334 0.151 0.0695 0.0916 0.174 0.218 0.000435 0.000544 Wall time: 37355.59767822269 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 349 100 0.137 0.00601 0.0168 0.0703 0.0924 0.12 0.154 0.000299 0.000386 349 172 0.161 0.006 0.041 0.069 0.0923 0.198 0.241 0.000496 0.000604 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 349 100 0.108 0.0053 0.00254 0.0656 0.0868 0.0516 0.0601 0.000129 0.00015 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 349 37462.722 0.005 0.00625 0.104 0.229 0.0711 0.0943 0.293 0.385 0.000732 0.000963 ! Validation 349 37462.722 0.005 0.00568 0.0249 0.139 0.0681 0.0899 0.141 0.188 0.000354 0.00047 Wall time: 37462.72235504491 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 350 100 0.149 0.00604 0.0284 0.0699 0.0927 0.16 0.201 0.000399 0.000503 350 172 0.128 0.00538 0.0207 0.0658 0.0875 0.143 0.171 0.000358 0.000429 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 350 100 0.145 0.0052 0.0411 0.0653 0.0859 0.237 0.242 0.000594 0.000605 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 350 37570.311 0.005 0.00585 0.0678 0.185 0.0686 0.0912 0.248 0.311 0.00062 0.000776 ! Validation 350 37570.311 0.005 0.00554 0.0717 0.182 0.0673 0.0887 0.272 0.319 0.000681 0.000798 Wall time: 37570.31131157372 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 351 100 0.331 0.0061 0.209 0.0699 0.0931 0.511 0.545 0.00128 0.00136 351 172 0.134 0.00522 0.0298 0.0653 0.0862 0.168 0.206 0.000419 0.000515 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 351 100 0.13 0.00499 0.03 0.0638 0.0842 0.202 0.206 0.000504 0.000516 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 351 37677.427 0.005 0.00579 0.0896 0.205 0.0683 0.0907 0.284 0.357 0.000709 0.000892 ! Validation 351 37677.427 0.005 0.00539 0.0369 0.145 0.0663 0.0875 0.19 0.229 0.000474 0.000573 Wall time: 37677.42689631088 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 352 100 0.175 0.0052 0.0708 0.0651 0.0859 0.292 0.317 0.000731 0.000793 352 172 0.165 0.00551 0.0549 0.0664 0.0885 0.249 0.279 0.000623 0.000698 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 352 100 0.146 0.0049 0.0481 0.063 0.0834 0.256 0.261 0.00064 0.000653 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 352 37785.409 0.005 0.00564 0.0782 0.191 0.0673 0.0895 0.268 0.333 0.000669 0.000834 ! Validation 352 37785.409 0.005 0.00528 0.0278 0.133 0.0656 0.0866 0.155 0.199 0.000386 0.000497 Wall time: 37785.40931366896 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 353 100 0.137 0.00555 0.0262 0.0664 0.0889 0.146 0.193 0.000366 0.000482 353 172 0.148 0.00525 0.0432 0.0654 0.0864 0.207 0.248 0.000517 0.000619 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 353 100 0.169 0.00499 0.0691 0.0642 0.0843 0.31 0.313 0.000775 0.000784 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 353 37892.544 0.005 0.00541 0.067 0.175 0.0658 0.0877 0.249 0.309 0.000622 0.000771 ! 
Validation 353 37892.544 0.005 0.00526 0.0719 0.177 0.0656 0.0864 0.281 0.32 0.000703 0.000799 Wall time: 37892.54408277199 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 354 100 0.316 0.00525 0.211 0.065 0.0864 0.52 0.547 0.0013 0.00137 354 172 0.171 0.0055 0.0609 0.0663 0.0884 0.255 0.294 0.000639 0.000736 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 354 100 0.104 0.00461 0.0115 0.0611 0.081 0.121 0.128 0.000303 0.00032 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 354 37999.680 0.005 0.00524 0.054 0.159 0.0647 0.0863 0.222 0.277 0.000554 0.000693 ! Validation 354 37999.680 0.005 0.00496 0.043 0.142 0.0634 0.084 0.2 0.247 0.000501 0.000618 Wall time: 37999.68008674169 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 355 100 0.237 0.00585 0.12 0.0685 0.0912 0.37 0.412 0.000925 0.00103 355 172 0.128 0.00498 0.0283 0.0627 0.0841 0.168 0.2 0.000421 0.000501 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 355 100 0.105 0.00448 0.0155 0.0604 0.0798 0.144 0.149 0.00036 0.000371 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 355 38107.713 0.005 0.00525 0.0784 0.183 0.0648 0.0864 0.259 0.334 0.000649 0.000835 ! Validation 355 38107.713 0.005 0.00483 0.024 0.121 0.0626 0.0828 0.145 0.185 0.000362 0.000462 Wall time: 38107.71304121381 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 356 100 0.137 0.00501 0.0364 0.0631 0.0844 0.151 0.227 0.000377 0.000568 356 172 0.11 0.00493 0.0113 0.0631 0.0837 0.107 0.127 0.000268 0.000317 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 356 100 0.0909 0.00445 0.00194 0.0602 0.0795 0.0415 0.0525 0.000104 0.000131 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 356 38214.841 0.005 0.0051 0.0708 0.173 0.0638 0.0851 0.247 0.317 0.000618 0.000793 ! Validation 356 38214.841 0.005 0.00479 0.0237 0.12 0.0624 0.0825 0.135 0.183 0.000338 0.000459 Wall time: 38214.84107142873 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 357 100 0.138 0.00495 0.0392 0.0635 0.0839 0.209 0.236 0.000523 0.00059 357 172 0.193 0.00488 0.0951 0.0624 0.0833 0.328 0.368 0.00082 0.000919 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 357 100 0.266 0.00459 0.174 0.0618 0.0808 0.497 0.498 0.00124 0.00124 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 357 38321.971 0.005 0.005 0.0724 0.172 0.0631 0.0843 0.257 0.321 0.000642 0.000802 ! Validation 357 38321.971 0.005 0.00492 0.151 0.249 0.0634 0.0836 0.432 0.463 0.00108 0.00116 Wall time: 38321.9710248257 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 358 100 0.213 0.00513 0.11 0.0635 0.0854 0.371 0.396 0.000927 0.000989 358 172 0.218 0.00543 0.109 0.0654 0.0879 0.345 0.394 0.000862 0.000986 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 358 100 0.132 0.00437 0.0451 0.0595 0.0788 0.247 0.253 0.000616 0.000633 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 358 38429.110 0.005 0.0049 0.0622 0.16 0.0624 0.0834 0.239 0.297 0.000598 0.000743 ! 
Validation 358 38429.110 0.005 0.00476 0.142 0.237 0.0621 0.0823 0.415 0.449 0.00104 0.00112 Wall time: 38429.110396957025 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 359 100 0.104 0.00445 0.0152 0.0596 0.0795 0.12 0.147 0.000301 0.000368 359 172 0.112 0.00476 0.0164 0.0619 0.0823 0.126 0.153 0.000316 0.000382 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 359 100 0.0858 0.00422 0.0014 0.0582 0.0774 0.0319 0.0446 7.99e-05 0.000112 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 359 38536.225 0.005 0.00625 0.134 0.259 0.0699 0.0943 0.292 0.436 0.000731 0.00109 ! Validation 359 38536.225 0.005 0.00462 0.0238 0.116 0.0611 0.081 0.142 0.184 0.000355 0.00046 Wall time: 38536.22561717173 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 360 100 0.275 0.00484 0.179 0.0607 0.083 0.464 0.504 0.00116 0.00126 360 172 0.106 0.00435 0.0188 0.0583 0.0786 0.134 0.163 0.000336 0.000408 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 360 100 0.213 0.0041 0.131 0.0574 0.0763 0.431 0.432 0.00108 0.00108 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 360 38643.354 0.005 0.0047 0.042 0.136 0.061 0.0817 0.196 0.244 0.000489 0.000611 ! Validation 360 38643.354 0.005 0.00446 0.0761 0.165 0.06 0.0796 0.293 0.329 0.000733 0.000822 Wall time: 38643.35403147293 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 361 100 0.124 0.00476 0.0287 0.061 0.0823 0.15 0.202 0.000375 0.000505 361 172 0.116 0.00481 0.0201 0.0621 0.0827 0.141 0.169 0.000352 0.000423 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 361 100 0.187 0.00429 0.101 0.0588 0.0781 0.374 0.379 0.000934 0.000948 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 361 38750.473 0.005 0.00476 0.0731 0.168 0.0615 0.0822 0.26 0.322 0.000649 0.000806 ! Validation 361 38750.473 0.005 0.0046 0.0558 0.148 0.0611 0.0808 0.24 0.282 0.0006 0.000704 Wall time: 38750.473141847644 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 362 100 0.248 0.00456 0.157 0.0599 0.0805 0.453 0.472 0.00113 0.00118 362 172 0.119 0.0048 0.0233 0.0615 0.0826 0.141 0.182 0.000352 0.000455 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 362 100 0.0926 0.00425 0.00749 0.0589 0.0778 0.0985 0.103 0.000246 0.000258 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 362 38857.608 0.005 0.00481 0.0797 0.176 0.0619 0.0827 0.276 0.337 0.000691 0.000842 ! Validation 362 38857.608 0.005 0.00467 0.0275 0.121 0.0615 0.0814 0.155 0.198 0.000387 0.000494 Wall time: 38857.608101926744 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 363 100 0.282 0.00454 0.192 0.0604 0.0804 0.494 0.522 0.00124 0.0013 363 172 0.24 0.00485 0.143 0.0622 0.0831 0.426 0.451 0.00106 0.00113 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 363 100 0.225 0.00433 0.139 0.0597 0.0784 0.442 0.444 0.0011 0.00111 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 363 38964.739 0.005 0.00467 0.0681 0.162 0.0609 0.0815 0.249 0.311 0.000621 0.000778 ! 
Validation 363 38964.739 0.005 0.00458 0.0918 0.183 0.061 0.0807 0.32 0.361 0.0008 0.000903 Wall time: 38964.739385013934 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 364 100 0.178 0.00409 0.0961 0.0573 0.0762 0.34 0.37 0.00085 0.000924 364 172 0.119 0.00469 0.0247 0.0611 0.0817 0.157 0.187 0.000393 0.000468 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 364 100 0.0941 0.00399 0.0144 0.0564 0.0753 0.139 0.143 0.000347 0.000357 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 364 39071.851 0.005 0.00466 0.0598 0.153 0.0608 0.0813 0.23 0.291 0.000576 0.000729 ! Validation 364 39071.851 0.005 0.00434 0.0416 0.128 0.0591 0.0785 0.203 0.243 0.000508 0.000608 Wall time: 39071.851265208796 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 365 100 0.136 0.00453 0.0454 0.0604 0.0802 0.225 0.254 0.000561 0.000635 365 172 0.219 0.00476 0.124 0.0618 0.0822 0.357 0.42 0.000891 0.00105 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 365 100 0.322 0.00498 0.222 0.0635 0.0841 0.562 0.562 0.0014 0.0014 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 365 39180.986 0.005 0.0045 0.0612 0.151 0.0597 0.08 0.234 0.295 0.000584 0.000737 ! Validation 365 39180.986 0.005 0.00533 0.0701 0.177 0.0657 0.0871 0.256 0.316 0.000639 0.000789 Wall time: 39180.98620317271 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 366 100 0.137 0.00472 0.0427 0.0616 0.0819 0.21 0.246 0.000526 0.000616 366 172 0.103 0.00407 0.0219 0.0569 0.076 0.142 0.176 0.000355 0.000441 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 366 100 0.0948 0.00378 0.0191 0.0552 0.0733 0.164 0.165 0.000409 0.000412 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 366 39288.065 0.005 0.00567 0.0883 0.202 0.0662 0.0898 0.274 0.354 0.000686 0.000886 ! Validation 366 39288.065 0.005 0.00412 0.0218 0.104 0.0576 0.0765 0.134 0.176 0.000336 0.00044 Wall time: 39288.06518751476 ! Best model 366 0.104 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 367 100 0.0947 0.00421 0.0105 0.0578 0.0774 0.0931 0.122 0.000233 0.000305 367 172 0.0967 0.00415 0.0136 0.0572 0.0768 0.117 0.139 0.000292 0.000347 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 367 100 0.0758 0.00377 0.000459 0.0551 0.0732 0.0196 0.0255 4.9e-05 6.38e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 367 39395.091 0.005 0.00452 0.0648 0.155 0.0599 0.0801 0.232 0.304 0.000581 0.000759 ! Validation 367 39395.091 0.005 0.00407 0.0233 0.105 0.0572 0.076 0.136 0.182 0.000341 0.000455 Wall time: 39395.09088268783 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 368 100 0.0959 0.00385 0.0188 0.0552 0.074 0.13 0.163 0.000324 0.000408 368 172 0.0985 0.00412 0.0161 0.0573 0.0765 0.115 0.151 0.000287 0.000379 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 368 100 0.156 0.00366 0.0834 0.0539 0.0721 0.344 0.344 0.000859 0.000861 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 368 39502.089 0.005 0.00423 0.0366 0.121 0.0577 0.0775 0.181 0.228 0.000454 0.00057 ! 
Validation 368 39502.089 0.005 0.00394 0.0538 0.132 0.0562 0.0748 0.24 0.276 0.000599 0.000691 Wall time: 39502.08936958574 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 369 100 0.242 0.00508 0.14 0.0639 0.085 0.403 0.446 0.00101 0.00112 369 172 0.216 0.00486 0.118 0.0626 0.0831 0.363 0.41 0.000909 0.00103 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 369 100 0.154 0.00459 0.0625 0.0618 0.0808 0.298 0.298 0.000745 0.000745 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 369 39609.183 0.005 0.0044 0.0617 0.15 0.0591 0.0791 0.231 0.296 0.000576 0.00074 ! Validation 369 39609.183 0.005 0.00501 0.0637 0.164 0.0639 0.0844 0.248 0.301 0.000619 0.000752 Wall time: 39609.183025748 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 370 100 0.155 0.00433 0.0686 0.0582 0.0784 0.253 0.312 0.000633 0.000781 370 172 0.142 0.00433 0.0558 0.0584 0.0784 0.244 0.282 0.000611 0.000704 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 370 100 0.0838 0.00376 0.00861 0.0554 0.0731 0.109 0.111 0.000273 0.000277 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 370 39716.288 0.005 0.00466 0.0622 0.155 0.0607 0.0814 0.234 0.297 0.000586 0.000743 ! Validation 370 39716.288 0.005 0.00415 0.0363 0.119 0.0582 0.0768 0.188 0.227 0.000469 0.000568 Wall time: 39716.28819956491 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 371 100 0.194 0.00463 0.101 0.0604 0.0811 0.315 0.379 0.000786 0.000949 371 172 0.149 0.00571 0.0351 0.0685 0.0901 0.195 0.223 0.000488 0.000558 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 371 100 0.336 0.00423 0.252 0.0585 0.0775 0.598 0.598 0.00149 0.0015 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 371 39823.380 0.005 0.00446 0.0639 0.153 0.0595 0.0796 0.239 0.301 0.000598 0.000753 ! Validation 371 39823.380 0.005 0.00441 0.0833 0.172 0.0599 0.0792 0.287 0.344 0.000718 0.00086 Wall time: 39823.380703527946 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 372 100 0.171 0.00459 0.0794 0.0607 0.0808 0.245 0.336 0.000613 0.00084 372 172 0.135 0.00455 0.0444 0.0611 0.0804 0.218 0.251 0.000546 0.000628 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 372 100 0.0799 0.0039 0.0019 0.0552 0.0744 0.0476 0.0519 0.000119 0.00013 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 372 39930.482 0.005 0.0051 0.0738 0.176 0.0634 0.0851 0.248 0.324 0.000621 0.00081 ! Validation 372 39930.482 0.005 0.00429 0.0396 0.125 0.0583 0.0781 0.192 0.237 0.000481 0.000593 Wall time: 39930.48260122677 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 373 100 0.0963 0.00416 0.0131 0.0572 0.0769 0.107 0.136 0.000268 0.000341 373 172 0.195 0.00374 0.121 0.0543 0.0729 0.357 0.414 0.000891 0.00103 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 373 100 0.377 0.00367 0.304 0.0546 0.0722 0.657 0.657 0.00164 0.00164 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 373 40037.587 0.005 0.00414 0.0443 0.127 0.0571 0.0767 0.198 0.251 0.000496 0.000627 ! 
Validation 373 40037.587 0.005 0.00395 0.499 0.578 0.0567 0.0749 0.822 0.842 0.00205 0.0021 Wall time: 40037.587471215986 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 374 100 0.12 0.00457 0.0282 0.0601 0.0806 0.15 0.2 0.000376 0.000501 374 172 0.0831 0.00374 0.00836 0.0544 0.0729 0.0852 0.109 0.000213 0.000273 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 374 100 0.0759 0.00342 0.00763 0.0521 0.0697 0.0994 0.104 0.000248 0.00026 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 374 40144.683 0.005 0.00439 0.0517 0.14 0.0587 0.079 0.201 0.271 0.000503 0.000678 ! Validation 374 40144.683 0.005 0.00367 0.0172 0.0907 0.0542 0.0723 0.121 0.156 0.000302 0.000391 Wall time: 40144.68333320506 ! Best model 374 0.091 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 375 100 0.114 0.00427 0.0287 0.0582 0.0779 0.16 0.202 0.0004 0.000505 375 172 0.106 0.00394 0.0273 0.056 0.0748 0.158 0.197 0.000396 0.000493 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 375 100 0.254 0.00389 0.176 0.0558 0.0743 0.495 0.501 0.00124 0.00125 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 375 40251.841 0.005 0.00445 0.069 0.158 0.0592 0.0795 0.244 0.313 0.00061 0.000783 ! Validation 375 40251.841 0.005 0.00413 0.149 0.232 0.0577 0.0766 0.427 0.461 0.00107 0.00115 Wall time: 40251.841282601 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 376 100 0.104 0.00391 0.0253 0.0554 0.0746 0.156 0.19 0.00039 0.000474 376 172 0.0963 0.00403 0.0157 0.0565 0.0757 0.122 0.149 0.000304 0.000373 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 376 100 0.0952 0.00449 0.00544 0.0622 0.0799 0.0791 0.0879 0.000198 0.00022 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 376 40358.933 0.005 0.00417 0.0642 0.147 0.0574 0.0769 0.244 0.302 0.00061 0.000755 ! Validation 376 40358.933 0.005 0.00471 0.0282 0.122 0.0631 0.0818 0.151 0.2 0.000378 0.0005 Wall time: 40358.93293581763 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 377 100 0.118 0.00419 0.0344 0.0573 0.0772 0.187 0.221 0.000468 0.000553 377 172 0.136 0.00392 0.0577 0.0553 0.0747 0.261 0.286 0.000653 0.000716 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 377 100 0.0861 0.00345 0.017 0.0524 0.0701 0.154 0.156 0.000386 0.000389 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 377 40466.026 0.005 0.00403 0.0483 0.129 0.0564 0.0757 0.206 0.262 0.000516 0.000655 ! Validation 377 40466.026 0.005 0.00365 0.0323 0.105 0.054 0.072 0.177 0.214 0.000443 0.000536 Wall time: 40466.02666972205 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 378 100 0.111 0.00344 0.0422 0.0521 0.07 0.212 0.245 0.000531 0.000612 378 172 0.157 0.00478 0.0615 0.0624 0.0824 0.261 0.296 0.000653 0.000739 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 378 100 0.796 0.00525 0.691 0.0669 0.0864 0.991 0.991 0.00248 0.00248 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 378 40573.124 0.005 0.00398 0.0601 0.14 0.056 0.0752 0.229 0.292 0.000572 0.000731 ! 
Validation 378 40573.124 0.005 0.00544 0.427 0.536 0.0675 0.0879 0.743 0.779 0.00186 0.00195 Wall time: 40573.124351290055 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 379 100 2.65 0.1 0.639 0.289 0.378 0.819 0.953 0.00205 0.00238 379 172 0.537 0.018 0.176 0.121 0.16 0.408 0.501 0.00102 0.00125 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 379 100 1.13 0.0171 0.783 0.119 0.156 1.05 1.05 0.00263 0.00264 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 379 40680.208 0.005 0.086 2.15 3.87 0.208 0.35 1.09 1.75 0.00272 0.00437 ! Validation 379 40680.208 0.005 0.0175 0.285 0.636 0.12 0.158 0.538 0.637 0.00134 0.00159 Wall time: 40680.208575319964 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 380 100 0.228 0.00868 0.0543 0.0837 0.111 0.2 0.278 0.000501 0.000694 380 172 0.179 0.0074 0.0313 0.077 0.103 0.165 0.211 0.000412 0.000527 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 380 100 0.141 0.00682 0.00437 0.0751 0.0984 0.0732 0.0788 0.000183 0.000197 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 380 40787.304 0.005 0.0104 0.0845 0.293 0.0911 0.122 0.27 0.347 0.000675 0.000867 ! Validation 380 40787.304 0.005 0.00713 0.128 0.271 0.0764 0.101 0.362 0.427 0.000904 0.00107 Wall time: 40787.30389289791 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 381 100 0.171 0.0062 0.0468 0.0706 0.0939 0.22 0.258 0.000549 0.000645 381 172 0.176 0.00611 0.0544 0.0692 0.0932 0.227 0.278 0.000567 0.000695 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 381 100 0.16 0.0052 0.0556 0.065 0.0859 0.237 0.281 0.000592 0.000703 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 381 40894.694 0.005 0.00644 0.0582 0.187 0.0717 0.0957 0.232 0.288 0.000579 0.000719 ! Validation 381 40894.694 0.005 0.00557 0.0914 0.203 0.0671 0.089 0.304 0.36 0.00076 0.000901 Wall time: 40894.694828859996 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 382 100 0.185 0.00558 0.0735 0.0666 0.0891 0.291 0.323 0.000727 0.000808 382 172 0.137 0.0054 0.0288 0.0653 0.0876 0.165 0.202 0.000412 0.000506 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 382 100 0.138 0.00452 0.0479 0.0608 0.0802 0.255 0.261 0.000636 0.000652 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 382 41001.784 0.005 0.00544 0.0621 0.171 0.0656 0.0879 0.238 0.297 0.000594 0.000743 ! Validation 382 41001.784 0.005 0.00491 0.0293 0.127 0.0629 0.0835 0.16 0.204 0.000401 0.00051 Wall time: 41001.78427507682 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 383 100 0.111 0.00475 0.0164 0.0611 0.0821 0.113 0.153 0.000284 0.000381 383 172 0.106 0.00456 0.0146 0.0599 0.0805 0.111 0.144 0.000278 0.00036 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 383 100 0.1 0.00416 0.0171 0.0576 0.0769 0.132 0.156 0.000331 0.000389 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 383 41110.159 0.005 0.00485 0.0386 0.136 0.0617 0.083 0.186 0.234 0.000466 0.000586 ! 
Validation 383 41110.159 0.005 0.00449 0.0389 0.129 0.06 0.0798 0.19 0.235 0.000474 0.000588 Wall time: 41110.159426007885 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 384 100 0.116 0.00416 0.0329 0.0576 0.0769 0.181 0.216 0.000453 0.000541 384 172 0.108 0.00396 0.0287 0.0562 0.075 0.171 0.202 0.000427 0.000505 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 384 100 0.0918 0.00398 0.0123 0.0565 0.0752 0.117 0.132 0.000293 0.00033 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 384 41217.250 0.005 0.00453 0.0447 0.135 0.0596 0.0803 0.2 0.252 0.0005 0.00063 ! Validation 384 41217.250 0.005 0.00426 0.038 0.123 0.0583 0.0778 0.183 0.232 0.000457 0.000581 Wall time: 41217.250547653995 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 385 100 0.429 0.00416 0.346 0.0574 0.0769 0.687 0.701 0.00172 0.00175 385 172 0.113 0.00396 0.0334 0.0558 0.075 0.186 0.218 0.000464 0.000545 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 385 100 0.081 0.00374 0.00615 0.0548 0.073 0.0917 0.0935 0.000229 0.000234 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 385 41324.933 0.005 0.0043 0.037 0.123 0.0579 0.0782 0.179 0.229 0.000449 0.000573 ! Validation 385 41324.933 0.005 0.00405 0.0242 0.105 0.0568 0.0759 0.143 0.186 0.000357 0.000464 Wall time: 41324.93354545301 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 386 100 0.142 0.00447 0.0525 0.0589 0.0797 0.23 0.273 0.000574 0.000683 386 172 0.147 0.00413 0.0646 0.057 0.0766 0.279 0.303 0.000699 0.000757 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 386 100 0.0851 0.00373 0.0104 0.0545 0.0728 0.105 0.122 0.000264 0.000304 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 386 41432.219 0.005 0.00418 0.0437 0.127 0.0571 0.077 0.199 0.249 0.000498 0.000623 ! Validation 386 41432.219 0.005 0.00392 0.0426 0.121 0.0559 0.0747 0.202 0.246 0.000504 0.000615 Wall time: 41432.219678967725 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 387 100 0.107 0.00382 0.0307 0.0549 0.0737 0.178 0.209 0.000445 0.000522 387 172 0.109 0.0043 0.0227 0.0581 0.0782 0.151 0.179 0.000377 0.000449 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 387 100 0.107 0.00365 0.0344 0.0539 0.0721 0.211 0.221 0.000526 0.000553 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 387 41539.329 0.005 0.00401 0.031 0.111 0.0559 0.0755 0.167 0.21 0.000419 0.000525 ! Validation 387 41539.329 0.005 0.00385 0.0255 0.102 0.0554 0.074 0.155 0.19 0.000387 0.000476 Wall time: 41539.32897431264 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 388 100 0.169 0.00425 0.0836 0.0578 0.0777 0.313 0.345 0.000783 0.000862 388 172 0.0885 0.00379 0.0128 0.0546 0.0734 0.108 0.135 0.000269 0.000337 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 388 100 0.0757 0.00352 0.00533 0.0528 0.0707 0.0826 0.0871 0.000206 0.000218 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 388 41646.621 0.005 0.00393 0.0383 0.117 0.0553 0.0747 0.186 0.233 0.000465 0.000584 ! 
Validation 388 41646.621 0.005 0.00372 0.0555 0.13 0.0545 0.0727 0.243 0.281 0.000606 0.000702 Wall time: 41646.621815637685 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 389 100 0.0961 0.00385 0.0191 0.0543 0.074 0.133 0.165 0.000332 0.000412 389 172 0.101 0.00357 0.0301 0.0525 0.0712 0.172 0.207 0.000429 0.000517 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 389 100 0.17 0.00347 0.101 0.0525 0.0703 0.378 0.379 0.000945 0.000947 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 389 41754.061 0.005 0.00388 0.0408 0.119 0.055 0.0743 0.193 0.241 0.000482 0.000602 ! Validation 389 41754.061 0.005 0.00368 0.0642 0.138 0.0541 0.0723 0.266 0.302 0.000665 0.000755 Wall time: 41754.061754846014 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 390 100 0.0985 0.00351 0.0284 0.0522 0.0706 0.175 0.201 0.000438 0.000502 390 172 0.0974 0.00357 0.026 0.0531 0.0713 0.155 0.192 0.000387 0.00048 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 390 100 0.104 0.00354 0.0331 0.053 0.0709 0.217 0.217 0.000542 0.000543 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 390 41861.075 0.005 0.00384 0.0467 0.123 0.0547 0.0738 0.206 0.258 0.000516 0.000644 ! Validation 390 41861.075 0.005 0.00371 0.021 0.0952 0.0543 0.0726 0.139 0.173 0.000346 0.000432 Wall time: 41861.07570582861 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 391 100 0.165 0.00359 0.0928 0.0531 0.0714 0.32 0.363 0.0008 0.000908 391 172 0.27 0.00382 0.194 0.0545 0.0737 0.51 0.525 0.00127 0.00131 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 391 100 0.15 0.00331 0.0835 0.051 0.0686 0.34 0.344 0.000849 0.000861 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 391 41968.099 0.005 0.00373 0.0347 0.109 0.0539 0.0728 0.176 0.222 0.000439 0.000554 ! Validation 391 41968.099 0.005 0.00353 0.124 0.194 0.053 0.0708 0.388 0.419 0.000971 0.00105 Wall time: 41968.09910752298 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 392 100 0.0768 0.00332 0.0105 0.0511 0.0687 0.103 0.122 0.000258 0.000305 392 172 0.155 0.00354 0.0844 0.0526 0.0709 0.312 0.346 0.000781 0.000866 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 392 100 0.14 0.0034 0.072 0.0521 0.0695 0.319 0.32 0.000799 0.0008 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 392 42075.407 0.005 0.00364 0.0352 0.108 0.0532 0.0719 0.177 0.223 0.000442 0.000559 ! Validation 392 42075.407 0.005 0.00352 0.0256 0.096 0.0529 0.0707 0.149 0.191 0.000371 0.000477 Wall time: 42075.407124713995 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 393 100 0.191 0.00385 0.114 0.0541 0.0739 0.386 0.402 0.000966 0.00101 393 172 0.234 0.00375 0.159 0.0543 0.073 0.439 0.476 0.0011 0.00119 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 393 100 0.124 0.00341 0.0554 0.0524 0.0696 0.28 0.281 0.000701 0.000702 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 393 42182.421 0.005 0.00366 0.0463 0.12 0.0534 0.0721 0.208 0.256 0.000519 0.000641 ! 
Validation 393 42182.421 0.005 0.00358 0.082 0.154 0.0536 0.0713 0.298 0.341 0.000746 0.000854 Wall time: 42182.4209225229 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 394 100 0.0881 0.00357 0.0167 0.0525 0.0712 0.112 0.154 0.00028 0.000385 394 172 0.0785 0.00344 0.00964 0.0516 0.0699 0.0949 0.117 0.000237 0.000293 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 394 100 0.0654 0.00324 0.00059 0.0505 0.0678 0.0257 0.029 6.44e-05 7.24e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 394 42289.446 0.005 0.00372 0.0448 0.119 0.0539 0.0727 0.191 0.252 0.000478 0.000631 ! Validation 394 42289.446 0.005 0.0034 0.0251 0.0931 0.0519 0.0695 0.146 0.189 0.000366 0.000472 Wall time: 42289.446179446764 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 395 100 0.0809 0.00362 0.0086 0.0528 0.0717 0.0888 0.111 0.000222 0.000276 395 172 0.0845 0.00368 0.011 0.0533 0.0723 0.105 0.125 0.000262 0.000312 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 395 100 0.0653 0.00321 0.00102 0.0504 0.0676 0.0324 0.0381 8.11e-05 9.52e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 395 42396.459 0.005 0.00366 0.0449 0.118 0.0535 0.0721 0.198 0.253 0.000495 0.000632 ! Validation 395 42396.459 0.005 0.00336 0.0189 0.0861 0.0516 0.0691 0.124 0.164 0.000311 0.00041 Wall time: 42396.459431029856 ! Best model 395 0.086 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 396 100 0.113 0.00326 0.0484 0.0509 0.068 0.235 0.262 0.000587 0.000655 396 172 0.116 0.00358 0.0447 0.0521 0.0714 0.228 0.252 0.000569 0.00063 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 396 100 0.0764 0.00327 0.0109 0.0509 0.0682 0.108 0.125 0.00027 0.000312 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 396 42503.496 0.005 0.00356 0.0345 0.106 0.0527 0.0712 0.179 0.222 0.000447 0.000554 ! Validation 396 42503.496 0.005 0.00338 0.0308 0.0985 0.0519 0.0694 0.168 0.209 0.000421 0.000523 Wall time: 42503.49623941304 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 397 100 0.0794 0.00318 0.0157 0.0499 0.0672 0.116 0.149 0.000291 0.000374 397 172 0.0974 0.00347 0.028 0.0519 0.0702 0.17 0.2 0.000425 0.000499 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 397 100 0.0635 0.00313 0.000872 0.0496 0.0667 0.0263 0.0352 6.58e-05 8.8e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 397 42610.518 0.005 0.00344 0.028 0.0967 0.0517 0.0699 0.158 0.199 0.000395 0.000498 ! Validation 397 42610.518 0.005 0.00327 0.0358 0.101 0.051 0.0682 0.184 0.226 0.00046 0.000564 Wall time: 42610.518771984614 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 398 100 0.211 0.00365 0.138 0.0528 0.0721 0.363 0.443 0.000907 0.00111 398 172 0.118 0.0037 0.0438 0.0541 0.0725 0.21 0.249 0.000525 0.000624 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 398 100 0.101 0.0036 0.0293 0.0532 0.0715 0.203 0.204 0.000508 0.00051 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 398 42717.608 0.005 0.00358 0.0448 0.116 0.0528 0.0713 0.192 0.252 0.000481 0.000631 ! 
Validation 398 42717.608 0.005 0.00378 0.0304 0.106 0.0549 0.0733 0.171 0.208 0.000427 0.00052 Wall time: 42717.608614217956 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 399 100 0.095 0.0035 0.0249 0.0517 0.0706 0.158 0.188 0.000395 0.00047 399 172 0.293 0.0043 0.207 0.0588 0.0782 0.516 0.543 0.00129 0.00136 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 399 100 0.222 0.00441 0.134 0.0598 0.0791 0.432 0.436 0.00108 0.00109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 399 42824.722 0.005 0.00351 0.0514 0.122 0.0524 0.0707 0.208 0.27 0.000521 0.000675 ! Validation 399 42824.722 0.005 0.00468 0.0597 0.153 0.0616 0.0815 0.233 0.291 0.000582 0.000728 Wall time: 42824.72246716963 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 400 100 0.132 0.00424 0.047 0.0576 0.0776 0.222 0.258 0.000555 0.000646 400 172 0.118 0.00384 0.0416 0.055 0.0739 0.191 0.243 0.000477 0.000608 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 400 100 0.0669 0.00316 0.00382 0.0499 0.067 0.0641 0.0736 0.00016 0.000184 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 400 42931.996 0.005 0.00392 0.047 0.125 0.0555 0.0746 0.196 0.259 0.00049 0.000646 ! Validation 400 42931.996 0.005 0.00328 0.0218 0.0874 0.0511 0.0683 0.141 0.176 0.000351 0.00044 Wall time: 42931.9965179367 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 401 100 0.0899 0.0032 0.026 0.0496 0.0674 0.161 0.192 0.000402 0.000481 401 172 0.0902 0.00338 0.0227 0.0511 0.0693 0.155 0.18 0.000387 0.000449 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 401 100 0.0709 0.00315 0.00795 0.0498 0.0669 0.106 0.106 0.000265 0.000266 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 401 43039.098 0.005 0.00363 0.0381 0.111 0.0533 0.0718 0.182 0.233 0.000455 0.000582 ! Validation 401 43039.098 0.005 0.00331 0.0192 0.0854 0.0514 0.0686 0.131 0.165 0.000328 0.000413 Wall time: 43039.09840096487 ! Best model 401 0.085 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 402 100 0.117 0.00379 0.0412 0.0549 0.0734 0.219 0.242 0.000548 0.000605 402 172 0.0834 0.00323 0.0188 0.0501 0.0677 0.138 0.163 0.000345 0.000408 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 402 100 0.0625 0.00306 0.00122 0.0491 0.066 0.0363 0.0416 9.07e-05 0.000104 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 402 43146.197 0.005 0.00521 0.083 0.187 0.0623 0.0861 0.244 0.344 0.00061 0.000859 ! Validation 402 43146.197 0.005 0.00323 0.0346 0.0991 0.0506 0.0677 0.182 0.222 0.000455 0.000555 Wall time: 43146.19764825003 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 403 100 0.0963 0.00338 0.0288 0.0511 0.0693 0.158 0.202 0.000395 0.000505 403 172 0.0767 0.00328 0.0111 0.0508 0.0683 0.105 0.126 0.000264 0.000315 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 403 100 0.292 0.0031 0.23 0.0496 0.0663 0.57 0.571 0.00142 0.00143 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 403 43254.394 0.005 0.00338 0.0277 0.0953 0.0513 0.0693 0.156 0.198 0.000389 0.000496 ! 
Validation 403 43254.394 0.005 0.0032 0.0526 0.117 0.0505 0.0675 0.211 0.273 0.000527 0.000684 Wall time: 43254.39437526371 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 404 100 0.0774 0.00325 0.0125 0.0508 0.0679 0.113 0.133 0.000282 0.000334 404 172 0.0881 0.00355 0.0172 0.0531 0.071 0.134 0.156 0.000334 0.00039 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 404 100 0.133 0.0036 0.0607 0.053 0.0715 0.267 0.294 0.000668 0.000734 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 404 43361.506 0.005 0.00355 0.0393 0.11 0.0526 0.0711 0.182 0.236 0.000456 0.000591 ! Validation 404 43361.506 0.005 0.00369 0.0264 0.1 0.0544 0.0724 0.144 0.194 0.00036 0.000484 Wall time: 43361.50601836294 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 405 100 0.0616 0.00289 0.0039 0.0475 0.064 0.0631 0.0744 0.000158 0.000186 405 172 0.0853 0.00321 0.0212 0.0493 0.0675 0.149 0.173 0.000374 0.000434 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 405 100 0.0788 0.00295 0.0199 0.0483 0.0647 0.164 0.168 0.000409 0.00042 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 405 43468.660 0.005 0.00335 0.0303 0.0972 0.0511 0.069 0.164 0.208 0.000411 0.000519 ! Validation 405 43468.660 0.005 0.00309 0.031 0.0928 0.0495 0.0663 0.174 0.21 0.000434 0.000525 Wall time: 43468.66081021074 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 406 100 0.124 0.00327 0.0586 0.0502 0.0682 0.272 0.288 0.000679 0.000721 406 172 0.11 0.00332 0.0438 0.0507 0.0687 0.217 0.249 0.000543 0.000624 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 406 100 0.127 0.00298 0.0671 0.0483 0.065 0.307 0.309 0.000767 0.000772 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 406 43576.759 0.005 0.00343 0.0342 0.103 0.0518 0.0698 0.172 0.221 0.00043 0.000551 ! Validation 406 43576.759 0.005 0.00312 0.0239 0.0862 0.0498 0.0666 0.145 0.184 0.000362 0.000461 Wall time: 43576.75975054968 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 407 100 0.121 0.00354 0.05 0.0525 0.0709 0.226 0.267 0.000565 0.000667 407 172 0.105 0.00358 0.0335 0.0531 0.0714 0.175 0.218 0.000439 0.000546 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 407 100 0.0714 0.00309 0.00963 0.0491 0.0663 0.105 0.117 0.000262 0.000292 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 407 43683.801 0.005 0.00342 0.039 0.107 0.0518 0.0698 0.185 0.235 0.000464 0.000588 ! Validation 407 43683.801 0.005 0.00316 0.029 0.0923 0.0501 0.067 0.162 0.203 0.000405 0.000508 Wall time: 43683.80185364699 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 408 100 0.121 0.00402 0.0404 0.0564 0.0756 0.189 0.24 0.000474 0.000599 408 172 0.0834 0.00368 0.00983 0.0533 0.0723 0.0978 0.118 0.000244 0.000295 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 408 100 0.0669 0.00295 0.0079 0.0482 0.0648 0.0891 0.106 0.000223 0.000265 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 408 43790.850 0.005 0.00348 0.0405 0.11 0.0522 0.0703 0.182 0.24 0.000455 0.0006 ! 
Validation 408 43790.850 0.005 0.00307 0.0164 0.0777 0.0494 0.066 0.12 0.153 0.000301 0.000381 Wall time: 43790.850485230796 ! Best model 408 0.078 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 409 100 0.0838 0.00322 0.0194 0.0504 0.0676 0.132 0.166 0.000331 0.000416 409 172 0.09 0.00322 0.0256 0.0498 0.0676 0.165 0.191 0.000413 0.000477 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 409 100 0.101 0.00285 0.0442 0.0471 0.0637 0.25 0.251 0.000624 0.000626 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 409 43897.972 0.005 0.00339 0.0338 0.102 0.0514 0.0694 0.174 0.219 0.000436 0.000548 ! Validation 409 43897.972 0.005 0.003 0.0197 0.0796 0.0488 0.0653 0.132 0.167 0.00033 0.000418 Wall time: 43897.97254273901 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 410 100 0.0769 0.00344 0.0081 0.0519 0.0699 0.0862 0.107 0.000216 0.000268 410 172 0.0761 0.00325 0.011 0.0504 0.068 0.101 0.125 0.000253 0.000312 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 410 100 0.183 0.00306 0.122 0.0493 0.0659 0.415 0.416 0.00104 0.00104 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 410 44005.004 0.005 0.00342 0.0395 0.108 0.0517 0.0697 0.185 0.237 0.000461 0.000592 ! Validation 410 44005.004 0.005 0.00312 0.0723 0.135 0.05 0.0666 0.289 0.321 0.000723 0.000802 Wall time: 44005.00394576602 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 411 100 0.0886 0.00325 0.0237 0.0501 0.0679 0.136 0.183 0.00034 0.000459 411 172 0.112 0.00324 0.0476 0.0501 0.0678 0.229 0.26 0.000573 0.00065 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 411 100 0.0696 0.00311 0.0074 0.0496 0.0665 0.0849 0.103 0.000212 0.000256 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 411 44112.248 0.005 0.00383 0.0474 0.124 0.0551 0.0738 0.204 0.259 0.000509 0.000649 ! Validation 411 44112.248 0.005 0.00328 0.0226 0.0882 0.0512 0.0683 0.139 0.179 0.000346 0.000448 Wall time: 44112.248744386714 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 412 100 0.109 0.00353 0.039 0.0524 0.0708 0.209 0.235 0.000522 0.000588 412 172 0.17 0.00313 0.108 0.0496 0.0667 0.358 0.391 0.000896 0.000978 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 412 100 0.142 0.00349 0.0722 0.0528 0.0704 0.319 0.32 0.000798 0.000801 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 412 44219.270 0.005 0.00343 0.0386 0.107 0.0519 0.0698 0.189 0.234 0.000473 0.000585 ! Validation 412 44219.270 0.005 0.0036 0.0346 0.107 0.0539 0.0715 0.189 0.222 0.000471 0.000555 Wall time: 44219.27023229562 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 413 100 0.0696 0.00312 0.00718 0.0492 0.0666 0.0817 0.101 0.000204 0.000252 413 172 0.137 0.00301 0.0763 0.0486 0.0654 0.316 0.329 0.00079 0.000823 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 413 100 0.171 0.00285 0.114 0.0475 0.0637 0.401 0.402 0.001 0.001 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 413 44326.292 0.005 0.00319 0.028 0.0919 0.0499 0.0674 0.159 0.199 0.000398 0.000498 ! 
Validation 413 44326.292 0.005 0.00296 0.0946 0.154 0.0485 0.0649 0.339 0.367 0.000847 0.000917 Wall time: 44326.29253881285 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 414 100 0.0971 0.00343 0.0286 0.0518 0.0698 0.15 0.202 0.000376 0.000504 414 172 0.106 0.00386 0.029 0.0555 0.0741 0.173 0.203 0.000432 0.000508 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 414 100 0.184 0.00343 0.116 0.0517 0.0698 0.406 0.406 0.00101 0.00101 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 414 44433.385 0.005 0.00477 0.0702 0.166 0.0603 0.0824 0.242 0.316 0.000606 0.00079 ! Validation 414 44433.385 0.005 0.00348 0.0536 0.123 0.0526 0.0703 0.232 0.276 0.000581 0.00069 Wall time: 44433.38500290597 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 415 100 0.163 0.00485 0.0665 0.0621 0.083 0.272 0.307 0.00068 0.000768 415 172 0.136 0.00302 0.0755 0.0482 0.0655 0.307 0.328 0.000768 0.000819 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 415 100 0.143 0.00313 0.0802 0.0505 0.0667 0.337 0.338 0.000842 0.000844 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 415 44540.442 0.005 0.00373 0.0439 0.118 0.0538 0.0728 0.198 0.25 0.000494 0.000624 ! Validation 415 44540.442 0.005 0.00317 0.0678 0.131 0.0509 0.0671 0.275 0.31 0.000687 0.000776 Wall time: 44540.442650754005 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 416 100 0.121 0.00341 0.0532 0.052 0.0696 0.241 0.275 0.000603 0.000687 416 172 0.0694 0.00298 0.00986 0.048 0.065 0.0996 0.118 0.000249 0.000296 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 416 100 0.0564 0.00272 0.00195 0.0459 0.0622 0.0479 0.0527 0.00012 0.000132 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 416 44647.495 0.005 0.00346 0.0465 0.116 0.052 0.0701 0.192 0.257 0.00048 0.000643 ! Validation 416 44647.495 0.005 0.00286 0.0162 0.0733 0.0475 0.0637 0.118 0.152 0.000296 0.000379 Wall time: 44647.49538539769 ! Best model 416 0.073 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 417 100 0.0724 0.00296 0.0131 0.0485 0.0649 0.113 0.137 0.000281 0.000341 417 172 0.141 0.00356 0.0698 0.0524 0.0712 0.285 0.315 0.000712 0.000787 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 417 100 0.0667 0.00277 0.0113 0.0466 0.0627 0.124 0.127 0.00031 0.000317 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 417 44754.558 0.005 0.0031 0.027 0.0891 0.0492 0.0664 0.156 0.196 0.00039 0.00049 ! Validation 417 44754.558 0.005 0.00286 0.0264 0.0837 0.0476 0.0638 0.153 0.194 0.000382 0.000485 Wall time: 44754.557877858635 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 418 100 0.0746 0.00289 0.0169 0.0473 0.064 0.115 0.155 0.000288 0.000387 418 172 0.125 0.00416 0.0419 0.058 0.0769 0.198 0.244 0.000494 0.00061 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 418 100 0.0733 0.00332 0.00692 0.0518 0.0687 0.084 0.0992 0.00021 0.000248 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 418 44861.590 0.005 0.00313 0.0339 0.0965 0.0494 0.0667 0.173 0.22 0.000433 0.000549 ! 
Validation 418 44861.590 0.005 0.00344 0.0221 0.0909 0.0529 0.0699 0.141 0.177 0.000352 0.000443 Wall time: 44861.590258803684 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 419 100 0.112 0.0029 0.0535 0.0476 0.0642 0.251 0.276 0.000627 0.000689 419 172 0.0699 0.0029 0.0118 0.0475 0.0642 0.111 0.13 0.000279 0.000324 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 419 100 0.191 0.00287 0.134 0.0475 0.0639 0.435 0.436 0.00109 0.00109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 419 44968.947 0.005 0.00322 0.0338 0.0983 0.0502 0.0677 0.175 0.219 0.000438 0.000548 ! Validation 419 44968.947 0.005 0.00291 0.0658 0.124 0.048 0.0643 0.276 0.306 0.000691 0.000764 Wall time: 44968.947132142726 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 420 100 0.153 0.00339 0.0852 0.0519 0.0694 0.307 0.348 0.000767 0.00087 420 172 0.111 0.00316 0.0475 0.049 0.067 0.238 0.26 0.000595 0.000649 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 420 100 0.0559 0.00275 0.000818 0.0467 0.0625 0.0276 0.0341 6.91e-05 8.52e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 420 45075.987 0.005 0.00328 0.0357 0.101 0.0505 0.0682 0.182 0.225 0.000454 0.000563 ! Validation 420 45075.987 0.005 0.00286 0.0195 0.0767 0.0477 0.0638 0.132 0.167 0.000329 0.000416 Wall time: 45075.98767052265 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 421 100 0.0949 0.00337 0.0275 0.0514 0.0692 0.162 0.198 0.000404 0.000494 421 172 0.0792 0.00319 0.0153 0.05 0.0674 0.124 0.148 0.000311 0.000369 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 421 100 0.0683 0.00284 0.0115 0.0467 0.0636 0.113 0.128 0.000283 0.00032 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 421 45183.027 0.005 0.00328 0.0414 0.107 0.0507 0.0683 0.19 0.243 0.000476 0.000607 ! Validation 421 45183.027 0.005 0.00297 0.0216 0.0811 0.0484 0.065 0.139 0.175 0.000348 0.000438 Wall time: 45183.027039194945 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 422 100 0.0615 0.00279 0.0057 0.047 0.063 0.0758 0.09 0.00019 0.000225 422 172 0.085 0.0028 0.029 0.0473 0.0631 0.171 0.203 0.000428 0.000507 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 422 100 0.126 0.00265 0.0732 0.0455 0.0614 0.31 0.323 0.000775 0.000806 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 422 45290.074 0.005 0.0032 0.0404 0.104 0.0501 0.0674 0.19 0.24 0.000474 0.000599 ! Validation 422 45290.074 0.005 0.00278 0.065 0.121 0.0469 0.0629 0.27 0.304 0.000674 0.00076 Wall time: 45290.07470131898 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 423 100 0.0926 0.003 0.0327 0.0484 0.0653 0.198 0.215 0.000495 0.000539 423 172 0.0793 0.00333 0.0127 0.0512 0.0688 0.102 0.135 0.000255 0.000336 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 423 100 0.059 0.00282 0.00259 0.0469 0.0633 0.0534 0.0607 0.000134 0.000152 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 423 45397.114 0.005 0.00323 0.0359 0.101 0.0502 0.0678 0.17 0.226 0.000426 0.000565 ! 
Validation 423 45397.114 0.005 0.00292 0.0266 0.085 0.0482 0.0644 0.154 0.194 0.000384 0.000486 Wall time: 45397.11428105878 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 424 100 0.0789 0.0031 0.017 0.0493 0.0663 0.107 0.155 0.000267 0.000388 424 172 0.135 0.00364 0.0623 0.0538 0.0719 0.272 0.298 0.000681 0.000744 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 424 100 0.0975 0.00287 0.0401 0.0473 0.0639 0.235 0.239 0.000588 0.000597 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 424 45504.167 0.005 0.00314 0.0378 0.101 0.0496 0.0668 0.18 0.232 0.000451 0.000579 ! Validation 424 45504.167 0.005 0.00298 0.0423 0.102 0.0487 0.065 0.208 0.245 0.00052 0.000613 Wall time: 45504.167466857005 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 425 100 16.7 0.722 2.26 0.75 1.01 1.48 1.79 0.00369 0.00448 425 172 6.67 0.142 3.84 0.343 0.449 2.25 2.34 0.00563 0.00584 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 425 100 2.73 0.131 0.114 0.326 0.431 0.343 0.402 0.000857 0.001 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 425 45611.209 0.005 0.582 7.56 19.2 0.616 0.91 2.22 3.28 0.00555 0.00819 ! Validation 425 45611.209 0.005 0.141 1.2 4.02 0.341 0.448 1.09 1.31 0.00273 0.00326 Wall time: 45611.20960987499 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 426 100 1.35 0.0487 0.378 0.198 0.263 0.617 0.733 0.00154 0.00183 426 172 0.807 0.0349 0.109 0.165 0.223 0.337 0.394 0.000842 0.000986 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 426 100 0.892 0.0315 0.262 0.157 0.212 0.587 0.61 0.00147 0.00152 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 426 45719.128 0.005 0.0612 0.536 1.76 0.219 0.295 0.7 0.873 0.00175 0.00218 ! Validation 426 45719.128 0.005 0.0337 0.723 1.4 0.163 0.219 0.912 1.01 0.00228 0.00253 Wall time: 45719.12825063802 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 427 100 0.824 0.0273 0.278 0.147 0.197 0.523 0.629 0.00131 0.00157 427 172 0.717 0.0215 0.287 0.129 0.175 0.576 0.638 0.00144 0.0016 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 427 100 0.518 0.0206 0.107 0.127 0.171 0.343 0.389 0.000857 0.000973 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 427 45826.180 0.005 0.0263 0.229 0.755 0.143 0.193 0.451 0.57 0.00113 0.00143 ! Validation 427 45826.180 0.005 0.0214 0.112 0.539 0.13 0.174 0.321 0.398 0.000802 0.000995 Wall time: 45826.18029281078 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 428 100 0.505 0.0198 0.11 0.124 0.168 0.339 0.395 0.000846 0.000988 428 172 0.41 0.0161 0.0885 0.113 0.151 0.277 0.355 0.000693 0.000887 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 428 100 0.408 0.0158 0.0917 0.112 0.15 0.332 0.361 0.000831 0.000902 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 428 45933.623 0.005 0.0195 0.199 0.588 0.123 0.166 0.41 0.532 0.00102 0.00133 ! 
Validation 428 45933.623 0.005 0.0164 0.0764 0.404 0.113 0.153 0.265 0.33 0.000661 0.000824 Wall time: 45933.62301122397 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 429 100 0.459 0.0157 0.144 0.111 0.15 0.381 0.452 0.000952 0.00113 429 172 0.357 0.0144 0.0684 0.106 0.143 0.251 0.312 0.000628 0.00078 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 429 100 0.311 0.0134 0.0436 0.103 0.138 0.21 0.249 0.000526 0.000622 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 429 46040.641 0.005 0.0158 0.21 0.526 0.111 0.15 0.426 0.546 0.00107 0.00137 ! Validation 429 46040.641 0.005 0.0139 0.178 0.455 0.105 0.14 0.43 0.503 0.00107 0.00126 Wall time: 46040.641777890734 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 430 100 0.499 0.0127 0.244 0.1 0.135 0.533 0.589 0.00133 0.00147 430 172 0.647 0.0118 0.411 0.0972 0.129 0.665 0.765 0.00166 0.00191 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 430 100 0.286 0.0111 0.0634 0.0943 0.126 0.28 0.3 0.000699 0.000751 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 430 46147.889 0.005 0.0129 0.156 0.414 0.101 0.135 0.371 0.471 0.000927 0.00118 ! Validation 430 46147.889 0.005 0.0117 0.169 0.402 0.0963 0.129 0.418 0.49 0.00105 0.00122 Wall time: 46147.88917956175 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 431 100 0.299 0.0105 0.0895 0.0912 0.122 0.294 0.357 0.000735 0.000891 431 172 0.324 0.00986 0.127 0.0888 0.118 0.38 0.424 0.000949 0.00106 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 431 100 0.29 0.00917 0.106 0.0859 0.114 0.372 0.389 0.000929 0.000972 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 431 46254.932 0.005 0.011 0.124 0.344 0.0932 0.125 0.338 0.42 0.000844 0.00105 ! Validation 431 46254.932 0.005 0.00963 0.0465 0.239 0.0879 0.117 0.201 0.257 0.000503 0.000642 Wall time: 46254.93233717373 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 432 100 0.402 0.00964 0.209 0.0877 0.117 0.511 0.545 0.00128 0.00136 432 172 0.211 0.0085 0.0412 0.0825 0.11 0.202 0.242 0.000505 0.000605 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 432 100 0.213 0.00785 0.0559 0.0798 0.106 0.264 0.282 0.000659 0.000705 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 432 46362.016 0.005 0.0096 0.127 0.319 0.0874 0.117 0.337 0.424 0.000843 0.00106 ! Validation 432 46362.016 0.005 0.00845 0.281 0.45 0.0825 0.11 0.587 0.632 0.00147 0.00158 Wall time: 46362.016296162736 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 433 100 0.215 0.00836 0.0475 0.0823 0.109 0.185 0.26 0.000464 0.00065 433 172 0.199 0.00746 0.0498 0.0773 0.103 0.204 0.266 0.000509 0.000665 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 433 100 0.147 0.00685 0.01 0.0749 0.0987 0.104 0.119 0.000259 0.000299 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 433 46469.062 0.005 0.00829 0.101 0.267 0.0815 0.109 0.308 0.379 0.00077 0.000947 ! 
Validation 433 46469.062 0.005 0.00748 0.0873 0.237 0.0779 0.103 0.295 0.352 0.000739 0.00088 Wall time: 46469.06219017599 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 434 100 0.218 0.00735 0.0709 0.0769 0.102 0.274 0.317 0.000685 0.000793 434 172 0.189 0.00693 0.0506 0.0744 0.0993 0.217 0.268 0.000544 0.00067 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 434 100 0.127 0.00611 0.0049 0.071 0.0932 0.0592 0.0834 0.000148 0.000209 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 434 46576.103 0.005 0.00741 0.0956 0.244 0.0771 0.103 0.295 0.369 0.000738 0.000922 ! Validation 434 46576.103 0.005 0.00684 0.0469 0.184 0.0747 0.0986 0.199 0.258 0.000498 0.000645 Wall time: 46576.10331641976 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 435 100 0.152 0.0067 0.0182 0.0734 0.0976 0.133 0.161 0.000334 0.000402 435 172 0.154 0.0058 0.0379 0.0684 0.0908 0.191 0.232 0.000477 0.00058 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 435 100 0.135 0.00539 0.0276 0.0668 0.0875 0.187 0.198 0.000467 0.000495 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 435 46683.160 0.005 0.00658 0.0752 0.207 0.0728 0.0967 0.267 0.327 0.000667 0.000818 ! Validation 435 46683.160 0.005 0.00607 0.0344 0.156 0.0704 0.0929 0.171 0.221 0.000428 0.000553 Wall time: 46683.16005415004 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 436 100 0.136 0.00563 0.0234 0.0675 0.0894 0.143 0.183 0.000357 0.000456 436 172 0.142 0.00528 0.0361 0.0653 0.0866 0.191 0.226 0.000478 0.000566 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 436 100 0.114 0.00482 0.0178 0.0632 0.0828 0.146 0.159 0.000365 0.000398 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 436 46790.209 0.005 0.00611 0.0792 0.201 0.0701 0.0932 0.257 0.336 0.000642 0.000839 ! Validation 436 46790.209 0.005 0.00549 0.0286 0.138 0.0669 0.0883 0.152 0.202 0.000381 0.000504 Wall time: 46790.209596766625 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 437 100 0.27 0.00603 0.149 0.0699 0.0925 0.426 0.461 0.00106 0.00115 437 172 0.167 0.00535 0.0598 0.0659 0.0872 0.252 0.291 0.000629 0.000729 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 437 100 0.208 0.00476 0.112 0.063 0.0823 0.396 0.4 0.00099 0.000999 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 437 46897.255 0.005 0.00571 0.0933 0.208 0.0677 0.0901 0.292 0.364 0.000731 0.00091 ! Validation 437 46897.255 0.005 0.00538 0.0908 0.198 0.0662 0.0875 0.302 0.359 0.000754 0.000898 Wall time: 46897.255075756926 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 438 100 0.126 0.00486 0.0288 0.0623 0.0831 0.163 0.202 0.000407 0.000506 438 172 0.141 0.0048 0.0451 0.0621 0.0826 0.22 0.253 0.000551 0.000633 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 438 100 0.194 0.00437 0.107 0.0603 0.0788 0.384 0.39 0.00096 0.000974 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 438 47004.307 0.005 0.00524 0.0514 0.156 0.0647 0.0863 0.218 0.27 0.000544 0.000676 ! 
Validation 438 47004.307 0.005 0.00498 0.117 0.217 0.0635 0.0841 0.372 0.408 0.00093 0.00102 Wall time: 47004.30747243669 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 439 100 0.123 0.00538 0.0152 0.0646 0.0874 0.117 0.147 0.000294 0.000368 439 172 0.153 0.00455 0.0616 0.0606 0.0804 0.246 0.296 0.000616 0.00074 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 439 100 0.096 0.00405 0.0151 0.0578 0.0759 0.139 0.146 0.000347 0.000366 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 439 47111.487 0.005 0.00489 0.0538 0.152 0.0623 0.0834 0.222 0.277 0.000556 0.000691 ! Validation 439 47111.487 0.005 0.00463 0.0341 0.127 0.0612 0.0811 0.176 0.22 0.000439 0.00055 Wall time: 47111.487010600045 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 440 100 0.109 0.00489 0.0111 0.0615 0.0834 0.103 0.126 0.000257 0.000314 440 172 0.0953 0.00432 0.00897 0.0585 0.0783 0.0924 0.113 0.000231 0.000282 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 440 100 0.0811 0.00385 0.00404 0.0563 0.074 0.0654 0.0758 0.000163 0.00019 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 440 47218.532 0.005 0.00465 0.0502 0.143 0.0607 0.0813 0.213 0.267 0.000533 0.000668 ! Validation 440 47218.532 0.005 0.00439 0.0585 0.146 0.0596 0.079 0.241 0.288 0.000603 0.000721 Wall time: 47218.532878162805 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 441 100 0.141 0.00441 0.0523 0.0591 0.0792 0.221 0.273 0.000554 0.000682 441 172 0.163 0.00485 0.0656 0.0622 0.0831 0.256 0.305 0.00064 0.000763 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 441 100 0.122 0.0039 0.0442 0.0568 0.0744 0.247 0.251 0.000618 0.000626 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 441 47325.811 0.005 0.00443 0.0506 0.139 0.0592 0.0794 0.216 0.268 0.00054 0.00067 ! Validation 441 47325.811 0.005 0.00435 0.0462 0.133 0.0593 0.0786 0.21 0.256 0.000525 0.00064 Wall time: 47325.81121206703 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 442 100 0.087 0.00399 0.0073 0.0563 0.0753 0.0764 0.102 0.000191 0.000255 442 172 0.11 0.00439 0.0222 0.0588 0.079 0.142 0.177 0.000355 0.000444 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 442 100 0.0738 0.00352 0.00337 0.0538 0.0707 0.0488 0.0692 0.000122 0.000173 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 442 47432.894 0.005 0.00429 0.0535 0.139 0.0581 0.0781 0.221 0.276 0.000553 0.00069 ! Validation 442 47432.894 0.005 0.00404 0.026 0.107 0.057 0.0758 0.144 0.192 0.000359 0.000481 Wall time: 47432.89458192373 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 443 100 0.104 0.00459 0.0119 0.0601 0.0807 0.1 0.13 0.00025 0.000326 443 172 0.111 0.00394 0.0317 0.0562 0.0749 0.163 0.212 0.000407 0.00053 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 443 100 0.35 0.00346 0.281 0.0532 0.0701 0.631 0.631 0.00158 0.00158 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 443 47539.942 0.005 0.00412 0.0506 0.133 0.0569 0.0766 0.21 0.268 0.000526 0.000671 ! 
Validation 443 47539.942 0.005 0.00398 0.0938 0.173 0.0565 0.0752 0.318 0.365 0.000796 0.000913 Wall time: 47539.94206814794 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 444 100 0.227 0.00438 0.14 0.0589 0.0789 0.401 0.445 0.001 0.00111 444 172 0.0937 0.00391 0.0156 0.0554 0.0745 0.119 0.149 0.000297 0.000372 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 444 100 0.0695 0.00342 0.001 0.0531 0.0698 0.0337 0.0377 8.43e-05 9.43e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 444 47646.998 0.005 0.00409 0.0643 0.146 0.0567 0.0762 0.242 0.302 0.000605 0.000756 ! Validation 444 47646.998 0.005 0.00387 0.0274 0.105 0.0557 0.0741 0.15 0.197 0.000376 0.000493 Wall time: 47646.998494172 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 445 100 0.0904 0.00354 0.0195 0.0527 0.071 0.14 0.167 0.000349 0.000416 445 172 0.292 0.00365 0.218 0.0535 0.0721 0.545 0.557 0.00136 0.00139 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 445 100 0.196 0.00343 0.127 0.0527 0.0698 0.425 0.426 0.00106 0.00106 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 445 47754.036 0.005 0.00384 0.0307 0.108 0.0548 0.0739 0.165 0.209 0.000411 0.000521 ! Validation 445 47754.036 0.005 0.00393 0.186 0.264 0.0562 0.0747 0.484 0.513 0.00121 0.00128 Wall time: 47754.03678725986 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 446 100 0.102 0.0035 0.0318 0.0524 0.0705 0.183 0.213 0.000457 0.000532 446 172 0.0913 0.00367 0.0179 0.054 0.0723 0.13 0.159 0.000325 0.000398 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 446 100 0.0733 0.00308 0.0118 0.0498 0.0661 0.124 0.13 0.00031 0.000324 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 446 47861.471 0.005 0.00371 0.0313 0.105 0.0538 0.0726 0.167 0.211 0.000417 0.000527 ! Validation 446 47861.471 0.005 0.00351 0.0252 0.0953 0.0529 0.0706 0.147 0.189 0.000368 0.000473 Wall time: 47861.471612685826 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 447 100 0.108 0.00355 0.0371 0.0525 0.071 0.2 0.23 0.000499 0.000574 447 172 0.188 0.00337 0.12 0.0516 0.0693 0.398 0.413 0.000995 0.00103 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 447 100 0.0625 0.00305 0.00142 0.0498 0.0659 0.0418 0.0449 0.000105 0.000112 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 447 47969.825 0.005 0.00373 0.0585 0.133 0.054 0.0728 0.23 0.288 0.000575 0.000721 ! Validation 447 47969.825 0.005 0.00352 0.0299 0.1 0.0529 0.0707 0.164 0.206 0.000409 0.000515 Wall time: 47969.825577259995 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 448 100 0.12 0.00361 0.0483 0.0533 0.0716 0.231 0.262 0.000578 0.000655 448 172 0.102 0.00387 0.0245 0.0549 0.0741 0.169 0.187 0.000423 0.000467 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 448 100 0.0831 0.0031 0.0212 0.0505 0.0663 0.17 0.174 0.000425 0.000434 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 448 48077.010 0.005 0.00376 0.0696 0.145 0.0543 0.0731 0.244 0.314 0.00061 0.000786 ! 
Validation 448 48077.010 0.005 0.00345 0.0717 0.141 0.0524 0.07 0.282 0.319 0.000706 0.000798 Wall time: 48077.010487074964 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 449 100 0.118 0.00376 0.0431 0.0547 0.0731 0.217 0.247 0.000543 0.000618 449 172 0.119 0.00335 0.0519 0.0513 0.069 0.249 0.272 0.000623 0.000679 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 449 100 0.0612 0.00304 0.000336 0.0499 0.0658 0.0189 0.0219 4.74e-05 5.47e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 449 48184.068 0.005 0.00357 0.0439 0.115 0.0528 0.0712 0.204 0.25 0.000511 0.000624 ! Validation 449 48184.068 0.005 0.00339 0.0413 0.109 0.052 0.0695 0.201 0.242 0.000503 0.000606 Wall time: 48184.068048679736 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 450 100 0.085 0.00345 0.016 0.0519 0.07 0.124 0.151 0.00031 0.000377 450 172 0.121 0.00372 0.0466 0.0536 0.0727 0.222 0.257 0.000556 0.000643 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 450 100 0.0879 0.00296 0.0287 0.0489 0.0649 0.196 0.202 0.000491 0.000505 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 450 48291.123 0.005 0.00342 0.0235 0.0918 0.0515 0.0697 0.144 0.183 0.00036 0.000456 ! Validation 450 48291.123 0.005 0.00332 0.0323 0.0987 0.0513 0.0687 0.172 0.214 0.000429 0.000536 Wall time: 48291.12376337871 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 451 100 0.0954 0.00334 0.0285 0.0513 0.0689 0.181 0.201 0.000453 0.000504 451 172 0.105 0.00312 0.0432 0.0495 0.0665 0.223 0.248 0.000558 0.000619 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 451 100 0.064 0.00286 0.00689 0.048 0.0637 0.0896 0.099 0.000224 0.000247 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 451 48398.187 0.005 0.00337 0.0352 0.103 0.0512 0.0692 0.183 0.224 0.000457 0.000559 ! Validation 451 48398.187 0.005 0.00321 0.0277 0.0919 0.0505 0.0675 0.154 0.199 0.000385 0.000496 Wall time: 48398.18752044393 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 452 100 0.12 0.00337 0.0526 0.051 0.0692 0.256 0.273 0.000641 0.000684 452 172 0.159 0.00357 0.0873 0.0526 0.0712 0.341 0.352 0.000852 0.00088 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 452 100 0.0568 0.00282 0.000401 0.0477 0.0633 0.017 0.0239 4.24e-05 5.97e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 452 48505.266 0.005 0.00331 0.0353 0.101 0.0507 0.0686 0.182 0.224 0.000454 0.00056 ! Validation 452 48505.266 0.005 0.00317 0.036 0.0995 0.0502 0.0672 0.182 0.226 0.000456 0.000565 Wall time: 48505.26662762463 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 453 100 0.0966 0.00375 0.0215 0.0547 0.073 0.136 0.175 0.00034 0.000437 453 172 0.076 0.00335 0.00911 0.051 0.069 0.0813 0.114 0.000203 0.000284 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 453 100 0.125 0.00286 0.0676 0.048 0.0638 0.306 0.31 0.000765 0.000775 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 453 48612.322 0.005 0.00338 0.0542 0.122 0.0513 0.0693 0.214 0.278 0.000535 0.000694 ! 
Validation 453 48612.322 0.005 0.0032 0.0807 0.145 0.0505 0.0674 0.306 0.339 0.000766 0.000847 Wall time: 48612.32220968092 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 454 100 0.0798 0.00328 0.0142 0.0508 0.0683 0.119 0.142 0.000298 0.000355 454 172 0.125 0.00314 0.0618 0.0501 0.0668 0.278 0.296 0.000696 0.000741 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 454 100 0.0615 0.00302 0.00113 0.0497 0.0655 0.0383 0.04 9.58e-05 0.0001 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 454 48719.370 0.005 0.00324 0.0352 0.1 0.0502 0.0679 0.176 0.224 0.00044 0.000559 ! Validation 454 48719.370 0.005 0.00332 0.0449 0.111 0.0518 0.0687 0.21 0.253 0.000526 0.000631 Wall time: 48719.37053416204 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 455 100 0.0689 0.00301 0.00861 0.0481 0.0654 0.0876 0.111 0.000219 0.000277 455 172 0.0947 0.00313 0.0321 0.0494 0.0667 0.171 0.213 0.000428 0.000534 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 455 100 0.0907 0.00266 0.0375 0.0462 0.0615 0.226 0.231 0.000564 0.000577 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 455 48826.416 0.005 0.00318 0.0206 0.0842 0.0497 0.0672 0.136 0.171 0.000339 0.000428 ! Validation 455 48826.416 0.005 0.00295 0.0643 0.123 0.0483 0.0647 0.268 0.302 0.000669 0.000756 Wall time: 48826.416313441005 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 456 100 0.103 0.00309 0.0415 0.0493 0.0663 0.224 0.243 0.00056 0.000607 456 172 0.0872 0.00342 0.0188 0.0513 0.0697 0.138 0.164 0.000345 0.000409 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 456 100 0.102 0.0028 0.0464 0.047 0.063 0.256 0.257 0.000641 0.000642 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 456 48933.452 0.005 0.00332 0.0484 0.115 0.0509 0.0687 0.203 0.262 0.000508 0.000656 ! Validation 456 48933.452 0.005 0.00315 0.0214 0.0843 0.05 0.0669 0.138 0.174 0.000345 0.000436 Wall time: 48933.45256081084 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 457 100 0.0839 0.00293 0.0253 0.0485 0.0645 0.167 0.19 0.000417 0.000474 457 172 0.075 0.00287 0.0176 0.0469 0.0639 0.127 0.158 0.000319 0.000395 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 457 100 0.0579 0.00262 0.00539 0.0458 0.0611 0.0833 0.0875 0.000208 0.000219 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 457 49040.945 0.005 0.00315 0.0315 0.0945 0.0495 0.0669 0.17 0.212 0.000425 0.000529 ! Validation 457 49040.945 0.005 0.00291 0.0517 0.11 0.048 0.0643 0.232 0.271 0.000581 0.000678 Wall time: 49040.94582961593 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 458 100 0.086 0.00378 0.0105 0.055 0.0733 0.0919 0.122 0.00023 0.000305 458 172 0.147 0.00447 0.058 0.0593 0.0797 0.259 0.287 0.000647 0.000718 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 458 100 0.0791 0.00361 0.00677 0.0528 0.0717 0.0827 0.0981 0.000207 0.000245 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 458 49148.054 0.005 0.00326 0.0465 0.112 0.0504 0.068 0.205 0.257 0.000512 0.000643 ! 
Validation 458 49148.054 0.005 0.00399 0.0221 0.102 0.056 0.0753 0.141 0.177 0.000352 0.000443 Wall time: 49148.054076798726 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 459 100 0.0683 0.003 0.00839 0.0483 0.0653 0.0884 0.109 0.000221 0.000273 459 172 0.0784 0.00296 0.0191 0.0485 0.0649 0.12 0.165 0.000301 0.000412 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 459 100 0.064 0.00282 0.00765 0.0477 0.0633 0.0898 0.104 0.000225 0.000261 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 459 49255.151 0.005 0.00313 0.0355 0.0981 0.0493 0.0667 0.177 0.225 0.000442 0.000561 ! Validation 459 49255.151 0.005 0.00305 0.0913 0.152 0.0492 0.0658 0.303 0.36 0.000757 0.0009 Wall time: 49255.15123973461 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 460 100 0.0697 0.00284 0.0129 0.047 0.0636 0.11 0.135 0.000275 0.000338 460 172 0.175 0.00341 0.106 0.0517 0.0696 0.372 0.389 0.000931 0.000972 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 460 100 0.226 0.00283 0.17 0.0471 0.0634 0.491 0.491 0.00123 0.00123 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 460 49362.251 0.005 0.0032 0.0393 0.103 0.05 0.0675 0.186 0.236 0.000466 0.000591 ! Validation 460 49362.251 0.005 0.00314 0.0471 0.11 0.05 0.0668 0.205 0.259 0.000513 0.000647 Wall time: 49362.25094919 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 461 100 0.0864 0.00306 0.0252 0.049 0.066 0.161 0.189 0.000403 0.000473 461 172 0.0798 0.00287 0.0225 0.0471 0.0639 0.154 0.179 0.000386 0.000447 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 461 100 0.0558 0.00258 0.00419 0.0455 0.0605 0.0709 0.0772 0.000177 0.000193 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 461 49469.356 0.005 0.00301 0.0296 0.0899 0.0484 0.0654 0.167 0.205 0.000418 0.000513 ! Validation 461 49469.356 0.005 0.00283 0.0161 0.0726 0.0474 0.0634 0.116 0.151 0.000291 0.000378 Wall time: 49469.35603707284 ! Best model 461 0.073 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 462 100 0.0754 0.00282 0.0189 0.0468 0.0633 0.139 0.164 0.000346 0.00041 462 172 0.101 0.0029 0.0431 0.0474 0.0642 0.233 0.247 0.000582 0.000619 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 462 100 0.138 0.00266 0.085 0.046 0.0615 0.347 0.348 0.000868 0.000869 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 462 49577.673 0.005 0.00334 0.0459 0.113 0.0508 0.0689 0.189 0.255 0.000472 0.000639 ! Validation 462 49577.673 0.005 0.00292 0.037 0.0955 0.0481 0.0644 0.197 0.229 0.000491 0.000574 Wall time: 49577.67313110689 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 463 100 0.0818 0.00312 0.0195 0.0496 0.0665 0.136 0.166 0.000341 0.000416 463 172 0.0746 0.00291 0.0163 0.0477 0.0644 0.134 0.152 0.000335 0.00038 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 463 100 0.0889 0.00297 0.0295 0.0493 0.065 0.203 0.205 0.000507 0.000512 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 463 49684.798 0.005 0.0031 0.0326 0.0945 0.0491 0.0663 0.163 0.215 0.000408 0.000539 ! 
Validation 463 49684.798 0.005 0.0032 0.0944 0.158 0.051 0.0675 0.336 0.366 0.000841 0.000916 Wall time: 49684.79880030407 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 464 100 0.0739 0.00307 0.0125 0.0489 0.066 0.105 0.133 0.000262 0.000334 464 172 0.12 0.00295 0.0612 0.0482 0.0648 0.275 0.295 0.000688 0.000738 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 464 100 0.274 0.00328 0.209 0.0508 0.0682 0.544 0.544 0.00136 0.00136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 464 49791.914 0.005 0.00307 0.0366 0.098 0.0489 0.066 0.183 0.228 0.000458 0.00057 ! Validation 464 49791.914 0.005 0.00339 0.109 0.177 0.0518 0.0694 0.366 0.394 0.000914 0.000984 Wall time: 49791.91440600902 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 465 100 0.114 0.0031 0.0523 0.0495 0.0664 0.248 0.273 0.000621 0.000682 465 172 0.0644 0.00277 0.00896 0.0457 0.0628 0.0932 0.113 0.000233 0.000282 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 465 100 0.0983 0.00251 0.048 0.0445 0.0598 0.26 0.261 0.00065 0.000653 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 465 49899.021 0.005 0.00299 0.0289 0.0887 0.0483 0.0652 0.16 0.203 0.000399 0.000506 ! Validation 465 49899.021 0.005 0.00271 0.0283 0.0825 0.0462 0.0621 0.167 0.201 0.000419 0.000502 Wall time: 49899.02159937285 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 466 100 0.0931 0.00289 0.0352 0.0473 0.0641 0.182 0.224 0.000454 0.000559 466 172 0.0707 0.00287 0.0132 0.0476 0.0639 0.105 0.137 0.000262 0.000343 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 466 100 0.0736 0.00266 0.0204 0.0463 0.0615 0.169 0.17 0.000422 0.000425 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 466 50006.120 0.005 0.00286 0.0256 0.0827 0.0471 0.0637 0.154 0.191 0.000384 0.000477 ! Validation 466 50006.120 0.005 0.00286 0.014 0.0713 0.0479 0.0638 0.111 0.141 0.000277 0.000352 Wall time: 50006.12027404364 ! Best model 466 0.071 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 467 100 0.0606 0.00256 0.00938 0.0447 0.0603 0.0973 0.115 0.000243 0.000289 467 172 0.0708 0.00307 0.00932 0.0492 0.0661 0.0961 0.115 0.00024 0.000288 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 467 100 0.0531 0.00263 0.000468 0.0459 0.0612 0.0216 0.0258 5.39e-05 6.45e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 467 50113.248 0.005 0.00297 0.0352 0.0946 0.0481 0.065 0.178 0.224 0.000445 0.000559 ! Validation 467 50113.248 0.005 0.00291 0.0168 0.0749 0.0482 0.0643 0.12 0.154 0.000301 0.000386 Wall time: 50113.248212567996 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 468 100 0.102 0.00396 0.0231 0.0553 0.0751 0.16 0.181 0.000401 0.000453 468 172 0.0786 0.00307 0.0173 0.0492 0.066 0.132 0.157 0.000329 0.000392 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 468 100 0.0563 0.00277 0.00095 0.0469 0.0627 0.0283 0.0367 7.08e-05 9.18e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 468 50220.701 0.005 0.00385 0.0582 0.135 0.0545 0.074 0.217 0.288 0.000541 0.000719 ! 
Validation 468 50220.701 0.005 0.00294 0.0346 0.0934 0.0484 0.0646 0.186 0.222 0.000465 0.000555 Wall time: 50220.70114515163 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 469 100 0.145 0.00279 0.0896 0.0464 0.063 0.332 0.357 0.00083 0.000892 469 172 0.0683 0.00273 0.0137 0.0461 0.0623 0.113 0.139 0.000283 0.000349 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 469 100 0.0583 0.00256 0.00713 0.0449 0.0603 0.0915 0.101 0.000229 0.000252 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 469 50327.793 0.005 0.0029 0.0253 0.0833 0.0475 0.0642 0.152 0.19 0.000381 0.000474 ! Validation 469 50327.793 0.005 0.00276 0.0376 0.0929 0.0469 0.0627 0.193 0.231 0.000484 0.000578 Wall time: 50327.793024577666 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 470 100 0.0956 0.0026 0.0436 0.0452 0.0608 0.221 0.249 0.000552 0.000623 470 172 0.0657 0.00263 0.0131 0.0449 0.0611 0.12 0.136 0.0003 0.000341 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 470 100 0.0795 0.0025 0.0295 0.0443 0.0596 0.203 0.205 0.000509 0.000512 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 470 50434.888 0.005 0.00349 0.0415 0.111 0.0519 0.0704 0.188 0.243 0.00047 0.000607 ! Validation 470 50434.888 0.005 0.00265 0.0152 0.0682 0.0458 0.0614 0.116 0.147 0.00029 0.000367 Wall time: 50434.88882963266 ! Best model 470 0.068 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 471 100 0.124 0.00268 0.0699 0.0458 0.0617 0.298 0.315 0.000746 0.000788 471 172 0.0758 0.00306 0.0146 0.0486 0.0659 0.12 0.144 0.000301 0.00036 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 471 100 0.0554 0.0026 0.00333 0.0452 0.0608 0.0552 0.0688 0.000138 0.000172 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 471 50541.991 0.005 0.00296 0.0317 0.0909 0.048 0.0649 0.167 0.212 0.000417 0.000531 ! Validation 471 50541.991 0.005 0.00276 0.0216 0.0768 0.0468 0.0627 0.14 0.175 0.000349 0.000438 Wall time: 50541.99117737496 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 472 100 0.139 0.00288 0.0815 0.0475 0.064 0.307 0.34 0.000768 0.000851 472 172 0.202 0.00344 0.133 0.0523 0.0699 0.416 0.435 0.00104 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 472 100 0.0724 0.00254 0.0217 0.0448 0.06 0.173 0.176 0.000432 0.000439 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 472 50649.154 0.005 0.00288 0.0287 0.0863 0.0474 0.064 0.16 0.202 0.0004 0.000504 ! Validation 472 50649.154 0.005 0.00274 0.0381 0.0929 0.0466 0.0624 0.191 0.233 0.000477 0.000582 Wall time: 50649.15488411207 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 473 100 0.0823 0.00263 0.0296 0.0446 0.0612 0.183 0.205 0.000459 0.000513 473 172 0.17 0.00363 0.0979 0.0537 0.0718 0.35 0.373 0.000875 0.000932 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 473 100 0.0506 0.00249 0.000786 0.0444 0.0595 0.03 0.0334 7.51e-05 8.35e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 473 50756.258 0.005 0.00292 0.0317 0.0901 0.0478 0.0644 0.169 0.212 0.000423 0.00053 ! 
Validation 473 50756.258 0.005 0.00265 0.0822 0.135 0.0459 0.0613 0.299 0.342 0.000749 0.000854 Wall time: 50756.25888715172 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 474 100 0.0608 0.00278 0.00516 0.0463 0.0629 0.0663 0.0856 0.000166 0.000214 474 172 0.0693 0.00292 0.0109 0.0478 0.0644 0.102 0.125 0.000254 0.000312 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 474 100 0.089 0.00239 0.0411 0.0434 0.0583 0.238 0.242 0.000594 0.000604 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 474 50863.372 0.005 0.00309 0.0311 0.0929 0.049 0.0662 0.166 0.21 0.000415 0.000526 ! Validation 474 50863.372 0.005 0.00258 0.0183 0.0699 0.0451 0.0605 0.128 0.161 0.000319 0.000403 Wall time: 50863.372002739925 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 475 100 0.0776 0.0033 0.0116 0.0512 0.0685 0.101 0.128 0.000253 0.00032 475 172 23 0.9 5.02 0.843 1.13 2.11 2.67 0.00527 0.00668 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 475 100 17.5 0.856 0.339 0.818 1.1 0.587 0.694 0.00147 0.00173 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 475 50970.545 0.005 0.26 9.65 14.9 0.273 0.608 1.46 3.7 0.00364 0.00926 ! Validation 475 50970.545 0.005 0.905 2.17 20.3 0.843 1.13 1.38 1.75 0.00345 0.00439 Wall time: 50970.54545372771 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 476 100 5.51 0.163 2.25 0.367 0.482 1.65 1.79 0.00414 0.00447 476 172 2.51 0.118 0.146 0.311 0.41 0.345 0.455 0.000863 0.00114 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 476 100 2.43 0.11 0.224 0.302 0.396 0.539 0.565 0.00135 0.00141 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 476 51077.656 0.005 0.32 1.83 8.23 0.474 0.675 1.22 1.61 0.00306 0.00403 ! Validation 476 51077.656 0.005 0.115 0.4 2.71 0.307 0.405 0.603 0.754 0.00151 0.00189 Wall time: 51077.656210407615 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 477 100 1.92 0.0857 0.208 0.262 0.349 0.434 0.543 0.00108 0.00136 477 172 1.87 0.0799 0.275 0.252 0.337 0.532 0.625 0.00133 0.00156 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 477 100 1.92 0.0739 0.439 0.244 0.324 0.765 0.79 0.00191 0.00197 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 477 51185.663 0.005 0.0933 0.746 2.61 0.274 0.364 0.808 1.03 0.00202 0.00257 ! Validation 477 51185.663 0.005 0.0773 0.286 1.83 0.249 0.331 0.515 0.638 0.00129 0.00159 Wall time: 51185.663604648784 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 478 100 1.35 0.0609 0.129 0.22 0.294 0.346 0.429 0.000865 0.00107 478 172 1.3 0.0568 0.159 0.213 0.284 0.368 0.475 0.00092 0.00119 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 478 100 1.22 0.0557 0.109 0.212 0.281 0.348 0.393 0.00087 0.000983 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 478 51292.760 0.005 0.0622 0.531 1.78 0.223 0.297 0.672 0.869 0.00168 0.00217 ! 
Validation 478 51292.760 0.005 0.056 0.387 1.51 0.212 0.282 0.609 0.742 0.00152 0.00185 Wall time: 51292.760752792936 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 479 100 1.56 0.0386 0.787 0.176 0.234 0.985 1.06 0.00246 0.00264 479 172 0.81 0.036 0.09 0.17 0.226 0.283 0.358 0.000708 0.000894 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 479 100 0.723 0.0344 0.0347 0.165 0.221 0.174 0.222 0.000435 0.000555 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 479 51400.865 0.005 0.0425 0.48 1.33 0.184 0.246 0.654 0.827 0.00164 0.00207 ! Validation 479 51400.865 0.005 0.0348 0.126 0.823 0.167 0.223 0.33 0.423 0.000825 0.00106 Wall time: 51400.86502249865 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 480 100 0.758 0.0266 0.226 0.147 0.194 0.457 0.566 0.00114 0.00142 480 172 0.864 0.0265 0.334 0.146 0.194 0.649 0.689 0.00162 0.00172 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 480 100 0.696 0.0249 0.198 0.14 0.188 0.515 0.53 0.00129 0.00133 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 480 51507.979 0.005 0.0293 0.221 0.807 0.153 0.204 0.444 0.561 0.00111 0.0014 ! Validation 480 51507.979 0.005 0.0256 0.222 0.733 0.143 0.191 0.48 0.561 0.0012 0.0014 Wall time: 51507.97974972194 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 481 100 1.12 0.0238 0.642 0.138 0.184 0.9 0.956 0.00225 0.00239 481 172 1.15 0.0234 0.686 0.136 0.182 0.924 0.988 0.00231 0.00247 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 481 100 0.454 0.0216 0.0219 0.13 0.175 0.136 0.176 0.00034 0.000441 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 481 51615.104 0.005 0.0237 0.267 0.741 0.138 0.183 0.502 0.616 0.00125 0.00154 ! Validation 481 51615.104 0.005 0.0221 0.105 0.546 0.133 0.177 0.308 0.386 0.000771 0.000965 Wall time: 51615.103993171826 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 482 100 0.44 0.0192 0.0567 0.124 0.165 0.219 0.284 0.000547 0.00071 482 172 0.496 0.0202 0.093 0.128 0.169 0.311 0.364 0.000778 0.000909 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 482 100 0.623 0.019 0.243 0.123 0.164 0.574 0.587 0.00144 0.00147 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 482 51722.220 0.005 0.0206 0.246 0.657 0.128 0.171 0.472 0.591 0.00118 0.00148 ! Validation 482 51722.220 0.005 0.0193 0.3 0.687 0.125 0.166 0.579 0.653 0.00145 0.00163 Wall time: 51722.22005725093 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 483 100 0.432 0.0179 0.0743 0.12 0.159 0.262 0.325 0.000656 0.000813 483 172 0.66 0.0181 0.297 0.12 0.161 0.612 0.65 0.00153 0.00162 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 483 100 0.857 0.0164 0.528 0.114 0.153 0.858 0.866 0.00214 0.00217 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 483 51829.336 0.005 0.018 0.203 0.563 0.12 0.16 0.422 0.537 0.00106 0.00134 ! 
Validation 483 51829.336 0.005 0.0169 0.423 0.76 0.116 0.155 0.716 0.775 0.00179 0.00194 Wall time: 51829.33685464179 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 484 100 0.623 0.0148 0.327 0.109 0.145 0.644 0.682 0.00161 0.0017 484 172 0.36 0.0156 0.049 0.112 0.149 0.205 0.264 0.000511 0.00066 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 484 100 0.348 0.0144 0.0608 0.107 0.143 0.27 0.294 0.000674 0.000735 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 484 51936.455 0.005 0.0161 0.213 0.535 0.114 0.151 0.448 0.55 0.00112 0.00138 ! Validation 484 51936.455 0.005 0.0149 0.244 0.542 0.11 0.146 0.518 0.589 0.00129 0.00147 Wall time: 51936.45578803774 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 485 100 0.403 0.0141 0.121 0.107 0.142 0.371 0.414 0.000928 0.00103 485 172 0.314 0.0134 0.046 0.103 0.138 0.22 0.256 0.00055 0.000639 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 485 100 0.452 0.0125 0.202 0.0997 0.133 0.527 0.536 0.00132 0.00134 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 485 52043.748 0.005 0.0142 0.154 0.438 0.107 0.142 0.377 0.468 0.000943 0.00117 ! Validation 485 52043.748 0.005 0.013 0.171 0.431 0.103 0.136 0.438 0.492 0.00109 0.00123 Wall time: 52043.7487968998 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 486 100 0.271 0.0121 0.029 0.0984 0.131 0.164 0.203 0.000411 0.000508 486 172 0.482 0.0129 0.224 0.102 0.135 0.491 0.565 0.00123 0.00141 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 486 100 0.267 0.0115 0.0373 0.0959 0.128 0.208 0.23 0.000519 0.000576 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 486 52150.849 0.005 0.0128 0.19 0.446 0.101 0.135 0.426 0.52 0.00106 0.0013 ! Validation 486 52150.849 0.005 0.0119 0.13 0.369 0.0983 0.13 0.35 0.43 0.000875 0.00107 Wall time: 52150.849396193866 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 487 100 0.276 0.0117 0.0424 0.0974 0.129 0.203 0.245 0.000507 0.000613 487 172 0.363 0.0107 0.149 0.0927 0.123 0.405 0.46 0.00101 0.00115 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 487 100 0.216 0.0101 0.014 0.0901 0.12 0.108 0.141 0.00027 0.000352 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 487 52257.962 0.005 0.0114 0.124 0.352 0.0957 0.127 0.334 0.42 0.000834 0.00105 ! Validation 487 52257.962 0.005 0.0106 0.042 0.253 0.0926 0.123 0.195 0.244 0.000488 0.000611 Wall time: 52257.962794614956 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 488 100 0.245 0.0108 0.0282 0.093 0.124 0.155 0.2 0.000386 0.0005 488 172 0.375 0.011 0.154 0.0936 0.125 0.394 0.469 0.000984 0.00117 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 488 100 0.463 0.00988 0.266 0.0893 0.118 0.606 0.614 0.00152 0.00154 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 488 52365.177 0.005 0.0105 0.176 0.386 0.0918 0.122 0.401 0.5 0.001 0.00125 ! 
Validation 488 52365.177 0.005 0.0103 0.26 0.467 0.0915 0.121 0.536 0.608 0.00134 0.00152 Wall time: 52365.17771749478 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 489 100 0.268 0.00885 0.0906 0.0843 0.112 0.299 0.359 0.000748 0.000897 489 172 0.407 0.00999 0.207 0.0879 0.119 0.488 0.542 0.00122 0.00136 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 489 100 0.397 0.00823 0.233 0.0814 0.108 0.57 0.575 0.00143 0.00144 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 489 52472.309 0.005 0.00954 0.0973 0.288 0.0875 0.116 0.29 0.372 0.000725 0.000929 ! Validation 489 52472.309 0.005 0.00882 0.2 0.376 0.0845 0.112 0.484 0.533 0.00121 0.00133 Wall time: 52472.309840634 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 490 100 0.188 0.00865 0.015 0.0827 0.111 0.12 0.146 0.000301 0.000365 490 172 0.204 0.00757 0.0529 0.0778 0.104 0.243 0.274 0.000607 0.000686 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 490 100 0.157 0.0074 0.00889 0.0767 0.103 0.0767 0.112 0.000192 0.000281 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 490 52579.422 0.005 0.00864 0.0915 0.264 0.0832 0.111 0.287 0.361 0.000718 0.000902 ! Validation 490 52579.422 0.005 0.00801 0.0431 0.203 0.0804 0.107 0.201 0.248 0.000502 0.000619 Wall time: 52579.42228896497 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 491 100 0.299 0.0081 0.137 0.0802 0.107 0.409 0.441 0.00102 0.0011 491 172 0.187 0.00756 0.0357 0.0779 0.104 0.199 0.225 0.000497 0.000563 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 491 100 0.155 0.00712 0.0121 0.0757 0.101 0.112 0.131 0.000281 0.000328 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 491 52686.466 0.005 0.00828 0.155 0.321 0.0814 0.108 0.385 0.47 0.000963 0.00117 ! Validation 491 52686.466 0.005 0.00771 0.0335 0.188 0.0789 0.105 0.173 0.218 0.000432 0.000546 Wall time: 52686.46670473786 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 492 100 0.176 0.00754 0.0256 0.0771 0.104 0.141 0.191 0.000351 0.000477 492 172 0.19 0.00745 0.041 0.0773 0.103 0.2 0.242 0.000501 0.000604 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 492 100 0.43 0.00642 0.302 0.0719 0.0955 0.652 0.655 0.00163 0.00164 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 492 52793.803 0.005 0.00755 0.079 0.23 0.0775 0.104 0.268 0.335 0.000671 0.000838 ! Validation 492 52793.803 0.005 0.00704 0.221 0.362 0.0753 0.1 0.522 0.561 0.0013 0.0014 Wall time: 52793.80344969267 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 493 100 0.222 0.007 0.0823 0.075 0.0998 0.309 0.342 0.000772 0.000855 493 172 0.211 0.0069 0.0727 0.0743 0.099 0.289 0.321 0.000723 0.000804 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 493 100 0.244 0.00625 0.119 0.0708 0.0943 0.407 0.411 0.00102 0.00103 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 493 52900.846 0.005 0.00726 0.121 0.266 0.076 0.102 0.326 0.414 0.000815 0.00104 ! 
Validation 493 52900.846 0.005 0.00675 0.0534 0.188 0.0737 0.098 0.229 0.275 0.000573 0.000689 Wall time: 52900.84682899667 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 494 100 0.51 0.00754 0.359 0.0778 0.104 0.676 0.714 0.00169 0.00179 494 172 0.227 0.0069 0.0886 0.074 0.099 0.301 0.355 0.000753 0.000887 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 494 100 0.175 0.00602 0.0549 0.0694 0.0925 0.273 0.279 0.000682 0.000699 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 494 53007.890 0.005 0.00696 0.134 0.273 0.0744 0.0994 0.343 0.437 0.000856 0.00109 ! Validation 494 53007.890 0.005 0.00651 0.06 0.19 0.0724 0.0962 0.246 0.292 0.000614 0.00073 Wall time: 53007.890431122854 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 495 100 0.153 0.00638 0.0252 0.0709 0.0952 0.139 0.189 0.000349 0.000473 495 172 0.172 0.00592 0.0532 0.0686 0.0917 0.247 0.275 0.000617 0.000687 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 495 100 0.394 0.00556 0.283 0.0665 0.0889 0.634 0.634 0.00158 0.00159 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 495 53114.937 0.005 0.00642 0.0564 0.185 0.0713 0.0955 0.228 0.283 0.000571 0.000708 ! Validation 495 53114.937 0.005 0.00605 0.221 0.342 0.0696 0.0927 0.526 0.561 0.00132 0.0014 Wall time: 53114.93703842396 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 496 100 0.203 0.00648 0.0739 0.0706 0.0959 0.279 0.324 0.000697 0.00081 496 172 0.206 0.0066 0.074 0.072 0.0969 0.297 0.324 0.000743 0.000811 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 496 100 0.153 0.00561 0.0406 0.0669 0.0893 0.212 0.24 0.000531 0.000601 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 496 53221.977 0.005 0.00619 0.0984 0.222 0.07 0.0938 0.306 0.374 0.000766 0.000935 ! Validation 496 53221.977 0.005 0.00604 0.0409 0.162 0.0696 0.0926 0.191 0.241 0.000477 0.000603 Wall time: 53221.977764728945 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 497 100 0.282 0.00619 0.159 0.0698 0.0938 0.434 0.475 0.00108 0.00119 497 172 0.153 0.00621 0.0291 0.07 0.0939 0.166 0.203 0.000416 0.000509 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 497 100 0.147 0.00532 0.041 0.0654 0.087 0.239 0.241 0.000597 0.000603 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 497 53329.033 0.005 0.006 0.0979 0.218 0.0688 0.0923 0.301 0.373 0.000752 0.000933 ! Validation 497 53329.033 0.005 0.00568 0.116 0.23 0.0675 0.0899 0.351 0.407 0.000877 0.00102 Wall time: 53329.033856109716 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 498 100 0.199 0.00586 0.0815 0.0682 0.0912 0.301 0.34 0.000753 0.000851 498 172 0.174 0.00543 0.0657 0.0655 0.0878 0.273 0.306 0.000683 0.000764 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 498 100 0.105 0.00492 0.00692 0.0622 0.0836 0.0874 0.0992 0.000219 0.000248 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 498 53436.282 0.005 0.00573 0.0577 0.172 0.0672 0.0903 0.227 0.286 0.000567 0.000716 ! 
Validation 498 53436.282 0.005 0.00531 0.0257 0.132 0.065 0.0869 0.15 0.191 0.000375 0.000478
Wall time: 53436.28278358886
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
499 100 0.169 0.00549 0.0589 0.0657 0.0883 0.241 0.289 0.000603 0.000723
499 172 0.22 0.0059 0.102 0.0679 0.0915 0.336 0.38 0.000839 0.00095
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
499 100 0.109 0.00491 0.0112 0.0622 0.0835 0.103 0.126 0.000258 0.000316
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 499 53543.371 0.005 0.00545 0.0718 0.181 0.0655 0.088 0.257 0.319 0.000644 0.000798
! Validation 499 53543.371 0.005 0.00526 0.0224 0.128 0.0649 0.0865 0.144 0.178 0.000359 0.000446
Wall time: 53543.37172591593
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
500 100 0.135 0.0055 0.0253 0.066 0.0884 0.154 0.19 0.000385 0.000474
500 172 0.155 0.00535 0.0478 0.0648 0.0872 0.228 0.261 0.000569 0.000652
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
500 100 0.12 0.0046 0.0279 0.0602 0.0809 0.192 0.199 0.00048 0.000498
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 500 53650.469 0.005 0.00534 0.0669 0.174 0.0648 0.0871 0.252 0.308 0.000629 0.000771
! Validation 500 53650.469 0.005 0.00496 0.0278 0.127 0.0628 0.0839 0.162 0.199 0.000405 0.000497
Wall time: 53650.4692227277
! Stop training: max epochs
Wall time: 53650.51114022778
Cumulative wall time: 53650.51114022778
Using device: cuda
Please note that _all_ machine learning models running on CUDA hardware are generally somewhat nondeterministic and that this can manifest in small, generally unimportant variation in the final test errors.
Loading model...
loaded model
Loading dataset...
Processing dataset...
Done!
Loaded dataset specified in test_config.yaml.
Using all frames from the specified test dataset, yielding a test set size of 500 frames.
Starting...
--- Final result: ---
f_mae = 0.047247
f_rmse = 0.063321
e_mae = 0.103222
e_rmse = 0.138615
e/N_mae = 0.000258
e/N_rmse = 0.000347
Train end time: 2024-12-09_01:55:06
Training duration: 14h 57m 55s
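
For context on the final test block above: f_mae/f_rmse are force errors, e_mae/e_rmse are total-energy errors, and the e/N values are the same energy errors normalized by the number of atoms in each frame. The following is a minimal sketch of that bookkeeping in NumPy; the array names, shapes, and the component-wise force convention are assumptions for illustration, not taken from the evaluation code that produced this log.

    import numpy as np

    def mae(pred, ref):
        # mean absolute error over all array elements
        return float(np.mean(np.abs(pred - ref)))

    def rmse(pred, ref):
        # root-mean-square error over all array elements
        return float(np.sqrt(np.mean((pred - ref) ** 2)))

    # Hypothetical arrays collected over the 500 test frames (names assumed):
    #   e_pred, e_ref : (n_frames,)         predicted / reference total energies
    #   f_pred, f_ref : (n_atoms_total, 3)  predicted / reference forces, all frames stacked
    #   n_atoms       : (n_frames,)         atom count of each frame
    def report_metrics(e_pred, e_ref, f_pred, f_ref, n_atoms):
        print("f_mae    =", mae(f_pred, f_ref))
        print("f_rmse   =", rmse(f_pred, f_ref))
        print("e_mae    =", mae(e_pred, e_ref))
        print("e_rmse   =", rmse(e_pred, e_ref))
        # per-atom energy errors: divide each frame's energy by its atom count first
        print("e/N_mae  =", mae(e_pred / n_atoms, e_ref / n_atoms))
        print("e/N_rmse =", rmse(e_pred / n_atoms, e_ref / n_atoms))

Under these conventions the reported numbers are mutually consistent: the ratio e_mae / e/N_mae is roughly 400, suggesting the test frames each contain about 400 atoms, which is the same normalization the e/N columns use throughout the log.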
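
On the nondeterminism note printed before the evaluation: CUDA reductions are not guaranteed to be bit-reproducible, so repeated test runs can differ in the last digits of these errors. If exact reproducibility matters more than speed, a common mitigation in a generic PyTorch workflow looks roughly like the sketch below, run before any CUDA work; whether the training and evaluation scripts used here expose such a switch is not something this log shows, so treat it purely as an illustration. Evaluating on the CPU is the other usual option.

    import os
    import random
    import numpy as np
    import torch

    def make_more_deterministic(seed: int = 0) -> None:
        # Seed the RNGs that typically drive shuffling and weight initialization.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Must be set before the first cuBLAS call for deterministic GEMMs.
        os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
        # Prefer deterministic kernels; PyTorch raises if an op has no deterministic variant.
        torch.use_deterministic_algorithms(True)
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False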