Train start time: 2024-12-08_10:34:37
Torch device: cuda
Processing dataset...
Loaded data: Batch(atomic_numbers=[3200000, 1], batch=[3200000], cell=[8000, 3, 3], edge_cell_shift=[110025848, 3], edge_index=[2, 110025848], forces=[3200000, 3], pbc=[8000, 3], pos=[3200000, 3], ptr=[8001], total_energy=[8000, 1])
processed data size: ~4393.16 MB
Cached processed data to disk
Done!
Successfully loaded the data set of type ASEDataset(8000)...
Replace string dataset_per_atom_total_energy_mean to -347.4158314169961
Atomic outputs are scaled by: [H, C, N, O, Zn: None], shifted by [H, C, N, O, Zn: -347.415831].
Replace string dataset_forces_rms to 1.1935424919787043
Initially outputs are globally scaled by: 1.1935424919787043, total_energy are globally shifted by None.
Successfully built the network...
Number of weights: 363624
Number of trainable weights: 363624
! Starting training ...
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
0 100 28.7 1.02 8.37 0.896 1.2 3.39 3.45 0.00848 0.00863
Initialization
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Initial Validation 0 6.358 0.005 0.993 4.42 24.3 0.884 1.19 2.02 2.51 0.00506 0.00628
Wall time: 6.3580403393134475
! Best model 0 24.288
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
1 100 24.7 0.996 4.77 0.884 1.19 2.06 2.61 0.00514 0.00652
1 118 22.4 0.972 2.93 0.881 1.18 1.57 2.04 0.00391 0.00511
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
1 100 29 1.01 8.7 0.894 1.2 3.46 3.52 0.00866 0.0088
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 1 174.389 0.005 0.999 4.88 24.9 0.887 1.19 2.12 2.64 0.00531 0.0066
! Validation 1 174.389 0.005 0.99 4.08 23.9 0.883 1.19 1.94 2.41 0.00485 0.00603
Wall time: 174.39011605409905
! Best model 1 23.881
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
2 100 10.6 0.47 1.24 0.616 0.818 1.14 1.33 0.00285 0.00332
2 118 9.24 0.401 1.21 0.57 0.756 0.895 1.31 0.00224 0.00328
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
2 100 11.1 0.392 3.25 0.556 0.747 2.11 2.15 0.00527 0.00538
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 2 341.830 0.005 0.753 2.41 17.5 0.764 1.04 1.43 1.86 0.00358 0.00464
! Validation 2 341.830 0.005 0.393 4.1 12 0.561 0.748 2.21 2.42 0.00551 0.00604
Wall time: 341.83007658226416
! Best model 2 11.957
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
3 100 5.96 0.25 0.955 0.448 0.597 0.857 1.17 0.00214 0.00292
3 118 6.12 0.246 1.2 0.441 0.592 1.12 1.31 0.00281 0.00326
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
3 100 4.75 0.234 0.0671 0.425 0.578 0.225 0.309 0.000562 0.000773
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 3 509.367 0.005 0.292 1.31 7.16 0.482 0.646 1.08 1.37 0.00269 0.00342
! Validation 3 509.367 0.005 0.237 0.808 5.54 0.435 0.581 0.788 1.07 0.00197 0.00268
Wall time: 509.3696379433386
!
Best model 3 5.541 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 4 100 5.7 0.194 1.83 0.393 0.525 1.4 1.61 0.0035 0.00403 4 118 4.16 0.186 0.444 0.387 0.514 0.592 0.796 0.00148 0.00199 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 4 100 3.5 0.173 0.038 0.367 0.497 0.187 0.233 0.000468 0.000582 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 4 676.806 0.005 0.201 1.08 5.11 0.402 0.536 0.987 1.24 0.00247 0.00311 ! Validation 4 676.806 0.005 0.181 0.626 4.25 0.38 0.508 0.69 0.944 0.00173 0.00236 Wall time: 676.8062033960596 ! Best model 4 4.251 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 5 100 3.43 0.15 0.425 0.345 0.462 0.587 0.778 0.00147 0.00195 5 118 3.49 0.148 0.541 0.341 0.459 0.765 0.878 0.00191 0.00219 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 5 100 2.81 0.137 0.0794 0.326 0.441 0.297 0.336 0.000743 0.000841 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 5 844.237 0.005 0.16 0.744 3.94 0.357 0.477 0.821 1.03 0.00205 0.00258 ! Validation 5 844.237 0.005 0.147 0.951 3.9 0.341 0.458 0.926 1.16 0.00231 0.00291 Wall time: 844.2370626013726 ! Best model 5 3.896 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 6 100 3.69 0.127 1.14 0.318 0.426 1.1 1.28 0.00276 0.00319 6 118 3.58 0.131 0.96 0.323 0.432 1 1.17 0.0025 0.00292 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 6 100 4.32 0.119 1.94 0.303 0.412 1.66 1.66 0.00414 0.00416 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 6 1012.072 0.005 0.136 0.806 3.52 0.328 0.44 0.858 1.07 0.00214 0.00268 ! Validation 6 1012.072 0.005 0.129 1.21 3.79 0.318 0.429 1.17 1.31 0.00292 0.00328 Wall time: 1012.0729714543559 ! Best model 6 3.791 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 7 100 2.58 0.117 0.242 0.303 0.408 0.464 0.587 0.00116 0.00147 7 118 2.94 0.114 0.657 0.299 0.403 0.894 0.967 0.00224 0.00242 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 7 100 2.52 0.103 0.464 0.283 0.383 0.8 0.813 0.002 0.00203 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 7 1179.539 0.005 0.121 0.7 3.11 0.308 0.415 0.784 0.999 0.00196 0.0025 ! Validation 7 1179.539 0.005 0.115 0.362 2.66 0.3 0.405 0.582 0.718 0.00146 0.00179 Wall time: 1179.5396996671334 ! Best model 7 2.664 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 8 100 2.75 0.106 0.637 0.288 0.388 0.813 0.953 0.00203 0.00238 8 118 2.71 0.0994 0.717 0.281 0.376 0.769 1.01 0.00192 0.00253 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 8 100 1.88 0.0933 0.0173 0.269 0.365 0.105 0.157 0.000262 0.000393 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 8 1347.121 0.005 0.108 0.637 2.79 0.291 0.392 0.765 0.952 0.00191 0.00238 ! 
Validation 8 1347.121 0.005 0.105 0.686 2.79 0.287 0.387 0.792 0.989 0.00198 0.00247 Wall time: 1347.12112014601 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 9 100 2.12 0.0943 0.234 0.272 0.367 0.45 0.577 0.00112 0.00144 9 118 2.33 0.0942 0.451 0.272 0.366 0.678 0.802 0.0017 0.002 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 9 100 2.36 0.0858 0.645 0.258 0.35 0.95 0.959 0.00238 0.0024 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 9 1514.584 0.005 0.1 0.652 2.66 0.281 0.378 0.775 0.965 0.00194 0.00241 ! Validation 9 1514.584 0.005 0.0978 0.345 2.3 0.277 0.373 0.579 0.701 0.00145 0.00175 Wall time: 1514.5844786730595 ! Best model 9 2.300 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 10 100 2.1 0.0913 0.273 0.268 0.361 0.484 0.623 0.00121 0.00156 10 118 2.51 0.0942 0.625 0.272 0.366 0.838 0.943 0.00209 0.00236 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 10 100 1.89 0.0803 0.282 0.25 0.338 0.621 0.634 0.00155 0.00159 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 10 1682.217 0.005 0.0928 0.532 2.39 0.27 0.363 0.697 0.87 0.00174 0.00218 ! Validation 10 1682.217 0.005 0.0923 0.258 2.1 0.269 0.363 0.481 0.606 0.0012 0.00151 Wall time: 1682.2171152411029 ! Best model 10 2.105 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 11 100 2.74 0.0851 1.04 0.261 0.348 1.11 1.22 0.00278 0.00304 11 118 2.36 0.0878 0.608 0.261 0.354 0.658 0.93 0.00165 0.00233 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 11 100 1.52 0.0753 0.0117 0.242 0.328 0.0961 0.129 0.00024 0.000322 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 11 1851.670 0.005 0.0875 0.617 2.37 0.263 0.353 0.768 0.938 0.00192 0.00234 ! Validation 11 1851.670 0.005 0.0868 0.533 2.27 0.261 0.352 0.69 0.871 0.00172 0.00218 Wall time: 1851.6707933051512 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 12 100 2.09 0.0828 0.437 0.256 0.343 0.676 0.789 0.00169 0.00197 12 118 1.85 0.0837 0.176 0.259 0.345 0.377 0.5 0.000942 0.00125 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 12 100 1.87 0.0703 0.464 0.235 0.316 0.802 0.813 0.002 0.00203 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 12 2019.796 0.005 0.0836 0.549 2.22 0.257 0.345 0.696 0.887 0.00174 0.00222 ! Validation 12 2019.796 0.005 0.0818 0.285 1.92 0.254 0.341 0.528 0.638 0.00132 0.00159 Wall time: 2019.79748495901 ! Best model 12 1.921 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 13 100 3.29 0.0767 1.75 0.246 0.331 1.49 1.58 0.00372 0.00395 13 118 2.35 0.0797 0.754 0.249 0.337 0.968 1.04 0.00242 0.00259 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 13 100 1.4 0.0685 0.0314 0.231 0.312 0.168 0.211 0.000421 0.000528 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 13 2187.118 0.005 0.0786 0.558 2.13 0.249 0.335 0.724 0.89 0.00181 0.00223 ! 
Validation 13 2187.118 0.005 0.0791 0.562 2.14 0.25 0.336 0.739 0.895 0.00185 0.00224 Wall time: 2187.118289224338 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 14 100 1.59 0.0722 0.145 0.24 0.321 0.354 0.454 0.000886 0.00114 14 118 1.56 0.07 0.159 0.235 0.316 0.401 0.476 0.001 0.00119 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 14 100 1.55 0.0637 0.278 0.223 0.301 0.616 0.629 0.00154 0.00157 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 14 2354.420 0.005 0.0751 0.508 2.01 0.244 0.327 0.69 0.853 0.00173 0.00213 ! Validation 14 2354.420 0.005 0.0749 1.25 2.75 0.243 0.327 1.22 1.34 0.00304 0.00334 Wall time: 2354.420513842255 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 15 100 1.69 0.0702 0.288 0.236 0.316 0.504 0.641 0.00126 0.0016 15 118 1.87 0.068 0.511 0.232 0.311 0.737 0.853 0.00184 0.00213 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 15 100 1.6 0.0594 0.411 0.217 0.291 0.754 0.765 0.00189 0.00191 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 15 2521.697 0.005 0.0715 0.42 1.85 0.238 0.319 0.618 0.773 0.00154 0.00193 ! Validation 15 2521.697 0.005 0.0705 0.246 1.66 0.236 0.317 0.481 0.592 0.0012 0.00148 Wall time: 2521.6971200071275 ! Best model 15 1.656 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 16 100 1.64 0.0687 0.265 0.234 0.313 0.499 0.615 0.00125 0.00154 16 118 1.53 0.0656 0.221 0.227 0.306 0.474 0.561 0.00119 0.0014 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 16 100 1.58 0.0578 0.427 0.214 0.287 0.769 0.78 0.00192 0.00195 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 16 2689.018 0.005 0.0686 0.512 1.88 0.233 0.313 0.691 0.856 0.00173 0.00214 ! Validation 16 2689.018 0.005 0.0679 0.252 1.61 0.232 0.311 0.49 0.599 0.00123 0.0015 Wall time: 2689.0184356682003 ! Best model 16 1.609 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 17 100 2.03 0.0655 0.724 0.228 0.305 0.894 1.02 0.00224 0.00254 17 118 1.57 0.0665 0.242 0.231 0.308 0.484 0.587 0.00121 0.00147 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 17 100 1.42 0.0553 0.311 0.209 0.281 0.654 0.666 0.00164 0.00166 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 17 2856.605 0.005 0.0651 0.438 1.74 0.228 0.305 0.643 0.791 0.00161 0.00198 ! Validation 17 2856.605 0.005 0.0652 0.196 1.5 0.227 0.305 0.426 0.528 0.00107 0.00132 Wall time: 2856.6054014242254 ! Best model 17 1.499 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 18 100 1.57 0.0626 0.313 0.224 0.299 0.555 0.668 0.00139 0.00167 18 118 1.47 0.06 0.274 0.219 0.292 0.532 0.625 0.00133 0.00156 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 18 100 1.4 0.053 0.344 0.205 0.275 0.691 0.7 0.00173 0.00175 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 18 3024.126 0.005 0.063 0.456 1.71 0.224 0.3 0.655 0.807 0.00164 0.00202 ! Validation 18 3024.126 0.005 0.0625 0.205 1.46 0.223 0.298 0.438 0.54 0.0011 0.00135 Wall time: 3024.126394457184 ! 
Best model 18 1.456 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 19 100 1.91 0.0588 0.73 0.217 0.289 0.884 1.02 0.00221 0.00255 19 118 1.59 0.0632 0.323 0.224 0.3 0.518 0.678 0.0013 0.0017 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 19 100 2.4 0.0514 1.37 0.202 0.271 1.39 1.4 0.00348 0.00349 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 19 3191.427 0.005 0.0607 0.511 1.72 0.22 0.294 0.701 0.854 0.00175 0.00214 ! Validation 19 3191.427 0.005 0.0606 0.733 1.95 0.219 0.294 0.917 1.02 0.00229 0.00256 Wall time: 3191.4276354052126 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 20 100 2.03 0.0586 0.855 0.216 0.289 1.01 1.1 0.00252 0.00276 20 118 1.52 0.0593 0.336 0.217 0.291 0.556 0.692 0.00139 0.00173 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 20 100 1.12 0.0483 0.153 0.197 0.262 0.451 0.467 0.00113 0.00117 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 20 3358.732 0.005 0.058 0.34 1.5 0.215 0.287 0.555 0.696 0.00139 0.00174 ! Validation 20 3358.732 0.005 0.0579 0.707 1.86 0.215 0.287 0.895 1 0.00224 0.00251 Wall time: 3358.732003442012 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 21 100 1.8 0.0547 0.704 0.21 0.279 0.902 1 0.00226 0.0025 21 118 1.35 0.0556 0.237 0.21 0.281 0.47 0.581 0.00117 0.00145 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 21 100 1 0.0469 0.0609 0.194 0.259 0.268 0.295 0.00067 0.000737 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 21 3526.019 0.005 0.0564 0.513 1.64 0.212 0.283 0.701 0.857 0.00175 0.00214 ! Validation 21 3526.019 0.005 0.0563 0.198 1.32 0.212 0.283 0.414 0.531 0.00104 0.00133 Wall time: 3526.0196879622526 ! Best model 21 1.324 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 22 100 1.54 0.0549 0.437 0.21 0.28 0.68 0.789 0.0017 0.00197 22 118 1.18 0.0499 0.185 0.202 0.267 0.413 0.514 0.00103 0.00128 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 22 100 1.4 0.0459 0.481 0.192 0.256 0.82 0.827 0.00205 0.00207 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 22 3693.414 0.005 0.0546 0.471 1.56 0.209 0.279 0.665 0.82 0.00166 0.00205 ! Validation 22 3693.414 0.005 0.0545 0.237 1.33 0.209 0.279 0.477 0.582 0.00119 0.00145 Wall time: 3693.414395957254 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 23 100 1.39 0.0517 0.351 0.204 0.271 0.579 0.707 0.00145 0.00177 23 118 1.34 0.0507 0.327 0.201 0.269 0.605 0.683 0.00151 0.00171 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 23 100 1.34 0.0444 0.452 0.188 0.252 0.795 0.802 0.00199 0.00201 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 23 3860.712 0.005 0.0528 0.436 1.49 0.206 0.274 0.635 0.788 0.00159 0.00197 ! Validation 23 3860.712 0.005 0.0531 0.246 1.31 0.205 0.275 0.486 0.592 0.00122 0.00148 Wall time: 3860.712141085416 ! 
Best model 23 1.307 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 24 100 1.12 0.0494 0.137 0.2 0.265 0.354 0.441 0.000884 0.0011 24 118 2.04 0.0513 1.02 0.203 0.27 1.17 1.2 0.00293 0.00301 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 24 100 2.9 0.0429 2.05 0.185 0.247 1.71 1.71 0.00426 0.00427 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 24 4029.069 0.005 0.0511 0.407 1.43 0.202 0.27 0.619 0.757 0.00155 0.00189 ! Validation 24 4029.069 0.005 0.0512 1.28 2.3 0.202 0.27 1.27 1.35 0.00318 0.00337 Wall time: 4029.0699212430045 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 25 100 1.71 0.0518 0.672 0.203 0.272 0.9 0.979 0.00225 0.00245 25 118 1.32 0.05 0.317 0.202 0.267 0.482 0.672 0.00121 0.00168 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 25 100 1.08 0.0415 0.25 0.182 0.243 0.585 0.597 0.00146 0.00149 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 25 4207.325 0.005 0.0495 0.419 1.41 0.199 0.266 0.626 0.773 0.00156 0.00193 ! Validation 25 4207.325 0.005 0.0503 0.164 1.17 0.2 0.268 0.383 0.483 0.000958 0.00121 Wall time: 4207.325034392066 ! Best model 25 1.169 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 26 100 1.14 0.0473 0.191 0.195 0.259 0.431 0.521 0.00108 0.0013 26 118 1.53 0.0436 0.657 0.187 0.249 0.92 0.968 0.0023 0.00242 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 26 100 0.89 0.0396 0.0978 0.177 0.238 0.357 0.373 0.000892 0.000933 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 26 4374.650 0.005 0.0482 0.404 1.37 0.197 0.262 0.618 0.757 0.00155 0.00189 ! Validation 26 4374.650 0.005 0.0484 0.535 1.5 0.196 0.262 0.766 0.873 0.00192 0.00218 Wall time: 4374.650422966108 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 27 100 1.05 0.0459 0.132 0.192 0.256 0.35 0.433 0.000875 0.00108 27 118 1.26 0.047 0.322 0.195 0.259 0.614 0.678 0.00153 0.00169 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 27 100 0.984 0.0385 0.215 0.175 0.234 0.547 0.553 0.00137 0.00138 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 27 4542.068 0.005 0.0471 0.384 1.33 0.194 0.259 0.563 0.74 0.00141 0.00185 ! Validation 27 4542.068 0.005 0.0467 0.145 1.08 0.193 0.258 0.362 0.455 0.000905 0.00114 Wall time: 4542.068860084284 ! Best model 27 1.079 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 28 100 1.09 0.0437 0.219 0.187 0.249 0.438 0.558 0.0011 0.0014 28 118 1.22 0.0441 0.335 0.189 0.251 0.595 0.69 0.00149 0.00173 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 28 100 0.804 0.0369 0.0652 0.171 0.229 0.289 0.305 0.000722 0.000762 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 28 4709.400 0.005 0.0451 0.356 1.26 0.19 0.253 0.573 0.712 0.00143 0.00178 ! Validation 28 4709.400 0.005 0.0451 0.153 1.06 0.189 0.253 0.364 0.467 0.000911 0.00117 Wall time: 4709.400411950424 ! 
Best model 28 1.055 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 29 100 1.29 0.0448 0.397 0.19 0.253 0.661 0.752 0.00165 0.00188 29 118 1.4 0.0407 0.589 0.18 0.241 0.767 0.916 0.00192 0.00229 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 29 100 0.827 0.0361 0.105 0.169 0.227 0.376 0.387 0.000941 0.000967 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 29 4876.716 0.005 0.0442 0.412 1.3 0.188 0.251 0.618 0.765 0.00155 0.00191 ! Validation 29 4876.716 0.005 0.0441 0.535 1.42 0.187 0.251 0.774 0.873 0.00194 0.00218 Wall time: 4876.716017507017 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 30 100 0.94 0.0418 0.104 0.183 0.244 0.298 0.385 0.000745 0.000962 30 118 1.02 0.042 0.178 0.185 0.245 0.4 0.503 0.001 0.00126 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 30 100 0.732 0.035 0.0317 0.167 0.223 0.189 0.213 0.000473 0.000531 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 30 5044.004 0.005 0.0432 0.404 1.27 0.186 0.248 0.608 0.761 0.00152 0.0019 ! Validation 30 5044.004 0.005 0.043 0.355 1.22 0.185 0.248 0.603 0.711 0.00151 0.00178 Wall time: 5044.004979152232 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 31 100 1.38 0.0422 0.539 0.184 0.245 0.809 0.876 0.00202 0.00219 31 118 1.22 0.0463 0.292 0.194 0.257 0.531 0.645 0.00133 0.00161 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 31 100 1.43 0.0342 0.748 0.165 0.221 1.03 1.03 0.00257 0.00258 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 31 5211.399 0.005 0.0414 0.338 1.17 0.182 0.243 0.567 0.694 0.00142 0.00173 ! Validation 31 5211.399 0.005 0.0418 0.413 1.25 0.182 0.244 0.673 0.767 0.00168 0.00192 Wall time: 5211.399076862261 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 32 100 0.994 0.0406 0.182 0.18 0.24 0.406 0.51 0.00101 0.00127 32 118 0.9 0.0395 0.11 0.178 0.237 0.328 0.396 0.000821 0.000989 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 32 100 0.713 0.0334 0.0453 0.164 0.218 0.233 0.254 0.000583 0.000635 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 32 5378.698 0.005 0.0407 0.418 1.23 0.181 0.241 0.634 0.774 0.00159 0.00193 ! Validation 32 5378.698 0.005 0.0414 0.363 1.19 0.182 0.243 0.61 0.719 0.00153 0.0018 Wall time: 5378.6982706203125 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 33 100 0.938 0.0404 0.131 0.18 0.24 0.351 0.431 0.000877 0.00108 33 118 0.98 0.0388 0.205 0.176 0.235 0.438 0.541 0.0011 0.00135 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 33 100 0.676 0.0329 0.0172 0.162 0.217 0.13 0.157 0.000324 0.000392 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 33 5545.987 0.005 0.0403 0.463 1.27 0.18 0.24 0.667 0.813 0.00167 0.00203 ! 
Validation 33 5545.987 0.005 0.0408 0.319 1.13 0.18 0.241 0.563 0.675 0.00141 0.00169 Wall time: 5545.987347880378 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 34 100 1.17 0.0389 0.393 0.177 0.235 0.665 0.748 0.00166 0.00187 34 118 2 0.0373 1.25 0.173 0.231 1.24 1.34 0.00309 0.00334 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 34 100 0.863 0.032 0.224 0.16 0.213 0.556 0.564 0.00139 0.00141 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 34 5713.788 0.005 0.0388 0.362 1.14 0.176 0.235 0.582 0.712 0.00146 0.00178 ! Validation 34 5713.788 0.005 0.0398 0.873 1.67 0.178 0.238 1.02 1.12 0.00255 0.00279 Wall time: 5713.788826073054 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 35 100 0.934 0.0375 0.184 0.174 0.231 0.45 0.512 0.00113 0.00128 35 118 1.01 0.0402 0.206 0.178 0.239 0.454 0.542 0.00113 0.00136 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 35 100 0.769 0.0307 0.155 0.157 0.209 0.465 0.47 0.00116 0.00118 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 35 5881.085 0.005 0.0379 0.308 1.07 0.175 0.232 0.532 0.663 0.00133 0.00166 ! Validation 35 5881.085 0.005 0.0382 0.569 1.33 0.174 0.233 0.81 0.9 0.00202 0.00225 Wall time: 5881.085445408244 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 36 100 0.857 0.0368 0.122 0.172 0.229 0.328 0.416 0.000819 0.00104 36 118 0.94 0.0379 0.183 0.174 0.232 0.441 0.51 0.0011 0.00128 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 36 100 1.46 0.0299 0.86 0.155 0.206 1.11 1.11 0.00276 0.00277 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 36 6048.490 0.005 0.037 0.369 1.11 0.172 0.23 0.578 0.726 0.00144 0.00181 ! Validation 36 6048.490 0.005 0.0371 0.562 1.31 0.172 0.23 0.812 0.895 0.00203 0.00224 Wall time: 6048.490027527325 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 37 100 1.03 0.0359 0.312 0.169 0.226 0.589 0.666 0.00147 0.00167 37 118 0.803 0.0355 0.0928 0.169 0.225 0.29 0.364 0.000725 0.000909 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 37 100 0.579 0.0288 0.00413 0.152 0.202 0.0751 0.0767 0.000188 0.000192 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 37 6215.764 0.005 0.0361 0.366 1.09 0.17 0.227 0.595 0.724 0.00149 0.00181 ! Validation 37 6215.764 0.005 0.0363 0.192 0.918 0.17 0.227 0.416 0.523 0.00104 0.00131 Wall time: 6215.764725954272 ! Best model 37 0.918 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 38 100 0.906 0.0345 0.216 0.166 0.222 0.46 0.555 0.00115 0.00139 38 118 1.09 0.0367 0.359 0.173 0.229 0.623 0.716 0.00156 0.00179 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 38 100 1.1 0.0289 0.526 0.152 0.203 0.863 0.866 0.00216 0.00216 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 38 6383.056 0.005 0.0354 0.382 1.09 0.169 0.225 0.602 0.738 0.0015 0.00184 ! 
Validation 38 6383.056 0.005 0.0361 0.307 1.03 0.169 0.227 0.572 0.661 0.00143 0.00165 Wall time: 6383.05659274105 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 39 100 1.3 0.0344 0.614 0.167 0.221 0.868 0.935 0.00217 0.00234 39 118 0.812 0.0314 0.184 0.158 0.211 0.449 0.512 0.00112 0.00128 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 39 100 0.576 0.0279 0.017 0.15 0.2 0.137 0.155 0.000343 0.000389 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 39 6550.324 0.005 0.0345 0.388 1.08 0.167 0.222 0.617 0.744 0.00154 0.00186 ! Validation 39 6550.324 0.005 0.0354 0.248 0.956 0.168 0.225 0.49 0.594 0.00123 0.00149 Wall time: 6550.324303986039 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 40 100 0.821 0.0331 0.158 0.164 0.217 0.392 0.474 0.000981 0.00119 40 118 1.04 0.0341 0.363 0.166 0.22 0.644 0.719 0.00161 0.0018 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 40 100 1.1 0.0278 0.541 0.149 0.199 0.875 0.878 0.00219 0.00219 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 40 6717.588 0.005 0.0343 0.409 1.09 0.166 0.221 0.62 0.763 0.00155 0.00191 ! Validation 40 6717.588 0.005 0.0346 0.313 1.01 0.166 0.222 0.579 0.667 0.00145 0.00167 Wall time: 6717.588284178171 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 41 100 1.08 0.0339 0.403 0.165 0.22 0.657 0.758 0.00164 0.00189 41 118 0.773 0.0331 0.111 0.164 0.217 0.339 0.398 0.000848 0.000994 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 41 100 0.841 0.0265 0.31 0.147 0.194 0.662 0.664 0.00165 0.00166 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 41 6884.975 0.005 0.0329 0.276 0.934 0.163 0.216 0.505 0.628 0.00126 0.00157 ! Validation 41 6884.975 0.005 0.0334 0.172 0.839 0.164 0.218 0.405 0.495 0.00101 0.00124 Wall time: 6884.975482440088 ! Best model 41 0.839 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 42 100 1.21 0.0345 0.522 0.167 0.222 0.792 0.863 0.00198 0.00216 42 118 0.729 0.0308 0.114 0.158 0.209 0.32 0.403 0.000801 0.00101 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 42 100 0.586 0.0258 0.0685 0.145 0.192 0.305 0.312 0.000762 0.000781 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 42 7052.265 0.005 0.0323 0.391 1.04 0.161 0.214 0.591 0.748 0.00148 0.00187 ! Validation 42 7052.265 0.005 0.0329 0.35 1.01 0.162 0.217 0.612 0.706 0.00153 0.00177 Wall time: 7052.265214377083 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 43 100 0.903 0.0302 0.3 0.157 0.207 0.594 0.654 0.00148 0.00163 43 118 2.35 0.0382 1.59 0.176 0.233 1.45 1.5 0.00361 0.00376 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 43 100 2.11 0.0355 1.4 0.169 0.225 1.41 1.41 0.00352 0.00353 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 43 7219.542 0.005 0.0309 0.293 0.912 0.158 0.21 0.48 0.636 0.0012 0.00159 ! 
Validation 43 7219.542 0.005 0.043 0.596 1.45 0.185 0.247 0.828 0.921 0.00207 0.0023 Wall time: 7219.542159063276 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 44 100 0.649 0.0285 0.0797 0.152 0.201 0.254 0.337 0.000635 0.000842 44 118 0.779 0.0316 0.147 0.158 0.212 0.375 0.458 0.000938 0.00115 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 44 100 2.67 0.0255 2.16 0.143 0.191 1.75 1.76 0.00439 0.00439 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 44 7386.814 0.005 0.0312 0.335 0.959 0.159 0.211 0.539 0.692 0.00135 0.00173 ! Validation 44 7386.814 0.005 0.0326 1.27 1.92 0.161 0.216 1.28 1.34 0.00321 0.00336 Wall time: 7386.8146933563985 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 45 100 1.24 0.0308 0.627 0.158 0.21 0.895 0.945 0.00224 0.00236 45 118 0.73 0.03 0.13 0.156 0.207 0.374 0.431 0.000934 0.00108 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 45 100 0.688 0.0244 0.201 0.141 0.186 0.531 0.535 0.00133 0.00134 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 45 7554.103 0.005 0.0305 0.416 1.03 0.157 0.208 0.619 0.772 0.00155 0.00193 ! Validation 45 7554.103 0.005 0.031 0.619 1.24 0.157 0.21 0.86 0.939 0.00215 0.00235 Wall time: 7554.103037211113 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 46 100 0.653 0.0283 0.0871 0.152 0.201 0.291 0.352 0.000727 0.00088 46 118 0.704 0.029 0.124 0.154 0.203 0.335 0.421 0.000837 0.00105 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 46 100 0.67 0.0225 0.22 0.136 0.179 0.557 0.559 0.00139 0.0014 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 46 7723.808 0.005 0.0286 0.23 0.802 0.152 0.202 0.453 0.573 0.00113 0.00143 ! Validation 46 7723.808 0.005 0.029 0.177 0.757 0.152 0.203 0.406 0.502 0.00102 0.00125 Wall time: 7723.808401583228 ! Best model 46 0.757 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 47 100 0.767 0.0278 0.211 0.15 0.199 0.467 0.548 0.00117 0.00137 47 118 0.662 0.0259 0.145 0.146 0.192 0.409 0.454 0.00102 0.00114 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 47 100 0.434 0.0216 0.00272 0.134 0.175 0.0497 0.0622 0.000124 0.000156 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 47 7891.290 0.005 0.0279 0.294 0.851 0.15 0.199 0.531 0.648 0.00133 0.00162 ! Validation 47 7891.290 0.005 0.028 0.144 0.704 0.15 0.2 0.367 0.453 0.000918 0.00113 Wall time: 7891.290293193422 ! Best model 47 0.704 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 48 100 0.819 0.027 0.278 0.148 0.196 0.56 0.63 0.0014 0.00157 48 118 0.996 0.026 0.476 0.147 0.192 0.765 0.823 0.00191 0.00206 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 48 100 0.613 0.0213 0.187 0.133 0.174 0.514 0.516 0.00128 0.00129 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 48 8058.767 0.005 0.0273 0.372 0.918 0.149 0.197 0.603 0.728 0.00151 0.00182 ! 
Validation 48 8058.767 0.005 0.0277 0.415 0.968 0.149 0.199 0.683 0.768 0.00171 0.00192 Wall time: 8058.767576611135 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 49 100 0.585 0.0257 0.0716 0.145 0.191 0.25 0.319 0.000625 0.000799 49 118 0.566 0.0242 0.0824 0.141 0.186 0.305 0.343 0.000762 0.000856 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 49 100 0.648 0.0207 0.234 0.132 0.172 0.574 0.578 0.00144 0.00144 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 49 8226.225 0.005 0.0263 0.266 0.791 0.146 0.193 0.502 0.617 0.00125 0.00154 ! Validation 49 8226.225 0.005 0.0267 0.184 0.717 0.147 0.195 0.425 0.512 0.00106 0.00128 Wall time: 8226.225609230343 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 50 100 0.888 0.0251 0.385 0.143 0.189 0.665 0.741 0.00166 0.00185 50 118 0.872 0.0262 0.347 0.146 0.193 0.654 0.703 0.00163 0.00176 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 50 100 0.542 0.0194 0.155 0.127 0.166 0.465 0.47 0.00116 0.00118 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 50 8393.801 0.005 0.0252 0.244 0.749 0.143 0.19 0.471 0.589 0.00118 0.00147 ! Validation 50 8393.801 0.005 0.0255 0.112 0.622 0.143 0.191 0.318 0.4 0.000796 0.001 Wall time: 8393.801434432156 ! Best model 50 0.622 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 51 100 1.36 0.0261 0.835 0.146 0.193 1.04 1.09 0.00259 0.00273 51 118 0.534 0.0226 0.0818 0.137 0.179 0.271 0.341 0.000676 0.000853 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 51 100 0.423 0.0196 0.0308 0.128 0.167 0.198 0.209 0.000495 0.000524 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 51 8561.290 0.005 0.025 0.369 0.869 0.143 0.189 0.584 0.727 0.00146 0.00182 ! Validation 51 8561.290 0.005 0.0258 0.091 0.606 0.144 0.192 0.292 0.36 0.000729 0.0009 Wall time: 8561.29105349537 ! Best model 51 0.606 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 52 100 0.683 0.0239 0.205 0.139 0.185 0.453 0.54 0.00113 0.00135 52 118 0.537 0.0245 0.0459 0.141 0.187 0.199 0.256 0.000497 0.000639 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 52 100 0.975 0.0187 0.602 0.125 0.163 0.922 0.926 0.00231 0.00232 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 52 8728.774 0.005 0.0238 0.198 0.675 0.139 0.184 0.428 0.533 0.00107 0.00133 ! Validation 52 8728.774 0.005 0.0246 0.361 0.854 0.141 0.187 0.644 0.717 0.00161 0.00179 Wall time: 8728.774307037238 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 53 100 0.54 0.0222 0.0962 0.135 0.178 0.307 0.37 0.000767 0.000925 53 118 0.457 0.0202 0.054 0.129 0.169 0.235 0.277 0.000586 0.000693 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 53 100 0.398 0.0177 0.0443 0.122 0.159 0.235 0.251 0.000588 0.000628 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 53 8896.247 0.005 0.0231 0.234 0.696 0.137 0.181 0.462 0.579 0.00116 0.00145 ! Validation 53 8896.247 0.005 0.0234 0.0794 0.547 0.137 0.183 0.266 0.336 0.000665 0.000841 Wall time: 8896.247454642318 ! 
Best model 53 0.547 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 54 100 0.652 0.0216 0.219 0.133 0.175 0.458 0.559 0.00115 0.0014 54 118 1.17 0.0199 0.77 0.128 0.168 1.02 1.05 0.00254 0.00262 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 54 100 0.918 0.0174 0.57 0.121 0.158 0.897 0.901 0.00224 0.00225 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 54 9065.851 0.005 0.0226 0.248 0.699 0.136 0.179 0.454 0.59 0.00114 0.00147 ! Validation 54 9065.851 0.005 0.0231 0.8 1.26 0.136 0.181 1.01 1.07 0.00252 0.00267 Wall time: 9065.851641494315 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 55 100 1.03 0.0241 0.544 0.14 0.185 0.815 0.88 0.00204 0.0022 55 118 0.495 0.0215 0.0645 0.133 0.175 0.27 0.303 0.000674 0.000758 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 55 100 0.437 0.0177 0.0828 0.122 0.159 0.336 0.343 0.00084 0.000858 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 55 9234.576 0.005 0.0225 0.331 0.782 0.135 0.179 0.555 0.689 0.00139 0.00172 ! Validation 55 9234.576 0.005 0.0231 0.0885 0.551 0.137 0.181 0.281 0.355 0.000701 0.000887 Wall time: 9234.576426569372 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 56 100 0.503 0.0215 0.0737 0.133 0.175 0.249 0.324 0.000622 0.00081 56 118 0.522 0.0226 0.0693 0.137 0.18 0.26 0.314 0.00065 0.000785 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 56 100 1.14 0.0178 0.782 0.122 0.159 1.05 1.06 0.00263 0.00264 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 56 9402.461 0.005 0.0221 0.311 0.754 0.134 0.178 0.547 0.667 0.00137 0.00167 ! Validation 56 9402.461 0.005 0.0228 0.591 1.05 0.136 0.18 0.856 0.918 0.00214 0.00229 Wall time: 9402.461244800128 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 57 100 0.902 0.0216 0.47 0.133 0.175 0.747 0.818 0.00187 0.00205 57 118 0.513 0.0198 0.116 0.128 0.168 0.347 0.406 0.000869 0.00102 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 57 100 0.348 0.0167 0.014 0.119 0.154 0.113 0.141 0.000282 0.000353 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 57 9570.160 0.005 0.0214 0.229 0.657 0.132 0.175 0.459 0.572 0.00115 0.00143 ! Validation 57 9570.160 0.005 0.0218 0.0776 0.514 0.133 0.176 0.266 0.333 0.000664 0.000831 Wall time: 9570.16061371006 ! Best model 57 0.514 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 58 100 0.49 0.021 0.0691 0.131 0.173 0.247 0.314 0.000616 0.000784 58 118 0.494 0.0198 0.0984 0.128 0.168 0.298 0.374 0.000746 0.000936 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 58 100 0.331 0.0162 0.00705 0.117 0.152 0.0874 0.1 0.000219 0.00025 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 58 9737.695 0.005 0.021 0.243 0.664 0.131 0.173 0.47 0.59 0.00117 0.00148 ! 
Validation 58 9737.695 0.005 0.0214 0.12 0.548 0.131 0.175 0.329 0.414 0.000823 0.00103 Wall time: 9737.69509622315 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 59 100 0.711 0.0202 0.307 0.128 0.17 0.611 0.662 0.00153 0.00165 59 118 0.694 0.0202 0.289 0.129 0.17 0.586 0.642 0.00146 0.0016 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 59 100 0.341 0.0162 0.0179 0.117 0.152 0.126 0.16 0.000314 0.000399 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 59 9906.884 0.005 0.0204 0.222 0.629 0.129 0.17 0.452 0.561 0.00113 0.0014 ! Validation 59 9906.884 0.005 0.0212 0.127 0.551 0.13 0.174 0.355 0.426 0.000889 0.00106 Wall time: 9906.88420266239 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 60 100 0.541 0.0196 0.15 0.127 0.167 0.386 0.462 0.000966 0.00116 60 118 0.478 0.019 0.0966 0.125 0.165 0.311 0.371 0.000777 0.000928 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 60 100 0.502 0.0161 0.181 0.116 0.151 0.494 0.507 0.00124 0.00127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 60 10074.509 0.005 0.0203 0.263 0.669 0.129 0.17 0.482 0.614 0.00121 0.00153 ! Validation 60 10074.509 0.005 0.0208 0.14 0.557 0.129 0.172 0.367 0.447 0.000916 0.00112 Wall time: 10074.509431233164 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 61 100 0.498 0.0194 0.109 0.126 0.166 0.315 0.394 0.000787 0.000985 61 118 0.445 0.0198 0.049 0.126 0.168 0.225 0.264 0.000562 0.000661 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 61 100 0.319 0.0152 0.0147 0.113 0.147 0.11 0.145 0.000275 0.000361 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 61 10242.014 0.005 0.0196 0.191 0.583 0.126 0.167 0.428 0.523 0.00107 0.00131 ! Validation 61 10242.014 0.005 0.0202 0.138 0.541 0.127 0.169 0.364 0.443 0.00091 0.00111 Wall time: 10242.01426461339 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 62 100 0.901 0.0191 0.519 0.125 0.165 0.795 0.86 0.00199 0.00215 62 118 0.572 0.0198 0.177 0.127 0.168 0.433 0.501 0.00108 0.00125 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 62 100 0.389 0.0151 0.0865 0.113 0.147 0.337 0.351 0.000843 0.000878 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 62 10409.499 0.005 0.0193 0.225 0.611 0.125 0.166 0.457 0.566 0.00114 0.00142 ! Validation 62 10409.499 0.005 0.02 0.0894 0.489 0.127 0.169 0.286 0.357 0.000716 0.000892 Wall time: 10409.499451737385 ! Best model 62 0.489 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 63 100 0.448 0.0185 0.0769 0.123 0.162 0.268 0.331 0.000671 0.000828 63 118 0.509 0.0188 0.134 0.124 0.163 0.388 0.437 0.00097 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 63 100 0.451 0.015 0.151 0.112 0.146 0.451 0.463 0.00113 0.00116 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 63 10577.429 0.005 0.0191 0.225 0.607 0.125 0.165 0.461 0.566 0.00115 0.00142 ! 
Validation 63 10577.429 0.005 0.0196 0.323 0.714 0.125 0.167 0.606 0.678 0.00152 0.00169 Wall time: 10577.429853369016 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 64 100 0.496 0.0186 0.124 0.124 0.163 0.347 0.421 0.000867 0.00105 64 118 0.451 0.0177 0.0974 0.121 0.159 0.293 0.372 0.000733 0.000931 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 64 100 0.304 0.0148 0.00843 0.111 0.145 0.0926 0.11 0.000231 0.000274 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 64 10744.684 0.005 0.0188 0.26 0.637 0.124 0.164 0.498 0.61 0.00124 0.00152 ! Validation 64 10744.684 0.005 0.0194 0.092 0.48 0.125 0.166 0.289 0.362 0.000722 0.000905 Wall time: 10744.68458234705 ! Best model 64 0.480 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 65 100 0.52 0.0179 0.162 0.121 0.16 0.405 0.48 0.00101 0.0012 65 118 0.413 0.0187 0.039 0.123 0.163 0.194 0.236 0.000485 0.000589 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 65 100 0.387 0.0146 0.0962 0.111 0.144 0.355 0.37 0.000887 0.000925 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 65 10914.305 0.005 0.0184 0.203 0.572 0.122 0.162 0.43 0.54 0.00107 0.00135 ! Validation 65 10914.305 0.005 0.019 0.105 0.485 0.123 0.165 0.312 0.386 0.000781 0.000965 Wall time: 10914.305456178263 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 66 100 0.441 0.0178 0.0859 0.12 0.159 0.271 0.35 0.000678 0.000874 66 118 0.845 0.0177 0.491 0.12 0.159 0.8 0.836 0.002 0.00209 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 66 100 0.646 0.014 0.366 0.109 0.141 0.714 0.722 0.00178 0.00181 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 66 11083.787 0.005 0.018 0.185 0.545 0.121 0.16 0.418 0.51 0.00104 0.00128 ! Validation 66 11083.787 0.005 0.0186 0.524 0.896 0.122 0.163 0.807 0.864 0.00202 0.00216 Wall time: 11083.787373758387 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 67 100 0.498 0.0178 0.141 0.12 0.159 0.375 0.448 0.000937 0.00112 67 118 0.373 0.0173 0.0261 0.119 0.157 0.139 0.193 0.000348 0.000482 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 67 100 0.287 0.0138 0.0111 0.107 0.14 0.114 0.126 0.000286 0.000314 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 67 11252.676 0.005 0.0178 0.201 0.556 0.12 0.159 0.435 0.536 0.00109 0.00134 ! Validation 67 11252.676 0.005 0.0182 0.0749 0.439 0.121 0.161 0.264 0.327 0.000659 0.000817 Wall time: 11252.676954683382 ! Best model 67 0.439 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 68 100 0.465 0.0179 0.107 0.121 0.16 0.326 0.39 0.000815 0.000975 68 118 0.431 0.0196 0.0388 0.125 0.167 0.193 0.235 0.000484 0.000588 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 68 100 0.322 0.0135 0.0529 0.106 0.139 0.25 0.274 0.000625 0.000686 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 68 11420.006 0.005 0.0175 0.186 0.537 0.119 0.158 0.401 0.517 0.001 0.00129 ! Validation 68 11420.006 0.005 0.0178 0.078 0.434 0.119 0.159 0.265 0.333 0.000663 0.000833 Wall time: 11420.00691516418 ! 
Best model 68 0.434 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 69 100 0.838 0.0178 0.481 0.12 0.159 0.779 0.828 0.00195 0.00207 69 118 0.529 0.0165 0.2 0.116 0.153 0.499 0.534 0.00125 0.00133 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 69 100 0.419 0.0139 0.141 0.108 0.141 0.437 0.447 0.00109 0.00112 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 69 11587.610 0.005 0.0176 0.291 0.643 0.12 0.158 0.523 0.645 0.00131 0.00161 ! Validation 69 11587.610 0.005 0.0183 0.117 0.483 0.121 0.162 0.33 0.408 0.000825 0.00102 Wall time: 11587.610626030248 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 70 100 0.405 0.0174 0.0562 0.119 0.158 0.224 0.283 0.000561 0.000707 70 118 0.365 0.0155 0.0547 0.113 0.149 0.249 0.279 0.000623 0.000698 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 70 100 0.341 0.0131 0.0795 0.105 0.136 0.32 0.336 0.0008 0.000841 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 70 11754.952 0.005 0.017 0.156 0.496 0.118 0.156 0.381 0.473 0.000953 0.00118 ! Validation 70 11754.952 0.005 0.0174 0.0976 0.445 0.118 0.157 0.297 0.373 0.000744 0.000932 Wall time: 11754.952778371982 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 71 100 0.658 0.0168 0.322 0.117 0.155 0.631 0.678 0.00158 0.00169 71 118 0.401 0.0162 0.0779 0.115 0.152 0.278 0.333 0.000695 0.000833 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 71 100 0.443 0.013 0.182 0.104 0.136 0.499 0.51 0.00125 0.00127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 71 11922.301 0.005 0.0168 0.234 0.57 0.117 0.155 0.485 0.579 0.00121 0.00145 ! Validation 71 11922.301 0.005 0.0174 0.219 0.567 0.118 0.157 0.469 0.558 0.00117 0.0014 Wall time: 11922.301891655196 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 72 100 0.499 0.0173 0.153 0.118 0.157 0.403 0.467 0.00101 0.00117 72 118 0.494 0.0169 0.156 0.117 0.155 0.414 0.471 0.00103 0.00118 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 72 100 0.623 0.0134 0.355 0.106 0.138 0.703 0.711 0.00176 0.00178 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 72 12089.663 0.005 0.0168 0.22 0.555 0.117 0.154 0.454 0.561 0.00113 0.0014 ! Validation 72 12089.663 0.005 0.0177 0.529 0.884 0.119 0.159 0.807 0.868 0.00202 0.00217 Wall time: 12089.663936155383 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 73 100 0.839 0.0163 0.514 0.115 0.152 0.798 0.855 0.002 0.00214 73 118 0.367 0.0159 0.049 0.113 0.15 0.206 0.264 0.000514 0.000661 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 73 100 0.361 0.0129 0.104 0.104 0.135 0.374 0.385 0.000936 0.000963 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 73 12257.023 0.005 0.0165 0.214 0.544 0.116 0.154 0.448 0.553 0.00112 0.00138 ! 
Validation 73 12257.023 0.005 0.0169 0.216 0.554 0.116 0.155 0.48 0.555 0.0012 0.00139 Wall time: 12257.023547826335 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 74 100 0.411 0.0165 0.0801 0.115 0.153 0.286 0.338 0.000715 0.000845 74 118 0.61 0.0147 0.315 0.11 0.145 0.599 0.67 0.0015 0.00168 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 74 100 0.476 0.0125 0.226 0.103 0.133 0.56 0.567 0.0014 0.00142 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 74 12424.516 0.005 0.0163 0.213 0.538 0.115 0.152 0.447 0.55 0.00112 0.00138 ! Validation 74 12424.516 0.005 0.0167 0.331 0.665 0.115 0.154 0.62 0.687 0.00155 0.00172 Wall time: 12424.51602396043 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 75 100 0.378 0.0157 0.0638 0.113 0.15 0.237 0.302 0.000592 0.000754 75 118 0.357 0.0151 0.054 0.111 0.147 0.232 0.277 0.000581 0.000693 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 75 100 0.364 0.0122 0.12 0.101 0.132 0.402 0.414 0.001 0.00104 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 75 12591.887 0.005 0.0157 0.146 0.461 0.113 0.15 0.372 0.458 0.000929 0.00114 ! Validation 75 12591.887 0.005 0.0163 0.264 0.59 0.114 0.152 0.544 0.613 0.00136 0.00153 Wall time: 12591.887809145264 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 76 100 0.838 0.0151 0.537 0.11 0.146 0.831 0.874 0.00208 0.00219 76 118 0.438 0.0151 0.137 0.111 0.146 0.408 0.441 0.00102 0.0011 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 76 100 0.304 0.0122 0.0607 0.101 0.132 0.278 0.294 0.000694 0.000735 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 76 12759.269 0.005 0.0154 0.147 0.455 0.112 0.148 0.372 0.458 0.000931 0.00115 ! Validation 76 12759.269 0.005 0.0162 0.0704 0.395 0.114 0.152 0.252 0.317 0.00063 0.000792 Wall time: 12759.269485328346 ! Best model 76 0.395 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 77 100 0.391 0.0158 0.0749 0.114 0.15 0.262 0.327 0.000656 0.000817 77 118 0.337 0.0136 0.0655 0.106 0.139 0.257 0.306 0.000643 0.000764 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 77 100 0.268 0.0121 0.0274 0.1 0.131 0.171 0.197 0.000426 0.000494 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 77 12926.792 0.005 0.0153 0.212 0.518 0.112 0.148 0.456 0.551 0.00114 0.00138 ! Validation 77 12926.792 0.005 0.016 0.139 0.459 0.113 0.151 0.36 0.446 0.0009 0.00111 Wall time: 12926.792797418311 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 78 100 0.391 0.0158 0.0752 0.113 0.15 0.259 0.327 0.000647 0.000818 78 118 0.419 0.016 0.0991 0.113 0.151 0.341 0.376 0.000852 0.000939 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 78 100 0.244 0.0119 0.00566 0.0997 0.13 0.074 0.0898 0.000185 0.000224 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 78 13094.228 0.005 0.0152 0.184 0.487 0.111 0.147 0.406 0.513 0.00101 0.00128 ! Validation 78 13094.228 0.005 0.0159 0.0721 0.39 0.113 0.15 0.256 0.32 0.00064 0.000801 Wall time: 13094.228469375987 ! 
Best model 78 0.390 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 79 100 0.526 0.0158 0.21 0.113 0.15 0.494 0.548 0.00123 0.00137 79 118 0.323 0.0145 0.034 0.109 0.144 0.175 0.22 0.000439 0.00055 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 79 100 0.287 0.0117 0.0526 0.0991 0.129 0.262 0.274 0.000654 0.000685 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 79 13261.699 0.005 0.0149 0.182 0.481 0.11 0.146 0.422 0.511 0.00105 0.00128 ! Validation 79 13261.699 0.005 0.0157 0.0937 0.407 0.112 0.149 0.292 0.365 0.00073 0.000913 Wall time: 13261.69981807936 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 80 100 0.669 0.0143 0.383 0.108 0.143 0.673 0.739 0.00168 0.00185 80 118 0.365 0.0146 0.073 0.109 0.144 0.276 0.323 0.000689 0.000806 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 80 100 0.274 0.0118 0.038 0.099 0.13 0.218 0.233 0.000545 0.000582 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 80 13429.044 0.005 0.0148 0.203 0.498 0.11 0.145 0.438 0.538 0.0011 0.00135 ! Validation 80 13429.044 0.005 0.0155 0.108 0.419 0.111 0.149 0.32 0.393 0.0008 0.000982 Wall time: 13429.044799063355 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 81 100 0.621 0.0143 0.335 0.109 0.143 0.643 0.69 0.00161 0.00173 81 118 0.345 0.0155 0.0339 0.113 0.149 0.176 0.22 0.00044 0.000549 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 81 100 0.487 0.0127 0.233 0.103 0.135 0.568 0.577 0.00142 0.00144 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 81 13596.927 0.005 0.0147 0.212 0.505 0.109 0.144 0.45 0.551 0.00113 0.00138 ! Validation 81 13596.927 0.005 0.0165 0.332 0.663 0.115 0.154 0.61 0.688 0.00153 0.00172 Wall time: 13596.92751547508 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 82 100 0.356 0.0154 0.048 0.111 0.148 0.217 0.261 0.000544 0.000653 82 118 0.388 0.0154 0.0801 0.112 0.148 0.281 0.338 0.000703 0.000844 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 82 100 0.23 0.0112 0.0069 0.0968 0.126 0.0834 0.0992 0.000208 0.000248 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 82 13764.263 0.005 0.0144 0.136 0.424 0.108 0.143 0.347 0.44 0.000868 0.0011 ! Validation 82 13764.263 0.005 0.0149 0.0657 0.363 0.109 0.146 0.247 0.306 0.000617 0.000765 Wall time: 13764.263124155346 ! Best model 82 0.363 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 83 100 0.381 0.0146 0.0897 0.109 0.144 0.294 0.358 0.000736 0.000894 83 118 0.341 0.0147 0.0468 0.11 0.145 0.224 0.258 0.000561 0.000646 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 83 100 0.309 0.0112 0.0859 0.0964 0.126 0.338 0.35 0.000844 0.000874 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 83 13931.602 0.005 0.0141 0.176 0.458 0.107 0.142 0.408 0.503 0.00102 0.00126 ! 
Validation 83 13931.602 0.005 0.0147 0.132 0.427 0.109 0.145 0.357 0.434 0.000893 0.00108 Wall time: 13931.602444554213 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 84 100 0.444 0.0146 0.152 0.109 0.144 0.4 0.465 0.001 0.00116 84 118 0.34 0.0139 0.0629 0.107 0.14 0.256 0.299 0.00064 0.000748 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 84 100 0.42 0.0111 0.197 0.0964 0.126 0.52 0.53 0.0013 0.00132 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 84 14099.107 0.005 0.0141 0.197 0.48 0.107 0.142 0.433 0.531 0.00108 0.00133 ! Validation 84 14099.107 0.005 0.0148 0.187 0.483 0.109 0.145 0.437 0.516 0.00109 0.00129 Wall time: 14099.107090077363 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 85 100 0.497 0.0139 0.218 0.106 0.141 0.502 0.558 0.00125 0.00139 85 118 0.319 0.0145 0.0281 0.109 0.144 0.175 0.2 0.000437 0.0005 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 85 100 0.343 0.0105 0.133 0.0936 0.122 0.423 0.435 0.00106 0.00109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 85 14267.651 0.005 0.0138 0.151 0.426 0.106 0.14 0.373 0.465 0.000932 0.00116 ! Validation 85 14267.651 0.005 0.0142 0.174 0.457 0.106 0.142 0.422 0.498 0.00105 0.00125 Wall time: 14267.651786550414 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 86 100 0.318 0.0133 0.0533 0.104 0.137 0.224 0.275 0.000559 0.000689 86 118 0.352 0.0143 0.0659 0.108 0.143 0.274 0.306 0.000685 0.000766 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 86 100 0.637 0.0106 0.424 0.0939 0.123 0.772 0.778 0.00193 0.00194 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 86 14435.252 0.005 0.0136 0.171 0.443 0.105 0.139 0.404 0.495 0.00101 0.00124 ! Validation 86 14435.252 0.005 0.0141 0.358 0.641 0.106 0.142 0.66 0.714 0.00165 0.00178 Wall time: 14435.252597165294 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 87 100 0.348 0.0133 0.0813 0.105 0.138 0.272 0.34 0.000681 0.000851 87 118 0.6 0.014 0.321 0.107 0.141 0.619 0.676 0.00155 0.00169 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 87 100 0.499 0.0107 0.285 0.0946 0.123 0.632 0.637 0.00158 0.00159 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 87 14604.741 0.005 0.0135 0.17 0.439 0.105 0.139 0.395 0.49 0.000987 0.00122 ! Validation 87 14604.741 0.005 0.0142 0.243 0.528 0.107 0.142 0.524 0.588 0.00131 0.00147 Wall time: 14604.741336007137 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 88 100 0.78 0.013 0.52 0.103 0.136 0.822 0.861 0.00205 0.00215 88 118 0.797 0.0128 0.541 0.103 0.135 0.849 0.878 0.00212 0.00219 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 88 100 0.369 0.0107 0.156 0.0941 0.123 0.464 0.471 0.00116 0.00118 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 88 14772.081 0.005 0.0133 0.17 0.436 0.104 0.138 0.392 0.489 0.000979 0.00122 ! 
Validation 88 14772.081 0.005 0.0143 0.28 0.566 0.107 0.143 0.567 0.632 0.00142 0.00158 Wall time: 14772.08188039111 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 89 100 0.705 0.0134 0.437 0.104 0.138 0.752 0.789 0.00188 0.00197 89 118 0.38 0.0134 0.112 0.104 0.138 0.334 0.399 0.000834 0.000998 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 89 100 0.683 0.0111 0.461 0.0955 0.126 0.805 0.811 0.00201 0.00203 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 89 14939.315 0.005 0.0133 0.196 0.462 0.104 0.138 0.441 0.53 0.0011 0.00132 ! Validation 89 14939.315 0.005 0.0146 0.501 0.792 0.108 0.144 0.774 0.844 0.00194 0.00211 Wall time: 14939.315002339426 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 90 100 0.317 0.0129 0.0584 0.102 0.136 0.229 0.288 0.000572 0.000721 90 118 0.492 0.0122 0.248 0.0996 0.132 0.564 0.594 0.00141 0.00149 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 90 100 0.216 0.0102 0.0114 0.0919 0.121 0.115 0.127 0.000288 0.000318 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 90 15106.526 0.005 0.0132 0.167 0.431 0.104 0.137 0.383 0.487 0.000958 0.00122 ! Validation 90 15106.526 0.005 0.0136 0.0567 0.329 0.104 0.139 0.227 0.284 0.000568 0.00071 Wall time: 15106.526050212327 ! Best model 90 0.329 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 91 100 0.47 0.0124 0.222 0.1 0.133 0.521 0.562 0.0013 0.00141 91 118 0.301 0.0119 0.0637 0.0987 0.13 0.267 0.301 0.000666 0.000753 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 91 100 0.227 0.00981 0.0313 0.0904 0.118 0.194 0.211 0.000484 0.000528 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 91 15273.748 0.005 0.0127 0.132 0.386 0.102 0.135 0.353 0.434 0.000884 0.00109 ! Validation 91 15273.748 0.005 0.0132 0.095 0.359 0.103 0.137 0.29 0.368 0.000726 0.00092 Wall time: 15273.748208103236 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 92 100 0.394 0.0121 0.152 0.0993 0.131 0.411 0.465 0.00103 0.00116 92 118 0.305 0.0127 0.0518 0.102 0.134 0.226 0.272 0.000566 0.000679 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 92 100 0.263 0.0103 0.0564 0.0923 0.121 0.267 0.283 0.000666 0.000709 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 92 15440.968 0.005 0.0126 0.165 0.418 0.101 0.134 0.399 0.486 0.000998 0.00122 ! Validation 92 15440.968 0.005 0.0136 0.066 0.337 0.104 0.139 0.243 0.307 0.000608 0.000767 Wall time: 15440.968877958134 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 93 100 0.284 0.012 0.043 0.0989 0.131 0.196 0.247 0.00049 0.000619 93 118 0.305 0.0117 0.0723 0.0985 0.129 0.269 0.321 0.000673 0.000802 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 93 100 0.267 0.0101 0.0649 0.0917 0.12 0.292 0.304 0.00073 0.00076 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 93 15608.289 0.005 0.0125 0.148 0.398 0.101 0.133 0.376 0.46 0.000939 0.00115 ! 
Validation 93 15608.289 0.005 0.0133 0.0901 0.357 0.103 0.138 0.288 0.358 0.000721 0.000896 Wall time: 15608.289741460234 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 94 100 0.296 0.0115 0.0658 0.0967 0.128 0.25 0.306 0.000626 0.000765 94 118 0.337 0.0118 0.1 0.0983 0.13 0.313 0.378 0.000781 0.000945 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 94 100 0.212 0.00936 0.025 0.0881 0.115 0.163 0.189 0.000408 0.000472 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 94 15775.505 0.005 0.0123 0.128 0.374 0.1 0.132 0.347 0.428 0.000869 0.00107 ! Validation 94 15775.505 0.005 0.0128 0.0842 0.341 0.101 0.135 0.286 0.346 0.000714 0.000866 Wall time: 15775.505967256147 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 95 100 0.379 0.0121 0.137 0.099 0.131 0.394 0.441 0.000985 0.0011 95 118 0.293 0.0122 0.0479 0.1 0.132 0.193 0.261 0.000483 0.000653 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 95 100 0.322 0.00979 0.126 0.0901 0.118 0.415 0.424 0.00104 0.00106 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 95 15942.749 0.005 0.0121 0.142 0.383 0.0991 0.131 0.362 0.45 0.000906 0.00113 ! Validation 95 15942.749 0.005 0.0129 0.215 0.474 0.102 0.136 0.483 0.553 0.00121 0.00138 Wall time: 15942.749202632345 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 96 100 0.697 0.0125 0.447 0.101 0.133 0.747 0.798 0.00187 0.002 96 118 0.353 0.0119 0.114 0.099 0.13 0.335 0.404 0.000836 0.00101 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 96 100 0.896 0.00971 0.702 0.0894 0.118 0.996 1 0.00249 0.0025 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 96 16109.989 0.005 0.0122 0.185 0.429 0.0997 0.132 0.418 0.515 0.00105 0.00129 ! Validation 96 16109.989 0.005 0.0128 0.665 0.921 0.101 0.135 0.935 0.973 0.00234 0.00243 Wall time: 16109.98919410538 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 97 100 0.591 0.0129 0.332 0.102 0.136 0.646 0.688 0.00161 0.00172 97 118 0.263 0.0111 0.0401 0.0965 0.126 0.199 0.239 0.000498 0.000598 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 97 100 0.237 0.00946 0.0482 0.0883 0.116 0.244 0.262 0.000609 0.000655 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 97 16277.213 0.005 0.0124 0.234 0.482 0.101 0.133 0.466 0.579 0.00117 0.00145 ! Validation 97 16277.213 0.005 0.0126 0.105 0.358 0.1 0.134 0.323 0.387 0.000809 0.000968 Wall time: 16277.21388139436 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 98 100 0.558 0.0115 0.327 0.0974 0.128 0.633 0.683 0.00158 0.00171 98 118 0.276 0.0109 0.0591 0.0952 0.124 0.249 0.29 0.000623 0.000725 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 98 100 0.198 0.00907 0.0162 0.0867 0.114 0.136 0.152 0.00034 0.00038 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 98 16444.539 0.005 0.0117 0.126 0.36 0.0977 0.129 0.341 0.424 0.000853 0.00106 ! Validation 98 16444.539 0.005 0.0122 0.0619 0.306 0.0987 0.132 0.235 0.297 0.000588 0.000742 Wall time: 16444.53935520025 ! 
Best model 98 0.306 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 99 100 0.365 0.0111 0.143 0.0959 0.126 0.4 0.451 0.000999 0.00113 99 118 0.272 0.00994 0.0733 0.0905 0.119 0.264 0.323 0.000659 0.000808 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 99 100 0.196 0.00915 0.0131 0.0867 0.114 0.103 0.136 0.000256 0.000341 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 99 16611.782 0.005 0.0117 0.147 0.381 0.0976 0.129 0.369 0.458 0.000922 0.00115 ! Validation 99 16611.782 0.005 0.0122 0.0632 0.306 0.0985 0.132 0.244 0.3 0.000611 0.00075 Wall time: 16611.78269380424 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 100 100 0.275 0.0111 0.0541 0.0947 0.126 0.22 0.278 0.000549 0.000694 100 118 0.512 0.0112 0.289 0.0959 0.126 0.614 0.641 0.00154 0.0016 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 100 100 0.739 0.00881 0.563 0.0854 0.112 0.891 0.896 0.00223 0.00224 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 100 16778.998 0.005 0.0114 0.14 0.369 0.0964 0.128 0.364 0.446 0.00091 0.00111 ! Validation 100 16778.998 0.005 0.0119 0.702 0.94 0.0975 0.13 0.965 1 0.00241 0.0025 Wall time: 16778.998718888033 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 101 100 0.295 0.0118 0.0578 0.0982 0.13 0.238 0.287 0.000596 0.000718 101 118 0.237 0.0102 0.0334 0.0917 0.12 0.147 0.218 0.000368 0.000545 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 101 100 0.246 0.00915 0.0627 0.0871 0.114 0.288 0.299 0.000719 0.000747 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 101 16946.221 0.005 0.0117 0.205 0.439 0.0977 0.129 0.438 0.542 0.0011 0.00136 ! Validation 101 16946.221 0.005 0.0122 0.112 0.356 0.0988 0.132 0.336 0.4 0.00084 0.001 Wall time: 16946.221889146138 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 102 100 0.312 0.0113 0.0872 0.0958 0.127 0.286 0.352 0.000716 0.000881 102 118 0.38 0.00994 0.181 0.09 0.119 0.468 0.507 0.00117 0.00127 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 102 100 0.196 0.009 0.0156 0.0861 0.113 0.135 0.149 0.000338 0.000373 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 102 17113.473 0.005 0.0113 0.141 0.366 0.096 0.127 0.356 0.447 0.00089 0.00112 ! Validation 102 17113.473 0.005 0.0119 0.0486 0.288 0.0977 0.13 0.21 0.263 0.000526 0.000658 Wall time: 17113.47351642931 ! Best model 102 0.288 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 103 100 0.385 0.011 0.165 0.0945 0.125 0.431 0.486 0.00108 0.00121 103 118 0.24 0.0103 0.0333 0.0918 0.121 0.178 0.218 0.000444 0.000544 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 103 100 0.177 0.00855 0.00615 0.0841 0.11 0.0856 0.0936 0.000214 0.000234 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 103 17280.805 0.005 0.0109 0.104 0.323 0.0944 0.125 0.307 0.386 0.000767 0.000965 ! Validation 103 17280.805 0.005 0.0114 0.049 0.278 0.0956 0.128 0.21 0.264 0.000524 0.00066 Wall time: 17280.8056041901 ! 
Best model 103 0.278 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 104 100 0.254 0.0105 0.0433 0.0926 0.122 0.193 0.248 0.000483 0.000621 104 118 0.34 0.014 0.0603 0.104 0.141 0.226 0.293 0.000566 0.000733 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 104 100 0.533 0.00861 0.36 0.0846 0.111 0.712 0.717 0.00178 0.00179 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 104 17448.043 0.005 0.011 0.169 0.389 0.0946 0.125 0.405 0.492 0.00101 0.00123 ! Validation 104 17448.043 0.005 0.0114 0.393 0.621 0.0956 0.128 0.699 0.748 0.00175 0.00187 Wall time: 17448.043538108002 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 105 100 0.699 0.0108 0.484 0.0934 0.124 0.798 0.83 0.002 0.00208 105 118 0.457 0.00965 0.264 0.0897 0.117 0.594 0.613 0.00148 0.00153 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 105 100 1.2 0.00824 1.04 0.0827 0.108 1.21 1.21 0.00303 0.00304 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 105 17615.264 0.005 0.0107 0.117 0.332 0.0936 0.124 0.321 0.407 0.000804 0.00102 ! Validation 105 17615.264 0.005 0.0113 0.961 1.19 0.0949 0.127 1.14 1.17 0.00286 0.00293 Wall time: 17615.2645894601 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 106 100 0.301 0.0103 0.0957 0.0917 0.121 0.304 0.369 0.000761 0.000923 106 118 0.356 0.0105 0.145 0.0931 0.123 0.406 0.454 0.00101 0.00114 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 106 100 0.198 0.00851 0.0277 0.0838 0.11 0.18 0.199 0.000449 0.000497 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 106 17782.483 0.005 0.011 0.161 0.381 0.0946 0.125 0.375 0.48 0.000938 0.0012 ! Validation 106 17782.483 0.005 0.0113 0.0803 0.306 0.0949 0.127 0.279 0.338 0.000697 0.000845 Wall time: 17782.483288375195 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 107 100 0.249 0.0108 0.0321 0.0937 0.124 0.168 0.214 0.000419 0.000535 107 118 0.223 0.0102 0.019 0.0915 0.12 0.148 0.164 0.00037 0.000411 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 107 100 0.19 0.00834 0.0228 0.0831 0.109 0.163 0.18 0.000406 0.000451 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 107 17949.810 0.005 0.0104 0.107 0.315 0.0921 0.122 0.316 0.391 0.000789 0.000977 ! Validation 107 17949.810 0.005 0.0112 0.0545 0.278 0.0947 0.126 0.222 0.279 0.000554 0.000696 Wall time: 17949.810216679238 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 108 100 0.278 0.0106 0.0664 0.0931 0.123 0.248 0.308 0.000619 0.000769 108 118 0.484 0.00987 0.286 0.0898 0.119 0.617 0.638 0.00154 0.0016 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 108 100 0.694 0.00835 0.527 0.0833 0.109 0.862 0.866 0.00216 0.00217 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 108 18117.072 0.005 0.0102 0.128 0.333 0.0915 0.121 0.346 0.425 0.000864 0.00106 ! 
Validation 108 18117.072 0.005 0.0111 0.685 0.907 0.0944 0.126 0.956 0.988 0.00239 0.00247 Wall time: 18117.072539945133 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 109 100 0.302 0.0101 0.101 0.0904 0.12 0.336 0.379 0.000839 0.000947 109 118 0.26 0.009 0.0805 0.0868 0.113 0.282 0.339 0.000705 0.000847 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 109 100 0.185 0.00811 0.0231 0.0824 0.108 0.164 0.181 0.000409 0.000454 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 109 18284.343 0.005 0.0104 0.161 0.37 0.0923 0.122 0.393 0.48 0.000982 0.0012 ! Validation 109 18284.343 0.005 0.0109 0.0534 0.272 0.0935 0.125 0.219 0.276 0.000547 0.00069 Wall time: 18284.343496088404 ! Best model 109 0.272 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 110 100 0.262 0.00987 0.0648 0.0896 0.119 0.248 0.304 0.000621 0.00076 110 118 0.274 0.01 0.0736 0.0904 0.119 0.301 0.324 0.000754 0.000809 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 110 100 0.321 0.00764 0.169 0.0796 0.104 0.482 0.49 0.0012 0.00123 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 110 18451.584 0.005 0.00997 0.0883 0.288 0.0902 0.119 0.283 0.355 0.000708 0.000887 ! Validation 110 18451.584 0.005 0.0104 0.194 0.402 0.0911 0.122 0.463 0.526 0.00116 0.00131 Wall time: 18451.58469338203 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 111 100 0.239 0.00936 0.052 0.0876 0.115 0.217 0.272 0.000543 0.00068 111 118 0.289 0.00977 0.0934 0.0899 0.118 0.303 0.365 0.000756 0.000912 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 111 100 0.169 0.00761 0.0163 0.0793 0.104 0.13 0.152 0.000326 0.000381 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 111 18618.806 0.005 0.0101 0.149 0.351 0.0908 0.12 0.368 0.461 0.000919 0.00115 ! Validation 111 18618.806 0.005 0.0105 0.0649 0.275 0.0916 0.122 0.25 0.304 0.000624 0.00076 Wall time: 18618.806290874258 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 112 100 0.534 0.00952 0.343 0.0886 0.116 0.664 0.699 0.00166 0.00175 112 118 0.667 0.00932 0.481 0.0875 0.115 0.795 0.828 0.00199 0.00207 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 112 100 0.466 0.0085 0.296 0.0836 0.11 0.644 0.65 0.00161 0.00162 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 112 18786.139 0.005 0.00999 0.172 0.372 0.0903 0.119 0.407 0.492 0.00102 0.00123 ! Validation 112 18786.139 0.005 0.0113 0.245 0.471 0.0948 0.127 0.537 0.591 0.00134 0.00148 Wall time: 18786.138994014356 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 113 100 0.231 0.01 0.0302 0.0904 0.12 0.172 0.208 0.00043 0.000519 113 118 0.239 0.0102 0.0353 0.0913 0.12 0.196 0.224 0.00049 0.00056 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 113 100 0.21 0.00751 0.0603 0.0792 0.103 0.279 0.293 0.000698 0.000732 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 113 18953.368 0.005 0.00984 0.111 0.307 0.0896 0.118 0.312 0.398 0.000779 0.000994 ! 
Validation 113 18953.368 0.005 0.0103 0.107 0.313 0.0908 0.121 0.336 0.39 0.000839 0.000976 Wall time: 18953.368637801148 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 114 100 0.221 0.00959 0.0298 0.0885 0.117 0.164 0.206 0.00041 0.000515 114 118 0.245 0.00859 0.0729 0.0841 0.111 0.28 0.322 0.000701 0.000806 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 114 100 0.162 0.00767 0.00818 0.0794 0.105 0.102 0.108 0.000256 0.00027 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 114 19120.587 0.005 0.0096 0.125 0.317 0.0885 0.117 0.329 0.423 0.000823 0.00106 ! Validation 114 19120.587 0.005 0.0103 0.0552 0.26 0.0904 0.121 0.223 0.28 0.000557 0.000701 Wall time: 19120.587096181232 ! Best model 114 0.260 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 115 100 0.215 0.00877 0.04 0.0848 0.112 0.189 0.239 0.000471 0.000597 115 118 0.289 0.00969 0.0949 0.0892 0.117 0.332 0.368 0.000831 0.000919 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 115 100 0.386 0.00749 0.237 0.079 0.103 0.576 0.58 0.00144 0.00145 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 115 19287.816 0.005 0.00934 0.0768 0.264 0.0873 0.115 0.266 0.331 0.000665 0.000826 ! Validation 115 19287.816 0.005 0.0102 0.307 0.51 0.0902 0.12 0.608 0.661 0.00152 0.00165 Wall time: 19287.816566231195 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 116 100 0.33 0.00872 0.156 0.0848 0.111 0.424 0.471 0.00106 0.00118 116 118 0.252 0.00957 0.0605 0.0886 0.117 0.243 0.294 0.000606 0.000734 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 116 100 0.219 0.00708 0.0772 0.0766 0.1 0.324 0.332 0.00081 0.000829 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 116 19455.029 0.005 0.00919 0.106 0.29 0.0865 0.114 0.316 0.39 0.00079 0.000974 ! Validation 116 19455.029 0.005 0.0097 0.158 0.352 0.088 0.118 0.397 0.474 0.000992 0.00119 Wall time: 19455.029688897077 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 117 100 0.348 0.00912 0.166 0.0862 0.114 0.451 0.486 0.00113 0.00122 117 118 0.429 0.00808 0.267 0.0826 0.107 0.573 0.617 0.00143 0.00154 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 117 100 0.641 0.00753 0.49 0.079 0.104 0.833 0.835 0.00208 0.00209 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 117 19622.358 0.005 0.0095 0.209 0.399 0.0881 0.116 0.453 0.545 0.00113 0.00136 ! Validation 117 19622.358 0.005 0.0101 0.49 0.691 0.0898 0.12 0.796 0.835 0.00199 0.00209 Wall time: 19622.35853730142 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 118 100 0.22 0.00874 0.0449 0.0845 0.112 0.211 0.253 0.000526 0.000632 118 118 0.236 0.00829 0.0698 0.0827 0.109 0.262 0.315 0.000655 0.000789 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 118 100 0.144 0.00698 0.00436 0.0759 0.0997 0.0489 0.0788 0.000122 0.000197 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 118 19789.587 0.005 0.00915 0.106 0.289 0.0864 0.114 0.316 0.39 0.00079 0.000975 ! Validation 118 19789.587 0.005 0.00952 0.0399 0.23 0.087 0.116 0.195 0.239 0.000487 0.000596 Wall time: 19789.587790810037 ! 
Best model 118 0.230 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 119 100 0.406 0.00871 0.231 0.0847 0.111 0.527 0.574 0.00132 0.00144 119 118 0.273 0.00921 0.0887 0.087 0.115 0.333 0.355 0.000832 0.000889 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 119 100 0.166 0.0072 0.022 0.0769 0.101 0.157 0.177 0.000392 0.000442 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 119 19956.819 0.005 0.00894 0.0996 0.278 0.0853 0.113 0.302 0.377 0.000756 0.000942 ! Validation 119 19956.819 0.005 0.00964 0.0669 0.26 0.0877 0.117 0.239 0.309 0.000596 0.000772 Wall time: 19956.819589279126 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 120 100 0.254 0.00865 0.0809 0.0839 0.111 0.279 0.34 0.000698 0.000849 120 118 0.863 0.00877 0.688 0.0851 0.112 0.954 0.99 0.00238 0.00247 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 120 100 0.202 0.00811 0.04 0.0815 0.108 0.22 0.239 0.000551 0.000597 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 120 20124.236 0.005 0.00883 0.112 0.289 0.0848 0.112 0.314 0.393 0.000785 0.000982 ! Validation 120 20124.236 0.005 0.0104 0.139 0.347 0.0911 0.122 0.381 0.445 0.000953 0.00111 Wall time: 20124.236319039017 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 121 100 0.275 0.00851 0.105 0.0832 0.11 0.332 0.387 0.000831 0.000966 121 118 0.246 0.00915 0.0632 0.0873 0.114 0.288 0.3 0.000719 0.00075 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 121 100 0.208 0.00661 0.0756 0.0739 0.097 0.32 0.328 0.0008 0.00082 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 121 20291.466 0.005 0.00895 0.106 0.285 0.0854 0.113 0.308 0.389 0.00077 0.000973 ! Validation 121 20291.466 0.005 0.00907 0.113 0.294 0.0849 0.114 0.35 0.401 0.000874 0.001 Wall time: 20291.46693757642 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 122 100 0.189 0.00793 0.0302 0.0806 0.106 0.163 0.208 0.000408 0.000519 122 118 0.35 0.00976 0.155 0.0895 0.118 0.421 0.469 0.00105 0.00117 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 122 100 0.867 0.00921 0.683 0.0875 0.115 0.985 0.987 0.00246 0.00247 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 122 20459.118 0.005 0.00855 0.104 0.275 0.0834 0.11 0.302 0.385 0.000755 0.000963 ! Validation 122 20459.118 0.005 0.0116 0.666 0.898 0.0975 0.129 0.942 0.974 0.00236 0.00244 Wall time: 20459.118819710333 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 123 100 0.214 0.00838 0.0461 0.0822 0.109 0.209 0.256 0.000522 0.000641 123 118 0.394 0.0073 0.248 0.0781 0.102 0.578 0.595 0.00144 0.00149 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 123 100 0.445 0.00635 0.318 0.0726 0.0951 0.67 0.673 0.00168 0.00168 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 123 20626.534 0.005 0.0088 0.116 0.292 0.0847 0.112 0.317 0.406 0.000792 0.00101 ! 
Validation 123 20626.534 0.005 0.00891 0.303 0.481 0.0841 0.113 0.617 0.657 0.00154 0.00164 Wall time: 20626.5347535084 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 124 100 0.19 0.00789 0.0321 0.0801 0.106 0.167 0.214 0.000417 0.000535 124 118 0.237 0.0088 0.0606 0.0845 0.112 0.249 0.294 0.000621 0.000735 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 124 100 0.136 0.00641 0.00779 0.0729 0.0956 0.101 0.105 0.000253 0.000263 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 124 20794.107 0.005 0.00863 0.156 0.328 0.0838 0.111 0.366 0.472 0.000915 0.00118 ! Validation 124 20794.107 0.005 0.00893 0.0419 0.221 0.0843 0.113 0.196 0.244 0.000489 0.000611 Wall time: 20794.107730623335 ! Best model 124 0.221 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 125 100 0.405 0.00856 0.234 0.0827 0.11 0.533 0.578 0.00133 0.00144 125 118 0.206 0.00885 0.0292 0.0841 0.112 0.162 0.204 0.000404 0.00051 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 125 100 0.453 0.00651 0.323 0.0736 0.0963 0.676 0.678 0.00169 0.0017 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 125 20961.561 0.005 0.00821 0.0852 0.249 0.0817 0.108 0.277 0.349 0.000692 0.000873 ! Validation 125 20961.561 0.005 0.00891 0.32 0.498 0.0842 0.113 0.637 0.675 0.00159 0.00169 Wall time: 20961.56106079137 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 126 100 0.187 0.00756 0.036 0.0786 0.104 0.179 0.226 0.000448 0.000566 126 118 0.291 0.00891 0.113 0.0846 0.113 0.366 0.402 0.000915 0.001 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 126 100 0.22 0.0062 0.0956 0.0711 0.094 0.364 0.369 0.00091 0.000922 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 126 21129.087 0.005 0.0083 0.0899 0.256 0.0821 0.109 0.279 0.357 0.000697 0.000894 ! Validation 126 21129.087 0.005 0.00856 0.151 0.322 0.0823 0.11 0.415 0.464 0.00104 0.00116 Wall time: 21129.087663974147 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 127 100 0.222 0.00844 0.0529 0.0818 0.11 0.236 0.274 0.00059 0.000686 127 118 0.182 0.00796 0.0228 0.0801 0.106 0.15 0.18 0.000374 0.000451 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 127 100 0.141 0.00619 0.0167 0.0713 0.0939 0.139 0.154 0.000348 0.000385 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 127 21296.497 0.005 0.00805 0.109 0.27 0.0808 0.107 0.315 0.395 0.000789 0.000988 ! Validation 127 21296.497 0.005 0.00843 0.0429 0.212 0.0817 0.11 0.193 0.247 0.000482 0.000618 Wall time: 21296.497893372085 ! Best model 127 0.212 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 128 100 0.191 0.00791 0.0328 0.0804 0.106 0.175 0.216 0.000437 0.00054 128 118 0.169 0.00733 0.0226 0.078 0.102 0.157 0.179 0.000392 0.000448 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 128 100 0.174 0.00606 0.0531 0.0707 0.0929 0.266 0.275 0.000666 0.000687 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 128 21463.930 0.005 0.00786 0.0876 0.245 0.0799 0.106 0.286 0.354 0.000714 0.000885 ! 
Validation 128 21463.930 0.005 0.00836 0.069 0.236 0.0815 0.109 0.255 0.314 0.000637 0.000784 Wall time: 21463.930638383143 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 129 100 0.302 0.0081 0.14 0.0811 0.107 0.399 0.447 0.000998 0.00112 129 118 0.165 0.0076 0.0131 0.0784 0.104 0.114 0.136 0.000286 0.000341 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 129 100 0.134 0.00603 0.013 0.07 0.0927 0.113 0.136 0.000283 0.000341 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 129 21631.344 0.005 0.00782 0.101 0.257 0.0796 0.106 0.306 0.38 0.000764 0.00095 ! Validation 129 21631.344 0.005 0.0083 0.0497 0.216 0.081 0.109 0.219 0.266 0.000548 0.000665 Wall time: 21631.34449160425 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 130 100 0.173 0.00772 0.0182 0.0795 0.105 0.121 0.161 0.000303 0.000403 130 118 0.215 0.00741 0.0667 0.0782 0.103 0.282 0.308 0.000704 0.000771 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 130 100 0.255 0.00618 0.131 0.0711 0.0938 0.426 0.432 0.00107 0.00108 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 130 21798.759 0.005 0.00774 0.0971 0.252 0.0792 0.105 0.3 0.372 0.00075 0.000931 ! Validation 130 21798.759 0.005 0.00829 0.146 0.312 0.0811 0.109 0.406 0.456 0.00102 0.00114 Wall time: 21798.759365015198 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 131 100 0.232 0.00765 0.0787 0.0786 0.104 0.3 0.335 0.000751 0.000837 131 118 0.182 0.00736 0.035 0.076 0.102 0.17 0.223 0.000426 0.000558 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 131 100 0.174 0.00585 0.0571 0.0687 0.0913 0.276 0.285 0.00069 0.000713 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 131 21966.280 0.005 0.00769 0.109 0.263 0.079 0.105 0.32 0.394 0.000801 0.000986 ! Validation 131 21966.280 0.005 0.00814 0.0934 0.256 0.0802 0.108 0.315 0.365 0.000788 0.000912 Wall time: 21966.280871197116 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 132 100 0.203 0.00729 0.0567 0.077 0.102 0.242 0.284 0.000606 0.00071 132 118 0.198 0.00675 0.0635 0.0739 0.098 0.266 0.301 0.000664 0.000752 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 132 100 0.125 0.00601 0.00535 0.0696 0.0925 0.0632 0.0873 0.000158 0.000218 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 132 22133.707 0.005 0.0075 0.0853 0.235 0.0779 0.103 0.285 0.349 0.000712 0.000872 ! Validation 132 22133.707 0.005 0.00821 0.0357 0.2 0.0806 0.108 0.182 0.226 0.000454 0.000564 Wall time: 22133.707500743214 ! Best model 132 0.200 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 133 100 0.169 0.00705 0.0277 0.0756 0.1 0.158 0.199 0.000394 0.000496 133 118 0.314 0.00748 0.164 0.0776 0.103 0.467 0.484 0.00117 0.00121 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 133 100 0.186 0.00569 0.0725 0.0679 0.09 0.316 0.321 0.00079 0.000803 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 133 22301.242 0.005 0.00753 0.0948 0.245 0.0781 0.104 0.287 0.367 0.000718 0.000917 ! 
Validation 133 22301.242 0.005 0.00794 0.0955 0.254 0.0791 0.106 0.318 0.369 0.000794 0.000922 Wall time: 22301.24287525518 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 134 100 0.236 0.00783 0.0798 0.0792 0.106 0.281 0.337 0.000702 0.000843 134 118 0.18 0.00682 0.0438 0.075 0.0986 0.233 0.25 0.000582 0.000625 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 134 100 0.117 0.00559 0.00518 0.0671 0.0892 0.0596 0.0859 0.000149 0.000215 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 134 22468.743 0.005 0.00738 0.0892 0.237 0.0773 0.103 0.292 0.357 0.000731 0.000893 ! Validation 134 22468.743 0.005 0.00775 0.0354 0.19 0.078 0.105 0.183 0.224 0.000458 0.000561 Wall time: 22468.743268524297 ! Best model 134 0.190 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 135 100 0.183 0.00739 0.035 0.0772 0.103 0.165 0.223 0.000412 0.000558 135 118 0.2 0.00677 0.065 0.0745 0.0982 0.24 0.304 0.0006 0.000761 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 135 100 0.249 0.00595 0.13 0.069 0.0921 0.426 0.431 0.00107 0.00108 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 135 22640.420 0.005 0.0073 0.0991 0.245 0.0768 0.102 0.307 0.376 0.000769 0.00094 ! Validation 135 22640.420 0.005 0.00795 0.126 0.285 0.0791 0.106 0.371 0.423 0.000929 0.00106 Wall time: 22640.420629990287 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 136 100 0.203 0.00671 0.0685 0.0741 0.0978 0.274 0.312 0.000685 0.000781 136 118 0.183 0.00683 0.0461 0.0743 0.0986 0.225 0.256 0.000563 0.000641 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 136 100 0.134 0.00552 0.0233 0.0668 0.0887 0.17 0.182 0.000425 0.000455 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 136 22808.016 0.005 0.00715 0.0682 0.211 0.076 0.101 0.253 0.312 0.000633 0.00078 ! Validation 136 22808.016 0.005 0.00764 0.0588 0.212 0.0776 0.104 0.229 0.289 0.000572 0.000724 Wall time: 22808.016607451253 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 137 100 0.323 0.00708 0.182 0.0761 0.1 0.472 0.509 0.00118 0.00127 137 118 0.322 0.00708 0.181 0.0757 0.1 0.452 0.507 0.00113 0.00127 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 137 100 0.322 0.00561 0.21 0.0676 0.0894 0.544 0.547 0.00136 0.00137 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 137 22975.513 0.005 0.00721 0.118 0.262 0.0763 0.101 0.334 0.409 0.000835 0.00102 ! Validation 137 22975.513 0.005 0.00774 0.236 0.391 0.078 0.105 0.531 0.58 0.00133 0.00145 Wall time: 22975.513583051972 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 138 100 0.239 0.00691 0.101 0.0743 0.0992 0.324 0.379 0.000809 0.000948 138 118 0.162 0.00711 0.0197 0.076 0.101 0.142 0.168 0.000356 0.000419 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 138 100 0.114 0.00527 0.00875 0.0653 0.0866 0.0959 0.112 0.00024 0.000279 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 138 23143.009 0.005 0.00711 0.0801 0.222 0.0757 0.101 0.272 0.339 0.00068 0.000847 ! 
Validation 138 23143.009 0.005 0.00743 0.0375 0.186 0.0763 0.103 0.189 0.231 0.000472 0.000578 Wall time: 23143.009526565205 ! Best model 138 0.186 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 139 100 0.192 0.0072 0.0478 0.076 0.101 0.208 0.261 0.00052 0.000652 139 118 0.142 0.00657 0.01 0.0726 0.0968 0.0939 0.12 0.000235 0.000299 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 139 100 0.158 0.00548 0.0488 0.0663 0.0883 0.257 0.264 0.000642 0.000659 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 139 23310.552 0.005 0.0069 0.0693 0.207 0.0745 0.0992 0.256 0.315 0.000639 0.000788 ! Validation 139 23310.552 0.005 0.00748 0.0862 0.236 0.0766 0.103 0.302 0.35 0.000755 0.000876 Wall time: 23310.55246903142 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 140 100 0.208 0.00681 0.0714 0.0738 0.0985 0.269 0.319 0.000672 0.000797 140 118 0.169 0.00673 0.0344 0.0737 0.0979 0.202 0.221 0.000505 0.000553 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 140 100 0.113 0.00549 0.00321 0.066 0.0884 0.0573 0.0677 0.000143 0.000169 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 140 23478.063 0.005 0.00704 0.11 0.25 0.0754 0.1 0.318 0.396 0.000794 0.00099 ! Validation 140 23478.063 0.005 0.00746 0.0296 0.179 0.0765 0.103 0.167 0.205 0.000417 0.000513 Wall time: 23478.063379858155 ! Best model 140 0.179 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 141 100 0.15 0.00619 0.026 0.071 0.0939 0.153 0.193 0.000383 0.000481 141 118 0.182 0.00593 0.0632 0.0691 0.0919 0.257 0.3 0.000642 0.00075 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 141 100 0.11 0.00513 0.00739 0.0639 0.0855 0.0832 0.103 0.000208 0.000257 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 141 23645.685 0.005 0.00678 0.067 0.203 0.0738 0.0983 0.241 0.309 0.000604 0.000772 ! Validation 141 23645.685 0.005 0.00713 0.0293 0.172 0.0746 0.101 0.166 0.204 0.000416 0.000511 Wall time: 23645.685093481094 ! Best model 141 0.172 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 142 100 0.216 0.00667 0.0829 0.0731 0.0975 0.292 0.344 0.000729 0.000859 142 118 0.191 0.00677 0.0556 0.074 0.0982 0.234 0.281 0.000586 0.000703 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 142 100 0.141 0.00503 0.04 0.0635 0.0847 0.232 0.239 0.000581 0.000597 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 142 23813.224 0.005 0.00667 0.0791 0.212 0.0731 0.0974 0.259 0.336 0.000647 0.00084 ! Validation 142 23813.224 0.005 0.00712 0.0652 0.207 0.0746 0.101 0.249 0.305 0.000622 0.000762 Wall time: 23813.224482114427 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 143 100 0.519 0.00778 0.364 0.0801 0.105 0.696 0.72 0.00174 0.0018 143 118 0.244 0.00925 0.0591 0.085 0.115 0.257 0.29 0.000643 0.000726 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 143 100 0.426 0.00518 0.322 0.0647 0.0859 0.676 0.678 0.00169 0.00169 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 143 23985.182 0.005 0.00699 0.141 0.281 0.075 0.0997 0.369 0.449 0.000922 0.00112 ! 
Validation 143 23985.182 0.005 0.00738 0.298 0.446 0.0761 0.103 0.617 0.651 0.00154 0.00163 Wall time: 23985.1826822944 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 144 100 0.158 0.00656 0.0267 0.0722 0.0967 0.157 0.195 0.000392 0.000488 144 118 0.137 0.00658 0.00531 0.0717 0.0968 0.0723 0.087 0.000181 0.000217 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 144 100 0.123 0.00494 0.0237 0.063 0.0839 0.176 0.184 0.000441 0.00046 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 144 24152.659 0.005 0.0067 0.0705 0.204 0.0734 0.0977 0.245 0.318 0.000612 0.000795 ! Validation 144 24152.659 0.005 0.00694 0.0527 0.192 0.0735 0.0995 0.222 0.274 0.000554 0.000685 Wall time: 24152.659754903987 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 145 100 0.269 0.00645 0.14 0.0721 0.0958 0.42 0.446 0.00105 0.00112 145 118 0.14 0.00618 0.0162 0.0712 0.0939 0.122 0.152 0.000304 0.00038 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 145 100 0.102 0.00494 0.00358 0.0627 0.0838 0.0642 0.0714 0.000161 0.000179 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 145 24320.262 0.005 0.00647 0.0681 0.198 0.072 0.096 0.251 0.312 0.000628 0.000781 ! Validation 145 24320.262 0.005 0.00692 0.0292 0.168 0.0734 0.0993 0.162 0.204 0.000405 0.00051 Wall time: 24320.262760837097 ! Best model 145 0.168 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 146 100 0.144 0.00613 0.0218 0.0705 0.0935 0.137 0.176 0.000342 0.000441 146 118 0.165 0.00659 0.0336 0.0726 0.0969 0.197 0.219 0.000491 0.000547 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 146 100 0.132 0.0048 0.0362 0.0621 0.0827 0.223 0.227 0.000558 0.000568 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 146 24487.757 0.005 0.00643 0.0766 0.205 0.0717 0.0957 0.268 0.331 0.00067 0.000827 ! Validation 146 24487.757 0.005 0.00684 0.0736 0.21 0.073 0.0987 0.278 0.324 0.000695 0.00081 Wall time: 24487.757544776425 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 147 100 0.233 0.00633 0.106 0.0709 0.0949 0.351 0.389 0.000877 0.000973 147 118 0.173 0.0064 0.0448 0.0718 0.0954 0.219 0.253 0.000547 0.000631 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 147 100 0.108 0.00496 0.00917 0.0627 0.084 0.0983 0.114 0.000246 0.000286 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 147 24655.282 0.005 0.00641 0.0802 0.208 0.0716 0.0955 0.274 0.338 0.000685 0.000846 ! Validation 147 24655.282 0.005 0.00685 0.0357 0.173 0.0731 0.0988 0.178 0.225 0.000444 0.000564 Wall time: 24655.28263354022 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 148 100 0.2 0.00634 0.0737 0.0712 0.095 0.278 0.324 0.000694 0.00081 148 118 0.178 0.00671 0.0437 0.0732 0.0978 0.213 0.249 0.000534 0.000623 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 148 100 0.208 0.00476 0.113 0.0619 0.0824 0.398 0.401 0.000996 0.001 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 148 24822.807 0.005 0.00628 0.0697 0.195 0.0709 0.0946 0.255 0.316 0.000638 0.000789 ! 
Validation 148 24822.807 0.005 0.00679 0.145 0.28 0.0728 0.0984 0.41 0.454 0.00102 0.00113 Wall time: 24822.807908348273 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 149 100 0.205 0.00632 0.0782 0.0714 0.0949 0.298 0.334 0.000744 0.000835 149 118 0.165 0.00523 0.06 0.0655 0.0863 0.253 0.292 0.000632 0.000731 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 149 100 0.109 0.00449 0.0193 0.0599 0.08 0.16 0.166 0.0004 0.000415 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 149 24991.059 0.005 0.00622 0.0678 0.192 0.0706 0.0942 0.25 0.311 0.000626 0.000777 ! Validation 149 24991.059 0.005 0.00663 0.0393 0.172 0.0717 0.0972 0.188 0.237 0.00047 0.000592 Wall time: 24991.059456220362 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 150 100 0.144 0.00613 0.0216 0.07 0.0934 0.147 0.176 0.000368 0.000439 150 118 0.229 0.00555 0.118 0.0672 0.0889 0.37 0.41 0.000924 0.00102 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 150 100 0.0991 0.00483 0.0024 0.0622 0.083 0.0465 0.0584 0.000116 0.000146 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 150 25158.734 0.005 0.0067 0.125 0.259 0.0733 0.0977 0.329 0.421 0.000822 0.00105 ! Validation 150 25158.734 0.005 0.00681 0.0287 0.165 0.0728 0.0985 0.161 0.202 0.000402 0.000506 Wall time: 25158.734218377154 ! Best model 150 0.165 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 151 100 0.14 0.00587 0.023 0.0684 0.0914 0.147 0.181 0.000367 0.000453 151 118 0.133 0.00592 0.0149 0.0689 0.0918 0.118 0.145 0.000296 0.000364 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 151 100 0.0923 0.00453 0.00176 0.06 0.0803 0.043 0.05 0.000108 0.000125 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 151 25326.299 0.005 0.0061 0.0585 0.18 0.0698 0.0932 0.234 0.289 0.000584 0.000723 ! Validation 151 25326.299 0.005 0.00647 0.0248 0.154 0.0708 0.096 0.15 0.188 0.000374 0.00047 Wall time: 25326.299103798345 ! Best model 151 0.154 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 152 100 0.146 0.00629 0.0201 0.0713 0.0946 0.137 0.169 0.000343 0.000423 152 118 0.165 0.0058 0.049 0.068 0.0909 0.203 0.264 0.000508 0.00066 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 152 100 0.105 0.00475 0.01 0.0612 0.0822 0.106 0.12 0.000266 0.000299 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 152 25493.874 0.005 0.00612 0.098 0.22 0.0699 0.0934 0.305 0.374 0.000762 0.000936 ! Validation 152 25493.874 0.005 0.00664 0.0288 0.162 0.0718 0.0972 0.162 0.203 0.000404 0.000507 Wall time: 25493.87423438439 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 153 100 0.158 0.00611 0.0356 0.0694 0.0933 0.175 0.225 0.000439 0.000563 153 118 0.118 0.00533 0.0109 0.0662 0.0872 0.0916 0.125 0.000229 0.000312 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 153 100 0.0945 0.00458 0.00287 0.0603 0.0808 0.0556 0.0639 0.000139 0.00016 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 153 25661.482 0.005 0.00597 0.0528 0.172 0.069 0.0922 0.216 0.275 0.00054 0.000688 ! 
Validation 153 25661.482 0.005 0.0064 0.0271 0.155 0.0705 0.0955 0.157 0.196 0.000393 0.000491 Wall time: 25661.482486329973 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 154 100 0.152 0.00638 0.0242 0.0714 0.0953 0.152 0.186 0.00038 0.000464 154 118 0.148 0.00586 0.0304 0.0689 0.0914 0.153 0.208 0.000382 0.00052 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 154 100 0.272 0.00456 0.181 0.0599 0.0806 0.505 0.507 0.00126 0.00127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 154 25829.048 0.005 0.00599 0.0875 0.207 0.0691 0.0924 0.28 0.354 0.000701 0.000885 ! Validation 154 25829.048 0.005 0.00646 0.193 0.323 0.0708 0.096 0.49 0.525 0.00123 0.00131 Wall time: 25829.04864113126 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 155 100 0.399 0.00587 0.282 0.0682 0.0915 0.615 0.633 0.00154 0.00158 155 118 0.374 0.00651 0.244 0.0724 0.0963 0.558 0.59 0.0014 0.00147 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 155 100 0.32 0.00506 0.218 0.0632 0.0849 0.556 0.558 0.00139 0.00139 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 155 25996.720 0.005 0.00591 0.0696 0.188 0.0686 0.0917 0.252 0.312 0.00063 0.000781 ! Validation 155 25996.720 0.005 0.0068 0.209 0.345 0.073 0.0984 0.506 0.545 0.00126 0.00136 Wall time: 25996.72077461239 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 156 100 0.142 0.00591 0.0239 0.0687 0.0917 0.155 0.185 0.000386 0.000462 156 118 0.204 0.00547 0.095 0.0668 0.0883 0.337 0.368 0.000842 0.00092 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 156 100 0.107 0.00494 0.00793 0.0624 0.0839 0.09 0.106 0.000225 0.000266 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 156 26164.579 0.005 0.00597 0.0835 0.203 0.0691 0.0923 0.279 0.345 0.000697 0.000862 ! Validation 156 26164.579 0.005 0.00664 0.0311 0.164 0.0721 0.0972 0.17 0.21 0.000424 0.000526 Wall time: 26164.579566477332 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 157 100 0.185 0.00625 0.0604 0.0708 0.0944 0.248 0.293 0.000619 0.000733 157 118 0.206 0.00729 0.0599 0.0759 0.102 0.259 0.292 0.000648 0.00073 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 157 100 0.106 0.0044 0.0177 0.0595 0.0792 0.151 0.159 0.000378 0.000397 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 157 26332.051 0.005 0.00583 0.065 0.182 0.0681 0.091 0.239 0.304 0.000599 0.000761 ! Validation 157 26332.051 0.005 0.00635 0.0341 0.161 0.0703 0.0951 0.176 0.221 0.000441 0.000551 Wall time: 26332.051970203407 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 158 100 0.162 0.00566 0.049 0.0674 0.0898 0.224 0.264 0.00056 0.000661 158 118 0.127 0.00582 0.0108 0.0687 0.091 0.109 0.124 0.000273 0.00031 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 158 100 0.0945 0.00435 0.00754 0.0588 0.0787 0.0916 0.104 0.000229 0.000259 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 158 26499.528 0.005 0.00578 0.065 0.181 0.0679 0.0908 0.249 0.305 0.000622 0.000763 ! 
Validation 158 26499.528 0.005 0.00615 0.0284 0.151 0.069 0.0936 0.16 0.201 0.0004 0.000502 Wall time: 26499.528177629225 ! Best model 158 0.151 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 159 100 0.139 0.00612 0.0168 0.0696 0.0934 0.117 0.155 0.000292 0.000387 159 118 0.249 0.00566 0.136 0.0679 0.0898 0.424 0.44 0.00106 0.0011 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 159 100 0.237 0.00425 0.152 0.0579 0.0778 0.464 0.466 0.00116 0.00116 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 159 26667.053 0.005 0.00572 0.0593 0.174 0.0675 0.0902 0.236 0.289 0.00059 0.000723 ! Validation 159 26667.053 0.005 0.00603 0.15 0.27 0.0682 0.0927 0.426 0.462 0.00106 0.00115 Wall time: 26667.0534506212 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 160 100 0.125 0.00538 0.0172 0.0655 0.0875 0.123 0.157 0.000307 0.000392 160 118 0.348 0.00547 0.238 0.0654 0.0883 0.53 0.582 0.00133 0.00146 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 160 100 0.11 0.00444 0.0207 0.0593 0.0796 0.169 0.172 0.000422 0.00043 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 160 26834.636 0.005 0.00565 0.0703 0.183 0.0671 0.0897 0.25 0.314 0.000625 0.000784 ! Validation 160 26834.636 0.005 0.00623 0.035 0.16 0.0696 0.0942 0.173 0.223 0.000433 0.000559 Wall time: 26834.636298148427 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 161 100 0.124 0.00539 0.016 0.0655 0.0876 0.12 0.151 0.000299 0.000377 161 118 0.16 0.0052 0.0562 0.0642 0.0861 0.223 0.283 0.000556 0.000707 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 161 100 0.126 0.00432 0.0398 0.0586 0.0785 0.232 0.238 0.000581 0.000595 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 161 27002.139 0.005 0.00583 0.0863 0.203 0.0682 0.0911 0.272 0.351 0.000679 0.000878 ! Validation 161 27002.139 0.005 0.00608 0.0521 0.174 0.0687 0.0931 0.229 0.272 0.000574 0.000681 Wall time: 27002.139543771278 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 162 100 0.123 0.0055 0.013 0.0658 0.0885 0.106 0.136 0.000265 0.00034 162 118 0.162 0.00545 0.0528 0.0658 0.0881 0.247 0.274 0.000618 0.000685 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 162 100 0.138 0.00412 0.0555 0.057 0.0766 0.277 0.281 0.000692 0.000703 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 162 27169.778 0.005 0.00557 0.0569 0.168 0.0666 0.0891 0.225 0.285 0.000562 0.000712 ! Validation 162 27169.778 0.005 0.00586 0.061 0.178 0.0672 0.0913 0.254 0.295 0.000635 0.000737 Wall time: 27169.77902379725 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 163 100 0.151 0.00555 0.0396 0.0665 0.0889 0.19 0.238 0.000476 0.000594 163 118 0.122 0.00509 0.0204 0.0633 0.0852 0.142 0.171 0.000354 0.000427 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 163 100 0.0834 0.00405 0.00227 0.0565 0.076 0.046 0.0568 0.000115 0.000142 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 163 27337.273 0.005 0.00546 0.0515 0.161 0.0659 0.0882 0.219 0.271 0.000548 0.000678 ! 
Validation 163 27337.273 0.005 0.00585 0.0223 0.139 0.0673 0.0913 0.144 0.178 0.00036 0.000445 Wall time: 27337.27379459422 ! Best model 163 0.139 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 164 100 0.124 0.00542 0.0158 0.0656 0.0879 0.121 0.15 0.000303 0.000375 164 118 0.179 0.00515 0.0762 0.0641 0.0857 0.285 0.329 0.000713 0.000824 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 164 100 0.114 0.00415 0.0309 0.0574 0.0769 0.204 0.21 0.000509 0.000525 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 164 27504.914 0.005 0.00551 0.0728 0.183 0.0662 0.0886 0.261 0.322 0.000652 0.000805 ! Validation 164 27504.914 0.005 0.00584 0.0442 0.161 0.0672 0.0912 0.209 0.251 0.000522 0.000628 Wall time: 27504.914948088117 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 165 100 0.136 0.00576 0.0209 0.0674 0.0906 0.138 0.173 0.000344 0.000431 165 118 0.182 0.00601 0.0624 0.0687 0.0925 0.271 0.298 0.000678 0.000745 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 165 100 0.0887 0.00435 0.00169 0.0587 0.0787 0.043 0.0491 0.000108 0.000123 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 165 27672.451 0.005 0.00542 0.0714 0.18 0.0656 0.0878 0.257 0.319 0.000642 0.000798 ! Validation 165 27672.451 0.005 0.0061 0.0278 0.15 0.0687 0.0932 0.157 0.199 0.000392 0.000498 Wall time: 27672.451660702005 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 166 100 0.131 0.00527 0.0259 0.0643 0.0866 0.151 0.192 0.000377 0.00048 166 118 0.147 0.0058 0.0311 0.0667 0.0909 0.194 0.211 0.000486 0.000526 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 166 100 0.0821 0.00395 0.00314 0.0558 0.075 0.0504 0.0669 0.000126 0.000167 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 166 27839.959 0.005 0.00542 0.0575 0.166 0.0656 0.0878 0.227 0.287 0.000568 0.000717 ! Validation 166 27839.959 0.005 0.00569 0.0229 0.137 0.0662 0.0901 0.147 0.18 0.000367 0.000451 Wall time: 27839.959808997344 ! Best model 166 0.137 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 167 100 0.228 0.00518 0.125 0.0644 0.0859 0.389 0.422 0.000973 0.00105 167 118 0.137 0.00564 0.0243 0.0671 0.0897 0.159 0.186 0.000397 0.000465 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 167 100 0.184 0.00415 0.101 0.057 0.0769 0.378 0.38 0.000945 0.00095 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 167 28007.465 0.005 0.00538 0.0852 0.193 0.0654 0.0875 0.287 0.349 0.000718 0.000873 ! Validation 167 28007.465 0.005 0.00581 0.143 0.26 0.0669 0.091 0.417 0.452 0.00104 0.00113 Wall time: 28007.465776593424 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 168 100 0.127 0.00552 0.0171 0.0659 0.0886 0.123 0.156 0.000309 0.00039 168 118 0.116 0.00494 0.0175 0.0624 0.0839 0.144 0.158 0.00036 0.000394 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 168 100 0.0831 0.00399 0.00322 0.0558 0.0754 0.0537 0.0677 0.000134 0.000169 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 168 28174.959 0.005 0.00526 0.0527 0.158 0.0647 0.0866 0.219 0.275 0.000548 0.000687 ! 
Validation 168 28174.959 0.005 0.00559 0.0236 0.135 0.0656 0.0892 0.144 0.184 0.00036 0.000459 Wall time: 28174.95953605231 ! Best model 168 0.135 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 169 100 0.208 0.00483 0.111 0.0622 0.0829 0.37 0.398 0.000925 0.000995 169 118 0.147 0.00564 0.0344 0.0663 0.0896 0.182 0.221 0.000455 0.000553 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 169 100 0.129 0.00421 0.0446 0.0571 0.0775 0.247 0.252 0.000617 0.00063 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 169 28342.554 0.005 0.00534 0.0805 0.187 0.0652 0.0872 0.276 0.339 0.000689 0.000848 ! Validation 169 28342.554 0.005 0.00579 0.0666 0.182 0.0669 0.0908 0.266 0.308 0.000666 0.00077 Wall time: 28342.554968371987 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 170 100 0.209 0.00506 0.108 0.0637 0.0849 0.355 0.392 0.000888 0.00098 170 118 0.223 0.00494 0.124 0.0631 0.0839 0.399 0.421 0.000997 0.00105 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 170 100 0.0988 0.0047 0.0048 0.0609 0.0818 0.067 0.0827 0.000167 0.000207 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 170 28510.176 0.005 0.00522 0.0681 0.173 0.0644 0.0863 0.253 0.311 0.000632 0.000777 ! Validation 170 28510.176 0.005 0.00621 0.0249 0.149 0.0698 0.0941 0.154 0.188 0.000384 0.000471 Wall time: 28510.17651259899 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 171 100 0.305 0.00524 0.201 0.0644 0.0864 0.52 0.535 0.0013 0.00134 171 118 0.154 0.00542 0.0457 0.0655 0.0879 0.226 0.255 0.000564 0.000638 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 171 100 0.136 0.00412 0.0536 0.0571 0.0766 0.273 0.276 0.000682 0.000691 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 171 28677.715 0.005 0.00515 0.057 0.16 0.0639 0.0857 0.232 0.285 0.000579 0.000713 ! Validation 171 28677.715 0.005 0.00578 0.065 0.181 0.0668 0.0907 0.259 0.304 0.000648 0.000761 Wall time: 28677.715894418303 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 172 100 0.127 0.00497 0.0274 0.0627 0.0842 0.162 0.198 0.000405 0.000494 172 118 0.115 0.00454 0.0243 0.0615 0.0804 0.163 0.186 0.000408 0.000465 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 172 100 0.188 0.0039 0.11 0.0555 0.0745 0.393 0.396 0.000983 0.000991 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 172 28845.260 0.005 0.00506 0.0461 0.147 0.0634 0.085 0.208 0.257 0.00052 0.000642 ! Validation 172 28845.260 0.005 0.00552 0.155 0.266 0.0654 0.0887 0.44 0.47 0.0011 0.00118 Wall time: 28845.26090982929 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 173 100 0.12 0.00466 0.0269 0.0615 0.0815 0.154 0.196 0.000386 0.000489 173 118 0.375 0.00612 0.253 0.0698 0.0934 0.578 0.6 0.00144 0.0015 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 173 100 0.345 0.00437 0.258 0.0588 0.0789 0.605 0.606 0.00151 0.00152 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 173 29012.812 0.005 0.00508 0.063 0.165 0.0635 0.085 0.239 0.297 0.000598 0.000741 ! 
Validation 173 29012.812 0.005 0.0059 0.311 0.429 0.0679 0.0917 0.642 0.665 0.00161 0.00166 Wall time: 29012.8123474163 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 174 100 0.12 0.00494 0.0213 0.0629 0.0839 0.148 0.174 0.00037 0.000436 174 118 0.118 0.00518 0.0145 0.0641 0.0859 0.109 0.144 0.000274 0.000359 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 174 100 0.0807 0.0038 0.00473 0.0544 0.0736 0.0691 0.0821 0.000173 0.000205 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 174 29180.462 0.005 0.0052 0.0788 0.183 0.0643 0.0861 0.272 0.336 0.00068 0.00084 ! Validation 174 29180.462 0.005 0.00539 0.0282 0.136 0.0643 0.0876 0.156 0.2 0.000391 0.000501 Wall time: 29180.462871633004 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 175 100 0.158 0.00515 0.0546 0.0639 0.0856 0.232 0.279 0.00058 0.000697 175 118 0.127 0.00488 0.0296 0.0627 0.0834 0.17 0.205 0.000425 0.000513 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 175 100 0.0812 0.0039 0.00314 0.0555 0.0746 0.0579 0.0669 0.000145 0.000167 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 175 29348.009 0.005 0.00501 0.058 0.158 0.063 0.0845 0.232 0.288 0.00058 0.00072 ! Validation 175 29348.009 0.005 0.00552 0.0218 0.132 0.0654 0.0887 0.139 0.176 0.000348 0.000441 Wall time: 29348.00976965204 ! Best model 175 0.132 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 176 100 0.146 0.00476 0.051 0.0616 0.0824 0.226 0.27 0.000566 0.000674 176 118 0.154 0.00502 0.0539 0.0622 0.0845 0.229 0.277 0.000574 0.000693 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 176 100 0.103 0.00452 0.0127 0.0602 0.0803 0.121 0.135 0.000303 0.000337 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 176 29515.706 0.005 0.00495 0.0496 0.149 0.0627 0.084 0.208 0.266 0.00052 0.000664 ! Validation 176 29515.706 0.005 0.00597 0.0556 0.175 0.0686 0.0922 0.231 0.281 0.000578 0.000703 Wall time: 29515.706722578034 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 177 100 0.119 0.0049 0.0212 0.0616 0.0835 0.145 0.174 0.000363 0.000435 177 118 0.223 0.00414 0.14 0.0584 0.0768 0.429 0.447 0.00107 0.00112 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 177 100 0.397 0.00379 0.321 0.0546 0.0734 0.675 0.676 0.00169 0.00169 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 177 29683.220 0.005 0.00488 0.0513 0.149 0.0623 0.0834 0.218 0.269 0.000544 0.000672 ! Validation 177 29683.220 0.005 0.00539 0.33 0.438 0.0645 0.0876 0.665 0.686 0.00166 0.00171 Wall time: 29683.220128229354 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 178 100 0.149 0.00491 0.0511 0.0627 0.0836 0.236 0.27 0.00059 0.000674 178 118 0.136 0.00507 0.035 0.063 0.085 0.204 0.223 0.000511 0.000558 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 178 100 0.0763 0.0037 0.00234 0.0541 0.0726 0.0429 0.0578 0.000107 0.000144 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 178 29850.717 0.005 0.00552 0.109 0.219 0.0664 0.0887 0.322 0.395 0.000804 0.000987 ! 
Validation 178 29850.717 0.005 0.00536 0.0197 0.127 0.0643 0.0874 0.138 0.167 0.000344 0.000419 Wall time: 29850.717217535246 ! Best model 178 0.127 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 179 100 0.185 0.0048 0.089 0.0621 0.0827 0.325 0.356 0.000813 0.00089 179 118 0.148 0.00587 0.0302 0.0683 0.0915 0.167 0.207 0.000418 0.000518 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 179 100 0.0897 0.00384 0.0128 0.0557 0.074 0.13 0.135 0.000325 0.000337 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 179 30018.315 0.005 0.00484 0.0453 0.142 0.0619 0.083 0.203 0.254 0.000508 0.000636 ! Validation 179 30018.315 0.005 0.00557 0.0264 0.138 0.0657 0.0891 0.156 0.194 0.00039 0.000485 Wall time: 30018.315626908094 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 180 100 0.113 0.00446 0.0238 0.0599 0.0797 0.147 0.184 0.000368 0.00046 180 118 0.114 0.00433 0.0278 0.0592 0.0786 0.186 0.199 0.000466 0.000497 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 180 100 0.101 0.00359 0.0291 0.053 0.0715 0.199 0.204 0.000498 0.000509 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 180 30185.837 0.005 0.00476 0.0407 0.136 0.0614 0.0824 0.195 0.241 0.000487 0.000603 ! Validation 180 30185.837 0.005 0.0051 0.051 0.153 0.0625 0.0852 0.223 0.269 0.000557 0.000674 Wall time: 30185.837090429384 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 181 100 0.141 0.00467 0.0473 0.0608 0.0815 0.227 0.259 0.000569 0.000649 181 118 0.237 0.00544 0.128 0.0646 0.088 0.385 0.428 0.000963 0.00107 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 181 100 0.0829 0.00378 0.00719 0.0543 0.0734 0.086 0.101 0.000215 0.000253 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 181 30353.321 0.005 0.00488 0.062 0.16 0.0622 0.0833 0.236 0.296 0.000591 0.00074 ! Validation 181 30353.321 0.005 0.00522 0.0227 0.127 0.0634 0.0863 0.146 0.18 0.000364 0.00045 Wall time: 30353.32179812342 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 182 100 0.12 0.00433 0.0335 0.0588 0.0785 0.178 0.218 0.000445 0.000546 182 118 0.111 0.00444 0.0221 0.0604 0.0795 0.134 0.177 0.000336 0.000443 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 182 100 0.0863 0.00357 0.0148 0.053 0.0713 0.138 0.145 0.000345 0.000364 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 182 30520.798 0.005 0.00474 0.0483 0.143 0.0613 0.0822 0.213 0.263 0.000532 0.000657 ! Validation 182 30520.798 0.005 0.00507 0.0366 0.138 0.0624 0.085 0.182 0.228 0.000454 0.000571 Wall time: 30520.798558157403 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 183 100 0.115 0.00476 0.0194 0.0619 0.0824 0.129 0.166 0.000323 0.000416 183 118 0.138 0.00501 0.0377 0.0621 0.0845 0.204 0.232 0.000511 0.000579 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 183 100 0.118 0.00374 0.0432 0.0537 0.073 0.244 0.248 0.000611 0.00062 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 183 30688.388 0.005 0.00472 0.0516 0.146 0.0612 0.082 0.221 0.271 0.000552 0.000678 ! 
Validation 183 30688.388 0.005 0.00509 0.0682 0.17 0.0624 0.0852 0.273 0.312 0.000682 0.000779 Wall time: 30688.388122553006 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 184 100 0.115 0.00507 0.0138 0.0622 0.085 0.104 0.14 0.00026 0.000351 184 118 0.107 0.00454 0.0164 0.0602 0.0805 0.127 0.153 0.000317 0.000382 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 184 100 0.0797 0.00372 0.00538 0.0542 0.0728 0.0646 0.0876 0.000162 0.000219 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 184 30856.281 0.005 0.00488 0.0675 0.165 0.0624 0.0834 0.25 0.311 0.000624 0.000777 ! Validation 184 30856.281 0.005 0.00513 0.0202 0.123 0.0628 0.0855 0.136 0.169 0.00034 0.000424 Wall time: 30856.281442234293 ! Best model 184 0.123 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 185 100 0.142 0.00465 0.0485 0.0609 0.0814 0.237 0.263 0.000593 0.000657 185 118 0.0949 0.00426 0.00976 0.0577 0.0779 0.0931 0.118 0.000233 0.000295 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 185 100 0.0729 0.00354 0.00214 0.0526 0.071 0.0479 0.0552 0.00012 0.000138 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 185 31023.748 0.005 0.00461 0.0551 0.147 0.0605 0.0811 0.228 0.281 0.00057 0.000703 ! Validation 185 31023.748 0.005 0.00504 0.0211 0.122 0.0621 0.0848 0.137 0.173 0.000342 0.000433 Wall time: 31023.7487268541 ! Best model 185 0.122 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 186 100 0.21 0.00491 0.112 0.0623 0.0837 0.376 0.399 0.000941 0.000997 186 118 0.122 0.00462 0.0294 0.0599 0.0811 0.182 0.205 0.000455 0.000512 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 186 100 0.182 0.00364 0.109 0.0532 0.072 0.393 0.395 0.000982 0.000987 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 186 31191.188 0.005 0.00484 0.0902 0.187 0.062 0.083 0.287 0.359 0.000718 0.000898 ! Validation 186 31191.188 0.005 0.00508 0.129 0.231 0.0625 0.0851 0.398 0.429 0.000994 0.00107 Wall time: 31191.18882505037 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 187 100 0.129 0.00464 0.0359 0.0611 0.0813 0.178 0.226 0.000446 0.000565 187 118 0.0986 0.00441 0.0104 0.0594 0.0792 0.109 0.122 0.000273 0.000305 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 187 100 0.0719 0.00348 0.00218 0.052 0.0705 0.0485 0.0558 0.000121 0.000139 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 187 31358.655 0.005 0.00462 0.046 0.138 0.0605 0.0811 0.208 0.257 0.000519 0.000642 ! Validation 187 31358.655 0.005 0.00491 0.0193 0.117 0.0613 0.0836 0.132 0.166 0.00033 0.000414 Wall time: 31358.655960428994 ! Best model 187 0.117 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 188 100 0.117 0.00414 0.0344 0.0575 0.0768 0.188 0.221 0.000469 0.000553 188 118 0.111 0.00451 0.0205 0.0595 0.0802 0.142 0.171 0.000355 0.000427 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 188 100 0.0734 0.00351 0.00311 0.0527 0.0708 0.0603 0.0666 0.000151 0.000166 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 188 31526.233 0.005 0.00449 0.0379 0.128 0.0596 0.0799 0.18 0.233 0.000451 0.000582 ! 
Validation 188 31526.233 0.005 0.00498 0.021 0.121 0.062 0.0843 0.136 0.173 0.00034 0.000433 Wall time: 31526.23373403307 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 189 100 0.105 0.00447 0.0159 0.0599 0.0798 0.116 0.15 0.000289 0.000376 189 118 0.135 0.00533 0.0286 0.0649 0.0871 0.166 0.202 0.000414 0.000504 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 189 100 0.071 0.00349 0.00126 0.0524 0.0705 0.0376 0.0423 9.4e-05 0.000106 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 189 31693.591 0.005 0.00461 0.0624 0.155 0.0605 0.081 0.234 0.299 0.000584 0.000747 ! Validation 189 31693.591 0.005 0.00488 0.0233 0.121 0.0612 0.0834 0.141 0.182 0.000354 0.000456 Wall time: 31693.591722398996 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 190 100 0.128 0.00455 0.0373 0.0601 0.0805 0.195 0.231 0.000487 0.000576 190 118 0.246 0.006 0.126 0.0702 0.0925 0.405 0.424 0.00101 0.00106 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 190 100 0.251 0.0037 0.177 0.0538 0.0726 0.499 0.502 0.00125 0.00125 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 190 31860.912 0.005 0.00451 0.0485 0.139 0.0597 0.0801 0.213 0.262 0.000532 0.000654 ! Validation 190 31860.912 0.005 0.00504 0.145 0.246 0.0623 0.0848 0.427 0.455 0.00107 0.00114 Wall time: 31860.912103345152 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 191 100 0.0945 0.00422 0.01 0.0581 0.0775 0.101 0.12 0.000253 0.000299 191 118 0.106 0.00494 0.00742 0.0619 0.0839 0.0772 0.103 0.000193 0.000257 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 191 100 0.0699 0.00329 0.00412 0.0508 0.0685 0.0604 0.0767 0.000151 0.000192 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 191 32028.242 0.005 0.00449 0.0471 0.137 0.0597 0.08 0.208 0.26 0.000519 0.00065 ! Validation 191 32028.242 0.005 0.00475 0.0195 0.115 0.0603 0.0823 0.131 0.167 0.000328 0.000417 Wall time: 32028.24287068611 ! Best model 191 0.115 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 192 100 0.126 0.00486 0.0284 0.0617 0.0832 0.167 0.201 0.000417 0.000503 192 118 0.16 0.00426 0.0752 0.0574 0.0779 0.281 0.327 0.000704 0.000818 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 192 100 0.105 0.00364 0.0323 0.0533 0.072 0.21 0.214 0.000526 0.000536 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 192 32195.584 0.005 0.00458 0.0621 0.154 0.0603 0.0808 0.242 0.297 0.000605 0.000743 ! Validation 192 32195.584 0.005 0.00493 0.0544 0.153 0.0616 0.0838 0.236 0.278 0.000589 0.000696 Wall time: 32195.584687230177 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 193 100 0.285 0.00439 0.197 0.0593 0.0791 0.512 0.53 0.00128 0.00132 193 118 0.183 0.00506 0.0821 0.0629 0.0849 0.307 0.342 0.000768 0.000855 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 193 100 0.452 0.0042 0.368 0.057 0.0773 0.722 0.724 0.0018 0.00181 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 193 32362.995 0.005 0.00441 0.0483 0.136 0.0591 0.0792 0.203 0.262 0.000509 0.000654 ! 
Validation 193 32362.995 0.005 0.00531 0.467 0.573 0.0643 0.087 0.79 0.816 0.00198 0.00204 Wall time: 32362.99514681101 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 194 100 0.114 0.0045 0.0235 0.0599 0.0801 0.142 0.183 0.000356 0.000458 194 118 0.121 0.00481 0.0252 0.0619 0.0828 0.156 0.189 0.00039 0.000474 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 194 100 0.0896 0.00344 0.0207 0.0519 0.0701 0.167 0.172 0.000417 0.000429 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 194 32530.301 0.005 0.00785 0.262 0.42 0.076 0.106 0.386 0.613 0.000965 0.00153 ! Validation 194 32530.301 0.005 0.00491 0.0427 0.141 0.0613 0.0836 0.206 0.247 0.000516 0.000616 Wall time: 32530.301234465092 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 195 100 0.103 0.00451 0.0129 0.0596 0.0802 0.104 0.135 0.000261 0.000339 195 118 0.103 0.00453 0.0125 0.0602 0.0804 0.104 0.133 0.000259 0.000333 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 195 100 0.067 0.00329 0.00119 0.0509 0.0684 0.0401 0.0412 0.0001 0.000103 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 195 32697.621 0.005 0.00439 0.0241 0.112 0.0589 0.0791 0.149 0.186 0.000372 0.000464 ! Validation 195 32697.621 0.005 0.00473 0.0218 0.116 0.0602 0.0821 0.137 0.176 0.000342 0.000441 Wall time: 32697.621549515985 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 196 100 0.0922 0.00412 0.00979 0.0573 0.0766 0.0962 0.118 0.000241 0.000295 196 118 0.128 0.00496 0.0285 0.06 0.084 0.172 0.201 0.000429 0.000504 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 196 100 0.0701 0.00322 0.00577 0.0503 0.0677 0.0802 0.0907 0.0002 0.000227 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 196 32864.940 0.005 0.00427 0.0229 0.108 0.0581 0.0779 0.144 0.18 0.00036 0.000451 ! Validation 196 32864.940 0.005 0.00464 0.0283 0.121 0.0595 0.0813 0.159 0.201 0.000397 0.000502 Wall time: 32864.9400104573 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 197 100 0.0949 0.00403 0.0143 0.0568 0.0757 0.111 0.143 0.000277 0.000357 197 118 0.086 0.00396 0.0068 0.0555 0.0751 0.0661 0.0984 0.000165 0.000246 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 197 100 0.0683 0.00324 0.00347 0.0503 0.0679 0.0594 0.0703 0.000148 0.000176 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 197 33032.267 0.005 0.00423 0.0315 0.116 0.0579 0.0776 0.172 0.212 0.00043 0.000531 ! Validation 197 33032.267 0.005 0.00463 0.0168 0.11 0.0595 0.0812 0.123 0.155 0.000307 0.000387 Wall time: 33032.267922905274 ! Best model 197 0.110 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 198 100 0.151 0.00425 0.0658 0.0584 0.0778 0.277 0.306 0.000692 0.000765 198 118 0.0898 0.00392 0.0115 0.0565 0.0747 0.0988 0.128 0.000247 0.00032 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 198 100 0.072 0.00326 0.00675 0.0509 0.0682 0.0878 0.0981 0.000219 0.000245 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 198 33199.706 0.005 0.0042 0.0397 0.124 0.0576 0.0773 0.189 0.238 0.000472 0.000596 ! 
Validation 198 33199.706 0.005 0.00471 0.0252 0.119 0.0602 0.0819 0.152 0.19 0.000381 0.000474 Wall time: 33199.70621697605 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 199 100 0.0934 0.00406 0.0123 0.0568 0.076 0.105 0.132 0.000263 0.00033 199 118 0.0969 0.00437 0.00956 0.0592 0.0789 0.0836 0.117 0.000209 0.000292 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 199 100 0.0825 0.00313 0.0199 0.0495 0.0668 0.163 0.168 0.000406 0.000421 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 199 33367.032 0.005 0.00418 0.0377 0.121 0.0575 0.0771 0.189 0.232 0.000472 0.000581 ! Validation 199 33367.032 0.005 0.00451 0.038 0.128 0.0587 0.0802 0.195 0.233 0.000487 0.000582 Wall time: 33367.032579347026 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 200 100 0.0916 0.00382 0.0152 0.0553 0.0738 0.119 0.147 0.000297 0.000368 200 118 0.172 0.00422 0.0878 0.0571 0.0775 0.331 0.354 0.000828 0.000884 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 200 100 0.197 0.00313 0.134 0.0495 0.0668 0.435 0.437 0.00109 0.00109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 200 33534.369 0.005 0.00409 0.0264 0.108 0.0568 0.0763 0.154 0.192 0.000384 0.00048 ! Validation 200 33534.369 0.005 0.00446 0.154 0.243 0.0584 0.0797 0.443 0.468 0.00111 0.00117 Wall time: 33534.36994172819 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 201 100 0.185 0.00452 0.0947 0.0582 0.0802 0.339 0.367 0.000848 0.000918 201 118 0.213 0.00362 0.14 0.0543 0.0718 0.438 0.447 0.00109 0.00112 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 201 100 0.305 0.00345 0.236 0.0521 0.0701 0.578 0.58 0.00145 0.00145 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 201 33701.686 0.005 0.00413 0.0567 0.139 0.0572 0.0767 0.233 0.283 0.000582 0.000707 ! Validation 201 33701.686 0.005 0.00468 0.2 0.293 0.06 0.0817 0.51 0.533 0.00127 0.00133 Wall time: 33701.6866016211 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 202 100 0.233 0.00411 0.15 0.0574 0.0765 0.447 0.463 0.00112 0.00116 202 118 0.1 0.0041 0.0183 0.0572 0.0764 0.133 0.161 0.000331 0.000403 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 202 100 0.167 0.00317 0.104 0.05 0.0672 0.382 0.384 0.000955 0.00096 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 202 33869.113 0.005 0.00418 0.0507 0.134 0.0576 0.0772 0.22 0.269 0.00055 0.000673 ! Validation 202 33869.113 0.005 0.00454 0.123 0.214 0.059 0.0804 0.387 0.419 0.000968 0.00105 Wall time: 33869.11388413515 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 203 100 0.0977 0.00423 0.0132 0.0576 0.0776 0.107 0.137 0.000267 0.000343 203 118 0.088 0.00372 0.0136 0.0546 0.0728 0.111 0.139 0.000278 0.000349 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 203 100 0.0758 0.00303 0.0151 0.0488 0.0657 0.14 0.147 0.00035 0.000367 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 203 34036.434 0.005 0.00408 0.0369 0.119 0.0569 0.0763 0.177 0.23 0.000443 0.000574 ! 
Validation 203 34036.434 0.005 0.00435 0.0356 0.123 0.0576 0.0787 0.185 0.225 0.000462 0.000563 Wall time: 34036.434507693164 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 204 100 0.0877 0.00384 0.0108 0.0552 0.074 0.101 0.124 0.000253 0.000311 204 118 0.081 0.00368 0.00751 0.0541 0.0724 0.0844 0.103 0.000211 0.000259 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 204 100 0.0691 0.00306 0.00785 0.0489 0.0661 0.0954 0.106 0.000239 0.000264 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 204 34203.748 0.005 0.00404 0.0341 0.115 0.0565 0.0758 0.177 0.221 0.000443 0.000552 ! Validation 204 34203.748 0.005 0.00432 0.0174 0.104 0.0574 0.0784 0.126 0.158 0.000315 0.000394 Wall time: 34203.74829590134 ! Best model 204 0.104 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 205 100 0.114 0.00404 0.033 0.0567 0.0759 0.187 0.217 0.000468 0.000542 205 118 0.0929 0.00384 0.0161 0.0558 0.074 0.12 0.152 0.000301 0.000379 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 205 100 0.0749 0.00321 0.0107 0.0499 0.0676 0.117 0.123 0.000292 0.000309 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 205 34371.231 0.005 0.00411 0.0594 0.142 0.0571 0.0765 0.23 0.292 0.000575 0.000729 ! Validation 205 34371.231 0.005 0.00441 0.0277 0.116 0.058 0.0792 0.159 0.199 0.000398 0.000497 Wall time: 34371.231698742136 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 206 100 0.0964 0.0039 0.0185 0.0556 0.0745 0.133 0.162 0.000332 0.000405 206 118 0.0889 0.00345 0.02 0.0533 0.0701 0.142 0.169 0.000356 0.000421 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 206 100 0.068 0.00304 0.00716 0.049 0.0658 0.0944 0.101 0.000236 0.000253 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 206 34538.520 0.005 0.00404 0.0463 0.127 0.0566 0.0759 0.208 0.257 0.00052 0.000643 ! Validation 206 34538.520 0.005 0.00437 0.0171 0.104 0.0578 0.0789 0.126 0.156 0.000314 0.00039 Wall time: 34538.520364675205 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 207 100 0.0961 0.0042 0.012 0.0572 0.0774 0.104 0.131 0.00026 0.000327 207 118 0.106 0.00445 0.0174 0.0582 0.0796 0.127 0.157 0.000317 0.000393 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 207 100 0.0708 0.00313 0.00823 0.0494 0.0667 0.101 0.108 0.000252 0.000271 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 207 34705.920 0.005 0.00406 0.0569 0.138 0.0568 0.0761 0.236 0.285 0.000589 0.000713 ! Validation 207 34705.920 0.005 0.00435 0.0189 0.106 0.0577 0.0787 0.133 0.164 0.000332 0.00041 Wall time: 34705.92079286929 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 208 100 0.0926 0.00408 0.0109 0.0572 0.0763 0.0967 0.125 0.000242 0.000312 208 118 0.139 0.00374 0.0641 0.0551 0.073 0.278 0.302 0.000694 0.000756 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 208 100 0.131 0.00303 0.0704 0.0486 0.0657 0.313 0.317 0.000783 0.000792 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 208 34873.226 0.005 0.00407 0.0523 0.134 0.0569 0.0762 0.208 0.273 0.000519 0.000682 ! 
Validation 208 34873.226 0.005 0.00434 0.115 0.202 0.0577 0.0786 0.372 0.405 0.000931 0.00101 Wall time: 34873.22607461503 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 209 100 0.174 0.00391 0.0956 0.0554 0.0747 0.34 0.369 0.00085 0.000922 209 118 0.0973 0.00356 0.026 0.0528 0.0713 0.152 0.192 0.000379 0.000481 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 209 100 0.089 0.00303 0.0283 0.0489 0.0657 0.197 0.201 0.000493 0.000502 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 209 35040.530 0.005 0.00393 0.0316 0.11 0.0558 0.0748 0.17 0.212 0.000426 0.000531 ! Validation 209 35040.530 0.005 0.00434 0.0392 0.126 0.0576 0.0787 0.202 0.236 0.000505 0.000591 Wall time: 35040.53069094615 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 210 100 0.102 0.00398 0.022 0.0562 0.0753 0.151 0.177 0.000379 0.000443 210 118 0.11 0.00356 0.039 0.0539 0.0712 0.215 0.236 0.000537 0.000589 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 210 100 0.173 0.00302 0.112 0.0491 0.0656 0.397 0.4 0.000992 0.000999 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 210 35207.916 0.005 0.00402 0.0484 0.129 0.0565 0.0757 0.212 0.263 0.000529 0.000657 ! Validation 210 35207.916 0.005 0.00434 0.194 0.28 0.0578 0.0787 0.498 0.525 0.00124 0.00131 Wall time: 35207.916587716434 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 211 100 0.18 0.00387 0.102 0.0554 0.0742 0.362 0.382 0.000906 0.000955 211 118 0.0759 0.00352 0.00555 0.0534 0.0708 0.0682 0.0889 0.000171 0.000222 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 211 100 0.0601 0.00293 0.00143 0.0483 0.0646 0.0377 0.0452 9.43e-05 0.000113 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 211 35375.360 0.005 0.00389 0.0317 0.11 0.0555 0.0745 0.167 0.213 0.000418 0.000533 ! Validation 211 35375.360 0.005 0.00427 0.0171 0.103 0.0572 0.078 0.122 0.156 0.000306 0.00039 Wall time: 35375.36040374404 ! Best model 211 0.103 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 212 100 0.185 0.00413 0.102 0.0578 0.0767 0.359 0.381 0.000897 0.000953 212 118 0.104 0.00432 0.0174 0.0588 0.0785 0.108 0.158 0.000269 0.000394 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 212 100 0.0718 0.00332 0.00527 0.051 0.0688 0.0755 0.0867 0.000189 0.000217 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 212 35542.898 0.005 0.00399 0.0629 0.143 0.0563 0.0753 0.248 0.3 0.00062 0.00075 ! Validation 212 35542.898 0.005 0.00445 0.0378 0.127 0.0586 0.0796 0.187 0.232 0.000467 0.00058 Wall time: 35542.89838630613 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 213 100 0.148 0.00376 0.0728 0.0545 0.0732 0.29 0.322 0.000725 0.000805 213 118 0.136 0.0041 0.0541 0.0568 0.0764 0.254 0.278 0.000636 0.000694 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 213 100 0.138 0.00299 0.0783 0.0486 0.0652 0.331 0.334 0.000827 0.000835 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 213 35710.280 0.005 0.00388 0.0328 0.11 0.0554 0.0743 0.171 0.216 0.000428 0.000539 ! 
Validation 213 35710.280 0.005 0.0042 0.0726 0.157 0.0567 0.0774 0.293 0.322 0.000733 0.000804 Wall time: 35710.28053522436 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 214 100 0.171 0.00361 0.0984 0.0531 0.0717 0.359 0.374 0.000897 0.000936 214 118 0.0968 0.00454 0.00598 0.0597 0.0804 0.0733 0.0923 0.000183 0.000231 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 214 100 0.0606 0.00295 0.00163 0.0487 0.0648 0.0397 0.0482 9.93e-05 0.00012 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 214 35877.538 0.005 0.00387 0.0414 0.119 0.0554 0.0742 0.196 0.244 0.000489 0.000609 ! Validation 214 35877.538 0.005 0.00424 0.0175 0.102 0.0572 0.0777 0.124 0.158 0.000311 0.000395 Wall time: 35877.53825648641 ! Best model 214 0.102 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 215 100 0.0834 0.00363 0.0108 0.0537 0.0719 0.101 0.124 0.000251 0.00031 215 118 0.11 0.00388 0.0321 0.0552 0.0743 0.197 0.214 0.000493 0.000535 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 215 100 0.126 0.00299 0.0668 0.0485 0.0652 0.306 0.308 0.000765 0.000771 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 215 36045.389 0.005 0.00384 0.0422 0.119 0.0552 0.074 0.199 0.245 0.000499 0.000613 ! Validation 215 36045.389 0.005 0.00421 0.0682 0.152 0.0569 0.0775 0.282 0.312 0.000705 0.000779 Wall time: 36045.38955364702 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 216 100 0.0801 0.00348 0.0106 0.0528 0.0704 0.0988 0.123 0.000247 0.000307 216 118 0.0893 0.00373 0.0147 0.0547 0.0729 0.127 0.145 0.000317 0.000362 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 216 100 0.0757 0.00293 0.017 0.0478 0.0646 0.15 0.156 0.000374 0.000389 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 216 36212.810 0.005 0.0038 0.0382 0.114 0.0549 0.0736 0.187 0.234 0.000467 0.000584 ! Validation 216 36212.810 0.005 0.00406 0.0322 0.113 0.0557 0.076 0.177 0.214 0.000443 0.000536 Wall time: 36212.810430257116 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 217 100 0.0956 0.00396 0.0164 0.055 0.0751 0.123 0.153 0.000307 0.000383 217 118 0.204 0.00422 0.12 0.0574 0.0775 0.399 0.413 0.000997 0.00103 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 217 100 0.246 0.00327 0.181 0.0501 0.0682 0.504 0.507 0.00126 0.00127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 217 36381.711 0.005 0.00373 0.0367 0.111 0.0544 0.0729 0.184 0.227 0.000459 0.000567 ! Validation 217 36381.711 0.005 0.00429 0.219 0.305 0.0573 0.0782 0.534 0.559 0.00134 0.0014 Wall time: 36381.711485709064 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 218 100 0.0871 0.00375 0.0122 0.0547 0.0731 0.102 0.132 0.000256 0.00033 218 118 0.102 0.00451 0.0122 0.0602 0.0802 0.11 0.132 0.000274 0.000329 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 218 100 0.0591 0.00291 0.000912 0.0479 0.0644 0.0301 0.036 7.52e-05 9.01e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 218 36550.808 0.005 0.00385 0.055 0.132 0.0553 0.0741 0.223 0.281 0.000558 0.000702 ! 
Validation 218 36550.808 0.005 0.00409 0.0263 0.108 0.056 0.0763 0.15 0.194 0.000374 0.000484 Wall time: 36550.808973482344 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 219 100 0.11 0.00414 0.0276 0.0576 0.0768 0.163 0.198 0.000408 0.000495 219 118 0.0895 0.00356 0.0184 0.0533 0.0712 0.126 0.162 0.000314 0.000405 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 219 100 0.0848 0.00338 0.0173 0.0512 0.0694 0.145 0.157 0.000363 0.000392 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 219 36718.338 0.005 0.00396 0.0612 0.14 0.0561 0.0752 0.239 0.296 0.000597 0.00074 ! Validation 219 36718.338 0.005 0.00445 0.0454 0.134 0.0586 0.0796 0.214 0.254 0.000536 0.000636 Wall time: 36718.33836356038 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 220 100 0.12 0.00386 0.0422 0.0551 0.0742 0.219 0.245 0.000547 0.000613 220 118 0.081 0.00348 0.0114 0.052 0.0704 0.0983 0.128 0.000246 0.000319 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 220 100 0.0693 0.00277 0.0138 0.0467 0.0629 0.135 0.14 0.000337 0.00035 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 220 36885.885 0.005 0.00374 0.0366 0.111 0.0545 0.073 0.182 0.229 0.000455 0.000572 ! Validation 220 36885.885 0.005 0.00396 0.0315 0.111 0.0551 0.0752 0.175 0.212 0.000436 0.00053 Wall time: 36885.88514932431 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 221 100 0.0906 0.0037 0.0166 0.0543 0.0726 0.129 0.154 0.000323 0.000385 221 118 0.103 0.00309 0.041 0.05 0.0664 0.224 0.242 0.000561 0.000604 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 221 100 0.0704 0.0029 0.0124 0.048 0.0643 0.128 0.133 0.00032 0.000333 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 221 37053.535 0.005 0.00374 0.0415 0.116 0.0546 0.0731 0.195 0.243 0.000488 0.000608 ! Validation 221 37053.535 0.005 0.00414 0.0267 0.109 0.0563 0.0768 0.157 0.195 0.000393 0.000488 Wall time: 37053.53566084523 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 222 100 0.0931 0.00378 0.0176 0.0548 0.0734 0.126 0.158 0.000316 0.000395 222 118 0.13 0.00404 0.0495 0.0571 0.0759 0.244 0.265 0.000609 0.000664 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 222 100 0.0671 0.00304 0.0062 0.049 0.0659 0.0885 0.0939 0.000221 0.000235 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 222 37221.079 0.005 0.00371 0.0393 0.113 0.0542 0.0727 0.193 0.236 0.000483 0.000591 ! Validation 222 37221.079 0.005 0.00415 0.02 0.103 0.0568 0.0769 0.134 0.169 0.000335 0.000422 Wall time: 37221.07977998443 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 223 100 0.0917 0.00363 0.019 0.0538 0.0719 0.133 0.165 0.000334 0.000411 223 118 0.151 0.00372 0.0761 0.0543 0.0728 0.3 0.329 0.00075 0.000823 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 223 100 0.0876 0.00275 0.0326 0.0463 0.0626 0.212 0.216 0.000531 0.000539 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 223 37388.638 0.005 0.00361 0.0289 0.101 0.0535 0.0717 0.163 0.202 0.000407 0.000505 ! 
Validation 223 37388.638 0.005 0.00396 0.0525 0.132 0.055 0.0751 0.242 0.273 0.000604 0.000684 Wall time: 37388.6389631764 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 224 100 0.111 0.00366 0.0377 0.0542 0.0722 0.202 0.232 0.000505 0.000579 224 118 0.085 0.00331 0.0187 0.0507 0.0687 0.14 0.163 0.00035 0.000408 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 224 100 0.0574 0.00271 0.00324 0.0461 0.0621 0.0552 0.0679 0.000138 0.00017 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 224 37556.180 0.005 0.00373 0.05 0.125 0.0545 0.0729 0.211 0.267 0.000526 0.000668 ! Validation 224 37556.180 0.005 0.0039 0.0145 0.0925 0.0545 0.0745 0.114 0.144 0.000285 0.000359 Wall time: 37556.18073687516 ! Best model 224 0.092 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 225 100 0.128 0.00428 0.0428 0.0584 0.0781 0.212 0.247 0.00053 0.000617 225 118 0.114 0.00415 0.0312 0.0574 0.0769 0.169 0.211 0.000422 0.000527 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 225 100 0.137 0.00293 0.0786 0.0481 0.0646 0.332 0.335 0.000829 0.000837 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 225 37724.983 0.005 0.00378 0.0518 0.127 0.0548 0.0733 0.221 0.272 0.000552 0.00068 ! Validation 225 37724.983 0.005 0.00413 0.0769 0.159 0.0563 0.0767 0.299 0.331 0.000749 0.000827 Wall time: 37724.983554077335 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 226 100 0.0889 0.0036 0.0169 0.0534 0.0716 0.127 0.155 0.000318 0.000388 226 118 0.0921 0.00317 0.0288 0.0499 0.0672 0.173 0.203 0.000432 0.000506 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 226 100 0.0752 0.0028 0.0193 0.0468 0.0631 0.16 0.166 0.0004 0.000414 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 226 37892.667 0.005 0.0036 0.0318 0.104 0.0535 0.0717 0.171 0.213 0.000427 0.000532 ! Validation 226 37892.667 0.005 0.00393 0.0247 0.103 0.055 0.0748 0.154 0.188 0.000384 0.000469 Wall time: 37892.66783031821 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 227 100 0.201 0.00344 0.132 0.0527 0.07 0.419 0.433 0.00105 0.00108 227 118 0.0799 0.00368 0.00628 0.0539 0.0724 0.0727 0.0946 0.000182 0.000236 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 227 100 0.0693 0.00283 0.0128 0.0471 0.0635 0.125 0.135 0.000313 0.000338 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 227 38060.237 0.005 0.00357 0.0342 0.106 0.0533 0.0713 0.178 0.221 0.000444 0.000554 ! Validation 227 38060.237 0.005 0.00394 0.0366 0.115 0.055 0.0749 0.188 0.228 0.00047 0.000571 Wall time: 38060.237746926025 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 228 100 0.0857 0.00365 0.0127 0.0531 0.0721 0.107 0.134 0.000266 0.000336 228 118 0.125 0.00352 0.0548 0.0528 0.0708 0.239 0.279 0.000597 0.000698 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 228 100 0.0848 0.00267 0.0313 0.0459 0.0617 0.207 0.211 0.000517 0.000528 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 228 38227.821 0.005 0.00368 0.0521 0.126 0.054 0.0724 0.221 0.272 0.000553 0.000681 ! 
Validation 228 38227.821 0.005 0.00387 0.0524 0.13 0.0544 0.0742 0.24 0.273 0.0006 0.000683 Wall time: 38227.82147413725 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 229 100 0.1 0.00339 0.0323 0.0518 0.0695 0.19 0.214 0.000474 0.000536 229 118 0.0873 0.00356 0.0162 0.0522 0.0712 0.116 0.152 0.000291 0.00038 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 229 100 0.0608 0.00266 0.00762 0.0455 0.0615 0.0951 0.104 0.000238 0.00026 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 229 38395.541 0.005 0.00347 0.0252 0.0947 0.0525 0.0703 0.148 0.19 0.00037 0.000475 ! Validation 229 38395.541 0.005 0.00375 0.0194 0.0945 0.0535 0.0731 0.132 0.166 0.000331 0.000416 Wall time: 38395.541461289395 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 230 100 0.0871 0.00328 0.0214 0.0512 0.0684 0.139 0.175 0.000347 0.000437 230 118 0.0908 0.00333 0.0243 0.051 0.0688 0.165 0.186 0.000412 0.000465 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 230 100 0.0967 0.00268 0.0431 0.0459 0.0618 0.244 0.248 0.00061 0.000619 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 230 38562.855 0.005 0.00344 0.0299 0.0988 0.0522 0.07 0.164 0.207 0.000411 0.000517 ! Validation 230 38562.855 0.005 0.00379 0.0466 0.122 0.0539 0.0735 0.224 0.258 0.00056 0.000644 Wall time: 38562.855570863 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 231 100 0.193 0.00369 0.12 0.054 0.0725 0.396 0.413 0.000989 0.00103 231 118 0.101 0.00316 0.0382 0.0504 0.0671 0.208 0.233 0.00052 0.000583 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 231 100 0.0558 0.00272 0.00133 0.0463 0.0623 0.0321 0.0436 8.03e-05 0.000109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 231 38730.244 0.005 0.00357 0.0479 0.119 0.0533 0.0713 0.213 0.262 0.000532 0.000654 ! Validation 231 38730.244 0.005 0.00384 0.0153 0.0922 0.0543 0.074 0.116 0.148 0.000291 0.000369 Wall time: 38730.244427985046 ! Best model 231 0.092 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 232 100 0.134 0.00364 0.0611 0.0536 0.0721 0.274 0.295 0.000686 0.000737 232 118 0.087 0.00332 0.0206 0.0514 0.0688 0.15 0.171 0.000375 0.000428 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 232 100 0.0654 0.00259 0.0137 0.0448 0.0607 0.135 0.14 0.000336 0.000349 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 232 38897.505 0.005 0.0035 0.0405 0.111 0.0527 0.0706 0.194 0.241 0.000484 0.000602 ! Validation 232 38897.505 0.005 0.00375 0.0302 0.105 0.0535 0.0731 0.173 0.207 0.000432 0.000519 Wall time: 38897.505707249045 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 233 100 0.0794 0.00333 0.0129 0.0516 0.0688 0.107 0.135 0.000266 0.000339 233 118 0.152 0.00334 0.0854 0.0521 0.069 0.331 0.349 0.000827 0.000872 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 233 100 0.145 0.00281 0.0888 0.0474 0.0633 0.354 0.356 0.000885 0.000889 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 233 39064.785 0.005 0.00342 0.0338 0.102 0.052 0.0698 0.175 0.218 0.000439 0.000545 ! 
Validation 233 39064.785 0.005 0.00409 0.129 0.211 0.0562 0.0763 0.396 0.429 0.000989 0.00107 Wall time: 39064.78552664304 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 234 100 0.0911 0.00363 0.0185 0.053 0.0719 0.14 0.162 0.00035 0.000405 234 118 0.111 0.00303 0.0499 0.0497 0.0658 0.25 0.267 0.000624 0.000667 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 234 100 0.211 0.00263 0.158 0.0454 0.0612 0.473 0.474 0.00118 0.00119 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 234 39232.042 0.005 0.0035 0.042 0.112 0.0528 0.0707 0.198 0.244 0.000495 0.000611 ! Validation 234 39232.042 0.005 0.00373 0.147 0.222 0.0535 0.0729 0.436 0.458 0.00109 0.00115 Wall time: 39232.04280584538 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 235 100 0.0726 0.00326 0.00736 0.0506 0.0682 0.0826 0.102 0.000207 0.000256 235 118 0.0786 0.00331 0.0124 0.052 0.0687 0.108 0.133 0.00027 0.000332 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 235 100 0.0598 0.00288 0.00215 0.0475 0.0641 0.0402 0.0553 0.0001 0.000138 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 235 39399.307 0.005 0.00342 0.0264 0.0948 0.0521 0.0698 0.152 0.194 0.00038 0.000485 ! Validation 235 39399.307 0.005 0.00391 0.0133 0.0915 0.0549 0.0746 0.108 0.138 0.00027 0.000345 Wall time: 39399.30728081521 ! Best model 235 0.091 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 236 100 0.758 0.0338 0.0829 0.155 0.219 0.28 0.344 0.000699 0.000859 236 118 0.56 0.0249 0.0622 0.134 0.188 0.253 0.298 0.000633 0.000744 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 236 100 0.557 0.0208 0.14 0.123 0.172 0.434 0.447 0.00109 0.00112 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 236 39566.680 0.005 0.0987 1.54 3.51 0.229 0.376 0.929 1.49 0.00232 0.00371 ! Validation 236 39566.680 0.005 0.0253 0.14 0.647 0.135 0.19 0.379 0.447 0.000947 0.00112 Wall time: 39566.68046694435 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 237 100 0.416 0.0159 0.0979 0.108 0.15 0.328 0.373 0.000819 0.000934 237 118 0.439 0.0151 0.138 0.106 0.147 0.404 0.443 0.00101 0.00111 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 237 100 0.247 0.0123 0.00122 0.0957 0.132 0.0406 0.0417 0.000101 0.000104 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 237 39733.934 0.005 0.0182 0.193 0.556 0.115 0.161 0.412 0.524 0.00103 0.00131 ! Validation 237 39733.934 0.005 0.015 0.0534 0.354 0.105 0.146 0.219 0.276 0.000547 0.00069 Wall time: 39733.93411203707 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 238 100 0.438 0.0115 0.208 0.0928 0.128 0.505 0.545 0.00126 0.00136 238 118 0.35 0.00995 0.151 0.0871 0.119 0.442 0.464 0.0011 0.00116 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 238 100 0.486 0.00894 0.307 0.0821 0.113 0.66 0.661 0.00165 0.00165 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 238 39901.216 0.005 0.0124 0.113 0.361 0.096 0.133 0.325 0.402 0.000813 0.001 ! 
Validation 238 39901.216 0.005 0.0112 0.249 0.472 0.0913 0.126 0.559 0.595 0.0014 0.00149 Wall time: 39901.21611926798 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 239 100 0.192 0.00851 0.022 0.0811 0.11 0.135 0.177 0.000339 0.000442 239 118 0.198 0.00797 0.0384 0.0792 0.107 0.195 0.234 0.000488 0.000585 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 239 100 0.138 0.00662 0.00569 0.0712 0.0971 0.0808 0.09 0.000202 0.000225 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 239 40068.488 0.005 0.00944 0.0991 0.288 0.0849 0.116 0.306 0.377 0.000765 0.000941 ! Validation 239 40068.488 0.005 0.00846 0.0312 0.201 0.0805 0.11 0.169 0.211 0.000422 0.000527 Wall time: 40068.48827444203 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 240 100 0.18 0.00664 0.0467 0.0727 0.0973 0.217 0.258 0.000543 0.000645 240 118 0.152 0.00659 0.0201 0.0725 0.0969 0.149 0.169 0.000373 0.000423 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 240 100 0.116 0.00523 0.0115 0.064 0.0863 0.121 0.128 0.000303 0.000319 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 240 40235.854 0.005 0.00723 0.091 0.236 0.0755 0.102 0.296 0.361 0.00074 0.000903 ! Validation 240 40235.854 0.005 0.00683 0.0309 0.167 0.073 0.0986 0.168 0.21 0.000419 0.000524 Wall time: 40235.85414658021 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 241 100 0.125 0.00543 0.0162 0.0659 0.088 0.13 0.152 0.000326 0.00038 241 118 0.176 0.00504 0.0752 0.0639 0.0847 0.28 0.327 0.000701 0.000818 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 241 100 0.0897 0.00435 0.00283 0.0586 0.0787 0.0492 0.0635 0.000123 0.000159 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 241 40403.125 0.005 0.00578 0.0656 0.181 0.0681 0.0908 0.248 0.306 0.000619 0.000764 ! Validation 241 40403.125 0.005 0.00579 0.0189 0.135 0.0675 0.0908 0.134 0.164 0.000336 0.00041 Wall time: 40403.125900730025 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 242 100 0.14 0.00504 0.0388 0.0639 0.0848 0.197 0.235 0.000494 0.000588 242 118 0.113 0.00459 0.0216 0.0616 0.0808 0.15 0.175 0.000374 0.000438 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 242 100 0.132 0.00382 0.0557 0.0549 0.0738 0.278 0.282 0.000696 0.000704 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 242 40570.406 0.005 0.00502 0.0544 0.155 0.0636 0.0846 0.223 0.279 0.000558 0.000697 ! Validation 242 40570.406 0.005 0.00518 0.0584 0.162 0.0637 0.0859 0.253 0.288 0.000633 0.000721 Wall time: 40570.40694660414 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 243 100 0.107 0.0045 0.0173 0.0601 0.08 0.124 0.157 0.000311 0.000393 243 118 0.132 0.00425 0.0469 0.0583 0.0778 0.228 0.259 0.00057 0.000646 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 243 100 0.0837 0.00341 0.0155 0.0519 0.0697 0.144 0.149 0.000359 0.000371 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 243 40737.672 0.005 0.00456 0.0376 0.129 0.0606 0.0806 0.187 0.231 0.000468 0.000578 ! 
Validation 243 40737.672 0.005 0.0048 0.024 0.12 0.0612 0.0827 0.151 0.185 0.000378 0.000462 Wall time: 40737.672519540414 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 244 100 0.16 0.00445 0.0711 0.0593 0.0797 0.295 0.318 0.000737 0.000796 244 118 0.169 0.00402 0.0891 0.0569 0.0756 0.322 0.356 0.000805 0.000891 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 244 100 0.199 0.00329 0.134 0.0509 0.0684 0.435 0.436 0.00109 0.00109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 244 40904.933 0.005 0.00431 0.0428 0.129 0.0588 0.0783 0.2 0.246 0.000501 0.000615 ! Validation 244 40904.933 0.005 0.00459 0.138 0.23 0.0597 0.0808 0.42 0.444 0.00105 0.00111 Wall time: 40904.93374403007 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 245 100 0.0916 0.00389 0.0139 0.0561 0.0744 0.107 0.14 0.000267 0.000351 245 118 0.1 0.00349 0.0304 0.0536 0.0706 0.178 0.208 0.000444 0.000521 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 245 100 0.0769 0.00312 0.0146 0.0494 0.0666 0.139 0.144 0.000348 0.00036 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 245 41072.295 0.005 0.00413 0.0353 0.118 0.0576 0.0768 0.181 0.224 0.000453 0.000561 ! Validation 245 41072.295 0.005 0.00441 0.0219 0.11 0.0584 0.0792 0.147 0.177 0.000367 0.000442 Wall time: 41072.295085913036 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 246 100 0.103 0.00395 0.0238 0.056 0.075 0.155 0.184 0.000386 0.000461 246 118 0.0821 0.00379 0.00636 0.0551 0.0734 0.0859 0.0952 0.000215 0.000238 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 246 100 0.0763 0.00301 0.016 0.0486 0.0655 0.147 0.151 0.000367 0.000378 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 246 41239.568 0.005 0.00399 0.0289 0.109 0.0565 0.0754 0.165 0.203 0.000412 0.000508 ! Validation 246 41239.568 0.005 0.00429 0.0247 0.11 0.0576 0.0782 0.157 0.188 0.000391 0.000469 Wall time: 41239.56808214029 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 247 100 0.0909 0.00379 0.015 0.055 0.0735 0.12 0.146 0.000299 0.000366 247 118 0.109 0.00404 0.0278 0.0564 0.0759 0.179 0.199 0.000449 0.000497 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 247 100 0.0691 0.00294 0.0103 0.0479 0.0647 0.116 0.121 0.000289 0.000303 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 247 41406.811 0.005 0.00388 0.032 0.11 0.0557 0.0744 0.171 0.214 0.000427 0.000534 ! Validation 247 41406.811 0.005 0.00419 0.0248 0.109 0.0569 0.0773 0.151 0.188 0.000377 0.00047 Wall time: 41406.81176061928 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 248 100 0.0844 0.00361 0.0122 0.0536 0.0717 0.107 0.132 0.000267 0.000329 248 118 0.164 0.00361 0.0922 0.0541 0.0718 0.349 0.362 0.000872 0.000906 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 248 100 0.181 0.00286 0.123 0.0473 0.0639 0.418 0.419 0.00104 0.00105 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 248 41574.064 0.005 0.0038 0.0315 0.107 0.055 0.0736 0.166 0.21 0.000414 0.000526 ! 
Validation 248 41574.064 0.005 0.00411 0.139 0.221 0.0563 0.0765 0.422 0.445 0.00106 0.00111 Wall time: 41574.06407823833 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 249 100 0.104 0.00362 0.0314 0.0536 0.0718 0.185 0.211 0.000461 0.000529 249 118 0.109 0.00456 0.0184 0.0596 0.0806 0.123 0.162 0.000307 0.000404 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 249 100 0.0849 0.00286 0.0277 0.0471 0.0638 0.195 0.199 0.000488 0.000497 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 249 41741.324 0.005 0.00374 0.04 0.115 0.0545 0.0729 0.195 0.239 0.000487 0.000598 ! Validation 249 41741.324 0.005 0.00407 0.0486 0.13 0.0559 0.0761 0.228 0.263 0.00057 0.000658 Wall time: 41741.32457659999 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 250 100 0.102 0.00383 0.0254 0.055 0.0739 0.165 0.19 0.000413 0.000475 250 118 0.0958 0.00355 0.0248 0.0532 0.0711 0.177 0.188 0.000443 0.00047 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 250 100 0.0558 0.00275 0.000667 0.0464 0.0626 0.0295 0.0308 7.37e-05 7.71e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 250 41908.700 0.005 0.00366 0.0327 0.106 0.054 0.0722 0.176 0.216 0.00044 0.00054 ! Validation 250 41908.700 0.005 0.00397 0.0144 0.0937 0.0552 0.0752 0.109 0.143 0.000274 0.000358 Wall time: 41908.70081173722 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 251 100 0.105 0.00331 0.0388 0.0517 0.0686 0.208 0.235 0.000521 0.000588 251 118 0.176 0.00395 0.0972 0.0557 0.075 0.327 0.372 0.000817 0.00093 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 251 100 0.129 0.00291 0.0707 0.0476 0.0644 0.316 0.317 0.000791 0.000793 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 251 42075.967 0.005 0.00361 0.0351 0.107 0.0535 0.0716 0.181 0.222 0.000452 0.000555 ! Validation 251 42075.967 0.005 0.00405 0.0782 0.159 0.0558 0.0759 0.306 0.334 0.000764 0.000835 Wall time: 42075.96701184334 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 252 100 0.0771 0.00321 0.0129 0.0509 0.0676 0.111 0.135 0.000278 0.000339 252 118 0.104 0.00371 0.0293 0.0552 0.0727 0.19 0.204 0.000475 0.00051 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 252 100 0.0717 0.00271 0.0176 0.0459 0.0621 0.155 0.158 0.000387 0.000396 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 252 42243.424 0.005 0.00355 0.0232 0.0942 0.0531 0.0711 0.146 0.181 0.000364 0.000454 ! Validation 252 42243.424 0.005 0.00385 0.0371 0.114 0.0543 0.0741 0.195 0.23 0.000487 0.000575 Wall time: 42243.42439697916 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 253 100 0.147 0.00362 0.0748 0.0532 0.0718 0.304 0.327 0.00076 0.000816 253 118 0.0657 0.00303 0.00511 0.0495 0.0657 0.068 0.0853 0.00017 0.000213 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 253 100 0.063 0.0027 0.00904 0.0458 0.062 0.109 0.113 0.000273 0.000284 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 253 42412.441 0.005 0.00351 0.0401 0.11 0.0528 0.0708 0.196 0.24 0.00049 0.000599 ! 
Validation 253 42412.441 0.005 0.00382 0.0184 0.0949 0.0541 0.0738 0.131 0.162 0.000328 0.000405 Wall time: 42412.44192346139 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 254 100 0.0882 0.00335 0.0211 0.0518 0.0691 0.147 0.173 0.000368 0.000434 254 118 0.079 0.00356 0.00771 0.0522 0.0713 0.0822 0.105 0.000206 0.000262 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 254 100 0.0531 0.00263 0.000592 0.0452 0.0612 0.0238 0.029 5.95e-05 7.26e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 254 42579.919 0.005 0.00347 0.0268 0.0962 0.0525 0.0703 0.158 0.196 0.000395 0.00049 ! Validation 254 42579.919 0.005 0.00377 0.0122 0.0877 0.0537 0.0733 0.102 0.132 0.000256 0.00033 Wall time: 42579.91981325019 ! Best model 254 0.088 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 255 100 0.0842 0.0036 0.0122 0.0532 0.0716 0.109 0.132 0.000274 0.00033 255 118 0.0824 0.00304 0.0217 0.0493 0.0658 0.16 0.176 0.000399 0.000439 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 255 100 0.0888 0.00261 0.0366 0.0451 0.061 0.227 0.228 0.000567 0.000571 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 255 42747.477 0.005 0.00341 0.023 0.0912 0.052 0.0697 0.145 0.181 0.000363 0.000453 ! Validation 255 42747.477 0.005 0.00375 0.0357 0.111 0.0535 0.0731 0.196 0.225 0.00049 0.000564 Wall time: 42747.477251499426 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 256 100 0.0851 0.00346 0.016 0.0522 0.0702 0.123 0.151 0.000308 0.000377 256 118 0.137 0.00315 0.0745 0.0508 0.0669 0.312 0.326 0.00078 0.000814 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 256 100 0.0773 0.00267 0.024 0.0456 0.0616 0.183 0.185 0.000458 0.000462 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 256 42914.896 0.005 0.00347 0.049 0.118 0.0525 0.0704 0.198 0.264 0.000494 0.000659 ! Validation 256 42914.896 0.005 0.00375 0.0426 0.117 0.0536 0.073 0.213 0.246 0.000533 0.000616 Wall time: 42914.89691913221 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 257 100 0.133 0.00308 0.0713 0.05 0.0663 0.296 0.319 0.000739 0.000797 257 118 0.0756 0.00287 0.0181 0.0483 0.064 0.131 0.161 0.000328 0.000402 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 257 100 0.0538 0.00263 0.00115 0.045 0.0612 0.0324 0.0405 8.1e-05 0.000101 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 257 43082.339 0.005 0.00337 0.0234 0.0908 0.0517 0.0693 0.146 0.183 0.000364 0.000457 ! Validation 257 43082.339 0.005 0.00369 0.0124 0.0862 0.0531 0.0725 0.103 0.133 0.000258 0.000332 Wall time: 43082.340028061066 ! Best model 257 0.086 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 258 100 0.0738 0.00313 0.0112 0.0499 0.0668 0.0976 0.126 0.000244 0.000316 258 118 0.0705 0.00319 0.00664 0.0495 0.0675 0.0741 0.0973 0.000185 0.000243 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 258 100 0.0561 0.00255 0.00502 0.0445 0.0603 0.0797 0.0846 0.000199 0.000211 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 258 43249.753 0.005 0.00334 0.0296 0.0963 0.0515 0.069 0.165 0.206 0.000413 0.000515 ! 
Validation 258 43249.753 0.005 0.00363 0.0227 0.0954 0.0527 0.0719 0.145 0.18 0.000362 0.00045 Wall time: 43249.753200488165 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 259 100 0.144 0.00333 0.0772 0.0513 0.0689 0.314 0.332 0.000784 0.000829 259 118 0.211 0.00377 0.135 0.0535 0.0733 0.421 0.439 0.00105 0.0011 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 259 100 0.0897 0.00262 0.0373 0.0449 0.0611 0.229 0.231 0.000574 0.000577 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 259 43417.351 0.005 0.00332 0.045 0.111 0.0513 0.0688 0.203 0.251 0.000507 0.000629 ! Validation 259 43417.351 0.005 0.00366 0.0524 0.126 0.0529 0.0722 0.239 0.273 0.000598 0.000683 Wall time: 43417.351199432276 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 260 100 0.14 0.00358 0.0683 0.0532 0.0714 0.294 0.312 0.000735 0.00078 260 118 0.076 0.0033 0.0099 0.0514 0.0686 0.0843 0.119 0.000211 0.000297 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 260 100 0.0587 0.00253 0.00815 0.0443 0.06 0.104 0.108 0.000259 0.000269 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 260 43584.985 0.005 0.0033 0.0258 0.0918 0.0512 0.0686 0.155 0.192 0.000388 0.00048 ! Validation 260 43584.985 0.005 0.00359 0.0141 0.0859 0.0524 0.0715 0.113 0.142 0.000284 0.000354 Wall time: 43584.98560302006 ! Best model 260 0.086 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 261 100 0.0805 0.00334 0.0138 0.0513 0.0689 0.112 0.14 0.000279 0.000351 261 118 0.0895 0.00318 0.0258 0.0508 0.0673 0.152 0.192 0.000381 0.00048 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 261 100 0.0533 0.00256 0.00222 0.0445 0.0603 0.0495 0.0562 0.000124 0.00014 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 261 43752.986 0.005 0.00325 0.0276 0.0926 0.0508 0.0681 0.161 0.198 0.000401 0.000496 ! Validation 261 43752.986 0.005 0.0036 0.0202 0.0922 0.0525 0.0716 0.133 0.169 0.000334 0.000424 Wall time: 43752.98621332226 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 262 100 0.202 0.00328 0.137 0.0509 0.0684 0.421 0.441 0.00105 0.0011 262 118 0.0844 0.00367 0.011 0.0533 0.0723 0.089 0.125 0.000223 0.000313 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 262 100 0.0506 0.0025 0.000598 0.0441 0.0597 0.0254 0.0292 6.35e-05 7.3e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 262 43920.461 0.005 0.00327 0.0429 0.108 0.0509 0.0682 0.202 0.248 0.000505 0.000619 ! Validation 262 43920.461 0.005 0.00359 0.0133 0.0852 0.0524 0.0716 0.107 0.138 0.000266 0.000344 Wall time: 43920.46203893004 ! Best model 262 0.085 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 263 100 0.0881 0.00314 0.0252 0.0501 0.0669 0.169 0.19 0.000423 0.000474 263 118 0.0748 0.00294 0.016 0.0492 0.0647 0.137 0.151 0.000343 0.000377 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 263 100 0.103 0.00249 0.0533 0.0441 0.0596 0.274 0.275 0.000685 0.000689 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 263 44087.986 0.005 0.00323 0.0274 0.0919 0.0506 0.0678 0.157 0.198 0.000392 0.000494 ! 
Validation 263 44087.986 0.005 0.00354 0.051 0.122 0.052 0.071 0.243 0.27 0.000608 0.000674 Wall time: 44087.986927095335 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 264 100 0.148 0.00319 0.0843 0.0498 0.0674 0.326 0.347 0.000816 0.000867 264 118 0.0918 0.00371 0.0175 0.0537 0.0727 0.144 0.158 0.000359 0.000395 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 264 100 0.052 0.00248 0.00244 0.0438 0.0594 0.052 0.059 0.00013 0.000147 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 264 44255.490 0.005 0.00324 0.0436 0.109 0.0507 0.068 0.203 0.25 0.000507 0.000624 ! Validation 264 44255.490 0.005 0.00354 0.0166 0.0875 0.052 0.0711 0.12 0.154 0.0003 0.000384 Wall time: 44255.49055916816 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 265 100 0.0741 0.00327 0.00866 0.0509 0.0683 0.0897 0.111 0.000224 0.000278 265 118 0.0722 0.00277 0.0168 0.0474 0.0628 0.146 0.155 0.000364 0.000387 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 265 100 0.051 0.00245 0.00208 0.0435 0.059 0.0457 0.0545 0.000114 0.000136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 265 44423.098 0.005 0.00318 0.0231 0.0867 0.0502 0.0673 0.146 0.182 0.000365 0.000454 ! Validation 265 44423.098 0.005 0.00349 0.012 0.0818 0.0516 0.0705 0.101 0.131 0.000253 0.000327 Wall time: 44423.0981862084 ! Best model 265 0.082 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 266 100 0.0874 0.00331 0.0212 0.0509 0.0686 0.148 0.174 0.00037 0.000435 266 118 0.0767 0.00355 0.00568 0.0533 0.0711 0.07 0.0899 0.000175 0.000225 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 266 100 0.05 0.00246 0.000752 0.0435 0.0592 0.025 0.0327 6.24e-05 8.18e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 266 44590.552 0.005 0.00317 0.0267 0.09 0.0501 0.0671 0.157 0.196 0.000392 0.000489 ! Validation 266 44590.552 0.005 0.00345 0.0135 0.0826 0.0513 0.0701 0.106 0.139 0.000266 0.000347 Wall time: 44590.55233573541 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 267 100 0.138 0.00294 0.0797 0.0484 0.0647 0.323 0.337 0.000808 0.000842 267 118 0.0756 0.00324 0.0108 0.0511 0.0679 0.0865 0.124 0.000216 0.000311 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 267 100 0.0746 0.0025 0.0246 0.0438 0.0597 0.185 0.187 0.000463 0.000468 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 267 44757.943 0.005 0.00316 0.0326 0.0958 0.05 0.0671 0.174 0.216 0.000435 0.00054 ! Validation 267 44757.943 0.005 0.00349 0.0323 0.102 0.0516 0.0705 0.186 0.214 0.000465 0.000536 Wall time: 44757.943611039314 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 268 100 0.0843 0.00329 0.0185 0.051 0.0685 0.13 0.162 0.000325 0.000406 268 118 0.173 0.00328 0.107 0.0513 0.0683 0.375 0.391 0.000938 0.000978 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 268 100 0.146 0.00249 0.0965 0.0438 0.0595 0.37 0.371 0.000925 0.000927 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 268 44925.191 0.005 0.00315 0.0339 0.0969 0.05 0.067 0.169 0.218 0.000422 0.000545 ! 
Validation 268 44925.191 0.005 0.00352 0.122 0.192 0.0518 0.0708 0.397 0.417 0.000994 0.00104 Wall time: 44925.191887859255 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 269 100 0.096 0.0031 0.0341 0.0497 0.0664 0.199 0.22 0.000497 0.000551 269 118 0.0972 0.0029 0.0392 0.0482 0.0643 0.215 0.236 0.000537 0.000591 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 269 100 0.158 0.00242 0.11 0.0432 0.0588 0.395 0.396 0.000987 0.000989 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 269 45092.474 0.005 0.00314 0.036 0.0989 0.0499 0.0669 0.186 0.226 0.000466 0.000566 ! Validation 269 45092.474 0.005 0.00342 0.0969 0.165 0.0511 0.0698 0.35 0.372 0.000874 0.000929 Wall time: 45092.47438403824 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 270 100 0.0929 0.00282 0.0364 0.0476 0.0634 0.2 0.228 0.000501 0.000569 270 118 0.0701 0.00314 0.0073 0.0499 0.0669 0.0791 0.102 0.000198 0.000255 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 270 100 0.109 0.0024 0.061 0.043 0.0584 0.294 0.295 0.000735 0.000737 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 270 45259.608 0.005 0.0031 0.0274 0.0894 0.0496 0.0665 0.157 0.198 0.000392 0.000495 ! Validation 270 45259.608 0.005 0.0034 0.06 0.128 0.0509 0.0696 0.266 0.292 0.000665 0.000731 Wall time: 45259.6083851303 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 271 100 0.093 0.00326 0.0278 0.0505 0.0681 0.175 0.199 0.000438 0.000497 271 118 0.16 0.00355 0.0894 0.0529 0.0711 0.338 0.357 0.000845 0.000892 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 271 100 0.0561 0.0024 0.0081 0.0431 0.0584 0.104 0.107 0.00026 0.000269 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 271 45426.730 0.005 0.00312 0.04 0.102 0.0497 0.0666 0.186 0.238 0.000465 0.000594 ! Validation 271 45426.730 0.005 0.00344 0.0366 0.105 0.0512 0.07 0.191 0.228 0.000476 0.000571 Wall time: 45426.73028074205 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 272 100 0.0932 0.00298 0.0336 0.049 0.0652 0.193 0.219 0.000483 0.000547 272 118 0.0551 0.00257 0.00362 0.0452 0.0605 0.065 0.0718 0.000162 0.00018 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 272 100 0.0622 0.00237 0.0147 0.0429 0.0581 0.142 0.145 0.000356 0.000362 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 272 45594.268 0.005 0.00313 0.0414 0.104 0.0499 0.0668 0.2 0.244 0.000499 0.000609 ! Validation 272 45594.268 0.005 0.00338 0.0181 0.0856 0.0508 0.0694 0.132 0.16 0.000329 0.000401 Wall time: 45594.268560878 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 273 100 0.0752 0.00296 0.016 0.0484 0.065 0.123 0.151 0.000308 0.000377 273 118 0.072 0.00316 0.00879 0.0495 0.0671 0.0975 0.112 0.000244 0.00028 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 273 100 0.0515 0.0023 0.00552 0.0423 0.0572 0.0827 0.0886 0.000207 0.000222 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 273 45761.366 0.005 0.00307 0.0294 0.0909 0.0494 0.0662 0.161 0.205 0.000404 0.000513 ! 
Validation 273 45761.366 0.005 0.00336 0.0232 0.0903 0.0506 0.0691 0.148 0.182 0.000371 0.000454 Wall time: 45761.36656318698 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 274 100 0.0691 0.00309 0.00737 0.0493 0.0663 0.0844 0.102 0.000211 0.000256 274 118 0.171 0.00396 0.0919 0.0548 0.0751 0.352 0.362 0.000881 0.000905 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 274 100 0.0639 0.00271 0.00962 0.0458 0.0622 0.116 0.117 0.00029 0.000293 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 274 45928.588 0.005 0.00302 0.0187 0.079 0.0488 0.0655 0.128 0.161 0.00032 0.000402 ! Validation 274 45928.588 0.005 0.00368 0.0346 0.108 0.0532 0.0724 0.184 0.222 0.00046 0.000555 Wall time: 45928.58872046927 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 275 100 0.0817 0.00283 0.025 0.0473 0.0635 0.17 0.189 0.000426 0.000472 275 118 0.0851 0.00293 0.0266 0.0481 0.0646 0.159 0.195 0.000397 0.000486 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 275 100 0.0659 0.00236 0.0186 0.0425 0.058 0.16 0.163 0.0004 0.000407 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 275 46095.736 0.005 0.00307 0.0292 0.0905 0.0493 0.0661 0.167 0.204 0.000418 0.00051 ! Validation 275 46095.736 0.005 0.00331 0.0216 0.0879 0.0502 0.0687 0.147 0.175 0.000368 0.000438 Wall time: 46095.73655166011 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 276 100 0.146 0.00328 0.0807 0.0511 0.0684 0.323 0.339 0.000807 0.000848 276 118 0.085 0.00313 0.0224 0.0489 0.0668 0.167 0.179 0.000417 0.000447 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 276 100 0.0531 0.00231 0.007 0.0424 0.0573 0.0958 0.0998 0.000239 0.00025 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 276 46262.855 0.005 0.00304 0.0364 0.0971 0.049 0.0658 0.188 0.228 0.00047 0.00057 ! Validation 276 46262.855 0.005 0.00333 0.0201 0.0867 0.0504 0.0689 0.137 0.169 0.000342 0.000423 Wall time: 46262.85526186507 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 277 100 0.0778 0.003 0.0178 0.0484 0.0653 0.127 0.159 0.000317 0.000399 277 118 0.175 0.00292 0.117 0.0483 0.0645 0.405 0.408 0.00101 0.00102 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 277 100 0.263 0.00231 0.217 0.0424 0.0574 0.556 0.556 0.00139 0.00139 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 277 46430.269 0.005 0.00305 0.0341 0.0951 0.0492 0.0659 0.178 0.219 0.000444 0.000546 ! Validation 277 46430.269 0.005 0.00334 0.222 0.289 0.0505 0.069 0.547 0.562 0.00137 0.00141 Wall time: 46430.2690150463 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 278 100 0.0666 0.00289 0.0089 0.048 0.0641 0.0878 0.113 0.00022 0.000282 278 118 0.061 0.00256 0.00979 0.0459 0.0604 0.103 0.118 0.000258 0.000295 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 278 100 0.0465 0.00229 0.000602 0.0421 0.0572 0.0252 0.0293 6.29e-05 7.32e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 278 46597.517 0.005 0.00302 0.0353 0.0957 0.049 0.0656 0.179 0.225 0.000448 0.000562 ! 
Validation 278 46597.517 0.005 0.00328 0.0144 0.08 0.05 0.0684 0.111 0.143 0.000278 0.000358 Wall time: 46597.517018159386 ! Best model 278 0.080 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 279 100 0.0802 0.00285 0.0231 0.0479 0.0638 0.16 0.181 0.0004 0.000454 279 118 0.0806 0.00343 0.0119 0.0526 0.0699 0.106 0.13 0.000265 0.000326 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 279 100 0.0498 0.00236 0.00261 0.0425 0.0579 0.0557 0.0609 0.000139 0.000152 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 279 46764.648 0.005 0.00303 0.0337 0.0942 0.049 0.0656 0.173 0.22 0.000433 0.000549 ! Validation 279 46764.648 0.005 0.00331 0.0116 0.0777 0.0502 0.0686 0.101 0.129 0.000253 0.000322 Wall time: 46764.64885848714 ! Best model 279 0.078 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 280 100 0.136 0.00289 0.078 0.048 0.0642 0.309 0.333 0.000774 0.000833 280 118 0.0796 0.00348 0.01 0.0519 0.0704 0.0956 0.119 0.000239 0.000299 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 280 100 0.122 0.00225 0.0766 0.0417 0.0566 0.329 0.33 0.000823 0.000826 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 280 46931.749 0.005 0.00297 0.0291 0.0884 0.0485 0.065 0.167 0.204 0.000418 0.00051 ! Validation 280 46931.749 0.005 0.00325 0.0742 0.139 0.0498 0.068 0.301 0.325 0.000753 0.000813 Wall time: 46931.74925164925 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 281 100 0.0719 0.00288 0.0144 0.0479 0.064 0.122 0.143 0.000304 0.000358 281 118 0.107 0.00322 0.0426 0.0502 0.0677 0.229 0.246 0.000573 0.000616 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 281 100 0.0885 0.00237 0.0411 0.0427 0.0581 0.241 0.242 0.000603 0.000605 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 281 47098.851 0.005 0.00326 0.0433 0.109 0.0507 0.0682 0.179 0.248 0.000448 0.000621 ! Validation 281 47098.851 0.005 0.00333 0.0476 0.114 0.0504 0.0688 0.229 0.26 0.000573 0.000651 Wall time: 47098.85184104927 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 282 100 0.135 0.00289 0.0772 0.0478 0.0641 0.311 0.332 0.000778 0.000829 282 118 0.065 0.00303 0.00448 0.0493 0.0657 0.0696 0.0799 0.000174 0.0002 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 282 100 0.0623 0.00225 0.0172 0.0417 0.0567 0.155 0.157 0.000386 0.000392 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 282 47265.997 0.005 0.00291 0.0184 0.0766 0.048 0.0644 0.129 0.162 0.000322 0.000405 ! Validation 282 47265.997 0.005 0.00321 0.0245 0.0886 0.0495 0.0676 0.158 0.187 0.000394 0.000467 Wall time: 47265.99794334127 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 283 100 0.0673 0.0027 0.0133 0.0465 0.062 0.116 0.137 0.000289 0.000344 283 118 0.0775 0.00314 0.0147 0.05 0.0669 0.13 0.145 0.000325 0.000362 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 283 100 0.11 0.0022 0.0662 0.0413 0.056 0.306 0.307 0.000766 0.000768 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 283 47433.222 0.005 0.00294 0.0323 0.0911 0.0483 0.0647 0.175 0.215 0.000439 0.000537 ! 
Validation 283 47433.222 0.005 0.00321 0.0686 0.133 0.0495 0.0676 0.289 0.313 0.000721 0.000781 Wall time: 47433.222526392434 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 284 100 0.0764 0.00292 0.018 0.0483 0.0645 0.13 0.16 0.000326 0.0004 284 118 0.0691 0.00318 0.00548 0.0497 0.0673 0.0803 0.0883 0.000201 0.000221 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 284 100 0.0504 0.00223 0.00587 0.0416 0.0563 0.0887 0.0915 0.000222 0.000229 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 284 47600.340 0.005 0.003 0.0415 0.102 0.0488 0.0654 0.201 0.244 0.000502 0.000609 ! Validation 284 47600.340 0.005 0.0032 0.0254 0.0893 0.0494 0.0675 0.155 0.19 0.000388 0.000475 Wall time: 47600.34095426602 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 285 100 0.0863 0.00288 0.0287 0.0481 0.064 0.177 0.202 0.000442 0.000506 285 118 0.076 0.0031 0.0139 0.0492 0.0665 0.122 0.141 0.000304 0.000351 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 285 100 0.054 0.00227 0.00851 0.042 0.0569 0.108 0.11 0.000269 0.000275 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 285 47767.454 0.005 0.00291 0.0268 0.085 0.048 0.0643 0.158 0.196 0.000396 0.00049 ! Validation 285 47767.454 0.005 0.00329 0.0172 0.0831 0.0502 0.0685 0.124 0.157 0.000309 0.000392 Wall time: 47767.45462907711 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 286 100 0.0688 0.00295 0.0098 0.048 0.0648 0.0903 0.118 0.000226 0.000295 286 118 0.0693 0.00307 0.00795 0.049 0.0661 0.0867 0.106 0.000217 0.000266 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 286 100 0.0668 0.00224 0.0221 0.0415 0.0565 0.176 0.177 0.000439 0.000443 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 286 47934.780 0.005 0.00288 0.0205 0.0781 0.0478 0.064 0.138 0.171 0.000344 0.000428 ! Validation 286 47934.780 0.005 0.00318 0.0211 0.0847 0.0493 0.0673 0.146 0.173 0.000366 0.000434 Wall time: 47934.7800701051 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 287 100 0.0658 0.00284 0.00904 0.0477 0.0636 0.0978 0.113 0.000244 0.000284 287 118 0.18 0.00323 0.115 0.051 0.0678 0.388 0.406 0.000969 0.00101 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 287 100 0.186 0.00289 0.128 0.0473 0.0642 0.426 0.427 0.00106 0.00107 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 287 48102.010 0.005 0.0029 0.0368 0.0949 0.048 0.0643 0.184 0.227 0.000461 0.000568 ! Validation 287 48102.010 0.005 0.00377 0.161 0.237 0.0543 0.0733 0.459 0.479 0.00115 0.0012 Wall time: 48102.01046952233 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 288 100 0.0615 0.00277 0.00617 0.0472 0.0628 0.074 0.0938 0.000185 0.000234 288 118 0.0683 0.00268 0.0147 0.0466 0.0617 0.128 0.145 0.00032 0.000362 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 288 100 0.0456 0.00218 0.00199 0.041 0.0557 0.0472 0.0533 0.000118 0.000133 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 288 48269.358 0.005 0.00292 0.0247 0.0831 0.0481 0.0645 0.15 0.188 0.000374 0.000469 ! 
Validation 288 48269.358 0.005 0.00313 0.0125 0.0752 0.0489 0.0668 0.104 0.133 0.00026 0.000333 Wall time: 48269.3582121823 ! Best model 288 0.075 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 289 100 0.0649 0.00268 0.0114 0.0464 0.0618 0.106 0.127 0.000266 0.000318 289 118 0.11 0.00294 0.0509 0.0484 0.0647 0.253 0.269 0.000634 0.000673 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 289 100 0.0534 0.00224 0.00861 0.0414 0.0565 0.108 0.111 0.000269 0.000277 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 289 48436.665 0.005 0.00291 0.0322 0.0904 0.0481 0.0644 0.174 0.214 0.000436 0.000535 ! Validation 289 48436.665 0.005 0.00317 0.0372 0.101 0.0491 0.0672 0.189 0.23 0.000472 0.000576 Wall time: 48436.6654984802 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 290 100 0.0865 0.00256 0.0354 0.0451 0.0603 0.206 0.225 0.000514 0.000561 290 118 0.0747 0.00338 0.00707 0.051 0.0694 0.0946 0.1 0.000236 0.000251 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 290 100 0.0515 0.00219 0.0076 0.0412 0.0559 0.0992 0.104 0.000248 0.00026 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 290 48603.962 0.005 0.00287 0.0325 0.09 0.0477 0.0639 0.179 0.216 0.000448 0.00054 ! Validation 290 48603.962 0.005 0.00314 0.0192 0.0819 0.0489 0.0668 0.135 0.165 0.000337 0.000413 Wall time: 48603.96220276132 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 291 100 0.0643 0.00268 0.0106 0.0464 0.0618 0.1 0.123 0.000251 0.000308 291 118 0.0709 0.00305 0.00996 0.0482 0.0659 0.085 0.119 0.000213 0.000298 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 291 100 0.0482 0.00215 0.00524 0.0406 0.0553 0.0838 0.0864 0.00021 0.000216 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 291 48771.289 0.005 0.00281 0.0238 0.08 0.0472 0.0632 0.149 0.184 0.000373 0.000461 ! Validation 291 48771.289 0.005 0.00307 0.0143 0.0756 0.0483 0.0661 0.113 0.143 0.000283 0.000357 Wall time: 48771.28921243502 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 292 100 0.0597 0.00267 0.00627 0.046 0.0617 0.0779 0.0945 0.000195 0.000236 292 118 0.0659 0.00318 0.00244 0.0503 0.0673 0.0505 0.059 0.000126 0.000147 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 292 100 0.0433 0.00215 0.000327 0.0407 0.0553 0.0179 0.0216 4.49e-05 5.4e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 292 48938.625 0.005 0.0028 0.0252 0.0811 0.0471 0.0631 0.152 0.19 0.000379 0.000475 ! Validation 292 48938.625 0.005 0.00305 0.0123 0.0733 0.0482 0.0659 0.105 0.132 0.000262 0.000331 Wall time: 48938.62595797004 ! Best model 292 0.073 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 293 100 0.0865 0.0028 0.0305 0.0472 0.0632 0.188 0.208 0.000469 0.000521 293 118 0.0726 0.00261 0.0203 0.0456 0.061 0.14 0.17 0.00035 0.000426 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 293 100 0.0518 0.00225 0.00689 0.0416 0.0566 0.0952 0.0991 0.000238 0.000248 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 293 49106.033 0.005 0.00276 0.0252 0.0804 0.0468 0.0628 0.151 0.189 0.000379 0.000474 ! 
Validation 293 49106.033 0.005 0.00314 0.0109 0.0738 0.049 0.0669 0.0967 0.125 0.000242 0.000312 Wall time: 49106.03352235211 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 294 100 0.077 0.00277 0.0217 0.0475 0.0628 0.154 0.176 0.000386 0.000439 294 118 0.0637 0.00272 0.00941 0.0462 0.0622 0.0891 0.116 0.000223 0.000289 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 294 100 0.0608 0.0021 0.0188 0.0401 0.0547 0.162 0.164 0.000404 0.000409 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 294 49273.289 0.005 0.0029 0.0379 0.0959 0.048 0.0643 0.187 0.233 0.000468 0.000583 ! Validation 294 49273.289 0.005 0.00305 0.0223 0.0832 0.0482 0.0659 0.151 0.178 0.000378 0.000446 Wall time: 49273.28914472135 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 295 100 0.0676 0.00269 0.0138 0.0461 0.0619 0.115 0.14 0.000287 0.000351 295 118 0.0704 0.00265 0.0174 0.0464 0.0615 0.145 0.157 0.000362 0.000393 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 295 100 0.0544 0.00211 0.0121 0.0404 0.0549 0.13 0.132 0.000324 0.000329 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 295 49440.543 0.005 0.00276 0.0218 0.077 0.0468 0.0627 0.143 0.176 0.000358 0.000441 ! Validation 295 49440.543 0.005 0.00303 0.0177 0.0783 0.0482 0.0657 0.131 0.159 0.000326 0.000397 Wall time: 49440.54395594122 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 296 100 0.0759 0.0028 0.0198 0.0474 0.0632 0.13 0.168 0.000325 0.00042 296 118 0.0826 0.00292 0.0243 0.0483 0.0645 0.162 0.186 0.000404 0.000465 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 296 100 0.0717 0.00219 0.0279 0.0412 0.0559 0.196 0.199 0.000491 0.000498 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 296 49607.909 0.005 0.00286 0.0377 0.0949 0.0477 0.0639 0.179 0.232 0.000448 0.00058 ! Validation 296 49607.909 0.005 0.00305 0.0487 0.11 0.0482 0.0659 0.236 0.263 0.000589 0.000658 Wall time: 49607.909074551426 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 297 100 0.0844 0.00312 0.0219 0.0502 0.0667 0.157 0.177 0.000393 0.000442 297 118 0.148 0.00283 0.0918 0.048 0.0634 0.35 0.362 0.000876 0.000904 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 297 100 0.124 0.00237 0.0762 0.0431 0.0581 0.329 0.329 0.000822 0.000824 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 297 49775.263 0.005 0.00276 0.0298 0.085 0.0468 0.0627 0.166 0.204 0.000416 0.000511 ! Validation 297 49775.263 0.005 0.00327 0.0904 0.156 0.0502 0.0683 0.334 0.359 0.000836 0.000897 Wall time: 49775.26359473402 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 298 100 0.0778 0.00291 0.0196 0.0483 0.0643 0.15 0.167 0.000374 0.000418 298 118 0.0652 0.00277 0.00974 0.0469 0.0628 0.106 0.118 0.000265 0.000295 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 298 100 0.0529 0.00219 0.00901 0.0413 0.0559 0.111 0.113 0.000278 0.000283 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 298 49942.530 0.005 0.00298 0.0414 0.101 0.0489 0.0652 0.194 0.243 0.000485 0.000609 ! 
Validation 298 49942.530 0.005 0.00309 0.0175 0.0793 0.0487 0.0664 0.129 0.158 0.000323 0.000394 Wall time: 49942.530039455276 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 299 100 0.0646 0.0028 0.00859 0.0468 0.0632 0.0887 0.111 0.000222 0.000277 299 118 0.109 0.00327 0.0434 0.0513 0.0682 0.232 0.249 0.00058 0.000622 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 299 100 0.0481 0.00237 0.00067 0.0434 0.0581 0.0283 0.0309 7.07e-05 7.72e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 299 50109.814 0.005 0.00272 0.0253 0.0797 0.0464 0.0622 0.147 0.189 0.000367 0.000474 ! Validation 299 50109.814 0.005 0.0033 0.0199 0.086 0.0505 0.0686 0.123 0.169 0.000308 0.000421 Wall time: 50109.81484296918 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 300 100 0.0653 0.00267 0.0118 0.0462 0.0617 0.104 0.13 0.00026 0.000324 300 118 0.0573 0.00267 0.0038 0.0467 0.0617 0.0562 0.0735 0.000141 0.000184 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 300 100 0.06 0.00212 0.0176 0.0409 0.055 0.156 0.158 0.000391 0.000396 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 300 50277.070 0.005 0.00279 0.0293 0.0852 0.0471 0.0631 0.167 0.205 0.000418 0.000512 ! Validation 300 50277.070 0.005 0.00305 0.0207 0.0818 0.0486 0.066 0.146 0.172 0.000364 0.000429 Wall time: 50277.070365081076 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 301 100 0.0573 0.00259 0.00545 0.0455 0.0608 0.0695 0.0881 0.000174 0.00022 301 118 0.136 0.00254 0.0854 0.0452 0.0602 0.343 0.349 0.000858 0.000872 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 301 100 0.129 0.00212 0.0865 0.0408 0.0549 0.35 0.351 0.000876 0.000878 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 301 50444.309 0.005 0.00274 0.0229 0.0776 0.0466 0.0624 0.145 0.179 0.000362 0.000448 ! Validation 301 50444.309 0.005 0.00302 0.102 0.163 0.0483 0.0656 0.361 0.382 0.000903 0.000955 Wall time: 50444.30924368324 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 302 100 0.063 0.00282 0.0065 0.0473 0.0634 0.0765 0.0962 0.000191 0.000241 302 118 0.0701 0.00286 0.0128 0.0476 0.0639 0.104 0.135 0.00026 0.000338 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 302 100 0.0639 0.00204 0.023 0.0398 0.0539 0.178 0.181 0.000446 0.000453 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 302 50612.085 0.005 0.00275 0.0324 0.0873 0.0467 0.0625 0.17 0.215 0.000425 0.000538 ! Validation 302 50612.085 0.005 0.00295 0.0385 0.0975 0.0474 0.0648 0.203 0.234 0.000507 0.000585 Wall time: 50612.08601517929 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 303 100 0.0776 0.00261 0.0254 0.0457 0.0609 0.168 0.19 0.00042 0.000476 303 118 0.0636 0.00252 0.0132 0.0457 0.0599 0.119 0.137 0.000298 0.000342 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 303 100 0.0486 0.00202 0.00821 0.0396 0.0537 0.106 0.108 0.000264 0.00027 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 303 50779.534 0.005 0.00274 0.0295 0.0843 0.0467 0.0625 0.16 0.205 0.000399 0.000514 ! 
Validation 303 50779.534 0.005 0.00296 0.0155 0.0748 0.0476 0.065 0.121 0.149 0.000302 0.000372 Wall time: 50779.53466604743 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 304 100 0.181 0.00369 0.107 0.054 0.0725 0.366 0.391 0.000915 0.000978 304 118 0.0741 0.00294 0.0153 0.0485 0.0647 0.117 0.148 0.000293 0.000369 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 304 100 0.0435 0.00209 0.00163 0.0405 0.0546 0.0404 0.0482 0.000101 0.00012 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 304 50947.055 0.005 0.00277 0.0271 0.0825 0.0468 0.0628 0.156 0.197 0.000389 0.000492 ! Validation 304 50947.055 0.005 0.00305 0.0108 0.0718 0.0484 0.0659 0.0972 0.124 0.000243 0.000311 Wall time: 50947.0558908754 ! Best model 304 0.072 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 305 100 0.0728 0.00253 0.0222 0.0446 0.06 0.159 0.178 0.000398 0.000445 305 118 0.061 0.00284 0.00423 0.0475 0.0636 0.0618 0.0776 0.000154 0.000194 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 305 100 0.0473 0.00209 0.00541 0.0403 0.0546 0.0846 0.0878 0.000212 0.00022 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 305 51115.376 0.005 0.0027 0.0289 0.0829 0.0463 0.062 0.166 0.203 0.000414 0.000509 ! Validation 305 51115.376 0.005 0.00299 0.0169 0.0767 0.0478 0.0653 0.124 0.155 0.00031 0.000388 Wall time: 51115.37665219605 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 306 100 0.0782 0.00262 0.0259 0.0456 0.061 0.172 0.192 0.000431 0.00048 306 118 0.105 0.00288 0.0476 0.0476 0.0641 0.231 0.261 0.000577 0.000651 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 306 100 0.0633 0.0022 0.0193 0.0417 0.056 0.163 0.166 0.000407 0.000414 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 306 51282.641 0.005 0.00262 0.0241 0.0765 0.0456 0.0611 0.141 0.185 0.000354 0.000462 ! Validation 306 51282.641 0.005 0.00312 0.0327 0.095 0.0491 0.0666 0.186 0.216 0.000466 0.00054 Wall time: 51282.641400039196 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 307 100 0.0609 0.00261 0.0088 0.0455 0.0609 0.0865 0.112 0.000216 0.00028 307 118 0.0576 0.00254 0.00693 0.0451 0.0601 0.0795 0.0993 0.000199 0.000248 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 307 100 0.0403 0.00199 0.000469 0.0392 0.0533 0.0238 0.0258 5.96e-05 6.46e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 307 51451.369 0.005 0.00266 0.0249 0.0781 0.046 0.0616 0.152 0.189 0.00038 0.000472 ! Validation 307 51451.369 0.005 0.00283 0.0115 0.0681 0.0464 0.0635 0.1 0.128 0.00025 0.00032 Wall time: 51451.369304433 ! Best model 307 0.068 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 308 100 0.0957 0.0031 0.0336 0.0502 0.0665 0.2 0.219 0.0005 0.000547 308 118 0.0773 0.00273 0.0226 0.0465 0.0624 0.171 0.179 0.000427 0.000448 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 308 100 0.133 0.002 0.0926 0.0397 0.0533 0.362 0.363 0.000905 0.000908 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 308 51618.763 0.005 0.00266 0.0256 0.0788 0.046 0.0616 0.152 0.191 0.000379 0.000477 ! 
Validation 308 51618.763 0.005 0.0029 0.0968 0.155 0.0472 0.0643 0.345 0.371 0.000862 0.000928 Wall time: 51618.7634622762 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 309 100 0.0639 0.00268 0.0102 0.0462 0.0618 0.0934 0.121 0.000233 0.000302 309 118 0.0733 0.00285 0.0163 0.0474 0.0637 0.12 0.152 0.000301 0.000381 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 309 100 0.124 0.00212 0.0811 0.0405 0.055 0.339 0.34 0.000848 0.00085 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 309 51786.110 0.005 0.00276 0.0326 0.0878 0.0468 0.0627 0.175 0.216 0.000438 0.00054 ! Validation 309 51786.110 0.005 0.00298 0.0751 0.135 0.0478 0.0652 0.301 0.327 0.000752 0.000818 Wall time: 51786.11101319501 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 310 100 0.0603 0.00239 0.0124 0.0437 0.0584 0.114 0.133 0.000284 0.000332 310 118 0.0863 0.00234 0.0395 0.0437 0.0577 0.23 0.237 0.000575 0.000593 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 310 100 0.0502 0.002 0.0103 0.0391 0.0533 0.119 0.121 0.000298 0.000303 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 310 51953.437 0.005 0.00261 0.0231 0.0753 0.0455 0.061 0.146 0.181 0.000366 0.000452 ! Validation 310 51953.437 0.005 0.00283 0.0263 0.0829 0.0464 0.0635 0.163 0.194 0.000406 0.000484 Wall time: 51953.4370996044 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 311 100 0.0623 0.00249 0.0125 0.0445 0.0596 0.107 0.133 0.000269 0.000333 311 118 0.0698 0.00299 0.00999 0.049 0.0653 0.0992 0.119 0.000248 0.000298 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 311 100 0.117 0.00201 0.0766 0.0396 0.0535 0.33 0.33 0.000824 0.000826 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 311 52120.770 0.005 0.00255 0.0228 0.0737 0.0449 0.0602 0.145 0.181 0.000362 0.000451 ! Validation 311 52120.770 0.005 0.00288 0.0693 0.127 0.047 0.064 0.288 0.314 0.00072 0.000786 Wall time: 52120.770508140326 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 312 100 0.0823 0.00256 0.0311 0.0452 0.0604 0.199 0.21 0.000499 0.000526 312 118 0.0817 0.00282 0.0252 0.0468 0.0634 0.165 0.189 0.000412 0.000474 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 312 100 0.0889 0.00193 0.0503 0.0387 0.0524 0.267 0.268 0.000667 0.000669 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 312 52288.201 0.005 0.00266 0.0297 0.0828 0.0459 0.0615 0.167 0.206 0.000418 0.000514 ! Validation 312 52288.201 0.005 0.00279 0.0695 0.125 0.046 0.063 0.292 0.315 0.000729 0.000786 Wall time: 52288.20114276605 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 313 100 0.0543 0.00229 0.00842 0.0429 0.0572 0.0888 0.11 0.000222 0.000274 313 118 0.0851 0.00227 0.0398 0.0431 0.0568 0.228 0.238 0.00057 0.000595 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 313 100 0.0663 0.00218 0.0226 0.0415 0.0558 0.178 0.179 0.000444 0.000448 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 313 52455.523 0.005 0.00268 0.0281 0.0817 0.0461 0.0618 0.159 0.2 0.000397 0.0005 ! 
Validation 313 52455.523 0.005 0.00303 0.0207 0.0812 0.0484 0.0657 0.144 0.172 0.000359 0.000429 Wall time: 52455.523159111384 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 314 100 0.146 0.00261 0.0939 0.0455 0.061 0.353 0.366 0.000882 0.000914 314 118 0.114 0.00369 0.0404 0.0545 0.0725 0.19 0.24 0.000475 0.000599 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 314 100 0.0495 0.00233 0.00287 0.0431 0.0576 0.0597 0.0639 0.000149 0.00016 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 314 52622.838 0.005 0.00258 0.0313 0.083 0.0452 0.0606 0.163 0.211 0.000408 0.000528 ! Validation 314 52622.838 0.005 0.00321 0.0137 0.078 0.0504 0.0677 0.111 0.14 0.000277 0.000349 Wall time: 52622.838325973134 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 315 100 0.0536 0.00224 0.00884 0.0424 0.0565 0.0953 0.112 0.000238 0.000281 315 118 0.0798 0.0027 0.0259 0.0461 0.062 0.172 0.192 0.000429 0.00048 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 315 100 0.0407 0.00198 0.00114 0.0395 0.0531 0.0354 0.0402 8.85e-05 0.000101 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 315 52790.184 0.005 0.00259 0.0182 0.0701 0.0453 0.0608 0.127 0.161 0.000318 0.000402 ! Validation 315 52790.184 0.005 0.00283 0.0119 0.0686 0.0467 0.0635 0.103 0.13 0.000257 0.000326 Wall time: 52790.18431021832 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 316 100 0.0535 0.00226 0.00827 0.0424 0.0568 0.0897 0.109 0.000224 0.000271 316 118 0.0677 0.00294 0.00895 0.0474 0.0647 0.0898 0.113 0.000224 0.000282 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 316 100 0.0961 0.00197 0.0567 0.039 0.0529 0.282 0.284 0.000706 0.000711 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 316 52957.655 0.005 0.00249 0.0208 0.0706 0.0444 0.0595 0.14 0.172 0.000351 0.000431 ! Validation 316 52957.655 0.005 0.00277 0.0804 0.136 0.0459 0.0628 0.317 0.338 0.000793 0.000846 Wall time: 52957.65511579998 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 317 100 0.0539 0.00235 0.00689 0.0432 0.0578 0.0814 0.0991 0.000203 0.000248 317 118 0.0718 0.00247 0.0224 0.0448 0.0594 0.165 0.179 0.000413 0.000446 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 317 100 0.0389 0.00192 0.000555 0.0385 0.0523 0.0215 0.0281 5.38e-05 7.03e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 317 53125.049 0.005 0.00276 0.0316 0.0868 0.0466 0.0627 0.16 0.213 0.000399 0.000531 ! Validation 317 53125.049 0.005 0.0028 0.0143 0.0704 0.0463 0.0632 0.112 0.143 0.000281 0.000357 Wall time: 53125.04906721506 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 318 100 0.077 0.00252 0.0265 0.0449 0.06 0.16 0.194 0.000401 0.000486 318 118 0.0576 0.00249 0.00767 0.0445 0.0596 0.0807 0.105 0.000202 0.000261 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 318 100 0.0633 0.00185 0.0263 0.038 0.0514 0.192 0.193 0.000479 0.000484 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 318 53292.433 0.005 0.0025 0.0238 0.0738 0.0446 0.0597 0.149 0.184 0.000372 0.000461 ! 
Validation 318 53292.433 0.005 0.00273 0.0299 0.0844 0.0456 0.0623 0.18 0.206 0.00045 0.000516 Wall time: 53292.43334579328 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 319 100 0.0522 0.00232 0.00574 0.0428 0.0575 0.0705 0.0904 0.000176 0.000226 319 118 0.0523 0.00211 0.0101 0.0411 0.0549 0.106 0.12 0.000264 0.0003 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 319 100 0.0375 0.00181 0.00138 0.0374 0.0507 0.0392 0.0444 9.81e-05 0.000111 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 319 53459.816 0.005 0.00262 0.0311 0.0834 0.0456 0.0611 0.163 0.211 0.000407 0.000528 ! Validation 319 53459.816 0.005 0.00268 0.0122 0.0658 0.0451 0.0618 0.103 0.132 0.000258 0.000329 Wall time: 53459.81624623621 ! Best model 319 0.066 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 320 100 0.0666 0.00256 0.0154 0.0449 0.0604 0.119 0.148 0.000297 0.00037 320 118 0.113 0.00281 0.0565 0.047 0.0633 0.265 0.284 0.000663 0.000709 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 320 100 0.0876 0.00193 0.0491 0.0387 0.0524 0.264 0.265 0.00066 0.000661 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 320 53627.217 0.005 0.00244 0.0176 0.0664 0.0439 0.0589 0.127 0.157 0.000317 0.000393 ! Validation 320 53627.217 0.005 0.00276 0.0655 0.121 0.0459 0.0627 0.284 0.305 0.000709 0.000764 Wall time: 53627.21735367132 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 321 100 0.0945 0.00296 0.0353 0.0483 0.065 0.192 0.224 0.00048 0.00056 321 118 0.0605 0.00269 0.00662 0.0463 0.0619 0.0821 0.0971 0.000205 0.000243 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 321 100 0.043 0.00199 0.00321 0.0394 0.0532 0.0615 0.0676 0.000154 0.000169 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 321 53794.648 0.005 0.00258 0.031 0.0826 0.0453 0.0607 0.165 0.211 0.000412 0.000526 ! Validation 321 53794.648 0.005 0.00281 0.0132 0.0695 0.0465 0.0633 0.109 0.137 0.000272 0.000343 Wall time: 53794.64884055313 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 322 100 0.0745 0.00228 0.0289 0.0429 0.057 0.179 0.203 0.000446 0.000508 322 118 0.0789 0.0023 0.0329 0.0431 0.0573 0.189 0.216 0.000474 0.000541 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 322 100 0.0516 0.00207 0.0102 0.0403 0.0543 0.117 0.12 0.000293 0.000301 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 322 53961.728 0.005 0.00249 0.0258 0.0757 0.0446 0.0596 0.156 0.192 0.00039 0.000479 ! Validation 322 53961.728 0.005 0.00285 0.0184 0.0753 0.0469 0.0637 0.133 0.162 0.000333 0.000404 Wall time: 53961.7280065543 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 323 100 0.065 0.00239 0.0172 0.0434 0.0584 0.131 0.157 0.000329 0.000391 323 118 0.051 0.00231 0.00475 0.0429 0.0574 0.0647 0.0822 0.000162 0.000206 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 323 100 0.0372 0.00183 0.000662 0.0377 0.051 0.0265 0.0307 6.63e-05 7.68e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 323 54128.995 0.005 0.00243 0.0188 0.0674 0.044 0.0589 0.133 0.164 0.000333 0.00041 ! 
Validation 323 54128.995 0.005 0.00264 0.0105 0.0633 0.0448 0.0613 0.0966 0.122 0.000241 0.000306 Wall time: 54128.99588276632 ! Best model 323 0.063 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 324 100 0.0598 0.00242 0.0115 0.0437 0.0587 0.104 0.128 0.000261 0.00032 324 118 0.083 0.00279 0.0272 0.0462 0.0631 0.168 0.197 0.000421 0.000492 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 324 100 0.0412 0.00202 0.000761 0.0396 0.0536 0.0285 0.0329 7.13e-05 8.23e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 324 54296.081 0.005 0.00238 0.0187 0.0663 0.0434 0.0582 0.13 0.163 0.000325 0.000408 ! Validation 324 54296.081 0.005 0.00284 0.0104 0.0671 0.0467 0.0636 0.0953 0.122 0.000238 0.000304 Wall time: 54296.08136485331 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 325 100 0.0482 0.00212 0.00589 0.0413 0.0549 0.0707 0.0916 0.000177 0.000229 325 118 0.0526 0.00232 0.0062 0.0431 0.0575 0.071 0.094 0.000177 0.000235 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 325 100 0.0367 0.00179 0.00101 0.0373 0.0505 0.0287 0.038 7.19e-05 9.49e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 325 54463.164 0.005 0.00252 0.0227 0.0731 0.0447 0.0599 0.14 0.18 0.00035 0.000451 ! Validation 325 54463.164 0.005 0.00259 0.00942 0.0612 0.0444 0.0607 0.0894 0.116 0.000224 0.00029 Wall time: 54463.16430546623 ! Best model 325 0.061 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 326 100 0.172 0.00236 0.125 0.0436 0.058 0.413 0.422 0.00103 0.00106 326 118 0.0723 0.00315 0.00927 0.0509 0.067 0.084 0.115 0.00021 0.000287 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 326 100 0.0493 0.00235 0.00232 0.0421 0.0578 0.0481 0.0575 0.00012 0.000144 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 326 54630.323 0.005 0.00246 0.0271 0.0764 0.0441 0.0592 0.153 0.197 0.000381 0.000493 ! Validation 326 54630.323 0.005 0.00311 0.0108 0.073 0.0489 0.0666 0.0964 0.124 0.000241 0.000309 Wall time: 54630.32307244139 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 327 100 0.0559 0.00224 0.0111 0.0422 0.0565 0.102 0.126 0.000254 0.000315 327 118 0.0669 0.00229 0.0212 0.0421 0.0571 0.151 0.174 0.000378 0.000434 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 327 100 0.0417 0.0019 0.00376 0.0389 0.052 0.0703 0.0731 0.000176 0.000183 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 327 54797.381 0.005 0.0024 0.0196 0.0675 0.0436 0.0584 0.133 0.167 0.000331 0.000418 ! Validation 327 54797.381 0.005 0.00277 0.0115 0.0669 0.046 0.0628 0.101 0.128 0.000253 0.000321 Wall time: 54797.38111921307 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 328 100 0.0798 0.00224 0.035 0.0424 0.0565 0.211 0.223 0.000526 0.000559 328 118 0.0659 0.00261 0.0137 0.0448 0.061 0.0949 0.139 0.000237 0.000349 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 328 100 0.0358 0.00177 0.000427 0.0371 0.0502 0.0232 0.0247 5.81e-05 6.17e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 328 54964.462 0.005 0.00243 0.0222 0.0707 0.0438 0.0588 0.14 0.178 0.00035 0.000445 ! 
Validation 328 54964.462 0.005 0.00255 0.0103 0.0612 0.044 0.0602 0.0956 0.121 0.000239 0.000303 Wall time: 54964.462285497226 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 329 100 0.0622 0.00231 0.0161 0.0426 0.0573 0.126 0.151 0.000314 0.000378 329 118 0.0694 0.00237 0.0221 0.0434 0.0581 0.157 0.177 0.000393 0.000443 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 329 100 0.0691 0.0019 0.031 0.0387 0.0521 0.208 0.21 0.00052 0.000525 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 329 55131.530 0.005 0.0023 0.0136 0.0597 0.0427 0.0573 0.111 0.139 0.000278 0.000348 ! Validation 329 55131.530 0.005 0.00268 0.0544 0.108 0.0454 0.0618 0.254 0.278 0.000636 0.000696 Wall time: 55131.53076200513 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 330 100 0.0852 0.00239 0.0375 0.0435 0.0583 0.211 0.231 0.000529 0.000578 330 118 0.0608 0.00217 0.0173 0.0415 0.0557 0.143 0.157 0.000356 0.000392 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 330 100 0.0555 0.00179 0.0196 0.037 0.0505 0.166 0.167 0.000414 0.000418 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 330 55298.613 0.005 0.00258 0.0303 0.082 0.0452 0.0607 0.157 0.208 0.000393 0.000521 ! Validation 330 55298.613 0.005 0.00255 0.0337 0.0847 0.0441 0.0603 0.192 0.219 0.00048 0.000548 Wall time: 55298.613438447006 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 331 100 0.0706 0.00297 0.0113 0.0493 0.065 0.102 0.127 0.000256 0.000317 331 118 0.0553 0.00254 0.00458 0.0445 0.0601 0.0701 0.0808 0.000175 0.000202 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 331 100 0.0485 0.00197 0.00916 0.0394 0.053 0.109 0.114 0.000273 0.000286 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 331 55465.761 0.005 0.00244 0.028 0.0767 0.044 0.0589 0.16 0.2 0.0004 0.0005 ! Validation 331 55465.761 0.005 0.00271 0.0193 0.0734 0.0458 0.0621 0.139 0.166 0.000347 0.000414 Wall time: 55465.76118066907 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 332 100 0.0634 0.00232 0.017 0.0431 0.0575 0.136 0.155 0.000341 0.000389 332 118 0.0562 0.0024 0.00809 0.0439 0.0585 0.0958 0.107 0.00024 0.000268 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 332 100 0.0484 0.0017 0.0143 0.0366 0.0493 0.141 0.143 0.000354 0.000357 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 332 55632.993 0.005 0.00239 0.0226 0.0703 0.0435 0.0583 0.136 0.18 0.00034 0.000449 ! Validation 332 55632.993 0.005 0.00254 0.0276 0.0785 0.044 0.0602 0.172 0.198 0.000429 0.000496 Wall time: 55632.99316800106 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 333 100 0.0786 0.00229 0.0327 0.0429 0.0571 0.202 0.216 0.000506 0.00054 333 118 0.0678 0.00269 0.014 0.0458 0.0619 0.103 0.141 0.000257 0.000353 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 333 100 0.0365 0.00174 0.00178 0.0369 0.0497 0.0449 0.0503 0.000112 0.000126 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 333 55800.062 0.005 0.00242 0.0305 0.0789 0.0439 0.0587 0.17 0.209 0.000426 0.000522 ! 
Validation 333 55800.062 0.005 0.00251 0.013 0.0632 0.0438 0.0598 0.108 0.136 0.000271 0.00034 Wall time: 55800.0619708281 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 334 100 0.0509 0.00221 0.00658 0.042 0.0562 0.0798 0.0968 0.000199 0.000242 334 118 0.0766 0.00225 0.0316 0.0425 0.0566 0.203 0.212 0.000506 0.00053 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 334 100 0.0414 0.0019 0.0034 0.0382 0.052 0.0668 0.0695 0.000167 0.000174 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 334 55967.130 0.005 0.00229 0.0175 0.0634 0.0427 0.0572 0.128 0.158 0.000319 0.000394 ! Validation 334 55967.130 0.005 0.00264 0.0187 0.0716 0.045 0.0614 0.13 0.163 0.000324 0.000408 Wall time: 55967.13035213342 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 335 100 0.0582 0.00248 0.00857 0.0438 0.0594 0.0927 0.11 0.000232 0.000276 335 118 0.0505 0.00214 0.00764 0.0414 0.0552 0.0811 0.104 0.000203 0.000261 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 335 100 0.0403 0.00168 0.00671 0.036 0.0489 0.0958 0.0978 0.00024 0.000244 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 335 56135.193 0.005 0.00233 0.0206 0.0673 0.0431 0.0577 0.134 0.172 0.000334 0.000429 ! Validation 335 56135.193 0.005 0.00244 0.0162 0.065 0.043 0.059 0.126 0.152 0.000314 0.000379 Wall time: 56135.19370278437 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 336 100 0.0955 0.0024 0.0476 0.0436 0.0584 0.247 0.26 0.000618 0.000651 336 118 0.149 0.00301 0.0891 0.0495 0.0655 0.341 0.356 0.000853 0.000891 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 336 100 0.0473 0.00228 0.00179 0.0432 0.0569 0.0443 0.0505 0.000111 0.000126 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 336 56302.273 0.005 0.00235 0.032 0.079 0.0432 0.0578 0.173 0.212 0.000433 0.00053 ! Validation 336 56302.273 0.005 0.00307 0.0163 0.0776 0.0496 0.0661 0.118 0.152 0.000296 0.000381 Wall time: 56302.27332799137 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 337 100 0.0478 0.00212 0.00542 0.0413 0.055 0.0698 0.0879 0.000175 0.00022 337 118 0.0808 0.00238 0.0332 0.0429 0.0582 0.191 0.217 0.000476 0.000544 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 337 100 0.0434 0.00197 0.00405 0.0388 0.053 0.0737 0.076 0.000184 0.00019 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 337 56469.534 0.005 0.00236 0.0202 0.0673 0.0432 0.0579 0.136 0.169 0.000339 0.000423 ! Validation 337 56469.534 0.005 0.00275 0.0132 0.0683 0.0459 0.0626 0.111 0.137 0.000276 0.000342 Wall time: 56469.53475977527 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 338 100 0.0487 0.00215 0.00575 0.0412 0.0553 0.0719 0.0905 0.00018 0.000226 338 118 0.0851 0.00227 0.0398 0.0429 0.0568 0.209 0.238 0.000523 0.000595 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 338 100 0.16 0.00199 0.12 0.04 0.0533 0.411 0.413 0.00103 0.00103 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 338 56636.826 0.005 0.00228 0.0205 0.0661 0.0425 0.057 0.136 0.171 0.000339 0.000426 ! 
Validation 338 56636.826 0.005 0.00273 0.132 0.187 0.046 0.0624 0.419 0.434 0.00105 0.00109 Wall time: 56636.82605886506 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 339 100 0.0735 0.00232 0.0271 0.0427 0.0575 0.178 0.196 0.000446 0.000491 339 118 0.0612 0.00254 0.0105 0.0445 0.0601 0.0906 0.122 0.000226 0.000305 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 339 100 0.0444 0.00179 0.00852 0.0372 0.0505 0.107 0.11 0.000267 0.000275 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 339 56804.095 0.005 0.0024 0.0214 0.0695 0.0436 0.0585 0.14 0.175 0.000349 0.000437 ! Validation 339 56804.095 0.005 0.00254 0.0278 0.0786 0.044 0.0601 0.16 0.199 0.0004 0.000497 Wall time: 56804.095140899066 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 340 100 0.05 0.0022 0.00588 0.0421 0.056 0.0729 0.0915 0.000182 0.000229 340 118 0.0451 0.00192 0.0067 0.039 0.0523 0.0861 0.0977 0.000215 0.000244 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 340 100 0.0355 0.00173 0.000927 0.0367 0.0496 0.0285 0.0363 7.13e-05 9.08e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 340 56971.440 0.005 0.0024 0.0301 0.0782 0.0438 0.0585 0.16 0.208 0.0004 0.000519 ! Validation 340 56971.440 0.005 0.00247 0.0122 0.0616 0.0433 0.0593 0.103 0.132 0.000258 0.00033 Wall time: 56971.44086865522 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 341 100 0.069 0.00226 0.0238 0.0422 0.0567 0.15 0.184 0.000375 0.00046 341 118 0.0603 0.00224 0.0156 0.0423 0.0565 0.133 0.149 0.000332 0.000372 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 341 100 0.0544 0.00181 0.0181 0.0378 0.0508 0.158 0.161 0.000396 0.000402 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 341 57138.703 0.005 0.00224 0.016 0.0609 0.0422 0.0565 0.119 0.151 0.000297 0.000377 ! Validation 341 57138.703 0.005 0.00252 0.0195 0.07 0.044 0.0599 0.142 0.167 0.000356 0.000417 Wall time: 57138.7035633754 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 342 100 0.0454 0.00204 0.00449 0.0404 0.054 0.0658 0.08 0.000165 0.0002 342 118 0.058 0.00239 0.0102 0.0428 0.0584 0.0999 0.121 0.00025 0.000302 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 342 100 0.0401 0.00166 0.00693 0.0359 0.0486 0.0971 0.0993 0.000243 0.000248 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 342 57305.976 0.005 0.00221 0.017 0.0612 0.0419 0.0561 0.125 0.156 0.000312 0.00039 ! Validation 342 57305.976 0.005 0.00242 0.0202 0.0686 0.0429 0.0587 0.137 0.17 0.000342 0.000424 Wall time: 57305.97629727097 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 343 100 0.0488 0.00219 0.00496 0.0418 0.0559 0.0652 0.0841 0.000163 0.00021 343 118 0.0478 0.00213 0.0053 0.0418 0.055 0.0635 0.0869 0.000159 0.000217 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 343 100 0.0401 0.00169 0.00623 0.0361 0.0491 0.0915 0.0942 0.000229 0.000235 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 343 57473.260 0.005 0.00226 0.0271 0.0722 0.0423 0.0567 0.158 0.197 0.000394 0.000493 ! 
Validation 343 57473.260 0.005 0.00245 0.0129 0.062 0.0432 0.0591 0.111 0.136 0.000277 0.000339 Wall time: 57473.2604618622 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 344 100 0.0515 0.0023 0.00556 0.0429 0.0572 0.0724 0.089 0.000181 0.000222 344 118 0.0624 0.00252 0.012 0.0446 0.0599 0.113 0.131 0.000283 0.000327 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 344 100 0.0686 0.00177 0.0332 0.0374 0.0502 0.216 0.217 0.00054 0.000543 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 344 57640.711 0.005 0.00235 0.0323 0.0793 0.0433 0.0579 0.172 0.215 0.000431 0.000537 ! Validation 344 57640.711 0.005 0.00251 0.044 0.0941 0.0438 0.0598 0.229 0.25 0.000571 0.000626 Wall time: 57640.71107348427 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 345 100 0.0574 0.00233 0.0108 0.0429 0.0576 0.105 0.124 0.000263 0.00031 345 118 0.0467 0.00213 0.00419 0.0409 0.0551 0.0611 0.0772 0.000153 0.000193 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 345 100 0.0333 0.00164 0.000579 0.0357 0.0483 0.0268 0.0287 6.69e-05 7.18e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 345 57808.062 0.005 0.00217 0.0145 0.058 0.0415 0.0556 0.116 0.144 0.000291 0.00036 ! Validation 345 57808.062 0.005 0.00238 0.00829 0.056 0.0425 0.0583 0.0848 0.109 0.000212 0.000272 Wall time: 57808.062660656404 ! Best model 345 0.056 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 346 100 0.0448 0.00206 0.00366 0.0405 0.0541 0.0604 0.0722 0.000151 0.000181 346 118 0.0572 0.0023 0.0111 0.043 0.0573 0.0948 0.126 0.000237 0.000315 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 346 100 0.0353 0.00158 0.00381 0.0351 0.0474 0.071 0.0736 0.000178 0.000184 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 346 57975.364 0.005 0.00231 0.0231 0.0694 0.0428 0.0574 0.135 0.182 0.000338 0.000454 ! Validation 346 57975.364 0.005 0.00234 0.0116 0.0585 0.0421 0.0578 0.104 0.129 0.000261 0.000322 Wall time: 57975.36411345098 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 347 100 0.051 0.00224 0.00618 0.0426 0.0565 0.0734 0.0938 0.000183 0.000235 347 118 0.0489 0.00214 0.00607 0.0415 0.0552 0.0776 0.093 0.000194 0.000233 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 347 100 0.0362 0.00175 0.00113 0.0368 0.05 0.0366 0.0401 9.15e-05 0.0001 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 347 58142.625 0.005 0.00222 0.0196 0.0641 0.042 0.0563 0.136 0.168 0.00034 0.000419 ! Validation 347 58142.625 0.005 0.00242 0.00916 0.0575 0.043 0.0587 0.0885 0.114 0.000221 0.000286 Wall time: 58142.6255869302 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 348 100 0.0761 0.00228 0.0306 0.0425 0.0569 0.185 0.209 0.000463 0.000522 348 118 0.0837 0.0022 0.0398 0.0424 0.0559 0.223 0.238 0.000556 0.000595 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 348 100 0.079 0.00175 0.044 0.0373 0.0499 0.249 0.25 0.000623 0.000626 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 348 58309.888 0.005 0.00221 0.0254 0.0695 0.0419 0.0561 0.155 0.19 0.000388 0.000475 ! 
Validation 348 58309.888 0.005 0.00248 0.0492 0.0988 0.0437 0.0594 0.242 0.265 0.000605 0.000662 Wall time: 58309.88902788516 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 349 100 0.0505 0.00214 0.00767 0.041 0.0552 0.0864 0.105 0.000216 0.000261 349 118 0.0674 0.00238 0.0198 0.0433 0.0582 0.149 0.168 0.000374 0.00042 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 349 100 0.0903 0.00165 0.0572 0.0355 0.0485 0.285 0.286 0.000713 0.000714 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 349 58477.134 0.005 0.00217 0.0174 0.0608 0.0416 0.0556 0.124 0.157 0.000311 0.000393 ! Validation 349 58477.134 0.005 0.00239 0.0684 0.116 0.0425 0.0583 0.292 0.312 0.00073 0.00078 Wall time: 58477.1341531612 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 350 100 0.0605 0.00239 0.0127 0.0436 0.0584 0.106 0.134 0.000265 0.000336 350 118 0.11 0.0022 0.0657 0.0423 0.056 0.283 0.306 0.000708 0.000765 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 350 100 0.202 0.00171 0.168 0.0368 0.0494 0.488 0.489 0.00122 0.00122 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 350 58645.021 0.005 0.00217 0.0209 0.0643 0.0415 0.0556 0.135 0.171 0.000338 0.000428 ! Validation 350 58645.021 0.005 0.00246 0.186 0.236 0.0436 0.0593 0.501 0.515 0.00125 0.00129 Wall time: 58645.02194620529 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 351 100 0.0814 0.00215 0.0383 0.0413 0.0554 0.22 0.234 0.00055 0.000584 351 118 0.0432 0.00173 0.0085 0.0375 0.0497 0.101 0.11 0.000251 0.000275 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 351 100 0.0337 0.00166 0.000517 0.0358 0.0486 0.0202 0.0271 5.06e-05 6.79e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 351 58812.296 0.005 0.00236 0.023 0.0703 0.0432 0.0581 0.138 0.182 0.000345 0.000454 ! Validation 351 58812.296 0.005 0.00234 0.0146 0.0614 0.0422 0.0578 0.106 0.144 0.000266 0.000361 Wall time: 58812.29697465338 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 352 100 0.0566 0.00216 0.0133 0.0416 0.0555 0.116 0.137 0.000289 0.000344 352 118 0.0552 0.0023 0.00912 0.0421 0.0573 0.0963 0.114 0.000241 0.000285 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 352 100 0.0677 0.0018 0.0317 0.0372 0.0506 0.21 0.213 0.000525 0.000531 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 352 58979.571 0.005 0.00215 0.0211 0.064 0.0413 0.0553 0.145 0.174 0.000362 0.000434 ! Validation 352 58979.571 0.005 0.00246 0.0358 0.0851 0.0435 0.0592 0.199 0.226 0.000496 0.000565 Wall time: 58979.57189744804 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 353 100 0.069 0.00211 0.0267 0.0407 0.0549 0.181 0.195 0.000452 0.000488 353 118 0.0452 0.00203 0.0046 0.0405 0.0538 0.0653 0.081 0.000163 0.000202 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 353 100 0.0328 0.00158 0.00126 0.0349 0.0474 0.0379 0.0423 9.46e-05 0.000106 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 353 59146.834 0.005 0.00216 0.0178 0.061 0.0414 0.0555 0.126 0.159 0.000315 0.000399 ! 
Validation 353 59146.834 0.005 0.00228 0.00858 0.0542 0.0416 0.057 0.0857 0.111 0.000214 0.000276 Wall time: 59146.83410271024 ! Best model 353 0.054 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 354 100 0.0599 0.00224 0.015 0.0424 0.0565 0.126 0.146 0.000316 0.000366 354 118 0.0591 0.00265 0.00622 0.046 0.0614 0.0808 0.0941 0.000202 0.000235 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 354 100 0.0386 0.00162 0.00617 0.0358 0.0481 0.0911 0.0938 0.000228 0.000234 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 354 59314.481 0.005 0.00213 0.017 0.0596 0.041 0.055 0.119 0.156 0.000297 0.00039 ! Validation 354 59314.481 0.005 0.00237 0.0135 0.061 0.0427 0.0582 0.113 0.138 0.000282 0.000346 Wall time: 59314.481210277416 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 355 100 0.0831 0.00209 0.0414 0.0409 0.0545 0.222 0.243 0.000554 0.000607 355 118 0.0596 0.00192 0.0212 0.0393 0.0523 0.162 0.174 0.000405 0.000434 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 355 100 0.0378 0.00168 0.00416 0.036 0.049 0.0745 0.077 0.000186 0.000192 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 355 59481.555 0.005 0.00217 0.0242 0.0676 0.0416 0.0556 0.147 0.186 0.000367 0.000464 ! Validation 355 59481.555 0.005 0.00233 0.0145 0.061 0.0422 0.0576 0.116 0.144 0.000289 0.000359 Wall time: 59481.55552888429 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 356 100 0.0454 0.00204 0.00459 0.0401 0.0539 0.0616 0.0808 0.000154 0.000202 356 118 0.0456 0.00182 0.0093 0.0383 0.0509 0.102 0.115 0.000256 0.000288 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 356 100 0.0466 0.00164 0.0137 0.0361 0.0484 0.139 0.14 0.000346 0.00035 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 356 59648.630 0.005 0.00255 0.0271 0.0781 0.0447 0.0603 0.146 0.197 0.000364 0.000493 ! Validation 356 59648.630 0.005 0.00238 0.0182 0.0657 0.0428 0.0582 0.138 0.161 0.000345 0.000403 Wall time: 59648.63005307317 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 357 100 0.0476 0.00206 0.00647 0.0405 0.0542 0.0758 0.096 0.00019 0.00024 357 118 0.0473 0.00187 0.00979 0.0385 0.0517 0.1 0.118 0.00025 0.000295 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 357 100 0.0401 0.00154 0.00923 0.0346 0.0469 0.113 0.115 0.000284 0.000287 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 357 59815.698 0.005 0.00207 0.0128 0.0542 0.0405 0.0543 0.11 0.135 0.000276 0.000338 ! Validation 357 59815.698 0.005 0.00223 0.0156 0.0603 0.0412 0.0564 0.126 0.149 0.000316 0.000373 Wall time: 59815.698825838044 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 358 100 0.067 0.00224 0.0221 0.0421 0.0565 0.164 0.177 0.000411 0.000444 358 118 0.0669 0.00211 0.0248 0.0411 0.0548 0.155 0.188 0.000387 0.00047 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 358 100 0.0343 0.00167 0.00101 0.0364 0.0487 0.0335 0.038 8.37e-05 9.5e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 358 59982.769 0.005 0.00214 0.0246 0.0674 0.0413 0.0552 0.152 0.187 0.000381 0.000468 ! 
Validation 358 59982.769 0.005 0.00244 0.0157 0.0646 0.0434 0.059 0.115 0.15 0.000289 0.000374 Wall time: 59982.769377996214 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 359 100 0.0502 0.00211 0.00804 0.0408 0.0548 0.0854 0.107 0.000213 0.000268 359 118 0.0638 0.00243 0.0151 0.0437 0.0589 0.135 0.147 0.000338 0.000367 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 359 100 0.0336 0.00157 0.00218 0.0348 0.0473 0.0527 0.0557 0.000132 0.000139 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 359 60149.931 0.005 0.00216 0.0222 0.0654 0.0414 0.0554 0.138 0.178 0.000345 0.000445 ! Validation 359 60149.931 0.005 0.0023 0.0115 0.0576 0.042 0.0573 0.101 0.128 0.000252 0.00032 Wall time: 60149.93165967008 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 360 100 0.0906 0.002 0.0506 0.0398 0.0534 0.257 0.269 0.000643 0.000672 360 118 0.112 0.00235 0.0645 0.0437 0.0579 0.289 0.303 0.000723 0.000758 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 360 100 0.0895 0.00164 0.0567 0.0358 0.0483 0.283 0.284 0.000708 0.000711 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 360 60317.026 0.005 0.00211 0.0229 0.0651 0.041 0.0548 0.142 0.179 0.000354 0.000448 ! Validation 360 60317.026 0.005 0.00236 0.0575 0.105 0.0425 0.058 0.264 0.286 0.000661 0.000716 Wall time: 60317.02606905624 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 361 100 0.0709 0.00197 0.0316 0.0394 0.0529 0.199 0.212 0.000498 0.000531 361 118 0.0744 0.0021 0.0324 0.0409 0.0547 0.179 0.215 0.000447 0.000537 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 361 100 0.0571 0.00155 0.0261 0.0349 0.0469 0.192 0.193 0.000479 0.000482 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 361 60485.056 0.005 0.00206 0.016 0.0572 0.0405 0.0542 0.12 0.15 0.0003 0.000376 ! Validation 361 60485.056 0.005 0.00224 0.0343 0.079 0.0413 0.0564 0.194 0.221 0.000484 0.000553 Wall time: 60485.05681521911 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 362 100 0.0497 0.00217 0.00634 0.0415 0.0556 0.0747 0.095 0.000187 0.000238 362 118 0.0613 0.00172 0.0269 0.0375 0.0495 0.176 0.196 0.000441 0.000489 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 362 100 0.0422 0.00171 0.00804 0.0364 0.0493 0.103 0.107 0.000258 0.000267 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 362 60652.172 0.005 0.00212 0.0182 0.0606 0.0411 0.055 0.128 0.161 0.000319 0.000402 ! Validation 362 60652.172 0.005 0.00237 0.0158 0.0633 0.0428 0.0581 0.127 0.15 0.000317 0.000375 Wall time: 60652.17219665134 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 363 100 0.0471 0.00206 0.00589 0.0403 0.0542 0.074 0.0916 0.000185 0.000229 363 118 0.0399 0.00182 0.00356 0.038 0.0509 0.0612 0.0712 0.000153 0.000178 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 363 100 0.0453 0.0015 0.0153 0.0341 0.0462 0.147 0.148 0.000367 0.000369 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 363 60819.309 0.005 0.00229 0.0255 0.0713 0.0426 0.0572 0.14 0.191 0.00035 0.000478 ! 
Validation 363 60819.309 0.005 0.00219 0.0214 0.0653 0.0408 0.0559 0.15 0.175 0.000376 0.000436 Wall time: 60819.30958537711 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 364 100 0.0649 0.00206 0.0236 0.0405 0.0542 0.158 0.183 0.000395 0.000459 364 118 0.0754 0.00226 0.0302 0.0418 0.0568 0.179 0.207 0.000447 0.000518 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 364 100 0.0764 0.00169 0.0427 0.0365 0.049 0.245 0.247 0.000612 0.000616 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 364 60986.537 0.005 0.00204 0.017 0.0578 0.0402 0.0539 0.128 0.155 0.000319 0.000388 ! Validation 364 60986.537 0.005 0.00241 0.0559 0.104 0.0429 0.0586 0.263 0.282 0.000658 0.000706 Wall time: 60986.537861155346 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 365 100 0.0525 0.00203 0.0119 0.0402 0.0538 0.107 0.13 0.000269 0.000326 365 118 0.0626 0.00203 0.0221 0.0399 0.0538 0.165 0.177 0.000412 0.000443 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 365 100 0.113 0.00167 0.0799 0.0366 0.0487 0.337 0.337 0.000842 0.000844 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 365 61153.687 0.005 0.00211 0.0248 0.0669 0.041 0.0548 0.151 0.188 0.000378 0.00047 ! Validation 365 61153.687 0.005 0.00237 0.0791 0.127 0.0428 0.0582 0.32 0.336 0.0008 0.000839 Wall time: 61153.6875461 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 366 100 0.0449 0.002 0.00488 0.0396 0.0534 0.0665 0.0834 0.000166 0.000208 366 118 0.0425 0.00196 0.00329 0.0394 0.0528 0.0581 0.0685 0.000145 0.000171 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 366 100 0.0422 0.00146 0.0129 0.0336 0.0457 0.134 0.136 0.000336 0.000339 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 366 61320.836 0.005 0.00204 0.0148 0.0556 0.0403 0.0539 0.109 0.146 0.000272 0.000364 ! Validation 366 61320.836 0.005 0.00214 0.019 0.0619 0.0402 0.0552 0.133 0.165 0.000332 0.000412 Wall time: 61320.83603031328 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 367 100 0.079 0.0019 0.0409 0.0392 0.0521 0.229 0.241 0.000572 0.000603 367 118 0.0516 0.00221 0.00735 0.0409 0.0561 0.0852 0.102 0.000213 0.000256 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 367 100 0.0321 0.00157 0.000718 0.0353 0.0473 0.0235 0.032 5.87e-05 7.99e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 367 61488.158 0.005 0.00207 0.0229 0.0643 0.0406 0.0543 0.152 0.181 0.000379 0.000452 ! Validation 367 61488.158 0.005 0.00223 0.00806 0.0526 0.0413 0.0563 0.082 0.107 0.000205 0.000268 Wall time: 61488.1584149641 ! Best model 367 0.053 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 368 100 0.0629 0.00201 0.0226 0.0401 0.0535 0.159 0.18 0.000398 0.000449 368 118 0.0742 0.00191 0.036 0.0391 0.0522 0.215 0.226 0.000537 0.000566 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 368 100 0.0719 0.00174 0.0371 0.0371 0.0497 0.229 0.23 0.000573 0.000575 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 368 61655.455 0.005 0.00204 0.0182 0.059 0.0402 0.0539 0.129 0.161 0.000322 0.000402 ! 
Validation 368 61655.455 0.005 0.00239 0.0665 0.114 0.0431 0.0583 0.259 0.308 0.000647 0.00077 Wall time: 61655.45523726707 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 369 100 0.0415 0.00186 0.00432 0.0385 0.0514 0.0631 0.0784 0.000158 0.000196 369 118 0.121 0.0018 0.0848 0.0381 0.0506 0.341 0.348 0.000852 0.000869 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 369 100 0.0594 0.00158 0.0278 0.0357 0.0474 0.198 0.199 0.000494 0.000497 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 369 61822.819 0.005 0.00213 0.0247 0.0674 0.0412 0.0552 0.147 0.186 0.000367 0.000465 ! Validation 369 61822.819 0.005 0.0023 0.0592 0.105 0.0422 0.0572 0.244 0.29 0.000611 0.000726 Wall time: 61822.81970582204 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 370 100 0.0466 0.00204 0.00575 0.0405 0.054 0.0709 0.0905 0.000177 0.000226 370 118 0.0491 0.00201 0.00885 0.0392 0.0536 0.102 0.112 0.000254 0.000281 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 370 100 0.0328 0.00157 0.00147 0.0347 0.0473 0.0387 0.0457 9.68e-05 0.000114 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 370 61990.108 0.005 0.00206 0.0208 0.0621 0.0406 0.0542 0.137 0.173 0.000341 0.000431 ! Validation 370 61990.108 0.005 0.00222 0.00998 0.0543 0.0411 0.0562 0.0951 0.119 0.000238 0.000298 Wall time: 61990.1081391722 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 371 100 0.0553 0.00213 0.0128 0.0411 0.0551 0.118 0.135 0.000294 0.000337 371 118 0.0443 0.00179 0.00841 0.0375 0.0506 0.082 0.109 0.000205 0.000274 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 371 100 0.0347 0.00156 0.00357 0.0346 0.0471 0.0698 0.0713 0.000174 0.000178 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 371 62157.397 0.005 0.00195 0.0109 0.0498 0.0393 0.0527 0.0984 0.125 0.000246 0.000312 ! Validation 371 62157.397 0.005 0.00218 0.0102 0.0537 0.0408 0.0557 0.0937 0.12 0.000234 0.000301 Wall time: 62157.39716729941 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 372 100 0.109 0.00208 0.0674 0.0409 0.0545 0.299 0.31 0.000747 0.000774 372 118 0.141 0.00197 0.102 0.0398 0.053 0.374 0.381 0.000935 0.000951 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 372 100 0.0847 0.00201 0.0445 0.04 0.0535 0.251 0.252 0.000627 0.000629 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 372 62325.947 0.005 0.002 0.0243 0.0642 0.0398 0.0533 0.136 0.184 0.00034 0.00046 ! Validation 372 62325.947 0.005 0.00261 0.0671 0.119 0.0453 0.061 0.267 0.309 0.000667 0.000773 Wall time: 62325.94768316718 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 373 100 0.451 0.0187 0.0766 0.123 0.163 0.253 0.33 0.000632 0.000826 373 118 0.429 0.0124 0.181 0.101 0.133 0.477 0.508 0.00119 0.00127 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 373 100 0.444 0.0111 0.223 0.095 0.126 0.559 0.564 0.0014 0.00141 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 373 62494.411 0.005 0.0971 1.73 3.67 0.228 0.373 0.939 1.58 0.00235 0.00394 ! 
Validation 373 62494.411 0.005 0.0133 0.136 0.402 0.104 0.138 0.369 0.441 0.000922 0.0011 Wall time: 62494.411849517375 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 374 100 0.146 0.00581 0.0296 0.0697 0.0909 0.16 0.205 0.000399 0.000513 374 118 0.148 0.00582 0.0318 0.0692 0.0911 0.188 0.213 0.00047 0.000532 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 374 100 0.113 0.00475 0.0176 0.0622 0.0823 0.144 0.159 0.00036 0.000396 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 374 62661.548 0.005 0.00812 0.0557 0.218 0.0811 0.108 0.223 0.282 0.000556 0.000705 ! Validation 374 62661.548 0.005 0.00586 0.0305 0.148 0.0692 0.0914 0.169 0.208 0.000423 0.000521 Wall time: 62661.548694444355 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 375 100 0.0961 0.00414 0.0133 0.0589 0.0768 0.106 0.138 0.000266 0.000344 375 118 0.125 0.00418 0.0418 0.0591 0.0771 0.21 0.244 0.000524 0.00061 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 375 100 0.113 0.00334 0.0458 0.0522 0.0689 0.249 0.256 0.000623 0.000639 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 375 62828.678 0.005 0.00472 0.0323 0.127 0.0627 0.082 0.171 0.214 0.000427 0.000535 ! Validation 375 62828.678 0.005 0.00429 0.0476 0.133 0.059 0.0782 0.224 0.26 0.00056 0.000651 Wall time: 62828.67809789302 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 376 100 0.144 0.00347 0.0741 0.0534 0.0703 0.299 0.325 0.000747 0.000812 376 118 0.0786 0.00334 0.0118 0.0524 0.069 0.104 0.13 0.000261 0.000324 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 376 100 0.0714 0.00262 0.0189 0.0461 0.0611 0.159 0.164 0.000397 0.00041 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 376 62995.798 0.005 0.00365 0.0252 0.0982 0.0549 0.0721 0.152 0.19 0.000379 0.000475 ! Validation 376 62995.798 0.005 0.00356 0.0243 0.0956 0.0533 0.0712 0.155 0.186 0.000387 0.000465 Wall time: 62995.798377080355 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 377 100 0.0746 0.00305 0.0136 0.0498 0.0659 0.122 0.139 0.000304 0.000348 377 118 0.073 0.0032 0.00906 0.0513 0.0675 0.1 0.114 0.000251 0.000284 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 377 100 0.0497 0.00229 0.00399 0.0428 0.0571 0.0682 0.0754 0.000171 0.000188 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 377 63162.903 0.005 0.00311 0.02 0.0821 0.0504 0.0665 0.137 0.169 0.000343 0.000423 ! Validation 377 63162.903 0.005 0.00316 0.0126 0.0757 0.0499 0.0671 0.107 0.134 0.000267 0.000335 Wall time: 63162.903345452156 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 378 100 0.0669 0.00268 0.0133 0.0464 0.0618 0.114 0.138 0.000285 0.000345 378 118 0.0713 0.00279 0.0154 0.0465 0.0631 0.122 0.148 0.000305 0.00037 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 378 100 0.0582 0.00207 0.0168 0.0405 0.0543 0.153 0.155 0.000382 0.000387 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 378 63330.067 0.005 0.0028 0.0151 0.0711 0.0476 0.0632 0.119 0.147 0.000296 0.000366 ! 
Validation 378 63330.067 0.005 0.00293 0.0276 0.0861 0.0477 0.0646 0.169 0.198 0.000422 0.000496 Wall time: 63330.06749619637 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 379 100 0.0734 0.00272 0.0191 0.0466 0.0622 0.138 0.165 0.000344 0.000412 379 118 0.0791 0.0029 0.021 0.0476 0.0643 0.153 0.173 0.000383 0.000433 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 379 100 0.0416 0.00194 0.00271 0.0393 0.0526 0.0577 0.0621 0.000144 0.000155 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 379 63497.185 0.005 0.00261 0.0116 0.0638 0.0457 0.061 0.102 0.128 0.000256 0.000321 ! Validation 379 63497.185 0.005 0.00276 0.0105 0.0658 0.0463 0.0628 0.0966 0.122 0.000241 0.000306 Wall time: 63497.184973093215 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 380 100 0.0598 0.00251 0.00963 0.0448 0.0598 0.0959 0.117 0.00024 0.000293 380 118 0.0478 0.00216 0.00449 0.0414 0.0555 0.0653 0.08 0.000163 0.0002 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 380 100 0.0373 0.00184 0.000515 0.0382 0.0512 0.0219 0.0271 5.48e-05 6.77e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 380 63664.282 0.005 0.00248 0.018 0.0677 0.0446 0.0595 0.128 0.161 0.000319 0.000402 ! Validation 380 63664.282 0.005 0.00266 0.00884 0.062 0.0452 0.0615 0.0873 0.112 0.000218 0.000281 Wall time: 63664.28202277422 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 381 100 0.0573 0.00249 0.00749 0.0444 0.0596 0.0823 0.103 0.000206 0.000258 381 118 0.0583 0.00231 0.012 0.0431 0.0574 0.114 0.131 0.000286 0.000327 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 381 100 0.0359 0.00177 0.00052 0.0374 0.0502 0.0229 0.0272 5.72e-05 6.8e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 381 63831.399 0.005 0.00238 0.0105 0.0582 0.0436 0.0583 0.0979 0.122 0.000245 0.000306 ! Validation 381 63831.399 0.005 0.00257 0.00874 0.0602 0.0444 0.0605 0.0863 0.112 0.000216 0.000279 Wall time: 63831.3992936532 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 382 100 0.0516 0.00232 0.00529 0.0427 0.0574 0.0692 0.0868 0.000173 0.000217 382 118 0.0482 0.00217 0.00479 0.0416 0.0556 0.0615 0.0826 0.000154 0.000207 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 382 100 0.0368 0.00173 0.00208 0.037 0.0497 0.0519 0.0544 0.00013 0.000136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 382 63998.525 0.005 0.00231 0.0132 0.0594 0.0429 0.0574 0.111 0.137 0.000277 0.000343 ! Validation 382 63998.525 0.005 0.00251 0.0105 0.0607 0.0438 0.0598 0.0943 0.122 0.000236 0.000306 Wall time: 63998.52542071603 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 383 100 0.0572 0.00227 0.0117 0.0424 0.0569 0.109 0.129 0.000272 0.000323 383 118 0.0536 0.00226 0.00846 0.0427 0.0567 0.0877 0.11 0.000219 0.000274 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 383 100 0.0365 0.00168 0.00282 0.0365 0.049 0.062 0.0634 0.000155 0.000159 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 383 64165.704 0.005 0.00226 0.0143 0.0594 0.0424 0.0567 0.115 0.143 0.000287 0.000357 ! 
Validation 383 64165.704 0.005 0.00245 0.0123 0.0614 0.0433 0.0591 0.107 0.133 0.000267 0.000331 Wall time: 64165.70474754041 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 384 100 0.0565 0.00228 0.011 0.0423 0.057 0.101 0.125 0.000253 0.000313 384 118 0.0522 0.00239 0.0044 0.0435 0.0584 0.0627 0.0792 0.000157 0.000198 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 384 100 0.0336 0.00165 0.000688 0.0363 0.0484 0.0294 0.0313 7.36e-05 7.82e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 384 64332.807 0.005 0.00221 0.0145 0.0587 0.0419 0.0561 0.118 0.144 0.000295 0.00036 ! Validation 384 64332.807 0.005 0.00242 0.00885 0.0572 0.043 0.0587 0.0876 0.112 0.000219 0.000281 Wall time: 64332.80722406227 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 385 100 0.116 0.002 0.0763 0.0401 0.0534 0.322 0.33 0.000806 0.000824 385 118 0.0552 0.00223 0.0106 0.0418 0.0563 0.107 0.123 0.000268 0.000307 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 385 100 0.0347 0.00166 0.00144 0.0363 0.0486 0.0443 0.0453 0.000111 0.000113 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 385 64499.907 0.005 0.00217 0.0162 0.0595 0.0415 0.0556 0.119 0.152 0.000297 0.00038 ! Validation 385 64499.907 0.005 0.00241 0.0104 0.0586 0.0429 0.0586 0.0957 0.122 0.000239 0.000305 Wall time: 64499.90764717199 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 386 100 0.048 0.00214 0.00508 0.0412 0.0553 0.0683 0.085 0.000171 0.000213 386 118 0.0504 0.00211 0.00808 0.0404 0.0549 0.0785 0.107 0.000196 0.000268 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 386 100 0.0325 0.0016 0.000548 0.0355 0.0477 0.0229 0.0279 5.73e-05 6.99e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 386 64667.002 0.005 0.00214 0.0137 0.0564 0.0412 0.0552 0.113 0.14 0.000283 0.00035 ! Validation 386 64667.002 0.005 0.00234 0.00768 0.0545 0.0422 0.0577 0.0808 0.105 0.000202 0.000262 Wall time: 64667.00235976931 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 387 100 0.0601 0.00218 0.0165 0.0415 0.0557 0.132 0.153 0.000331 0.000384 387 118 0.0511 0.00233 0.00447 0.043 0.0577 0.0685 0.0798 0.000171 0.0002 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 387 100 0.041 0.0016 0.00913 0.0356 0.0477 0.113 0.114 0.000283 0.000285 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 387 64834.053 0.005 0.00212 0.0219 0.0642 0.041 0.0549 0.141 0.177 0.000352 0.000443 ! Validation 387 64834.053 0.005 0.00234 0.0185 0.0653 0.0423 0.0578 0.136 0.162 0.000341 0.000405 Wall time: 64834.0534157902 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 388 100 0.0515 0.00208 0.00992 0.0401 0.0544 0.104 0.119 0.000259 0.000297 388 118 0.0768 0.00185 0.0399 0.0383 0.0513 0.222 0.238 0.000555 0.000596 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 388 100 0.0556 0.0016 0.0237 0.0354 0.0477 0.183 0.184 0.000458 0.000459 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 388 65001.202 0.005 0.00208 0.0129 0.0545 0.0406 0.0545 0.107 0.134 0.000267 0.000336 ! 
Validation 388 65001.202 0.005 0.00231 0.0257 0.0719 0.0419 0.0573 0.169 0.191 0.000422 0.000479 Wall time: 65001.20240403339 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 389 100 0.0467 0.00211 0.00451 0.0408 0.0548 0.0656 0.0801 0.000164 0.0002 389 118 0.0477 0.0019 0.00961 0.0391 0.0521 0.0999 0.117 0.00025 0.000292 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 389 100 0.0362 0.00159 0.00441 0.0356 0.0476 0.0783 0.0792 0.000196 0.000198 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 389 65168.268 0.005 0.00206 0.0162 0.0575 0.0404 0.0542 0.124 0.152 0.000311 0.000381 ! Validation 389 65168.268 0.005 0.00229 0.0113 0.0572 0.0418 0.0572 0.101 0.127 0.000252 0.000317 Wall time: 65168.268060688395 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 390 100 0.0974 0.00202 0.057 0.04 0.0537 0.27 0.285 0.000674 0.000712 390 118 0.0589 0.00235 0.0119 0.043 0.0579 0.108 0.13 0.000271 0.000325 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 390 100 0.0557 0.00151 0.0254 0.0347 0.0465 0.19 0.19 0.000475 0.000476 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 390 65335.327 0.005 0.00204 0.0162 0.0571 0.0402 0.0539 0.125 0.152 0.000312 0.00038 ! Validation 390 65335.327 0.005 0.00226 0.0377 0.0828 0.0414 0.0567 0.21 0.232 0.000525 0.000579 Wall time: 65335.32774687419 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 391 100 0.0452 0.00207 0.00376 0.0405 0.0543 0.0578 0.0732 0.000144 0.000183 391 118 0.043 0.00201 0.00276 0.0399 0.0535 0.0523 0.0627 0.000131 0.000157 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 391 100 0.0367 0.00151 0.00649 0.0346 0.0464 0.0955 0.0961 0.000239 0.00024 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 391 65502.381 0.005 0.00202 0.0136 0.054 0.04 0.0536 0.113 0.14 0.000281 0.000349 ! Validation 391 65502.381 0.005 0.00222 0.0141 0.0586 0.0411 0.0563 0.111 0.142 0.000277 0.000354 Wall time: 65502.381958957296 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 392 100 0.091 0.00204 0.0502 0.04 0.0539 0.258 0.267 0.000645 0.000669 392 118 0.0533 0.00191 0.0151 0.0392 0.0522 0.132 0.147 0.000331 0.000367 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 392 100 0.0334 0.00156 0.00216 0.035 0.0472 0.0523 0.0554 0.000131 0.000139 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 392 65669.529 0.005 0.002 0.0147 0.0546 0.0397 0.0533 0.115 0.144 0.000288 0.000361 ! Validation 392 65669.529 0.005 0.00225 0.0127 0.0577 0.0413 0.0566 0.108 0.135 0.00027 0.000337 Wall time: 65669.52993379533 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 393 100 0.046 0.00204 0.00525 0.0396 0.0539 0.0703 0.0865 0.000176 0.000216 393 118 0.0457 0.00188 0.008 0.0387 0.0518 0.087 0.107 0.000217 0.000267 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 393 100 0.0305 0.00151 0.000383 0.0344 0.0464 0.0199 0.0234 4.99e-05 5.84e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 393 65836.626 0.005 0.00198 0.0116 0.0512 0.0396 0.0532 0.104 0.129 0.00026 0.000322 ! 
Validation 393 65836.626 0.005 0.0022 0.00765 0.0516 0.0408 0.0559 0.0805 0.104 0.000201 0.000261 Wall time: 65836.62640166515 ! Best model 393 0.052 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 394 100 0.0439 0.00193 0.00542 0.0389 0.0524 0.0699 0.0879 0.000175 0.00022 394 118 0.0521 0.00216 0.00895 0.0414 0.0554 0.0963 0.113 0.000241 0.000282 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 394 100 0.043 0.00148 0.0134 0.0342 0.0459 0.137 0.138 0.000342 0.000345 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 394 66003.731 0.005 0.00196 0.0139 0.0531 0.0394 0.0529 0.114 0.141 0.000285 0.000352 ! Validation 394 66003.731 0.005 0.00217 0.0261 0.0695 0.0406 0.0556 0.17 0.193 0.000425 0.000482 Wall time: 66003.73177754506 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 395 100 0.0465 0.00194 0.00774 0.0391 0.0526 0.0887 0.105 0.000222 0.000262 395 118 0.0484 0.00197 0.009 0.0398 0.053 0.0954 0.113 0.000238 0.000283 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 395 100 0.0311 0.0015 0.00112 0.0343 0.0462 0.0373 0.0399 9.32e-05 9.98e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 395 66170.816 0.005 0.00198 0.0244 0.0641 0.0396 0.0531 0.145 0.187 0.000364 0.000468 ! Validation 395 66170.816 0.005 0.00219 0.00821 0.0521 0.0408 0.0559 0.0845 0.108 0.000211 0.00027 Wall time: 66170.81677797018 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 396 100 0.0647 0.00193 0.0261 0.0388 0.0525 0.181 0.193 0.000451 0.000482 396 118 0.0722 0.00183 0.0356 0.0381 0.051 0.214 0.225 0.000535 0.000563 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 396 100 0.0388 0.00147 0.00949 0.034 0.0457 0.116 0.116 0.000289 0.000291 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 396 66337.907 0.005 0.00194 0.0101 0.0489 0.0392 0.0526 0.0942 0.119 0.000235 0.000298 ! Validation 396 66337.907 0.005 0.00216 0.0158 0.059 0.0404 0.0555 0.128 0.15 0.00032 0.000375 Wall time: 66337.90741622122 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 397 100 0.0596 0.00189 0.0218 0.0389 0.0519 0.159 0.176 0.000396 0.00044 397 118 0.0438 0.00191 0.00563 0.039 0.0521 0.069 0.0895 0.000172 0.000224 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 397 100 0.0311 0.00147 0.00179 0.0342 0.0457 0.0482 0.0505 0.000121 0.000126 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 397 66505.082 0.005 0.00194 0.0197 0.0586 0.0392 0.0526 0.135 0.168 0.000339 0.00042 ! Validation 397 66505.082 0.005 0.00216 0.00886 0.0521 0.0405 0.0555 0.0878 0.112 0.000219 0.000281 Wall time: 66505.08240563422 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 398 100 0.0704 0.00175 0.0353 0.0374 0.05 0.213 0.224 0.000533 0.00056 398 118 0.0435 0.00195 0.00453 0.0387 0.0527 0.0587 0.0803 0.000147 0.000201 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 398 100 0.0341 0.00147 0.00474 0.0341 0.0457 0.0811 0.0822 0.000203 0.000206 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 398 66672.178 0.005 0.00193 0.0145 0.053 0.039 0.0524 0.118 0.144 0.000296 0.00036 ! 
Validation 398 66672.178 0.005 0.00216 0.00845 0.0517 0.0405 0.0555 0.0876 0.11 0.000219 0.000274 Wall time: 66672.1786948964 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 399 100 0.0434 0.00181 0.00731 0.0379 0.0507 0.0813 0.102 0.000203 0.000255 399 118 0.0604 0.00181 0.0242 0.0381 0.0508 0.181 0.186 0.000453 0.000464 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 399 100 0.0701 0.00149 0.0402 0.0345 0.0461 0.239 0.239 0.000597 0.000598 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 399 66839.266 0.005 0.00193 0.0207 0.0594 0.0391 0.0525 0.139 0.171 0.000346 0.000429 ! Validation 399 66839.266 0.005 0.00219 0.0417 0.0854 0.0408 0.0558 0.224 0.244 0.00056 0.000609 Wall time: 66839.26620955206 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 400 100 0.0419 0.0019 0.00399 0.0386 0.052 0.0572 0.0754 0.000143 0.000189 400 118 0.0663 0.00226 0.0211 0.042 0.0567 0.149 0.173 0.000371 0.000433 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 400 100 0.0422 0.00145 0.0131 0.0341 0.0455 0.136 0.137 0.00034 0.000342 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 400 67006.360 0.005 0.00191 0.0112 0.0495 0.0389 0.0522 0.102 0.126 0.000254 0.000315 ! Validation 400 67006.360 0.005 0.00213 0.026 0.0687 0.0402 0.0551 0.17 0.192 0.000425 0.000481 Wall time: 67006.36062486703 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 401 100 0.0512 0.00198 0.0117 0.0397 0.053 0.108 0.129 0.000269 0.000322 401 118 0.0627 0.0021 0.0208 0.0402 0.0547 0.16 0.172 0.000399 0.00043 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 401 100 0.0782 0.00143 0.0496 0.0335 0.0452 0.265 0.266 0.000664 0.000664 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 401 67173.455 0.005 0.00189 0.0122 0.05 0.0387 0.0519 0.105 0.131 0.000262 0.000328 ! Validation 401 67173.455 0.005 0.0021 0.054 0.096 0.0399 0.0547 0.259 0.277 0.000648 0.000693 Wall time: 67173.45520490035 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 402 100 0.0417 0.00195 0.00262 0.0391 0.0527 0.0473 0.0611 0.000118 0.000153 402 118 0.0584 0.00169 0.0246 0.037 0.0491 0.175 0.187 0.000439 0.000468 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 402 100 0.0436 0.00143 0.0151 0.0334 0.0451 0.146 0.146 0.000364 0.000366 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 402 67340.637 0.005 0.00189 0.0174 0.0553 0.0387 0.0519 0.125 0.157 0.000313 0.000393 ! Validation 402 67340.637 0.005 0.0021 0.0179 0.0599 0.0399 0.0547 0.134 0.16 0.000336 0.000399 Wall time: 67340.63700787816 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 403 100 0.0517 0.00191 0.0135 0.0387 0.0522 0.118 0.138 0.000295 0.000346 403 118 0.0391 0.00179 0.00321 0.0376 0.0506 0.0588 0.0677 0.000147 0.000169 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 403 100 0.0286 0.00142 0.000192 0.0335 0.045 0.0118 0.0166 2.95e-05 4.14e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 403 67507.725 0.005 0.00189 0.0175 0.0553 0.0387 0.0519 0.129 0.158 0.000322 0.000395 ! 
Validation 403 67507.725 0.005 0.00209 0.00723 0.049 0.0398 0.0546 0.079 0.101 0.000197 0.000254 Wall time: 67507.72568219714 ! Best model 403 0.049 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 404 100 0.061 0.00193 0.0224 0.039 0.0524 0.157 0.179 0.000392 0.000446 404 118 0.0435 0.00188 0.0059 0.0387 0.0518 0.0754 0.0917 0.000188 0.000229 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 404 100 0.0299 0.00141 0.00173 0.0333 0.0448 0.0469 0.0496 0.000117 0.000124 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 404 67674.835 0.005 0.0019 0.0236 0.0617 0.0388 0.0521 0.141 0.184 0.000352 0.00046 ! Validation 404 67674.835 0.005 0.0021 0.00739 0.0494 0.0399 0.0547 0.0788 0.103 0.000197 0.000256 Wall time: 67674.83577976702 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 405 100 0.0386 0.00175 0.00351 0.0374 0.05 0.0563 0.0707 0.000141 0.000177 405 118 0.0385 0.00176 0.00329 0.0376 0.0501 0.0566 0.0685 0.000142 0.000171 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 405 100 0.0547 0.00143 0.0262 0.0334 0.0451 0.193 0.193 0.000482 0.000483 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 405 67841.919 0.005 0.00188 0.0153 0.0529 0.0386 0.0518 0.124 0.148 0.000309 0.00037 ! Validation 405 67841.919 0.005 0.00209 0.0306 0.0724 0.0398 0.0545 0.186 0.209 0.000465 0.000522 Wall time: 67841.91909417138 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 406 100 0.0486 0.00166 0.0154 0.0365 0.0486 0.137 0.148 0.000343 0.000371 406 118 0.0457 0.00188 0.00803 0.0387 0.0518 0.098 0.107 0.000245 0.000267 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 406 100 0.032 0.00146 0.00281 0.0338 0.0456 0.0621 0.0632 0.000155 0.000158 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 406 68008.999 0.005 0.00186 0.0128 0.05 0.0383 0.0514 0.109 0.135 0.000273 0.000339 ! Validation 406 68008.999 0.005 0.00213 0.01 0.0525 0.0401 0.055 0.0968 0.119 0.000242 0.000298 Wall time: 68008.99987710919 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 407 100 0.0705 0.00184 0.0336 0.0382 0.0513 0.207 0.219 0.000517 0.000547 407 118 0.0513 0.00233 0.0047 0.0425 0.0576 0.0713 0.0818 0.000178 0.000205 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 407 100 0.0302 0.00143 0.00158 0.0334 0.0451 0.0444 0.0474 0.000111 0.000118 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 407 68176.170 0.005 0.00188 0.0209 0.0586 0.0386 0.0517 0.142 0.173 0.000355 0.000433 ! Validation 407 68176.170 0.005 0.0021 0.0086 0.0507 0.0399 0.0547 0.0856 0.111 0.000214 0.000277 Wall time: 68176.17067886237 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 408 100 0.0433 0.00187 0.00589 0.0382 0.0516 0.0664 0.0916 0.000166 0.000229 408 118 0.0461 0.00197 0.00669 0.0385 0.053 0.0789 0.0976 0.000197 0.000244 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 408 100 0.0437 0.00142 0.0153 0.0335 0.045 0.147 0.147 0.000367 0.000369 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 408 68343.264 0.005 0.00185 0.0119 0.0489 0.0382 0.0513 0.106 0.13 0.000264 0.000326 ! 
Validation 408 68343.264 0.005 0.00206 0.022 0.0632 0.0396 0.0542 0.149 0.177 0.000373 0.000442 Wall time: 68343.26489450503 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 409 100 0.043 0.00184 0.0062 0.038 0.0512 0.0823 0.094 0.000206 0.000235 409 118 0.0384 0.00167 0.00492 0.0366 0.0488 0.0765 0.0838 0.000191 0.000209 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 409 100 0.0308 0.00142 0.00239 0.0335 0.045 0.0559 0.0583 0.00014 0.000146 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 409 68510.360 0.005 0.00185 0.0134 0.0505 0.0383 0.0514 0.109 0.139 0.000272 0.000347 ! Validation 409 68510.360 0.005 0.00207 0.0163 0.0577 0.0396 0.0543 0.126 0.153 0.000316 0.000381 Wall time: 68510.36005530227 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 410 100 0.0452 0.0018 0.00925 0.0378 0.0506 0.0971 0.115 0.000243 0.000287 410 118 0.0475 0.00179 0.0117 0.0377 0.0505 0.114 0.129 0.000285 0.000323 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 410 100 0.0284 0.0014 0.000342 0.0332 0.0447 0.0204 0.0221 5.1e-05 5.52e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 410 68677.447 0.005 0.00189 0.0196 0.0574 0.0387 0.0519 0.126 0.167 0.000315 0.000418 ! Validation 410 68677.447 0.005 0.00205 0.00779 0.0488 0.0395 0.0541 0.0827 0.105 0.000207 0.000263 Wall time: 68677.44754249137 ! Best model 410 0.049 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 411 100 0.0648 0.0018 0.0287 0.0378 0.0507 0.177 0.202 0.000444 0.000506 411 118 0.051 0.00194 0.0121 0.0388 0.0526 0.117 0.131 0.000293 0.000329 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 411 100 0.0614 0.0014 0.0333 0.0333 0.0447 0.216 0.218 0.000541 0.000544 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 411 68844.630 0.005 0.00186 0.0199 0.0572 0.0384 0.0515 0.135 0.168 0.000336 0.000421 ! Validation 411 68844.630 0.005 0.00208 0.0339 0.0755 0.0398 0.0544 0.198 0.22 0.000495 0.00055 Wall time: 68844.6300143674 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 412 100 0.0626 0.002 0.0225 0.0396 0.0534 0.168 0.179 0.000419 0.000448 412 118 0.0412 0.00184 0.00435 0.0384 0.0512 0.0578 0.0787 0.000144 0.000197 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 412 100 0.0289 0.0014 0.000832 0.0333 0.0447 0.0271 0.0344 6.79e-05 8.61e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 412 69011.710 0.005 0.00186 0.0139 0.0511 0.0384 0.0515 0.114 0.141 0.000286 0.000353 ! Validation 412 69011.710 0.005 0.00207 0.00731 0.0486 0.0396 0.0542 0.0788 0.102 0.000197 0.000255 Wall time: 69011.71014996013 ! Best model 412 0.049 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 413 100 0.0807 0.00174 0.0459 0.0373 0.0498 0.245 0.256 0.000611 0.000639 413 118 0.0657 0.0018 0.0296 0.0377 0.0507 0.177 0.205 0.000443 0.000514 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 413 100 0.0773 0.00167 0.0439 0.0367 0.0487 0.249 0.25 0.000622 0.000625 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 413 69178.808 0.005 0.00183 0.0171 0.0538 0.0381 0.0511 0.124 0.156 0.00031 0.000389 ! 
Validation 413 69178.808 0.005 0.00236 0.0603 0.107 0.0428 0.058 0.275 0.293 0.000688 0.000733 Wall time: 69178.80890340032 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 414 100 0.047 0.00175 0.012 0.0375 0.0499 0.116 0.131 0.00029 0.000327 414 118 0.0551 0.00157 0.0238 0.036 0.0472 0.168 0.184 0.000421 0.00046 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 414 100 0.0413 0.00138 0.0137 0.033 0.0443 0.139 0.14 0.000347 0.00035 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 414 69345.893 0.005 0.00187 0.0178 0.0553 0.0386 0.0517 0.129 0.159 0.000322 0.000398 ! Validation 414 69345.893 0.005 0.00202 0.0129 0.0533 0.0392 0.0536 0.113 0.136 0.000283 0.000339 Wall time: 69345.89319208404 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 415 100 0.0446 0.00192 0.00617 0.039 0.0523 0.0772 0.0937 0.000193 0.000234 415 118 0.0548 0.00212 0.0125 0.0405 0.0549 0.115 0.133 0.000287 0.000333 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 415 100 0.0556 0.00136 0.0284 0.0328 0.044 0.2 0.201 0.0005 0.000503 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 415 69512.989 0.005 0.0018 0.0121 0.0481 0.0377 0.0506 0.105 0.131 0.000262 0.000329 ! Validation 415 69512.989 0.005 0.002 0.0391 0.0791 0.039 0.0534 0.208 0.236 0.00052 0.00059 Wall time: 69512.98951757932 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 416 100 0.0393 0.00176 0.00408 0.0374 0.0501 0.0624 0.0763 0.000156 0.000191 416 118 0.0601 0.00196 0.0208 0.0394 0.0529 0.156 0.172 0.000391 0.000431 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 416 100 0.0365 0.00139 0.00877 0.0333 0.0445 0.111 0.112 0.000276 0.000279 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 416 69680.172 0.005 0.0018 0.0121 0.048 0.0377 0.0506 0.104 0.131 0.00026 0.000327 ! Validation 416 69680.172 0.005 0.00207 0.0181 0.0595 0.0398 0.0543 0.14 0.161 0.00035 0.000402 Wall time: 69680.17257623328 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 417 100 0.0718 0.00194 0.0329 0.0395 0.0526 0.201 0.217 0.000503 0.000541 417 118 0.0494 0.00193 0.0108 0.0393 0.0524 0.11 0.124 0.000275 0.000311 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 417 100 0.0344 0.0016 0.00245 0.0351 0.0477 0.0543 0.0591 0.000136 0.000148 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 417 69847.255 0.005 0.00183 0.0218 0.0585 0.0382 0.0511 0.131 0.176 0.000328 0.000441 ! Validation 417 69847.255 0.005 0.00221 0.0107 0.0549 0.0412 0.0561 0.0979 0.123 0.000245 0.000308 Wall time: 69847.2556810393 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 418 100 0.0371 0.00164 0.00434 0.0364 0.0483 0.0639 0.0786 0.00016 0.000197 418 118 0.0412 0.00172 0.00678 0.0372 0.0495 0.0812 0.0982 0.000203 0.000246 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 418 100 0.031 0.0014 0.00304 0.0328 0.0446 0.0635 0.0658 0.000159 0.000164 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 418 70014.350 0.005 0.00183 0.0107 0.0473 0.0381 0.0511 0.0983 0.124 0.000246 0.000309 ! 
Validation 418 70014.350 0.005 0.00202 0.0117 0.0521 0.0392 0.0537 0.101 0.129 0.000252 0.000322 Wall time: 70014.35071223509 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 419 100 0.0387 0.00173 0.00423 0.037 0.0496 0.0606 0.0777 0.000152 0.000194 419 118 0.0371 0.00175 0.00199 0.037 0.05 0.0379 0.0532 9.47e-05 0.000133 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 419 100 0.0294 0.00135 0.00235 0.0325 0.0439 0.0545 0.0579 0.000136 0.000145 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 419 70181.439 0.005 0.00242 0.038 0.0865 0.0435 0.0588 0.162 0.234 0.000404 0.000584 ! Validation 419 70181.439 0.005 0.00197 0.0103 0.0497 0.0387 0.053 0.0984 0.121 0.000246 0.000302 Wall time: 70181.43924827408 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 420 100 0.0492 0.00172 0.0149 0.0369 0.0494 0.128 0.146 0.00032 0.000365 420 118 0.0466 0.00199 0.00687 0.0393 0.0532 0.0876 0.0989 0.000219 0.000247 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 420 100 0.0274 0.00136 0.000193 0.0326 0.0441 0.016 0.0166 3.99e-05 4.15e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 420 70348.538 0.005 0.00178 0.0119 0.0474 0.0375 0.0503 0.105 0.13 0.000263 0.000325 ! Validation 420 70348.538 0.005 0.00197 0.00793 0.0473 0.0387 0.053 0.0837 0.106 0.000209 0.000266 Wall time: 70348.53874227032 ! Best model 420 0.047 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 421 100 0.0412 0.00177 0.0059 0.0375 0.0502 0.0767 0.0916 0.000192 0.000229 421 118 0.0376 0.00158 0.00603 0.0358 0.0474 0.0863 0.0927 0.000216 0.000232 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 421 100 0.0373 0.00135 0.0104 0.0324 0.0438 0.121 0.122 0.000303 0.000305 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 421 70515.718 0.005 0.00177 0.0109 0.0464 0.0375 0.0503 0.098 0.125 0.000245 0.000312 ! Validation 421 70515.718 0.005 0.00194 0.0172 0.0561 0.0384 0.0526 0.129 0.157 0.000322 0.000392 Wall time: 70515.71858712332 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 422 100 0.0379 0.00177 0.00246 0.0374 0.0502 0.0474 0.0591 0.000119 0.000148 422 118 0.0434 0.002 0.00343 0.0394 0.0534 0.0573 0.0699 0.000143 0.000175 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 422 100 0.0274 0.00134 0.000608 0.0323 0.0436 0.0256 0.0294 6.4e-05 7.35e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 422 70682.809 0.005 0.00177 0.0147 0.0501 0.0374 0.0502 0.114 0.145 0.000284 0.000363 ! Validation 422 70682.809 0.005 0.00195 0.00724 0.0463 0.0384 0.0527 0.0799 0.102 0.0002 0.000254 Wall time: 70682.80907406425 ! Best model 422 0.046 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 423 100 0.0536 0.00168 0.02 0.0365 0.0489 0.159 0.169 0.000396 0.000422 423 118 0.0414 0.00194 0.00261 0.0394 0.0525 0.0516 0.061 0.000129 0.000152 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 423 100 0.0288 0.00141 0.0007 0.0332 0.0448 0.0261 0.0316 6.52e-05 7.89e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! 
Train 423 70849.918 0.005 0.00175 0.0124 0.0474 0.0372 0.0499 0.107 0.133 0.000268 0.000333 ! Validation 423 70849.918 0.005 0.00203 0.00697 0.0476 0.0394 0.0538 0.077 0.0996 0.000192 0.000249 Wall time: 70849.91871393938 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 424 100 0.062 0.0019 0.0241 0.0386 0.052 0.169 0.185 0.000421 0.000463 424 118 0.0397 0.00184 0.00295 0.0383 0.0511 0.05 0.0648 0.000125 0.000162 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 424 100 0.0297 0.00133 0.00321 0.0322 0.0435 0.066 0.0676 0.000165 0.000169 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 424 71017.008 0.005 0.00179 0.0166 0.0523 0.0377 0.0504 0.124 0.154 0.000309 0.000386 ! Validation 424 71017.008 0.005 0.00193 0.00846 0.0471 0.0383 0.0525 0.0871 0.11 0.000218 0.000274 Wall time: 71017.00884547736 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 425 100 0.0457 0.0018 0.00967 0.0376 0.0507 0.101 0.117 0.000254 0.000293 425 118 0.0411 0.0017 0.007 0.0369 0.0493 0.087 0.0998 0.000218 0.00025 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 425 100 0.0306 0.00153 9.15e-05 0.0345 0.0466 0.00969 0.0114 2.42e-05 2.85e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 425 71184.110 0.005 0.00174 0.012 0.0469 0.0372 0.0498 0.101 0.131 0.000253 0.000328 ! Validation 425 71184.110 0.005 0.00206 0.00723 0.0484 0.0399 0.0542 0.0792 0.101 0.000198 0.000254 Wall time: 71184.11009953031 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 426 100 0.0565 0.00189 0.0186 0.0389 0.0519 0.15 0.163 0.000375 0.000407 426 118 0.0567 0.00193 0.0182 0.0391 0.0524 0.14 0.161 0.000349 0.000403 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 426 100 0.0316 0.00147 0.00222 0.0339 0.0458 0.0542 0.0563 0.000136 0.000141 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 426 71351.289 0.005 0.00182 0.0241 0.0605 0.0381 0.0509 0.154 0.186 0.000386 0.000464 ! Validation 426 71351.289 0.005 0.00207 0.00943 0.0508 0.0399 0.0543 0.0932 0.116 0.000233 0.00029 Wall time: 71351.28979588812 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 427 100 0.0387 0.00179 0.00282 0.0376 0.0505 0.0512 0.0634 0.000128 0.000159 427 118 0.0336 0.00143 0.00505 0.0341 0.0451 0.0689 0.0849 0.000172 0.000212 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 427 100 0.0323 0.00132 0.006 0.032 0.0433 0.0902 0.0924 0.000226 0.000231 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 427 71518.377 0.005 0.00179 0.0177 0.0535 0.0377 0.0505 0.121 0.159 0.000301 0.000398 ! Validation 427 71518.377 0.005 0.00193 0.0136 0.0523 0.0383 0.0525 0.112 0.139 0.00028 0.000349 Wall time: 71518.377276049 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 428 100 0.0498 0.00168 0.0162 0.0363 0.0489 0.13 0.152 0.000324 0.00038 428 118 0.0397 0.00177 0.00433 0.0377 0.0502 0.0593 0.0785 0.000148 0.000196 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 428 100 0.0274 0.00136 0.000166 0.0326 0.0441 0.0133 0.0154 3.33e-05 3.84e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! 
Train 428 71685.463 0.005 0.0018 0.016 0.0519 0.0378 0.0506 0.122 0.151 0.000304 0.000379 ! Validation 428 71685.463 0.005 0.00194 0.0082 0.047 0.0384 0.0526 0.0845 0.108 0.000211 0.00027 Wall time: 71685.46361045213 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 429 100 0.0443 0.00171 0.0101 0.0368 0.0494 0.1 0.12 0.00025 0.0003 429 118 0.0441 0.00175 0.009 0.0372 0.05 0.0881 0.113 0.00022 0.000283 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 429 100 0.0286 0.00141 0.000442 0.0329 0.0448 0.0224 0.0251 5.6e-05 6.28e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 429 71852.708 0.005 0.0018 0.0147 0.0507 0.0378 0.0506 0.116 0.145 0.000291 0.000363 ! Validation 429 71852.708 0.005 0.00201 0.00668 0.0468 0.0392 0.0534 0.0761 0.0976 0.00019 0.000244 Wall time: 71852.70810610242 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 430 100 0.0525 0.00179 0.0167 0.0376 0.0504 0.143 0.154 0.000357 0.000386 430 118 0.117 0.00215 0.0744 0.0414 0.0553 0.309 0.326 0.000773 0.000814 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 430 100 0.0525 0.00191 0.0143 0.0389 0.0521 0.142 0.143 0.000356 0.000357 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 430 72019.883 0.005 0.00176 0.0149 0.0501 0.0374 0.0501 0.115 0.143 0.000287 0.000359 ! Validation 430 72019.883 0.005 0.00248 0.0473 0.0969 0.0442 0.0594 0.225 0.26 0.000564 0.000649 Wall time: 72019.88361944817 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 431 100 0.0413 0.00167 0.00802 0.0364 0.0487 0.0934 0.107 0.000233 0.000267 431 118 0.059 0.00167 0.0257 0.036 0.0488 0.171 0.191 0.000428 0.000478 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 431 100 0.044 0.00135 0.017 0.0328 0.0439 0.154 0.155 0.000385 0.000389 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 431 72191.339 0.005 0.00198 0.0201 0.0597 0.0397 0.0531 0.13 0.169 0.000326 0.000422 ! Validation 431 72191.339 0.005 0.00198 0.032 0.0715 0.0389 0.0531 0.19 0.213 0.000476 0.000534 Wall time: 72191.3398076212 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 432 100 0.0386 0.00176 0.00329 0.0374 0.0501 0.0555 0.0685 0.000139 0.000171 432 118 0.0351 0.00154 0.00428 0.0354 0.0468 0.0578 0.078 0.000145 0.000195 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 432 100 0.028 0.00131 0.00185 0.0318 0.0432 0.0493 0.0513 0.000123 0.000128 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 432 72358.408 0.005 0.00174 0.0156 0.0505 0.0372 0.0498 0.12 0.15 0.0003 0.000374 ! Validation 432 72358.408 0.005 0.00192 0.00732 0.0457 0.0382 0.0523 0.0792 0.102 0.000198 0.000255 Wall time: 72358.4082747642 ! Best model 432 0.046 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 433 100 0.0529 0.00179 0.017 0.0377 0.0506 0.134 0.156 0.000335 0.000389 433 118 0.0414 0.00148 0.0118 0.0347 0.0459 0.111 0.129 0.000277 0.000324 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 433 100 0.0413 0.00132 0.015 0.0319 0.0433 0.145 0.146 0.000364 0.000366 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! 
Train 433 72525.490 0.005 0.00178 0.0149 0.0506 0.0377 0.0504 0.113 0.146 0.000282 0.000365 ! Validation 433 72525.490 0.005 0.00192 0.0243 0.0627 0.0383 0.0523 0.162 0.186 0.000404 0.000465 Wall time: 72525.49019691022 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 434 100 0.0627 0.0017 0.0287 0.0368 0.0492 0.196 0.202 0.000489 0.000505 434 118 0.0522 0.00157 0.0208 0.0354 0.0473 0.153 0.172 0.000383 0.00043 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 434 100 0.043 0.00134 0.0162 0.0323 0.0437 0.151 0.152 0.000377 0.00038 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 434 72692.557 0.005 0.00172 0.0177 0.0522 0.037 0.0496 0.126 0.159 0.000316 0.000397 ! Validation 434 72692.557 0.005 0.00199 0.0283 0.0682 0.039 0.0533 0.174 0.201 0.000434 0.000502 Wall time: 72692.55776370037 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 435 100 0.0421 0.00171 0.00787 0.0369 0.0494 0.0864 0.106 0.000216 0.000265 435 118 0.0542 0.00192 0.0158 0.0388 0.0523 0.125 0.15 0.000313 0.000375 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 435 100 0.0274 0.00126 0.00209 0.0313 0.0424 0.0517 0.0545 0.000129 0.000136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 435 72859.717 0.005 0.00175 0.0166 0.0516 0.0373 0.0499 0.126 0.154 0.000315 0.000385 ! Validation 435 72859.717 0.005 0.00188 0.00796 0.0456 0.0378 0.0518 0.0818 0.106 0.000204 0.000266 Wall time: 72859.71757974243 ! Best model 435 0.046 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 436 100 0.0428 0.00177 0.00749 0.0374 0.0502 0.0857 0.103 0.000214 0.000258 436 118 0.0642 0.00159 0.0324 0.0358 0.0476 0.2 0.215 0.000499 0.000537 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 436 100 0.0462 0.00147 0.0169 0.0334 0.0457 0.153 0.155 0.000384 0.000388 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 436 73026.802 0.005 0.00169 0.0145 0.0483 0.0366 0.0491 0.106 0.143 0.000266 0.000357 ! Validation 436 73026.802 0.005 0.00204 0.0178 0.0585 0.0395 0.0539 0.14 0.159 0.000349 0.000398 Wall time: 73026.80273851939 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 437 100 0.0465 0.00163 0.014 0.0361 0.0482 0.125 0.141 0.000312 0.000353 437 118 0.0438 0.00201 0.00365 0.0397 0.0535 0.0601 0.0721 0.00015 0.00018 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 437 100 0.032 0.00155 0.00103 0.0349 0.047 0.0325 0.0383 8.13e-05 9.59e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 437 73193.874 0.005 0.00178 0.0193 0.0549 0.0377 0.0503 0.136 0.166 0.00034 0.000416 ! Validation 437 73193.874 0.005 0.00213 0.0105 0.053 0.0406 0.055 0.0971 0.122 0.000243 0.000306 Wall time: 73193.87460639141 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 438 100 0.0428 0.00165 0.0098 0.0364 0.0485 0.103 0.118 0.000258 0.000295 438 118 0.0639 0.00207 0.0225 0.0404 0.0543 0.175 0.179 0.000438 0.000447 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 438 100 0.0269 0.00133 0.000244 0.032 0.0436 0.0154 0.0187 3.85e-05 4.66e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! 
Train 438 73360.943 0.005 0.00171 0.0109 0.0451 0.0369 0.0493 0.099 0.124 0.000247 0.000311 ! Validation 438 73360.943 0.005 0.00192 0.0164 0.0548 0.0382 0.0523 0.115 0.153 0.000288 0.000382 Wall time: 73360.94355804939 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 439 100 0.0443 0.00173 0.00965 0.0373 0.0497 0.0958 0.117 0.000239 0.000293 439 118 0.0498 0.00177 0.0143 0.0373 0.0503 0.134 0.143 0.000336 0.000357 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 439 100 0.0327 0.00149 0.00289 0.0341 0.0461 0.0623 0.0642 0.000156 0.00016 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 439 73528.010 0.005 0.0019 0.0278 0.0657 0.0389 0.052 0.165 0.199 0.000413 0.000498 ! Validation 439 73528.010 0.005 0.00199 0.00941 0.0493 0.0392 0.0533 0.0933 0.116 0.000233 0.00029 Wall time: 73528.01069072029 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 440 100 0.07 0.00172 0.0355 0.0372 0.0495 0.187 0.225 0.000468 0.000562 440 118 0.0368 0.00154 0.006 0.0353 0.0468 0.0814 0.0924 0.000204 0.000231 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 440 100 0.0412 0.0016 0.00912 0.0354 0.0478 0.112 0.114 0.000279 0.000285 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 440 73695.174 0.005 0.00174 0.0167 0.0514 0.0371 0.0498 0.115 0.154 0.000288 0.000386 ! Validation 440 73695.174 0.005 0.00209 0.0201 0.0619 0.0402 0.0545 0.136 0.169 0.00034 0.000423 Wall time: 73695.17483167443 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 441 100 0.0439 0.00168 0.0104 0.0366 0.0489 0.107 0.122 0.000267 0.000304 441 118 0.0612 0.00171 0.027 0.0369 0.0494 0.178 0.196 0.000446 0.000491 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 441 100 0.0822 0.00188 0.0445 0.0384 0.0518 0.25 0.252 0.000625 0.000629 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 441 73862.268 0.005 0.00169 0.0113 0.045 0.0366 0.049 0.101 0.126 0.000253 0.000316 ! Validation 441 73862.268 0.005 0.00238 0.0407 0.0882 0.043 0.0582 0.218 0.241 0.000544 0.000602 Wall time: 73862.26894260943 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 442 100 0.0683 0.00176 0.0331 0.0379 0.0501 0.207 0.217 0.000517 0.000543 442 118 0.0399 0.00185 0.00299 0.0382 0.0513 0.0493 0.0653 0.000123 0.000163 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 442 100 0.0278 0.00136 0.000599 0.0328 0.044 0.0259 0.0292 6.48e-05 7.3e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 442 74029.348 0.005 0.00171 0.0109 0.045 0.0369 0.0493 0.101 0.125 0.000252 0.000312 ! Validation 442 74029.348 0.005 0.00192 0.00757 0.046 0.0384 0.0523 0.08 0.104 0.0002 0.00026 Wall time: 74029.3487562174 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 443 100 0.0769 0.00195 0.038 0.0394 0.0526 0.215 0.233 0.000537 0.000582 443 118 0.0366 0.00166 0.00334 0.0365 0.0487 0.0589 0.0689 0.000147 0.000172 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 443 100 0.03 0.00135 0.00305 0.0323 0.0438 0.0644 0.0659 0.000161 0.000165 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! 
Train 443 74196.449 0.005 0.00174 0.0168 0.0517 0.0373 0.0498 0.124 0.155 0.000311 0.000388 ! Validation 443 74196.449 0.005 0.0019 0.0133 0.0513 0.0382 0.052 0.111 0.138 0.000277 0.000344 Wall time: 74196.44904303737 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 444 100 0.044 0.00162 0.0115 0.0359 0.0481 0.117 0.128 0.000292 0.00032 444 118 0.0437 0.00161 0.0115 0.0361 0.0479 0.121 0.128 0.000303 0.00032 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 444 100 0.0334 0.00134 0.00669 0.0321 0.0436 0.0969 0.0977 0.000242 0.000244 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 444 74363.532 0.005 0.00166 0.011 0.0442 0.0363 0.0486 0.0993 0.125 0.000248 0.000314 ! Validation 444 74363.532 0.005 0.00187 0.0135 0.0509 0.0378 0.0516 0.109 0.139 0.000273 0.000346 Wall time: 74363.5321343313 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 445 100 0.0405 0.00167 0.00716 0.0363 0.0487 0.0851 0.101 0.000213 0.000253 445 118 0.0395 0.00164 0.00668 0.0362 0.0484 0.0857 0.0976 0.000214 0.000244 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 445 100 0.0248 0.00122 0.000327 0.031 0.0417 0.0204 0.0216 5.1e-05 5.4e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 445 74530.680 0.005 0.0017 0.0186 0.0526 0.0368 0.0492 0.126 0.163 0.000316 0.000407 ! Validation 445 74530.680 0.005 0.0018 0.00671 0.0427 0.037 0.0507 0.0767 0.0978 0.000192 0.000244 Wall time: 74530.68032540614 ! Best model 445 0.043 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 446 100 0.0409 0.00179 0.00523 0.0377 0.0504 0.0677 0.0864 0.000169 0.000216 446 118 0.283 0.002 0.243 0.0411 0.0534 0.584 0.588 0.00146 0.00147 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 446 100 0.537 0.00512 0.434 0.0637 0.0854 0.783 0.787 0.00196 0.00197 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 446 74697.765 0.005 0.00176 0.0217 0.0569 0.0373 0.0501 0.129 0.169 0.000323 0.000424 ! Validation 446 74697.765 0.005 0.00589 0.278 0.396 0.068 0.0916 0.579 0.63 0.00145 0.00157 Wall time: 74697.76545484224 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 447 100 0.513 0.0222 0.0703 0.128 0.178 0.251 0.316 0.000627 0.000791 447 118 0.426 0.0178 0.07 0.115 0.159 0.266 0.316 0.000664 0.00079 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 447 100 0.349 0.0162 0.0256 0.11 0.152 0.178 0.191 0.000446 0.000477 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 447 74864.822 0.005 0.102 1.91 3.95 0.239 0.383 0.995 1.65 0.00249 0.00413 ! Validation 447 74864.822 0.005 0.0187 0.205 0.58 0.117 0.163 0.435 0.541 0.00109 0.00135 Wall time: 74864.82235335419 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 448 100 0.321 0.0111 0.0985 0.0902 0.126 0.31 0.375 0.000774 0.000937 448 118 0.296 0.0112 0.0719 0.0909 0.126 0.258 0.32 0.000644 0.0008 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 448 100 0.218 0.00904 0.0373 0.0819 0.113 0.216 0.23 0.00054 0.000576 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 448 75031.876 0.005 0.0138 0.128 0.403 0.1 0.14 0.349 0.427 0.000872 0.00107 ! 
Validation 448 75031.876 0.005 0.0106 0.0533 0.266 0.0886 0.123 0.224 0.276 0.00056 0.000689 Wall time: 75031.87635863712 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 449 100 0.186 0.00743 0.0375 0.0759 0.103 0.188 0.231 0.000469 0.000578 449 118 0.157 0.00644 0.0278 0.0714 0.0958 0.172 0.199 0.000431 0.000498 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 449 100 0.12 0.00573 0.00511 0.0668 0.0904 0.0805 0.0853 0.000201 0.000213 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 449 75199.027 0.005 0.00833 0.0848 0.251 0.0795 0.109 0.278 0.348 0.000695 0.000871 ! Validation 449 75199.027 0.005 0.00679 0.026 0.162 0.0725 0.0984 0.152 0.192 0.000379 0.000481 Wall time: 75199.02743237512 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 450 100 0.121 0.005 0.0212 0.0634 0.0844 0.139 0.174 0.000347 0.000434 450 118 0.17 0.00433 0.0838 0.0598 0.0786 0.332 0.346 0.00083 0.000864 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 450 100 0.245 0.00409 0.163 0.0575 0.0763 0.476 0.482 0.00119 0.00121 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 450 75366.096 0.005 0.00553 0.0747 0.185 0.0665 0.0889 0.265 0.326 0.000663 0.000815 ! Validation 450 75366.096 0.005 0.00496 0.185 0.285 0.063 0.0841 0.487 0.514 0.00122 0.00128 Wall time: 75366.0965986452 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 451 100 0.0936 0.00397 0.0143 0.0569 0.0752 0.111 0.143 0.000278 0.000356 451 118 0.0951 0.00358 0.0234 0.0546 0.0714 0.163 0.183 0.000407 0.000457 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 451 100 0.0828 0.00321 0.0186 0.0513 0.0676 0.15 0.163 0.000376 0.000407 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 451 75533.158 0.005 0.00418 0.0432 0.127 0.0585 0.0772 0.201 0.248 0.000502 0.000621 ! Validation 451 75533.158 0.005 0.00409 0.0233 0.105 0.0572 0.0764 0.149 0.182 0.000373 0.000456 Wall time: 75533.15805786522 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 452 100 0.0825 0.00331 0.0162 0.052 0.0687 0.122 0.152 0.000304 0.00038 452 118 0.0738 0.00345 0.0048 0.0533 0.0701 0.0672 0.0827 0.000168 0.000207 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 452 100 0.0792 0.00272 0.0247 0.0473 0.0623 0.182 0.188 0.000454 0.000469 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 452 75700.218 0.005 0.00356 0.026 0.0971 0.054 0.0712 0.156 0.193 0.00039 0.000482 ! Validation 452 75700.218 0.005 0.00362 0.041 0.113 0.0537 0.0718 0.209 0.242 0.000524 0.000604 Wall time: 75700.21805254323 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 453 100 0.0983 0.0029 0.0403 0.0487 0.0643 0.212 0.239 0.000529 0.000599 453 118 0.0807 0.0032 0.0168 0.0511 0.0675 0.119 0.155 0.000298 0.000387 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 453 100 0.112 0.00246 0.0624 0.045 0.0592 0.295 0.298 0.000738 0.000745 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 453 75867.290 0.005 0.00319 0.024 0.0878 0.051 0.0674 0.147 0.185 0.000368 0.000462 ! 
Validation 453 75867.290 0.005 0.00335 0.077 0.144 0.0515 0.069 0.311 0.331 0.000776 0.000828 Wall time: 75867.29089192906 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 454 100 0.0792 0.00297 0.0198 0.049 0.0651 0.136 0.168 0.00034 0.000419 454 118 0.0631 0.00284 0.00629 0.0484 0.0636 0.0788 0.0947 0.000197 0.000237 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 454 100 0.0477 0.00222 0.00327 0.0427 0.0562 0.0564 0.0683 0.000141 0.000171 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 454 76034.443 0.005 0.00296 0.0284 0.0877 0.0491 0.065 0.161 0.202 0.000403 0.000504 ! Validation 454 76034.443 0.005 0.0031 0.0135 0.0755 0.0494 0.0665 0.11 0.139 0.000275 0.000347 Wall time: 76034.44327511312 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 455 100 0.0783 0.00284 0.0216 0.0479 0.0636 0.149 0.175 0.000373 0.000438 455 118 0.0928 0.00278 0.0372 0.0478 0.063 0.214 0.23 0.000536 0.000575 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 455 100 0.165 0.00209 0.123 0.0414 0.0546 0.417 0.418 0.00104 0.00105 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 455 76201.524 0.005 0.00277 0.0207 0.0761 0.0474 0.0628 0.137 0.171 0.000344 0.000428 ! Validation 455 76201.524 0.005 0.00294 0.144 0.203 0.0481 0.0648 0.439 0.453 0.0011 0.00113 Wall time: 76201.52430943307 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 456 100 0.0638 0.00269 0.00994 0.0466 0.0619 0.0967 0.119 0.000242 0.000297 456 118 0.0884 0.00284 0.0317 0.0476 0.0636 0.202 0.212 0.000504 0.000531 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 456 100 0.0843 0.00198 0.0446 0.0402 0.0532 0.25 0.252 0.000624 0.00063 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 456 76368.585 0.005 0.00266 0.0309 0.084 0.0464 0.0615 0.162 0.21 0.000406 0.000524 ! Validation 456 76368.585 0.005 0.00283 0.0469 0.103 0.047 0.0635 0.236 0.258 0.00059 0.000646 Wall time: 76368.58503506798 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 457 100 0.0963 0.00256 0.0451 0.0454 0.0604 0.236 0.254 0.00059 0.000634 457 118 0.0528 0.0024 0.00481 0.0444 0.0585 0.0615 0.0828 0.000154 0.000207 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 457 100 0.0388 0.00188 0.00106 0.0393 0.0518 0.0293 0.0388 7.31e-05 9.71e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 457 76535.646 0.005 0.00253 0.0205 0.0712 0.0452 0.0601 0.139 0.171 0.000349 0.000429 ! Validation 457 76535.646 0.005 0.00272 0.00894 0.0633 0.046 0.0622 0.0891 0.113 0.000223 0.000282 Wall time: 76535.6459748433 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 458 100 0.067 0.00253 0.0165 0.0449 0.06 0.131 0.153 0.000327 0.000383 458 118 0.0511 0.00206 0.00979 0.0411 0.0542 0.1 0.118 0.000251 0.000295 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 458 100 0.0395 0.00181 0.00333 0.0385 0.0508 0.0625 0.0689 0.000156 0.000172 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 458 76702.703 0.005 0.00243 0.0141 0.0626 0.0443 0.0588 0.113 0.142 0.000283 0.000355 ! 
Validation 458 76702.703 0.005 0.00262 0.0142 0.0666 0.0452 0.0611 0.117 0.142 0.000291 0.000355 Wall time: 76702.70326701412 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 459 100 0.0674 0.00223 0.0229 0.0425 0.0563 0.151 0.18 0.000378 0.000451 459 118 0.0774 0.00242 0.0291 0.044 0.0587 0.176 0.203 0.00044 0.000509 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 459 100 0.0497 0.00178 0.0142 0.0382 0.0503 0.141 0.142 0.000351 0.000355 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 459 76869.834 0.005 0.00234 0.0138 0.0607 0.0434 0.0578 0.113 0.14 0.000282 0.000349 ! Validation 459 76869.834 0.005 0.00257 0.0275 0.079 0.0448 0.0606 0.176 0.198 0.000439 0.000495 Wall time: 76869.83413400734 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 460 100 0.055 0.00242 0.00662 0.0442 0.0587 0.0776 0.0971 0.000194 0.000243 460 118 0.0965 0.00215 0.0536 0.0421 0.0553 0.268 0.276 0.000671 0.000691 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 460 100 0.203 0.00173 0.168 0.0374 0.0496 0.488 0.489 0.00122 0.00122 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 460 77036.907 0.005 0.00228 0.016 0.0617 0.0429 0.057 0.121 0.15 0.000302 0.000374 ! Validation 460 77036.907 0.005 0.00249 0.154 0.203 0.044 0.0596 0.456 0.468 0.00114 0.00117 Wall time: 77036.90741181513 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 461 100 0.0528 0.00225 0.00791 0.0427 0.0566 0.0849 0.106 0.000212 0.000265 461 118 0.0573 0.00245 0.00841 0.0428 0.059 0.0858 0.109 0.000215 0.000274 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 461 100 0.0401 0.00166 0.00684 0.0368 0.0487 0.0962 0.0987 0.000241 0.000247 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 461 77203.966 0.005 0.00224 0.0205 0.0653 0.0424 0.0565 0.133 0.171 0.000332 0.000428 ! Validation 461 77203.966 0.005 0.00243 0.0108 0.0594 0.0434 0.0588 0.0976 0.124 0.000244 0.000309 Wall time: 77203.96644927422 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 462 100 0.0595 0.00233 0.0128 0.0434 0.0577 0.118 0.135 0.000294 0.000338 462 118 0.101 0.00247 0.0516 0.0443 0.0594 0.26 0.271 0.000651 0.000678 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 462 100 0.128 0.00162 0.0957 0.0364 0.048 0.369 0.369 0.000922 0.000923 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 462 77371.019 0.005 0.00218 0.0183 0.0619 0.0418 0.0557 0.132 0.161 0.000331 0.000402 ! Validation 462 77371.019 0.005 0.00239 0.0961 0.144 0.043 0.0583 0.356 0.37 0.00089 0.000925 Wall time: 77371.01989729516 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 463 100 0.0489 0.0021 0.00684 0.0409 0.0547 0.0793 0.0987 0.000198 0.000247 463 118 0.0566 0.00238 0.00909 0.044 0.0582 0.0955 0.114 0.000239 0.000284 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 463 100 0.0468 0.00159 0.015 0.0359 0.0476 0.144 0.146 0.000361 0.000366 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 463 77538.084 0.005 0.00215 0.0171 0.06 0.0415 0.0553 0.122 0.156 0.000305 0.000391 ! 
Validation 463 77538.084 0.005 0.00233 0.0167 0.0632 0.0424 0.0576 0.129 0.154 0.000322 0.000385 Wall time: 77538.08408458019 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 464 100 0.0524 0.002 0.0125 0.0401 0.0533 0.116 0.133 0.000291 0.000333 464 118 0.0841 0.00212 0.0417 0.0408 0.055 0.23 0.244 0.000574 0.000609 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 464 100 0.091 0.0016 0.059 0.036 0.0478 0.289 0.29 0.000721 0.000725 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 464 77705.225 0.005 0.00209 0.0175 0.0594 0.041 0.0546 0.123 0.157 0.000307 0.000393 ! Validation 464 77705.225 0.005 0.00232 0.0558 0.102 0.0424 0.0575 0.265 0.282 0.000662 0.000705 Wall time: 77705.22519544419 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 465 100 0.0464 0.00206 0.00515 0.0405 0.0542 0.0672 0.0857 0.000168 0.000214 465 118 0.0403 0.00178 0.00466 0.0381 0.0504 0.0687 0.0815 0.000172 0.000204 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 465 100 0.0311 0.00154 0.000227 0.0355 0.0469 0.0162 0.018 4.05e-05 4.5e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 465 77872.291 0.005 0.00206 0.0188 0.06 0.0407 0.0542 0.131 0.164 0.000328 0.00041 ! Validation 465 77872.291 0.005 0.00227 0.00719 0.0525 0.0419 0.0568 0.0809 0.101 0.000202 0.000253 Wall time: 77872.29150789743 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 466 100 0.0604 0.00229 0.0146 0.0426 0.0571 0.125 0.144 0.000314 0.000361 466 118 0.0488 0.00174 0.0139 0.0379 0.0498 0.136 0.141 0.000341 0.000352 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 466 100 0.0695 0.00152 0.0391 0.035 0.0465 0.235 0.236 0.000588 0.00059 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 466 78039.347 0.005 0.00203 0.0215 0.0622 0.0404 0.0539 0.133 0.175 0.000333 0.000438 ! Validation 466 78039.347 0.005 0.00223 0.0487 0.0933 0.0415 0.0563 0.242 0.263 0.000606 0.000659 Wall time: 78039.34796407307 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 467 100 0.0473 0.00208 0.0058 0.0408 0.0544 0.0765 0.0909 0.000191 0.000227 467 118 0.0695 0.00196 0.0304 0.0396 0.0528 0.193 0.208 0.000482 0.00052 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 467 100 0.0627 0.0015 0.0327 0.0349 0.0462 0.215 0.216 0.000538 0.000539 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 467 78206.412 0.005 0.002 0.0145 0.0544 0.04 0.0533 0.116 0.143 0.000289 0.000358 ! Validation 467 78206.412 0.005 0.00221 0.054 0.0981 0.0412 0.0561 0.261 0.277 0.000653 0.000694 Wall time: 78206.4123771023 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 468 100 0.0494 0.00194 0.0106 0.0392 0.0526 0.103 0.123 0.000257 0.000307 468 118 0.0472 0.00215 0.00427 0.0419 0.0553 0.0563 0.078 0.000141 0.000195 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 468 100 0.0317 0.00149 0.00195 0.0346 0.046 0.0488 0.0527 0.000122 0.000132 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 468 78373.551 0.005 0.00197 0.0213 0.0607 0.0397 0.053 0.142 0.175 0.000356 0.000436 ! 
Validation 468 78373.551 0.005 0.00218 0.00896 0.0526 0.041 0.0558 0.0922 0.113 0.00023 0.000282 Wall time: 78373.5518672443 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 469 100 0.0463 0.00208 0.00472 0.0405 0.0544 0.0655 0.082 0.000164 0.000205 469 118 0.101 0.00152 0.0709 0.0356 0.0466 0.302 0.318 0.000756 0.000794 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 469 100 0.0657 0.0015 0.0358 0.0346 0.0462 0.225 0.226 0.000562 0.000564 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 469 78540.609 0.005 0.00195 0.0202 0.0592 0.0396 0.0527 0.136 0.168 0.000339 0.000421 ! Validation 469 78540.609 0.005 0.00219 0.044 0.0877 0.0411 0.0558 0.232 0.25 0.000581 0.000626 Wall time: 78540.60991637502 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 470 100 0.051 0.00196 0.0118 0.0395 0.0529 0.111 0.13 0.000278 0.000324 470 118 0.0505 0.00165 0.0174 0.0368 0.0485 0.144 0.158 0.00036 0.000394 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 470 100 0.0465 0.00144 0.0176 0.0341 0.0453 0.157 0.158 0.000393 0.000396 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 470 78707.663 0.005 0.00193 0.0187 0.0574 0.0394 0.0525 0.132 0.163 0.000331 0.000408 ! Validation 470 78707.663 0.005 0.00212 0.0199 0.0624 0.0404 0.055 0.145 0.169 0.000364 0.000421 Wall time: 78707.66336160107 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 471 100 0.0489 0.00199 0.00902 0.0399 0.0533 0.0937 0.113 0.000234 0.000283 471 118 0.0467 0.00147 0.0173 0.0348 0.0458 0.143 0.157 0.000357 0.000393 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 471 100 0.0394 0.00142 0.0109 0.0339 0.045 0.124 0.125 0.00031 0.000312 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 471 78874.723 0.005 0.00189 0.014 0.0519 0.0389 0.052 0.116 0.141 0.00029 0.000353 ! Validation 471 78874.723 0.005 0.00211 0.0217 0.0639 0.0403 0.0548 0.156 0.176 0.00039 0.00044 Wall time: 78874.72352551902 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 472 100 0.0591 0.002 0.0192 0.04 0.0533 0.149 0.165 0.000371 0.000413 472 118 0.0459 0.00193 0.00731 0.039 0.0524 0.092 0.102 0.00023 0.000255 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 472 100 0.0372 0.0014 0.00918 0.0336 0.0447 0.112 0.114 0.000281 0.000286 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 472 79041.784 0.005 0.00188 0.0177 0.0553 0.0388 0.0518 0.13 0.159 0.000325 0.000397 ! Validation 472 79041.784 0.005 0.00207 0.0114 0.0528 0.0399 0.0543 0.102 0.127 0.000256 0.000319 Wall time: 79041.78493498033 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 473 100 0.0625 0.00182 0.0261 0.0381 0.0509 0.178 0.193 0.000444 0.000482 473 118 0.044 0.00184 0.00713 0.0385 0.0512 0.0868 0.101 0.000217 0.000252 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 473 100 0.06 0.00137 0.0325 0.0332 0.0442 0.214 0.215 0.000536 0.000538 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 473 79208.926 0.005 0.00187 0.0167 0.054 0.0386 0.0516 0.125 0.154 0.000313 0.000386 ! 
Validation 473 79208.926 0.005 0.00205 0.0309 0.0719 0.0396 0.054 0.19 0.21 0.000474 0.000524 Wall time: 79208.92644595727 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 474 100 0.0466 0.00183 0.0099 0.0383 0.0511 0.102 0.119 0.000254 0.000297 474 118 0.049 0.00177 0.0137 0.0377 0.0502 0.119 0.14 0.000298 0.00035 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 474 100 0.0281 0.00137 0.00072 0.0331 0.0441 0.0279 0.032 6.99e-05 8.01e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 474 79375.999 0.005 0.00183 0.0125 0.0492 0.0383 0.0511 0.106 0.133 0.000265 0.000333 ! Validation 474 79375.999 0.005 0.00202 0.00791 0.0483 0.0393 0.0537 0.0855 0.106 0.000214 0.000265 Wall time: 79375.99954807106 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 475 100 0.0664 0.00195 0.0274 0.0396 0.0527 0.19 0.198 0.000475 0.000494 475 118 0.047 0.0019 0.00898 0.0392 0.052 0.0974 0.113 0.000244 0.000283 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 475 100 0.0475 0.00142 0.019 0.0336 0.045 0.164 0.165 0.000411 0.000412 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 475 79543.062 0.005 0.00184 0.0264 0.0633 0.0383 0.0512 0.159 0.195 0.000398 0.000486 ! Validation 475 79543.062 0.005 0.00207 0.0236 0.065 0.0399 0.0543 0.164 0.183 0.000411 0.000459 Wall time: 79543.06276267627 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 476 100 0.0602 0.00183 0.0236 0.0382 0.0511 0.168 0.183 0.000421 0.000459 476 118 0.0687 0.00172 0.0344 0.0373 0.0494 0.2 0.221 0.000499 0.000553 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 476 100 0.0656 0.00135 0.0386 0.033 0.0438 0.234 0.235 0.000586 0.000587 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 476 79710.124 0.005 0.00182 0.0154 0.0518 0.0381 0.0509 0.12 0.148 0.0003 0.000369 ! Validation 476 79710.124 0.005 0.00202 0.048 0.0884 0.0393 0.0536 0.246 0.261 0.000614 0.000653 Wall time: 79710.12492981227 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 477 100 0.0409 0.00178 0.00536 0.0377 0.0503 0.0701 0.0873 0.000175 0.000218 477 118 0.0406 0.00184 0.00386 0.0384 0.0511 0.0664 0.0741 0.000166 0.000185 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 477 100 0.0323 0.00134 0.00548 0.0327 0.0437 0.0866 0.0883 0.000216 0.000221 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 477 79877.202 0.005 0.0018 0.0152 0.0512 0.0379 0.0506 0.121 0.148 0.000302 0.000369 ! Validation 477 79877.202 0.005 0.00199 0.00752 0.0473 0.039 0.0532 0.0821 0.104 0.000205 0.000259 Wall time: 79877.20205353433 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 478 100 0.0445 0.00189 0.00657 0.0387 0.052 0.0759 0.0967 0.00019 0.000242 478 118 0.046 0.00151 0.0158 0.0347 0.0463 0.132 0.15 0.000331 0.000376 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 478 100 0.0415 0.00133 0.0149 0.0326 0.0435 0.145 0.146 0.000362 0.000365 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 478 80044.333 0.005 0.0018 0.0175 0.0535 0.0379 0.0506 0.13 0.158 0.000324 0.000395 ! 
Validation 478 80044.333 0.005 0.00197 0.0159 0.0554 0.0389 0.053 0.126 0.151 0.000316 0.000376 Wall time: 80044.33380503021 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 479 100 0.0388 0.00174 0.00393 0.0372 0.0499 0.057 0.0749 0.000143 0.000187 479 118 0.0379 0.0015 0.00789 0.0354 0.0463 0.0838 0.106 0.000209 0.000265 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 479 100 0.0304 0.00135 0.00337 0.0329 0.0439 0.0689 0.0693 0.000172 0.000173 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 479 80211.394 0.005 0.00176 0.0131 0.0484 0.0375 0.0501 0.111 0.137 0.000278 0.000342 ! Validation 479 80211.394 0.005 0.00199 0.0206 0.0604 0.0391 0.0532 0.146 0.171 0.000366 0.000428 Wall time: 80211.39396943711 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 480 100 0.0408 0.00179 0.0051 0.0377 0.0504 0.0736 0.0852 0.000184 0.000213 480 118 0.0422 0.00166 0.00889 0.0361 0.0487 0.101 0.113 0.000253 0.000281 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 480 100 0.0376 0.0013 0.0116 0.0322 0.0431 0.128 0.129 0.000319 0.000321 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 480 80378.448 0.005 0.00177 0.0206 0.056 0.0376 0.0502 0.134 0.172 0.000334 0.000429 ! Validation 480 80378.448 0.005 0.00193 0.0214 0.0601 0.0384 0.0525 0.155 0.175 0.000386 0.000437 Wall time: 80378.44857572718 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 481 100 0.101 0.00182 0.0646 0.038 0.0509 0.296 0.303 0.00074 0.000758 481 118 0.0491 0.0021 0.00702 0.041 0.0547 0.0903 0.1 0.000226 0.00025 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 481 100 0.0278 0.00132 0.0014 0.0323 0.0434 0.0404 0.0446 0.000101 0.000111 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 481 80545.508 0.005 0.00175 0.0172 0.0521 0.0373 0.0498 0.129 0.157 0.000323 0.000392 ! Validation 481 80545.508 0.005 0.00195 0.00842 0.0474 0.0386 0.0527 0.0839 0.109 0.00021 0.000274 Wall time: 80545.50832085218 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 482 100 0.0635 0.00166 0.0302 0.0366 0.0487 0.193 0.207 0.000483 0.000519 482 118 0.0717 0.00183 0.0351 0.0383 0.0511 0.208 0.224 0.00052 0.000559 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 482 100 0.0393 0.00134 0.0126 0.0325 0.0436 0.133 0.134 0.000332 0.000335 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 482 80712.579 0.005 0.00173 0.0122 0.0467 0.0371 0.0496 0.106 0.131 0.000264 0.000327 ! Validation 482 80712.579 0.005 0.00194 0.0158 0.0547 0.0386 0.0526 0.131 0.15 0.000327 0.000375 Wall time: 80712.57933237916 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 483 100 0.0629 0.00166 0.0298 0.0365 0.0486 0.19 0.206 0.000476 0.000515 483 118 0.0588 0.00182 0.0224 0.0381 0.0509 0.164 0.179 0.000409 0.000447 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 483 100 0.0576 0.00131 0.0315 0.0323 0.0431 0.211 0.212 0.000528 0.000529 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 483 80879.722 0.005 0.00174 0.0208 0.0555 0.0372 0.0497 0.143 0.172 0.000358 0.00043 ! 
Validation 483 80879.722 0.005 0.00193 0.0494 0.088 0.0385 0.0524 0.246 0.265 0.000616 0.000663 Wall time: 80879.7227387433 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 484 100 0.0395 0.00168 0.00588 0.0366 0.0489 0.0776 0.0915 0.000194 0.000229 484 118 0.0368 0.00173 0.00224 0.0371 0.0496 0.051 0.0565 0.000127 0.000141 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 484 100 0.0268 0.00127 0.0013 0.0318 0.0426 0.0418 0.043 0.000104 0.000108 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 484 81046.779 0.005 0.00172 0.0152 0.0497 0.0371 0.0496 0.117 0.147 0.000292 0.000368 ! Validation 484 81046.779 0.005 0.00189 0.00624 0.044 0.038 0.0519 0.0721 0.0943 0.00018 0.000236 Wall time: 81046.77983568702 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 485 100 0.0366 0.00171 0.00244 0.0368 0.0494 0.0488 0.0589 0.000122 0.000147 485 118 0.0386 0.00169 0.00482 0.0369 0.049 0.0609 0.0829 0.000152 0.000207 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 485 100 0.0315 0.00128 0.00594 0.0319 0.0427 0.0897 0.092 0.000224 0.00023 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 485 81213.832 0.005 0.00172 0.013 0.0474 0.037 0.0495 0.109 0.137 0.000272 0.000341 ! Validation 485 81213.832 0.005 0.00188 0.00913 0.0468 0.0379 0.0518 0.0888 0.114 0.000222 0.000285 Wall time: 81213.83205513936 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 486 100 0.0508 0.00172 0.0163 0.0371 0.0496 0.135 0.152 0.000338 0.000381 486 118 0.0434 0.00171 0.00928 0.0368 0.0493 0.101 0.115 0.000251 0.000287 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 486 100 0.0435 0.00125 0.0185 0.0316 0.0423 0.161 0.162 0.000404 0.000405 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 486 81380.884 0.005 0.00168 0.0102 0.0437 0.0365 0.0489 0.0964 0.12 0.000241 0.000301 ! Validation 486 81380.884 0.005 0.00185 0.0195 0.0566 0.0376 0.0514 0.142 0.167 0.000355 0.000417 Wall time: 81380.88423900306 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 487 100 0.0368 0.00165 0.00384 0.0364 0.0484 0.0565 0.0739 0.000141 0.000185 487 118 0.0478 0.00202 0.00742 0.0401 0.0536 0.07 0.103 0.000175 0.000257 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 487 100 0.0427 0.0013 0.0166 0.0322 0.0431 0.153 0.154 0.000383 0.000385 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 487 81548.024 0.005 0.00174 0.0283 0.063 0.0372 0.0497 0.155 0.201 0.000388 0.000503 ! Validation 487 81548.024 0.005 0.0019 0.0184 0.0564 0.0382 0.052 0.137 0.162 0.000341 0.000405 Wall time: 81548.02489401912 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 488 100 0.0383 0.00175 0.00333 0.0371 0.0499 0.0573 0.0689 0.000143 0.000172 488 118 0.0439 0.00156 0.0127 0.035 0.0472 0.117 0.134 0.000292 0.000336 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 488 100 0.0269 0.00126 0.00174 0.0317 0.0423 0.0475 0.0499 0.000119 0.000125 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 488 81715.090 0.005 0.00169 0.0114 0.0453 0.0367 0.0491 0.1 0.128 0.00025 0.000319 ! 
Validation 488 81715.090 0.005 0.00186 0.00898 0.0462 0.0378 0.0515 0.0829 0.113 0.000207 0.000283 Wall time: 81715.09081093501 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 489 100 0.0364 0.00162 0.00404 0.036 0.048 0.0593 0.0758 0.000148 0.00019 489 118 0.0378 0.00152 0.00747 0.0347 0.0465 0.0983 0.103 0.000246 0.000258 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 489 100 0.0494 0.00123 0.0248 0.0313 0.0419 0.187 0.188 0.000468 0.00047 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 489 81882.147 0.005 0.00168 0.0141 0.0476 0.0366 0.0489 0.11 0.142 0.000275 0.000354 ! Validation 489 81882.147 0.005 0.00184 0.0236 0.0604 0.0375 0.0512 0.162 0.183 0.000405 0.000458 Wall time: 81882.14757081727 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 490 100 0.0351 0.00153 0.00458 0.035 0.0466 0.0661 0.0808 0.000165 0.000202 490 118 0.054 0.00187 0.0165 0.0379 0.0517 0.134 0.154 0.000336 0.000384 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 490 100 0.0315 0.00125 0.00653 0.0316 0.0422 0.0957 0.0965 0.000239 0.000241 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 490 82049.209 0.005 0.00169 0.0195 0.0534 0.0367 0.0491 0.129 0.167 0.000323 0.000417 ! Validation 490 82049.209 0.005 0.00184 0.0155 0.0523 0.0375 0.0512 0.122 0.148 0.000304 0.000371 Wall time: 82049.20958673628 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 491 100 0.0337 0.00155 0.00256 0.0352 0.0471 0.0482 0.0604 0.000121 0.000151 491 118 0.0756 0.00171 0.0413 0.0364 0.0494 0.232 0.243 0.000579 0.000606 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 491 100 0.0412 0.00139 0.0134 0.0329 0.0445 0.137 0.138 0.000342 0.000346 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 491 82216.273 0.005 0.00166 0.0151 0.0482 0.0363 0.0486 0.118 0.146 0.000295 0.000364 ! Validation 491 82216.273 0.005 0.00196 0.0136 0.0528 0.0388 0.0528 0.114 0.139 0.000284 0.000348 Wall time: 82216.27308622003 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 492 100 0.0897 0.0017 0.0557 0.0367 0.0492 0.263 0.282 0.000657 0.000704 492 118 0.0421 0.00163 0.00944 0.0364 0.0482 0.0922 0.116 0.000231 0.00029 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 492 100 0.0569 0.00129 0.0312 0.032 0.0428 0.211 0.211 0.000526 0.000527 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 492 82383.420 0.005 0.00167 0.0153 0.0487 0.0365 0.0488 0.119 0.148 0.000297 0.00037 ! Validation 492 82383.420 0.005 0.00188 0.0397 0.0774 0.0381 0.0518 0.215 0.238 0.000538 0.000594 Wall time: 82383.42081136536 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 493 100 0.0344 0.00154 0.00358 0.0353 0.0469 0.0584 0.0714 0.000146 0.000178 493 118 0.0486 0.00168 0.0151 0.037 0.0489 0.136 0.147 0.000339 0.000367 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 493 100 0.0363 0.00124 0.0116 0.0314 0.042 0.128 0.129 0.000321 0.000321 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 493 82550.486 0.005 0.00166 0.0157 0.049 0.0364 0.0487 0.12 0.15 0.0003 0.000374 ! 
Validation 493 82550.486 0.005 0.00182 0.018 0.0544 0.0373 0.0509 0.133 0.16 0.000333 0.0004 Wall time: 82550.48695991002 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 494 100 0.0379 0.00158 0.00631 0.0358 0.0474 0.0768 0.0948 0.000192 0.000237 494 118 0.0498 0.00154 0.019 0.0348 0.0468 0.146 0.165 0.000364 0.000412 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 494 100 0.0847 0.00121 0.0604 0.031 0.0416 0.293 0.293 0.000733 0.000733 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 494 82717.550 0.005 0.00163 0.0115 0.0441 0.0361 0.0482 0.103 0.127 0.000257 0.000319 ! Validation 494 82717.550 0.005 0.0018 0.0554 0.0914 0.0371 0.0507 0.263 0.281 0.000659 0.000702 Wall time: 82717.55040824739 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 495 100 0.0473 0.00161 0.015 0.0358 0.0479 0.13 0.146 0.000324 0.000366 495 118 0.0361 0.00158 0.00453 0.0352 0.0474 0.0615 0.0803 0.000154 0.000201 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 495 100 0.0271 0.00117 0.00369 0.0305 0.0408 0.0719 0.0725 0.00018 0.000181 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 495 82884.619 0.005 0.0017 0.0206 0.0546 0.0368 0.0492 0.135 0.172 0.000337 0.00043 ! Validation 495 82884.619 0.005 0.00178 0.0104 0.0461 0.0369 0.0504 0.0925 0.122 0.000231 0.000304 Wall time: 82884.6194082452 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 496 100 0.0452 0.00165 0.0122 0.0363 0.0485 0.119 0.132 0.000297 0.00033 496 118 0.0405 0.00147 0.0111 0.0344 0.0457 0.117 0.126 0.000292 0.000315 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 496 100 0.0343 0.00119 0.0105 0.031 0.0412 0.121 0.122 0.000302 0.000305 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 496 83051.679 0.005 0.00161 0.0106 0.0428 0.0358 0.0479 0.099 0.123 0.000247 0.000307 ! Validation 496 83051.679 0.005 0.00181 0.022 0.0582 0.0373 0.0508 0.158 0.177 0.000394 0.000442 Wall time: 83051.67987200618 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 497 100 0.0337 0.00155 0.00279 0.0351 0.0469 0.0504 0.0631 0.000126 0.000158 497 118 0.0346 0.00149 0.0048 0.0347 0.046 0.0699 0.0827 0.000175 0.000207 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 497 100 0.0314 0.00118 0.00789 0.0306 0.0409 0.106 0.106 0.000265 0.000265 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 497 83218.822 0.005 0.00161 0.013 0.0452 0.0358 0.0479 0.111 0.136 0.000278 0.000341 ! Validation 497 83218.822 0.005 0.00176 0.0127 0.048 0.0367 0.0501 0.107 0.135 0.000266 0.000336 Wall time: 83218.82263633423 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 498 100 0.0362 0.00162 0.00368 0.0361 0.0481 0.06 0.0725 0.00015 0.000181 498 118 0.0333 0.00153 0.00279 0.0352 0.0466 0.0481 0.063 0.00012 0.000158 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 498 100 0.0252 0.00117 0.00176 0.0305 0.0409 0.0497 0.0501 0.000124 0.000125 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 498 83385.884 0.005 0.00168 0.0215 0.0551 0.0366 0.0489 0.138 0.176 0.000344 0.000439 ! 
Validation 498 83385.884 0.005 0.00178 0.0104 0.0459 0.0368 0.0503 0.101 0.122 0.000253 0.000304 Wall time: 83385.88402453018 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 499 100 0.0394 0.00157 0.00806 0.0354 0.0473 0.0935 0.107 0.000234 0.000268 499 118 0.0384 0.00141 0.0102 0.0336 0.0448 0.0918 0.12 0.00023 0.000301 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 499 100 0.0314 0.0012 0.00743 0.0308 0.0413 0.102 0.103 0.000256 0.000257 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 499 83552.939 0.005 0.0016 0.0162 0.0483 0.0358 0.0478 0.125 0.152 0.000312 0.00038 ! Validation 499 83552.939 0.005 0.00177 0.0169 0.0523 0.0368 0.0503 0.136 0.155 0.000341 0.000388 Wall time: 83552.93995982641 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 500 100 0.037 0.00162 0.0045 0.036 0.0481 0.062 0.0801 0.000155 0.0002 500 118 0.039 0.00175 0.00396 0.0372 0.05 0.0652 0.0751 0.000163 0.000188 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 500 100 0.026 0.00124 0.00132 0.0316 0.042 0.0425 0.0433 0.000106 0.000108 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 500 83719.997 0.005 0.00162 0.0137 0.0461 0.0359 0.048 0.112 0.14 0.000281 0.00035 ! Validation 500 83719.997 0.005 0.0018 0.00638 0.0424 0.0373 0.0506 0.0716 0.0953 0.000179 0.000238 Wall time: 83719.99782425212
! Best model 500 0.042
! Stop training: max epochs
Wall time: 83720.02658369439
Cumulative wall time: 83720.02658369439
Using device: cuda
Please note that _all_ machine learning models running on CUDA hardware are generally somewhat nondeterministic and that this can manifest in small, generally unimportant variation in the final test errors.
Loading model... loaded model
Loading dataset... Processing dataset... Done!
Loaded dataset specified in test_config.yaml. Using all frames from the specified test dataset, yielding a test set size of 500 frames.
Starting...
--- Final result: ---
f_mae = 0.038054
f_rmse = 0.050578
e_mae = 0.102744
e_rmse = 0.129242
e/N_mae = 0.000257
e/N_rmse = 0.000323
Train end time: 2024-12-09_09:54:26
Training duration: 23h 19m 49s
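
A note on the two timings reported above: the cumulative wall time of 83720.03 s covers the training loop only and converts to about 23 h 15 m 20 s, while the training duration of 23h 19m 49s is measured end to end and so also spans the work outside the loop, such as the test evaluation shown above (that split is an interpretation of the log, not stated by it). The conversion can be checked directly:

from datetime import timedelta

loop = timedelta(seconds=83720.02658369439)          # cumulative wall time of the training loop
total = timedelta(hours=23, minutes=19, seconds=49)  # reported end-to-end training duration
print(loop)          # 23:15:20.026584
print(total - loop)  # 0:04:28.973416, assumed to be setup plus the final test evaluation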
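
For reference, the quantities in the final result block are mean absolute and root-mean-square errors taken over force components (f_*), over total energies per structure (e_*), and over energies divided by the number of atoms (e/N_*); the factor of roughly 400 between e_mae and e/N_mae suggests structures of about 400 atoms. A minimal NumPy sketch of these reductions on synthetic arrays, purely for illustration, with the exact reduction conventions of the training code treated as an assumption:

import numpy as np

def mae(pred, ref):
    # mean absolute error over all array elements
    return float(np.mean(np.abs(pred - ref)))

def rmse(pred, ref):
    # root-mean-square error over all array elements
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

# Synthetic stand-ins: 2 structures of 400 atoms each (illustrative only, not data from this run).
rng = np.random.default_rng(0)
n_atoms = np.array([400, 400])
forces_ref = rng.normal(size=(800, 3))
forces_pred = forces_ref + 0.04 * rng.normal(size=(800, 3))
energy_ref = rng.normal(size=2)
energy_pred = energy_ref + 0.1 * rng.normal(size=2)

print("f_mae   ", mae(forces_pred, forces_ref))     # errors over force components
print("f_rmse  ", rmse(forces_pred, forces_ref))
print("e_mae   ", mae(energy_pred, energy_ref))     # errors over total energies
print("e_rmse  ", rmse(energy_pred, energy_ref))
print("e/N_mae ", mae(energy_pred / n_atoms, energy_ref / n_atoms))   # errors over per-atom energies
print("e/N_rmse", rmse(energy_pred / n_atoms, energy_ref / n_atoms))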
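
The "! Best model <epoch> <loss>" lines mark epochs whose validation loss improved on the best value seen so far, ending at 0.042 in epoch 500. A schematic of that bookkeeping, not the trainer's actual implementation, seeded with values read off the records above:

import math

class BestModelTracker:
    """Track the lowest validation loss and report when a new best is reached."""

    def __init__(self):
        self.best_loss = math.inf
        self.best_epoch = None

    def update(self, epoch, val_loss):
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.best_epoch = epoch
            return True  # caller would save a checkpoint here
        return False

tracker = BestModelTracker()
tracker.best_loss, tracker.best_epoch = 0.0427, 445  # best validation loss earlier in the log

# Validation losses of the last three epochs, read off the records above.
for epoch, val_loss in [(498, 0.0459), (499, 0.0523), (500, 0.0424)]:
    if tracker.update(epoch, val_loss):
        print(f"! Best model {epoch} {val_loss:.3f}")  # prints only for epoch 500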
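
Because every end-of-epoch summary record follows the fixed layout "! Validation <epoch> <wall> <LR> <loss_f> <loss_e> <loss> ...", the validation curve can be pulled back out of the raw log with a short script. A sketch, assuming the output above has been saved to a plain-text file; the filename is illustrative:

import re

NUM = r"[0-9][0-9.eE+-]*"
# Captures epoch, loss_f, loss_e and loss from records such as
#   ! Validation 500 83719.997 0.005 0.0018 0.00638 0.0424 ...
RECORD = re.compile(
    rf"!\s+Validation\s+(\d+)\s+{NUM}\s+{NUM}\s+({NUM})\s+({NUM})\s+({NUM})"
)

with open("training.log") as fh:  # illustrative path to the saved log
    text = fh.read()

epochs, losses = [], []
for m in RECORD.finditer(text):
    epochs.append(int(m.group(1)))
    losses.append(float(m.group(4)))  # the 'loss' column

best = min(zip(losses, epochs))
print(f"lowest validation loss {best[0]:.4f} at epoch {best[1]}")  # for this run: 0.0424 at epoch 500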