Train start time: 2024-12-07_06:57:52
Torch device: cuda
Processing dataset...
Loaded data: Batch(atomic_numbers=[3584000, 1], batch=[3584000], cell=[8000, 3, 3], edge_cell_shift=[138811224, 3], edge_index=[2, 138811224], forces=[3584000, 3], pbc=[8000, 3], pos=[3584000, 3], ptr=[8001], total_energy=[8000, 1])
processed data size: ~5514.67 MB
Cached processed data to disk
Done!
Successfully loaded the data set of type ASEDataset(8000)...
Replace string dataset_per_atom_total_energy_mean to -346.8895845496029
Atomic outputs are scaled by: [H, C, N, O, Zn: None], shifted by [H, C, N, O, Zn: -346.889585].
Replace string dataset_forces_rms to 1.2194973071018034
Initially outputs are globally scaled by: 1.2194973071018034, total_energy are globally shifted by None.
Successfully built the network...
Number of weights: 363624
Number of trainable weights: 363624
! Starting training ...
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
0 100 54.2 0.963 35 0.885 1.2 7.21 7.21 0.0161 0.0161
Initialization
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Initial Validation 0 10.375 0.005 1.01 15.3 35.5 0.908 1.22 4.11 4.77 0.00918 0.0106
Wall time: 10.375805924180895
! Best model 0 35.461
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
1 100 26.6 1.01 6.5 0.903 1.22 2.51 3.11 0.00561 0.00694
1 118 31.2 0.977 11.7 0.894 1.21 2.96 4.17 0.00662 0.00931
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
1 100 40.1 0.958 21 0.881 1.19 5.57 5.58 0.0124 0.0125
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 1 263.072 0.005 0.998 9.13 29.1 0.902 1.22 2.94 3.68 0.00655 0.00822
! Validation 1 263.072 0.005 1 12.6 32.7 0.905 1.22 3.45 4.33 0.0077 0.00966
Wall time: 263.07265909807757
! Best model 1 32.690
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
2 100 18.1 0.842 1.26 0.825 1.12 1.05 1.37 0.00234 0.00305
2 118 18.2 0.784 2.55 0.796 1.08 1.54 1.95 0.00344 0.00435
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
2 100 15 0.694 1.15 0.746 1.02 1.23 1.31 0.00273 0.00292
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 2 472.379 0.005 0.928 3.87 22.4 0.865 1.18 1.86 2.4 0.00415 0.00536
! Validation 2 472.379 0.005 0.745 3.69 18.6 0.779 1.05 1.8 2.34 0.00401 0.00523
Wall time: 472.37949198205024
! Best model 2 18.594
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
3 100 8.21 0.336 1.48 0.527 0.707 1.21 1.48 0.0027 0.00331
3 118 7.35 0.316 1.03 0.508 0.685 1.06 1.24 0.00236 0.00277
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
3 100 19.5 0.25 14.5 0.457 0.609 4.63 4.64 0.0103 0.0103
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 3 681.770 0.005 0.455 1.71 10.8 0.609 0.823 1.27 1.6 0.00284 0.00356
! Validation 3 681.770 0.005 0.309 5.27 11.4 0.505 0.678 2.53 2.8 0.00564 0.00625
Wall time: 681.7714655240998
! Best model 3 11.450
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
4 100 4.97 0.224 0.488 0.431 0.577 0.685 0.852 0.00153 0.0019
4 118 5.36 0.219 0.973 0.429 0.571 1 1.2 0.00223 0.00268
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
4 100 6.48 0.173 3.02 0.384 0.507 2.08 2.12 0.00464 0.00473
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 4 891.041 0.005 0.255 1.17 6.27 0.46 0.616 1.05 1.32 0.00235 0.00294
! Validation 4 891.041 0.005 0.219 0.983 5.36 0.428 0.57 1 1.21 0.00224 0.0027
Wall time: 891.0411073621362
! Best model 4 5.357
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
5 100 3.81 0.165 0.515 0.373 0.495 0.666 0.875 0.00149 0.00195
5 118 4.57 0.162 1.34 0.37 0.49 1.03 1.41 0.0023 0.00315
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
5 100 4.96 0.134 2.28 0.34 0.447 1.78 1.84 0.00397 0.00411
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 5 1100.339 0.005 0.185 0.931 4.64 0.394 0.525 0.936 1.17 0.00209 0.00262
! Validation 5 1100.339 0.005 0.173 0.853 4.32 0.382 0.508 0.937 1.13 0.00209 0.00251
Wall time: 1100.3397929049097
! Best model 5 4.321
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
6 100 3.48 0.144 0.61 0.348 0.462 0.807 0.952 0.0018 0.00213
6 118 3.13 0.139 0.336 0.342 0.455 0.598 0.707 0.00133 0.00158
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
6 100 2.39 0.111 0.172 0.31 0.406 0.493 0.506 0.0011 0.00113
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 6 1309.789 0.005 0.152 0.733 3.76 0.357 0.475 0.834 1.05 0.00186 0.00234
! Validation 6 1309.789 0.005 0.144 1.41 4.29 0.35 0.463 1.19 1.45 0.00266 0.00323
Wall time: 1309.789140176028
! Best model 6 4.294
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
7 100 3.75 0.124 1.26 0.325 0.43 1.12 1.37 0.00251 0.00306
7 118 2.58 0.121 0.172 0.319 0.423 0.401 0.505 0.000895 0.00113
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
7 100 2.25 0.0988 0.274 0.294 0.383 0.448 0.638 0.001 0.00142
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 7 1522.207 0.005 0.131 0.701 3.32 0.333 0.441 0.814 1.02 0.00182 0.00229
! Validation 7 1522.207 0.005 0.128 0.653 3.21 0.33 0.436 0.796 0.985 0.00178 0.0022
Wall time: 1522.2077848417684
! Best model 7 3.212
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
8 100 2.65 0.112 0.407 0.308 0.408 0.637 0.778 0.00142 0.00174
8 118 2.59 0.115 0.302 0.312 0.413 0.533 0.67 0.00119 0.00149
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
8 100 1.97 0.0895 0.175 0.28 0.365 0.458 0.51 0.00102 0.00114
Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 8 1731.366 0.005 0.117 0.648 2.99 0.315 0.417 0.793 0.983 0.00177 0.00219
! Validation 8 1731.366 0.005 0.116 0.781 3.11 0.315 0.416 0.879 1.08 0.00196 0.00241
Wall time: 1731.366186285857
!
Best model 8 3.107 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 9 100 2.34 0.104 0.269 0.296 0.393 0.515 0.633 0.00115 0.00141 9 118 3.13 0.101 1.11 0.293 0.388 1.19 1.28 0.00265 0.00287 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 9 100 2.43 0.0836 0.755 0.271 0.353 0.924 1.06 0.00206 0.00236 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 9 1940.398 0.005 0.109 0.731 2.91 0.304 0.403 0.831 1.04 0.00186 0.00232 ! Validation 9 1940.398 0.005 0.109 0.573 2.74 0.304 0.402 0.753 0.923 0.00168 0.00206 Wall time: 1940.398252021987 ! Best model 9 2.744 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 10 100 2.37 0.098 0.415 0.288 0.382 0.609 0.785 0.00136 0.00175 10 118 1.96 0.0911 0.137 0.278 0.368 0.376 0.452 0.000839 0.00101 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 10 100 2.15 0.0777 0.594 0.261 0.34 0.766 0.94 0.00171 0.0021 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 10 2149.649 0.005 0.101 0.535 2.56 0.293 0.388 0.713 0.894 0.00159 0.002 ! Validation 10 2149.649 0.005 0.101 0.664 2.69 0.294 0.388 0.813 0.994 0.00182 0.00222 Wall time: 2149.649971514009 ! Best model 10 2.687 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 11 100 2.89 0.0914 1.06 0.28 0.369 1.09 1.25 0.00243 0.0028 11 118 2.21 0.09 0.408 0.278 0.366 0.61 0.778 0.00136 0.00174 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 11 100 1.72 0.0736 0.251 0.254 0.331 0.444 0.61 0.000992 0.00136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 11 2358.894 0.005 0.0956 0.637 2.55 0.285 0.377 0.783 0.975 0.00175 0.00218 ! Validation 11 2358.894 0.005 0.0961 0.463 2.38 0.287 0.378 0.662 0.83 0.00148 0.00185 Wall time: 2358.8943575117737 ! Best model 11 2.384 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 12 100 2.21 0.0903 0.407 0.277 0.367 0.626 0.778 0.0014 0.00174 12 118 2.31 0.0905 0.5 0.277 0.367 0.727 0.862 0.00162 0.00192 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 12 100 1.59 0.07 0.188 0.248 0.323 0.495 0.528 0.0011 0.00118 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 12 2570.079 0.005 0.091 0.639 2.46 0.278 0.368 0.777 0.976 0.00174 0.00218 ! Validation 12 2570.079 0.005 0.0922 0.478 2.32 0.281 0.37 0.672 0.843 0.0015 0.00188 Wall time: 2570.0802302882075 ! Best model 12 2.321 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 13 100 2.08 0.0869 0.337 0.272 0.36 0.561 0.708 0.00125 0.00158 13 118 1.99 0.0855 0.275 0.271 0.357 0.554 0.639 0.00124 0.00143 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 13 100 2.05 0.0666 0.715 0.242 0.315 0.889 1.03 0.00198 0.0023 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 13 2779.391 0.005 0.0873 0.541 2.29 0.272 0.36 0.712 0.899 0.00159 0.00201 ! 
Validation 13 2779.391 0.005 0.0876 1.51 3.26 0.274 0.361 1.32 1.5 0.00295 0.00334 Wall time: 2779.3912941971794 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 14 100 1.89 0.0798 0.29 0.262 0.345 0.511 0.656 0.00114 0.00146 14 118 2.46 0.081 0.845 0.262 0.347 0.898 1.12 0.002 0.0025 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 14 100 1.48 0.0636 0.207 0.236 0.307 0.446 0.555 0.000994 0.00124 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 14 2988.756 0.005 0.0832 0.537 2.2 0.266 0.352 0.709 0.892 0.00158 0.00199 ! Validation 14 2988.756 0.005 0.084 0.4 2.08 0.268 0.353 0.629 0.771 0.0014 0.00172 Wall time: 2988.7568002757616 ! Best model 14 2.081 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 15 100 1.98 0.0764 0.455 0.255 0.337 0.674 0.823 0.00151 0.00184 15 118 2.08 0.085 0.383 0.27 0.355 0.452 0.755 0.00101 0.00168 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 15 100 1.39 0.0599 0.187 0.228 0.299 0.434 0.527 0.000969 0.00118 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 15 3203.357 0.005 0.0787 0.43 2 0.258 0.342 0.635 0.8 0.00142 0.00179 ! Validation 15 3203.357 0.005 0.0801 0.387 1.99 0.261 0.345 0.617 0.759 0.00138 0.00169 Wall time: 3203.3579135579057 ! Best model 15 1.989 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 16 100 2.83 0.0761 1.3 0.254 0.337 1.28 1.39 0.00285 0.00311 16 118 1.95 0.0759 0.432 0.254 0.336 0.714 0.801 0.00159 0.00179 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 16 100 1.58 0.0583 0.419 0.225 0.294 0.602 0.789 0.00134 0.00176 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 16 3413.072 0.005 0.0763 0.635 2.16 0.254 0.337 0.787 0.973 0.00176 0.00217 ! Validation 16 3413.072 0.005 0.0779 0.38 1.94 0.258 0.34 0.605 0.751 0.00135 0.00168 Wall time: 3413.0721948151477 ! Best model 16 1.938 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 17 100 3.06 0.0739 1.58 0.25 0.331 1.42 1.53 0.00318 0.00342 17 118 2.18 0.0707 0.77 0.245 0.324 0.912 1.07 0.00203 0.00239 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 17 100 2.13 0.056 1.01 0.22 0.289 1.12 1.22 0.0025 0.00273 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 17 3622.497 0.005 0.0732 0.505 1.97 0.249 0.33 0.686 0.865 0.00153 0.00193 ! Validation 17 3622.497 0.005 0.0751 0.569 2.07 0.253 0.334 0.756 0.92 0.00169 0.00205 Wall time: 3622.497138218954 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 18 100 1.66 0.0692 0.273 0.243 0.321 0.503 0.637 0.00112 0.00142 18 118 1.64 0.0723 0.193 0.246 0.328 0.459 0.536 0.00102 0.0012 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 18 100 1.87 0.053 0.805 0.214 0.281 0.976 1.09 0.00218 0.00244 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 18 3831.825 0.005 0.0704 0.391 1.8 0.244 0.324 0.61 0.764 0.00136 0.0017 ! Validation 18 3831.825 0.005 0.0715 0.507 1.94 0.247 0.326 0.703 0.868 0.00157 0.00194 Wall time: 3831.825477075763 ! 
Best model 18 1.937 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 19 100 1.8 0.0683 0.431 0.24 0.319 0.66 0.8 0.00147 0.00179 19 118 1.67 0.0633 0.401 0.232 0.307 0.698 0.772 0.00156 0.00172 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 19 100 1.19 0.0508 0.177 0.21 0.275 0.439 0.513 0.000979 0.00115 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 19 4041.117 0.005 0.0677 0.479 1.83 0.24 0.317 0.678 0.845 0.00151 0.00189 ! Validation 19 4041.117 0.005 0.0687 0.37 1.74 0.242 0.32 0.603 0.741 0.00135 0.00165 Wall time: 4041.1176413367502 ! Best model 19 1.744 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 20 100 1.9 0.0636 0.629 0.232 0.308 0.807 0.967 0.0018 0.00216 20 118 1.46 0.0603 0.256 0.227 0.299 0.509 0.617 0.00114 0.00138 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 20 100 1.18 0.0489 0.202 0.206 0.27 0.405 0.548 0.000904 0.00122 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 20 4250.398 0.005 0.0653 0.487 1.79 0.235 0.312 0.687 0.852 0.00153 0.0019 ! Validation 20 4250.398 0.005 0.0667 0.347 1.68 0.238 0.315 0.588 0.718 0.00131 0.0016 Wall time: 4250.398697266821 ! Best model 20 1.680 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 21 100 1.85 0.0612 0.626 0.228 0.302 0.891 0.965 0.00199 0.00215 21 118 3 0.0675 1.65 0.239 0.317 1.26 1.57 0.00282 0.0035 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 21 100 3.83 0.0469 2.89 0.201 0.264 2.02 2.07 0.0045 0.00463 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 21 4459.658 0.005 0.0629 0.446 1.7 0.231 0.306 0.647 0.807 0.00144 0.0018 ! Validation 21 4459.658 0.005 0.0639 2 3.28 0.233 0.308 1.59 1.72 0.00355 0.00385 Wall time: 4459.658996468876 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 22 100 2.01 0.0612 0.782 0.228 0.302 0.964 1.08 0.00215 0.00241 22 118 1.33 0.0594 0.143 0.225 0.297 0.382 0.462 0.000853 0.00103 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 22 100 1.12 0.0458 0.205 0.199 0.261 0.506 0.553 0.00113 0.00123 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 22 4668.957 0.005 0.0612 0.467 1.69 0.228 0.302 0.662 0.836 0.00148 0.00187 ! Validation 22 4668.957 0.005 0.062 0.578 1.82 0.23 0.304 0.763 0.927 0.0017 0.00207 Wall time: 4668.957670364995 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 23 100 1.3 0.0567 0.165 0.219 0.29 0.392 0.496 0.000874 0.00111 23 118 1.27 0.0559 0.149 0.219 0.288 0.429 0.47 0.000957 0.00105 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 23 100 1.14 0.0438 0.262 0.195 0.255 0.439 0.624 0.000979 0.00139 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 23 4878.142 0.005 0.059 0.438 1.62 0.224 0.296 0.65 0.809 0.00145 0.00181 ! Validation 23 4878.142 0.005 0.0598 0.287 1.48 0.226 0.298 0.526 0.653 0.00117 0.00146 Wall time: 4878.142348641064 ! 
Best model 23 1.483 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 24 100 2.86 0.0562 1.74 0.219 0.289 1.5 1.61 0.00335 0.00359 24 118 1.26 0.0503 0.254 0.207 0.274 0.432 0.615 0.000964 0.00137 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 24 100 1.17 0.0429 0.314 0.193 0.253 0.586 0.683 0.00131 0.00152 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 24 5087.322 0.005 0.0567 0.441 1.57 0.219 0.29 0.647 0.811 0.00145 0.00181 ! Validation 24 5087.322 0.005 0.0582 0.707 1.87 0.223 0.294 0.857 1.03 0.00191 0.00229 Wall time: 5087.322783919983 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 25 100 1.92 0.0554 0.808 0.217 0.287 0.967 1.1 0.00216 0.00245 25 118 1.32 0.0533 0.253 0.215 0.282 0.545 0.613 0.00122 0.00137 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 25 100 0.992 0.042 0.152 0.191 0.25 0.462 0.476 0.00103 0.00106 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 25 5296.493 0.005 0.0558 0.568 1.68 0.218 0.288 0.742 0.921 0.00166 0.00205 ! Validation 25 5296.493 0.005 0.0571 0.423 1.56 0.221 0.291 0.644 0.793 0.00144 0.00177 Wall time: 5296.493611579761 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 26 100 1.21 0.0514 0.181 0.209 0.276 0.422 0.519 0.000943 0.00116 26 118 1.24 0.0518 0.204 0.209 0.278 0.461 0.55 0.00103 0.00123 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 26 100 1 0.0391 0.218 0.184 0.241 0.394 0.57 0.00088 0.00127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 26 5505.652 0.005 0.0538 0.396 1.47 0.214 0.283 0.614 0.769 0.00137 0.00172 ! Validation 26 5505.652 0.005 0.0536 0.29 1.36 0.214 0.282 0.54 0.657 0.00121 0.00147 Wall time: 5505.652771664783 ! Best model 26 1.362 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 27 100 1.07 0.049 0.0889 0.205 0.27 0.294 0.364 0.000656 0.000812 27 118 1.25 0.0492 0.266 0.205 0.27 0.507 0.629 0.00113 0.0014 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 27 100 0.894 0.0368 0.158 0.179 0.234 0.349 0.485 0.000779 0.00108 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 27 5714.918 0.005 0.0505 0.388 1.4 0.208 0.274 0.611 0.76 0.00136 0.0017 ! Validation 27 5714.918 0.005 0.0508 0.333 1.35 0.209 0.275 0.586 0.704 0.00131 0.00157 Wall time: 5714.918489006814 ! Best model 27 1.348 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 28 100 1.17 0.047 0.228 0.201 0.264 0.48 0.582 0.00107 0.0013 28 118 1.85 0.0472 0.911 0.202 0.265 1.08 1.16 0.00241 0.0026 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 28 100 1.16 0.0341 0.481 0.173 0.225 0.75 0.846 0.00168 0.00189 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 28 5924.045 0.005 0.0473 0.403 1.35 0.201 0.265 0.62 0.771 0.00138 0.00172 ! Validation 28 5924.045 0.005 0.0476 0.297 1.25 0.202 0.266 0.545 0.665 0.00122 0.00148 Wall time: 5924.04603214981 ! 
Best model 28 1.250 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 29 100 1.49 0.0446 0.598 0.195 0.258 0.804 0.943 0.00179 0.00211 29 118 0.99 0.04 0.19 0.185 0.244 0.447 0.531 0.000998 0.00119 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 29 100 0.796 0.0325 0.145 0.17 0.22 0.401 0.464 0.000896 0.00104 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 29 6134.378 0.005 0.0452 0.568 1.47 0.197 0.259 0.759 0.921 0.0017 0.00206 ! Validation 29 6134.378 0.005 0.0455 0.707 1.62 0.198 0.26 0.866 1.03 0.00193 0.00229 Wall time: 6134.378545844927 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 30 100 1.03 0.0409 0.214 0.188 0.247 0.469 0.564 0.00105 0.00126 30 118 1.57 0.0393 0.788 0.185 0.242 1.02 1.08 0.00228 0.00242 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 30 100 0.866 0.03 0.267 0.164 0.211 0.516 0.63 0.00115 0.00141 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 30 6343.347 0.005 0.0421 0.364 1.21 0.19 0.25 0.599 0.733 0.00134 0.00164 ! Validation 30 6343.347 0.005 0.0432 0.261 1.13 0.193 0.253 0.516 0.623 0.00115 0.00139 Wall time: 6343.347963700071 ! Best model 30 1.125 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 31 100 1.04 0.0398 0.241 0.185 0.243 0.507 0.598 0.00113 0.00134 31 118 1 0.0401 0.199 0.185 0.244 0.477 0.544 0.00106 0.00121 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 31 100 0.75 0.0284 0.181 0.159 0.206 0.408 0.519 0.000911 0.00116 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 31 6552.425 0.005 0.0405 0.374 1.18 0.186 0.245 0.593 0.747 0.00132 0.00167 ! Validation 31 6552.425 0.005 0.0414 0.242 1.07 0.188 0.248 0.498 0.6 0.00111 0.00134 Wall time: 6552.424999026116 ! Best model 31 1.070 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 32 100 1.02 0.0389 0.238 0.182 0.24 0.49 0.594 0.00109 0.00133 32 118 0.874 0.0362 0.149 0.177 0.232 0.385 0.471 0.00086 0.00105 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 32 100 0.645 0.0264 0.116 0.154 0.198 0.28 0.415 0.000625 0.000927 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 32 6761.408 0.005 0.0383 0.269 1.03 0.181 0.239 0.503 0.634 0.00112 0.00141 ! Validation 32 6761.408 0.005 0.0391 0.223 1.01 0.183 0.241 0.475 0.576 0.00106 0.00129 Wall time: 6761.408804344945 ! Best model 32 1.006 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 33 100 0.881 0.036 0.161 0.176 0.231 0.365 0.49 0.000815 0.00109 33 118 0.891 0.0368 0.155 0.177 0.234 0.384 0.48 0.000858 0.00107 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 33 100 0.665 0.0256 0.153 0.151 0.195 0.355 0.476 0.000793 0.00106 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 33 6975.063 0.005 0.0372 0.395 1.14 0.178 0.235 0.622 0.768 0.00139 0.00171 ! Validation 33 6975.063 0.005 0.0382 0.218 0.981 0.181 0.238 0.475 0.569 0.00106 0.00127 Wall time: 6975.0636182827875 ! 
Best model 33 0.981 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 34 100 0.876 0.0359 0.157 0.175 0.231 0.41 0.484 0.000916 0.00108 34 118 1.12 0.0382 0.354 0.181 0.238 0.662 0.726 0.00148 0.00162 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 34 100 0.874 0.0249 0.376 0.149 0.192 0.682 0.748 0.00152 0.00167 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 34 7183.997 0.005 0.0357 0.349 1.06 0.175 0.23 0.591 0.72 0.00132 0.00161 ! Validation 34 7183.997 0.005 0.0374 0.246 0.994 0.178 0.236 0.505 0.605 0.00113 0.00135 Wall time: 7183.997127358802 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 35 100 0.82 0.0329 0.163 0.167 0.221 0.417 0.492 0.000931 0.0011 35 118 0.787 0.0331 0.125 0.169 0.222 0.325 0.432 0.000725 0.000964 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 35 100 0.795 0.0235 0.325 0.145 0.187 0.633 0.696 0.00141 0.00155 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 35 7393.316 0.005 0.0343 0.187 0.872 0.171 0.226 0.423 0.528 0.000945 0.00118 ! Validation 35 7393.316 0.005 0.0355 0.243 0.952 0.174 0.23 0.492 0.601 0.0011 0.00134 Wall time: 7393.316360129975 ! Best model 35 0.952 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 36 100 0.786 0.0311 0.164 0.164 0.215 0.403 0.494 0.000899 0.0011 36 118 1.41 0.0338 0.736 0.171 0.224 1 1.05 0.00224 0.00234 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 36 100 1.21 0.0228 0.749 0.143 0.184 1.02 1.06 0.00227 0.00236 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 36 7602.640 0.005 0.0336 0.359 1.03 0.169 0.223 0.585 0.728 0.00131 0.00163 ! Validation 36 7602.640 0.005 0.0347 1.5 2.2 0.172 0.227 1.39 1.5 0.0031 0.00334 Wall time: 7602.640425642952 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 37 100 0.814 0.0327 0.16 0.167 0.221 0.387 0.488 0.000864 0.00109 37 118 1.28 0.037 0.536 0.177 0.235 0.791 0.893 0.00177 0.00199 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 37 100 0.55 0.023 0.0891 0.143 0.185 0.256 0.364 0.000572 0.000813 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 37 7818.380 0.005 0.0333 0.47 1.14 0.168 0.223 0.68 0.836 0.00152 0.00187 ! Validation 37 7818.380 0.005 0.0346 0.283 0.974 0.171 0.227 0.529 0.648 0.00118 0.00145 Wall time: 7818.380910356063 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 38 100 1.04 0.0337 0.37 0.169 0.224 0.581 0.742 0.0013 0.00166 38 118 1.03 0.0349 0.329 0.173 0.228 0.559 0.7 0.00125 0.00156 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 38 100 1.17 0.0221 0.725 0.14 0.181 0.999 1.04 0.00223 0.00232 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 38 8027.601 0.005 0.0324 0.351 1 0.166 0.22 0.596 0.723 0.00133 0.00161 ! 
Validation 38 8027.601 0.005 0.0336 0.544 1.22 0.169 0.223 0.756 0.9 0.00169 0.00201 Wall time: 8027.601571174804 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 39 100 1.18 0.0311 0.555 0.162 0.215 0.81 0.909 0.00181 0.00203 39 118 0.914 0.0356 0.202 0.171 0.23 0.493 0.549 0.0011 0.00122 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 39 100 0.529 0.0215 0.0979 0.138 0.179 0.278 0.382 0.000622 0.000852 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 39 8236.803 0.005 0.0313 0.278 0.904 0.163 0.216 0.519 0.644 0.00116 0.00144 ! Validation 39 8236.803 0.005 0.0327 0.21 0.865 0.167 0.221 0.467 0.559 0.00104 0.00125 Wall time: 8236.803055778146 ! Best model 39 0.865 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 40 100 0.781 0.0296 0.19 0.158 0.21 0.426 0.531 0.00095 0.00119 40 118 1.53 0.0318 0.891 0.164 0.218 1.08 1.15 0.00241 0.00257 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 40 100 0.569 0.0212 0.146 0.137 0.177 0.373 0.465 0.000832 0.00104 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 40 8446.932 0.005 0.0305 0.285 0.894 0.161 0.213 0.507 0.646 0.00113 0.00144 ! Validation 40 8446.932 0.005 0.0321 0.187 0.829 0.165 0.219 0.442 0.527 0.000986 0.00118 Wall time: 8446.932916813996 ! Best model 40 0.829 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 41 100 1.09 0.0285 0.521 0.155 0.206 0.784 0.88 0.00175 0.00197 41 118 0.854 0.0288 0.279 0.155 0.207 0.615 0.644 0.00137 0.00144 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 41 100 0.969 0.0205 0.56 0.135 0.174 0.875 0.912 0.00195 0.00204 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 41 8656.101 0.005 0.0294 0.236 0.825 0.158 0.209 0.476 0.593 0.00106 0.00132 ! Validation 41 8656.101 0.005 0.0311 0.312 0.934 0.162 0.215 0.556 0.681 0.00124 0.00152 Wall time: 8656.101549061015 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 42 100 0.727 0.0284 0.159 0.156 0.206 0.396 0.486 0.000884 0.00108 42 118 0.702 0.0306 0.0907 0.16 0.213 0.322 0.367 0.000719 0.00082 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 42 100 0.441 0.0194 0.0522 0.131 0.17 0.252 0.279 0.000563 0.000622 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 42 8865.215 0.005 0.0288 0.249 0.826 0.156 0.207 0.486 0.61 0.00109 0.00136 ! Validation 42 8865.215 0.005 0.0298 0.291 0.888 0.159 0.211 0.536 0.658 0.0012 0.00147 Wall time: 8865.215500839055 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 43 100 0.766 0.0273 0.22 0.153 0.202 0.48 0.572 0.00107 0.00128 43 118 1.06 0.0288 0.483 0.157 0.207 0.794 0.847 0.00177 0.00189 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 43 100 0.782 0.0192 0.398 0.13 0.169 0.723 0.77 0.00161 0.00172 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 43 9074.218 0.005 0.028 0.288 0.848 0.154 0.204 0.526 0.653 0.00117 0.00146 ! 
Validation 43 9074.218 0.005 0.0293 0.941 1.53 0.158 0.209 1.07 1.18 0.00238 0.00264 Wall time: 9074.218057456892 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 44 100 0.892 0.0295 0.302 0.157 0.209 0.573 0.67 0.00128 0.0015 44 118 0.656 0.0266 0.124 0.151 0.199 0.328 0.429 0.000732 0.000959 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 44 100 0.453 0.0193 0.0671 0.13 0.169 0.276 0.316 0.000617 0.000705 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 44 9283.220 0.005 0.0277 0.387 0.942 0.153 0.203 0.631 0.76 0.00141 0.0017 ! Validation 44 9283.220 0.005 0.0295 0.465 1.06 0.158 0.209 0.689 0.832 0.00154 0.00186 Wall time: 9283.220116482116 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 45 100 0.968 0.0283 0.402 0.154 0.205 0.685 0.773 0.00153 0.00173 45 118 0.659 0.0229 0.202 0.141 0.184 0.496 0.548 0.00111 0.00122 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 45 100 0.571 0.0183 0.206 0.127 0.165 0.489 0.553 0.00109 0.00123 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 45 9492.220 0.005 0.027 0.248 0.787 0.151 0.2 0.486 0.607 0.00109 0.00136 ! Validation 45 9492.220 0.005 0.0282 0.188 0.752 0.155 0.205 0.435 0.529 0.000971 0.00118 Wall time: 9492.220573890023 ! Best model 45 0.752 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 46 100 0.644 0.0268 0.108 0.151 0.2 0.339 0.4 0.000758 0.000893 46 118 1.24 0.0254 0.729 0.147 0.194 1 1.04 0.00224 0.00232 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 46 100 2.63 0.0177 2.28 0.124 0.162 1.82 1.84 0.00407 0.00411 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 46 9701.314 0.005 0.0263 0.232 0.759 0.149 0.198 0.469 0.583 0.00105 0.0013 ! Validation 46 9701.314 0.005 0.0274 1.61 2.16 0.152 0.202 1.46 1.55 0.00325 0.00345 Wall time: 9701.314923074096 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 47 100 1.21 0.0257 0.698 0.147 0.195 0.961 1.02 0.00214 0.00227 47 118 0.639 0.0238 0.163 0.142 0.188 0.464 0.492 0.00104 0.0011 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 47 100 1.14 0.0177 0.786 0.125 0.162 1.05 1.08 0.00235 0.00241 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 47 9912.611 0.005 0.0262 0.327 0.851 0.149 0.198 0.549 0.698 0.00123 0.00156 ! Validation 47 9912.611 0.005 0.0273 0.586 1.13 0.152 0.201 0.807 0.933 0.0018 0.00208 Wall time: 9912.61142325215 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 48 100 0.599 0.025 0.0994 0.145 0.193 0.296 0.384 0.000662 0.000858 48 118 0.556 0.0246 0.0632 0.145 0.191 0.233 0.307 0.00052 0.000684 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 48 100 0.392 0.0172 0.0478 0.122 0.16 0.198 0.267 0.000443 0.000595 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 48 10123.026 0.005 0.0256 0.284 0.797 0.147 0.195 0.527 0.651 0.00118 0.00145 ! Validation 48 10123.026 0.005 0.0266 0.204 0.736 0.15 0.199 0.456 0.551 0.00102 0.00123 Wall time: 10123.025980414823 ! 
Best model 48 0.736 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 49 100 0.912 0.0251 0.411 0.146 0.193 0.703 0.781 0.00157 0.00174 49 118 0.668 0.0245 0.179 0.144 0.191 0.437 0.516 0.000975 0.00115 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 49 100 0.468 0.017 0.128 0.122 0.159 0.373 0.437 0.000832 0.000976 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 49 10332.794 0.005 0.0249 0.254 0.753 0.145 0.193 0.503 0.615 0.00112 0.00137 ! Validation 49 10332.794 0.005 0.0261 0.449 0.97 0.149 0.197 0.69 0.817 0.00154 0.00182 Wall time: 10332.794450447895 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 50 100 0.596 0.0241 0.115 0.143 0.189 0.339 0.413 0.000757 0.000922 50 118 0.608 0.025 0.109 0.145 0.193 0.335 0.402 0.000748 0.000897 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 50 100 0.472 0.0163 0.147 0.119 0.156 0.414 0.468 0.000924 0.00104 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 50 10542.099 0.005 0.0243 0.232 0.718 0.143 0.19 0.47 0.588 0.00105 0.00131 ! Validation 50 10542.099 0.005 0.0253 0.498 1 0.146 0.194 0.732 0.86 0.00163 0.00192 Wall time: 10542.099727476016 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 51 100 0.55 0.0229 0.0913 0.139 0.185 0.294 0.368 0.000656 0.000822 51 118 0.909 0.0218 0.473 0.136 0.18 0.815 0.839 0.00182 0.00187 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 51 100 1 0.0162 0.682 0.119 0.155 0.982 1.01 0.00219 0.00225 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 51 10751.302 0.005 0.0238 0.243 0.718 0.142 0.188 0.484 0.599 0.00108 0.00134 ! Validation 51 10751.302 0.005 0.0248 0.414 0.911 0.145 0.192 0.661 0.785 0.00148 0.00175 Wall time: 10751.302270655055 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 52 100 1.25 0.0222 0.802 0.138 0.182 1.05 1.09 0.00234 0.00244 52 118 0.481 0.0216 0.049 0.136 0.179 0.246 0.27 0.000549 0.000603 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 52 100 0.566 0.0159 0.248 0.117 0.154 0.568 0.607 0.00127 0.00136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 52 10960.724 0.005 0.0233 0.248 0.713 0.14 0.186 0.498 0.609 0.00111 0.00136 ! Validation 52 10960.724 0.005 0.0246 0.766 1.26 0.144 0.191 0.956 1.07 0.00213 0.00238 Wall time: 10960.724281404167 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 53 100 0.861 0.022 0.421 0.137 0.181 0.722 0.792 0.00161 0.00177 53 118 0.976 0.0224 0.528 0.139 0.183 0.845 0.886 0.00189 0.00198 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 53 100 0.441 0.016 0.12 0.118 0.154 0.367 0.423 0.000819 0.000944 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 53 11169.781 0.005 0.0232 0.302 0.766 0.14 0.186 0.534 0.668 0.00119 0.00149 ! 
Validation 53 11169.781 0.005 0.0243 0.53 1.02 0.144 0.19 0.764 0.888 0.00171 0.00198 Wall time: 11169.781858922914 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 54 100 0.543 0.0223 0.0982 0.137 0.182 0.319 0.382 0.000713 0.000853 54 118 0.561 0.0197 0.166 0.13 0.171 0.434 0.497 0.00097 0.00111 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 54 100 0.339 0.0149 0.0411 0.114 0.149 0.186 0.247 0.000416 0.000552 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 54 11378.836 0.005 0.0223 0.167 0.613 0.137 0.182 0.401 0.499 0.000896 0.00111 ! Validation 54 11378.836 0.005 0.0231 0.173 0.635 0.14 0.185 0.415 0.507 0.000926 0.00113 Wall time: 11378.836574892048 ! Best model 54 0.635 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 55 100 0.516 0.0219 0.0775 0.136 0.181 0.269 0.339 0.0006 0.000758 55 118 0.614 0.0215 0.185 0.134 0.179 0.441 0.524 0.000985 0.00117 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 55 100 0.623 0.0147 0.328 0.113 0.148 0.668 0.698 0.00149 0.00156 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 55 11587.994 0.005 0.0221 0.276 0.718 0.137 0.181 0.513 0.641 0.00115 0.00143 ! Validation 55 11587.994 0.005 0.0229 0.747 1.21 0.139 0.185 0.958 1.05 0.00214 0.00235 Wall time: 11587.994241131004 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 56 100 1.21 0.0205 0.8 0.132 0.174 1.05 1.09 0.00234 0.00243 56 118 0.796 0.0196 0.404 0.13 0.171 0.738 0.775 0.00165 0.00173 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 56 100 0.626 0.0149 0.327 0.114 0.149 0.668 0.698 0.00149 0.00156 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 56 11797.195 0.005 0.0217 0.266 0.699 0.135 0.18 0.511 0.627 0.00114 0.0014 ! Validation 56 11797.195 0.005 0.023 0.692 1.15 0.14 0.185 0.922 1.01 0.00206 0.00226 Wall time: 11797.19595713215 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 57 100 0.506 0.0219 0.0684 0.136 0.18 0.27 0.319 0.000602 0.000712 57 118 0.522 0.0214 0.0938 0.135 0.178 0.344 0.374 0.000768 0.000834 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 57 100 0.316 0.014 0.035 0.11 0.145 0.174 0.228 0.000387 0.000509 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 57 12006.391 0.005 0.0213 0.217 0.643 0.134 0.178 0.461 0.569 0.00103 0.00127 ! Validation 57 12006.391 0.005 0.0219 0.153 0.592 0.136 0.18 0.388 0.478 0.000867 0.00107 Wall time: 12006.391347656958 ! Best model 57 0.592 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 58 100 0.609 0.02 0.208 0.13 0.173 0.478 0.557 0.00107 0.00124 58 118 0.619 0.02 0.219 0.131 0.173 0.504 0.57 0.00113 0.00127 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 58 100 1.2 0.014 0.922 0.11 0.144 1.16 1.17 0.00258 0.00261 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 58 12215.630 0.005 0.0211 0.279 0.702 0.134 0.177 0.515 0.645 0.00115 0.00144 ! 
Validation 58 12215.630 0.005 0.0219 0.624 1.06 0.136 0.18 0.861 0.964 0.00192 0.00215 Wall time: 12215.630658486858 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 59 100 0.471 0.0192 0.0883 0.128 0.169 0.299 0.362 0.000667 0.000809 59 118 0.503 0.0206 0.0915 0.132 0.175 0.298 0.369 0.000666 0.000824 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 59 100 0.344 0.0135 0.0734 0.108 0.142 0.276 0.33 0.000616 0.000737 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 59 12424.858 0.005 0.0207 0.216 0.63 0.132 0.175 0.442 0.568 0.000986 0.00127 ! Validation 59 12424.858 0.005 0.0212 0.121 0.546 0.134 0.178 0.356 0.424 0.000795 0.000946 Wall time: 12424.858926347923 ! Best model 59 0.546 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 60 100 0.487 0.0198 0.0912 0.13 0.172 0.301 0.368 0.000672 0.000822 60 118 0.777 0.0176 0.426 0.123 0.162 0.721 0.796 0.00161 0.00178 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 60 100 0.631 0.0133 0.365 0.108 0.141 0.714 0.737 0.00159 0.00165 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 60 12634.181 0.005 0.0199 0.187 0.585 0.13 0.172 0.425 0.524 0.000948 0.00117 ! Validation 60 12634.181 0.005 0.0209 0.191 0.609 0.133 0.176 0.438 0.533 0.000978 0.00119 Wall time: 12634.181516068988 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 61 100 0.552 0.0204 0.143 0.131 0.174 0.373 0.461 0.000831 0.00103 61 118 0.526 0.0203 0.12 0.131 0.174 0.35 0.423 0.00078 0.000944 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 61 100 0.31 0.0135 0.0404 0.108 0.141 0.193 0.245 0.000431 0.000547 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 61 12843.606 0.005 0.0199 0.241 0.638 0.13 0.172 0.476 0.599 0.00106 0.00134 ! Validation 61 12843.606 0.005 0.021 0.145 0.564 0.133 0.177 0.384 0.464 0.000856 0.00104 Wall time: 12843.606096449774 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 62 100 0.808 0.0197 0.414 0.129 0.171 0.683 0.784 0.00152 0.00175 62 118 0.501 0.0188 0.126 0.125 0.167 0.337 0.432 0.000753 0.000965 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 62 100 0.374 0.0131 0.112 0.106 0.139 0.371 0.409 0.000828 0.000913 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 62 13052.868 0.005 0.0198 0.3 0.696 0.129 0.172 0.546 0.669 0.00122 0.00149 ! Validation 62 13052.868 0.005 0.0205 0.118 0.529 0.132 0.175 0.349 0.419 0.000779 0.000936 Wall time: 13052.86825932283 ! Best model 62 0.529 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 63 100 0.841 0.0186 0.469 0.126 0.166 0.79 0.835 0.00176 0.00186 63 118 0.429 0.019 0.049 0.127 0.168 0.233 0.27 0.000519 0.000602 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 63 100 0.274 0.0127 0.0195 0.105 0.138 0.145 0.17 0.000324 0.00038 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 63 13262.139 0.005 0.019 0.148 0.527 0.127 0.168 0.374 0.47 0.000835 0.00105 ! 
Validation 63 13262.139 0.005 0.0199 0.186 0.584 0.13 0.172 0.424 0.526 0.000947 0.00118 Wall time: 13262.139189220965 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 64 100 0.494 0.0183 0.128 0.124 0.165 0.36 0.437 0.000803 0.000975 64 118 0.491 0.019 0.111 0.125 0.168 0.348 0.407 0.000777 0.000908 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 64 100 0.559 0.0125 0.309 0.104 0.136 0.657 0.678 0.00147 0.00151 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 64 13471.401 0.005 0.0189 0.259 0.637 0.126 0.168 0.505 0.621 0.00113 0.00139 ! Validation 64 13471.401 0.005 0.0197 0.175 0.57 0.129 0.171 0.418 0.51 0.000933 0.00114 Wall time: 13471.401259054895 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 65 100 0.61 0.0185 0.24 0.125 0.166 0.527 0.597 0.00118 0.00133 65 118 0.528 0.0208 0.113 0.131 0.176 0.338 0.41 0.000755 0.000916 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 65 100 0.271 0.0122 0.0277 0.103 0.134 0.158 0.203 0.000352 0.000453 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 65 13680.754 0.005 0.0183 0.168 0.535 0.124 0.165 0.406 0.501 0.000906 0.00112 ! Validation 65 13680.754 0.005 0.0193 0.139 0.525 0.128 0.169 0.37 0.455 0.000826 0.00102 Wall time: 13680.755036810879 ! Best model 65 0.525 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 66 100 0.466 0.0179 0.108 0.123 0.163 0.33 0.402 0.000737 0.000896 66 118 0.971 0.0189 0.594 0.126 0.167 0.882 0.94 0.00197 0.0021 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 66 100 1.2 0.0124 0.954 0.104 0.136 1.18 1.19 0.00263 0.00266 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 66 13891.354 0.005 0.0184 0.251 0.619 0.125 0.165 0.495 0.608 0.0011 0.00136 ! Validation 66 13891.354 0.005 0.0193 0.567 0.953 0.128 0.17 0.828 0.918 0.00185 0.00205 Wall time: 13891.35417503398 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 67 100 0.433 0.0182 0.0702 0.124 0.164 0.266 0.323 0.000594 0.000721 67 118 0.411 0.0166 0.0781 0.12 0.157 0.283 0.341 0.000632 0.000761 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 67 100 0.35 0.0118 0.113 0.101 0.133 0.376 0.41 0.00084 0.000916 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 67 14101.280 0.005 0.0179 0.157 0.514 0.123 0.163 0.389 0.484 0.000868 0.00108 ! Validation 67 14101.280 0.005 0.0187 0.388 0.762 0.126 0.167 0.659 0.76 0.00147 0.0017 Wall time: 14101.280821138993 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 68 100 0.463 0.018 0.104 0.122 0.163 0.31 0.394 0.000691 0.00088 68 118 0.432 0.0185 0.0608 0.124 0.166 0.27 0.301 0.000603 0.000671 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 68 100 0.251 0.0115 0.0213 0.1 0.131 0.149 0.178 0.000333 0.000397 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 68 14310.486 0.005 0.0177 0.207 0.56 0.122 0.162 0.434 0.556 0.000968 0.00124 ! 
Validation 68 14310.486 0.005 0.0182 0.22 0.585 0.124 0.165 0.464 0.572 0.00104 0.00128 Wall time: 14310.486760113854 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 69 100 0.522 0.0169 0.184 0.119 0.158 0.445 0.524 0.000993 0.00117 69 118 0.398 0.0166 0.066 0.119 0.157 0.259 0.313 0.000578 0.000699 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 69 100 0.249 0.0113 0.0227 0.0994 0.13 0.157 0.184 0.00035 0.00041 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 69 14519.808 0.005 0.0172 0.183 0.527 0.121 0.16 0.427 0.523 0.000953 0.00117 ! Validation 69 14519.808 0.005 0.0179 0.213 0.571 0.123 0.163 0.454 0.562 0.00101 0.00126 Wall time: 14519.808294000104 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 70 100 0.615 0.0175 0.266 0.122 0.161 0.555 0.629 0.00124 0.0014 70 118 0.425 0.0168 0.0899 0.12 0.158 0.317 0.366 0.000708 0.000816 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 70 100 0.416 0.0117 0.182 0.101 0.132 0.493 0.521 0.0011 0.00116 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 70 14730.219 0.005 0.0174 0.301 0.649 0.121 0.161 0.555 0.671 0.00124 0.0015 ! Validation 70 14730.219 0.005 0.0184 0.519 0.886 0.125 0.165 0.782 0.878 0.00175 0.00196 Wall time: 14730.219792859163 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 71 100 0.53 0.0167 0.196 0.119 0.158 0.471 0.54 0.00105 0.00121 71 118 0.536 0.0171 0.194 0.121 0.159 0.421 0.537 0.00094 0.0012 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 71 100 0.646 0.0111 0.424 0.0984 0.128 0.781 0.795 0.00174 0.00177 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 71 14939.836 0.005 0.0169 0.157 0.494 0.119 0.158 0.387 0.483 0.000863 0.00108 ! Validation 71 14939.836 0.005 0.0175 0.26 0.609 0.122 0.161 0.521 0.621 0.00116 0.00139 Wall time: 14939.836741578765 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 72 100 0.532 0.017 0.193 0.119 0.159 0.464 0.535 0.00104 0.0012 72 118 0.323 0.014 0.0429 0.11 0.144 0.197 0.253 0.00044 0.000564 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 72 100 0.238 0.0107 0.0239 0.0967 0.126 0.147 0.189 0.000328 0.000421 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 72 15148.949 0.005 0.0165 0.158 0.487 0.118 0.157 0.384 0.485 0.000856 0.00108 ! Validation 72 15148.949 0.005 0.0171 0.121 0.464 0.121 0.16 0.345 0.425 0.00077 0.000949 Wall time: 15148.949935191777 ! Best model 72 0.464 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 73 100 0.39 0.0166 0.0583 0.119 0.157 0.235 0.294 0.000524 0.000657 73 118 0.4 0.0161 0.0775 0.117 0.155 0.256 0.34 0.000572 0.000758 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 73 100 0.263 0.0107 0.048 0.0968 0.126 0.223 0.267 0.000497 0.000596 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 73 15359.130 0.005 0.0162 0.167 0.491 0.117 0.155 0.396 0.499 0.000883 0.00111 ! Validation 73 15359.130 0.005 0.017 0.12 0.46 0.12 0.159 0.347 0.423 0.000775 0.000944 Wall time: 15359.1304047429 ! 
Best model 73 0.460 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 74 100 0.682 0.0168 0.346 0.119 0.158 0.669 0.717 0.00149 0.0016 74 118 0.517 0.0196 0.125 0.128 0.171 0.329 0.431 0.000734 0.000962 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 74 100 0.343 0.0112 0.12 0.0995 0.129 0.396 0.422 0.000885 0.000943 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 74 15568.526 0.005 0.0162 0.199 0.522 0.117 0.155 0.432 0.545 0.000965 0.00122 ! Validation 74 15568.526 0.005 0.0174 0.103 0.452 0.122 0.161 0.323 0.391 0.000721 0.000874 Wall time: 15568.52636915585 ! Best model 74 0.452 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 75 100 0.549 0.016 0.228 0.117 0.154 0.498 0.583 0.00111 0.0013 75 118 0.539 0.0179 0.182 0.122 0.163 0.451 0.52 0.00101 0.00116 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 75 100 0.239 0.0108 0.0231 0.0977 0.127 0.147 0.185 0.000329 0.000413 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 75 15777.619 0.005 0.0157 0.155 0.469 0.115 0.153 0.384 0.479 0.000856 0.00107 ! Validation 75 15777.619 0.005 0.0169 0.179 0.517 0.12 0.159 0.411 0.516 0.000918 0.00115 Wall time: 15777.619125628844 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 76 100 0.37 0.0154 0.0615 0.114 0.151 0.249 0.302 0.000556 0.000675 76 118 0.402 0.0149 0.104 0.113 0.149 0.352 0.393 0.000785 0.000877 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 76 100 0.749 0.0103 0.544 0.0947 0.123 0.887 0.899 0.00198 0.00201 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 76 15986.749 0.005 0.0158 0.237 0.553 0.116 0.153 0.484 0.595 0.00108 0.00133 ! Validation 76 15986.749 0.005 0.0164 1.14 1.46 0.118 0.156 1.24 1.3 0.00277 0.0029 Wall time: 15986.749646024778 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 77 100 0.329 0.0146 0.0373 0.112 0.147 0.183 0.236 0.000408 0.000526 77 118 0.566 0.015 0.266 0.113 0.149 0.602 0.628 0.00134 0.0014 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 77 100 0.25 0.0104 0.0417 0.0956 0.124 0.203 0.249 0.000453 0.000556 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 77 16195.843 0.005 0.0155 0.237 0.547 0.115 0.152 0.491 0.593 0.0011 0.00132 ! Validation 77 16195.843 0.005 0.0166 0.204 0.535 0.119 0.157 0.441 0.551 0.000983 0.00123 Wall time: 16195.843050499912 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 78 100 0.326 0.0144 0.0388 0.111 0.146 0.183 0.24 0.000409 0.000536 78 118 0.4 0.0158 0.0845 0.115 0.153 0.296 0.354 0.000661 0.000791 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 78 100 0.371 0.0099 0.173 0.0932 0.121 0.488 0.508 0.00109 0.00113 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 78 16404.906 0.005 0.0151 0.119 0.421 0.113 0.15 0.34 0.421 0.00076 0.00094 ! Validation 78 16404.906 0.005 0.0158 0.128 0.444 0.116 0.153 0.358 0.436 0.000799 0.000972 Wall time: 16404.90637569083 ! 
Best model 78 0.444 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 79 100 0.346 0.0145 0.0554 0.111 0.147 0.234 0.287 0.000523 0.000641 79 118 0.332 0.0134 0.0628 0.108 0.141 0.251 0.305 0.000559 0.000682 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 79 100 0.593 0.00948 0.403 0.0915 0.119 0.763 0.775 0.0017 0.00173 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 79 16616.053 0.005 0.0147 0.138 0.431 0.112 0.148 0.366 0.454 0.000816 0.00101 ! Validation 79 16616.053 0.005 0.0154 0.254 0.561 0.114 0.151 0.521 0.615 0.00116 0.00137 Wall time: 16616.053856892046 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 80 100 0.358 0.0146 0.0669 0.111 0.147 0.251 0.315 0.000559 0.000704 80 118 0.366 0.0165 0.0365 0.116 0.157 0.19 0.233 0.000423 0.00052 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 80 100 0.347 0.00965 0.154 0.0923 0.12 0.457 0.478 0.00102 0.00107 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 80 16824.982 0.005 0.0148 0.22 0.516 0.112 0.148 0.474 0.574 0.00106 0.00128 ! Validation 80 16824.982 0.005 0.0156 0.104 0.415 0.115 0.152 0.322 0.393 0.000719 0.000877 Wall time: 16824.982066377997 ! Best model 80 0.415 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 81 100 0.452 0.0131 0.19 0.106 0.14 0.453 0.531 0.00101 0.00119 81 118 0.364 0.0149 0.0662 0.113 0.149 0.265 0.314 0.000592 0.000701 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 81 100 0.241 0.00965 0.0476 0.0921 0.12 0.229 0.266 0.000511 0.000594 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 81 17034.433 0.005 0.0143 0.124 0.409 0.11 0.146 0.338 0.43 0.000753 0.000959 ! Validation 81 17034.433 0.005 0.0153 0.133 0.44 0.114 0.151 0.358 0.445 0.0008 0.000993 Wall time: 17034.43324744003 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 82 100 0.776 0.015 0.477 0.113 0.149 0.794 0.842 0.00177 0.00188 82 118 0.449 0.0139 0.171 0.109 0.144 0.475 0.504 0.00106 0.00112 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 82 100 0.336 0.0093 0.149 0.0908 0.118 0.446 0.471 0.000995 0.00105 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 82 17243.372 0.005 0.0142 0.195 0.478 0.11 0.145 0.438 0.538 0.000979 0.0012 ! Validation 82 17243.372 0.005 0.0151 0.11 0.412 0.113 0.15 0.332 0.405 0.00074 0.000904 Wall time: 17243.372319524176 ! Best model 82 0.412 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 83 100 0.374 0.0141 0.0916 0.11 0.145 0.293 0.369 0.000654 0.000824 83 118 0.315 0.0136 0.0431 0.109 0.142 0.208 0.253 0.000463 0.000565 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 83 100 0.302 0.00926 0.117 0.0906 0.117 0.396 0.417 0.000884 0.00093 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 83 17452.357 0.005 0.0139 0.14 0.418 0.109 0.144 0.36 0.457 0.000803 0.00102 ! Validation 83 17452.357 0.005 0.0149 0.0985 0.396 0.113 0.149 0.311 0.383 0.000694 0.000854 Wall time: 17452.35733940685 ! 
Best model 83 0.396 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 84 100 0.699 0.0136 0.427 0.108 0.142 0.752 0.797 0.00168 0.00178 84 118 0.362 0.0129 0.104 0.106 0.138 0.3 0.394 0.000669 0.00088 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 84 100 0.268 0.0093 0.0824 0.0911 0.118 0.318 0.35 0.00071 0.000781 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 84 17669.988 0.005 0.0139 0.205 0.483 0.109 0.144 0.45 0.553 0.001 0.00123 ! Validation 84 17669.988 0.005 0.0149 0.0994 0.396 0.113 0.149 0.321 0.384 0.000718 0.000858 Wall time: 17669.988311550114 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 85 100 0.31 0.013 0.0497 0.106 0.139 0.209 0.272 0.000466 0.000607 85 118 0.532 0.0159 0.213 0.116 0.154 0.5 0.563 0.00112 0.00126 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 85 100 0.466 0.00961 0.274 0.0926 0.12 0.625 0.638 0.00139 0.00142 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 85 17879.098 0.005 0.0136 0.15 0.421 0.107 0.142 0.376 0.471 0.000839 0.00105 ! Validation 85 17879.098 0.005 0.015 0.847 1.15 0.113 0.149 1.05 1.12 0.00235 0.00251 Wall time: 17879.09865707811 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 86 100 0.382 0.0133 0.117 0.106 0.14 0.351 0.417 0.000783 0.00093 86 118 0.384 0.0126 0.131 0.104 0.137 0.328 0.442 0.000732 0.000986 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 86 100 0.192 0.00882 0.0158 0.089 0.115 0.12 0.153 0.000268 0.000342 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 86 18088.193 0.005 0.0136 0.181 0.453 0.108 0.142 0.424 0.519 0.000946 0.00116 ! Validation 86 18088.193 0.005 0.0142 0.0993 0.383 0.11 0.145 0.31 0.384 0.000691 0.000858 Wall time: 18088.193861564156 ! Best model 86 0.383 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 87 100 0.576 0.0127 0.321 0.105 0.138 0.63 0.691 0.00141 0.00154 87 118 0.373 0.0143 0.0861 0.112 0.146 0.293 0.358 0.000654 0.000799 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 87 100 0.324 0.00843 0.155 0.0868 0.112 0.463 0.481 0.00103 0.00107 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 87 18297.234 0.005 0.0134 0.178 0.447 0.107 0.141 0.419 0.516 0.000935 0.00115 ! Validation 87 18297.234 0.005 0.0138 0.107 0.383 0.109 0.144 0.326 0.398 0.000728 0.000889 Wall time: 18297.23448834475 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 88 100 0.314 0.0128 0.0576 0.104 0.138 0.23 0.293 0.000513 0.000653 88 118 0.44 0.014 0.16 0.11 0.144 0.433 0.488 0.000967 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 88 100 0.188 0.00845 0.019 0.087 0.112 0.143 0.168 0.000319 0.000375 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 88 18506.431 0.005 0.0127 0.088 0.343 0.104 0.138 0.287 0.361 0.00064 0.000805 ! 
Validation 88 18506.431 0.005 0.0136 0.263 0.536 0.108 0.142 0.521 0.626 0.00116 0.0014 Wall time: 18506.432021239772 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 89 100 0.465 0.0128 0.208 0.105 0.138 0.49 0.557 0.00109 0.00124 89 118 0.25 0.0116 0.0173 0.101 0.132 0.113 0.16 0.000251 0.000358 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 89 100 0.231 0.00822 0.0661 0.0856 0.111 0.288 0.314 0.000643 0.0007 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 89 18717.833 0.005 0.0128 0.168 0.424 0.104 0.138 0.412 0.502 0.00092 0.00112 ! Validation 89 18717.833 0.005 0.0134 0.0842 0.352 0.107 0.141 0.291 0.354 0.000649 0.00079 Wall time: 18717.83323414484 ! Best model 89 0.352 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 90 100 0.455 0.0124 0.207 0.103 0.136 0.499 0.555 0.00111 0.00124 90 118 0.799 0.013 0.538 0.106 0.139 0.847 0.894 0.00189 0.002 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 90 100 0.201 0.00874 0.0263 0.0888 0.114 0.168 0.198 0.000374 0.000442 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 90 18928.343 0.005 0.0125 0.145 0.395 0.103 0.136 0.373 0.46 0.000832 0.00103 ! Validation 90 18928.343 0.005 0.0138 0.208 0.485 0.109 0.143 0.459 0.557 0.00102 0.00124 Wall time: 18928.343692940194 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 91 100 0.464 0.0123 0.218 0.102 0.135 0.512 0.57 0.00114 0.00127 91 118 1.07 0.0125 0.818 0.103 0.136 1.08 1.1 0.00242 0.00246 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 91 100 0.4 0.00921 0.216 0.0913 0.117 0.55 0.566 0.00123 0.00126 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 91 19137.440 0.005 0.0127 0.211 0.465 0.104 0.137 0.437 0.555 0.000977 0.00124 ! Validation 91 19137.440 0.005 0.0143 0.513 0.798 0.111 0.146 0.791 0.873 0.00177 0.00195 Wall time: 19137.440433547832 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 92 100 0.389 0.0117 0.154 0.1 0.132 0.417 0.478 0.00093 0.00107 92 118 0.279 0.0116 0.046 0.1 0.132 0.231 0.262 0.000516 0.000584 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 92 100 0.193 0.00769 0.0395 0.0833 0.107 0.211 0.242 0.000471 0.000541 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 92 19346.556 0.005 0.0126 0.126 0.378 0.104 0.137 0.333 0.433 0.000743 0.000968 ! Validation 92 19346.556 0.005 0.0127 0.218 0.472 0.104 0.138 0.473 0.569 0.00106 0.00127 Wall time: 19346.55662510777 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 93 100 0.454 0.013 0.194 0.105 0.139 0.48 0.537 0.00107 0.0012 93 118 0.334 0.0106 0.123 0.0964 0.126 0.36 0.427 0.000802 0.000953 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 93 100 0.197 0.00777 0.0414 0.0836 0.108 0.218 0.248 0.000487 0.000554 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 93 19555.747 0.005 0.0121 0.174 0.415 0.102 0.134 0.413 0.509 0.000922 0.00114 ! 
Validation 93 19555.747 0.005 0.0128 0.121 0.378 0.105 0.138 0.338 0.425 0.000755 0.000948 Wall time: 19555.747682924848 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 94 100 0.314 0.0119 0.0768 0.1 0.133 0.253 0.338 0.000565 0.000754 94 118 0.397 0.011 0.177 0.0981 0.128 0.472 0.513 0.00105 0.00114 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 94 100 0.626 0.00771 0.472 0.0832 0.107 0.827 0.838 0.00185 0.00187 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 94 19765.140 0.005 0.0118 0.0896 0.325 0.1 0.132 0.287 0.364 0.000641 0.000812 ! Validation 94 19765.140 0.005 0.0127 0.271 0.524 0.104 0.137 0.558 0.635 0.00125 0.00142 Wall time: 19765.140449788887 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 95 100 0.48 0.012 0.239 0.101 0.134 0.538 0.597 0.0012 0.00133 95 118 0.301 0.0114 0.0718 0.0997 0.13 0.287 0.327 0.00064 0.000729 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 95 100 0.23 0.00728 0.0843 0.0811 0.104 0.332 0.354 0.00074 0.000791 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 95 19975.503 0.005 0.0116 0.129 0.362 0.0997 0.132 0.349 0.438 0.000779 0.000978 ! Validation 95 19975.503 0.005 0.0122 0.28 0.523 0.102 0.134 0.554 0.646 0.00124 0.00144 Wall time: 19975.50317765912 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 96 100 0.314 0.0114 0.0871 0.0981 0.13 0.3 0.36 0.000669 0.000803 96 118 0.222 0.00969 0.028 0.0919 0.12 0.171 0.204 0.000382 0.000455 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 96 100 0.303 0.00738 0.156 0.0818 0.105 0.471 0.481 0.00105 0.00107 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 96 20184.493 0.005 0.0115 0.138 0.368 0.0991 0.131 0.373 0.455 0.000834 0.00101 ! Validation 96 20184.493 0.005 0.0121 0.107 0.35 0.102 0.134 0.327 0.399 0.00073 0.000891 Wall time: 20184.49314907007 ! Best model 96 0.350 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 97 100 0.382 0.011 0.162 0.0965 0.128 0.436 0.491 0.000973 0.0011 97 118 0.54 0.0111 0.318 0.0975 0.129 0.602 0.687 0.00134 0.00153 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 97 100 0.272 0.00891 0.0942 0.0893 0.115 0.359 0.374 0.000801 0.000835 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 97 20393.426 0.005 0.0113 0.143 0.369 0.0981 0.13 0.352 0.459 0.000785 0.00103 ! Validation 97 20393.426 0.005 0.0138 1.01 1.28 0.109 0.143 1.07 1.22 0.00239 0.00273 Wall time: 20393.426793680992 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 98 100 0.287 0.0111 0.0654 0.0978 0.128 0.247 0.312 0.000551 0.000696 98 118 0.392 0.0102 0.188 0.0937 0.123 0.489 0.529 0.00109 0.00118 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 98 100 0.154 0.00721 0.00989 0.0809 0.104 0.0947 0.121 0.000211 0.000271 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 98 20602.428 0.005 0.0117 0.187 0.42 0.0999 0.132 0.417 0.527 0.00093 0.00118 ! 
Validation 98 20602.428 0.005 0.0118 0.157 0.394 0.101 0.133 0.384 0.483 0.000857 0.00108 Wall time: 20602.428779005073 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 99 100 0.258 0.0107 0.0431 0.0958 0.126 0.214 0.253 0.000477 0.000565 99 118 0.345 0.0114 0.118 0.0978 0.13 0.363 0.418 0.00081 0.000933 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 99 100 0.16 0.00701 0.0201 0.0799 0.102 0.145 0.173 0.000323 0.000386 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 99 20812.463 0.005 0.011 0.123 0.344 0.097 0.128 0.346 0.429 0.000772 0.000957 ! Validation 99 20812.463 0.005 0.0116 0.137 0.37 0.0998 0.131 0.351 0.451 0.000783 0.00101 Wall time: 20812.463465944864 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 100 100 0.319 0.0104 0.111 0.0946 0.124 0.347 0.407 0.000774 0.000908 100 118 0.328 0.0118 0.0927 0.0999 0.132 0.331 0.371 0.000739 0.000829 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 100 100 0.204 0.00695 0.0648 0.0793 0.102 0.292 0.31 0.000653 0.000693 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 100 21021.587 0.005 0.0111 0.146 0.369 0.0975 0.129 0.382 0.466 0.000853 0.00104 ! Validation 100 21021.587 0.005 0.0115 0.0754 0.306 0.0993 0.131 0.273 0.335 0.000609 0.000748 Wall time: 21021.58730873512 ! Best model 100 0.306 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 101 100 0.294 0.0105 0.0837 0.0949 0.125 0.295 0.353 0.000659 0.000788 101 118 0.244 0.0113 0.0184 0.0974 0.13 0.14 0.166 0.000312 0.00037 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 101 100 0.164 0.00686 0.0264 0.079 0.101 0.168 0.198 0.000375 0.000442 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 101 21230.683 0.005 0.0108 0.114 0.33 0.096 0.127 0.333 0.413 0.000743 0.000921 ! Validation 101 21230.683 0.005 0.0113 0.0751 0.301 0.0984 0.13 0.268 0.334 0.000598 0.000746 Wall time: 21230.68366527278 ! Best model 101 0.301 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 102 100 0.241 0.01 0.0407 0.0924 0.122 0.194 0.246 0.000432 0.000549 102 118 0.934 0.0101 0.733 0.0918 0.122 1.03 1.04 0.0023 0.00233 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 102 100 0.546 0.00771 0.392 0.0833 0.107 0.757 0.764 0.00169 0.0017 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 102 21440.304 0.005 0.0105 0.115 0.325 0.0948 0.125 0.32 0.405 0.000715 0.000904 ! Validation 102 21440.304 0.005 0.0121 1.11 1.35 0.102 0.134 1.22 1.28 0.00273 0.00286 Wall time: 21440.304347188212 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 103 100 0.249 0.0105 0.0402 0.0943 0.125 0.192 0.244 0.00043 0.000546 103 118 0.243 0.01 0.0428 0.0924 0.122 0.194 0.252 0.000433 0.000563 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 103 100 0.139 0.00663 0.00637 0.0776 0.0993 0.0777 0.0973 0.000173 0.000217 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 103 21649.487 0.005 0.0105 0.136 0.346 0.0946 0.125 0.36 0.45 0.000803 0.00101 ! Validation 103 21649.487 0.005 0.011 0.0729 0.293 0.097 0.128 0.262 0.329 0.000584 0.000735 Wall time: 21649.487311373 ! 
Best model 103 0.293 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 104 100 0.366 0.0104 0.158 0.0946 0.125 0.427 0.484 0.000953 0.00108 104 118 0.222 0.00947 0.0321 0.0908 0.119 0.179 0.218 0.000399 0.000488 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 104 100 0.151 0.00708 0.00973 0.0799 0.103 0.104 0.12 0.000233 0.000269 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 104 21859.021 0.005 0.0106 0.173 0.386 0.0952 0.126 0.415 0.509 0.000927 0.00114 ! Validation 104 21859.021 0.005 0.0114 0.0654 0.294 0.0988 0.13 0.252 0.312 0.000563 0.000696 Wall time: 21859.021889166906 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 105 100 0.251 0.0101 0.0487 0.0929 0.123 0.207 0.269 0.000462 0.000601 105 118 0.24 0.00999 0.0406 0.0922 0.122 0.195 0.246 0.000435 0.000549 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 105 100 0.188 0.00648 0.058 0.0767 0.0982 0.276 0.294 0.000616 0.000655 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 105 22068.171 0.005 0.0101 0.0744 0.276 0.0927 0.123 0.262 0.333 0.000585 0.000744 ! Validation 105 22068.171 0.005 0.0108 0.295 0.51 0.0959 0.127 0.579 0.662 0.00129 0.00148 Wall time: 22068.17199621303 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 106 100 0.249 0.0104 0.0408 0.0939 0.124 0.186 0.246 0.000416 0.00055 106 118 0.288 0.00979 0.0922 0.0912 0.121 0.336 0.37 0.000749 0.000827 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 106 100 0.136 0.00637 0.00848 0.0763 0.0973 0.0933 0.112 0.000208 0.000251 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 106 22277.303 0.005 0.0101 0.118 0.32 0.0927 0.122 0.339 0.419 0.000756 0.000936 ! Validation 106 22277.303 0.005 0.0106 0.066 0.277 0.0951 0.125 0.25 0.313 0.000558 0.000699 Wall time: 22277.303188411985 ! Best model 106 0.277 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 107 100 0.305 0.0107 0.0908 0.0953 0.126 0.314 0.367 0.000701 0.00082 107 118 0.231 0.00936 0.0435 0.0897 0.118 0.203 0.254 0.000454 0.000568 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 107 100 0.139 0.00637 0.0114 0.0761 0.0973 0.11 0.13 0.000246 0.00029 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 107 22486.511 0.005 0.0104 0.182 0.39 0.0942 0.124 0.407 0.522 0.000908 0.00116 ! Validation 107 22486.511 0.005 0.0106 0.0853 0.298 0.0952 0.126 0.28 0.356 0.000626 0.000795 Wall time: 22486.511681117117 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 108 100 0.283 0.00932 0.0969 0.0894 0.118 0.321 0.38 0.000717 0.000847 108 118 0.258 0.0106 0.0454 0.0941 0.126 0.241 0.26 0.000538 0.00058 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 108 100 0.13 0.00622 0.00603 0.0752 0.0962 0.0759 0.0947 0.00017 0.000211 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 108 22695.625 0.005 0.00981 0.0947 0.291 0.0913 0.121 0.304 0.376 0.000678 0.000839 ! 
Validation 108 22695.625 0.005 0.0104 0.0897 0.298 0.0942 0.124 0.287 0.365 0.00064 0.000815 Wall time: 22695.62591904914 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 109 100 0.249 0.00944 0.06 0.0897 0.119 0.243 0.299 0.000542 0.000667 109 118 0.536 0.0104 0.329 0.0924 0.124 0.666 0.699 0.00149 0.00156 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 109 100 0.712 0.00595 0.593 0.0734 0.0941 0.934 0.939 0.00209 0.0021 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 109 22904.802 0.005 0.00951 0.0694 0.26 0.0899 0.119 0.255 0.317 0.00057 0.000708 ! Validation 109 22904.802 0.005 0.01 0.456 0.656 0.0923 0.122 0.761 0.824 0.0017 0.00184 Wall time: 22904.80266315816 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 110 100 0.239 0.00941 0.0511 0.0893 0.118 0.233 0.276 0.000519 0.000615 110 118 0.282 0.00929 0.0963 0.0886 0.118 0.306 0.379 0.000683 0.000845 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 110 100 0.477 0.00632 0.35 0.0757 0.097 0.717 0.722 0.0016 0.00161 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 110 23113.940 0.005 0.00963 0.145 0.337 0.0905 0.12 0.383 0.465 0.000855 0.00104 ! Validation 110 23113.940 0.005 0.0104 0.203 0.41 0.094 0.124 0.477 0.549 0.00107 0.00123 Wall time: 23113.94044976821 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 111 100 0.38 0.00942 0.191 0.0894 0.118 0.484 0.534 0.00108 0.00119 111 118 0.218 0.00977 0.0229 0.0907 0.121 0.133 0.184 0.000298 0.000412 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 111 100 0.261 0.00594 0.143 0.0733 0.094 0.452 0.461 0.00101 0.00103 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 111 23323.080 0.005 0.00951 0.112 0.302 0.0899 0.119 0.332 0.409 0.000742 0.000912 ! Validation 111 23323.080 0.005 0.00989 0.0933 0.291 0.0918 0.121 0.308 0.373 0.000687 0.000832 Wall time: 23323.08015986206 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 112 100 0.23 0.00865 0.057 0.0859 0.113 0.242 0.291 0.000541 0.00065 112 118 0.245 0.00814 0.0818 0.0841 0.11 0.274 0.349 0.000612 0.000779 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 112 100 0.29 0.00586 0.173 0.0732 0.0934 0.501 0.507 0.00112 0.00113 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 112 23532.287 0.005 0.00921 0.082 0.266 0.0885 0.117 0.278 0.349 0.000621 0.00078 ! Validation 112 23532.287 0.005 0.00982 0.108 0.304 0.0915 0.121 0.337 0.401 0.000752 0.000894 Wall time: 23532.287272392772 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 113 100 0.222 0.00928 0.0361 0.0883 0.117 0.176 0.232 0.000393 0.000517 113 118 0.465 0.00915 0.282 0.0883 0.117 0.625 0.648 0.0014 0.00145 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 113 100 0.227 0.00598 0.107 0.0737 0.0943 0.392 0.399 0.000874 0.000891 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 113 23741.411 0.005 0.00935 0.144 0.331 0.0891 0.118 0.372 0.461 0.00083 0.00103 ! 
Validation 113 23741.411 0.005 0.00982 0.0908 0.287 0.0915 0.121 0.303 0.367 0.000676 0.00082 Wall time: 23741.411336788908 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 114 100 0.509 0.01 0.309 0.0922 0.122 0.643 0.678 0.00144 0.00151 114 118 0.282 0.0104 0.0743 0.0946 0.124 0.29 0.332 0.000646 0.000742 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 114 100 0.155 0.00705 0.0145 0.079 0.102 0.128 0.147 0.000285 0.000328 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 114 23950.535 0.005 0.0092 0.126 0.31 0.0884 0.117 0.345 0.433 0.000771 0.000966 ! Validation 114 23950.535 0.005 0.0108 0.123 0.339 0.096 0.127 0.347 0.428 0.000774 0.000956 Wall time: 23950.535485694185 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 115 100 0.375 0.00912 0.192 0.0879 0.116 0.5 0.535 0.00112 0.00119 115 118 0.215 0.0079 0.057 0.0823 0.108 0.247 0.291 0.000552 0.00065 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 115 100 0.124 0.00597 0.0044 0.0736 0.0942 0.0672 0.0809 0.00015 0.000181 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 115 24159.659 0.005 0.00907 0.11 0.291 0.0878 0.116 0.33 0.405 0.000736 0.000903 ! Validation 115 24159.659 0.005 0.00984 0.0697 0.266 0.0916 0.121 0.257 0.322 0.000574 0.000719 Wall time: 24159.659217359032 ! Best model 115 0.266 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 116 100 0.215 0.00833 0.0484 0.0846 0.111 0.225 0.268 0.000501 0.000599 116 118 0.206 0.00909 0.0241 0.0874 0.116 0.132 0.189 0.000295 0.000422 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 116 100 0.115 0.00556 0.00411 0.0711 0.0909 0.0628 0.0782 0.00014 0.000175 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 116 24368.794 0.005 0.00885 0.0861 0.263 0.0866 0.115 0.287 0.359 0.000641 0.000801 ! Validation 116 24368.794 0.005 0.00935 0.104 0.291 0.0891 0.118 0.311 0.393 0.000693 0.000878 Wall time: 24368.79445512686 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 117 100 0.348 0.00957 0.157 0.0893 0.119 0.431 0.483 0.000963 0.00108 117 118 0.382 0.00957 0.191 0.0896 0.119 0.475 0.533 0.00106 0.00119 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 117 100 0.569 0.00588 0.452 0.0726 0.0935 0.815 0.82 0.00182 0.00183 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 117 24578.003 0.005 0.00888 0.107 0.285 0.0868 0.115 0.316 0.399 0.000706 0.00089 ! Validation 117 24578.003 0.005 0.00971 0.427 0.622 0.091 0.12 0.747 0.797 0.00167 0.00178 Wall time: 24578.003174678888 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 118 100 0.234 0.00882 0.0572 0.0859 0.115 0.222 0.292 0.000496 0.000651 118 118 0.325 0.00705 0.184 0.0786 0.102 0.496 0.523 0.00111 0.00117 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 118 100 0.486 0.00588 0.369 0.0726 0.0935 0.736 0.74 0.00164 0.00165 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 118 24787.128 0.005 0.00869 0.104 0.277 0.0859 0.114 0.316 0.391 0.000706 0.000874 ! 
Validation 118 24787.128 0.005 0.00963 0.544 0.737 0.0905 0.12 0.852 0.9 0.0019 0.00201 Wall time: 24787.1283380338 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 119 100 0.261 0.00798 0.102 0.0827 0.109 0.341 0.389 0.000762 0.000868 119 118 0.288 0.00823 0.124 0.0827 0.111 0.389 0.429 0.000868 0.000957 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 119 100 0.339 0.0054 0.231 0.0703 0.0897 0.583 0.586 0.0013 0.00131 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 119 24996.251 0.005 0.00855 0.0787 0.25 0.0852 0.113 0.269 0.341 0.000601 0.000762 ! Validation 119 24996.251 0.005 0.00897 0.518 0.698 0.0873 0.115 0.825 0.878 0.00184 0.00196 Wall time: 24996.251995468047 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 120 100 0.367 0.00896 0.188 0.0865 0.115 0.496 0.529 0.00111 0.00118 120 118 0.214 0.00885 0.0374 0.0861 0.115 0.175 0.236 0.000391 0.000526 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 120 100 0.116 0.00533 0.00941 0.0696 0.0891 0.0949 0.118 0.000212 0.000264 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 120 25205.390 0.005 0.00843 0.106 0.274 0.0844 0.112 0.322 0.398 0.000718 0.000888 ! Validation 120 25205.390 0.005 0.00894 0.0585 0.237 0.0871 0.115 0.233 0.295 0.00052 0.000659 Wall time: 25205.390750912018 ! Best model 120 0.237 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 121 100 0.21 0.00851 0.0395 0.0847 0.113 0.211 0.242 0.000472 0.000541 121 118 0.229 0.00886 0.0517 0.0866 0.115 0.234 0.277 0.000522 0.000619 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 121 100 0.127 0.00533 0.0202 0.0694 0.089 0.151 0.173 0.000336 0.000387 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 121 25414.532 0.005 0.00847 0.118 0.287 0.0847 0.112 0.332 0.42 0.00074 0.000937 ! Validation 121 25414.532 0.005 0.00896 0.0664 0.246 0.0873 0.115 0.247 0.314 0.000552 0.000701 Wall time: 25414.53246909799 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 122 100 0.234 0.00787 0.0771 0.0817 0.108 0.285 0.339 0.000636 0.000756 122 118 0.321 0.00844 0.152 0.0846 0.112 0.409 0.476 0.000913 0.00106 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 122 100 0.154 0.00545 0.0446 0.0702 0.09 0.245 0.258 0.000548 0.000575 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 122 25623.759 0.005 0.00831 0.112 0.278 0.0839 0.111 0.329 0.408 0.000735 0.00091 ! Validation 122 25623.759 0.005 0.00903 0.138 0.318 0.0877 0.116 0.378 0.452 0.000845 0.00101 Wall time: 25623.759636571165 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 123 100 0.332 0.00868 0.159 0.0867 0.114 0.435 0.486 0.000972 0.00108 123 118 0.215 0.00948 0.0259 0.0881 0.119 0.165 0.196 0.000368 0.000438 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 123 100 0.128 0.00591 0.0103 0.0729 0.0937 0.111 0.124 0.000249 0.000276 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 123 25833.373 0.005 0.00832 0.105 0.272 0.0838 0.111 0.304 0.397 0.000679 0.000885 ! 
Validation 123 25833.373 0.005 0.00938 0.0533 0.241 0.0893 0.118 0.223 0.281 0.000497 0.000628 Wall time: 25833.37405162584 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 124 100 0.338 0.00809 0.176 0.0825 0.11 0.476 0.512 0.00106 0.00114 124 118 0.2 0.00802 0.0396 0.0829 0.109 0.208 0.243 0.000464 0.000542 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 124 100 0.123 0.00521 0.0187 0.0689 0.088 0.146 0.167 0.000326 0.000372 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 124 26042.502 0.005 0.00827 0.113 0.278 0.0837 0.111 0.337 0.41 0.000751 0.000916 ! Validation 124 26042.502 0.005 0.00869 0.135 0.309 0.086 0.114 0.368 0.448 0.000822 0.001 Wall time: 26042.502483174205 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 125 100 0.186 0.00767 0.033 0.0806 0.107 0.183 0.222 0.000408 0.000495 125 118 0.178 0.00826 0.0132 0.0835 0.111 0.103 0.14 0.000231 0.000313 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 125 100 0.113 0.00499 0.0136 0.0671 0.0861 0.131 0.142 0.000293 0.000318 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 125 26251.623 0.005 0.00793 0.0553 0.214 0.0818 0.109 0.229 0.288 0.000512 0.000642 ! Validation 125 26251.623 0.005 0.00842 0.0506 0.219 0.0846 0.112 0.22 0.274 0.000491 0.000612 Wall time: 26251.623798262794 ! Best model 125 0.219 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 126 100 0.194 0.00812 0.0313 0.0828 0.11 0.173 0.216 0.000387 0.000482 126 118 0.29 0.00888 0.112 0.0862 0.115 0.328 0.408 0.000731 0.000911 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 126 100 0.397 0.00509 0.295 0.0679 0.087 0.66 0.663 0.00147 0.00148 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 126 26460.851 0.005 0.00806 0.125 0.286 0.0825 0.109 0.338 0.431 0.000754 0.000961 ! Validation 126 26460.851 0.005 0.00853 0.228 0.398 0.0851 0.113 0.523 0.582 0.00117 0.0013 Wall time: 26460.851898999885 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 127 100 0.198 0.00759 0.0457 0.0803 0.106 0.217 0.261 0.000483 0.000582 127 118 0.209 0.00868 0.0355 0.0846 0.114 0.16 0.23 0.000357 0.000513 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 127 100 0.181 0.00481 0.085 0.0658 0.0846 0.349 0.356 0.000779 0.000794 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 127 26669.978 0.005 0.00785 0.0662 0.223 0.0814 0.108 0.249 0.314 0.000557 0.000701 ! Validation 127 26669.978 0.005 0.00816 0.0628 0.226 0.0831 0.11 0.251 0.306 0.000559 0.000682 Wall time: 26669.97822735086 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 128 100 0.49 0.0081 0.328 0.0826 0.11 0.654 0.699 0.00146 0.00156 128 118 0.258 0.00788 0.101 0.0813 0.108 0.362 0.387 0.000809 0.000863 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 128 100 0.195 0.00515 0.0922 0.0686 0.0875 0.367 0.37 0.00082 0.000827 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 128 26879.103 0.005 0.00769 0.105 0.259 0.0805 0.107 0.323 0.395 0.000721 0.000882 ! 
Validation 128 26879.103 0.005 0.00844 0.095 0.264 0.0847 0.112 0.312 0.376 0.000696 0.000839 Wall time: 26879.103841450065 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 129 100 0.211 0.00848 0.0417 0.084 0.112 0.199 0.249 0.000444 0.000556 129 118 0.486 0.00785 0.329 0.0819 0.108 0.66 0.7 0.00147 0.00156 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 129 100 0.486 0.00511 0.384 0.0679 0.0872 0.753 0.755 0.00168 0.00169 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 129 27088.233 0.005 0.00785 0.106 0.263 0.0815 0.108 0.313 0.395 0.000698 0.000881 ! Validation 129 27088.233 0.005 0.00848 0.245 0.414 0.0849 0.112 0.534 0.604 0.00119 0.00135 Wall time: 27088.234036928043 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 130 100 0.215 0.00795 0.056 0.082 0.109 0.238 0.289 0.000532 0.000644 130 118 0.211 0.00835 0.0441 0.083 0.111 0.207 0.256 0.000462 0.000571 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 130 100 0.117 0.0049 0.019 0.0667 0.0854 0.158 0.168 0.000353 0.000376 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 130 27297.351 0.005 0.00783 0.0837 0.24 0.0813 0.108 0.287 0.353 0.000642 0.000789 ! Validation 130 27297.351 0.005 0.00812 0.128 0.29 0.083 0.11 0.354 0.435 0.00079 0.000972 Wall time: 27297.351401554886 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 131 100 0.173 0.00762 0.0208 0.0798 0.106 0.145 0.176 0.000324 0.000392 131 118 0.19 0.00742 0.0413 0.0796 0.105 0.209 0.248 0.000466 0.000553 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 131 100 0.1 0.00471 0.00597 0.0652 0.0836 0.0801 0.0942 0.000179 0.00021 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 131 27506.557 0.005 0.0076 0.0757 0.228 0.0801 0.106 0.266 0.336 0.000595 0.00075 ! Validation 131 27506.557 0.005 0.00791 0.0731 0.231 0.0819 0.108 0.262 0.33 0.000584 0.000736 Wall time: 27506.5570575851 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 132 100 0.257 0.00733 0.11 0.079 0.104 0.342 0.405 0.000764 0.000904 132 118 0.388 0.00812 0.226 0.0823 0.11 0.553 0.579 0.00123 0.00129 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 132 100 0.162 0.005 0.0624 0.0673 0.0862 0.297 0.305 0.000664 0.00068 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 132 27715.688 0.005 0.0075 0.117 0.267 0.0795 0.106 0.344 0.415 0.000768 0.000927 ! Validation 132 27715.688 0.005 0.00831 0.0542 0.22 0.0839 0.111 0.231 0.284 0.000515 0.000634 Wall time: 27715.68859539507 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 133 100 0.23 0.00713 0.0875 0.0778 0.103 0.319 0.361 0.000712 0.000805 133 118 0.178 0.00682 0.0419 0.0765 0.101 0.211 0.25 0.000472 0.000557 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 133 100 0.139 0.00461 0.0467 0.0647 0.0828 0.254 0.264 0.000567 0.000589 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 133 27924.800 0.005 0.00734 0.075 0.222 0.0786 0.104 0.27 0.334 0.000603 0.000747 ! Validation 133 27924.800 0.005 0.00776 0.0521 0.207 0.081 0.107 0.227 0.278 0.000506 0.000622 Wall time: 27924.800493931863 ! 
Best model 133 0.207 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 134 100 0.173 0.00711 0.0309 0.0777 0.103 0.171 0.214 0.000383 0.000479 134 118 0.18 0.00663 0.0479 0.0752 0.0993 0.221 0.267 0.000493 0.000596 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 134 100 0.0954 0.00451 0.00518 0.064 0.0819 0.0734 0.0878 0.000164 0.000196 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 134 28133.878 0.005 0.0073 0.0764 0.222 0.0784 0.104 0.27 0.337 0.000602 0.000753 ! Validation 134 28133.878 0.005 0.00763 0.12 0.272 0.0803 0.107 0.334 0.422 0.000746 0.000942 Wall time: 28133.87851880677 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 135 100 0.165 0.00681 0.029 0.0762 0.101 0.168 0.208 0.000374 0.000463 135 118 0.384 0.00699 0.244 0.0767 0.102 0.581 0.603 0.0013 0.00135 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 135 100 0.425 0.00463 0.332 0.0643 0.083 0.699 0.703 0.00156 0.00157 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 135 28343.151 0.005 0.00719 0.0826 0.226 0.0778 0.103 0.277 0.348 0.000619 0.000777 ! Validation 135 28343.151 0.005 0.00778 0.291 0.447 0.0811 0.108 0.589 0.658 0.00131 0.00147 Wall time: 28343.15128854895 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 136 100 0.395 0.00734 0.248 0.0789 0.105 0.581 0.608 0.0013 0.00136 136 118 0.16 0.00723 0.0157 0.0781 0.104 0.134 0.153 0.0003 0.000341 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 136 100 0.169 0.00456 0.0774 0.0641 0.0824 0.333 0.339 0.000744 0.000757 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 136 28552.335 0.005 0.0074 0.111 0.259 0.079 0.105 0.333 0.407 0.000742 0.000909 ! Validation 136 28552.335 0.005 0.00779 0.0618 0.218 0.0811 0.108 0.245 0.303 0.000547 0.000677 Wall time: 28552.33601047704 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 137 100 0.183 0.00706 0.0418 0.0771 0.102 0.205 0.249 0.000458 0.000557 137 118 0.217 0.00873 0.0423 0.0853 0.114 0.227 0.251 0.000507 0.00056 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 137 100 0.149 0.0046 0.0567 0.0647 0.0827 0.284 0.29 0.000634 0.000648 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 137 28761.441 0.005 0.0071 0.075 0.217 0.0772 0.103 0.259 0.335 0.000577 0.000747 ! Validation 137 28761.441 0.005 0.00763 0.239 0.391 0.0804 0.107 0.534 0.596 0.00119 0.00133 Wall time: 28761.441301671788 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 138 100 0.164 0.00673 0.0297 0.0758 0.1 0.162 0.21 0.000362 0.000469 138 118 0.413 0.00759 0.261 0.0796 0.106 0.542 0.623 0.00121 0.00139 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 138 100 0.15 0.00436 0.0631 0.0628 0.0805 0.302 0.306 0.000673 0.000684 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 138 28970.540 0.005 0.00697 0.0757 0.215 0.0765 0.102 0.273 0.333 0.00061 0.000743 ! 
Validation 138 28970.540 0.005 0.00737 0.228 0.375 0.0789 0.105 0.519 0.582 0.00116 0.0013 Wall time: 28970.540488401894 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 139 100 0.15 0.00646 0.0205 0.0738 0.098 0.126 0.175 0.000281 0.00039 139 118 0.247 0.0101 0.0456 0.0917 0.122 0.196 0.26 0.000437 0.000581 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 139 100 0.22 0.0058 0.104 0.0724 0.0929 0.388 0.393 0.000865 0.000878 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 139 29179.641 0.005 0.007 0.0928 0.233 0.0766 0.102 0.294 0.372 0.000656 0.000831 ! Validation 139 29179.641 0.005 0.00893 0.48 0.658 0.0873 0.115 0.782 0.844 0.00175 0.00188 Wall time: 29179.64135780977 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 140 100 0.202 0.00689 0.0647 0.0764 0.101 0.276 0.31 0.000615 0.000692 140 118 0.15 0.00613 0.0274 0.0721 0.0954 0.175 0.202 0.000391 0.00045 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 140 100 0.107 0.00449 0.0171 0.0637 0.0817 0.151 0.16 0.000337 0.000356 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 140 29388.737 0.005 0.00707 0.0874 0.229 0.0772 0.103 0.287 0.361 0.00064 0.000806 ! Validation 140 29388.737 0.005 0.00753 0.144 0.294 0.0797 0.106 0.388 0.462 0.000866 0.00103 Wall time: 29388.737725146115 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 141 100 0.156 0.00622 0.0321 0.0723 0.0962 0.18 0.218 0.000401 0.000488 141 118 0.121 0.00533 0.0139 0.0682 0.0891 0.117 0.144 0.00026 0.000321 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 141 100 0.182 0.00414 0.0993 0.0612 0.0784 0.382 0.384 0.000852 0.000858 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 141 29597.922 0.005 0.00675 0.0532 0.188 0.0754 0.1 0.229 0.282 0.000511 0.00063 ! Validation 141 29597.922 0.005 0.00707 0.278 0.419 0.0773 0.103 0.568 0.643 0.00127 0.00144 Wall time: 29597.922166078817 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 142 100 0.28 0.0066 0.148 0.0747 0.0991 0.443 0.469 0.000988 0.00105 142 118 0.174 0.00635 0.0467 0.0735 0.0972 0.223 0.263 0.000498 0.000588 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 142 100 0.216 0.00441 0.128 0.0631 0.081 0.432 0.436 0.000965 0.000974 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 142 29807.027 0.005 0.00706 0.125 0.266 0.0771 0.103 0.347 0.432 0.000774 0.000964 ! Validation 142 29807.027 0.005 0.00731 0.282 0.428 0.0786 0.104 0.595 0.648 0.00133 0.00145 Wall time: 29807.027937368955 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 143 100 0.174 0.0067 0.0401 0.0754 0.0998 0.192 0.244 0.00043 0.000545 143 118 0.15 0.00595 0.0306 0.0711 0.094 0.166 0.213 0.000371 0.000477 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 143 100 0.102 0.00429 0.0167 0.0619 0.0798 0.151 0.158 0.000337 0.000352 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 143 30016.131 0.005 0.00674 0.0752 0.21 0.0753 0.1 0.27 0.335 0.000602 0.000748 ! 
Validation 143 30016.131 0.005 0.0072 0.0719 0.216 0.0779 0.103 0.264 0.327 0.00059 0.00073 Wall time: 30016.131284674164 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 144 100 0.157 0.00673 0.0221 0.0752 0.1 0.143 0.181 0.000319 0.000405 144 118 0.155 0.00608 0.0332 0.0717 0.0951 0.183 0.222 0.000408 0.000496 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 144 100 0.083 0.00404 0.00213 0.0604 0.0775 0.0513 0.0563 0.000114 0.000126 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 144 30226.357 0.005 0.00652 0.0585 0.189 0.074 0.0985 0.236 0.295 0.000527 0.000659 ! Validation 144 30226.357 0.005 0.00688 0.0765 0.214 0.0761 0.101 0.267 0.337 0.000595 0.000753 Wall time: 30226.357759270817 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 145 100 0.227 0.00659 0.0949 0.0741 0.099 0.34 0.376 0.00076 0.000839 145 118 0.236 0.0072 0.0922 0.0774 0.103 0.346 0.37 0.000772 0.000826 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 145 100 0.115 0.00452 0.0244 0.0635 0.082 0.186 0.191 0.000416 0.000425 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 145 30435.353 0.005 0.00651 0.0915 0.222 0.0739 0.0983 0.292 0.369 0.000651 0.000823 ! Validation 145 30435.353 0.005 0.00749 0.0629 0.213 0.0797 0.106 0.245 0.306 0.000547 0.000683 Wall time: 30435.353252958972 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 146 100 0.146 0.00605 0.0251 0.0713 0.0949 0.157 0.193 0.000351 0.000431 146 118 0.171 0.0053 0.065 0.0671 0.0888 0.25 0.311 0.000559 0.000694 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 146 100 0.158 0.00405 0.077 0.0605 0.0776 0.335 0.338 0.000747 0.000755 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 146 30644.265 0.005 0.00665 0.0777 0.211 0.0748 0.0995 0.271 0.34 0.000604 0.000759 ! Validation 146 30644.265 0.005 0.00683 0.0712 0.208 0.0757 0.101 0.262 0.325 0.000585 0.000726 Wall time: 30644.26572209783 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 147 100 0.155 0.00659 0.0236 0.0739 0.099 0.146 0.187 0.000325 0.000418 147 118 0.15 0.00567 0.0365 0.0696 0.0918 0.2 0.233 0.000446 0.00052 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 147 100 0.0826 0.00402 0.00217 0.0601 0.0773 0.0518 0.0568 0.000116 0.000127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 147 30853.156 0.005 0.00639 0.0581 0.186 0.0732 0.0975 0.237 0.294 0.00053 0.000657 ! Validation 147 30853.156 0.005 0.00677 0.0924 0.228 0.0755 0.1 0.293 0.371 0.000655 0.000828 Wall time: 30853.156120959204 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 148 100 0.212 0.00664 0.0789 0.0747 0.0994 0.306 0.343 0.000684 0.000765 148 118 0.116 0.00531 0.00984 0.0678 0.0888 0.107 0.121 0.000239 0.00027 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 148 100 0.104 0.00405 0.0229 0.0602 0.0776 0.178 0.184 0.000398 0.000412 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 148 31062.045 0.005 0.00644 0.077 0.206 0.0735 0.0979 0.276 0.339 0.000616 0.000758 ! 
Validation 148 31062.045 0.005 0.00679 0.136 0.271 0.0756 0.1 0.364 0.449 0.000813 0.001 Wall time: 31062.044985565823 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 149 100 0.158 0.0066 0.0262 0.0741 0.0991 0.16 0.197 0.000357 0.00044 149 118 0.222 0.0055 0.112 0.0687 0.0905 0.348 0.407 0.000776 0.000909 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 149 100 0.0844 0.00405 0.00341 0.0602 0.0776 0.0674 0.0712 0.00015 0.000159 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 149 31270.943 0.005 0.00634 0.0742 0.201 0.0729 0.0971 0.27 0.332 0.000602 0.00074 ! Validation 149 31270.943 0.005 0.00681 0.0893 0.226 0.0757 0.101 0.288 0.364 0.000643 0.000813 Wall time: 31270.943614313845 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 150 100 0.258 0.00571 0.144 0.0692 0.0921 0.434 0.462 0.000969 0.00103 150 118 0.545 0.0072 0.401 0.0775 0.103 0.747 0.773 0.00167 0.00172 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 150 100 0.501 0.00443 0.412 0.063 0.0811 0.782 0.783 0.00174 0.00175 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 150 31479.937 0.005 0.00632 0.0801 0.207 0.0728 0.0969 0.275 0.34 0.000613 0.00076 ! Validation 150 31479.937 0.005 0.00715 0.293 0.436 0.078 0.103 0.601 0.66 0.00134 0.00147 Wall time: 31479.93729234906 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 151 100 0.156 0.00617 0.033 0.072 0.0958 0.186 0.222 0.000415 0.000494 151 118 0.182 0.00625 0.0567 0.0724 0.0964 0.241 0.29 0.000539 0.000648 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 151 100 0.124 0.00419 0.0402 0.0609 0.0789 0.238 0.244 0.000531 0.000546 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 151 31688.876 0.005 0.00616 0.0516 0.175 0.0718 0.0957 0.217 0.277 0.000484 0.000618 ! Validation 151 31688.876 0.005 0.00683 0.108 0.244 0.0758 0.101 0.339 0.4 0.000757 0.000893 Wall time: 31688.87628775509 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 152 100 0.205 0.00578 0.0894 0.0695 0.0927 0.333 0.365 0.000743 0.000814 152 118 0.182 0.00628 0.0563 0.0727 0.0966 0.253 0.289 0.000564 0.000646 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 152 100 0.13 0.00388 0.0528 0.0587 0.0759 0.276 0.28 0.000615 0.000625 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 152 31897.809 0.005 0.00614 0.082 0.205 0.0717 0.0956 0.282 0.35 0.000629 0.00078 ! Validation 152 31897.809 0.005 0.00659 0.172 0.304 0.0744 0.099 0.445 0.506 0.000993 0.00113 Wall time: 31897.809486646205 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 153 100 0.236 0.00605 0.115 0.0714 0.0949 0.374 0.413 0.000834 0.000923 153 118 0.15 0.00624 0.0256 0.0729 0.0963 0.146 0.195 0.000327 0.000436 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 153 100 0.112 0.00407 0.0307 0.0603 0.0778 0.209 0.214 0.000466 0.000477 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 153 32106.764 0.005 0.00612 0.0704 0.193 0.0716 0.0954 0.262 0.324 0.000585 0.000724 ! 
Validation 153 32106.764 0.005 0.00668 0.106 0.24 0.075 0.0997 0.329 0.398 0.000734 0.000888 Wall time: 32106.764627487864 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 154 100 0.149 0.00622 0.0243 0.0722 0.0962 0.156 0.19 0.000347 0.000424 154 118 0.142 0.00531 0.0354 0.0678 0.0889 0.214 0.23 0.000479 0.000512 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 154 100 0.0867 0.00376 0.0115 0.0581 0.0748 0.124 0.131 0.000276 0.000291 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 154 32315.696 0.005 0.00639 0.105 0.233 0.0733 0.0976 0.31 0.397 0.000691 0.000886 ! Validation 154 32315.696 0.005 0.0064 0.0402 0.168 0.0733 0.0975 0.192 0.245 0.000428 0.000546 Wall time: 32315.696219904814 ! Best model 154 0.168 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 155 100 0.522 0.00693 0.383 0.076 0.102 0.731 0.755 0.00163 0.00168 155 118 0.245 0.00605 0.124 0.0724 0.0948 0.415 0.43 0.000926 0.00096 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 155 100 0.122 0.00402 0.0413 0.0603 0.0773 0.239 0.248 0.000532 0.000554 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 155 32524.703 0.005 0.00609 0.0787 0.2 0.0714 0.0951 0.266 0.341 0.000593 0.000762 ! Validation 155 32524.703 0.005 0.0066 0.241 0.373 0.0745 0.0991 0.526 0.598 0.00117 0.00134 Wall time: 32524.70318662701 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 156 100 0.137 0.006 0.0169 0.0705 0.0945 0.13 0.158 0.000291 0.000353 156 118 0.137 0.0048 0.0405 0.0642 0.0845 0.205 0.246 0.000458 0.000548 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 156 100 0.127 0.00372 0.0525 0.0578 0.0744 0.273 0.279 0.00061 0.000623 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 156 32733.637 0.005 0.00592 0.038 0.156 0.0704 0.0939 0.188 0.238 0.00042 0.000531 ! Validation 156 32733.637 0.005 0.00624 0.0474 0.172 0.0724 0.0963 0.217 0.265 0.000484 0.000592 Wall time: 32733.63727228809 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 157 100 0.133 0.00582 0.0171 0.0699 0.093 0.129 0.159 0.000288 0.000356 157 118 0.17 0.00643 0.0416 0.0724 0.0978 0.213 0.249 0.000476 0.000555 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 157 100 0.105 0.00412 0.0224 0.0601 0.0783 0.172 0.182 0.000384 0.000407 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 157 32942.566 0.005 0.00585 0.0682 0.185 0.07 0.0933 0.256 0.319 0.000572 0.000712 ! Validation 157 32942.566 0.005 0.00685 0.0832 0.22 0.0755 0.101 0.285 0.352 0.000636 0.000785 Wall time: 32942.56686580507 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 158 100 0.14 0.00574 0.0255 0.0693 0.0924 0.156 0.195 0.000347 0.000435 158 118 0.194 0.00534 0.0873 0.0678 0.0891 0.338 0.36 0.000755 0.000804 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 158 100 0.177 0.00364 0.104 0.0571 0.0736 0.39 0.393 0.00087 0.000877 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 158 33151.513 0.005 0.00579 0.0517 0.167 0.0696 0.0928 0.221 0.277 0.000493 0.000617 ! 
Validation 158 33151.513 0.005 0.00611 0.194 0.317 0.0716 0.0953 0.483 0.538 0.00108 0.0012 Wall time: 33151.513192156795 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 159 100 0.167 0.0059 0.0493 0.0703 0.0937 0.232 0.271 0.000518 0.000605 159 118 0.123 0.00565 0.0101 0.0685 0.0917 0.0935 0.122 0.000209 0.000273 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 159 100 0.0769 0.00358 0.0052 0.0567 0.073 0.0737 0.0879 0.000164 0.000196 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 159 33360.443 0.005 0.00571 0.0616 0.176 0.0691 0.0922 0.243 0.304 0.000543 0.000678 ! Validation 159 33360.443 0.005 0.00607 0.0403 0.162 0.0714 0.095 0.188 0.245 0.000419 0.000546 Wall time: 33360.443568775896 ! Best model 159 0.162 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 160 100 0.148 0.00588 0.0303 0.0702 0.0936 0.172 0.212 0.000385 0.000474 160 118 0.231 0.00626 0.106 0.0713 0.0965 0.377 0.396 0.000841 0.000885 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 160 100 0.315 0.00395 0.236 0.0596 0.0766 0.59 0.593 0.00132 0.00132 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 160 33569.468 0.005 0.00574 0.0657 0.18 0.0693 0.0923 0.252 0.312 0.000564 0.000696 ! Validation 160 33569.468 0.005 0.00639 0.2 0.328 0.0736 0.0975 0.499 0.546 0.00111 0.00122 Wall time: 33569.46896572318 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 161 100 0.14 0.00563 0.0276 0.068 0.0915 0.167 0.203 0.000372 0.000452 161 118 0.12 0.00503 0.0198 0.0649 0.0865 0.148 0.172 0.00033 0.000383 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 161 100 0.136 0.00346 0.067 0.0557 0.0718 0.312 0.316 0.000696 0.000705 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 161 33778.386 0.005 0.00564 0.0648 0.178 0.0687 0.0916 0.243 0.311 0.000543 0.000695 ! Validation 161 33778.386 0.005 0.00591 0.0557 0.174 0.0704 0.0937 0.234 0.288 0.000523 0.000642 Wall time: 33778.38676748099 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 162 100 0.127 0.00546 0.0181 0.0676 0.0901 0.135 0.164 0.000302 0.000366 162 118 0.136 0.00539 0.0284 0.0678 0.0895 0.185 0.206 0.000412 0.000459 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 162 100 0.238 0.0036 0.166 0.0567 0.0732 0.494 0.497 0.0011 0.00111 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 162 33987.326 0.005 0.00579 0.0723 0.188 0.0696 0.0928 0.258 0.329 0.000576 0.000733 ! Validation 162 33987.326 0.005 0.00594 0.139 0.258 0.0706 0.094 0.405 0.455 0.000905 0.00101 Wall time: 33987.32668806892 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 163 100 0.16 0.0053 0.0536 0.0666 0.0888 0.24 0.282 0.000536 0.00063 163 118 0.198 0.00523 0.0938 0.0654 0.0882 0.353 0.374 0.000788 0.000834 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 163 100 0.414 0.00349 0.345 0.0556 0.0721 0.715 0.716 0.00159 0.0016 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 163 34196.258 0.005 0.0055 0.0544 0.164 0.0678 0.0904 0.224 0.284 0.0005 0.000633 ! 
Validation 163 34196.258 0.005 0.00593 0.235 0.353 0.0705 0.0939 0.541 0.591 0.00121 0.00132 Wall time: 34196.258707623 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 164 100 0.22 0.00717 0.077 0.0777 0.103 0.292 0.338 0.000653 0.000755 164 118 0.149 0.00586 0.0318 0.0699 0.0933 0.181 0.217 0.000405 0.000485 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 164 100 0.0797 0.00387 0.00229 0.0587 0.0759 0.0544 0.0584 0.000121 0.00013 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 164 34405.283 0.005 0.00567 0.0923 0.206 0.0689 0.0918 0.293 0.371 0.000654 0.000829 ! Validation 164 34405.283 0.005 0.00631 0.0396 0.166 0.0729 0.0968 0.188 0.243 0.00042 0.000542 Wall time: 34405.283026433084 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 165 100 0.131 0.00528 0.0253 0.0667 0.0886 0.164 0.194 0.000367 0.000433 165 118 0.153 0.00512 0.0505 0.0652 0.0873 0.25 0.274 0.000558 0.000612 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 165 100 0.229 0.00352 0.158 0.056 0.0723 0.481 0.485 0.00107 0.00108 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 165 34614.224 0.005 0.00553 0.064 0.175 0.068 0.0907 0.254 0.309 0.000566 0.000689 ! Validation 165 34614.224 0.005 0.00586 0.303 0.42 0.0701 0.0934 0.621 0.671 0.00139 0.0015 Wall time: 34614.2246186859 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 166 100 0.12 0.00539 0.0119 0.067 0.0895 0.11 0.133 0.000246 0.000297 166 118 0.124 0.0054 0.0164 0.0672 0.0896 0.13 0.156 0.000291 0.000349 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 166 100 0.075 0.00359 0.00317 0.0565 0.0731 0.064 0.0687 0.000143 0.000153 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 166 34823.153 0.005 0.00542 0.0571 0.166 0.0673 0.0898 0.235 0.292 0.000525 0.000652 ! Validation 166 34823.153 0.005 0.00599 0.074 0.194 0.0708 0.0944 0.259 0.332 0.000577 0.000741 Wall time: 34823.153935277835 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 167 100 0.162 0.00567 0.0485 0.0692 0.0919 0.222 0.268 0.000496 0.000599 167 118 0.106 0.00501 0.00603 0.0655 0.0863 0.0827 0.0947 0.000185 0.000211 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 167 100 0.0749 0.00338 0.00736 0.0551 0.0709 0.0914 0.105 0.000204 0.000233 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 167 35032.072 0.005 0.00534 0.0609 0.168 0.0668 0.0892 0.225 0.302 0.000503 0.000674 ! Validation 167 35032.072 0.005 0.00568 0.0525 0.166 0.069 0.0919 0.215 0.279 0.00048 0.000624 Wall time: 35032.07257347414 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 168 100 0.155 0.00603 0.0347 0.0714 0.0947 0.187 0.227 0.000418 0.000507 168 118 0.135 0.00525 0.0299 0.0658 0.0884 0.185 0.211 0.000413 0.000471 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 168 100 0.127 0.00371 0.0531 0.0576 0.0743 0.272 0.281 0.000608 0.000627 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 168 35240.986 0.005 0.00539 0.0833 0.191 0.0671 0.0896 0.266 0.353 0.000594 0.000788 ! 
Validation 168 35240.986 0.005 0.00604 0.208 0.329 0.0713 0.0948 0.499 0.557 0.00111 0.00124 Wall time: 35240.986815431155 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 169 100 0.197 0.00565 0.0842 0.0683 0.0916 0.318 0.354 0.00071 0.00079 169 118 0.14 0.00572 0.0258 0.0691 0.0922 0.167 0.196 0.000374 0.000437 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 169 100 0.0706 0.00341 0.0023 0.0551 0.0712 0.0553 0.0585 0.000123 0.000131 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 169 35450.002 0.005 0.00541 0.0605 0.169 0.0672 0.0897 0.247 0.3 0.00055 0.000671 ! Validation 169 35450.002 0.005 0.00576 0.0635 0.179 0.0694 0.0925 0.242 0.307 0.000541 0.000686 Wall time: 35450.002887380775 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 170 100 0.14 0.00549 0.0296 0.0677 0.0904 0.179 0.21 0.000399 0.000469 170 118 0.151 0.00605 0.0295 0.0708 0.0949 0.169 0.21 0.000377 0.000468 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 170 100 0.0781 0.00334 0.0113 0.0546 0.0705 0.117 0.13 0.00026 0.00029 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 170 35658.935 0.005 0.0052 0.0371 0.141 0.0658 0.0879 0.188 0.235 0.00042 0.000525 ! Validation 170 35658.935 0.005 0.00558 0.0311 0.143 0.0683 0.0911 0.171 0.215 0.000382 0.00048 Wall time: 35658.9358347198 ! Best model 170 0.143 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 171 100 0.14 0.00529 0.0343 0.0664 0.0887 0.183 0.226 0.000408 0.000504 171 118 0.134 0.0061 0.0118 0.0708 0.0953 0.105 0.133 0.000235 0.000296 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 171 100 0.0688 0.00333 0.0022 0.0545 0.0704 0.0535 0.0572 0.00012 0.000128 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 171 35867.875 0.005 0.0055 0.0921 0.202 0.0678 0.0904 0.296 0.371 0.00066 0.000829 ! Validation 171 35867.875 0.005 0.00562 0.0547 0.167 0.0686 0.0915 0.224 0.285 0.000499 0.000636 Wall time: 35867.87512363121 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 172 100 0.142 0.00505 0.0414 0.0648 0.0867 0.202 0.248 0.000451 0.000554 172 118 0.123 0.00489 0.0254 0.0647 0.0853 0.166 0.194 0.00037 0.000434 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 172 100 0.199 0.00341 0.131 0.0554 0.0712 0.438 0.441 0.000978 0.000985 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 172 36076.809 0.005 0.00513 0.0399 0.142 0.0654 0.0873 0.195 0.244 0.000436 0.000544 ! Validation 172 36076.809 0.005 0.00557 0.0976 0.209 0.0685 0.091 0.326 0.381 0.000727 0.00085 Wall time: 36076.80910483375 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 173 100 0.137 0.00509 0.0356 0.0653 0.087 0.201 0.23 0.000448 0.000514 173 118 0.132 0.00614 0.0088 0.0699 0.0955 0.0991 0.114 0.000221 0.000255 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 173 100 0.0682 0.00321 0.00405 0.0535 0.0691 0.0567 0.0776 0.000127 0.000173 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 173 36285.735 0.005 0.00525 0.0678 0.173 0.0662 0.0883 0.249 0.319 0.000557 0.000711 ! 
Validation 173 36285.735 0.005 0.00542 0.0515 0.16 0.0673 0.0898 0.214 0.277 0.000477 0.000618 Wall time: 36285.735144542065 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 174 100 0.221 0.00544 0.112 0.0671 0.0899 0.382 0.408 0.000852 0.000911 174 118 0.227 0.00581 0.111 0.0698 0.0929 0.358 0.407 0.000799 0.000908 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 174 100 0.165 0.00335 0.0976 0.0545 0.0706 0.376 0.381 0.00084 0.00085 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 174 36494.756 0.005 0.00507 0.043 0.145 0.065 0.0868 0.202 0.252 0.00045 0.000562 ! Validation 174 36494.756 0.005 0.00552 0.305 0.416 0.068 0.0906 0.615 0.674 0.00137 0.0015 Wall time: 36494.7567498358 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 175 100 0.112 0.00485 0.0154 0.064 0.0849 0.127 0.151 0.000284 0.000338 175 118 0.115 0.0048 0.0194 0.0635 0.0845 0.126 0.17 0.00028 0.000379 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 175 100 0.072 0.00329 0.00614 0.0541 0.07 0.0805 0.0955 0.00018 0.000213 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 175 36703.688 0.005 0.0052 0.0597 0.164 0.066 0.088 0.241 0.299 0.000537 0.000667 ! Validation 175 36703.688 0.005 0.00538 0.0669 0.175 0.0671 0.0895 0.241 0.315 0.000538 0.000704 Wall time: 36703.68840477709 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 176 100 0.136 0.00537 0.029 0.067 0.0894 0.165 0.208 0.000369 0.000464 176 118 0.151 0.00528 0.0452 0.0664 0.0886 0.196 0.259 0.000438 0.000579 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 176 100 0.0905 0.00337 0.0231 0.0553 0.0708 0.178 0.185 0.000397 0.000413 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 176 36912.636 0.005 0.00501 0.057 0.157 0.0646 0.0863 0.231 0.291 0.000516 0.00065 ! Validation 176 36912.636 0.005 0.00553 0.141 0.252 0.0684 0.0907 0.362 0.458 0.000808 0.00102 Wall time: 36912.63696858613 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 177 100 0.251 0.00472 0.157 0.0631 0.0838 0.457 0.483 0.00102 0.00108 177 118 0.224 0.00506 0.122 0.0659 0.0867 0.362 0.427 0.000807 0.000953 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 177 100 0.117 0.00382 0.0403 0.0583 0.0754 0.233 0.245 0.00052 0.000547 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 177 37121.574 0.005 0.00498 0.054 0.154 0.0644 0.0861 0.228 0.282 0.000509 0.00063 ! Validation 177 37121.574 0.005 0.00599 0.131 0.251 0.0709 0.0944 0.378 0.441 0.000843 0.000985 Wall time: 37121.57451062603 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 178 100 0.185 0.005 0.0855 0.0646 0.0862 0.321 0.356 0.000717 0.000796 178 118 0.174 0.00554 0.0629 0.0672 0.0908 0.262 0.306 0.000585 0.000683 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 178 100 0.0978 0.00372 0.0234 0.0578 0.0744 0.179 0.187 0.000399 0.000417 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 178 37330.525 0.005 0.00503 0.0562 0.157 0.0648 0.0864 0.231 0.289 0.000516 0.000645 ! 
Validation 178 37330.525 0.005 0.00575 0.0889 0.204 0.0697 0.0925 0.302 0.364 0.000673 0.000812 Wall time: 37330.52518955991 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 179 100 0.118 0.00494 0.0196 0.0642 0.0857 0.143 0.171 0.000319 0.000381 179 118 0.0971 0.00459 0.00525 0.0621 0.0827 0.0743 0.0883 0.000166 0.000197 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 179 100 0.074 0.00307 0.0127 0.0522 0.0675 0.126 0.137 0.000282 0.000306 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 179 37539.534 0.005 0.00497 0.0403 0.14 0.0644 0.086 0.196 0.246 0.000438 0.000548 ! Validation 179 37539.534 0.005 0.0052 0.0335 0.137 0.0659 0.0879 0.173 0.223 0.000386 0.000498 Wall time: 37539.53476167284 ! Best model 179 0.137 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 180 100 0.116 0.00501 0.0162 0.0645 0.0863 0.118 0.155 0.000263 0.000346 180 118 0.152 0.00538 0.0444 0.0663 0.0895 0.204 0.257 0.000455 0.000574 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 180 100 0.144 0.00322 0.0801 0.0534 0.0692 0.341 0.345 0.000762 0.00077 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 180 37748.476 0.005 0.0049 0.0681 0.166 0.0639 0.0854 0.261 0.319 0.000583 0.000711 ! Validation 180 37748.476 0.005 0.00538 0.0539 0.162 0.0671 0.0895 0.231 0.283 0.000515 0.000632 Wall time: 37748.47603789577 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 181 100 0.111 0.00473 0.0158 0.0629 0.0839 0.125 0.153 0.000279 0.000343 181 118 0.105 0.00445 0.0161 0.061 0.0813 0.133 0.155 0.000297 0.000345 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 181 100 0.0667 0.00319 0.00283 0.0533 0.0689 0.0604 0.0649 0.000135 0.000145 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 181 37957.406 0.005 0.00581 0.108 0.224 0.0693 0.0931 0.31 0.402 0.000692 0.000898 ! Validation 181 37957.406 0.005 0.00526 0.0609 0.166 0.0663 0.0884 0.228 0.301 0.000508 0.000672 Wall time: 37957.40603466192 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 182 100 0.103 0.00455 0.0124 0.0615 0.0822 0.106 0.136 0.000236 0.000303 182 118 0.13 0.00477 0.0348 0.0628 0.0842 0.208 0.227 0.000465 0.000508 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 182 100 0.158 0.00327 0.0925 0.054 0.0697 0.365 0.371 0.000816 0.000828 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 182 38166.338 0.005 0.00476 0.0373 0.133 0.063 0.0842 0.189 0.236 0.000422 0.000526 ! Validation 182 38166.338 0.005 0.00526 0.0578 0.163 0.0663 0.0884 0.241 0.293 0.000537 0.000654 Wall time: 38166.33877853397 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 183 100 0.29 0.00481 0.194 0.0635 0.0845 0.503 0.537 0.00112 0.0012 183 118 0.122 0.00505 0.0212 0.0635 0.0867 0.132 0.178 0.000295 0.000397 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 183 100 0.2 0.00354 0.129 0.0557 0.0726 0.435 0.439 0.000972 0.000979 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 183 38375.384 0.005 0.00481 0.0476 0.144 0.0633 0.0846 0.211 0.267 0.000471 0.000595 ! 
Validation 183 38375.384 0.005 0.00545 0.127 0.236 0.0675 0.09 0.387 0.434 0.000863 0.000968 Wall time: 38375.384265210945 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 184 100 0.293 0.00518 0.19 0.0657 0.0878 0.512 0.531 0.00114 0.00119 184 118 0.172 0.00508 0.0702 0.0657 0.0869 0.238 0.323 0.000532 0.000721 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 184 100 0.0742 0.0035 0.00415 0.0559 0.0722 0.0743 0.0785 0.000166 0.000175 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 184 38584.321 0.005 0.00488 0.0549 0.152 0.0638 0.0851 0.225 0.285 0.000502 0.000637 ! Validation 184 38584.321 0.005 0.00551 0.0847 0.195 0.0681 0.0906 0.287 0.355 0.00064 0.000792 Wall time: 38584.32187958108 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 185 100 0.148 0.00472 0.0541 0.0624 0.0838 0.254 0.284 0.000568 0.000633 185 118 0.101 0.00442 0.0126 0.0608 0.081 0.115 0.137 0.000257 0.000306 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 185 100 0.159 0.00298 0.099 0.0516 0.0666 0.379 0.384 0.000846 0.000857 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 185 38793.272 0.005 0.00469 0.036 0.13 0.0625 0.0835 0.187 0.232 0.000417 0.000518 ! Validation 185 38793.272 0.005 0.00498 0.168 0.268 0.0645 0.086 0.453 0.5 0.00101 0.00112 Wall time: 38793.27250486286 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 186 100 0.122 0.00493 0.0237 0.0638 0.0857 0.152 0.188 0.00034 0.000419 186 118 0.087 0.00397 0.00763 0.0577 0.0768 0.0846 0.107 0.000189 0.000238 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 186 100 0.0649 0.00296 0.00565 0.0514 0.0664 0.0781 0.0916 0.000174 0.000205 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 186 39002.199 0.005 0.00467 0.0525 0.146 0.0624 0.0834 0.225 0.28 0.000503 0.000626 ! Validation 186 39002.199 0.005 0.00493 0.0366 0.135 0.0641 0.0856 0.18 0.233 0.000401 0.000521 Wall time: 39002.199066739064 ! Best model 186 0.135 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 187 100 0.235 0.00472 0.141 0.0629 0.0837 0.438 0.457 0.000978 0.00102 187 118 0.112 0.00469 0.0178 0.0624 0.0835 0.127 0.163 0.000282 0.000363 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 187 100 0.067 0.00313 0.00435 0.0527 0.0682 0.0703 0.0805 0.000157 0.00018 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 187 39211.340 0.005 0.00477 0.0671 0.162 0.063 0.0842 0.253 0.317 0.000564 0.000707 ! Validation 187 39211.340 0.005 0.0051 0.0287 0.131 0.0653 0.0871 0.164 0.207 0.000367 0.000461 Wall time: 39211.340800526086 ! Best model 187 0.131 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 188 100 0.0967 0.00419 0.013 0.0593 0.0789 0.112 0.139 0.000249 0.00031 188 118 0.155 0.00522 0.0508 0.0649 0.0881 0.236 0.275 0.000528 0.000613 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 188 100 0.137 0.00303 0.0765 0.0518 0.0672 0.334 0.337 0.000746 0.000753 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 188 39420.371 0.005 0.00456 0.0444 0.136 0.0616 0.0823 0.201 0.257 0.000449 0.000573 ! 
Validation 188 39420.371 0.005 0.00505 0.0628 0.164 0.0649 0.0866 0.245 0.306 0.000548 0.000682 Wall time: 39420.37112833513 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 189 100 0.138 0.00436 0.0505 0.0603 0.0806 0.243 0.274 0.000542 0.000612 189 118 0.0924 0.00418 0.00882 0.06 0.0789 0.101 0.115 0.000226 0.000256 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 189 100 0.0772 0.00291 0.019 0.0509 0.0658 0.154 0.168 0.000344 0.000375 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 189 39629.781 0.005 0.00457 0.0544 0.146 0.0617 0.0825 0.237 0.285 0.000529 0.000637 ! Validation 189 39629.781 0.005 0.00488 0.0938 0.191 0.0638 0.0852 0.312 0.374 0.000696 0.000834 Wall time: 39629.78099835804 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 190 100 0.102 0.0046 0.00986 0.0617 0.0827 0.0979 0.121 0.000219 0.00027 190 118 0.109 0.0043 0.023 0.0584 0.08 0.168 0.185 0.000376 0.000413 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 190 100 0.0643 0.00304 0.00352 0.0521 0.0672 0.0615 0.0723 0.000137 0.000161 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 190 39838.834 0.005 0.0045 0.0399 0.13 0.0612 0.0819 0.197 0.244 0.00044 0.000545 ! Validation 190 39838.834 0.005 0.00492 0.043 0.142 0.0641 0.0856 0.2 0.253 0.000445 0.000565 Wall time: 39838.8340206868 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 191 100 0.107 0.00425 0.0215 0.0596 0.0795 0.145 0.179 0.000324 0.000399 191 118 0.249 0.00429 0.164 0.0605 0.0798 0.481 0.493 0.00107 0.0011 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 191 100 0.32 0.00297 0.261 0.0514 0.0664 0.619 0.623 0.00138 0.00139 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 191 40047.831 0.005 0.00447 0.0374 0.127 0.061 0.0815 0.184 0.233 0.000411 0.000521 ! Validation 191 40047.831 0.005 0.00484 0.448 0.545 0.0636 0.0849 0.746 0.816 0.00167 0.00182 Wall time: 40047.83138822997 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 192 100 0.136 0.00492 0.0374 0.0636 0.0855 0.197 0.236 0.000441 0.000526 192 118 0.0938 0.00443 0.00533 0.0612 0.0811 0.0749 0.089 0.000167 0.000199 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 192 100 0.0742 0.00288 0.0166 0.0508 0.0655 0.143 0.157 0.00032 0.000351 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 192 40256.830 0.005 0.00464 0.0677 0.161 0.0622 0.0831 0.248 0.318 0.000553 0.000711 ! Validation 192 40256.830 0.005 0.00484 0.0316 0.128 0.0636 0.0848 0.168 0.217 0.000376 0.000484 Wall time: 40256.83028773591 ! Best model 192 0.128 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 193 100 0.119 0.00467 0.0251 0.0628 0.0834 0.162 0.193 0.000361 0.000431 193 118 0.284 0.00508 0.183 0.0642 0.0869 0.498 0.521 0.00111 0.00116 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 193 100 0.169 0.00298 0.11 0.0518 0.0666 0.395 0.404 0.000882 0.000902 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 193 40465.919 0.005 0.00448 0.0549 0.145 0.061 0.0816 0.233 0.283 0.000521 0.000633 ! 
Validation 193 40465.919 0.005 0.00492 0.128 0.226 0.0643 0.0856 0.381 0.436 0.00085 0.000972 Wall time: 40465.91911246581 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 194 100 0.114 0.00426 0.0291 0.0596 0.0796 0.175 0.208 0.000391 0.000464 194 118 0.103 0.00457 0.0113 0.0613 0.0825 0.107 0.129 0.00024 0.000289 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 194 100 0.0673 0.00319 0.00349 0.0534 0.0689 0.0618 0.072 0.000138 0.000161 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 194 40686.266 0.005 0.00449 0.0475 0.137 0.0612 0.0818 0.212 0.267 0.000474 0.000595 ! Validation 194 40686.266 0.005 0.00503 0.0488 0.149 0.0649 0.0865 0.207 0.269 0.000462 0.000601 Wall time: 40686.26611570688 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 195 100 0.0923 0.00398 0.0128 0.0576 0.0769 0.106 0.138 0.000237 0.000307 195 118 0.242 0.00408 0.161 0.0582 0.0779 0.451 0.489 0.00101 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 195 100 0.106 0.0029 0.0476 0.0508 0.0657 0.258 0.266 0.000575 0.000594 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 195 40895.254 0.005 0.00432 0.0317 0.118 0.06 0.0802 0.17 0.214 0.00038 0.000478 ! Validation 195 40895.254 0.005 0.00483 0.144 0.241 0.0635 0.0848 0.401 0.463 0.000894 0.00103 Wall time: 40895.254760751035 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 196 100 0.102 0.00423 0.0179 0.0593 0.0793 0.136 0.163 0.000303 0.000364 196 118 0.105 0.00377 0.0297 0.0566 0.0749 0.17 0.21 0.00038 0.000469 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 196 100 0.124 0.00288 0.0667 0.0506 0.0654 0.309 0.315 0.00069 0.000703 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 196 41104.250 0.005 0.00453 0.0626 0.153 0.0615 0.0821 0.232 0.306 0.000517 0.000683 ! Validation 196 41104.250 0.005 0.00468 0.121 0.215 0.0626 0.0834 0.367 0.425 0.000818 0.000948 Wall time: 41104.25049996888 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 197 100 0.252 0.0052 0.148 0.0658 0.0879 0.446 0.469 0.000995 0.00105 197 118 0.106 0.00452 0.0156 0.0619 0.0819 0.122 0.152 0.000271 0.00034 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 197 100 0.0908 0.00282 0.0345 0.0501 0.0647 0.219 0.226 0.000488 0.000506 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 197 41313.248 0.005 0.00454 0.0669 0.158 0.0616 0.0822 0.257 0.316 0.000574 0.000706 ! Validation 197 41313.248 0.005 0.00473 0.0406 0.135 0.0629 0.0839 0.204 0.246 0.000454 0.000548 Wall time: 41313.24856835883 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 198 100 0.146 0.00462 0.0537 0.0617 0.0829 0.255 0.283 0.000569 0.000631 198 118 0.107 0.00429 0.021 0.0594 0.0799 0.12 0.177 0.000267 0.000394 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 198 100 0.1 0.00285 0.043 0.0504 0.0651 0.245 0.253 0.000547 0.000565 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 198 41522.318 0.005 0.0043 0.0425 0.129 0.0599 0.08 0.2 0.252 0.000447 0.000562 ! 
Validation 198 41522.318 0.005 0.00469 0.0624 0.156 0.0626 0.0835 0.257 0.305 0.000573 0.00068 Wall time: 41522.31871451298 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 199 100 0.103 0.00446 0.0144 0.0606 0.0814 0.119 0.146 0.000265 0.000326 199 118 0.12 0.00484 0.0232 0.0627 0.0849 0.157 0.186 0.000351 0.000415 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 199 100 0.0826 0.00273 0.0281 0.0494 0.0637 0.192 0.204 0.000429 0.000456 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 199 41731.310 0.005 0.00441 0.0677 0.156 0.0606 0.0809 0.255 0.318 0.00057 0.00071 ! Validation 199 41731.310 0.005 0.00456 0.033 0.124 0.0617 0.0824 0.178 0.221 0.000397 0.000494 Wall time: 41731.31056530075 ! Best model 199 0.124 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 200 100 0.111 0.00383 0.0342 0.0567 0.0754 0.199 0.226 0.000445 0.000504 200 118 0.173 0.00434 0.0858 0.0606 0.0804 0.336 0.357 0.00075 0.000797 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 200 100 0.0707 0.00329 0.0049 0.0539 0.0699 0.066 0.0854 0.000147 0.000191 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 200 41940.306 0.005 0.00419 0.0241 0.108 0.059 0.0789 0.149 0.188 0.000332 0.000419 ! Validation 200 41940.306 0.005 0.00518 0.143 0.246 0.0658 0.0878 0.375 0.461 0.000837 0.00103 Wall time: 41940.30697485106 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 201 100 0.106 0.00397 0.0269 0.0578 0.0769 0.172 0.2 0.000384 0.000446 201 118 0.236 0.00444 0.147 0.0615 0.0813 0.452 0.467 0.00101 0.00104 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 201 100 0.138 0.0031 0.0764 0.0529 0.0679 0.328 0.337 0.000733 0.000753 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 201 42149.273 0.005 0.00431 0.046 0.132 0.0599 0.08 0.208 0.26 0.000465 0.00058 ! Validation 201 42149.273 0.005 0.00485 0.0868 0.184 0.0641 0.085 0.301 0.359 0.000673 0.000802 Wall time: 42149.27349714376 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 202 100 0.11 0.00431 0.0239 0.0596 0.08 0.153 0.188 0.000341 0.000421 202 118 0.226 0.00512 0.123 0.0665 0.0873 0.398 0.428 0.000888 0.000956 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 202 100 0.149 0.00361 0.0773 0.057 0.0732 0.331 0.339 0.000739 0.000757 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 202 42358.320 0.005 0.00424 0.0413 0.126 0.0594 0.0794 0.192 0.246 0.000429 0.000549 ! Validation 202 42358.320 0.005 0.00553 0.364 0.474 0.0685 0.0906 0.66 0.735 0.00147 0.00164 Wall time: 42358.32066055201 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 203 100 0.154 0.00406 0.0731 0.0582 0.0777 0.303 0.33 0.000675 0.000736 203 118 0.103 0.00454 0.0125 0.0606 0.0822 0.112 0.136 0.000249 0.000304 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 203 100 0.081 0.0028 0.0249 0.05 0.0646 0.178 0.193 0.000396 0.00043 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 203 42567.300 0.005 0.00463 0.0718 0.164 0.0622 0.0829 0.26 0.328 0.000581 0.000731 ! 
Validation 203 42567.300 0.005 0.00466 0.0393 0.133 0.0624 0.0832 0.199 0.242 0.000445 0.00054 Wall time: 42567.30060081696 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 204 100 0.1 0.0042 0.0163 0.0591 0.0791 0.12 0.156 0.000268 0.000348 204 118 0.189 0.00429 0.104 0.0592 0.0798 0.372 0.393 0.000831 0.000877 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 204 100 0.0817 0.00275 0.0268 0.0495 0.0639 0.192 0.2 0.00043 0.000446 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 204 42776.271 0.005 0.00412 0.0311 0.113 0.0585 0.0782 0.172 0.213 0.000384 0.000476 ! Validation 204 42776.271 0.005 0.00451 0.127 0.217 0.0615 0.0819 0.362 0.435 0.000808 0.000971 Wall time: 42776.27112433221 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 205 100 0.144 0.00415 0.0608 0.0583 0.0785 0.279 0.301 0.000622 0.000671 205 118 0.109 0.00439 0.0214 0.0598 0.0808 0.148 0.178 0.000329 0.000398 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 205 100 0.0723 0.00277 0.0168 0.0498 0.0642 0.137 0.158 0.000305 0.000353 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 205 42985.238 0.005 0.0041 0.0397 0.122 0.0584 0.078 0.198 0.243 0.000442 0.000543 ! Validation 205 42985.238 0.005 0.00449 0.0692 0.159 0.0613 0.0817 0.265 0.321 0.000592 0.000716 Wall time: 42985.23866329808 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 206 100 0.0939 0.004 0.0138 0.0578 0.0772 0.117 0.143 0.000261 0.000319 206 118 0.153 0.00487 0.0554 0.0629 0.0851 0.262 0.287 0.000585 0.000641 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 206 100 0.172 0.00309 0.111 0.0524 0.0678 0.401 0.406 0.000896 0.000905 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 206 43194.213 0.005 0.00412 0.0439 0.126 0.0585 0.0782 0.208 0.255 0.000464 0.00057 ! Validation 206 43194.213 0.005 0.0049 0.219 0.317 0.0642 0.0854 0.528 0.571 0.00118 0.00128 Wall time: 43194.21389605012 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 207 100 0.208 0.00378 0.133 0.0563 0.0749 0.427 0.444 0.000954 0.000992 207 118 0.128 0.00508 0.0261 0.0641 0.0869 0.146 0.197 0.000326 0.00044 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 207 100 0.0585 0.00278 0.00283 0.05 0.0644 0.0597 0.0649 0.000133 0.000145 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 207 43403.268 0.005 0.00414 0.0443 0.127 0.0587 0.0784 0.204 0.257 0.000456 0.000573 ! Validation 207 43403.268 0.005 0.00458 0.0811 0.173 0.062 0.0825 0.275 0.347 0.000613 0.000775 Wall time: 43403.268571401015 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 208 100 0.109 0.00425 0.0236 0.0594 0.0795 0.157 0.187 0.000351 0.000418 208 118 0.0999 0.00377 0.0246 0.057 0.0748 0.151 0.191 0.000338 0.000427 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 208 100 0.0616 0.00263 0.00905 0.0485 0.0625 0.0993 0.116 0.000222 0.000259 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 208 43612.245 0.005 0.00416 0.0491 0.132 0.0589 0.0787 0.214 0.271 0.000478 0.000604 ! 
Validation 208 43612.245 0.005 0.00439 0.0404 0.128 0.0604 0.0808 0.183 0.245 0.000409 0.000547 Wall time: 43612.245954907965 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 209 100 0.166 0.00445 0.0773 0.0606 0.0814 0.302 0.339 0.000673 0.000757 209 118 0.104 0.00423 0.019 0.0591 0.0793 0.133 0.168 0.000297 0.000375 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 209 100 0.111 0.003 0.0511 0.0517 0.0668 0.264 0.276 0.00059 0.000615 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 209 43821.225 0.005 0.00402 0.0457 0.126 0.0578 0.0773 0.205 0.261 0.000456 0.000583 ! Validation 209 43821.225 0.005 0.00468 0.057 0.151 0.0627 0.0834 0.245 0.291 0.000547 0.00065 Wall time: 43821.22521823319 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 210 100 0.0991 0.00377 0.0237 0.0559 0.0749 0.146 0.188 0.000326 0.000419 210 118 0.203 0.0035 0.133 0.0548 0.0722 0.401 0.445 0.000895 0.000993 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 210 100 0.0693 0.00292 0.0109 0.051 0.0659 0.101 0.128 0.000225 0.000285 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 210 44030.211 0.005 0.00401 0.0393 0.12 0.0578 0.0773 0.189 0.24 0.000422 0.000535 ! Validation 210 44030.211 0.005 0.00463 0.0576 0.15 0.0625 0.083 0.235 0.293 0.000524 0.000653 Wall time: 44030.21128256107 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 211 100 0.1 0.00393 0.0214 0.0572 0.0764 0.152 0.178 0.00034 0.000398 211 118 0.104 0.00305 0.0433 0.0513 0.0674 0.216 0.254 0.000482 0.000567 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 211 100 0.0756 0.00284 0.0188 0.0502 0.0649 0.153 0.167 0.000341 0.000374 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 211 44239.218 0.005 0.00409 0.0436 0.125 0.0585 0.0781 0.195 0.255 0.000436 0.000568 ! Validation 211 44239.218 0.005 0.00457 0.0708 0.162 0.0617 0.0825 0.247 0.325 0.000552 0.000724 Wall time: 44239.21873028483 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 212 100 0.194 0.00404 0.113 0.0578 0.0775 0.384 0.41 0.000857 0.000916 212 118 0.0925 0.00391 0.0143 0.0572 0.0763 0.113 0.146 0.000253 0.000326 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 212 100 0.0585 0.0027 0.00446 0.049 0.0634 0.0661 0.0815 0.000148 0.000182 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 212 44448.292 0.005 0.00423 0.064 0.149 0.0594 0.0794 0.251 0.309 0.000559 0.000691 ! Validation 212 44448.292 0.005 0.0044 0.0655 0.154 0.0606 0.0809 0.245 0.312 0.000548 0.000697 Wall time: 44448.29231232684 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 213 100 0.0956 0.004 0.0156 0.0578 0.0772 0.124 0.152 0.000276 0.000339 213 118 0.241 0.00533 0.135 0.0654 0.089 0.435 0.447 0.00097 0.000999 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 213 100 0.147 0.00275 0.0922 0.0495 0.064 0.364 0.37 0.000812 0.000827 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 213 44657.251 0.005 0.00398 0.0435 0.123 0.0574 0.0768 0.204 0.253 0.000455 0.000564 ! 
Validation 213 44657.251 0.005 0.00438 0.262 0.35 0.0606 0.0807 0.563 0.624 0.00126 0.00139 Wall time: 44657.251828957815 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 214 100 0.119 0.00396 0.0397 0.0575 0.0768 0.219 0.243 0.000489 0.000542 214 118 0.114 0.00475 0.0187 0.0613 0.084 0.147 0.167 0.000329 0.000372 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 214 100 0.0663 0.00269 0.0124 0.049 0.0633 0.117 0.136 0.000261 0.000304 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 214 44866.220 0.005 0.00414 0.0481 0.131 0.0587 0.0784 0.213 0.268 0.000476 0.000599 ! Validation 214 44866.220 0.005 0.0044 0.0868 0.175 0.0605 0.0808 0.301 0.359 0.000672 0.000802 Wall time: 44866.22058032313 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 215 100 0.0856 0.00368 0.012 0.0553 0.074 0.114 0.133 0.000254 0.000298 215 118 0.08 0.00345 0.011 0.054 0.0716 0.0918 0.128 0.000205 0.000285 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 215 100 0.0707 0.00249 0.021 0.047 0.0608 0.163 0.177 0.000364 0.000395 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 215 45075.186 0.005 0.00385 0.0327 0.11 0.0566 0.0757 0.179 0.221 0.000399 0.000494 ! Validation 215 45075.186 0.005 0.00412 0.0934 0.176 0.0585 0.0783 0.311 0.373 0.000694 0.000832 Wall time: 45075.18648744188 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 216 100 0.0799 0.00367 0.00654 0.0554 0.0738 0.0775 0.0986 0.000173 0.00022 216 118 0.271 0.00339 0.204 0.0538 0.071 0.535 0.55 0.00119 0.00123 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 216 100 0.306 0.00259 0.254 0.0483 0.0621 0.612 0.615 0.00137 0.00137 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 216 45284.152 0.005 0.00387 0.034 0.111 0.0567 0.0759 0.174 0.221 0.000389 0.000493 ! Validation 216 45284.152 0.005 0.00431 0.262 0.348 0.0602 0.08 0.596 0.624 0.00133 0.00139 Wall time: 45284.15208593896 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 217 100 0.108 0.00358 0.0369 0.0545 0.073 0.191 0.234 0.000427 0.000523 217 118 0.115 0.00339 0.0468 0.0536 0.071 0.199 0.264 0.000443 0.000589 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 217 100 0.0879 0.00257 0.0366 0.0478 0.0618 0.22 0.233 0.000491 0.000521 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 217 45493.198 0.005 0.00392 0.0502 0.129 0.0572 0.0764 0.224 0.273 0.000501 0.00061 ! Validation 217 45493.198 0.005 0.00423 0.0353 0.12 0.0594 0.0793 0.183 0.229 0.000408 0.000511 Wall time: 45493.19842823688 ! Best model 217 0.120 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 218 100 0.0903 0.00353 0.0197 0.0546 0.0725 0.147 0.171 0.000329 0.000382 218 118 0.234 0.00367 0.16 0.0555 0.0738 0.458 0.488 0.00102 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 218 100 0.135 0.00271 0.0805 0.0494 0.0635 0.34 0.346 0.000759 0.000772 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 218 45702.169 0.005 0.0038 0.0382 0.114 0.0563 0.0752 0.182 0.236 0.000406 0.000526 ! 
Validation 218 45702.169 0.005 0.00436 0.11 0.197 0.0606 0.0806 0.331 0.405 0.000739 0.000903 Wall time: 45702.169933543075 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 219 100 0.113 0.00367 0.0395 0.0553 0.0738 0.208 0.243 0.000464 0.000541 219 118 0.261 0.0046 0.169 0.0608 0.0827 0.442 0.501 0.000987 0.00112 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 219 100 0.0614 0.00292 0.00306 0.0508 0.0659 0.0568 0.0674 0.000127 0.000151 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 219 45911.133 0.005 0.00388 0.0336 0.111 0.0567 0.0759 0.171 0.22 0.000381 0.000492 ! Validation 219 45911.133 0.005 0.00452 0.197 0.287 0.0615 0.082 0.413 0.541 0.000922 0.00121 Wall time: 45911.13326952886 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 220 100 0.0974 0.00402 0.0171 0.0571 0.0773 0.12 0.159 0.000267 0.000356 220 118 0.0788 0.00354 0.00801 0.0548 0.0725 0.0883 0.109 0.000197 0.000244 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 220 100 0.11 0.0024 0.0624 0.0462 0.0598 0.297 0.305 0.000663 0.00068 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 220 46120.094 0.005 0.00444 0.0821 0.171 0.0607 0.0813 0.255 0.351 0.000569 0.000782 ! Validation 220 46120.094 0.005 0.00403 0.168 0.248 0.0579 0.0774 0.435 0.499 0.000971 0.00111 Wall time: 46120.09444953408 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 221 100 0.135 0.0037 0.0609 0.0551 0.0742 0.274 0.301 0.000612 0.000672 221 118 0.11 0.00455 0.0195 0.0606 0.0822 0.138 0.17 0.000308 0.00038 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 221 100 0.241 0.00267 0.188 0.0485 0.063 0.523 0.528 0.00117 0.00118 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 221 46329.145 0.005 0.00371 0.032 0.106 0.0555 0.0742 0.176 0.218 0.000394 0.000487 ! Validation 221 46329.145 0.005 0.00431 0.181 0.267 0.0599 0.0801 0.481 0.519 0.00107 0.00116 Wall time: 46329.14562415192 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 222 100 0.0946 0.00355 0.0236 0.0539 0.0726 0.158 0.188 0.000353 0.000419 222 118 0.0739 0.00307 0.0124 0.0516 0.0676 0.121 0.136 0.000271 0.000303 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 222 100 0.0605 0.00238 0.0128 0.0461 0.0595 0.118 0.138 0.000263 0.000309 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 222 46538.124 0.005 0.004 0.0606 0.141 0.0578 0.0772 0.229 0.301 0.000512 0.000672 ! Validation 222 46538.124 0.005 0.004 0.0301 0.11 0.0577 0.0771 0.166 0.211 0.00037 0.000472 Wall time: 46538.12448700378 ! Best model 222 0.110 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 223 100 0.116 0.00395 0.0374 0.0569 0.0767 0.195 0.236 0.000436 0.000527 223 118 0.0952 0.00389 0.0175 0.0569 0.076 0.131 0.161 0.000292 0.00036 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 223 100 0.0531 0.00245 0.00406 0.0468 0.0604 0.0624 0.0777 0.000139 0.000173 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 223 46747.095 0.005 0.00376 0.0354 0.111 0.0559 0.0748 0.178 0.23 0.000398 0.000513 ! 
Validation 223 46747.095 0.005 0.00408 0.0504 0.132 0.0585 0.0779 0.214 0.274 0.000478 0.000611 Wall time: 46747.09526294004 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 224 100 0.0814 0.00358 0.00985 0.0547 0.0729 0.0975 0.121 0.000218 0.00027 224 118 0.0785 0.00353 0.00792 0.0546 0.0724 0.0871 0.109 0.000195 0.000242 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 224 100 0.0666 0.00238 0.019 0.0459 0.0595 0.155 0.168 0.000346 0.000375 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 224 46956.065 0.005 0.00362 0.0248 0.0972 0.0548 0.0734 0.149 0.192 0.000333 0.000429 ! Validation 224 46956.065 0.005 0.00395 0.0662 0.145 0.0573 0.0767 0.255 0.314 0.000568 0.0007 Wall time: 46956.06586301606 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 225 100 0.0835 0.00358 0.0119 0.0548 0.0729 0.115 0.133 0.000256 0.000297 225 118 0.081 0.00353 0.0104 0.0543 0.0725 0.102 0.124 0.000227 0.000277 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 225 100 0.0495 0.00232 0.00313 0.0454 0.0587 0.0592 0.0683 0.000132 0.000152 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 225 47165.077 0.005 0.00363 0.026 0.0985 0.0549 0.0735 0.158 0.197 0.000352 0.000439 ! Validation 225 47165.077 0.005 0.00386 0.0375 0.115 0.0566 0.0758 0.182 0.236 0.000405 0.000527 Wall time: 47165.07796703791 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 226 100 0.088 0.00383 0.0114 0.0566 0.0755 0.113 0.13 0.000253 0.00029 226 118 0.167 0.00326 0.101 0.0527 0.0696 0.37 0.388 0.000826 0.000866 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 226 100 0.108 0.00253 0.0578 0.0475 0.0614 0.283 0.293 0.000633 0.000654 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 226 47374.098 0.005 0.00363 0.0396 0.112 0.055 0.0735 0.194 0.241 0.000433 0.000539 ! Validation 226 47374.098 0.005 0.00408 0.123 0.205 0.0583 0.0779 0.372 0.428 0.00083 0.000954 Wall time: 47374.098100118805 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 227 100 0.0887 0.00344 0.02 0.0538 0.0715 0.135 0.173 0.000301 0.000385 227 118 0.109 0.00318 0.0455 0.0523 0.0688 0.236 0.26 0.000527 0.000581 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 227 100 0.0564 0.00259 0.00459 0.048 0.0621 0.0741 0.0827 0.000165 0.000184 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 227 47583.036 0.005 0.00377 0.0394 0.115 0.056 0.0749 0.191 0.242 0.000425 0.00054 ! Validation 227 47583.036 0.005 0.00411 0.0407 0.123 0.0586 0.0782 0.19 0.246 0.000423 0.000549 Wall time: 47583.03654848598 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 228 100 0.0957 0.00413 0.0132 0.0588 0.0783 0.11 0.14 0.000246 0.000312 228 118 0.111 0.00379 0.0356 0.0561 0.0751 0.216 0.23 0.000483 0.000514 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 228 100 0.0845 0.00263 0.0318 0.048 0.0626 0.207 0.218 0.000461 0.000486 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 228 47791.975 0.005 0.00373 0.0484 0.123 0.0557 0.0744 0.212 0.269 0.000474 0.0006 ! 
Validation 228 47791.975 0.005 0.00412 0.0446 0.127 0.0586 0.0783 0.213 0.258 0.000475 0.000575 Wall time: 47791.975444975775 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 229 100 0.0777 0.00323 0.0132 0.0521 0.0693 0.112 0.14 0.00025 0.000312 229 118 0.0879 0.00383 0.0113 0.0562 0.0755 0.109 0.13 0.000243 0.000289 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 229 100 0.0667 0.00224 0.0219 0.0446 0.0577 0.171 0.181 0.000382 0.000403 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 229 48000.902 0.005 0.00354 0.023 0.0938 0.0542 0.0725 0.147 0.185 0.000328 0.000414 ! Validation 229 48000.902 0.005 0.00379 0.0687 0.145 0.0561 0.0751 0.267 0.32 0.000595 0.000714 Wall time: 48000.90285055898 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 230 100 0.088 0.00361 0.0158 0.0546 0.0733 0.129 0.153 0.000288 0.000342 230 118 0.171 0.00397 0.091 0.0565 0.0769 0.338 0.368 0.000755 0.000821 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 230 100 0.0604 0.00252 0.01 0.0472 0.0612 0.0943 0.122 0.00021 0.000272 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 230 48209.843 0.005 0.00362 0.0487 0.121 0.0549 0.0733 0.217 0.268 0.000484 0.000599 ! Validation 230 48209.843 0.005 0.00403 0.0303 0.111 0.058 0.0775 0.17 0.212 0.000379 0.000474 Wall time: 48209.84335320303 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 231 100 0.14 0.00357 0.0684 0.0547 0.0729 0.292 0.319 0.000653 0.000712 231 118 0.0736 0.00313 0.0111 0.0515 0.0682 0.0995 0.128 0.000222 0.000286 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 231 100 0.104 0.00224 0.0597 0.0446 0.0577 0.291 0.298 0.00065 0.000665 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 231 48418.855 0.005 0.00372 0.0447 0.119 0.0557 0.0744 0.213 0.259 0.000475 0.000577 ! Validation 231 48418.855 0.005 0.0038 0.124 0.2 0.0562 0.0752 0.385 0.43 0.000859 0.000959 Wall time: 48418.855523588136 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 232 100 0.1 0.00372 0.0261 0.0562 0.0744 0.163 0.197 0.000364 0.000439 232 118 0.078 0.00371 0.00373 0.0557 0.0743 0.0517 0.0745 0.000115 0.000166 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 232 100 0.051 0.00237 0.00351 0.046 0.0594 0.0626 0.0722 0.00014 0.000161 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 232 48627.798 0.005 0.00363 0.0389 0.112 0.055 0.0735 0.192 0.241 0.000428 0.000539 ! Validation 232 48627.798 0.005 0.00389 0.0431 0.121 0.057 0.0761 0.199 0.253 0.000444 0.000565 Wall time: 48627.79849652387 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 233 100 0.098 0.00367 0.0246 0.0555 0.0739 0.158 0.191 0.000354 0.000427 233 118 0.0932 0.00355 0.0222 0.0538 0.0727 0.151 0.182 0.000338 0.000406 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 233 100 0.0711 0.00231 0.025 0.0454 0.0586 0.178 0.193 0.000398 0.00043 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 233 48836.734 0.005 0.00354 0.0387 0.109 0.0542 0.0725 0.187 0.24 0.000417 0.000537 ! 
Validation 233 48836.734 0.005 0.00383 0.0308 0.107 0.0566 0.0755 0.173 0.214 0.000387 0.000477 Wall time: 48836.73424063483 ! Best model 233 0.107 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 234 100 0.125 0.00388 0.0472 0.0571 0.076 0.237 0.265 0.00053 0.000591 234 118 0.0708 0.00334 0.00393 0.0525 0.0705 0.0636 0.0764 0.000142 0.000171 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 234 100 0.0634 0.00228 0.0177 0.0452 0.0583 0.154 0.162 0.000344 0.000362 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 234 49045.667 0.005 0.00349 0.0306 0.1 0.0539 0.0721 0.175 0.214 0.00039 0.000478 ! Validation 234 49045.667 0.005 0.00382 0.0963 0.173 0.0565 0.0754 0.319 0.378 0.000712 0.000845 Wall time: 49045.66779116308 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 235 100 0.0947 0.00356 0.0236 0.0539 0.0727 0.164 0.187 0.000365 0.000418 235 118 0.095 0.00351 0.0248 0.054 0.0723 0.168 0.192 0.000375 0.000429 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 235 100 0.0799 0.00269 0.0262 0.0485 0.0632 0.181 0.197 0.000405 0.00044 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 235 49254.766 0.005 0.00362 0.0413 0.114 0.0549 0.0734 0.198 0.248 0.000442 0.000554 ! Validation 235 49254.766 0.005 0.00414 0.0407 0.123 0.0588 0.0785 0.195 0.246 0.000435 0.000549 Wall time: 49254.766342844814 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 236 100 0.0811 0.00337 0.0137 0.0531 0.0708 0.121 0.143 0.000271 0.000319 236 118 0.0963 0.00347 0.027 0.0541 0.0718 0.161 0.2 0.000359 0.000447 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 236 100 0.055 0.00248 0.00544 0.0469 0.0607 0.0779 0.0899 0.000174 0.000201 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 236 49463.777 0.005 0.00342 0.0346 0.103 0.0533 0.0713 0.186 0.227 0.000414 0.000507 ! Validation 236 49463.777 0.005 0.00395 0.0358 0.115 0.0574 0.0767 0.175 0.231 0.00039 0.000515 Wall time: 49463.7772926162 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 237 100 0.115 0.00382 0.039 0.0572 0.0754 0.202 0.241 0.000451 0.000537 237 118 0.0845 0.00383 0.00797 0.0555 0.0754 0.0858 0.109 0.000192 0.000243 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 237 100 0.0564 0.00239 0.00856 0.0462 0.0596 0.0999 0.113 0.000223 0.000252 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 237 49672.711 0.005 0.00363 0.0342 0.107 0.0549 0.0735 0.171 0.226 0.000381 0.000505 ! Validation 237 49672.711 0.005 0.00392 0.0424 0.121 0.0574 0.0763 0.192 0.251 0.000429 0.000561 Wall time: 49672.71108295489 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 238 100 0.143 0.00352 0.0728 0.0544 0.0724 0.308 0.329 0.000687 0.000734 238 118 0.143 0.00413 0.0606 0.0587 0.0784 0.26 0.3 0.00058 0.00067 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 238 100 0.0607 0.00233 0.0141 0.0454 0.0589 0.129 0.145 0.000288 0.000323 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 238 49881.657 0.005 0.00345 0.0348 0.104 0.0536 0.0716 0.184 0.227 0.00041 0.000506 ! 
Validation 238 49881.657 0.005 0.00375 0.148 0.223 0.056 0.0746 0.371 0.469 0.000829 0.00105 Wall time: 49881.65776565485 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 239 100 0.0904 0.00351 0.0203 0.0538 0.0722 0.141 0.174 0.000315 0.000387 239 118 0.122 0.00351 0.0517 0.0549 0.0723 0.267 0.277 0.000595 0.000619 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 239 100 0.0653 0.00253 0.0147 0.0476 0.0613 0.127 0.148 0.000284 0.00033 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 239 50090.615 0.005 0.00354 0.0423 0.113 0.0544 0.0726 0.19 0.251 0.000424 0.00056 ! Validation 239 50090.615 0.005 0.00395 0.0299 0.109 0.0576 0.0766 0.166 0.211 0.00037 0.00047 Wall time: 50090.61558332713 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 240 100 0.0986 0.00334 0.0317 0.0528 0.0705 0.189 0.217 0.000421 0.000485 240 118 0.0887 0.00324 0.0239 0.0521 0.0694 0.178 0.189 0.000398 0.000421 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 240 100 0.0732 0.00265 0.0202 0.0484 0.0628 0.167 0.173 0.000373 0.000387 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 240 50299.637 0.005 0.00343 0.0445 0.113 0.0535 0.0714 0.208 0.258 0.000463 0.000575 ! Validation 240 50299.637 0.005 0.0041 0.103 0.185 0.0586 0.0781 0.318 0.391 0.000709 0.000874 Wall time: 50299.63708597096 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 241 100 0.091 0.0034 0.023 0.0531 0.0711 0.16 0.185 0.000357 0.000412 241 118 0.0899 0.0035 0.0199 0.0539 0.0721 0.136 0.172 0.000303 0.000384 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 241 100 0.0482 0.00214 0.00531 0.0436 0.0565 0.0765 0.0889 0.000171 0.000198 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 241 50508.567 0.005 0.00338 0.0259 0.0935 0.053 0.0709 0.154 0.196 0.000344 0.000438 ! Validation 241 50508.567 0.005 0.0036 0.0395 0.112 0.0547 0.0732 0.182 0.242 0.000406 0.000541 Wall time: 50508.56780582201 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 242 100 0.11 0.0048 0.0137 0.0637 0.0845 0.121 0.143 0.00027 0.000318 242 118 0.072 0.00311 0.00989 0.0516 0.068 0.1 0.121 0.000224 0.000271 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 242 100 0.0587 0.00243 0.0101 0.0463 0.0601 0.104 0.123 0.000231 0.000274 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 242 50717.497 0.005 0.00374 0.0535 0.128 0.0556 0.0746 0.213 0.283 0.000475 0.000631 ! Validation 242 50717.497 0.005 0.00381 0.0435 0.12 0.0565 0.0753 0.194 0.254 0.000434 0.000568 Wall time: 50717.497282459866 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 243 100 0.112 0.00339 0.0447 0.053 0.071 0.232 0.258 0.000517 0.000576 243 118 0.0985 0.00351 0.0284 0.0531 0.0722 0.177 0.206 0.000396 0.000459 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 243 100 0.119 0.0022 0.0753 0.0444 0.0573 0.331 0.335 0.000739 0.000747 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 243 50927.595 0.005 0.00338 0.0313 0.0988 0.053 0.0708 0.176 0.216 0.000394 0.000482 ! 
Validation 243 50927.595 0.005 0.00367 0.122 0.196 0.0553 0.0739 0.376 0.427 0.000839 0.000953 Wall time: 50927.59536077315 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 244 100 0.0717 0.00312 0.00935 0.051 0.0681 0.0974 0.118 0.000217 0.000263 244 118 0.0877 0.00371 0.0136 0.0558 0.0742 0.118 0.142 0.000264 0.000317 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 244 100 0.08 0.00215 0.0369 0.0437 0.0566 0.228 0.234 0.000509 0.000523 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 244 51136.532 0.005 0.00326 0.0271 0.0922 0.052 0.0696 0.16 0.201 0.000358 0.000449 ! Validation 244 51136.532 0.005 0.00363 0.114 0.186 0.055 0.0735 0.362 0.411 0.000807 0.000918 Wall time: 51136.532981288154 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 245 100 0.0821 0.00324 0.0173 0.0519 0.0694 0.136 0.16 0.000303 0.000358 245 118 0.074 0.00306 0.0128 0.0502 0.0675 0.109 0.138 0.000243 0.000307 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 245 100 0.0481 0.00211 0.0059 0.0433 0.056 0.0795 0.0937 0.000178 0.000209 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 245 51345.576 0.005 0.00326 0.0269 0.0921 0.052 0.0696 0.165 0.2 0.000367 0.000447 ! Validation 245 51345.576 0.005 0.0035 0.0372 0.107 0.054 0.0722 0.177 0.235 0.000395 0.000525 Wall time: 51345.5768393632 ! Best model 245 0.107 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 246 100 0.0856 0.00324 0.0207 0.0515 0.0694 0.151 0.175 0.000337 0.000392 246 118 0.0914 0.00364 0.0186 0.0551 0.0736 0.131 0.166 0.000292 0.000371 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 246 100 0.0504 0.00211 0.00821 0.0433 0.056 0.0913 0.11 0.000204 0.000247 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 246 51554.552 0.005 0.00355 0.053 0.124 0.0542 0.0726 0.212 0.281 0.000472 0.000628 ! Validation 246 51554.552 0.005 0.00353 0.0812 0.152 0.0541 0.0725 0.272 0.348 0.000607 0.000776 Wall time: 51554.55199833214 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 247 100 0.0831 0.00335 0.0161 0.0527 0.0706 0.117 0.155 0.00026 0.000345 247 118 0.109 0.0028 0.0528 0.0489 0.0646 0.268 0.28 0.000598 0.000626 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 247 100 0.104 0.00218 0.061 0.0441 0.0569 0.292 0.301 0.000652 0.000672 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 247 51763.499 0.005 0.00321 0.0245 0.0886 0.0517 0.0691 0.154 0.19 0.000344 0.000424 ! Validation 247 51763.499 0.005 0.00352 0.138 0.209 0.0542 0.0724 0.4 0.454 0.000893 0.00101 Wall time: 51763.49984361883 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 248 100 0.0768 0.00292 0.0183 0.0495 0.066 0.144 0.165 0.000321 0.000369 248 118 0.0788 0.00273 0.0243 0.0479 0.0637 0.173 0.19 0.000386 0.000425 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 248 100 0.0682 0.00205 0.0272 0.0427 0.0552 0.191 0.201 0.000427 0.000449 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 248 51972.442 0.005 0.00359 0.0565 0.128 0.0546 0.0731 0.224 0.291 0.000501 0.000649 ! 
Validation 248 51972.442 0.005 0.00348 0.0772 0.147 0.0537 0.0719 0.288 0.339 0.000643 0.000756 Wall time: 51972.44206949975 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 249 100 0.172 0.0037 0.0978 0.0548 0.0742 0.365 0.381 0.000815 0.000851 249 118 0.11 0.00369 0.0367 0.0557 0.074 0.19 0.234 0.000424 0.000521 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 249 100 0.186 0.00231 0.14 0.0449 0.0586 0.453 0.456 0.00101 0.00102 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 249 52181.637 0.005 0.00324 0.035 0.0998 0.0518 0.0694 0.183 0.228 0.000409 0.000509 ! Validation 249 52181.637 0.005 0.00374 0.294 0.369 0.0557 0.0746 0.605 0.661 0.00135 0.00148 Wall time: 52181.637712081894 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 250 100 0.0756 0.00333 0.00889 0.0527 0.0704 0.091 0.115 0.000203 0.000257 250 118 0.0638 0.00255 0.0128 0.0469 0.0616 0.107 0.138 0.000238 0.000309 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 250 100 0.0465 0.00204 0.00563 0.0427 0.0551 0.0788 0.0915 0.000176 0.000204 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 250 52390.666 0.005 0.00327 0.026 0.0913 0.0522 0.0698 0.154 0.197 0.000343 0.00044 ! Validation 250 52390.666 0.005 0.00344 0.0302 0.0991 0.0535 0.0716 0.162 0.212 0.000362 0.000473 Wall time: 52390.666056818794 ! Best model 250 0.099 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 251 100 0.077 0.00314 0.0141 0.0508 0.0684 0.122 0.145 0.000272 0.000323 251 118 0.0663 0.00313 0.00377 0.0512 0.0682 0.0683 0.0748 0.000152 0.000167 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 251 100 0.0455 0.00203 0.00492 0.0425 0.055 0.0737 0.0855 0.000165 0.000191 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 251 52599.614 0.005 0.00312 0.0201 0.0825 0.0509 0.0681 0.14 0.173 0.000312 0.000387 ! Validation 251 52599.614 0.005 0.00345 0.0248 0.0939 0.0535 0.0717 0.149 0.192 0.000334 0.000429 Wall time: 52599.61439804593 ! Best model 251 0.094 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 252 100 0.0804 0.00339 0.0126 0.053 0.071 0.0983 0.137 0.000219 0.000305 252 118 0.0921 0.00279 0.0362 0.0484 0.0644 0.206 0.232 0.000459 0.000518 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 252 100 0.0965 0.00244 0.0478 0.0467 0.0602 0.262 0.267 0.000584 0.000595 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 252 52808.557 0.005 0.00322 0.0302 0.0945 0.0518 0.0692 0.171 0.212 0.000382 0.000472 ! Validation 252 52808.557 0.005 0.00383 0.0531 0.13 0.0568 0.0755 0.239 0.281 0.000534 0.000628 Wall time: 52808.55783997476 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 253 100 4.81 0.176 1.29 0.382 0.511 1.18 1.38 0.00263 0.00309 253 118 3.07 0.133 0.418 0.328 0.444 0.589 0.789 0.00132 0.00176 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 253 100 2.48 0.115 0.175 0.306 0.414 0.411 0.511 0.000918 0.00114 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 253 53017.488 0.005 0.201 3.6 7.61 0.301 0.547 1.3 2.32 0.0029 0.00518 ! 
Validation 253 53017.488 0.005 0.134 0.618 3.3 0.331 0.446 0.773 0.959 0.00173 0.00214 Wall time: 53017.48851631582 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 254 100 1.07 0.0458 0.153 0.189 0.261 0.395 0.477 0.000883 0.00107 254 118 1.04 0.0437 0.167 0.184 0.255 0.433 0.499 0.000967 0.00111 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 254 100 0.663 0.0316 0.0303 0.159 0.217 0.182 0.212 0.000405 0.000474 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 254 53226.437 0.005 0.0729 0.423 1.88 0.237 0.33 0.639 0.795 0.00143 0.00177 ! Validation 254 53226.437 0.005 0.0416 0.438 1.27 0.181 0.249 0.691 0.807 0.00154 0.0018 Wall time: 53226.437126044184 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 255 100 0.647 0.0272 0.102 0.146 0.201 0.307 0.389 0.000684 0.000868 255 118 0.614 0.0243 0.129 0.138 0.19 0.34 0.438 0.000759 0.000977 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 255 100 0.439 0.0195 0.0494 0.124 0.17 0.223 0.271 0.000497 0.000605 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 255 53435.465 0.005 0.0313 0.242 0.867 0.156 0.216 0.48 0.601 0.00107 0.00134 ! Validation 255 53435.465 0.005 0.0261 0.447 0.969 0.143 0.197 0.727 0.816 0.00162 0.00182 Wall time: 53435.465908756945 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 256 100 0.484 0.0207 0.0703 0.127 0.175 0.251 0.323 0.000559 0.000721 256 118 0.609 0.0205 0.199 0.127 0.175 0.486 0.544 0.00109 0.00121 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 256 100 0.319 0.0152 0.0154 0.109 0.15 0.125 0.152 0.00028 0.000338 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 256 53644.414 0.005 0.0226 0.182 0.634 0.133 0.183 0.42 0.52 0.000938 0.00116 ! Validation 256 53644.414 0.005 0.0205 0.175 0.585 0.127 0.174 0.42 0.511 0.000936 0.00114 Wall time: 53644.41496385308 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 257 100 0.412 0.018 0.0528 0.118 0.164 0.218 0.28 0.000488 0.000625 257 118 0.58 0.0176 0.229 0.117 0.162 0.514 0.584 0.00115 0.0013 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 257 100 0.282 0.0127 0.0289 0.0991 0.137 0.171 0.207 0.000383 0.000463 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 257 53853.385 0.005 0.0184 0.127 0.496 0.12 0.166 0.343 0.434 0.000766 0.000969 ! Validation 257 53853.385 0.005 0.0172 0.248 0.591 0.116 0.16 0.527 0.607 0.00118 0.00135 Wall time: 53853.38538826816 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 258 100 0.522 0.0148 0.225 0.107 0.149 0.519 0.579 0.00116 0.00129 258 118 0.79 0.0148 0.494 0.108 0.149 0.824 0.857 0.00184 0.00191 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 258 100 0.277 0.0112 0.053 0.0929 0.129 0.239 0.281 0.000533 0.000627 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 258 54062.343 0.005 0.0157 0.128 0.442 0.11 0.153 0.345 0.432 0.00077 0.000964 ! 
Validation 258 54062.343 0.005 0.0153 0.404 0.71 0.109 0.151 0.695 0.775 0.00155 0.00173 Wall time: 54062.34359123977 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 259 100 0.735 0.0135 0.465 0.102 0.142 0.785 0.832 0.00175 0.00186 259 118 0.336 0.0143 0.0505 0.105 0.146 0.198 0.274 0.000442 0.000612 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 259 100 0.72 0.0101 0.517 0.0886 0.123 0.866 0.877 0.00193 0.00196 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 259 54271.385 0.005 0.0141 0.121 0.402 0.104 0.145 0.336 0.425 0.000749 0.000948 ! Validation 259 54271.385 0.005 0.0137 0.238 0.513 0.104 0.143 0.525 0.595 0.00117 0.00133 Wall time: 54271.38551882887 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 260 100 0.386 0.0126 0.135 0.0991 0.137 0.381 0.449 0.00085 0.001 260 118 0.267 0.0119 0.0298 0.0963 0.133 0.162 0.21 0.000361 0.00047 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 260 100 0.551 0.00918 0.368 0.0845 0.117 0.727 0.739 0.00162 0.00165 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 260 54480.357 0.005 0.0128 0.129 0.385 0.0999 0.138 0.353 0.44 0.000787 0.000981 ! Validation 260 54480.357 0.005 0.0126 0.167 0.419 0.1 0.137 0.426 0.498 0.000952 0.00111 Wall time: 54480.357422067784 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 261 100 0.555 0.0118 0.319 0.0962 0.133 0.639 0.689 0.00143 0.00154 261 118 0.249 0.0117 0.0154 0.0946 0.132 0.118 0.151 0.000264 0.000338 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 261 100 0.276 0.00841 0.108 0.0809 0.112 0.381 0.401 0.00085 0.000895 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 261 54689.341 0.005 0.0116 0.12 0.352 0.0955 0.132 0.338 0.423 0.000756 0.000944 ! Validation 261 54689.341 0.005 0.0116 0.0601 0.292 0.0961 0.131 0.24 0.299 0.000536 0.000667 Wall time: 54689.341296299826 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 262 100 0.453 0.0102 0.249 0.0897 0.123 0.571 0.608 0.00128 0.00136 262 118 0.279 0.0109 0.0617 0.0936 0.127 0.26 0.303 0.00058 0.000676 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 262 100 0.195 0.00754 0.0446 0.0772 0.106 0.232 0.258 0.000518 0.000575 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 262 54898.276 0.005 0.0106 0.0833 0.296 0.0917 0.126 0.278 0.352 0.000621 0.000787 ! Validation 262 54898.276 0.005 0.0105 0.203 0.414 0.092 0.125 0.484 0.55 0.00108 0.00123 Wall time: 54898.276092122775 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 263 100 0.251 0.00966 0.0576 0.0878 0.12 0.234 0.293 0.000522 0.000653 263 118 0.215 0.00888 0.0373 0.0842 0.115 0.178 0.236 0.000398 0.000526 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 263 100 0.149 0.00703 0.00822 0.0748 0.102 0.0998 0.111 0.000223 0.000247 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 263 55107.235 0.005 0.00998 0.133 0.333 0.089 0.122 0.36 0.446 0.000804 0.000995 ! 
Validation 263 55107.235 0.005 0.00985 0.0995 0.297 0.0893 0.121 0.317 0.385 0.000708 0.000859 Wall time: 55107.235068921 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 264 100 0.273 0.00864 0.1 0.0836 0.113 0.333 0.386 0.000743 0.000862 264 118 0.189 0.0085 0.019 0.0829 0.112 0.125 0.168 0.000278 0.000376 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 264 100 0.164 0.0064 0.0358 0.0719 0.0976 0.209 0.231 0.000467 0.000515 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 264 55316.263 0.005 0.00907 0.0781 0.259 0.0854 0.116 0.281 0.342 0.000627 0.000763 ! Validation 264 55316.263 0.005 0.00904 0.0475 0.228 0.0861 0.116 0.21 0.266 0.000469 0.000593 Wall time: 55316.26347262319 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 265 100 0.247 0.00799 0.0867 0.0809 0.109 0.321 0.359 0.000717 0.000801 265 118 0.178 0.00791 0.02 0.0795 0.108 0.141 0.172 0.000315 0.000385 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 265 100 0.136 0.00616 0.0131 0.0713 0.0957 0.125 0.14 0.000279 0.000312 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 265 55525.213 0.005 0.00839 0.112 0.28 0.0825 0.112 0.327 0.41 0.00073 0.000914 ! Validation 265 55525.213 0.005 0.00873 0.168 0.343 0.0848 0.114 0.423 0.5 0.000945 0.00112 Wall time: 55525.213220187 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 266 100 0.204 0.00779 0.0479 0.0799 0.108 0.223 0.267 0.000497 0.000596 266 118 0.182 0.00692 0.0437 0.0755 0.101 0.202 0.255 0.00045 0.000569 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 266 100 0.216 0.00543 0.107 0.0672 0.0899 0.39 0.399 0.000872 0.000891 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 266 55734.156 0.005 0.00787 0.0749 0.232 0.0802 0.108 0.273 0.334 0.000609 0.000746 ! Validation 266 55734.156 0.005 0.00788 0.0619 0.219 0.0809 0.108 0.249 0.303 0.000556 0.000677 Wall time: 55734.15640055388 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 267 100 0.37 0.00734 0.223 0.0773 0.105 0.553 0.576 0.00123 0.00129 267 118 0.198 0.00697 0.0585 0.0757 0.102 0.215 0.295 0.000481 0.000658 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 267 100 0.108 0.00517 0.00493 0.066 0.0877 0.074 0.0856 0.000165 0.000191 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 267 55943.122 0.005 0.00733 0.0985 0.245 0.0777 0.104 0.308 0.383 0.000686 0.000855 ! Validation 267 55943.122 0.005 0.00753 0.0823 0.233 0.0793 0.106 0.281 0.35 0.000627 0.000781 Wall time: 55943.122518057 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 268 100 0.497 0.00679 0.361 0.075 0.1 0.711 0.733 0.00159 0.00164 268 118 0.164 0.00661 0.0313 0.0744 0.0992 0.183 0.216 0.000408 0.000482 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 268 100 0.101 0.0048 0.00502 0.0641 0.0845 0.0566 0.0864 0.000126 0.000193 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 268 56152.074 0.005 0.00681 0.0648 0.201 0.0752 0.101 0.243 0.311 0.000543 0.000694 ! 
Validation 268 56152.074 0.005 0.00703 0.0672 0.208 0.0768 0.102 0.249 0.316 0.000556 0.000705 Wall time: 56152.07434451813 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 269 100 0.147 0.00623 0.0225 0.072 0.0963 0.148 0.183 0.000331 0.000408 269 118 0.242 0.00655 0.111 0.0733 0.0987 0.369 0.406 0.000824 0.000907 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 269 100 0.16 0.00441 0.0718 0.0617 0.081 0.32 0.327 0.000714 0.000729 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 269 56361.103 0.005 0.00645 0.0738 0.203 0.0733 0.0979 0.272 0.331 0.000608 0.000738 ! Validation 269 56361.103 0.005 0.00659 0.203 0.335 0.0745 0.099 0.5 0.549 0.00112 0.00123 Wall time: 56361.10308477888 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 270 100 0.317 0.00626 0.192 0.072 0.0965 0.506 0.534 0.00113 0.00119 270 118 0.16 0.00585 0.0426 0.0703 0.0933 0.217 0.252 0.000483 0.000562 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 270 100 0.153 0.00415 0.0698 0.06 0.0785 0.313 0.322 0.000699 0.000719 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 270 56570.049 0.005 0.00602 0.0605 0.181 0.071 0.0946 0.242 0.3 0.00054 0.00067 ! Validation 270 56570.049 0.005 0.00627 0.0467 0.172 0.0728 0.0965 0.216 0.263 0.000483 0.000588 Wall time: 56570.04908734001 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 271 100 0.138 0.0057 0.0237 0.0688 0.0921 0.155 0.188 0.000346 0.000419 271 118 0.149 0.00554 0.0379 0.0691 0.0908 0.195 0.237 0.000436 0.00053 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 271 100 0.0991 0.00387 0.0218 0.0582 0.0758 0.17 0.18 0.000378 0.000402 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 271 56778.984 0.005 0.00573 0.0616 0.176 0.0694 0.0924 0.241 0.303 0.000537 0.000676 ! Validation 271 56778.984 0.005 0.00591 0.119 0.238 0.0708 0.0938 0.36 0.421 0.000803 0.00094 Wall time: 56778.9841141412 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 272 100 0.148 0.00518 0.0442 0.0661 0.0877 0.224 0.256 0.000499 0.000572 272 118 0.18 0.00547 0.0707 0.0672 0.0902 0.283 0.324 0.000632 0.000724 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 272 100 0.152 0.00369 0.0779 0.0571 0.074 0.334 0.34 0.000746 0.00076 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 272 56987.938 0.005 0.00546 0.0625 0.172 0.0678 0.0901 0.244 0.305 0.000544 0.00068 ! Validation 272 56987.938 0.005 0.00569 0.201 0.315 0.0695 0.092 0.502 0.547 0.00112 0.00122 Wall time: 56987.93873286713 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 273 100 0.121 0.00531 0.015 0.0666 0.0889 0.125 0.149 0.00028 0.000333 273 118 0.344 0.00494 0.245 0.0646 0.0857 0.576 0.604 0.00129 0.00135 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 273 100 0.279 0.00354 0.208 0.0561 0.0725 0.553 0.557 0.00123 0.00124 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 273 57196.886 0.005 0.00516 0.0454 0.149 0.066 0.0876 0.208 0.256 0.000464 0.000571 ! 
Validation 273 57196.886 0.005 0.0055 0.395 0.505 0.0685 0.0904 0.734 0.766 0.00164 0.00171 Wall time: 57196.88694604579 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 274 100 0.126 0.00467 0.0329 0.0628 0.0834 0.192 0.221 0.000429 0.000494 274 118 0.117 0.00473 0.0221 0.0634 0.0839 0.15 0.181 0.000334 0.000405 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 274 100 0.0974 0.00333 0.0308 0.0543 0.0704 0.203 0.214 0.000454 0.000478 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 274 57405.930 0.005 0.00497 0.0566 0.156 0.0647 0.086 0.235 0.291 0.000524 0.000649 ! Validation 274 57405.930 0.005 0.00522 0.0884 0.193 0.0665 0.0881 0.308 0.363 0.000688 0.000809 Wall time: 57405.93013397604 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 275 100 0.107 0.00464 0.0142 0.0623 0.0831 0.115 0.145 0.000257 0.000325 275 118 0.117 0.00484 0.0196 0.0643 0.0849 0.12 0.171 0.000269 0.000381 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 275 100 0.085 0.00322 0.0206 0.0536 0.0692 0.162 0.175 0.000363 0.000391 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 275 57614.873 0.005 0.00475 0.0447 0.14 0.0633 0.0841 0.211 0.258 0.000471 0.000577 ! Validation 275 57614.873 0.005 0.00503 0.0319 0.132 0.0653 0.0865 0.175 0.218 0.00039 0.000486 Wall time: 57614.87305710511 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 276 100 0.185 0.00439 0.0971 0.0608 0.0808 0.355 0.38 0.000792 0.000848 276 118 0.169 0.00406 0.088 0.0589 0.0777 0.347 0.362 0.000776 0.000808 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 276 100 0.183 0.00313 0.121 0.0529 0.0682 0.418 0.424 0.000933 0.000946 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 276 57823.815 0.005 0.00456 0.0516 0.143 0.062 0.0823 0.219 0.276 0.000488 0.000617 ! Validation 276 57823.815 0.005 0.00488 0.0928 0.191 0.0643 0.0852 0.323 0.372 0.000721 0.000829 Wall time: 57823.815747376066 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 277 100 0.185 0.00397 0.105 0.0582 0.0768 0.378 0.396 0.000843 0.000884 277 118 0.106 0.00445 0.0174 0.0615 0.0814 0.136 0.161 0.000303 0.000359 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 277 100 0.0821 0.00302 0.0217 0.052 0.067 0.167 0.18 0.000373 0.000401 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 277 58032.767 0.005 0.00441 0.045 0.133 0.0609 0.081 0.208 0.259 0.000464 0.000579 ! Validation 277 58032.767 0.005 0.00475 0.0348 0.13 0.0633 0.084 0.182 0.227 0.000406 0.000508 Wall time: 58032.76767637115 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 278 100 0.154 0.00418 0.0701 0.0593 0.0788 0.3 0.323 0.000669 0.00072 278 118 0.106 0.00476 0.0104 0.0611 0.0841 0.103 0.124 0.000229 0.000277 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 278 100 0.0703 0.00288 0.0127 0.0507 0.0654 0.116 0.138 0.00026 0.000307 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 278 58241.793 0.005 0.00426 0.036 0.121 0.0598 0.0795 0.185 0.232 0.000413 0.000518 ! 
Validation 278 58241.793 0.005 0.00456 0.0292 0.121 0.062 0.0824 0.168 0.208 0.000376 0.000465 Wall time: 58241.79392154515 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 279 100 0.0867 0.00381 0.0105 0.0568 0.0753 0.102 0.125 0.000227 0.000278 279 118 0.123 0.00472 0.0288 0.0623 0.0838 0.178 0.207 0.000398 0.000462 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 279 100 0.0953 0.00276 0.04 0.0496 0.0641 0.232 0.244 0.000518 0.000545 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 279 58450.742 0.005 0.00417 0.0509 0.134 0.0591 0.0787 0.223 0.276 0.000499 0.000615 ! Validation 279 58450.742 0.005 0.00445 0.0445 0.133 0.0612 0.0813 0.209 0.257 0.000466 0.000574 Wall time: 58450.74227597006 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 280 100 0.0977 0.00414 0.015 0.059 0.0784 0.12 0.149 0.000269 0.000333 280 118 0.115 0.0039 0.0372 0.0571 0.0761 0.218 0.235 0.000487 0.000525 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 280 100 0.0639 0.00273 0.00916 0.0494 0.0638 0.0955 0.117 0.000213 0.000261 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 280 58659.673 0.005 0.00404 0.0441 0.125 0.0582 0.0775 0.205 0.256 0.000458 0.000572 ! Validation 280 58659.673 0.005 0.00438 0.0393 0.127 0.0607 0.0807 0.19 0.242 0.000425 0.000539 Wall time: 58659.6736857472 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 281 100 0.137 0.00423 0.0524 0.059 0.0793 0.253 0.279 0.000565 0.000623 281 118 0.103 0.00369 0.0296 0.0551 0.074 0.192 0.21 0.000429 0.000469 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 281 100 0.0909 0.00271 0.0367 0.0493 0.0635 0.222 0.234 0.000496 0.000521 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 281 58868.617 0.005 0.00396 0.0445 0.124 0.0576 0.0767 0.213 0.257 0.000476 0.000575 ! Validation 281 58868.617 0.005 0.00429 0.13 0.216 0.06 0.0798 0.381 0.44 0.000849 0.000982 Wall time: 58868.61719903583 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 282 100 0.0872 0.00366 0.0139 0.0552 0.0738 0.122 0.144 0.000272 0.000321 282 118 0.0759 0.0032 0.0118 0.0525 0.069 0.109 0.132 0.000244 0.000296 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 282 100 0.0681 0.00262 0.0158 0.0484 0.0624 0.136 0.153 0.000303 0.000342 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 282 59077.575 0.005 0.00384 0.0352 0.112 0.0567 0.0756 0.182 0.229 0.000405 0.000512 ! Validation 282 59077.575 0.005 0.00419 0.0994 0.183 0.0592 0.0789 0.318 0.385 0.00071 0.000858 Wall time: 59077.57544527203 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 283 100 0.0851 0.0037 0.0111 0.0552 0.0742 0.104 0.128 0.000231 0.000286 283 118 0.107 0.0046 0.0145 0.0608 0.0827 0.115 0.147 0.000256 0.000327 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 283 100 0.0782 0.00261 0.0261 0.0483 0.0622 0.181 0.197 0.000405 0.000439 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 283 59286.613 0.005 0.00378 0.0352 0.111 0.0561 0.0749 0.187 0.229 0.000418 0.000512 ! 
Validation 283 59286.613 0.005 0.00417 0.0358 0.119 0.0591 0.0788 0.183 0.231 0.000409 0.000515 Wall time: 59286.613851998 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 284 100 0.105 0.00383 0.0285 0.0563 0.0755 0.183 0.206 0.000408 0.00046 284 118 0.119 0.0035 0.049 0.0541 0.0722 0.241 0.27 0.000538 0.000602 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 284 100 0.107 0.00256 0.0555 0.0479 0.0617 0.278 0.287 0.000621 0.000641 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 284 59495.558 0.005 0.00372 0.049 0.123 0.0557 0.0744 0.222 0.27 0.000497 0.000603 ! Validation 284 59495.558 0.005 0.00407 0.0406 0.122 0.0584 0.0778 0.198 0.246 0.000442 0.000548 Wall time: 59495.55886193784 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 285 100 0.0821 0.00353 0.0114 0.054 0.0725 0.0976 0.13 0.000218 0.00029 285 118 0.0955 0.00309 0.0338 0.0511 0.0678 0.202 0.224 0.000452 0.0005 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 285 100 0.119 0.00245 0.0703 0.0468 0.0604 0.314 0.323 0.000701 0.000722 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 285 59704.508 0.005 0.00363 0.0298 0.102 0.055 0.0735 0.169 0.211 0.000377 0.00047 ! Validation 285 59704.508 0.005 0.00395 0.0537 0.133 0.0574 0.0766 0.233 0.283 0.00052 0.000631 Wall time: 59704.50862057321 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 286 100 0.133 0.00372 0.0585 0.056 0.0744 0.271 0.295 0.000604 0.000659 286 118 0.151 0.00388 0.0732 0.0571 0.076 0.286 0.33 0.000638 0.000736 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 286 100 0.0564 0.00253 0.00573 0.0475 0.0614 0.0812 0.0923 0.000181 0.000206 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 286 59913.447 0.005 0.00364 0.0583 0.131 0.0551 0.0736 0.24 0.294 0.000536 0.000657 ! Validation 286 59913.447 0.005 0.00407 0.0297 0.111 0.0584 0.0778 0.168 0.21 0.000375 0.000469 Wall time: 59913.44778268179 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 287 100 0.083 0.00351 0.0129 0.0543 0.0722 0.114 0.138 0.000254 0.000309 287 118 0.179 0.00388 0.102 0.0565 0.0759 0.367 0.389 0.000819 0.000868 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 287 100 0.0659 0.00244 0.0171 0.0468 0.0602 0.141 0.16 0.000315 0.000356 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 287 60122.391 0.005 0.00355 0.0267 0.0977 0.0543 0.0726 0.155 0.197 0.000347 0.000441 ! Validation 287 60122.391 0.005 0.00387 0.101 0.178 0.0568 0.0759 0.318 0.387 0.00071 0.000864 Wall time: 60122.39131149696 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 288 100 0.123 0.00328 0.0574 0.0525 0.0699 0.271 0.292 0.000604 0.000652 288 118 0.0999 0.00301 0.0396 0.0507 0.067 0.22 0.243 0.000491 0.000542 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 288 100 0.0848 0.00239 0.037 0.0462 0.0596 0.221 0.234 0.000494 0.000523 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 288 60331.416 0.005 0.00351 0.0378 0.108 0.054 0.0722 0.187 0.237 0.000417 0.000529 ! 
Validation 288 60331.416 0.005 0.00384 0.108 0.185 0.0567 0.0756 0.353 0.401 0.000789 0.000896 Wall time: 60331.41696097376 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 289 100 0.0821 0.0036 0.00999 0.0544 0.0732 0.0951 0.122 0.000212 0.000272 289 118 0.0935 0.00367 0.0201 0.0549 0.0738 0.154 0.173 0.000344 0.000386 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 289 100 0.0507 0.00231 0.00452 0.0454 0.0586 0.0661 0.082 0.000148 0.000183 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 289 60540.355 0.005 0.00348 0.0448 0.114 0.0538 0.0719 0.21 0.259 0.000469 0.000577 ! Validation 289 60540.355 0.005 0.00375 0.045 0.12 0.0559 0.0747 0.205 0.259 0.000458 0.000578 Wall time: 60540.35543863615 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 290 100 0.0868 0.00344 0.018 0.0536 0.0715 0.142 0.164 0.000316 0.000366 290 118 0.0904 0.00385 0.0134 0.0567 0.0757 0.106 0.141 0.000237 0.000315 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 290 100 0.0493 0.00228 0.00358 0.0452 0.0583 0.0713 0.0729 0.000159 0.000163 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 290 60749.302 0.005 0.00344 0.0316 0.1 0.0535 0.0715 0.175 0.217 0.000391 0.000485 ! Validation 290 60749.302 0.005 0.0037 0.0258 0.0999 0.0555 0.0742 0.156 0.196 0.000349 0.000437 Wall time: 60749.30200986285 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 291 100 0.13 0.00383 0.0537 0.0568 0.0755 0.255 0.283 0.00057 0.000631 291 118 0.113 0.0043 0.0269 0.0594 0.08 0.148 0.2 0.00033 0.000447 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 291 100 0.0795 0.00247 0.0301 0.0471 0.0606 0.199 0.212 0.000443 0.000472 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 291 60958.255 0.005 0.00348 0.0569 0.126 0.0538 0.0719 0.216 0.291 0.000482 0.00065 ! Validation 291 60958.255 0.005 0.00392 0.0645 0.143 0.0575 0.0764 0.257 0.31 0.000573 0.000691 Wall time: 60958.25581282377 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 292 100 0.107 0.00329 0.0408 0.0521 0.0699 0.225 0.246 0.000503 0.00055 292 118 0.093 0.00322 0.0287 0.0524 0.0692 0.19 0.206 0.000424 0.000461 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 292 100 0.054 0.00228 0.00844 0.0451 0.0582 0.086 0.112 0.000192 0.00025 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 292 61167.190 0.005 0.00336 0.0228 0.0899 0.0528 0.0707 0.148 0.184 0.000331 0.000411 ! Validation 292 61167.190 0.005 0.00364 0.0478 0.12 0.055 0.0735 0.217 0.267 0.000484 0.000595 Wall time: 61167.190196414944 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 293 100 0.116 0.00342 0.048 0.0528 0.0713 0.251 0.267 0.000561 0.000596 293 118 0.0839 0.00369 0.0102 0.0549 0.0741 0.103 0.123 0.00023 0.000274 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 293 100 0.0599 0.00222 0.0156 0.0444 0.0574 0.135 0.152 0.000302 0.00034 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 293 61376.235 0.005 0.00329 0.0308 0.0966 0.0522 0.0699 0.171 0.215 0.000382 0.000479 ! 
Validation 293 61376.235 0.005 0.00363 0.0482 0.121 0.055 0.0735 0.223 0.268 0.000499 0.000598 Wall time: 61376.23542910721 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 294 100 0.0805 0.00337 0.0131 0.0527 0.0708 0.117 0.14 0.000261 0.000312 294 118 0.113 0.00333 0.0461 0.0531 0.0704 0.237 0.262 0.000528 0.000585 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 294 100 0.0823 0.00228 0.0367 0.045 0.0582 0.222 0.234 0.000495 0.000522 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 294 61585.350 0.005 0.0033 0.0317 0.0977 0.0523 0.07 0.177 0.217 0.000395 0.000484 ! Validation 294 61585.350 0.005 0.00363 0.146 0.219 0.0549 0.0735 0.408 0.467 0.000912 0.00104 Wall time: 61585.350488700904 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 295 100 0.0883 0.00329 0.0226 0.052 0.0699 0.15 0.183 0.000336 0.000409 295 118 0.0717 0.00313 0.00902 0.0515 0.0683 0.0952 0.116 0.000212 0.000259 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 295 100 0.0492 0.00219 0.00544 0.0441 0.057 0.06 0.0899 0.000134 0.000201 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 295 61794.499 0.005 0.00327 0.037 0.102 0.0521 0.0697 0.188 0.235 0.000419 0.000525 ! Validation 295 61794.499 0.005 0.00353 0.0371 0.108 0.0541 0.0724 0.186 0.235 0.000416 0.000524 Wall time: 61794.499806390144 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 296 100 0.0797 0.00341 0.0115 0.0533 0.0712 0.108 0.131 0.000241 0.000292 296 118 0.0734 0.00305 0.0124 0.0508 0.0674 0.112 0.136 0.00025 0.000303 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 296 100 0.0602 0.0023 0.0142 0.0453 0.0585 0.13 0.145 0.00029 0.000324 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 296 62004.002 0.005 0.00322 0.035 0.0993 0.0517 0.0692 0.188 0.229 0.000419 0.00051 ! Validation 296 62004.002 0.005 0.00364 0.0397 0.112 0.055 0.0735 0.191 0.243 0.000427 0.000542 Wall time: 62004.00275566708 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 297 100 0.0755 0.00313 0.013 0.0511 0.0682 0.111 0.139 0.000248 0.00031 297 118 0.0981 0.00307 0.0367 0.0507 0.0676 0.209 0.234 0.000466 0.000522 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 297 100 0.0512 0.00237 0.00392 0.0459 0.0593 0.0738 0.0763 0.000165 0.00017 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 297 62213.287 0.005 0.00319 0.0307 0.0945 0.0514 0.0689 0.164 0.214 0.000365 0.000477 ! Validation 297 62213.287 0.005 0.00367 0.0409 0.114 0.0553 0.0738 0.19 0.247 0.000424 0.00055 Wall time: 62213.28774872376 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 298 100 0.0857 0.00363 0.0131 0.0547 0.0735 0.109 0.139 0.000243 0.000311 298 118 0.0759 0.00292 0.0174 0.0493 0.0659 0.134 0.161 0.000299 0.000359 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 298 100 0.076 0.00217 0.0326 0.044 0.0568 0.208 0.22 0.000465 0.000491 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 298 62422.528 0.005 0.00321 0.0338 0.0979 0.0516 0.0691 0.181 0.224 0.000404 0.000501 ! 
Validation 298 62422.528 0.005 0.0035 0.0357 0.106 0.054 0.0721 0.191 0.23 0.000427 0.000514 Wall time: 62422.528391208034 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 299 100 0.0994 0.00328 0.0338 0.0518 0.0698 0.2 0.224 0.000447 0.0005 299 118 0.0714 0.00326 0.00623 0.051 0.0696 0.0815 0.0962 0.000182 0.000215 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 299 100 0.0482 0.00228 0.00259 0.0445 0.0582 0.0604 0.0621 0.000135 0.000139 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 299 62631.696 0.005 0.00317 0.0348 0.0981 0.0513 0.0686 0.181 0.228 0.000405 0.000509 ! Validation 299 62631.696 0.005 0.00364 0.0231 0.0958 0.055 0.0736 0.15 0.185 0.000334 0.000413 Wall time: 62631.69670935208 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 300 100 0.0644 0.00279 0.00865 0.0485 0.0644 0.0863 0.113 0.000193 0.000253 300 118 0.0866 0.00301 0.0265 0.0503 0.0669 0.178 0.198 0.000396 0.000443 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 300 100 0.0554 0.00208 0.0137 0.043 0.0557 0.128 0.143 0.000285 0.000319 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 300 62840.851 0.005 0.00313 0.0218 0.0843 0.0509 0.0682 0.14 0.18 0.000312 0.000402 ! Validation 300 62840.851 0.005 0.00341 0.0257 0.0939 0.0532 0.0712 0.158 0.196 0.000353 0.000436 Wall time: 62840.8519390258 ! Best model 300 0.094 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 301 100 0.187 0.00327 0.121 0.0521 0.0698 0.412 0.425 0.00092 0.000948 301 118 0.0873 0.00289 0.0294 0.0487 0.0656 0.185 0.209 0.000414 0.000467 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 301 100 0.0565 0.00211 0.0143 0.0434 0.056 0.133 0.146 0.000297 0.000325 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 301 63050.012 0.005 0.00312 0.0447 0.107 0.0509 0.0681 0.211 0.258 0.00047 0.000576 ! Validation 301 63050.012 0.005 0.00342 0.0511 0.12 0.0534 0.0713 0.228 0.276 0.000508 0.000616 Wall time: 63050.01294920221 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 302 100 0.102 0.003 0.0421 0.0499 0.0667 0.206 0.25 0.000459 0.000558 302 118 0.0819 0.00312 0.0196 0.0512 0.0681 0.15 0.171 0.000336 0.000381 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 302 100 0.0499 0.00225 0.00489 0.0444 0.0579 0.059 0.0852 0.000132 0.00019 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 302 63259.887 0.005 0.00311 0.0343 0.0965 0.0508 0.068 0.181 0.226 0.000405 0.000505 ! Validation 302 63259.887 0.005 0.00353 0.0394 0.11 0.0541 0.0724 0.19 0.242 0.000425 0.00054 Wall time: 63259.88740386395 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 303 100 0.146 0.00302 0.086 0.0504 0.067 0.339 0.358 0.000756 0.000798 303 118 0.136 0.00289 0.0784 0.0492 0.0655 0.326 0.342 0.000728 0.000762 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 303 100 0.0476 0.0022 0.00355 0.0441 0.0573 0.0625 0.0726 0.00014 0.000162 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 303 63468.990 0.005 0.00306 0.0391 0.1 0.0504 0.0675 0.19 0.24 0.000423 0.000536 ! 
Validation 303 63468.990 0.005 0.00354 0.0667 0.138 0.0543 0.0726 0.25 0.315 0.000559 0.000703 Wall time: 63468.990227629896 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 304 100 0.134 0.00305 0.0728 0.05 0.0674 0.309 0.329 0.00069 0.000735 304 118 0.0651 0.00283 0.00849 0.0491 0.0649 0.0914 0.112 0.000204 0.000251 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 304 100 0.0513 0.00217 0.00778 0.0439 0.0569 0.0823 0.108 0.000184 0.00024 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 304 63678.110 0.005 0.00308 0.0308 0.0924 0.0506 0.0677 0.172 0.215 0.000384 0.000479 ! Validation 304 63678.110 0.005 0.00344 0.0491 0.118 0.0535 0.0715 0.22 0.27 0.000491 0.000603 Wall time: 63678.110261932015 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 305 100 0.074 0.00317 0.0107 0.0511 0.0686 0.104 0.126 0.000232 0.000281 305 118 0.389 0.00401 0.308 0.0576 0.0772 0.671 0.677 0.0015 0.00151 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 305 100 0.0673 0.00299 0.00747 0.0512 0.0667 0.0856 0.105 0.000191 0.000235 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 305 63887.257 0.005 0.00302 0.0273 0.0878 0.05 0.067 0.146 0.194 0.000325 0.000434 ! Validation 305 63887.257 0.005 0.0043 0.214 0.3 0.06 0.0799 0.439 0.564 0.00098 0.00126 Wall time: 63887.258041861 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 306 100 0.0648 0.00279 0.00894 0.0485 0.0644 0.0911 0.115 0.000203 0.000257 306 118 0.0748 0.00307 0.0134 0.0503 0.0676 0.115 0.141 0.000256 0.000316 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 306 100 0.0451 0.00209 0.00335 0.0432 0.0558 0.0597 0.0705 0.000133 0.000157 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 306 64096.376 0.005 0.00309 0.0365 0.0983 0.0507 0.0678 0.183 0.233 0.000408 0.000521 ! Validation 306 64096.376 0.005 0.00336 0.0269 0.0941 0.0528 0.0707 0.158 0.2 0.000353 0.000446 Wall time: 64096.37628129218 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 307 100 0.077 0.00304 0.0162 0.0499 0.0672 0.126 0.155 0.00028 0.000346 307 118 0.0789 0.0036 0.0068 0.0543 0.0732 0.0698 0.101 0.000156 0.000224 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 307 100 0.0478 0.00199 0.00798 0.0421 0.0544 0.0816 0.109 0.000182 0.000243 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 307 64305.605 0.005 0.00298 0.0194 0.079 0.0497 0.0665 0.138 0.17 0.000308 0.00038 ! Validation 307 64305.605 0.005 0.00325 0.0434 0.108 0.0519 0.0695 0.204 0.254 0.000456 0.000567 Wall time: 64305.605473155156 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 308 100 0.11 0.0031 0.0484 0.0501 0.0679 0.247 0.268 0.000551 0.000599 308 118 0.086 0.00336 0.0188 0.0522 0.0707 0.15 0.167 0.000336 0.000373 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 308 100 0.045 0.00198 0.00539 0.0419 0.0543 0.0776 0.0896 0.000173 0.0002 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 308 64514.721 0.005 0.00296 0.0242 0.0834 0.0495 0.0663 0.155 0.19 0.000347 0.000424 ! 
Validation 308 64514.721 0.005 0.00324 0.0279 0.0928 0.0519 0.0694 0.161 0.204 0.000359 0.000455 Wall time: 64514.72137806285 ! Best model 308 0.093 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 309 100 0.0882 0.0031 0.0262 0.0504 0.0679 0.173 0.197 0.000386 0.000441 309 118 0.059 0.00271 0.0048 0.0475 0.0635 0.0697 0.0845 0.000156 0.000189 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 309 100 0.0678 0.002 0.0278 0.0421 0.0545 0.194 0.203 0.000432 0.000453 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 309 64724.037 0.005 0.00298 0.0349 0.0946 0.0498 0.0666 0.181 0.229 0.000403 0.00051 ! Validation 309 64724.037 0.005 0.00326 0.0759 0.141 0.052 0.0696 0.284 0.336 0.000635 0.00075 Wall time: 64724.037716482766 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 310 100 0.0698 0.00287 0.0124 0.0491 0.0653 0.116 0.136 0.000259 0.000303 310 118 0.0845 0.00291 0.0262 0.0495 0.0658 0.187 0.197 0.000416 0.000441 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 310 100 0.053 0.00198 0.0135 0.042 0.0542 0.126 0.142 0.000281 0.000316 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 310 64933.141 0.005 0.00296 0.0238 0.083 0.0496 0.0663 0.152 0.188 0.00034 0.00042 ! Validation 310 64933.141 0.005 0.00322 0.0622 0.127 0.0517 0.0692 0.25 0.304 0.000559 0.000679 Wall time: 64933.141800203826 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 311 100 0.0637 0.00277 0.00823 0.048 0.0642 0.0879 0.111 0.000196 0.000247 311 118 0.13 0.00342 0.062 0.0526 0.0713 0.288 0.304 0.000644 0.000678 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 311 100 0.106 0.00202 0.0651 0.0426 0.0548 0.304 0.311 0.000678 0.000695 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 311 65142.500 0.005 0.00295 0.0271 0.0861 0.0495 0.0662 0.161 0.2 0.00036 0.000446 ! Validation 311 65142.500 0.005 0.00328 0.0614 0.127 0.0525 0.0699 0.26 0.302 0.00058 0.000674 Wall time: 65142.50103500392 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 312 100 0.0886 0.00344 0.0198 0.0533 0.0715 0.145 0.172 0.000323 0.000383 312 118 0.107 0.00315 0.0436 0.0512 0.0684 0.241 0.255 0.000538 0.000568 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 312 100 0.0905 0.00191 0.0523 0.0413 0.0533 0.272 0.279 0.000607 0.000622 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 312 65351.743 0.005 0.00294 0.0283 0.0871 0.0494 0.0661 0.166 0.205 0.000371 0.000457 ! Validation 312 65351.743 0.005 0.00317 0.132 0.196 0.0513 0.0687 0.388 0.444 0.000867 0.000991 Wall time: 65351.74343335582 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 313 100 0.0971 0.0028 0.0411 0.0487 0.0645 0.225 0.247 0.000503 0.000552 313 118 0.193 0.0033 0.127 0.0524 0.0701 0.422 0.435 0.000943 0.000972 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 313 100 0.114 0.00269 0.0603 0.0486 0.0633 0.291 0.3 0.00065 0.000669 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 313 65560.864 0.005 0.00297 0.0471 0.106 0.0496 0.0664 0.212 0.263 0.000474 0.000587 ! 
Validation 313 65560.864 0.005 0.0039 0.368 0.446 0.057 0.0761 0.673 0.74 0.0015 0.00165 Wall time: 65560.86486888304 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 314 100 0.0672 0.00291 0.00903 0.0491 0.0658 0.0939 0.116 0.00021 0.000259 314 118 0.0776 0.00321 0.0135 0.0513 0.069 0.124 0.142 0.000276 0.000316 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 314 100 0.042 0.00195 0.00305 0.0418 0.0538 0.0545 0.0674 0.000122 0.00015 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 314 65770.191 0.005 0.00307 0.0279 0.0893 0.0505 0.0675 0.156 0.204 0.000349 0.000456 ! Validation 314 65770.191 0.005 0.00318 0.0383 0.102 0.0514 0.0688 0.184 0.239 0.000411 0.000533 Wall time: 65770.19169327989 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 315 100 0.0654 0.00287 0.00794 0.049 0.0654 0.091 0.109 0.000203 0.000243 315 118 0.0743 0.00285 0.0174 0.0486 0.0651 0.129 0.161 0.000287 0.000359 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 315 100 0.0737 0.00194 0.035 0.0414 0.0537 0.219 0.228 0.000488 0.000509 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 315 65979.061 0.005 0.00289 0.0321 0.0899 0.049 0.0656 0.17 0.219 0.000379 0.000488 ! Validation 315 65979.061 0.005 0.00318 0.0319 0.0955 0.0514 0.0688 0.181 0.218 0.000405 0.000486 Wall time: 65979.06168029597 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 316 100 0.0583 0.00257 0.00695 0.0465 0.0618 0.0823 0.102 0.000184 0.000227 316 118 0.0643 0.00284 0.00754 0.048 0.065 0.0875 0.106 0.000195 0.000236 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 316 100 0.0436 0.00191 0.00545 0.0412 0.0532 0.0762 0.09 0.00017 0.000201 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 316 66188.035 0.005 0.00289 0.0241 0.0818 0.049 0.0655 0.152 0.19 0.000339 0.000423 ! Validation 316 66188.035 0.005 0.00312 0.0353 0.0978 0.0509 0.0681 0.179 0.229 0.0004 0.000512 Wall time: 66188.03532614885 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 317 100 0.0758 0.00306 0.0146 0.0503 0.0675 0.122 0.147 0.000271 0.000329 317 118 0.0763 0.00292 0.0179 0.0492 0.0659 0.129 0.163 0.000287 0.000364 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 317 100 0.0496 0.00206 0.00847 0.0424 0.0553 0.0992 0.112 0.000221 0.000251 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 317 66396.927 0.005 0.00284 0.0238 0.0806 0.0486 0.065 0.148 0.188 0.000331 0.00042 ! Validation 317 66396.927 0.005 0.00331 0.0254 0.0916 0.0525 0.0702 0.16 0.194 0.000356 0.000434 Wall time: 66396.92784429388 ! Best model 317 0.092 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 318 100 0.076 0.0031 0.0141 0.0506 0.0679 0.125 0.145 0.00028 0.000324 318 118 0.0681 0.00309 0.00631 0.0514 0.0678 0.078 0.0969 0.000174 0.000216 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 318 100 0.0641 0.00212 0.0216 0.0431 0.0562 0.171 0.179 0.000382 0.0004 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 318 66605.828 0.005 0.00287 0.0295 0.0868 0.0488 0.0653 0.168 0.21 0.000375 0.000469 ! 
Validation 318 66605.828 0.005 0.00339 0.0326 0.1 0.0533 0.071 0.185 0.22 0.000413 0.000492 Wall time: 66605.82816398796 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 319 100 0.0613 0.0027 0.00738 0.0474 0.0633 0.0862 0.105 0.000192 0.000234 319 118 0.0627 0.00301 0.0026 0.05 0.0669 0.0445 0.0622 9.93e-05 0.000139 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 319 100 0.0406 0.00186 0.00328 0.0406 0.0527 0.0568 0.0698 0.000127 0.000156 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 319 66814.710 0.005 0.00317 0.0357 0.099 0.0508 0.0686 0.162 0.231 0.000362 0.000516 ! Validation 319 66814.710 0.005 0.00309 0.0266 0.0884 0.0507 0.0678 0.16 0.199 0.000358 0.000444 Wall time: 66814.71010305686 ! Best model 319 0.088 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 320 100 0.0918 0.00283 0.0352 0.0485 0.0649 0.21 0.229 0.000468 0.000511 320 118 0.115 0.00285 0.0579 0.0498 0.0651 0.273 0.294 0.00061 0.000655 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 320 100 0.0651 0.00199 0.0254 0.042 0.0543 0.187 0.194 0.000417 0.000434 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 320 67023.611 0.005 0.00292 0.0392 0.0976 0.0493 0.0659 0.191 0.241 0.000427 0.000538 ! Validation 320 67023.611 0.005 0.00318 0.108 0.171 0.0515 0.0688 0.343 0.4 0.000766 0.000893 Wall time: 67023.61191418208 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 321 100 0.104 0.00316 0.0408 0.0515 0.0685 0.227 0.246 0.000506 0.00055 321 118 0.071 0.0031 0.009 0.0503 0.0679 0.0981 0.116 0.000219 0.000258 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 321 100 0.0422 0.00186 0.00489 0.0409 0.0527 0.075 0.0853 0.000167 0.00019 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 321 67232.578 0.005 0.00284 0.0205 0.0772 0.0485 0.0649 0.141 0.175 0.000316 0.00039 ! Validation 321 67232.578 0.005 0.00307 0.0225 0.084 0.0505 0.0676 0.144 0.183 0.000321 0.000409 Wall time: 67232.57885673689 ! Best model 321 0.084 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 322 100 0.0788 0.00251 0.0286 0.0459 0.0611 0.18 0.206 0.000402 0.00046 322 118 0.0565 0.00246 0.00731 0.045 0.0605 0.0855 0.104 0.000191 0.000233 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 322 100 0.0399 0.00185 0.00297 0.0406 0.0524 0.0634 0.0665 0.000142 0.000148 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 322 67441.477 0.005 0.0028 0.0269 0.0829 0.0483 0.0646 0.165 0.2 0.000369 0.000447 ! Validation 322 67441.477 0.005 0.00306 0.0323 0.0935 0.0504 0.0674 0.171 0.219 0.000382 0.000489 Wall time: 67441.47789094085 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 323 100 0.0675 0.00292 0.00902 0.0493 0.0659 0.0949 0.116 0.000212 0.000259 323 118 0.0745 0.00278 0.0189 0.0484 0.0643 0.144 0.168 0.000322 0.000374 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 323 100 0.0697 0.00189 0.0319 0.0408 0.053 0.212 0.218 0.000473 0.000486 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 323 67650.350 0.005 0.00279 0.0247 0.0805 0.0481 0.0644 0.149 0.192 0.000333 0.000428 ! 
Validation 323 67650.350 0.005 0.00313 0.0369 0.0995 0.0512 0.0682 0.189 0.234 0.000421 0.000523 Wall time: 67650.35043345904 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 324 100 0.0639 0.00286 0.00674 0.0487 0.0652 0.0831 0.1 0.000185 0.000223 324 118 0.123 0.00282 0.0663 0.0485 0.0648 0.298 0.314 0.000665 0.000701 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 324 100 0.0911 0.00183 0.0545 0.0404 0.0522 0.278 0.285 0.00062 0.000636 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 324 67859.240 0.005 0.00286 0.0335 0.0907 0.0488 0.0652 0.175 0.222 0.00039 0.000497 ! Validation 324 67859.240 0.005 0.00302 0.128 0.188 0.0501 0.067 0.394 0.435 0.000879 0.000972 Wall time: 67859.24028320378 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 325 100 0.0646 0.00284 0.0079 0.0483 0.0649 0.0825 0.108 0.000184 0.000242 325 118 0.0501 0.00238 0.00251 0.0449 0.0595 0.0468 0.0611 0.000104 0.000136 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 325 100 0.0385 0.00181 0.0023 0.04 0.0519 0.056 0.0585 0.000125 0.000131 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 325 68068.182 0.005 0.00279 0.0241 0.0799 0.0482 0.0644 0.149 0.19 0.000332 0.000424 ! Validation 325 68068.182 0.005 0.00299 0.0365 0.0963 0.0498 0.0667 0.177 0.233 0.000395 0.00052 Wall time: 68068.18249570206 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 326 100 0.196 0.00285 0.139 0.0486 0.0651 0.441 0.455 0.000984 0.00102 326 118 0.059 0.00263 0.0064 0.0471 0.0626 0.084 0.0976 0.000188 0.000218 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 326 100 0.0899 0.00209 0.048 0.0433 0.0558 0.257 0.267 0.000573 0.000596 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 326 68277.261 0.005 0.00277 0.0392 0.0945 0.048 0.0642 0.187 0.242 0.000418 0.00054 ! Validation 326 68277.261 0.005 0.00324 0.194 0.259 0.0521 0.0694 0.485 0.537 0.00108 0.0012 Wall time: 68277.26191067882 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 327 100 0.0694 0.0026 0.0174 0.0468 0.0622 0.137 0.161 0.000305 0.000359 327 118 0.0845 0.00313 0.0218 0.0502 0.0683 0.152 0.18 0.00034 0.000402 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 327 100 0.0431 0.00201 0.00294 0.0421 0.0546 0.0591 0.0661 0.000132 0.000148 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 327 68486.180 0.005 0.00274 0.0226 0.0773 0.0477 0.0638 0.148 0.183 0.00033 0.000409 ! Validation 327 68486.180 0.005 0.00319 0.0608 0.125 0.0517 0.0689 0.225 0.301 0.000502 0.000671 Wall time: 68486.17998484708 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 328 100 0.136 0.0027 0.0818 0.048 0.0634 0.337 0.349 0.000751 0.000779 328 118 0.0783 0.00307 0.0169 0.05 0.0676 0.124 0.158 0.000276 0.000354 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 328 100 0.0759 0.00194 0.0371 0.0417 0.0537 0.222 0.235 0.000496 0.000524 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 328 68695.103 0.005 0.00279 0.0308 0.0867 0.0482 0.0644 0.169 0.214 0.000376 0.000478 ! 
Validation 328 68695.103 0.005 0.00312 0.135 0.198 0.0511 0.0682 0.398 0.448 0.000888 0.001 Wall time: 68695.1036872482 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 329 100 0.0597 0.00271 0.00542 0.0474 0.0635 0.0742 0.0898 0.000166 0.0002 329 118 0.0545 0.00233 0.00802 0.0443 0.0588 0.0936 0.109 0.000209 0.000244 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 329 100 0.0391 0.00181 0.00294 0.0401 0.0519 0.0633 0.0662 0.000141 0.000148 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 329 68904.569 0.005 0.00291 0.0389 0.097 0.0492 0.0658 0.184 0.241 0.000411 0.000538 ! Validation 329 68904.569 0.005 0.00299 0.0267 0.0865 0.0499 0.0667 0.16 0.199 0.000356 0.000445 Wall time: 68904.56951636402 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 330 100 0.0902 0.00276 0.035 0.0479 0.0641 0.205 0.228 0.000458 0.000509 330 118 0.0761 0.00321 0.0118 0.0521 0.0691 0.116 0.132 0.000258 0.000296 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 330 100 0.0436 0.00193 0.00507 0.0415 0.0535 0.0634 0.0869 0.000142 0.000194 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 330 69113.503 0.005 0.00276 0.0258 0.0809 0.0479 0.064 0.157 0.196 0.000351 0.000438 ! Validation 330 69113.503 0.005 0.00311 0.0298 0.092 0.051 0.068 0.172 0.211 0.000385 0.00047 Wall time: 69113.50315854186 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 331 100 0.0758 0.00267 0.0224 0.0471 0.063 0.158 0.183 0.000353 0.000407 331 118 0.143 0.00325 0.0777 0.052 0.0695 0.325 0.34 0.000725 0.000759 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 331 100 0.0481 0.00216 0.00487 0.0433 0.0567 0.072 0.0851 0.000161 0.00019 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 331 69322.536 0.005 0.00278 0.0275 0.0832 0.0481 0.0643 0.159 0.201 0.000355 0.000449 ! Validation 331 69322.536 0.005 0.00333 0.0295 0.0961 0.0527 0.0704 0.167 0.209 0.000374 0.000467 Wall time: 69322.53673520917 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 332 100 0.0584 0.00264 0.00562 0.0468 0.0626 0.0778 0.0915 0.000174 0.000204 332 118 0.0575 0.00266 0.00439 0.047 0.0628 0.0669 0.0808 0.000149 0.00018 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 332 100 0.0374 0.00173 0.00268 0.0392 0.0508 0.0527 0.0632 0.000118 0.000141 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 332 69531.449 0.005 0.00277 0.0204 0.0758 0.048 0.0642 0.141 0.175 0.000314 0.00039 ! Validation 332 69531.449 0.005 0.00289 0.0413 0.099 0.0489 0.0655 0.188 0.248 0.000419 0.000553 Wall time: 69531.44985681586 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 333 100 0.0608 0.00262 0.00843 0.0467 0.0624 0.0899 0.112 0.000201 0.00025 333 118 0.146 0.0025 0.0965 0.046 0.0609 0.37 0.379 0.000826 0.000846 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 333 100 0.0688 0.0019 0.0307 0.0412 0.0532 0.201 0.214 0.000448 0.000477 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 333 69740.399 0.005 0.00262 0.0197 0.0721 0.0466 0.0624 0.134 0.169 0.0003 0.000377 ! 
Validation 333 69740.399 0.005 0.00306 0.0324 0.0936 0.0505 0.0675 0.173 0.219 0.000386 0.00049 Wall time: 69740.3992812722 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 334 100 0.0698 0.00247 0.0204 0.0456 0.0606 0.152 0.174 0.00034 0.000389 334 118 0.0559 0.00246 0.00676 0.0454 0.0605 0.0843 0.1 0.000188 0.000224 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 334 100 0.0466 0.00177 0.0112 0.0397 0.0513 0.116 0.129 0.000259 0.000288 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 334 69949.473 0.005 0.00282 0.0378 0.0942 0.0485 0.0648 0.181 0.238 0.000404 0.000531 ! Validation 334 69949.473 0.005 0.00291 0.0267 0.0849 0.0492 0.0658 0.156 0.199 0.000348 0.000445 Wall time: 69949.47369810985 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 335 100 0.0602 0.00273 0.00564 0.0476 0.0637 0.0751 0.0916 0.000168 0.000204 335 118 0.0589 0.00255 0.00787 0.046 0.0616 0.086 0.108 0.000192 0.000242 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 335 100 0.0403 0.00171 0.00603 0.039 0.0505 0.0702 0.0947 0.000157 0.000211 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 335 70158.638 0.005 0.00272 0.0253 0.0796 0.0476 0.0636 0.15 0.194 0.000334 0.000434 ! Validation 335 70158.638 0.005 0.00287 0.0442 0.102 0.0488 0.0653 0.199 0.256 0.000445 0.000572 Wall time: 70158.63845110219 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 336 100 0.058 0.00247 0.00868 0.0455 0.0606 0.091 0.114 0.000203 0.000254 336 118 0.147 0.00277 0.0915 0.0483 0.0642 0.332 0.369 0.000741 0.000824 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 336 100 0.0456 0.00213 0.00296 0.0435 0.0563 0.0606 0.0663 0.000135 0.000148 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 336 70367.774 0.005 0.00265 0.0288 0.0818 0.0469 0.0627 0.164 0.206 0.000366 0.000459 ! Validation 336 70367.774 0.005 0.00325 0.0743 0.139 0.0522 0.0695 0.256 0.332 0.000571 0.000742 Wall time: 70367.7742339489 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 337 100 0.132 0.0031 0.0701 0.0505 0.0679 0.302 0.323 0.000675 0.000721 337 118 0.0681 0.00292 0.00972 0.05 0.0659 0.0926 0.12 0.000207 0.000268 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 337 100 0.0511 0.00185 0.0141 0.0405 0.0525 0.13 0.145 0.000291 0.000323 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 337 70578.864 0.005 0.00272 0.0254 0.0798 0.0476 0.0636 0.155 0.195 0.000347 0.000435 ! Validation 337 70578.864 0.005 0.00299 0.0745 0.134 0.0499 0.0666 0.283 0.333 0.000632 0.000743 Wall time: 70578.86483396078 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 338 100 0.0914 0.00303 0.0308 0.0501 0.0671 0.19 0.214 0.000423 0.000478 338 118 0.0809 0.00293 0.0224 0.049 0.066 0.16 0.183 0.000358 0.000408 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 338 100 0.0384 0.00177 0.00299 0.0397 0.0513 0.0602 0.0667 0.000134 0.000149 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 338 70788.033 0.005 0.00275 0.0387 0.0937 0.0479 0.0639 0.188 0.24 0.000419 0.000536 ! 
Validation 338 70788.033 0.005 0.00292 0.0383 0.0967 0.0494 0.0659 0.179 0.239 0.000399 0.000532 Wall time: 70788.03352279915 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 339 100 0.0515 0.00234 0.00461 0.0444 0.059 0.066 0.0828 0.000147 0.000185 339 118 0.0698 0.0026 0.0178 0.0463 0.0622 0.139 0.163 0.000311 0.000363 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 339 100 0.0482 0.00182 0.0117 0.04 0.0521 0.115 0.132 0.000256 0.000294 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 339 70997.229 0.005 0.00257 0.0128 0.0643 0.0462 0.0619 0.111 0.138 0.000249 0.000308 ! Validation 339 70997.229 0.005 0.00292 0.0463 0.105 0.0493 0.0659 0.212 0.262 0.000473 0.000586 Wall time: 70997.22956980299 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 340 100 0.0997 0.00262 0.0474 0.0468 0.0624 0.25 0.265 0.000557 0.000592 340 118 0.132 0.00308 0.0704 0.0501 0.0676 0.293 0.323 0.000653 0.000722 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 340 100 0.0794 0.00261 0.0272 0.0479 0.0623 0.19 0.201 0.000425 0.000449 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 340 71206.318 0.005 0.00288 0.0374 0.0949 0.0488 0.0654 0.182 0.235 0.000406 0.000525 ! Validation 340 71206.318 0.005 0.00376 0.0666 0.142 0.0563 0.0747 0.237 0.315 0.000528 0.000703 Wall time: 71206.31876659999 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 341 100 0.0564 0.00237 0.00903 0.0445 0.0594 0.0958 0.116 0.000214 0.000259 341 118 0.05 0.00219 0.00614 0.0429 0.0571 0.0736 0.0956 0.000164 0.000213 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 341 100 0.0534 0.00172 0.0189 0.0391 0.0506 0.158 0.168 0.000353 0.000374 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 341 71415.289 0.005 0.0026 0.0156 0.0675 0.0465 0.0622 0.122 0.153 0.000272 0.000341 ! Validation 341 71415.289 0.005 0.00287 0.115 0.172 0.0489 0.0653 0.349 0.413 0.00078 0.000922 Wall time: 71415.28995144088 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 342 100 0.165 0.00321 0.101 0.0523 0.0691 0.351 0.388 0.000784 0.000865 342 118 0.0619 0.00247 0.0125 0.0457 0.0606 0.102 0.136 0.000229 0.000304 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 342 100 0.0501 0.0019 0.0122 0.0411 0.0531 0.123 0.134 0.000275 0.0003 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 342 71624.242 0.005 0.0028 0.0397 0.0958 0.0483 0.0646 0.193 0.244 0.000431 0.000544 ! Validation 342 71624.242 0.005 0.00304 0.05 0.111 0.0505 0.0673 0.223 0.273 0.000497 0.000608 Wall time: 71624.24266473297 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 343 100 0.0714 0.0026 0.0194 0.0465 0.0622 0.152 0.17 0.00034 0.000379 343 118 0.0568 0.00236 0.00972 0.0442 0.0592 0.0991 0.12 0.000221 0.000268 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 343 100 0.102 0.00166 0.069 0.0383 0.0496 0.315 0.32 0.000704 0.000715 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 343 71833.226 0.005 0.00257 0.0164 0.0678 0.0463 0.0619 0.126 0.156 0.000281 0.000349 ! 
Validation 343 71833.226 0.005 0.00281 0.0997 0.156 0.0483 0.0646 0.35 0.385 0.000781 0.000859 Wall time: 71833.22643150575 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 344 100 0.0645 0.00234 0.0177 0.0445 0.059 0.145 0.162 0.000323 0.000362 344 118 0.0553 0.00198 0.0157 0.0414 0.0543 0.122 0.153 0.000272 0.000341 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 344 100 0.036 0.00164 0.00325 0.038 0.0493 0.052 0.0695 0.000116 0.000155 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 344 72042.205 0.005 0.00255 0.023 0.074 0.0461 0.0616 0.144 0.185 0.000322 0.000413 ! Validation 344 72042.205 0.005 0.00275 0.0303 0.0852 0.0478 0.0639 0.17 0.212 0.000379 0.000474 Wall time: 72042.20568758808 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 345 100 0.066 0.00282 0.00959 0.0484 0.0648 0.0998 0.119 0.000223 0.000267 345 118 0.0664 0.00301 0.00622 0.0492 0.0669 0.0745 0.0962 0.000166 0.000215 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 345 100 0.0365 0.00173 0.00196 0.0391 0.0507 0.0471 0.054 0.000105 0.00012 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 345 72251.331 0.005 0.003 0.0398 0.0998 0.0497 0.0668 0.19 0.244 0.000424 0.000545 ! Validation 345 72251.331 0.005 0.00287 0.0331 0.0904 0.0489 0.0653 0.176 0.222 0.000393 0.000495 Wall time: 72251.33196182596 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 346 100 0.0658 0.00257 0.0144 0.0466 0.0618 0.127 0.147 0.000283 0.000327 346 118 0.0672 0.00263 0.0146 0.0468 0.0625 0.119 0.147 0.000265 0.000329 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 346 100 0.0397 0.00183 0.00311 0.0404 0.0521 0.0527 0.068 0.000118 0.000152 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 346 72460.261 0.005 0.00254 0.0218 0.0726 0.046 0.0615 0.141 0.18 0.000316 0.000402 ! Validation 346 72460.261 0.005 0.00292 0.0908 0.149 0.0495 0.0659 0.289 0.367 0.000646 0.00082 Wall time: 72460.26160100801 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 347 100 0.0636 0.00252 0.0132 0.0457 0.0612 0.122 0.14 0.000272 0.000313 347 118 0.0654 0.00254 0.0147 0.046 0.0614 0.114 0.148 0.000255 0.00033 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 347 100 0.0425 0.00174 0.00765 0.0391 0.0509 0.0937 0.107 0.000209 0.000238 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 347 72669.383 0.005 0.00255 0.0208 0.0718 0.0461 0.0616 0.139 0.176 0.000311 0.000393 ! Validation 347 72669.383 0.005 0.00287 0.0567 0.114 0.049 0.0653 0.233 0.29 0.000521 0.000648 Wall time: 72669.38307541003 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 348 100 0.0584 0.00253 0.00788 0.0461 0.0613 0.0912 0.108 0.000204 0.000242 348 118 0.129 0.00317 0.0661 0.0512 0.0686 0.305 0.314 0.000682 0.0007 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 348 100 0.117 0.00437 0.0296 0.0612 0.0806 0.206 0.21 0.000461 0.000468 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 348 72878.471 0.005 0.00263 0.0321 0.0847 0.0467 0.0625 0.171 0.218 0.000381 0.000486 ! 
Validation 348 72878.471 0.005 0.00559 0.0378 0.15 0.0687 0.0911 0.196 0.237 0.000438 0.000529 Wall time: 72878.47163355118 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 349 100 0.19 0.00257 0.138 0.0464 0.0619 0.443 0.453 0.000989 0.00101 349 118 0.109 0.00254 0.0585 0.0463 0.0614 0.279 0.295 0.000623 0.000658 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 349 100 0.0554 0.00181 0.0191 0.0401 0.0519 0.16 0.169 0.000357 0.000376 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 349 73087.406 0.005 0.00268 0.0313 0.0849 0.0472 0.0631 0.17 0.215 0.00038 0.00048 ! Validation 349 73087.406 0.005 0.00292 0.0942 0.153 0.0493 0.0659 0.31 0.374 0.000693 0.000836 Wall time: 73087.40634807991 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 350 100 0.0547 0.00248 0.00508 0.0454 0.0608 0.0687 0.0869 0.000153 0.000194 350 118 0.0484 0.00211 0.00631 0.0419 0.056 0.0799 0.0969 0.000178 0.000216 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 350 100 0.0346 0.00163 0.00202 0.038 0.0492 0.0502 0.0548 0.000112 0.000122 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 350 73296.436 0.005 0.0025 0.0149 0.0649 0.0456 0.061 0.115 0.149 0.000256 0.000333 ! Validation 350 73296.436 0.005 0.0027 0.0322 0.0862 0.0474 0.0634 0.165 0.219 0.000369 0.000488 Wall time: 73296.43652035482 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 351 100 0.184 0.00294 0.125 0.0502 0.0662 0.415 0.432 0.000926 0.000964 351 118 0.0527 0.00235 0.00579 0.0448 0.0591 0.0738 0.0928 0.000165 0.000207 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 351 100 0.0487 0.00176 0.0134 0.0394 0.0512 0.131 0.141 0.000292 0.000315 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 351 73505.390 0.005 0.00267 0.0412 0.0946 0.047 0.063 0.187 0.248 0.000418 0.000554 ! Validation 351 73505.390 0.005 0.00286 0.0266 0.0839 0.0489 0.0653 0.164 0.199 0.000366 0.000444 Wall time: 73505.3907473199 ! Best model 351 0.084 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 352 100 0.0617 0.0025 0.0117 0.0457 0.061 0.113 0.132 0.000252 0.000294 352 118 0.0721 0.00211 0.0299 0.0421 0.056 0.189 0.211 0.000423 0.000471 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 352 100 0.0606 0.00164 0.0277 0.0381 0.0494 0.195 0.203 0.000436 0.000453 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 352 73714.346 0.005 0.00249 0.017 0.0668 0.0456 0.0609 0.125 0.158 0.00028 0.000354 ! Validation 352 73714.346 0.005 0.00276 0.0833 0.138 0.0479 0.064 0.305 0.352 0.000681 0.000785 Wall time: 73714.34663962387 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 353 100 0.055 0.00239 0.00732 0.0444 0.0596 0.0829 0.104 0.000185 0.000233 353 118 0.113 0.00239 0.0651 0.0446 0.0596 0.299 0.311 0.000668 0.000695 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 353 100 0.0605 0.00161 0.0284 0.0378 0.0489 0.199 0.205 0.000444 0.000459 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 353 73923.290 0.005 0.00256 0.0273 0.0784 0.0462 0.0617 0.156 0.201 0.000348 0.000448 ! 
Validation 353 73923.290 0.005 0.0027 0.0993 0.153 0.0474 0.0634 0.341 0.384 0.000762 0.000858 Wall time: 73923.2900205329 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 354 100 0.0526 0.00234 0.0058 0.044 0.059 0.0739 0.0929 0.000165 0.000207 354 118 0.0498 0.00223 0.0052 0.0438 0.0576 0.0801 0.0879 0.000179 0.000196 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 354 100 0.0372 0.00159 0.00547 0.0374 0.0486 0.0724 0.0902 0.000162 0.000201 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 354 74132.452 0.005 0.00252 0.0233 0.0738 0.0458 0.0613 0.15 0.187 0.000335 0.000417 ! Validation 354 74132.452 0.005 0.00266 0.0409 0.0941 0.047 0.0629 0.188 0.247 0.00042 0.00055 Wall time: 74132.45287633594 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 355 100 0.0639 0.00257 0.0124 0.0461 0.0618 0.113 0.136 0.000252 0.000303 355 118 0.0477 0.00217 0.00436 0.0429 0.0568 0.0674 0.0805 0.00015 0.00018 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 355 100 0.0362 0.00166 0.00299 0.0383 0.0497 0.0595 0.0667 0.000133 0.000149 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 355 74341.575 0.005 0.00244 0.0198 0.0686 0.045 0.0602 0.134 0.172 0.000299 0.000385 ! Validation 355 74341.575 0.005 0.00272 0.031 0.0855 0.0476 0.0636 0.161 0.215 0.000359 0.000479 Wall time: 74341.57606091816 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 356 100 0.0733 0.00234 0.0265 0.0442 0.059 0.179 0.198 0.0004 0.000443 356 118 0.0605 0.00232 0.014 0.0443 0.0588 0.132 0.144 0.000294 0.000322 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 356 100 0.0398 0.00157 0.00839 0.0373 0.0484 0.0958 0.112 0.000214 0.000249 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 356 74550.734 0.005 0.00239 0.0145 0.0622 0.0445 0.0596 0.118 0.147 0.000263 0.000328 ! Validation 356 74550.734 0.005 0.00262 0.0212 0.0737 0.0467 0.0625 0.14 0.177 0.000313 0.000396 Wall time: 74550.734663649 ! Best model 356 0.074 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 357 100 0.0513 0.00237 0.0039 0.0446 0.0594 0.0608 0.0761 0.000136 0.00017 357 118 0.0474 0.00212 0.00502 0.0424 0.0562 0.0729 0.0864 0.000163 0.000193 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 357 100 0.0355 0.00159 0.00373 0.0374 0.0486 0.0591 0.0745 0.000132 0.000166 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 357 74760.687 0.005 0.00325 0.0544 0.119 0.0509 0.0696 0.202 0.285 0.000451 0.000637 ! Validation 357 74760.687 0.005 0.00268 0.0294 0.0829 0.0472 0.0631 0.167 0.209 0.000373 0.000466 Wall time: 74760.6880117231 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 358 100 0.052 0.00233 0.00538 0.0443 0.0589 0.0728 0.0895 0.000163 0.0002 358 118 0.0715 0.00184 0.0347 0.0397 0.0523 0.213 0.227 0.000475 0.000507 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 358 100 0.0645 0.00162 0.0321 0.0379 0.0491 0.213 0.219 0.000475 0.000488 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 358 74969.809 0.005 0.00239 0.0155 0.0632 0.0446 0.0596 0.121 0.151 0.000269 0.000338 ! 
Validation 358 74969.809 0.005 0.00269 0.0299 0.0838 0.0474 0.0633 0.17 0.211 0.000379 0.00047 Wall time: 74969.80986162694 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 359 100 0.062 0.00249 0.0122 0.0456 0.0609 0.109 0.135 0.000243 0.000301 359 118 0.0555 0.00227 0.0102 0.0439 0.0581 0.106 0.123 0.000236 0.000275 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 359 100 0.0388 0.00162 0.00646 0.0378 0.049 0.0861 0.098 0.000192 0.000219 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 359 75188.122 0.005 0.00246 0.0247 0.0738 0.0453 0.0605 0.155 0.192 0.000346 0.000429 ! Validation 359 75188.122 0.005 0.00267 0.0645 0.118 0.0472 0.063 0.25 0.31 0.000558 0.000691 Wall time: 75188.12274384312 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 360 100 0.0646 0.00253 0.014 0.0461 0.0614 0.124 0.144 0.000276 0.000322 360 118 0.0583 0.00255 0.00732 0.046 0.0616 0.0841 0.104 0.000188 0.000233 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 360 100 0.0376 0.00159 0.00576 0.0374 0.0487 0.0772 0.0925 0.000172 0.000207 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 360 75397.431 0.005 0.00238 0.0188 0.0664 0.0445 0.0595 0.132 0.167 0.000295 0.000374 ! Validation 360 75397.431 0.005 0.00267 0.0196 0.0731 0.0471 0.0631 0.138 0.171 0.000309 0.000381 Wall time: 75397.43113617506 ! Best model 360 0.073 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 361 100 0.0673 0.00247 0.0179 0.0447 0.0606 0.145 0.163 0.000324 0.000364 361 118 0.104 0.00458 0.0119 0.0618 0.0825 0.114 0.133 0.000254 0.000297 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 361 100 0.126 0.00249 0.0761 0.046 0.0609 0.331 0.336 0.000738 0.000751 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 361 75606.614 0.005 0.00249 0.0292 0.079 0.0454 0.0607 0.165 0.209 0.000369 0.000466 ! Validation 361 75606.614 0.005 0.00352 0.0752 0.146 0.0541 0.0724 0.283 0.334 0.000632 0.000747 Wall time: 75606.61459200783 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 362 100 0.0582 0.00235 0.0113 0.0441 0.0591 0.11 0.13 0.000246 0.000289 362 118 0.073 0.00233 0.0265 0.0441 0.0588 0.173 0.199 0.000387 0.000443 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 362 100 0.0536 0.00156 0.0225 0.037 0.0481 0.176 0.183 0.000392 0.000409 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 362 75815.770 0.005 0.0026 0.0206 0.0725 0.0463 0.0622 0.133 0.175 0.000296 0.00039 ! Validation 362 75815.770 0.005 0.00261 0.107 0.159 0.0465 0.0623 0.346 0.398 0.000771 0.000889 Wall time: 75815.77005928615 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 363 100 0.0492 0.00233 0.00252 0.0441 0.0589 0.0492 0.0613 0.00011 0.000137 363 118 0.0597 0.00277 0.00416 0.0459 0.0642 0.0641 0.0787 0.000143 0.000176 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 363 100 0.034 0.00153 0.00331 0.0368 0.0477 0.0605 0.0702 0.000135 0.000157 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 363 76025.633 0.005 0.00244 0.0209 0.0697 0.045 0.0602 0.136 0.177 0.000303 0.000394 ! 
Validation 363 76025.633 0.005 0.00257 0.0229 0.0744 0.0462 0.0618 0.143 0.185 0.000319 0.000412 Wall time: 76025.6330835321 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 364 100 0.0663 0.00243 0.0177 0.0449 0.0601 0.141 0.162 0.000315 0.000362 364 118 0.065 0.00238 0.0175 0.0448 0.0594 0.139 0.161 0.000311 0.00036 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 364 100 0.0345 0.00163 0.00185 0.0379 0.0492 0.0448 0.0524 0.0001 0.000117 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 364 76247.295 0.005 0.00252 0.0369 0.0873 0.0459 0.0612 0.177 0.235 0.000396 0.000524 ! Validation 364 76247.295 0.005 0.00268 0.0267 0.0803 0.0473 0.0631 0.156 0.199 0.000348 0.000445 Wall time: 76247.29514854075 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 365 100 0.0565 0.00244 0.00764 0.0449 0.0603 0.0868 0.107 0.000194 0.000238 365 118 0.0504 0.00222 0.00593 0.0433 0.0575 0.0717 0.0939 0.00016 0.00021 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 365 100 0.0472 0.00154 0.0163 0.037 0.0479 0.148 0.156 0.00033 0.000348 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 365 76456.523 0.005 0.00237 0.0155 0.0629 0.0444 0.0593 0.122 0.152 0.000272 0.00034 ! Validation 365 76456.523 0.005 0.00262 0.0241 0.0765 0.0467 0.0624 0.149 0.189 0.000333 0.000423 Wall time: 76456.52337316284 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 366 100 0.049 0.00222 0.00461 0.043 0.0575 0.0699 0.0828 0.000156 0.000185 366 118 0.0719 0.00269 0.0181 0.0466 0.0632 0.147 0.164 0.000329 0.000366 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 366 100 0.0401 0.00153 0.00949 0.0367 0.0477 0.107 0.119 0.00024 0.000265 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 366 76665.757 0.005 0.00232 0.0145 0.0609 0.0439 0.0587 0.12 0.147 0.000267 0.000328 ! Validation 366 76665.757 0.005 0.00256 0.02 0.0713 0.0461 0.0617 0.136 0.172 0.000305 0.000385 Wall time: 76665.7576631438 ! Best model 366 0.071 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 367 100 0.0704 0.0024 0.0225 0.0448 0.0597 0.154 0.183 0.000343 0.000408 367 118 0.0501 0.00216 0.00686 0.0422 0.0567 0.0713 0.101 0.000159 0.000225 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 367 100 0.0467 0.00152 0.0163 0.0366 0.0475 0.146 0.156 0.000327 0.000348 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 367 76874.985 0.005 0.00235 0.023 0.07 0.0443 0.0591 0.148 0.185 0.00033 0.000414 ! Validation 367 76874.985 0.005 0.00254 0.0645 0.115 0.0459 0.0614 0.261 0.31 0.000584 0.000691 Wall time: 76874.98572976794 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 368 100 0.0514 0.00222 0.00688 0.0431 0.0575 0.0795 0.101 0.000177 0.000226 368 118 0.0772 0.00298 0.0176 0.0493 0.0666 0.148 0.162 0.000331 0.000361 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 368 100 0.0757 0.00148 0.0461 0.0361 0.0469 0.259 0.262 0.000578 0.000585 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 368 77084.192 0.005 0.00228 0.0161 0.0617 0.0435 0.0581 0.124 0.155 0.000276 0.000346 ! 
Validation 368 77084.192 0.005 0.00251 0.0427 0.0929 0.0457 0.0611 0.211 0.252 0.000471 0.000562 Wall time: 77084.19241993316 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 369 100 0.0573 0.00217 0.0139 0.0428 0.0568 0.118 0.144 0.000263 0.000321 369 118 0.048 0.00211 0.00588 0.0419 0.056 0.0719 0.0935 0.000161 0.000209 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 369 100 0.0323 0.00154 0.00154 0.0369 0.0478 0.0405 0.0479 9.04e-05 0.000107 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 369 77293.486 0.005 0.00388 0.0987 0.176 0.0552 0.0761 0.261 0.384 0.000582 0.000858 ! Validation 369 77293.486 0.005 0.00258 0.0292 0.0807 0.0463 0.0619 0.161 0.208 0.000359 0.000465 Wall time: 77293.4862256879 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 370 100 0.0519 0.00212 0.00944 0.0422 0.0562 0.093 0.118 0.000207 0.000264 370 118 0.0638 0.00268 0.0101 0.0473 0.0632 0.0881 0.123 0.000197 0.000274 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 370 100 0.038 0.00153 0.00733 0.0367 0.0477 0.0971 0.104 0.000217 0.000233 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 370 77502.752 0.005 0.00228 0.0105 0.0561 0.0435 0.0582 0.1 0.125 0.000224 0.000279 ! Validation 370 77502.752 0.005 0.00259 0.0289 0.0806 0.0463 0.062 0.173 0.207 0.000387 0.000463 Wall time: 77502.75224898104 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 371 100 0.0519 0.00214 0.00898 0.0424 0.0565 0.0969 0.116 0.000216 0.000258 371 118 0.089 0.00267 0.0355 0.0464 0.0631 0.202 0.23 0.000451 0.000513 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 371 100 0.108 0.00164 0.0754 0.038 0.0493 0.331 0.335 0.000738 0.000748 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 371 77712.204 0.005 0.00226 0.0127 0.0579 0.0433 0.0579 0.107 0.137 0.000239 0.000305 ! Validation 371 77712.204 0.005 0.00266 0.233 0.286 0.047 0.0629 0.55 0.589 0.00123 0.00131 Wall time: 77712.2041842239 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 372 100 0.0923 0.00235 0.0454 0.0442 0.0591 0.245 0.26 0.000546 0.00058 372 118 0.0457 0.00209 0.00385 0.0415 0.0558 0.0639 0.0757 0.000143 0.000169 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 372 100 0.0458 0.00153 0.0153 0.0366 0.0477 0.144 0.151 0.000321 0.000336 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 372 77922.461 0.005 0.00235 0.0244 0.0714 0.0443 0.0592 0.154 0.191 0.000343 0.000426 ! Validation 372 77922.461 0.005 0.00254 0.023 0.0738 0.0459 0.0614 0.151 0.185 0.000337 0.000413 Wall time: 77922.46159756416 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 373 100 0.0657 0.00245 0.0167 0.045 0.0604 0.141 0.158 0.000315 0.000352 373 118 0.0801 0.00246 0.0308 0.0448 0.0605 0.183 0.214 0.000408 0.000478 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 373 100 0.0692 0.00168 0.0356 0.0382 0.05 0.225 0.23 0.000502 0.000514 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 373 78131.719 0.005 0.00223 0.0132 0.0578 0.0431 0.0576 0.109 0.139 0.000242 0.000311 ! 
Validation 373 78131.719 0.005 0.0027 0.0623 0.116 0.0473 0.0634 0.261 0.304 0.000583 0.000679 Wall time: 78131.71959479013 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 374 100 0.0585 0.00228 0.0128 0.0437 0.0582 0.118 0.138 0.000264 0.000309 374 118 0.0535 0.00236 0.00625 0.0445 0.0593 0.0823 0.0964 0.000184 0.000215 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 374 100 0.0494 0.00149 0.0197 0.0362 0.047 0.165 0.171 0.000368 0.000382 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 374 78340.886 0.005 0.0024 0.0249 0.0728 0.0447 0.0597 0.151 0.193 0.000336 0.000431 ! Validation 374 78340.886 0.005 0.00249 0.067 0.117 0.0456 0.0609 0.268 0.316 0.000598 0.000704 Wall time: 78340.88696597097 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 375 100 0.0608 0.00227 0.0154 0.0436 0.0581 0.132 0.151 0.000295 0.000338 375 118 0.0672 0.00294 0.00852 0.0484 0.0661 0.0834 0.113 0.000186 0.000251 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 375 100 0.0369 0.00156 0.00579 0.037 0.0481 0.0834 0.0928 0.000186 0.000207 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 375 78550.042 0.005 0.00229 0.0216 0.0674 0.0437 0.0583 0.148 0.18 0.000331 0.000401 ! Validation 375 78550.042 0.005 0.00255 0.0359 0.0869 0.0461 0.0616 0.181 0.231 0.000404 0.000516 Wall time: 78550.04241375811 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 376 100 0.124 0.00229 0.0781 0.0436 0.0583 0.318 0.341 0.000711 0.000761 376 118 0.062 0.00204 0.0213 0.0414 0.055 0.169 0.178 0.000377 0.000398 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 376 100 0.0359 0.00165 0.003 0.0383 0.0495 0.0594 0.0667 0.000133 0.000149 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 376 78759.224 0.005 0.00226 0.0191 0.0644 0.0434 0.058 0.134 0.169 0.000298 0.000376 ! Validation 376 78759.224 0.005 0.00268 0.0326 0.0863 0.0475 0.0631 0.173 0.22 0.000385 0.000492 Wall time: 78759.22492915811 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 377 100 0.0566 0.00215 0.0135 0.0423 0.0566 0.121 0.142 0.00027 0.000317 377 118 0.0486 0.00212 0.00619 0.0423 0.0561 0.081 0.096 0.000181 0.000214 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 377 100 0.0374 0.00145 0.00839 0.0359 0.0464 0.0995 0.112 0.000222 0.000249 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 377 78968.416 0.005 0.00227 0.0166 0.0621 0.0436 0.0582 0.128 0.157 0.000285 0.000351 ! Validation 377 78968.416 0.005 0.00245 0.0339 0.0829 0.0451 0.0604 0.187 0.225 0.000416 0.000501 Wall time: 78968.41654145904 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 378 100 0.0584 0.00217 0.0151 0.0428 0.0568 0.133 0.15 0.000297 0.000334 378 118 0.0433 0.00201 0.00318 0.0415 0.0546 0.0623 0.0687 0.000139 0.000153 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 378 100 0.0493 0.0015 0.0194 0.0363 0.0472 0.162 0.17 0.000361 0.000379 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 378 79177.695 0.005 0.00228 0.0247 0.0702 0.0436 0.0582 0.156 0.192 0.000349 0.000429 ! 
Validation 378 79177.695 0.005 0.0025 0.0572 0.107 0.0456 0.061 0.249 0.292 0.000555 0.000651 Wall time: 79177.69582546409 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 379 100 0.0803 0.00256 0.0291 0.0465 0.0617 0.174 0.208 0.000389 0.000465 379 118 0.0534 0.00206 0.0121 0.042 0.0554 0.103 0.134 0.00023 0.0003 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 379 100 0.043 0.00196 0.00383 0.0419 0.054 0.0711 0.0755 0.000159 0.000168 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 379 79386.868 0.005 0.00236 0.0282 0.0754 0.0444 0.0593 0.172 0.205 0.000383 0.000458 ! Validation 379 79386.868 0.005 0.00296 0.0582 0.117 0.0504 0.0663 0.227 0.294 0.000507 0.000657 Wall time: 79386.86860023998 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 380 100 0.0572 0.00223 0.0126 0.0432 0.0576 0.113 0.137 0.000252 0.000305 380 118 0.0459 0.00217 0.00255 0.0423 0.0568 0.0438 0.0616 9.78e-05 0.000137 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 380 100 0.0542 0.0015 0.0242 0.0363 0.0473 0.183 0.19 0.000409 0.000423 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 380 79596.039 0.005 0.00238 0.0247 0.0723 0.0446 0.0595 0.16 0.192 0.000357 0.000429 ! Validation 380 79596.039 0.005 0.00249 0.0539 0.104 0.0455 0.0608 0.237 0.283 0.000529 0.000632 Wall time: 79596.0393243949 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 381 100 0.0782 0.00198 0.0385 0.0409 0.0543 0.216 0.239 0.000482 0.000534 381 118 0.0678 0.00257 0.0164 0.0458 0.0619 0.133 0.156 0.000296 0.000348 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 381 100 0.034 0.00152 0.00369 0.0367 0.0475 0.0569 0.0741 0.000127 0.000165 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 381 79805.196 0.005 0.00219 0.0127 0.0565 0.0427 0.0571 0.11 0.137 0.000245 0.000306 ! Validation 381 79805.196 0.005 0.00249 0.0706 0.12 0.0455 0.0608 0.258 0.324 0.000576 0.000723 Wall time: 79805.19633203186 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 382 100 0.0508 0.00226 0.00553 0.0439 0.058 0.0718 0.0906 0.00016 0.000202 382 118 0.0596 0.00263 0.00705 0.0467 0.0625 0.0866 0.102 0.000193 0.000229 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 382 100 0.0509 0.00147 0.0214 0.0361 0.0468 0.174 0.178 0.000388 0.000398 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 382 80014.378 0.005 0.00226 0.0239 0.0691 0.0434 0.0579 0.151 0.189 0.000338 0.000422 ! Validation 382 80014.378 0.005 0.00244 0.0256 0.0745 0.0451 0.0603 0.161 0.195 0.000359 0.000436 Wall time: 80014.37822116399 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 383 100 0.0905 0.00223 0.0459 0.0431 0.0576 0.247 0.261 0.000551 0.000583 383 118 0.085 0.00229 0.0392 0.0433 0.0583 0.234 0.241 0.000523 0.000539 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 383 100 0.0393 0.00155 0.00827 0.0371 0.048 0.106 0.111 0.000236 0.000248 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 383 80223.602 0.005 0.00221 0.022 0.0662 0.0429 0.0573 0.147 0.18 0.000328 0.000403 ! 
Validation 383 80223.602 0.005 0.00254 0.0886 0.139 0.0461 0.0614 0.307 0.363 0.000686 0.00081 Wall time: 80223.6028089188 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 384 100 0.0513 0.00219 0.00738 0.0426 0.0571 0.085 0.105 0.00019 0.000234 384 118 0.0423 0.00177 0.00701 0.0388 0.0512 0.0839 0.102 0.000187 0.000228 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 384 100 0.0305 0.00145 0.00158 0.0359 0.0464 0.0409 0.0485 9.13e-05 0.000108 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 384 80432.759 0.005 0.00276 0.0436 0.0988 0.0478 0.0641 0.199 0.255 0.000444 0.00057 ! Validation 384 80432.759 0.005 0.00243 0.0415 0.0902 0.045 0.0602 0.193 0.249 0.00043 0.000555 Wall time: 80432.75922248978 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 385 100 0.0584 0.00245 0.00942 0.0457 0.0603 0.0912 0.118 0.000204 0.000264 385 118 0.0554 0.0022 0.0115 0.0431 0.0572 0.105 0.131 0.000234 0.000291 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 385 100 0.0575 0.00164 0.0247 0.0381 0.0494 0.187 0.192 0.000417 0.000428 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 385 80641.907 0.005 0.0022 0.0198 0.0638 0.0428 0.0572 0.128 0.172 0.000286 0.000384 ! Validation 385 80641.907 0.005 0.00264 0.0498 0.103 0.047 0.0626 0.21 0.272 0.000469 0.000607 Wall time: 80641.90771015175 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 386 100 0.0544 0.00235 0.00741 0.0444 0.0591 0.0867 0.105 0.000193 0.000234 386 118 0.0496 0.00204 0.00882 0.042 0.0551 0.101 0.115 0.000226 0.000256 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 386 100 0.0351 0.00157 0.00373 0.0372 0.0483 0.0636 0.0745 0.000142 0.000166 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 386 80851.046 0.005 0.00225 0.0162 0.0611 0.0433 0.0578 0.125 0.155 0.000278 0.000347 ! Validation 386 80851.046 0.005 0.00252 0.0511 0.101 0.0458 0.0612 0.206 0.276 0.00046 0.000615 Wall time: 80851.04625249282 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 387 100 0.0534 0.00221 0.00913 0.0426 0.0574 0.102 0.117 0.000227 0.00026 387 118 0.156 0.0024 0.108 0.0442 0.0597 0.384 0.4 0.000858 0.000894 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 387 100 0.127 0.00171 0.0923 0.0387 0.0504 0.368 0.371 0.000821 0.000827 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 387 81060.194 0.005 0.00222 0.0169 0.0612 0.043 0.0574 0.125 0.155 0.000278 0.000347 ! Validation 387 81060.194 0.005 0.0027 0.082 0.136 0.0476 0.0634 0.31 0.349 0.000691 0.000779 Wall time: 81060.19480593782 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 388 100 0.0597 0.00201 0.0195 0.0411 0.0547 0.153 0.17 0.000341 0.00038 388 118 0.0429 0.00189 0.00515 0.04 0.053 0.0696 0.0875 0.000155 0.000195 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 388 100 0.0298 0.00142 0.00147 0.0354 0.0459 0.041 0.0467 9.15e-05 0.000104 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 388 81269.464 0.005 0.00231 0.0223 0.0686 0.044 0.0587 0.139 0.183 0.000311 0.000408 ! 
Validation 388 81269.464 0.005 0.00239 0.0264 0.0742 0.0446 0.0596 0.162 0.198 0.000361 0.000442 Wall time: 81269.46408227878 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 389 100 0.0472 0.00209 0.00533 0.0419 0.0558 0.0718 0.0891 0.00016 0.000199 389 118 0.0627 0.00216 0.0195 0.0424 0.0567 0.159 0.17 0.000355 0.00038 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 389 100 0.0445 0.00139 0.0166 0.0351 0.0455 0.153 0.157 0.000342 0.000351 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 389 81478.641 0.005 0.00213 0.0154 0.058 0.0422 0.0563 0.12 0.151 0.000267 0.000337 ! Validation 389 81478.641 0.005 0.00235 0.0659 0.113 0.0442 0.0591 0.25 0.313 0.000559 0.000699 Wall time: 81478.64146045316 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 390 100 0.0931 0.00221 0.0489 0.0429 0.0573 0.237 0.27 0.000528 0.000602 390 118 0.0457 0.00208 0.00407 0.0422 0.0557 0.0578 0.0778 0.000129 0.000174 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 390 100 0.0404 0.00151 0.0102 0.0366 0.0474 0.115 0.123 0.000257 0.000275 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 390 81687.791 0.005 0.00239 0.0342 0.082 0.0447 0.0597 0.174 0.226 0.000388 0.000505 ! Validation 390 81687.791 0.005 0.00246 0.0196 0.0688 0.0454 0.0605 0.137 0.171 0.000306 0.000381 Wall time: 81687.79204564309 ! Best model 390 0.069 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 391 100 0.0508 0.00232 0.00443 0.0439 0.0587 0.0657 0.0812 0.000147 0.000181 391 118 0.043 0.00206 0.00188 0.0416 0.0553 0.0406 0.0529 9.07e-05 0.000118 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 391 100 0.0326 0.00147 0.00316 0.0362 0.0468 0.0601 0.0686 0.000134 0.000153 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 391 81896.989 0.005 0.0022 0.0221 0.0661 0.0428 0.0572 0.147 0.182 0.000328 0.000406 ! Validation 391 81896.989 0.005 0.00242 0.0303 0.0787 0.045 0.06 0.159 0.212 0.000355 0.000474 Wall time: 81896.989402303 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 392 100 0.0665 0.00255 0.0154 0.0464 0.0616 0.125 0.152 0.000278 0.000338 392 118 0.0613 0.00288 0.00377 0.0483 0.0654 0.0642 0.0749 0.000143 0.000167 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 392 100 0.0291 0.0014 0.00108 0.0353 0.0456 0.0321 0.0401 7.16e-05 8.96e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 392 82106.247 0.005 0.00213 0.017 0.0597 0.0421 0.0563 0.125 0.159 0.000278 0.000356 ! Validation 392 82106.247 0.005 0.00235 0.0381 0.0852 0.0442 0.0591 0.188 0.238 0.000421 0.000531 Wall time: 82106.24707134115 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 393 100 0.055 0.00209 0.0132 0.0419 0.0558 0.119 0.14 0.000266 0.000312 393 118 0.0425 0.00187 0.00504 0.0403 0.0528 0.0676 0.0866 0.000151 0.000193 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 393 100 0.0293 0.00139 0.00158 0.0349 0.0454 0.0419 0.0484 9.35e-05 0.000108 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 393 82315.416 0.005 0.00226 0.0237 0.0689 0.0434 0.058 0.144 0.188 0.000321 0.000421 ! 
Validation 393 82315.416 0.005 0.00233 0.0291 0.0757 0.044 0.0589 0.166 0.208 0.00037 0.000464 Wall time: 82315.4166405038 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 394 100 0.0684 0.00232 0.022 0.0438 0.0587 0.169 0.181 0.000377 0.000404 394 118 0.0403 0.0017 0.00618 0.0383 0.0503 0.0718 0.0959 0.00016 0.000214 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 394 100 0.0537 0.00138 0.0261 0.0349 0.0453 0.192 0.197 0.000429 0.00044 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 394 82524.772 0.005 0.00218 0.0282 0.0718 0.0427 0.057 0.169 0.205 0.000378 0.000458 ! Validation 394 82524.772 0.005 0.00231 0.0708 0.117 0.0439 0.0587 0.283 0.324 0.000631 0.000724 Wall time: 82524.77275273576 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 395 100 0.0552 0.0022 0.0112 0.0429 0.0572 0.102 0.129 0.000228 0.000288 395 118 0.0544 0.00248 0.00486 0.0455 0.0607 0.0634 0.085 0.000142 0.00019 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 395 100 0.0348 0.00169 0.00101 0.0384 0.0502 0.0304 0.0387 6.79e-05 8.64e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 395 82733.960 0.005 0.00214 0.0169 0.0597 0.0422 0.0564 0.126 0.159 0.00028 0.000355 ! Validation 395 82733.960 0.005 0.00261 0.0189 0.0711 0.0468 0.0623 0.137 0.168 0.000305 0.000375 Wall time: 82733.96101174178 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 396 100 0.132 0.00324 0.0677 0.0527 0.0694 0.303 0.317 0.000677 0.000708 396 118 0.085 0.00242 0.0365 0.0454 0.06 0.22 0.233 0.000491 0.00052 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 396 100 0.0944 0.0016 0.0624 0.0378 0.0488 0.302 0.305 0.000673 0.00068 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 396 82943.173 0.005 0.00224 0.0247 0.0695 0.0433 0.0577 0.141 0.191 0.000315 0.000427 ! Validation 396 82943.173 0.005 0.00257 0.0335 0.0849 0.0464 0.0618 0.185 0.223 0.000413 0.000499 Wall time: 82943.17329715611 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 397 100 0.045 0.00206 0.00369 0.0418 0.0554 0.0605 0.0741 0.000135 0.000165 397 118 0.0684 0.00207 0.0271 0.0417 0.0554 0.165 0.201 0.000368 0.000448 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 397 100 0.068 0.0017 0.0339 0.0388 0.0504 0.221 0.225 0.000493 0.000501 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 397 83152.461 0.005 0.00211 0.0143 0.0564 0.0419 0.056 0.117 0.145 0.00026 0.000325 ! Validation 397 83152.461 0.005 0.00266 0.0475 0.101 0.0472 0.0629 0.226 0.266 0.000505 0.000593 Wall time: 83152.46133116586 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 398 100 0.0996 0.00209 0.0578 0.0415 0.0557 0.283 0.293 0.000631 0.000655 398 118 0.053 0.00235 0.00599 0.0444 0.0591 0.0705 0.0944 0.000157 0.000211 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 398 100 0.0301 0.00145 0.00114 0.0359 0.0464 0.0373 0.0411 8.32e-05 9.18e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 398 83361.655 0.005 0.00217 0.0208 0.0643 0.0426 0.0569 0.138 0.176 0.000308 0.000393 ! 
Validation 398 83361.655 0.005 0.00236 0.0249 0.0721 0.0444 0.0592 0.148 0.192 0.000331 0.000429 Wall time: 83361.65524860704 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 399 100 0.0463 0.00207 0.00482 0.0416 0.0555 0.0699 0.0847 0.000156 0.000189 399 118 0.143 0.00241 0.0944 0.045 0.0599 0.361 0.375 0.000806 0.000836 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 399 100 0.076 0.00163 0.0435 0.0376 0.0492 0.252 0.254 0.000562 0.000568 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 399 83570.865 0.005 0.00206 0.0172 0.0584 0.0414 0.0553 0.123 0.158 0.000276 0.000352 ! Validation 399 83570.865 0.005 0.00255 0.104 0.155 0.0461 0.0616 0.328 0.393 0.000731 0.000878 Wall time: 83570.86545355618 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 400 100 0.074 0.0025 0.0239 0.0457 0.061 0.158 0.189 0.000353 0.000421 400 118 0.054 0.00221 0.00993 0.0425 0.0573 0.105 0.122 0.000235 0.000271 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 400 100 0.0319 0.0015 0.00196 0.0364 0.0472 0.0477 0.0539 0.000106 0.00012 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 400 83780.072 0.005 0.00599 0.196 0.316 0.0651 0.0946 0.388 0.542 0.000866 0.00121 ! Validation 400 83780.072 0.005 0.00249 0.0288 0.0785 0.0455 0.0608 0.163 0.207 0.000363 0.000462 Wall time: 83780.07297750888 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 401 100 0.0485 0.00218 0.00486 0.0424 0.057 0.0643 0.0851 0.000144 0.00019 401 118 0.0507 0.00247 0.00139 0.0447 0.0606 0.0356 0.0454 7.95e-05 0.000101 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 401 100 0.0284 0.00137 0.00111 0.0348 0.0451 0.0342 0.0406 7.64e-05 9.07e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 401 83989.280 0.005 0.00212 0.00754 0.05 0.042 0.0562 0.085 0.106 0.00019 0.000237 ! Validation 401 83989.280 0.005 0.00233 0.0273 0.074 0.044 0.0589 0.157 0.202 0.00035 0.00045 Wall time: 83989.28103139019 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 402 100 0.044 0.002 0.00394 0.041 0.0546 0.0609 0.0765 0.000136 0.000171 402 118 0.0699 0.00231 0.0236 0.044 0.0586 0.178 0.188 0.000397 0.000419 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 402 100 0.0376 0.00137 0.0102 0.0346 0.0451 0.117 0.123 0.000261 0.000275 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 402 84198.562 0.005 0.00207 0.00866 0.0501 0.0415 0.0555 0.0895 0.113 0.0002 0.000252 ! Validation 402 84198.562 0.005 0.00231 0.0484 0.0946 0.0438 0.0586 0.216 0.268 0.000481 0.000599 Wall time: 84198.56273584999 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 403 100 0.0444 0.00203 0.00393 0.041 0.0549 0.0588 0.0764 0.000131 0.000171 403 118 0.0417 0.00196 0.00256 0.0402 0.0539 0.0483 0.0617 0.000108 0.000138 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 403 100 0.0295 0.00132 0.00314 0.0341 0.0443 0.0553 0.0684 0.000124 0.000153 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 403 84407.752 0.005 0.00203 0.00593 0.0465 0.0411 0.055 0.0748 0.0941 0.000167 0.00021 ! 
Validation 403 84407.752 0.005 0.00225 0.0283 0.0733 0.0432 0.0578 0.167 0.205 0.000374 0.000458 Wall time: 84407.75235735998 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 404 100 0.0499 0.00208 0.00835 0.0411 0.0556 0.0948 0.111 0.000212 0.000249 404 118 0.0477 0.00217 0.00443 0.0415 0.0567 0.0613 0.0811 0.000137 0.000181 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 404 100 0.0344 0.00131 0.00818 0.0341 0.0442 0.101 0.11 0.000226 0.000246 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 404 84616.945 0.005 0.002 0.00585 0.0459 0.0408 0.0545 0.0748 0.0934 0.000167 0.000208 ! Validation 404 84616.945 0.005 0.00223 0.0201 0.0647 0.043 0.0576 0.135 0.173 0.0003 0.000386 Wall time: 84616.94536158396 ! Best model 404 0.065 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 405 100 0.0483 0.00212 0.00588 0.0419 0.0562 0.0792 0.0935 0.000177 0.000209 405 118 0.052 0.00199 0.0122 0.0403 0.0544 0.11 0.135 0.000246 0.000301 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 405 100 0.0293 0.00136 0.00215 0.0346 0.0449 0.044 0.0565 9.82e-05 0.000126 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 405 84826.138 0.005 0.00199 0.00905 0.0488 0.0406 0.0544 0.0928 0.116 0.000207 0.000259 ! Validation 405 84826.138 0.005 0.00226 0.0348 0.0801 0.0435 0.058 0.178 0.228 0.000398 0.000508 Wall time: 84826.138621171 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 406 100 0.0496 0.00216 0.00634 0.0422 0.0567 0.078 0.0971 0.000174 0.000217 406 118 0.0673 0.00234 0.0206 0.0439 0.059 0.149 0.175 0.000333 0.000391 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 406 100 0.0327 0.0014 0.0047 0.0351 0.0457 0.0665 0.0836 0.000148 0.000187 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 406 85035.331 0.005 0.00198 0.00723 0.0468 0.0405 0.0542 0.0823 0.103 0.000184 0.00023 ! Validation 406 85035.331 0.005 0.00231 0.068 0.114 0.0438 0.0586 0.259 0.318 0.000578 0.00071 Wall time: 85035.33144223318 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 407 100 0.048 0.00214 0.00515 0.0423 0.0564 0.0696 0.0875 0.000155 0.000195 407 118 0.0509 0.00238 0.0033 0.0448 0.0595 0.0611 0.07 0.000136 0.000156 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 407 100 0.0378 0.0013 0.0118 0.0339 0.0439 0.127 0.133 0.000284 0.000296 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 407 85244.572 0.005 0.00212 0.0214 0.0637 0.042 0.0561 0.14 0.179 0.000314 0.000399 ! Validation 407 85244.572 0.005 0.00221 0.0539 0.0981 0.0428 0.0573 0.24 0.283 0.000536 0.000632 Wall time: 85244.57246345608 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 408 100 0.0542 0.00187 0.0168 0.0396 0.0527 0.138 0.158 0.000307 0.000352 408 118 0.067 0.00265 0.0139 0.0465 0.0628 0.118 0.144 0.000264 0.000321 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 408 100 0.0314 0.00139 0.00361 0.0349 0.0455 0.0638 0.0732 0.000142 0.000163 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 408 85453.736 0.005 0.00198 0.0112 0.0508 0.0405 0.0542 0.105 0.129 0.000233 0.000287 ! 
Validation 408 85453.736 0.005 0.00231 0.0182 0.0644 0.0439 0.0587 0.127 0.164 0.000284 0.000367 Wall time: 85453.73613014678 ! Best model 408 0.064 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 409 100 0.0532 0.00198 0.0136 0.0407 0.0542 0.129 0.142 0.000288 0.000318 409 118 0.05 0.00204 0.00922 0.0416 0.0551 0.0956 0.117 0.000213 0.000261 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 409 100 0.0289 0.00131 0.00266 0.0341 0.0442 0.0535 0.0628 0.000119 0.00014 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 409 85662.914 0.005 0.00202 0.0184 0.0588 0.041 0.0548 0.133 0.166 0.000298 0.00037 ! Validation 409 85662.914 0.005 0.00222 0.0234 0.0679 0.043 0.0575 0.149 0.187 0.000333 0.000417 Wall time: 85662.91407686891 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 410 100 0.046 0.0021 0.00403 0.0419 0.0559 0.0593 0.0775 0.000132 0.000173 410 118 0.0447 0.00187 0.00734 0.0395 0.0527 0.0881 0.104 0.000197 0.000233 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 410 100 0.0825 0.00131 0.0562 0.0338 0.0442 0.287 0.289 0.00064 0.000645 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 410 85872.078 0.005 0.002 0.0106 0.0505 0.0408 0.0545 0.0993 0.126 0.000222 0.00028 ! Validation 410 85872.078 0.005 0.00222 0.104 0.149 0.043 0.0575 0.363 0.394 0.000809 0.00088 Wall time: 85872.07851165393 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 411 100 0.0561 0.00208 0.0144 0.0417 0.0557 0.125 0.146 0.000279 0.000327 411 118 0.0593 0.00203 0.0188 0.0412 0.0549 0.141 0.167 0.000316 0.000373 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 411 100 0.0359 0.0013 0.00989 0.0339 0.044 0.115 0.121 0.000256 0.000271 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 411 86081.336 0.005 0.00212 0.0269 0.0694 0.042 0.0562 0.147 0.2 0.000327 0.000447 ! Validation 411 86081.336 0.005 0.00221 0.0217 0.0658 0.0428 0.0573 0.144 0.18 0.00032 0.000401 Wall time: 86081.33681485197 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 412 100 0.0661 0.00211 0.0239 0.0416 0.056 0.164 0.189 0.000365 0.000421 412 118 0.0609 0.00181 0.0248 0.039 0.0518 0.161 0.192 0.00036 0.000428 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 412 100 0.0464 0.00141 0.0183 0.0353 0.0457 0.162 0.165 0.000361 0.000368 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 412 86290.463 0.005 0.0021 0.0283 0.0702 0.0418 0.0559 0.165 0.205 0.000369 0.000458 ! Validation 412 86290.463 0.005 0.00231 0.0753 0.121 0.0439 0.0586 0.294 0.335 0.000656 0.000747 Wall time: 86290.46378356311 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 413 100 0.0409 0.00185 0.00386 0.0392 0.0525 0.0579 0.0758 0.000129 0.000169 413 118 0.0507 0.00179 0.0149 0.0391 0.0516 0.114 0.149 0.000255 0.000332 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 413 100 0.0324 0.00132 0.00605 0.0341 0.0443 0.0859 0.0949 0.000192 0.000212 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 413 86499.611 0.005 0.00195 0.00722 0.0462 0.0403 0.0538 0.0822 0.103 0.000184 0.00023 ! 
Validation 413 86499.611 0.005 0.00221 0.0387 0.0828 0.0428 0.0573 0.202 0.24 0.000451 0.000535 Wall time: 86499.61181508517 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 414 100 0.0423 0.00191 0.00413 0.0397 0.0532 0.0621 0.0783 0.000139 0.000175 414 118 0.0413 0.00173 0.0068 0.0381 0.0507 0.08 0.101 0.000179 0.000224 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 414 100 0.0439 0.00131 0.0177 0.0338 0.0441 0.156 0.162 0.000349 0.000362 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 414 86708.765 0.005 0.00203 0.019 0.0596 0.0412 0.055 0.135 0.168 0.000301 0.000376 ! Validation 414 86708.765 0.005 0.00218 0.0635 0.107 0.0425 0.0569 0.263 0.307 0.000587 0.000686 Wall time: 86708.76511167921 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 415 100 0.0522 0.00201 0.012 0.041 0.0547 0.117 0.134 0.000261 0.000298 415 118 0.0539 0.002 0.0138 0.0415 0.0546 0.127 0.143 0.000283 0.00032 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 415 100 0.0318 0.00142 0.00345 0.0354 0.0459 0.0557 0.0716 0.000124 0.00016 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 415 86917.929 0.005 0.00203 0.0289 0.0695 0.0411 0.0549 0.172 0.208 0.000385 0.000464 ! Validation 415 86917.929 0.005 0.0023 0.0476 0.0935 0.0438 0.0585 0.215 0.266 0.00048 0.000594 Wall time: 86917.9295293102 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 416 100 0.0514 0.00203 0.0108 0.0412 0.055 0.1 0.127 0.000224 0.000283 416 118 0.0488 0.002 0.00872 0.0403 0.0546 0.0794 0.114 0.000177 0.000254 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 416 100 0.0292 0.00132 0.00268 0.034 0.0444 0.0549 0.0632 0.000123 0.000141 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 416 87127.143 0.005 0.00204 0.0159 0.0566 0.0412 0.055 0.125 0.154 0.000279 0.000344 ! Validation 416 87127.143 0.005 0.00222 0.0365 0.0808 0.0429 0.0574 0.182 0.233 0.000406 0.00052 Wall time: 87127.14380993089 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 417 100 0.0528 0.00238 0.00524 0.0449 0.0595 0.0709 0.0883 0.000158 0.000197 417 118 0.0515 0.00213 0.00895 0.0412 0.0562 0.0918 0.115 0.000205 0.000258 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 417 100 0.0462 0.00143 0.0176 0.0353 0.0461 0.155 0.162 0.000346 0.000361 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 417 87336.284 0.005 0.00202 0.0162 0.0567 0.0411 0.0548 0.125 0.156 0.000278 0.000347 ! Validation 417 87336.284 0.005 0.00232 0.0236 0.07 0.0439 0.0587 0.151 0.187 0.000338 0.000418 Wall time: 87336.28434375813 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 418 100 0.0465 0.00195 0.00749 0.0402 0.0538 0.0887 0.106 0.000198 0.000236 418 118 0.0558 0.00231 0.00964 0.0435 0.0586 0.0995 0.12 0.000222 0.000267 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 418 100 0.0281 0.00133 0.00151 0.0341 0.0445 0.0425 0.0474 9.49e-05 0.000106 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 418 87545.440 0.005 0.00195 0.0123 0.0513 0.0403 0.0538 0.108 0.135 0.000241 0.000302 ! 
Validation 418 87545.440 0.005 0.00219 0.0369 0.0807 0.0426 0.057 0.177 0.234 0.000394 0.000523 Wall time: 87545.44014272792 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 419 100 0.0421 0.00192 0.00374 0.0399 0.0534 0.0621 0.0746 0.000139 0.000166 419 118 0.0478 0.00195 0.00883 0.0409 0.0539 0.0986 0.115 0.00022 0.000256 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 419 100 0.0295 0.00139 0.00164 0.0351 0.0455 0.0438 0.0493 9.77e-05 0.00011 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 419 87754.836 0.005 0.00205 0.0267 0.0677 0.0413 0.0552 0.152 0.2 0.00034 0.000446 ! Validation 419 87754.836 0.005 0.00229 0.0291 0.0748 0.0437 0.0583 0.164 0.208 0.000367 0.000464 Wall time: 87754.83628310496 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 420 100 0.0486 0.00196 0.00949 0.0403 0.0539 0.0999 0.119 0.000223 0.000265 420 118 0.0568 0.00169 0.023 0.0376 0.0502 0.173 0.185 0.000387 0.000413 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 420 100 0.056 0.00132 0.0296 0.0341 0.0443 0.205 0.21 0.000459 0.000468 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 420 87963.980 0.005 0.00195 0.0139 0.0528 0.0403 0.0538 0.112 0.143 0.00025 0.00032 ! Validation 420 87963.980 0.005 0.00218 0.0223 0.0658 0.0426 0.0569 0.146 0.182 0.000326 0.000406 Wall time: 87963.9802806722 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 421 100 0.0763 0.0028 0.0203 0.0491 0.0646 0.143 0.174 0.000319 0.000387 421 118 0.159 0.00277 0.104 0.0479 0.0642 0.367 0.393 0.000819 0.000878 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 421 100 0.0665 0.00189 0.0287 0.0412 0.053 0.204 0.207 0.000456 0.000461 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 421 88173.228 0.005 0.00216 0.0314 0.0746 0.0423 0.0566 0.161 0.214 0.000359 0.000478 ! Validation 421 88173.228 0.005 0.00279 0.0655 0.121 0.0489 0.0644 0.266 0.312 0.000594 0.000696 Wall time: 88173.22900914075 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 422 100 0.166 0.00224 0.121 0.0428 0.0577 0.416 0.425 0.000929 0.000948 422 118 0.0578 0.00195 0.0187 0.0407 0.0539 0.142 0.167 0.000318 0.000373 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 422 100 0.0719 0.00172 0.0375 0.0382 0.0506 0.235 0.236 0.000524 0.000527 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 422 88382.388 0.005 0.00208 0.0202 0.0618 0.0416 0.0556 0.134 0.173 0.000298 0.000387 ! Validation 422 88382.388 0.005 0.00266 0.0473 0.1 0.047 0.0629 0.225 0.265 0.000503 0.000592 Wall time: 88382.38849688414 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 423 100 0.0499 0.00203 0.00941 0.0409 0.0549 0.104 0.118 0.000232 0.000264 423 118 0.0437 0.00204 0.00278 0.0414 0.0551 0.0536 0.0643 0.00012 0.000144 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 423 100 0.044 0.00128 0.0183 0.0336 0.0437 0.159 0.165 0.000355 0.000369 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 423 88591.546 0.005 0.00204 0.0186 0.0593 0.0413 0.055 0.131 0.167 0.000293 0.000372 ! 
Validation 423 88591.546 0.005 0.00215 0.0227 0.0657 0.0423 0.0566 0.143 0.184 0.000319 0.00041 Wall time: 88591.54661374306 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 424 100 0.0815 0.00187 0.044 0.0394 0.0528 0.245 0.256 0.000546 0.000571 424 118 0.0449 0.00184 0.00806 0.0396 0.0524 0.0993 0.109 0.000222 0.000244 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 424 100 0.0349 0.0013 0.00898 0.0339 0.0439 0.108 0.116 0.000241 0.000258 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 424 88800.712 0.005 0.00194 0.0148 0.0535 0.0402 0.0537 0.122 0.149 0.000271 0.000332 ! Validation 424 88800.712 0.005 0.00216 0.0686 0.112 0.0426 0.0567 0.271 0.319 0.000605 0.000713 Wall time: 88800.71246115398 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 425 100 0.0538 0.00191 0.0157 0.0398 0.0533 0.134 0.153 0.000299 0.000341 425 118 0.0803 0.00197 0.0409 0.0406 0.0541 0.212 0.247 0.000474 0.00055 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 425 100 0.0259 0.00126 0.000748 0.0334 0.0433 0.0271 0.0333 6.06e-05 7.44e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 425 89009.865 0.005 0.00189 0.0111 0.0489 0.0397 0.053 0.0999 0.127 0.000223 0.000284 ! Validation 425 89009.865 0.005 0.00213 0.0234 0.0659 0.0421 0.0562 0.157 0.186 0.00035 0.000416 Wall time: 89009.86506537301 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 426 100 0.046 0.0019 0.00808 0.04 0.0531 0.0904 0.11 0.000202 0.000245 426 118 0.055 0.00207 0.0137 0.0414 0.0554 0.127 0.143 0.000284 0.000319 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 426 100 0.0349 0.00123 0.0103 0.033 0.0428 0.115 0.124 0.000258 0.000276 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 426 89219.108 0.005 0.00212 0.0287 0.071 0.0421 0.0561 0.163 0.207 0.000364 0.000462 ! Validation 426 89219.108 0.005 0.0021 0.0558 0.0978 0.0418 0.0559 0.247 0.288 0.000551 0.000643 Wall time: 89219.10846552579 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 427 100 0.0409 0.00188 0.00319 0.0398 0.0529 0.0568 0.0689 0.000127 0.000154 427 118 0.0448 0.00189 0.00709 0.0398 0.053 0.0924 0.103 0.000206 0.000229 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 427 100 0.0427 0.00128 0.0171 0.0334 0.0436 0.155 0.16 0.000346 0.000356 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 427 89428.265 0.005 0.00191 0.0129 0.0511 0.0399 0.0533 0.113 0.139 0.000252 0.00031 ! Validation 427 89428.265 0.005 0.00213 0.0817 0.124 0.042 0.0563 0.303 0.349 0.000676 0.000778 Wall time: 89428.26578526199 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 428 100 0.0549 0.00191 0.0168 0.0395 0.0533 0.143 0.158 0.000319 0.000352 428 118 0.257 0.00289 0.199 0.0495 0.0655 0.537 0.544 0.0012 0.00121 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 428 100 0.258 0.00601 0.137 0.0676 0.0945 0.449 0.452 0.001 0.00101 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 428 89637.443 0.005 0.00194 0.0223 0.0611 0.0401 0.0536 0.132 0.177 0.000296 0.000395 ! 
Validation 428 89637.443 0.005 0.00693 0.191 0.329 0.0727 0.102 0.461 0.532 0.00103 0.00119 Wall time: 89637.44363116287 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 429 100 0.0445 0.00196 0.00532 0.0403 0.0539 0.0715 0.089 0.00016 0.000199 429 118 0.0592 0.00185 0.0221 0.0395 0.0525 0.172 0.181 0.000383 0.000405 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 429 100 0.032 0.00133 0.00537 0.0342 0.0445 0.0824 0.0894 0.000184 0.000199 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 429 89846.618 0.005 0.00283 0.0411 0.0978 0.0471 0.065 0.181 0.248 0.000404 0.000553 ! Validation 429 89846.618 0.005 0.00219 0.0259 0.0696 0.0427 0.057 0.156 0.196 0.000348 0.000438 Wall time: 89846.61881976109 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
slurmstepd: error: *** JOB 254470 ON a0005 CANCELLED AT 2024-12-08T08:00:31 ***
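The scheduler cancelled job 254470 at the start of the epoch-430 training pass, immediately after the epoch-429 summaries were written; the most recent "! Best model" checkpoint above is from epoch 408 (validation loss 0.064). When reading the epoch summaries, the printed totals are consistent with loss ≈ 20·loss_f + loss_e, and the e/N columns are consistent with the energy metrics divided by roughly 448 atoms per structure; both are inferences from the logged numbers, not quoted configuration settings. A minimal check against the epoch-429 summary lines:

```python
# Quick consistency check of the epoch summaries above, using the numbers
# copied from the "! Train 429" and "! Validation 429" lines. The force-term
# weight of 20 and the ~448 atoms/structure are inferred from the logged
# values, not quoted from the training configuration.

rows = {
    "Train 429":      {"loss_f": 0.00283, "loss_e": 0.0411, "loss": 0.0978,
                       "e_mae": 0.181, "e_per_N_mae": 0.000404},
    "Validation 429": {"loss_f": 0.00219, "loss_e": 0.0259, "loss": 0.0696,
                       "e_mae": 0.156, "e_per_N_mae": 0.000348},
}

FORCE_WEIGHT = 20.0  # inferred: printed loss ~= FORCE_WEIGHT * loss_f + loss_e

for name, r in rows.items():
    recomposed = FORCE_WEIGHT * r["loss_f"] + r["loss_e"]
    atoms_per_structure = r["e_mae"] / r["e_per_N_mae"]
    print(f"{name}: logged loss={r['loss']:.4f}, "
          f"recomposed 20*loss_f + loss_e={recomposed:.4f}, "
          f"e_mae / (e/N_mae) ~ {atoms_per_structure:.0f} atoms/structure")
```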
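To pick up from here (for example, restarting from the epoch-408 best model and checking how far the validation loss had converged), the epoch summaries and best-model markers can be pulled back out of the log text. A minimal parsing sketch, assuming the log is saved as train.log (hypothetical path) and following only the "! Train" / "! Validation" / "! Best model" layout printed above:

```python
# Minimal sketch for extracting the epoch summaries and "! Best model" markers
# from this log before restarting. "train.log" is a hypothetical path; the
# regexes follow the "! Train/Validation <epoch> <wall> <LR> <9 metric columns>"
# and "! Best model <epoch> <loss>" layout printed above, and scan the raw text
# so they work whether or not the entries are broken onto separate lines.
import re

text = open("train.log").read()  # hypothetical path to this log file

summary = re.compile(
    r"!\s+(Train|Validation)\s+(\d+)\s+([\d.]+)\s+([\d.eE+-]+)"  # tag, epoch, wall, LR
    r"((?:\s+[\d.eE+-]+){9})"  # loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
)
best = re.compile(r"!\s+Best model\s+(\d+)\s+([\d.]+)")

val_loss = {}
for tag, epoch, _wall, _lr, metrics in summary.findall(text):
    cols = [float(x) for x in metrics.split()]
    if tag == "Validation":
        val_loss[int(epoch)] = cols[2]  # third metric column is the total loss

best_epochs = [(int(e), float(l)) for e, l in best.findall(text)]
if best_epochs:
    print("last saved best model:", best_epochs[-1])  # expected here: (408, 0.064)
print("final validation losses:", sorted(val_loss.items())[-3:])
```

Because the patterns scan the raw text rather than individual lines, they tolerate the run-on formatting of this transcript as well as a normally line-broken log.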