Train start time: 2024-12-08_10:31:06
Torch device: cuda
Processing dataset...
Loaded data: Batch(atomic_numbers=[3584000, 1], batch=[3584000], cell=[8000, 3, 3], edge_cell_shift=[138811224, 3], edge_index=[2, 138811224], forces=[3584000, 3], pbc=[8000, 3], pos=[3584000, 3], ptr=[8001], total_energy=[8000, 1])
processed data size: ~5514.67 MB
Cached processed data to disk
Done!
Successfully loaded the data set of type ASEDataset(8000)...
Replace string dataset_per_atom_total_energy_mean to -346.8895845496029
Atomic outputs are scaled by: [H, C, N, O, Zn: None], shifted by [H, C, N, O, Zn: -346.889585].
Replace string dataset_forces_rms to 1.2194973071018034
Initially outputs are globally scaled by: 1.2194973071018034, total_energy are globally shifted by None.
Successfully built the network...
Number of weights: 363624
Number of trainable weights: 363624
! Starting training ...
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
0 100 54.2 0.963 35 0.885 1.2 7.21 7.21 0.0161 0.0161
Initialization # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Initial Validation 0 7.553 0.005 1.01 15.3 35.5 0.908 1.22 4.11 4.77 0.00918 0.0106
Wall time: 7.5532025853171945
! Best model 0 35.461
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
1 100 26.6 1.01 6.5 0.903 1.22 2.51 3.11 0.00561 0.00694
1 118 31.2 0.977 11.7 0.894 1.21 2.96 4.17 0.00662 0.00931
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
1 100 40.1 0.958 21 0.881 1.19 5.57 5.58 0.0124 0.0125
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 1 218.518 0.005 0.998 9.13 29.1 0.902 1.22 2.94 3.68 0.00655 0.00822
! Validation 1 218.518 0.005 1 12.6 32.7 0.905 1.22 3.45 4.33 0.0077 0.00966
Wall time: 218.519032177981
! Best model 1 32.691
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
2 100 18.1 0.842 1.26 0.825 1.12 1.05 1.37 0.00234 0.00305
2 118 18.2 0.784 2.56 0.796 1.08 1.54 1.95 0.00344 0.00435
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
2 100 15 0.694 1.17 0.746 1.02 1.23 1.32 0.00275 0.00294
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 2 427.969 0.005 0.928 3.87 22.4 0.865 1.18 1.86 2.4 0.00415 0.00536
! Validation 2 427.969 0.005 0.745 3.68 18.6 0.779 1.05 1.79 2.34 0.004 0.00522
Wall time: 427.96928978711367
! Best model 2 18.581
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
3 100 8.23 0.337 1.49 0.527 0.708 1.21 1.49 0.00271 0.00332
3 118 8.17 0.317 1.82 0.509 0.687 1.22 1.65 0.00272 0.00367
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
3 100 21.2 0.252 16.2 0.459 0.612 4.9 4.91 0.0109 0.0109
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 3 637.531 0.005 0.455 1.74 10.8 0.609 0.824 1.28 1.61 0.00287 0.00359
! Validation 3 637.531 0.005 0.311 6.14 12.4 0.507 0.68 2.75 3.02 0.00614 0.00674
Wall time: 637.5338004683144
! Best model 3 12.364
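The two dataset statistics reported in the header (dataset_per_atom_total_energy_mean and dataset_forces_rms) follow from the Batch fields printed above: with 3584000 atoms over 8000 frames there are 448 atoms per frame, which is also the divisor behind the e/N_* metric columns. A minimal sketch of how these statistics would be computed, assuming a plain per-frame mean and a component-wise RMS (the tensors below are random stand-ins shaped like the Batch printout, not the actual data):

```python
import torch

# Stand-ins with the shapes reported in the Batch printout above.
total_energy = torch.randn(8000, 1)    # batch.total_energy
forces = torch.randn(3584000, 3)       # batch.forces
n_atoms_per_frame = 3584000 // 8000    # = 448 atoms per frame

# dataset_per_atom_total_energy_mean (~ -346.8895845496029 in the log):
# mean of the total energy divided by the number of atoms in each frame.
per_atom_energy_mean = (total_energy / n_atoms_per_frame).mean()

# dataset_forces_rms (~ 1.2194973071018034 in the log):
# root-mean-square over all force components.
forces_rms = forces.pow(2).mean().sqrt()

# Per the header, atomic energies are then shifted by the per-atom mean and
# outputs are globally scaled by the forces RMS; the e/N_* columns are the
# per-frame energy errors divided by n_atoms_per_frame.
print(per_atom_energy_mean.item(), forces_rms.item(), n_atoms_per_frame)
```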
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
4 100 5.88 0.224 1.4 0.43 0.577 1.25 1.44 0.00279 0.00322
4 118 4.83 0.221 0.411 0.43 0.573 0.663 0.781 0.00148 0.00174
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
4 100 9.67 0.174 6.19 0.385 0.509 3.01 3.03 0.00671 0.00677
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 4 846.991 0.005 0.255 1.17 6.28 0.459 0.616 1.06 1.32 0.00236 0.00296
! Validation 4 846.991 0.005 0.22 1.96 6.37 0.429 0.572 1.45 1.71 0.00324 0.00382
Wall time: 846.9916381519288
! Best model 4 6.367
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
5 100 4.31 0.166 0.986 0.374 0.497 1 1.21 0.00224 0.0027
5 118 4.08 0.159 0.902 0.368 0.486 0.87 1.16 0.00194 0.00258
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
5 100 3.59 0.132 0.96 0.337 0.443 1.1 1.19 0.00245 0.00267
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 5 1056.369 0.005 0.187 0.856 4.59 0.395 0.527 0.906 1.13 0.00202 0.00252
! Validation 5 1056.369 0.005 0.171 0.805 4.22 0.379 0.504 0.893 1.09 0.00199 0.00244
Wall time: 1056.3690802510828
! Best model 5 4.219
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
6 100 3.26 0.143 0.401 0.347 0.461 0.634 0.772 0.00141 0.00172
6 118 3.2 0.139 0.418 0.342 0.455 0.635 0.789 0.00142 0.00176
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
6 100 2.38 0.111 0.171 0.31 0.406 0.451 0.504 0.00101 0.00112
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 6 1266.034 0.005 0.15 0.757 3.77 0.356 0.473 0.846 1.06 0.00189 0.00237
! Validation 6 1266.034 0.005 0.144 1.23 4.11 0.349 0.463 1.11 1.35 0.00247 0.00302
Wall time: 1266.034437624272
! Best model 6 4.108
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
7 100 3.83 0.124 1.35 0.324 0.429 1.17 1.42 0.0026 0.00316
7 118 2.57 0.12 0.165 0.319 0.423 0.389 0.496 0.000867 0.00111
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
7 100 2.25 0.0987 0.272 0.294 0.383 0.447 0.636 0.000999 0.00142
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 7 1475.411 0.005 0.131 0.709 3.32 0.332 0.441 0.819 1.03 0.00183 0.0023
! Validation 7 1475.411 0.005 0.128 0.648 3.2 0.33 0.436 0.793 0.982 0.00177 0.00219
Wall time: 1475.411165872123
! Best model 7 3.202
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
8 100 2.6 0.112 0.364 0.308 0.408 0.606 0.736 0.00135 0.00164
8 118 2.61 0.114 0.323 0.312 0.412 0.502 0.693 0.00112 0.00155
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
8 100 1.99 0.0894 0.206 0.279 0.365 0.428 0.553 0.000955 0.00123
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 8 1684.881 0.005 0.117 0.644 2.98 0.315 0.417 0.791 0.98 0.00177 0.00219
! Validation 8 1684.881 0.005 0.116 0.66 2.98 0.314 0.415 0.806 0.991 0.0018 0.00221
Wall time: 1684.8817047812045
!
Best model 8 2.981 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 9 100 2.32 0.104 0.244 0.297 0.393 0.477 0.603 0.00107 0.00134 9 118 3.2 0.101 1.18 0.293 0.388 1.22 1.32 0.00273 0.00295 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 9 100 2.41 0.0836 0.739 0.27 0.353 0.912 1.05 0.00204 0.00234 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 9 1894.265 0.005 0.109 0.742 2.92 0.304 0.403 0.841 1.05 0.00188 0.00234 ! Validation 9 1894.265 0.005 0.108 0.571 2.74 0.304 0.402 0.752 0.922 0.00168 0.00206 Wall time: 1894.2658234280534 ! Best model 9 2.739 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 10 100 2.36 0.098 0.4 0.288 0.382 0.596 0.772 0.00133 0.00172 10 118 1.97 0.0911 0.144 0.278 0.368 0.397 0.463 0.000886 0.00103 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 10 100 2.15 0.0777 0.597 0.261 0.34 0.77 0.942 0.00172 0.0021 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 10 2103.631 0.005 0.101 0.537 2.56 0.293 0.388 0.714 0.896 0.00159 0.002 ! Validation 10 2103.631 0.005 0.101 0.65 2.67 0.294 0.388 0.805 0.984 0.0018 0.0022 Wall time: 2103.6320450780913 ! Best model 10 2.673 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 11 100 2.92 0.0914 1.09 0.279 0.369 1.11 1.27 0.00247 0.00284 11 118 2.2 0.09 0.398 0.277 0.366 0.604 0.769 0.00135 0.00172 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 11 100 1.71 0.0736 0.234 0.254 0.331 0.448 0.59 0.001 0.00132 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 11 2313.021 0.005 0.0955 0.631 2.54 0.285 0.377 0.778 0.97 0.00174 0.00216 ! Validation 11 2313.021 0.005 0.096 0.473 2.39 0.286 0.378 0.667 0.839 0.00149 0.00187 Wall time: 2313.021381059196 ! Best model 11 2.393 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 12 100 2.15 0.0903 0.349 0.277 0.366 0.573 0.72 0.00128 0.00161 12 118 2.31 0.0904 0.505 0.277 0.367 0.727 0.867 0.00162 0.00193 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 12 100 1.59 0.07 0.188 0.248 0.323 0.494 0.529 0.0011 0.00118 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 12 2522.491 0.005 0.0909 0.64 2.46 0.278 0.368 0.778 0.976 0.00174 0.00218 ! Validation 12 2522.491 0.005 0.0922 0.474 2.32 0.281 0.37 0.67 0.84 0.0015 0.00187 Wall time: 2522.492699956987 ! Best model 12 2.317 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 13 100 2.08 0.0869 0.341 0.272 0.359 0.561 0.713 0.00125 0.00159 13 118 2.02 0.0855 0.306 0.271 0.357 0.575 0.675 0.00128 0.00151 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 13 100 2.05 0.0666 0.713 0.241 0.315 0.887 1.03 0.00198 0.0023 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 13 2731.848 0.005 0.0872 0.537 2.28 0.272 0.36 0.71 0.895 0.00158 0.002 ! 
Validation 13 2731.848 0.005 0.0876 1.51 3.27 0.274 0.361 1.33 1.5 0.00296 0.00335 Wall time: 2731.848956375383 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 14 100 1.9 0.0799 0.302 0.262 0.345 0.516 0.67 0.00115 0.0015 14 118 2.4 0.081 0.785 0.261 0.347 0.866 1.08 0.00193 0.00241 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 14 100 1.47 0.0635 0.197 0.236 0.307 0.453 0.541 0.00101 0.00121 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 14 2941.174 0.005 0.0832 0.542 2.21 0.266 0.352 0.714 0.897 0.00159 0.002 ! Validation 14 2941.174 0.005 0.084 0.414 2.09 0.268 0.353 0.641 0.785 0.00143 0.00175 Wall time: 2941.1743748332374 ! Best model 14 2.094 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 15 100 2.4 0.0766 0.866 0.255 0.338 0.985 1.13 0.0022 0.00253 15 118 2.06 0.085 0.362 0.27 0.355 0.448 0.734 0.001 0.00164 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 15 100 1.5 0.0599 0.304 0.228 0.299 0.477 0.673 0.00107 0.0015 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 15 3150.678 0.005 0.0787 0.426 2 0.258 0.342 0.633 0.797 0.00141 0.00178 ! Validation 15 3150.678 0.005 0.08 0.345 1.94 0.261 0.345 0.576 0.717 0.00129 0.0016 Wall time: 3150.6786881452426 ! Best model 15 1.944 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 16 100 2.98 0.0764 1.45 0.254 0.337 1.36 1.47 0.00303 0.00328 16 118 1.94 0.076 0.423 0.254 0.336 0.705 0.793 0.00157 0.00177 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 16 100 1.42 0.0584 0.248 0.225 0.295 0.424 0.608 0.000946 0.00136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 16 3359.983 0.005 0.0762 0.64 2.16 0.254 0.337 0.79 0.977 0.00176 0.00218 ! Validation 16 3359.983 0.005 0.078 0.343 1.9 0.258 0.341 0.575 0.714 0.00128 0.00159 Wall time: 3359.983429037966 ! Best model 16 1.903 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 17 100 3.41 0.0738 1.93 0.25 0.331 1.6 1.7 0.00356 0.00379 17 118 2.56 0.0709 1.14 0.245 0.325 1.18 1.3 0.00263 0.00291 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 17 100 1.93 0.0562 0.808 0.22 0.289 0.976 1.1 0.00218 0.00245 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 17 3569.356 0.005 0.0733 0.525 1.99 0.249 0.33 0.701 0.88 0.00157 0.00196 ! Validation 17 3569.356 0.005 0.0753 0.461 1.97 0.253 0.335 0.674 0.828 0.00151 0.00185 Wall time: 3569.356122731231 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 18 100 1.65 0.0693 0.264 0.243 0.321 0.497 0.626 0.00111 0.0014 18 118 1.61 0.0724 0.165 0.246 0.328 0.43 0.495 0.000959 0.00111 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 18 100 2.08 0.0532 1.01 0.215 0.281 1.13 1.23 0.00251 0.00274 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 18 3778.679 0.005 0.0707 0.388 1.8 0.245 0.324 0.608 0.762 0.00136 0.0017 ! 
Validation 18 3778.679 0.005 0.0716 0.621 2.05 0.247 0.326 0.791 0.961 0.00177 0.00215
Wall time: 3778.6798543692566
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
19 100 1.73 0.0683 0.364 0.24 0.319 0.593 0.736 0.00132 0.00164
19 118 1.65 0.0633 0.381 0.232 0.307 0.681 0.753 0.00152 0.00168
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
19 100 1.19 0.0508 0.175 0.21 0.275 0.437 0.51 0.000975 0.00114
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 19 3987.986 0.005 0.0677 0.469 1.82 0.24 0.317 0.671 0.836 0.0015 0.00187
! Validation 19 3987.986 0.005 0.0687 0.369 1.74 0.242 0.32 0.603 0.741 0.00135 0.00165
Wall time: 3987.9860757361166
! Best model 19 1.744
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
20 100 1.97 0.0636 0.695 0.232 0.307 0.861 1.02 0.00192 0.00227
20 118 1.49 0.0605 0.286 0.228 0.3 0.524 0.652 0.00117 0.00146
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
20 100 1.22 0.049 0.238 0.206 0.27 0.41 0.595 0.000914 0.00133
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 20 4197.330 0.005 0.0653 0.489 1.79 0.235 0.312 0.689 0.854 0.00154 0.00191
! Validation 20 4197.330 0.005 0.0669 0.334 1.67 0.238 0.315 0.577 0.705 0.00129 0.00157
Wall time: 4197.330184041988
! Best model 20 1.672
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
21 100 1.83 0.0612 0.606 0.228 0.302 0.873 0.949 0.00195 0.00212
21 118 3 0.0675 1.65 0.239 0.317 1.26 1.56 0.00281 0.00349
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
21 100 3.79 0.047 2.85 0.201 0.264 2 2.06 0.00447 0.00459
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 21 4406.688 0.005 0.063 0.443 1.7 0.231 0.306 0.646 0.804 0.00144 0.0018
! Validation 21 4406.688 0.005 0.0639 1.95 3.23 0.233 0.308 1.57 1.7 0.0035 0.0038
Wall time: 4406.688342885114
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
22 100 1.99 0.0612 0.76 0.228 0.302 0.948 1.06 0.00212 0.00237
22 118 1.33 0.0594 0.144 0.225 0.297 0.382 0.463 0.000853 0.00103
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
22 100 1.12 0.0459 0.197 0.199 0.261 0.501 0.542 0.00112 0.00121
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 22 4616.135 0.005 0.0612 0.466 1.69 0.228 0.302 0.659 0.835 0.00147 0.00186
! Validation 22 4616.135 0.005 0.062 0.559 1.8 0.23 0.304 0.749 0.912 0.00167 0.00204
Wall time: 4616.135846798308
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
23 100 1.29 0.0568 0.156 0.219 0.291 0.387 0.482 0.000864 0.00108
23 118 1.27 0.0559 0.149 0.218 0.288 0.425 0.47 0.000948 0.00105
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
23 100 1.15 0.0438 0.276 0.195 0.255 0.453 0.641 0.00101 0.00143
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 23 4825.453 0.005 0.059 0.434 1.61 0.224 0.296 0.647 0.805 0.00144 0.0018
! Validation 23 4825.453 0.005 0.0598 0.286 1.48 0.226 0.298 0.525 0.652 0.00117 0.00146
Wall time: 4825.453565953299
! Best model 23 1.483
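The three loss columns are mutually consistent throughout the log: the values behave as if loss_f and loss_e are mean-squared errors in the globally scaled units (errors divided by the 1.2194973071018034 scale from the header) and the total is a weighted sum with a force weight of roughly 20. The actual loss configuration is not shown in this log, so this is only a consistency check against the epoch-23 training summary row above:

```python
# Values copied from the "! Train 23" summary row above.
scale = 1.2194973071018034   # global scale reported in the header
f_rmse, e_rmse = 0.296, 0.805
loss_f, loss_e, loss = 0.059, 0.434, 1.61

# loss_f / loss_e look like MSEs of the scaled errors ...
assert abs((f_rmse / scale) ** 2 - loss_f) < 1e-2
assert abs((e_rmse / scale) ** 2 - loss_e) < 1e-2

# ... and the total is consistent with roughly 20 * loss_f + loss_e.
assert abs(20 * loss_f + loss_e - loss) < 1e-2
```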
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
24 100 3.22 0.0566 2.09 0.219 0.29 1.66 1.76 0.00372 0.00394
24 118 1.44 0.0505 0.427 0.207 0.274 0.612 0.797 0.00137 0.00178
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
24 100 1.36 0.043 0.495 0.193 0.253 0.724 0.858 0.00162 0.00192
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 24 5034.801 0.005 0.0567 0.456 1.59 0.219 0.291 0.658 0.824 0.00147 0.00184
! Validation 24 5034.801 0.005 0.0583 1.01 2.18 0.223 0.294 1.07 1.23 0.00238 0.00274
Wall time: 5034.801302480977
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
25 100 1.86 0.0553 0.756 0.217 0.287 0.93 1.06 0.00208 0.00237
25 118 1.36 0.0529 0.303 0.214 0.28 0.613 0.671 0.00137 0.0015
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
25 100 0.999 0.0417 0.165 0.19 0.249 0.417 0.495 0.00093 0.0011
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 25 5244.122 0.005 0.0559 0.551 1.67 0.218 0.288 0.728 0.906 0.00163 0.00202
! Validation 25 5244.122 0.005 0.0567 0.32 1.45 0.22 0.29 0.561 0.69 0.00125 0.00154
Wall time: 5244.122490603942
! Best model 25 1.455
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
26 100 1.21 0.0514 0.179 0.209 0.276 0.421 0.516 0.000939 0.00115
26 118 1.25 0.0518 0.212 0.209 0.278 0.473 0.561 0.00106 0.00125
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
26 100 1.01 0.0391 0.231 0.184 0.241 0.408 0.586 0.000911 0.00131
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 26 5453.602 0.005 0.0538 0.416 1.49 0.214 0.283 0.629 0.787 0.0014 0.00176
! Validation 26 5453.602 0.005 0.0536 0.285 1.36 0.214 0.282 0.533 0.651 0.00119 0.00145
Wall time: 5453.6030353181995
! Best model 26 1.357
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
27 100 1.08 0.0488 0.109 0.204 0.269 0.313 0.402 0.000698 0.000897
27 118 1.34 0.049 0.365 0.205 0.27 0.621 0.737 0.00139 0.00165
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
27 100 0.967 0.0366 0.235 0.179 0.233 0.425 0.591 0.000948 0.00132
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 27 5663.025 0.005 0.0504 0.369 1.38 0.208 0.274 0.596 0.741 0.00133 0.00165
! Validation 27 5663.025 0.005 0.0506 0.284 1.3 0.209 0.274 0.539 0.65 0.0012 0.00145
Wall time: 5663.025675435085
! Best model 27 1.295
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
28 100 1.18 0.0469 0.24 0.201 0.264 0.496 0.597 0.00111 0.00133
28 118 1.7 0.0469 0.759 0.201 0.264 0.967 1.06 0.00216 0.00237
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
28 100 1.08 0.0339 0.405 0.173 0.225 0.671 0.776 0.0015 0.00173
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 28 5872.330 0.005 0.0472 0.409 1.35 0.201 0.265 0.624 0.777 0.00139 0.00173
! Validation 28 5872.330 0.005 0.0474 0.283 1.23 0.202 0.266 0.533 0.648 0.00119 0.00145
Wall time: 5872.330981240142
! Best model 28 1.231
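A "! Best model" line is printed only when an epoch's validation loss sets a new minimum; epochs whose validation loss does not improve (e.g. 24 above, or 29 below) print no such line. A small sketch of that bookkeeping, assuming the checkpointing is keyed on validation loss (the numbers are taken from the nearby summary and best-model lines):

```python
# Validation losses for epochs 27-29; 1.295 and 1.231 are the values printed
# by the best-model lines, 1.30 is the epoch-29 validation loss.
best = 1.357                     # best validation loss after epoch 26
for epoch, val_loss in [(27, 1.295), (28, 1.231), (29, 1.30)]:
    if val_loss < best:
        best = val_loss
        # A real trainer would also write a checkpoint here.
        print(f"! Best model {epoch} {val_loss:.3f}")
# -> prints best-model lines for epochs 27 and 28 only, matching the log;
#    epoch 29 (1.30 > 1.231) produces no line.
```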
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
29 100 1.08 0.0426 0.224 0.191 0.252 0.452 0.577 0.00101 0.00129
29 118 0.972 0.0387 0.197 0.182 0.24 0.477 0.542 0.00106 0.00121
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
29 100 0.718 0.0314 0.0909 0.167 0.216 0.341 0.368 0.000762 0.000821
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 29 6081.664 0.005 0.0442 0.401 1.29 0.195 0.257 0.624 0.774 0.00139 0.00173
! Validation 29 6081.664 0.005 0.0443 0.414 1.3 0.196 0.257 0.651 0.785 0.00145 0.00175
Wall time: 6081.664125041105
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
30 100 1.5 0.0404 0.697 0.187 0.245 0.942 1.02 0.0021 0.00227
30 118 3.02 0.0402 2.22 0.186 0.244 1.77 1.82 0.00396 0.00405
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
30 100 0.769 0.0313 0.143 0.167 0.216 0.313 0.461 0.000698 0.00103
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 30 6290.970 0.005 0.0416 0.465 1.3 0.189 0.249 0.658 0.821 0.00147 0.00183
! Validation 30 6290.970 0.005 0.045 0.332 1.23 0.196 0.259 0.568 0.703 0.00127 0.00157
Wall time: 6290.970812376123
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
31 100 1.15 0.0397 0.357 0.185 0.243 0.626 0.729 0.0014 0.00163
31 118 1.08 0.04 0.279 0.184 0.244 0.574 0.644 0.00128 0.00144
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
31 100 0.817 0.0283 0.25 0.159 0.205 0.518 0.61 0.00116 0.00136
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 31 6500.559 0.005 0.0406 0.379 1.19 0.187 0.246 0.593 0.751 0.00132 0.00168
! Validation 31 6500.559 0.005 0.0413 0.24 1.07 0.188 0.248 0.494 0.598 0.0011 0.00133
Wall time: 6500.559403134044
! Best model 31 1.067
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
32 100 1.03 0.0387 0.251 0.182 0.24 0.505 0.611 0.00113 0.00136
32 118 0.864 0.0361 0.143 0.176 0.232 0.382 0.461 0.000852 0.00103
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
32 100 0.626 0.0263 0.1 0.153 0.198 0.262 0.386 0.000584 0.000862
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 32 6709.874 0.005 0.0381 0.275 1.04 0.181 0.238 0.508 0.64 0.00113 0.00143
! Validation 32 6709.874 0.005 0.039 0.23 1.01 0.183 0.241 0.484 0.585 0.00108 0.00131
Wall time: 6709.874363966286
! Best model 32 1.010
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
33 100 0.844 0.0358 0.128 0.175 0.231 0.337 0.437 0.000753 0.000976
33 118 0.924 0.0366 0.193 0.176 0.233 0.416 0.536 0.000928 0.0012
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
33 100 0.58 0.0255 0.0712 0.151 0.195 0.28 0.325 0.000625 0.000726
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 33 6919.176 0.005 0.0371 0.387 1.13 0.178 0.235 0.612 0.76 0.00137 0.0017
!
Validation 33 6919.176 0.005 0.038 0.268 1.03 0.18 0.238 0.522 0.631 0.00116 0.00141 Wall time: 6919.176824378315 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 34 100 1.12 0.036 0.397 0.175 0.231 0.656 0.769 0.00146 0.00172 34 118 0.994 0.0379 0.235 0.18 0.238 0.526 0.591 0.00117 0.00132 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 34 100 0.75 0.0247 0.255 0.148 0.192 0.537 0.616 0.0012 0.00138 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 34 7128.499 0.005 0.0358 0.378 1.09 0.175 0.231 0.614 0.75 0.00137 0.00167 ! Validation 34 7128.499 0.005 0.0372 0.225 0.968 0.178 0.235 0.483 0.578 0.00108 0.00129 Wall time: 7128.499433769379 ! Best model 34 0.968 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 35 100 0.805 0.0327 0.15 0.167 0.221 0.397 0.473 0.000885 0.00106 35 118 0.792 0.0332 0.128 0.169 0.222 0.342 0.437 0.000763 0.000976 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 35 100 1.01 0.0235 0.545 0.145 0.187 0.851 0.9 0.0019 0.00201 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 35 7337.841 0.005 0.0341 0.184 0.867 0.171 0.225 0.417 0.524 0.000931 0.00117 ! Validation 35 7337.841 0.005 0.0355 0.326 1.04 0.174 0.23 0.566 0.696 0.00126 0.00155 Wall time: 7337.841247442178 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 36 100 0.782 0.0309 0.163 0.163 0.215 0.401 0.492 0.000896 0.0011 36 118 1.39 0.0336 0.719 0.17 0.224 0.99 1.03 0.00221 0.00231 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 36 100 1.2 0.0227 0.741 0.142 0.184 1.01 1.05 0.00226 0.00234 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 36 7547.244 0.005 0.0335 0.353 1.02 0.169 0.223 0.58 0.722 0.00129 0.00161 ! Validation 36 7547.244 0.005 0.0345 1.49 2.18 0.172 0.227 1.38 1.49 0.00308 0.00333 Wall time: 7547.244664735161 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 37 100 0.777 0.0327 0.124 0.166 0.22 0.348 0.43 0.000777 0.00096 37 118 1.3 0.0369 0.561 0.176 0.234 0.815 0.913 0.00182 0.00204 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 37 100 0.542 0.0229 0.0834 0.142 0.185 0.244 0.352 0.000545 0.000786 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 37 7756.774 0.005 0.0332 0.473 1.14 0.168 0.222 0.681 0.838 0.00152 0.00187 ! Validation 37 7756.774 0.005 0.0344 0.29 0.979 0.171 0.226 0.536 0.657 0.0012 0.00147 Wall time: 7756.77417025622 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 38 100 1.13 0.0337 0.453 0.169 0.224 0.664 0.821 0.00148 0.00183 38 118 1.03 0.0349 0.335 0.172 0.228 0.563 0.705 0.00126 0.00157 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 38 100 1.27 0.0221 0.824 0.14 0.181 1.07 1.11 0.00239 0.00247 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 38 7966.062 0.005 0.0323 0.357 1 0.166 0.219 0.602 0.729 0.00134 0.00163 ! 
Validation 38 7966.062 0.005 0.0335 0.615 1.28 0.169 0.223 0.812 0.956 0.00181 0.00213 Wall time: 7966.062246744055 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 39 100 1.14 0.031 0.515 0.162 0.215 0.775 0.875 0.00173 0.00195 39 118 0.917 0.0354 0.208 0.171 0.23 0.501 0.557 0.00112 0.00124 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 39 100 0.518 0.0215 0.0889 0.138 0.179 0.261 0.364 0.000583 0.000812 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 39 8175.409 0.005 0.0312 0.272 0.896 0.163 0.215 0.512 0.636 0.00114 0.00142 ! Validation 39 8175.409 0.005 0.0326 0.215 0.867 0.167 0.22 0.47 0.565 0.00105 0.00126 Wall time: 8175.409657500219 ! Best model 39 0.867 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 40 100 0.781 0.0295 0.191 0.158 0.209 0.428 0.533 0.000955 0.00119 40 118 1.53 0.0317 0.896 0.164 0.217 1.08 1.15 0.00242 0.00258 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 40 100 0.629 0.0211 0.207 0.136 0.177 0.479 0.555 0.00107 0.00124 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 40 8384.608 0.005 0.0304 0.284 0.891 0.161 0.212 0.506 0.644 0.00113 0.00144 ! Validation 40 8384.608 0.005 0.032 0.2 0.84 0.165 0.218 0.45 0.545 0.001 0.00122 Wall time: 8384.608424043283 ! Best model 40 0.840 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 41 100 1.02 0.0284 0.454 0.155 0.206 0.723 0.821 0.00161 0.00183 41 118 0.834 0.0287 0.26 0.155 0.207 0.595 0.622 0.00133 0.00139 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 41 100 0.92 0.0204 0.512 0.135 0.174 0.832 0.873 0.00186 0.00195 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 41 8593.847 0.005 0.0294 0.242 0.829 0.158 0.209 0.483 0.599 0.00108 0.00134 ! Validation 41 8593.847 0.005 0.031 0.29 0.91 0.162 0.215 0.535 0.657 0.00119 0.00147 Wall time: 8593.847510638181 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 42 100 0.819 0.0282 0.254 0.155 0.205 0.506 0.615 0.00113 0.00137 42 118 0.739 0.0304 0.131 0.159 0.213 0.391 0.441 0.000872 0.000984 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 42 100 0.497 0.0193 0.11 0.131 0.17 0.351 0.405 0.000782 0.000905 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 42 8803.126 0.005 0.0286 0.232 0.804 0.156 0.206 0.47 0.588 0.00105 0.00131 ! Validation 42 8803.126 0.005 0.0297 0.433 1.03 0.159 0.21 0.666 0.802 0.00149 0.00179 Wall time: 8803.126876822207 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 43 100 0.663 0.0275 0.113 0.153 0.202 0.321 0.409 0.000716 0.000914 43 118 0.839 0.0288 0.263 0.157 0.207 0.548 0.625 0.00122 0.0014 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 43 100 0.525 0.0193 0.139 0.13 0.169 0.391 0.454 0.000874 0.00101 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 43 9012.408 0.005 0.028 0.324 0.885 0.154 0.204 0.567 0.694 0.00126 0.00155 ! 
Validation 43 9012.408 0.005 0.0294 0.481 1.07 0.158 0.209 0.707 0.846 0.00158 0.00189
Wall time: 9012.408072636928
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
44 100 1.01 0.0295 0.424 0.157 0.209 0.702 0.794 0.00157 0.00177
44 118 0.64 0.0268 0.104 0.152 0.2 0.323 0.393 0.000721 0.000877
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
44 100 0.593 0.0196 0.201 0.131 0.171 0.489 0.547 0.00109 0.00122
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 44 9221.974 0.005 0.0277 0.388 0.942 0.153 0.203 0.63 0.762 0.00141 0.0017
! Validation 44 9221.974 0.005 0.0298 0.809 1.4 0.159 0.211 0.956 1.1 0.00213 0.00245
Wall time: 9221.974995218217
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
45 100 0.717 0.028 0.157 0.154 0.204 0.39 0.483 0.00087 0.00108
45 118 0.693 0.0226 0.24 0.14 0.184 0.539 0.598 0.0012 0.00133
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
45 100 0.644 0.0182 0.281 0.126 0.164 0.595 0.646 0.00133 0.00144
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 45 9431.292 0.005 0.027 0.226 0.766 0.151 0.201 0.465 0.579 0.00104 0.00129
! Validation 45 9431.292 0.005 0.028 0.216 0.776 0.154 0.204 0.463 0.567 0.00103 0.00127
Wall time: 9431.292210315354
! Best model 45 0.776
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
46 100 0.639 0.0268 0.104 0.15 0.199 0.34 0.393 0.000759 0.000878
46 118 1.16 0.0252 0.657 0.146 0.194 0.946 0.988 0.00211 0.00221
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
46 100 2.53 0.0176 2.18 0.124 0.162 1.78 1.8 0.00398 0.00402
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 46 9640.728 0.005 0.0262 0.232 0.757 0.149 0.198 0.47 0.583 0.00105 0.0013
! Validation 46 9640.728 0.005 0.0273 1.55 2.1 0.152 0.202 1.43 1.52 0.00319 0.00339
Wall time: 9640.728363692295
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
47 100 1.22 0.0257 0.702 0.147 0.195 0.964 1.02 0.00215 0.00228
47 118 0.608 0.0238 0.133 0.142 0.188 0.415 0.445 0.000926 0.000994
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
47 100 1.04 0.0177 0.688 0.125 0.162 0.981 1.01 0.00219 0.00226
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 47 9850.047 0.005 0.0262 0.33 0.854 0.149 0.197 0.552 0.702 0.00123 0.00157
! Validation 47 9850.047 0.005 0.0272 0.517 1.06 0.152 0.201 0.748 0.877 0.00167 0.00196
Wall time: 9850.047761515249
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
48 100 0.613 0.025 0.113 0.145 0.193 0.317 0.41 0.000708 0.000915
48 118 0.539 0.0246 0.0472 0.145 0.191 0.204 0.265 0.000454 0.000591
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
48 100 0.412 0.0172 0.0685 0.122 0.16 0.236 0.319 0.000528 0.000713
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 48 10059.387 0.005 0.0256 0.289 0.801 0.147 0.195 0.535 0.658 0.00119 0.00147
! Validation 48 10059.387 0.005 0.0266 0.178 0.71 0.15 0.199 0.43 0.514 0.00096 0.00115
Wall time: 10059.387309748214
! Best model 48 0.710
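The "wal" column and the "Wall time:" lines are cumulative seconds since the start of the run, so the per-epoch cost is their first difference, roughly 209-210 s per epoch at this point. A small sketch, assuming the timestamps are cumulative:

```python
# Cumulative wall times (s) copied from the epoch 45-48 summary rows above.
wall_times = [9431.292, 9640.728, 9850.047, 10059.387]

# First differences give the durations of epochs 46-48.
per_epoch = [b - a for a, b in zip(wall_times, wall_times[1:])]
print([round(dt, 1) for dt in per_epoch])   # [209.4, 209.3, 209.3]
```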
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
49 100 0.781 0.0251 0.28 0.146 0.193 0.557 0.645 0.00124 0.00144
49 118 0.697 0.0244 0.209 0.144 0.191 0.486 0.557 0.00108 0.00124
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
49 100 0.452 0.017 0.113 0.122 0.159 0.346 0.41 0.000773 0.000914
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 49 10271.410 0.005 0.0249 0.244 0.741 0.145 0.192 0.492 0.602 0.0011 0.00134
! Validation 49 10271.410 0.005 0.026 0.422 0.942 0.149 0.197 0.665 0.792 0.00148 0.00177
Wall time: 10271.410095646977
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
50 100 0.597 0.0241 0.116 0.142 0.189 0.339 0.416 0.000758 0.000928
50 118 0.613 0.0249 0.114 0.145 0.193 0.345 0.412 0.000771 0.00092
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
50 100 0.48 0.0162 0.155 0.119 0.155 0.428 0.48 0.000954 0.00107
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 50 10481.154 0.005 0.0243 0.236 0.722 0.143 0.19 0.475 0.593 0.00106 0.00132
! Validation 50 10481.154 0.005 0.0253 0.515 1.02 0.146 0.194 0.748 0.875 0.00167 0.00195
Wall time: 10481.154428523034
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
51 100 0.551 0.0229 0.0921 0.14 0.185 0.295 0.37 0.000658 0.000826
51 118 0.884 0.0217 0.449 0.136 0.18 0.793 0.817 0.00177 0.00182
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
51 100 0.97 0.0161 0.647 0.119 0.155 0.956 0.981 0.00213 0.00219
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 51 10695.192 0.005 0.0238 0.243 0.718 0.142 0.188 0.485 0.599 0.00108 0.00134
! Validation 51 10695.192 0.005 0.0248 0.389 0.885 0.145 0.192 0.637 0.761 0.00142 0.0017
Wall time: 10695.19241341902
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
52 100 1.61 0.0225 1.16 0.138 0.183 1.27 1.31 0.00285 0.00293
52 118 0.636 0.0218 0.199 0.136 0.18 0.481 0.545 0.00107 0.00122
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
52 100 1.56 0.0162 1.23 0.118 0.155 1.34 1.35 0.00298 0.00302
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 52 10905.579 0.005 0.0234 0.285 0.753 0.141 0.187 0.535 0.652 0.0012 0.00146
! Validation 52 10905.579 0.005 0.0247 2 2.49 0.145 0.192 1.66 1.72 0.0037 0.00385
Wall time: 10905.579998221248
training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
53 100 0.53 0.0222 0.0861 0.138 0.182 0.273 0.358 0.00061 0.000799
53 118 0.916 0.0226 0.465 0.139 0.183 0.797 0.832 0.00178 0.00186
validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
53 100 0.417 0.0163 0.0923 0.119 0.155 0.32 0.37 0.000715 0.000827
Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 53 11114.941 0.005 0.0235 0.309 0.778 0.141 0.187 0.533 0.676 0.00119 0.00151
!
Validation 53 11114.941 0.005 0.0246 0.457 0.948 0.144 0.191 0.695 0.824 0.00155 0.00184 Wall time: 11114.941615107935 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 54 100 0.513 0.0223 0.0669 0.137 0.182 0.256 0.315 0.000571 0.000704 54 118 0.515 0.02 0.116 0.131 0.172 0.355 0.415 0.000793 0.000927 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 54 100 0.376 0.0153 0.0705 0.115 0.151 0.278 0.324 0.000621 0.000723 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 54 11324.266 0.005 0.0224 0.172 0.619 0.138 0.182 0.405 0.507 0.000903 0.00113 ! Validation 54 11324.266 0.005 0.0235 0.459 0.928 0.141 0.187 0.697 0.826 0.00156 0.00184 Wall time: 11324.266860218253 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 55 100 0.523 0.0219 0.0851 0.136 0.18 0.279 0.356 0.000623 0.000794 55 118 0.59 0.0215 0.16 0.134 0.179 0.406 0.488 0.000907 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 55 100 0.556 0.0147 0.262 0.113 0.148 0.59 0.624 0.00132 0.00139 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 55 11535.952 0.005 0.022 0.248 0.689 0.136 0.181 0.484 0.609 0.00108 0.00136 ! Validation 55 11535.952 0.005 0.0229 0.656 1.11 0.139 0.185 0.886 0.988 0.00198 0.00221 Wall time: 11535.95265330514 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 56 100 1.17 0.0204 0.766 0.132 0.174 1.03 1.07 0.00229 0.00238 56 118 0.765 0.0196 0.374 0.13 0.171 0.706 0.746 0.00158 0.00166 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 56 100 0.592 0.0149 0.294 0.114 0.149 0.631 0.662 0.00141 0.00148 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 56 11745.292 0.005 0.0216 0.259 0.691 0.135 0.179 0.504 0.62 0.00112 0.00138 ! Validation 56 11745.292 0.005 0.0229 0.653 1.11 0.139 0.185 0.89 0.986 0.00199 0.0022 Wall time: 11745.292980004102 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 57 100 0.51 0.0218 0.0727 0.136 0.18 0.275 0.329 0.000613 0.000734 57 118 0.491 0.0213 0.0645 0.135 0.178 0.285 0.31 0.000637 0.000691 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 57 100 0.335 0.014 0.0547 0.11 0.144 0.222 0.285 0.000495 0.000637 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 57 11954.505 0.005 0.0213 0.214 0.639 0.134 0.178 0.459 0.565 0.00102 0.00126 ! Validation 57 11954.505 0.005 0.0219 0.134 0.571 0.136 0.18 0.367 0.447 0.00082 0.000998 Wall time: 11954.505294875242 ! Best model 57 0.571 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 58 100 0.616 0.02 0.216 0.13 0.172 0.489 0.567 0.00109 0.00127 58 118 0.662 0.02 0.262 0.131 0.173 0.551 0.624 0.00123 0.00139 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 58 100 1.27 0.014 0.993 0.11 0.144 1.2 1.22 0.00268 0.00271 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 58 12163.704 0.005 0.0211 0.284 0.706 0.134 0.177 0.52 0.65 0.00116 0.00145 ! 
Validation 58 12163.704 0.005 0.0219 0.68 1.12 0.136 0.18 0.906 1.01 0.00202 0.00225 Wall time: 12163.704482701141 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 59 100 0.469 0.0191 0.0863 0.128 0.169 0.296 0.358 0.000662 0.0008 59 118 0.502 0.0206 0.0906 0.132 0.175 0.299 0.367 0.000666 0.000819 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 59 100 0.341 0.0135 0.0714 0.108 0.142 0.27 0.326 0.000602 0.000727 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 59 12372.941 0.005 0.0207 0.22 0.634 0.132 0.175 0.447 0.573 0.000998 0.00128 ! Validation 59 12372.941 0.005 0.0212 0.121 0.546 0.134 0.178 0.356 0.424 0.000795 0.000947 Wall time: 12372.941161636263 ! Best model 59 0.546 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 60 100 0.478 0.0198 0.0814 0.13 0.172 0.283 0.348 0.000631 0.000777 60 118 0.744 0.0176 0.393 0.123 0.162 0.687 0.764 0.00153 0.00171 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 60 100 0.621 0.0133 0.356 0.108 0.14 0.704 0.728 0.00157 0.00162 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 60 12582.250 0.005 0.0199 0.182 0.579 0.13 0.172 0.42 0.518 0.000937 0.00116 ! Validation 60 12582.250 0.005 0.0209 0.187 0.605 0.133 0.176 0.434 0.528 0.000968 0.00118 Wall time: 12582.250386714935 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 61 100 0.531 0.0204 0.123 0.131 0.174 0.343 0.427 0.000765 0.000954 61 118 0.504 0.0202 0.101 0.131 0.173 0.316 0.387 0.000704 0.000865 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 61 100 0.292 0.0133 0.025 0.107 0.141 0.154 0.193 0.000344 0.00043 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 61 12791.446 0.005 0.0198 0.234 0.63 0.129 0.172 0.468 0.59 0.00104 0.00132 ! Validation 61 12791.446 0.005 0.0208 0.168 0.584 0.133 0.176 0.407 0.499 0.000908 0.00111 Wall time: 12791.446966676973 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 62 100 0.787 0.0197 0.394 0.129 0.171 0.661 0.765 0.00147 0.00171 62 118 0.515 0.0188 0.139 0.125 0.167 0.347 0.455 0.000774 0.00102 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 62 100 0.324 0.0131 0.0628 0.106 0.139 0.253 0.306 0.000564 0.000682 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 62 13000.636 0.005 0.0198 0.302 0.698 0.129 0.172 0.551 0.671 0.00123 0.0015 ! Validation 62 13000.636 0.005 0.0205 0.116 0.526 0.132 0.175 0.348 0.415 0.000777 0.000927 Wall time: 13000.636828571092 ! Best model 62 0.526 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 63 100 0.842 0.0186 0.47 0.126 0.166 0.791 0.836 0.00177 0.00187 63 118 0.421 0.019 0.0412 0.127 0.168 0.212 0.248 0.000474 0.000553 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 63 100 0.276 0.0127 0.022 0.105 0.137 0.15 0.181 0.000336 0.000404 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 63 13210.021 0.005 0.0189 0.144 0.522 0.126 0.168 0.367 0.463 0.00082 0.00103 ! 
Validation 63 13210.021 0.005 0.0198 0.201 0.597 0.13 0.172 0.441 0.547 0.000984 0.00122 Wall time: 13210.02157008415 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 64 100 0.479 0.0182 0.116 0.124 0.164 0.346 0.415 0.000773 0.000927 64 118 0.49 0.0192 0.105 0.126 0.169 0.344 0.396 0.000769 0.000884 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 64 100 0.792 0.0126 0.541 0.104 0.137 0.882 0.897 0.00197 0.002 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 64 13419.225 0.005 0.0188 0.256 0.632 0.126 0.167 0.514 0.619 0.00115 0.00138 ! Validation 64 13419.225 0.005 0.0199 0.29 0.688 0.13 0.172 0.55 0.657 0.00123 0.00147 Wall time: 13419.225826818962 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 65 100 0.503 0.0185 0.133 0.125 0.166 0.373 0.445 0.000833 0.000993 65 118 0.559 0.0207 0.146 0.131 0.175 0.364 0.466 0.000812 0.00104 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 65 100 0.27 0.0121 0.0282 0.103 0.134 0.16 0.205 0.000356 0.000457 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 65 13628.587 0.005 0.0183 0.169 0.536 0.124 0.165 0.409 0.502 0.000912 0.00112 ! Validation 65 13628.587 0.005 0.0192 0.134 0.519 0.128 0.169 0.365 0.447 0.000815 0.000998 Wall time: 13628.587827638257 ! Best model 65 0.519 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 66 100 0.892 0.0177 0.538 0.122 0.162 0.845 0.895 0.00189 0.002 66 118 0.964 0.0187 0.589 0.125 0.167 0.88 0.936 0.00196 0.00209 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 66 100 1.13 0.0125 0.879 0.105 0.136 1.13 1.14 0.00252 0.00255 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 66 13837.945 0.005 0.0182 0.232 0.596 0.124 0.165 0.477 0.584 0.00106 0.0013 ! Validation 66 13837.945 0.005 0.0195 0.526 0.915 0.129 0.17 0.79 0.884 0.00176 0.00197 Wall time: 13837.945227585267 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 67 100 0.432 0.0181 0.0702 0.124 0.164 0.269 0.323 0.0006 0.000721 67 118 0.453 0.0166 0.121 0.119 0.157 0.365 0.424 0.000815 0.000946 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 67 100 0.449 0.0118 0.213 0.101 0.133 0.539 0.563 0.0012 0.00126 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 67 14047.232 0.005 0.0178 0.163 0.519 0.123 0.163 0.396 0.492 0.000883 0.0011 ! Validation 67 14047.232 0.005 0.0187 0.545 0.918 0.126 0.167 0.812 0.9 0.00181 0.00201 Wall time: 14047.232349815313 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 68 100 0.464 0.0179 0.105 0.122 0.163 0.311 0.396 0.000695 0.000884 68 118 0.433 0.0185 0.0631 0.124 0.166 0.275 0.306 0.000614 0.000684 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 68 100 0.25 0.0115 0.02 0.1 0.131 0.147 0.172 0.000329 0.000385 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 68 14256.559 0.005 0.0177 0.214 0.567 0.122 0.162 0.442 0.565 0.000986 0.00126 ! 
Validation 68 14256.559 0.005 0.0182 0.211 0.575 0.124 0.165 0.454 0.561 0.00101 0.00125 Wall time: 14256.559316413943 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 69 100 0.511 0.0169 0.173 0.119 0.159 0.427 0.507 0.000954 0.00113 69 118 0.386 0.0166 0.0538 0.119 0.157 0.226 0.283 0.000506 0.000631 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 69 100 0.243 0.0113 0.0176 0.099 0.129 0.145 0.162 0.000324 0.000361 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 69 14465.960 0.005 0.0172 0.189 0.533 0.121 0.16 0.435 0.531 0.000971 0.00119 ! Validation 69 14465.960 0.005 0.0179 0.176 0.533 0.123 0.163 0.409 0.511 0.000912 0.00114 Wall time: 14465.96059684828 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 70 100 0.631 0.0174 0.282 0.122 0.161 0.575 0.648 0.00128 0.00145 70 118 0.43 0.0167 0.0953 0.12 0.158 0.328 0.377 0.000733 0.00084 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 70 100 0.437 0.0117 0.204 0.101 0.132 0.524 0.551 0.00117 0.00123 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 70 14675.287 0.005 0.0173 0.299 0.646 0.121 0.161 0.554 0.669 0.00124 0.00149 ! Validation 70 14675.287 0.005 0.0184 0.544 0.911 0.125 0.165 0.807 0.9 0.0018 0.00201 Wall time: 14675.287121209316 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 71 100 0.526 0.0167 0.192 0.119 0.157 0.465 0.534 0.00104 0.00119 71 118 0.549 0.017 0.208 0.121 0.159 0.441 0.556 0.000985 0.00124 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 71 100 0.647 0.0111 0.425 0.0983 0.128 0.781 0.795 0.00174 0.00178 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 71 14884.602 0.005 0.0168 0.158 0.495 0.119 0.158 0.387 0.484 0.000865 0.00108 ! Validation 71 14884.602 0.005 0.0175 0.26 0.61 0.122 0.161 0.522 0.622 0.00117 0.00139 Wall time: 14884.602114497218 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 72 100 0.411 0.0172 0.0675 0.12 0.16 0.249 0.317 0.000555 0.000707 72 118 0.324 0.014 0.0425 0.11 0.145 0.196 0.252 0.000438 0.000561 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 72 100 0.231 0.0107 0.0166 0.0968 0.126 0.13 0.157 0.000289 0.000351 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 72 15093.915 0.005 0.0165 0.164 0.494 0.118 0.157 0.39 0.496 0.000871 0.00111 ! Validation 72 15093.915 0.005 0.0172 0.136 0.479 0.121 0.16 0.361 0.449 0.000805 0.001 Wall time: 15093.91528133303 ! Best model 72 0.479 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 73 100 0.377 0.0166 0.0457 0.119 0.157 0.201 0.261 0.000449 0.000582 73 118 0.404 0.016 0.0845 0.117 0.154 0.289 0.355 0.000645 0.000791 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 73 100 0.233 0.0107 0.0182 0.0968 0.126 0.135 0.164 0.000302 0.000367 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 73 15303.244 0.005 0.0162 0.163 0.487 0.117 0.155 0.39 0.493 0.000871 0.0011 ! 
Validation 73 15303.244 0.005 0.017 0.159 0.498 0.12 0.159 0.39 0.486 0.000872 0.00108 Wall time: 15303.244648643304 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 74 100 0.423 0.0163 0.0958 0.117 0.156 0.318 0.378 0.000711 0.000843 74 118 0.515 0.019 0.135 0.126 0.168 0.382 0.448 0.000853 0.001 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 74 100 0.302 0.0107 0.0885 0.0972 0.126 0.331 0.363 0.00074 0.00081 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 74 15512.565 0.005 0.0162 0.21 0.534 0.117 0.155 0.444 0.56 0.00099 0.00125 ! Validation 74 15512.565 0.005 0.017 0.1 0.441 0.12 0.159 0.318 0.386 0.00071 0.000862 Wall time: 15512.565592064057 ! Best model 74 0.441 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 75 100 0.47 0.0159 0.151 0.116 0.154 0.393 0.474 0.000878 0.00106 75 118 0.553 0.0178 0.197 0.122 0.163 0.482 0.541 0.00108 0.00121 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 75 100 0.232 0.0107 0.0171 0.0976 0.126 0.146 0.159 0.000326 0.000356 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 75 15721.853 0.005 0.0157 0.142 0.455 0.115 0.153 0.367 0.459 0.000819 0.00102 ! Validation 75 15721.853 0.005 0.0169 0.238 0.576 0.12 0.158 0.484 0.596 0.00108 0.00133 Wall time: 15721.853081203997 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 76 100 0.419 0.0154 0.11 0.114 0.152 0.337 0.404 0.000752 0.000902 76 118 0.522 0.0148 0.226 0.112 0.148 0.556 0.58 0.00124 0.00129 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 76 100 0.785 0.0102 0.581 0.0944 0.123 0.918 0.93 0.00205 0.00207 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 76 15931.221 0.005 0.0156 0.195 0.508 0.115 0.152 0.433 0.539 0.000967 0.0012 ! Validation 76 15931.221 0.005 0.0163 1.24 1.56 0.118 0.156 1.3 1.36 0.0029 0.00303 Wall time: 15931.221561430022 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 77 100 0.329 0.0143 0.0434 0.111 0.146 0.21 0.254 0.000469 0.000567 77 118 1.18 0.0157 0.868 0.116 0.153 1.12 1.14 0.00251 0.00254 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 77 100 1.28 0.0112 1.06 0.0992 0.129 1.24 1.25 0.00278 0.0028 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 77 16140.582 0.005 0.0154 0.226 0.533 0.114 0.151 0.457 0.574 0.00102 0.00128 ! Validation 77 16140.582 0.005 0.0174 0.42 0.768 0.122 0.161 0.682 0.791 0.00152 0.00176 Wall time: 16140.582655590959 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 78 100 0.326 0.0144 0.039 0.111 0.146 0.181 0.241 0.000404 0.000537 78 118 0.397 0.0157 0.0833 0.115 0.153 0.298 0.352 0.000666 0.000786 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 78 100 0.373 0.00983 0.176 0.0929 0.121 0.493 0.512 0.0011 0.00114 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 78 16349.956 0.005 0.0153 0.143 0.448 0.114 0.151 0.367 0.461 0.00082 0.00103 ! 
Validation 78 16349.956 0.005 0.0157 0.13 0.445 0.116 0.153 0.361 0.44 0.000806 0.000982 Wall time: 16349.956507646944 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 79 100 0.344 0.0145 0.0538 0.111 0.147 0.228 0.283 0.000509 0.000631 79 118 0.325 0.0134 0.0577 0.108 0.141 0.242 0.293 0.00054 0.000654 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 79 100 0.552 0.00951 0.362 0.0916 0.119 0.721 0.733 0.00161 0.00164 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 79 16559.424 0.005 0.0147 0.143 0.436 0.112 0.148 0.373 0.462 0.000832 0.00103 ! Validation 79 16559.424 0.005 0.0154 0.242 0.55 0.115 0.151 0.506 0.6 0.00113 0.00134 Wall time: 16559.42440383928 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 80 100 0.352 0.0145 0.0616 0.111 0.147 0.237 0.303 0.00053 0.000675 80 118 0.372 0.0164 0.043 0.116 0.156 0.185 0.253 0.000414 0.000564 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 80 100 0.356 0.00962 0.164 0.0921 0.12 0.472 0.493 0.00105 0.0011 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 80 16770.867 0.005 0.0148 0.23 0.526 0.112 0.148 0.484 0.586 0.00108 0.00131 ! Validation 80 16770.867 0.005 0.0156 0.108 0.419 0.115 0.152 0.327 0.4 0.000731 0.000893 Wall time: 16770.867179705296 ! Best model 80 0.419 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 81 100 0.514 0.0131 0.253 0.106 0.139 0.539 0.613 0.0012 0.00137 81 118 0.366 0.015 0.0661 0.113 0.149 0.265 0.313 0.000592 0.0007 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 81 100 0.228 0.00974 0.0327 0.0925 0.12 0.182 0.22 0.000406 0.000492 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 81 16980.211 0.005 0.0143 0.125 0.411 0.11 0.146 0.341 0.433 0.000761 0.000966 ! Validation 81 16980.211 0.005 0.0154 0.16 0.469 0.115 0.151 0.392 0.488 0.000875 0.00109 Wall time: 16980.211223425344 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 82 100 0.924 0.0148 0.629 0.112 0.148 0.928 0.967 0.00207 0.00216 82 118 0.305 0.0138 0.0299 0.109 0.143 0.177 0.211 0.000395 0.00047 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 82 100 0.299 0.00917 0.115 0.0903 0.117 0.389 0.414 0.000867 0.000925 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 82 17189.576 0.005 0.0141 0.175 0.458 0.11 0.145 0.416 0.512 0.000929 0.00114 ! Validation 82 17189.576 0.005 0.0149 0.0995 0.397 0.113 0.149 0.315 0.385 0.000704 0.000859 Wall time: 17189.576279731 ! Best model 82 0.397 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 83 100 0.378 0.0141 0.0971 0.11 0.145 0.303 0.38 0.000676 0.000848 83 118 0.338 0.0135 0.0692 0.108 0.141 0.276 0.321 0.000617 0.000716 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 83 100 0.488 0.00911 0.306 0.0899 0.116 0.662 0.674 0.00148 0.0015 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 83 17398.955 0.005 0.0139 0.15 0.427 0.109 0.144 0.381 0.473 0.000851 0.00106 ! 
Validation 83 17398.955 0.005 0.0147 0.177 0.471 0.112 0.148 0.428 0.513 0.000956 0.00115 Wall time: 17398.955628304277 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 84 100 0.659 0.0135 0.389 0.107 0.142 0.713 0.76 0.00159 0.0017 84 118 0.702 0.0126 0.45 0.105 0.137 0.761 0.818 0.0017 0.00183 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 84 100 0.197 0.00918 0.0135 0.0905 0.117 0.119 0.142 0.000265 0.000316 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 84 17608.426 0.005 0.0139 0.209 0.488 0.109 0.144 0.455 0.556 0.00102 0.00124 ! Validation 84 17608.426 0.005 0.0147 0.156 0.45 0.112 0.148 0.386 0.482 0.000862 0.00108 Wall time: 17608.426731680054 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 85 100 0.305 0.013 0.0451 0.106 0.139 0.198 0.259 0.000442 0.000578 85 118 0.471 0.016 0.152 0.116 0.154 0.408 0.475 0.00091 0.00106 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 85 100 0.465 0.00956 0.274 0.0924 0.119 0.624 0.638 0.00139 0.00142 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 85 17817.806 0.005 0.0135 0.153 0.424 0.107 0.142 0.376 0.477 0.00084 0.00106 ! Validation 85 17817.806 0.005 0.015 0.814 1.11 0.113 0.149 1.03 1.1 0.00231 0.00246 Wall time: 17817.806148335338 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 86 100 0.364 0.0132 0.0996 0.106 0.14 0.318 0.385 0.000711 0.000859 86 118 0.378 0.0127 0.124 0.105 0.137 0.324 0.43 0.000723 0.00096 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 86 100 0.214 0.00888 0.0358 0.0893 0.115 0.187 0.231 0.000417 0.000515 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 86 18027.186 0.005 0.0136 0.185 0.457 0.108 0.142 0.43 0.526 0.000959 0.00117 ! Validation 86 18027.186 0.005 0.0143 0.083 0.368 0.11 0.146 0.289 0.351 0.000645 0.000784 Wall time: 18027.18605800206 ! Best model 86 0.368 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 87 100 0.677 0.0128 0.421 0.105 0.138 0.735 0.791 0.00164 0.00177 87 118 0.406 0.0143 0.12 0.112 0.146 0.36 0.423 0.000803 0.000944 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 87 100 0.316 0.0084 0.148 0.0866 0.112 0.45 0.469 0.001 0.00105 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 87 18236.553 0.005 0.0133 0.157 0.424 0.107 0.141 0.391 0.483 0.000873 0.00108 ! Validation 87 18236.553 0.005 0.0138 0.111 0.387 0.109 0.143 0.332 0.406 0.000741 0.000906 Wall time: 18236.553522224072 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 88 100 0.297 0.0127 0.0422 0.104 0.138 0.198 0.251 0.000443 0.000559 88 118 0.375 0.014 0.0946 0.11 0.144 0.336 0.375 0.00075 0.000837 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 88 100 0.212 0.00833 0.0457 0.0863 0.111 0.231 0.261 0.000516 0.000582 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 88 18446.198 0.005 0.0127 0.0887 0.343 0.104 0.137 0.288 0.363 0.000644 0.00081 ! 
Validation 88 18446.198 0.005 0.0135 0.288 0.558 0.107 0.142 0.558 0.655 0.00125 0.00146 Wall time: 18446.198246830143 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 89 100 0.314 0.0129 0.0562 0.106 0.138 0.242 0.289 0.00054 0.000645 89 118 0.289 0.0116 0.0572 0.1 0.131 0.272 0.292 0.000606 0.000651 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 89 100 0.221 0.00824 0.0562 0.0861 0.111 0.261 0.289 0.000581 0.000645 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 89 18655.537 0.005 0.0128 0.163 0.419 0.104 0.138 0.4 0.493 0.000893 0.0011 ! Validation 89 18655.537 0.005 0.0135 0.0801 0.35 0.107 0.142 0.283 0.345 0.000631 0.000771 Wall time: 18655.537391384132 ! Best model 89 0.350 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 90 100 0.428 0.0124 0.179 0.103 0.136 0.457 0.517 0.00102 0.00115 90 118 0.67 0.0125 0.419 0.104 0.136 0.733 0.79 0.00164 0.00176 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 90 100 0.193 0.00823 0.0285 0.0862 0.111 0.174 0.206 0.000389 0.00046 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 90 18864.887 0.005 0.0126 0.162 0.414 0.104 0.137 0.388 0.487 0.000866 0.00109 ! Validation 90 18864.887 0.005 0.0133 0.2 0.467 0.107 0.141 0.451 0.545 0.00101 0.00122 Wall time: 18864.887447969988 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 91 100 0.454 0.0122 0.21 0.102 0.135 0.504 0.559 0.00112 0.00125 91 118 0.934 0.0127 0.68 0.104 0.138 0.984 1.01 0.0022 0.00224 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 91 100 0.213 0.00934 0.0257 0.0918 0.118 0.165 0.195 0.000368 0.000436 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 91 19074.228 0.005 0.0126 0.221 0.474 0.104 0.137 0.442 0.57 0.000986 0.00127 ! Validation 91 19074.228 0.005 0.0144 0.21 0.499 0.111 0.147 0.459 0.559 0.00102 0.00125 Wall time: 19074.228409552015 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 92 100 0.383 0.0117 0.148 0.1 0.132 0.409 0.469 0.000912 0.00105 92 118 0.279 0.0116 0.0466 0.1 0.131 0.227 0.263 0.000507 0.000587 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 92 100 0.203 0.00768 0.0497 0.0833 0.107 0.244 0.272 0.000544 0.000607 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 92 19283.540 0.005 0.0126 0.119 0.37 0.104 0.137 0.326 0.422 0.000728 0.000941 ! Validation 92 19283.540 0.005 0.0127 0.23 0.484 0.104 0.137 0.49 0.584 0.00109 0.0013 Wall time: 19283.54022611538 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 93 100 0.376 0.0128 0.12 0.104 0.138 0.363 0.423 0.00081 0.000944 93 118 0.448 0.0105 0.239 0.0958 0.125 0.524 0.596 0.00117 0.00133 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 93 100 0.29 0.00769 0.137 0.0832 0.107 0.435 0.451 0.000972 0.00101 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 93 19492.980 0.005 0.012 0.164 0.404 0.101 0.134 0.402 0.493 0.000897 0.0011 ! 
Validation 93 19492.980 0.005 0.0127 0.11 0.365 0.104 0.138 0.332 0.405 0.000741 0.000904 Wall time: 19492.98055598326 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 94 100 0.291 0.0118 0.0558 0.0998 0.132 0.22 0.288 0.000491 0.000643 94 118 0.434 0.011 0.214 0.0982 0.128 0.53 0.564 0.00118 0.00126 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 94 100 0.641 0.0077 0.487 0.0832 0.107 0.841 0.851 0.00188 0.0019 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 94 19702.318 0.005 0.0117 0.0963 0.331 0.1 0.132 0.299 0.377 0.000668 0.000841 ! Validation 94 19702.318 0.005 0.0126 0.264 0.517 0.104 0.137 0.549 0.627 0.00123 0.0014 Wall time: 19702.318567869253 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 95 100 0.475 0.012 0.234 0.101 0.134 0.531 0.59 0.00119 0.00132 95 118 0.288 0.0114 0.0586 0.0997 0.13 0.257 0.295 0.000574 0.000659 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 95 100 0.234 0.00726 0.0888 0.0811 0.104 0.341 0.363 0.000761 0.000811 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 95 19911.645 0.005 0.0116 0.129 0.362 0.0997 0.132 0.348 0.439 0.000777 0.00098 ! Validation 95 19911.645 0.005 0.0121 0.27 0.513 0.102 0.134 0.543 0.633 0.00121 0.00141 Wall time: 19911.645740634296 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 96 100 0.49 0.0115 0.261 0.0985 0.131 0.573 0.623 0.00128 0.00139 96 118 0.361 0.00975 0.166 0.0922 0.12 0.466 0.497 0.00104 0.00111 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 96 100 0.372 0.00755 0.221 0.0827 0.106 0.564 0.573 0.00126 0.00128 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 96 20120.965 0.005 0.0115 0.153 0.383 0.0993 0.131 0.392 0.476 0.000876 0.00106 ! Validation 96 20120.965 0.005 0.0123 0.117 0.364 0.103 0.135 0.341 0.418 0.00076 0.000933 Wall time: 20120.96590245515 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 97 100 0.397 0.011 0.178 0.0963 0.128 0.459 0.515 0.00102 0.00115 97 118 0.459 0.011 0.238 0.0971 0.128 0.498 0.595 0.00111 0.00133 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 97 100 0.342 0.00885 0.165 0.089 0.115 0.483 0.495 0.00108 0.00111 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 97 20330.289 0.005 0.0113 0.134 0.36 0.0981 0.129 0.338 0.446 0.000754 0.000995 ! Validation 97 20330.289 0.005 0.0137 1.15 1.43 0.109 0.143 1.17 1.31 0.00262 0.00293 Wall time: 20330.289332896005 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 98 100 0.285 0.011 0.0649 0.0974 0.128 0.245 0.311 0.000548 0.000693 98 118 0.38 0.0101 0.177 0.0935 0.123 0.476 0.514 0.00106 0.00115 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 98 100 0.161 0.0072 0.0175 0.0808 0.103 0.137 0.161 0.000306 0.00036 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 98 20540.459 0.005 0.0116 0.183 0.416 0.0998 0.132 0.409 0.522 0.000912 0.00117 ! 
Validation 98 20540.459 0.005 0.0118 0.187 0.423 0.101 0.132 0.427 0.527 0.000952 0.00118 Wall time: 20540.459723998327 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 99 100 0.258 0.0107 0.0437 0.0956 0.126 0.217 0.255 0.000483 0.000569 99 118 0.35 0.0113 0.124 0.0976 0.13 0.375 0.429 0.000838 0.000958 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 99 100 0.169 0.00698 0.0296 0.0797 0.102 0.18 0.21 0.000403 0.000469 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 99 20749.795 0.005 0.011 0.122 0.342 0.0969 0.128 0.345 0.427 0.000769 0.000953 ! Validation 99 20749.795 0.005 0.0116 0.118 0.35 0.0996 0.131 0.329 0.419 0.000733 0.000936 Wall time: 20749.79563351907 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 100 100 0.288 0.0104 0.0795 0.0948 0.125 0.284 0.344 0.000634 0.000767 100 118 0.318 0.0117 0.0829 0.0998 0.132 0.314 0.351 0.000701 0.000784 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 100 100 0.196 0.00694 0.0568 0.0793 0.102 0.272 0.291 0.000607 0.000649 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 100 20959.121 0.005 0.0111 0.152 0.374 0.0976 0.129 0.388 0.476 0.000866 0.00106 ! Validation 100 20959.121 0.005 0.0115 0.0762 0.307 0.0993 0.131 0.273 0.337 0.00061 0.000751 Wall time: 20959.12144750124 ! Best model 100 0.307 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 101 100 0.262 0.0104 0.0535 0.0946 0.125 0.223 0.282 0.000497 0.00063 101 118 0.276 0.0113 0.0509 0.0972 0.129 0.255 0.275 0.000568 0.000614 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 101 100 0.149 0.00679 0.0134 0.0786 0.101 0.12 0.141 0.000268 0.000315 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 101 21168.567 0.005 0.0107 0.0966 0.311 0.0956 0.126 0.306 0.38 0.000682 0.000847 ! Validation 101 21168.567 0.005 0.0112 0.087 0.311 0.098 0.129 0.282 0.36 0.00063 0.000803 Wall time: 21168.567291424144 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 102 100 0.243 0.01 0.0432 0.0922 0.122 0.199 0.254 0.000443 0.000566 102 118 0.914 0.01 0.714 0.0916 0.122 1.02 1.03 0.00227 0.0023 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 102 100 0.556 0.00763 0.404 0.0829 0.107 0.769 0.775 0.00172 0.00173 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 102 21377.852 0.005 0.0106 0.124 0.336 0.095 0.125 0.331 0.423 0.00074 0.000944 ! Validation 102 21377.852 0.005 0.012 1.1 1.34 0.101 0.133 1.22 1.28 0.00273 0.00286 Wall time: 21377.852275107987 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 103 100 0.313 0.0104 0.106 0.0938 0.124 0.347 0.397 0.000775 0.000885 103 118 0.295 0.00997 0.0961 0.0924 0.122 0.313 0.378 0.000698 0.000844 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 103 100 0.185 0.00655 0.0545 0.0771 0.0987 0.269 0.285 0.0006 0.000636 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 103 21587.136 0.005 0.0104 0.13 0.339 0.0943 0.125 0.349 0.44 0.00078 0.000982 ! Validation 103 21587.136 0.005 0.0109 0.0688 0.286 0.0965 0.127 0.264 0.32 0.00059 0.000714 Wall time: 21587.13622866012 ! 
Best model 103 0.286 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 104 100 0.4 0.0104 0.192 0.0945 0.124 0.48 0.534 0.00107 0.00119 104 118 0.223 0.00937 0.0357 0.0904 0.118 0.189 0.23 0.000422 0.000514 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 104 100 0.146 0.00688 0.00825 0.0788 0.101 0.0901 0.111 0.000201 0.000247 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 104 21796.338 0.005 0.0106 0.186 0.399 0.0952 0.126 0.429 0.528 0.000957 0.00118 ! Validation 104 21796.338 0.005 0.0112 0.0682 0.293 0.0981 0.129 0.256 0.318 0.000572 0.000711 Wall time: 21796.33834269317 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 105 100 0.26 0.0101 0.0591 0.0926 0.122 0.23 0.297 0.000513 0.000662 105 118 0.278 0.00997 0.0787 0.092 0.122 0.289 0.342 0.000645 0.000763 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 105 100 0.232 0.00647 0.103 0.0766 0.0981 0.379 0.391 0.000845 0.000874 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 105 22005.529 0.005 0.0101 0.0709 0.273 0.0927 0.122 0.256 0.325 0.000572 0.000724 ! Validation 105 22005.529 0.005 0.0108 0.359 0.574 0.0958 0.126 0.657 0.73 0.00147 0.00163 Wall time: 22005.52919346094 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 106 100 0.257 0.0104 0.048 0.0942 0.125 0.207 0.267 0.000462 0.000596 106 118 0.256 0.00981 0.0602 0.0913 0.121 0.254 0.299 0.000567 0.000668 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 106 100 0.135 0.00639 0.00723 0.0765 0.0975 0.0835 0.104 0.000186 0.000232 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 106 22214.753 0.005 0.0101 0.113 0.314 0.0927 0.122 0.333 0.41 0.000742 0.000916 ! Validation 106 22214.753 0.005 0.0106 0.0792 0.291 0.0951 0.125 0.274 0.343 0.000612 0.000766 Wall time: 22214.75330148125 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 107 100 0.298 0.0106 0.0857 0.0948 0.126 0.305 0.357 0.00068 0.000797 107 118 0.273 0.00942 0.0846 0.09 0.118 0.297 0.355 0.000664 0.000792 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 107 100 0.14 0.00642 0.0117 0.0764 0.0977 0.112 0.132 0.00025 0.000294 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 107 22424.073 0.005 0.0103 0.168 0.373 0.0936 0.124 0.394 0.501 0.000879 0.00112 ! Validation 107 22424.073 0.005 0.0107 0.0759 0.289 0.0955 0.126 0.266 0.336 0.000594 0.00075 Wall time: 22424.07378569804 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 108 100 0.268 0.0093 0.0821 0.0893 0.118 0.292 0.349 0.000652 0.00078 108 118 0.266 0.0106 0.0541 0.094 0.126 0.264 0.284 0.00059 0.000633 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 108 100 0.134 0.0062 0.00953 0.0752 0.096 0.1 0.119 0.000223 0.000266 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 108 22633.498 0.005 0.00986 0.106 0.303 0.0916 0.121 0.323 0.397 0.000721 0.000886 ! Validation 108 22633.498 0.005 0.0104 0.0729 0.281 0.0942 0.124 0.261 0.329 0.000584 0.000735 Wall time: 22633.498852634337 ! 
Best model 108 0.281 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 109 100 0.228 0.0094 0.0402 0.0895 0.118 0.195 0.244 0.000434 0.000545 109 118 0.484 0.0103 0.278 0.0922 0.124 0.607 0.643 0.00136 0.00144 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 109 100 0.642 0.00593 0.523 0.0733 0.0939 0.877 0.882 0.00196 0.00197 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 109 22842.857 0.005 0.00949 0.0667 0.256 0.0898 0.119 0.25 0.311 0.000558 0.000695 ! Validation 109 22842.857 0.005 0.00996 0.402 0.601 0.0921 0.122 0.709 0.773 0.00158 0.00173 Wall time: 22842.857313163113 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 110 100 0.267 0.00974 0.0727 0.0909 0.12 0.283 0.329 0.000632 0.000734 110 118 0.312 0.00942 0.123 0.0894 0.118 0.368 0.428 0.000821 0.000956 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 110 100 0.378 0.00635 0.251 0.0759 0.0972 0.605 0.611 0.00135 0.00136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 110 23052.207 0.005 0.00965 0.154 0.347 0.0906 0.12 0.395 0.478 0.000883 0.00107 ! Validation 110 23052.207 0.005 0.0104 0.154 0.363 0.0943 0.124 0.407 0.479 0.000909 0.00107 Wall time: 23052.20798440324 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 111 100 0.375 0.00939 0.187 0.0893 0.118 0.478 0.528 0.00107 0.00118 111 118 0.22 0.00975 0.0253 0.0906 0.12 0.134 0.194 0.0003 0.000433 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 111 100 0.239 0.00594 0.121 0.0733 0.0939 0.414 0.424 0.000925 0.000946 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 111 23261.554 0.005 0.00947 0.104 0.294 0.0897 0.119 0.32 0.395 0.000714 0.000882 ! Validation 111 23261.554 0.005 0.00988 0.0833 0.281 0.0918 0.121 0.29 0.352 0.000648 0.000786 Wall time: 23261.554501993116 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 112 100 0.226 0.00859 0.0545 0.0856 0.113 0.237 0.285 0.000528 0.000636 112 118 0.217 0.00808 0.0555 0.0838 0.11 0.215 0.287 0.000481 0.000641 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 112 100 0.233 0.00583 0.117 0.073 0.0931 0.409 0.417 0.000914 0.00093 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 112 23470.977 0.005 0.00919 0.0793 0.263 0.0884 0.117 0.273 0.344 0.00061 0.000767 ! Validation 112 23470.977 0.005 0.00976 0.0844 0.28 0.0912 0.121 0.294 0.354 0.000655 0.000791 Wall time: 23470.977175153326 ! Best model 112 0.280 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 113 100 0.214 0.00927 0.0283 0.0882 0.117 0.157 0.205 0.00035 0.000458 113 118 0.415 0.00911 0.233 0.0882 0.116 0.564 0.589 0.00126 0.00131 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 113 100 0.173 0.0059 0.055 0.0732 0.0937 0.275 0.286 0.000614 0.000638 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 113 23680.340 0.005 0.00928 0.148 0.334 0.0888 0.118 0.369 0.468 0.000825 0.00105 ! Validation 113 23680.340 0.005 0.00973 0.0716 0.266 0.0911 0.12 0.264 0.326 0.000589 0.000729 Wall time: 23680.340890647378 ! 
Best model 113 0.266 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 114 100 0.433 0.00996 0.234 0.092 0.122 0.551 0.59 0.00123 0.00132 114 118 0.357 0.0102 0.152 0.094 0.123 0.446 0.475 0.000995 0.00106 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 114 100 0.147 0.00702 0.00644 0.0788 0.102 0.0849 0.0979 0.00019 0.000218 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 114 23889.678 0.005 0.00916 0.122 0.305 0.0882 0.117 0.341 0.426 0.000761 0.000951 ! Validation 114 23889.678 0.005 0.0108 0.0642 0.28 0.0959 0.127 0.25 0.309 0.000558 0.000689 Wall time: 23889.67866322724 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 115 100 0.358 0.00918 0.175 0.0883 0.117 0.476 0.51 0.00106 0.00114 115 118 0.189 0.0078 0.033 0.0819 0.108 0.19 0.222 0.000424 0.000495 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 115 100 0.12 0.00579 0.00454 0.0727 0.0928 0.0702 0.0821 0.000157 0.000183 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 115 24099.386 0.005 0.00902 0.104 0.285 0.0876 0.116 0.32 0.395 0.000715 0.000881 ! Validation 115 24099.386 0.005 0.00964 0.0688 0.262 0.0907 0.12 0.255 0.32 0.000569 0.000714 Wall time: 24099.38683404727 ! Best model 115 0.262 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 116 100 0.215 0.0083 0.0486 0.0845 0.111 0.226 0.269 0.000504 0.0006 116 118 0.206 0.00908 0.0243 0.0874 0.116 0.13 0.19 0.000289 0.000424 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 116 100 0.115 0.00556 0.00407 0.0711 0.0909 0.0662 0.0778 0.000148 0.000174 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 116 24308.727 0.005 0.00884 0.0922 0.269 0.0866 0.115 0.298 0.371 0.000664 0.000829 ! Validation 116 24308.727 0.005 0.00933 0.103 0.29 0.0891 0.118 0.31 0.392 0.000692 0.000875 Wall time: 24308.727175750304 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 117 100 0.331 0.00951 0.141 0.089 0.119 0.406 0.457 0.000906 0.00102 117 118 0.37 0.00952 0.179 0.0893 0.119 0.456 0.516 0.00102 0.00115 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 117 100 0.571 0.00583 0.454 0.0723 0.0931 0.818 0.822 0.00183 0.00183 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 117 24518.158 0.005 0.00885 0.108 0.285 0.0866 0.115 0.319 0.4 0.000713 0.000893 ! Validation 117 24518.158 0.005 0.00965 0.435 0.628 0.0907 0.12 0.755 0.805 0.00169 0.0018 Wall time: 24518.15832545329 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 118 100 0.255 0.00881 0.079 0.0858 0.114 0.268 0.343 0.000597 0.000765 118 118 0.345 0.00706 0.204 0.0787 0.102 0.525 0.551 0.00117 0.00123 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 118 100 0.479 0.00596 0.36 0.0731 0.0941 0.727 0.731 0.00162 0.00163 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 118 24738.603 0.005 0.00868 0.104 0.278 0.0858 0.114 0.316 0.392 0.000705 0.000875 ! 
Validation 118 24738.603 0.005 0.00969 0.499 0.693 0.0909 0.12 0.812 0.861 0.00181 0.00192 Wall time: 24738.60374747729 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 119 100 0.279 0.00796 0.119 0.0826 0.109 0.376 0.422 0.000838 0.000941 119 118 0.281 0.00821 0.117 0.0826 0.11 0.376 0.416 0.000838 0.000929 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 119 100 0.341 0.0054 0.233 0.0703 0.0896 0.585 0.589 0.00131 0.00131 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 119 24947.821 0.005 0.00854 0.0759 0.247 0.0851 0.113 0.266 0.335 0.000593 0.000748 ! Validation 119 24947.821 0.005 0.00894 0.512 0.691 0.0872 0.115 0.82 0.873 0.00183 0.00195 Wall time: 24947.821178064216 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 120 100 0.341 0.009 0.161 0.0867 0.116 0.455 0.49 0.00102 0.00109 120 118 0.224 0.00884 0.0472 0.086 0.115 0.21 0.265 0.000469 0.000591 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 120 100 0.113 0.00533 0.00607 0.0696 0.089 0.0716 0.095 0.00016 0.000212 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 120 25157.040 0.005 0.00844 0.109 0.278 0.0845 0.112 0.328 0.404 0.000731 0.000902 ! Validation 120 25157.040 0.005 0.00893 0.0641 0.243 0.087 0.115 0.242 0.309 0.000541 0.000689 Wall time: 25157.04094988294 ! Best model 120 0.243 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 121 100 0.198 0.0085 0.028 0.0846 0.112 0.178 0.204 0.000397 0.000456 121 118 0.215 0.00875 0.0406 0.086 0.114 0.204 0.246 0.000456 0.000548 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 121 100 0.122 0.00525 0.0166 0.069 0.0884 0.133 0.157 0.000296 0.00035 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 121 25366.247 0.005 0.00842 0.111 0.28 0.0844 0.112 0.319 0.408 0.000713 0.00091 ! Validation 121 25366.247 0.005 0.00885 0.0653 0.242 0.0868 0.115 0.245 0.312 0.000546 0.000695 Wall time: 25366.247591628227 ! Best model 121 0.242 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 122 100 0.198 0.00778 0.0425 0.0812 0.108 0.211 0.251 0.000471 0.000561 122 118 0.233 0.0082 0.0693 0.0833 0.11 0.246 0.321 0.00055 0.000716 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 122 100 0.137 0.00531 0.0307 0.0693 0.0889 0.2 0.214 0.000446 0.000477 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 122 25575.556 0.005 0.00825 0.11 0.274 0.0835 0.111 0.319 0.404 0.000712 0.000902 ! Validation 122 25575.556 0.005 0.00887 0.125 0.302 0.0869 0.115 0.355 0.431 0.000793 0.000963 Wall time: 25575.556937876157 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 123 100 0.225 0.00845 0.0557 0.0854 0.112 0.228 0.288 0.000509 0.000643 123 118 0.308 0.00937 0.12 0.0874 0.118 0.382 0.423 0.000852 0.000944 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 123 100 0.128 0.00547 0.019 0.0704 0.0902 0.158 0.168 0.000353 0.000375 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 123 25784.871 0.005 0.00822 0.1 0.264 0.0833 0.111 0.299 0.385 0.000666 0.00086 ! 
Validation 123 25784.871 0.005 0.00892 0.138 0.317 0.087 0.115 0.376 0.453 0.000839 0.00101 Wall time: 25784.871658330318 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 124 100 0.336 0.00809 0.174 0.0825 0.11 0.472 0.509 0.00105 0.00114 124 118 0.203 0.00804 0.0426 0.0831 0.109 0.209 0.252 0.000465 0.000562 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 124 100 0.131 0.0052 0.0274 0.0689 0.088 0.184 0.202 0.000411 0.00045 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 124 25994.063 0.005 0.00836 0.124 0.291 0.0842 0.112 0.354 0.43 0.000791 0.000961 ! Validation 124 25994.063 0.005 0.00868 0.146 0.32 0.0859 0.114 0.389 0.467 0.000867 0.00104 Wall time: 25994.063703488093 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 125 100 0.188 0.00766 0.0351 0.0805 0.107 0.187 0.229 0.000418 0.00051 125 118 0.177 0.00824 0.0125 0.0835 0.111 0.106 0.137 0.000238 0.000305 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 125 100 0.112 0.005 0.012 0.0671 0.0862 0.121 0.133 0.00027 0.000298 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 125 26203.246 0.005 0.00792 0.0532 0.212 0.0818 0.109 0.225 0.282 0.000503 0.00063 ! Validation 125 26203.246 0.005 0.00841 0.0491 0.217 0.0846 0.112 0.216 0.27 0.000482 0.000603 Wall time: 26203.24644506816 ! Best model 125 0.217 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 126 100 0.227 0.00797 0.068 0.0821 0.109 0.277 0.318 0.000618 0.00071 126 118 0.238 0.00875 0.0631 0.0855 0.114 0.246 0.306 0.00055 0.000684 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 126 100 0.517 0.00497 0.418 0.067 0.086 0.785 0.788 0.00175 0.00176 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 126 26412.529 0.005 0.00801 0.114 0.274 0.0823 0.109 0.32 0.412 0.000715 0.000919 ! Validation 126 26412.529 0.005 0.00841 0.318 0.486 0.0844 0.112 0.635 0.688 0.00142 0.00154 Wall time: 26412.52931561414 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 127 100 0.191 0.00757 0.0394 0.0802 0.106 0.199 0.242 0.000445 0.00054 127 118 0.209 0.00866 0.0353 0.0845 0.114 0.16 0.229 0.000356 0.000511 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 127 100 0.177 0.0048 0.0811 0.0658 0.0845 0.34 0.347 0.00076 0.000775 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 127 26621.707 0.005 0.00795 0.0764 0.235 0.0819 0.109 0.263 0.338 0.000587 0.000754 ! Validation 127 26621.707 0.005 0.00814 0.062 0.225 0.083 0.11 0.249 0.304 0.000556 0.000678 Wall time: 26621.7072025612 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 128 100 0.468 0.00809 0.306 0.0826 0.11 0.629 0.675 0.0014 0.00151 128 118 0.245 0.00785 0.0876 0.0811 0.108 0.335 0.361 0.000747 0.000806 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 128 100 0.195 0.00511 0.0931 0.0684 0.0872 0.369 0.372 0.000824 0.000831 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 128 26830.873 0.005 0.00768 0.106 0.26 0.0805 0.107 0.326 0.397 0.000727 0.000886 ! 
Validation 128 26830.873 0.005 0.0084 0.0944 0.262 0.0845 0.112 0.311 0.375 0.000694 0.000836 Wall time: 26830.873531810008 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 129 100 0.204 0.00858 0.032 0.0845 0.113 0.174 0.218 0.000389 0.000487 129 118 0.453 0.00777 0.298 0.0815 0.107 0.626 0.665 0.0014 0.00148 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 129 100 0.459 0.00506 0.358 0.0675 0.0867 0.727 0.73 0.00162 0.00163 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 129 27040.041 0.005 0.00782 0.105 0.262 0.0813 0.108 0.313 0.393 0.000699 0.000878 ! Validation 129 27040.041 0.005 0.00841 0.234 0.403 0.0846 0.112 0.521 0.59 0.00116 0.00132 Wall time: 27040.04126149416 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 130 100 0.217 0.008 0.0567 0.0823 0.109 0.237 0.29 0.000529 0.000648 130 118 0.204 0.00836 0.0372 0.0831 0.112 0.187 0.235 0.000417 0.000525 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 130 100 0.113 0.00488 0.0155 0.0666 0.0852 0.14 0.152 0.000312 0.000339 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 130 27249.219 0.005 0.00783 0.0881 0.245 0.0813 0.108 0.296 0.363 0.000661 0.00081 ! Validation 130 27249.219 0.005 0.0081 0.112 0.274 0.0829 0.11 0.328 0.408 0.000731 0.00091 Wall time: 27249.219575043302 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 131 100 0.172 0.00759 0.0205 0.0797 0.106 0.144 0.175 0.00032 0.00039 131 118 0.189 0.00739 0.0415 0.0795 0.105 0.211 0.249 0.000472 0.000555 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 131 100 0.0989 0.00469 0.00507 0.0651 0.0835 0.0726 0.0868 0.000162 0.000194 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 131 27458.486 0.005 0.00756 0.0721 0.223 0.0798 0.106 0.26 0.328 0.000581 0.000732 ! Validation 131 27458.486 0.005 0.00788 0.0693 0.227 0.0818 0.108 0.254 0.321 0.000568 0.000716 Wall time: 27458.48656206811 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 132 100 0.272 0.00734 0.125 0.0791 0.104 0.372 0.431 0.000831 0.000962 132 118 0.312 0.00809 0.15 0.0821 0.11 0.44 0.472 0.000982 0.00105 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 132 100 0.116 0.00495 0.0173 0.067 0.0858 0.146 0.16 0.000325 0.000358 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 132 27667.669 0.005 0.00749 0.119 0.268 0.0794 0.105 0.345 0.419 0.000771 0.000936 ! Validation 132 27667.669 0.005 0.00824 0.0632 0.228 0.0835 0.111 0.238 0.307 0.000532 0.000685 Wall time: 27667.66908290796 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 133 100 0.241 0.0071 0.0994 0.0776 0.103 0.343 0.384 0.000767 0.000858 133 118 0.183 0.0068 0.0472 0.0764 0.101 0.231 0.265 0.000517 0.000591 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 133 100 0.123 0.00459 0.0311 0.0645 0.0826 0.203 0.215 0.000453 0.00048 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 133 27876.841 0.005 0.00731 0.0703 0.216 0.0785 0.104 0.261 0.324 0.000583 0.000723 ! Validation 133 27876.841 0.005 0.00773 0.0471 0.202 0.0808 0.107 0.213 0.265 0.000474 0.000591 Wall time: 27876.841197146103 ! 
Best model 133 0.202 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 134 100 0.165 0.00708 0.023 0.0775 0.103 0.148 0.185 0.00033 0.000413 134 118 0.184 0.0066 0.052 0.075 0.0991 0.225 0.278 0.000502 0.000621 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 134 100 0.0943 0.00451 0.00423 0.064 0.0819 0.0665 0.0793 0.000148 0.000177 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 134 28086.019 0.005 0.00732 0.0899 0.236 0.0785 0.104 0.296 0.366 0.000662 0.000817 ! Validation 134 28086.019 0.005 0.00761 0.108 0.26 0.0803 0.106 0.312 0.4 0.000697 0.000893 Wall time: 28086.019659728277 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 135 100 0.166 0.00683 0.0295 0.0763 0.101 0.17 0.209 0.000379 0.000468 135 118 0.348 0.00693 0.21 0.0764 0.102 0.536 0.558 0.0012 0.00125 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 135 100 0.383 0.00461 0.291 0.0642 0.0828 0.654 0.657 0.00146 0.00147 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 135 28295.635 0.005 0.00717 0.0731 0.216 0.0777 0.103 0.259 0.328 0.000578 0.000731 ! Validation 135 28295.635 0.005 0.00775 0.256 0.411 0.0809 0.107 0.548 0.617 0.00122 0.00138 Wall time: 28295.635455525015 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 136 100 0.324 0.00725 0.179 0.0783 0.104 0.489 0.516 0.00109 0.00115 136 118 0.159 0.00721 0.0151 0.078 0.104 0.135 0.15 0.000301 0.000334 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 136 100 0.164 0.00451 0.0735 0.0637 0.0819 0.325 0.331 0.000726 0.000738 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 136 28505.380 0.005 0.00742 0.115 0.263 0.0791 0.105 0.338 0.415 0.000755 0.000926 ! Validation 136 28505.380 0.005 0.00771 0.06 0.214 0.0807 0.107 0.242 0.299 0.00054 0.000667 Wall time: 28505.38030030718 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 137 100 0.175 0.00702 0.0349 0.0768 0.102 0.183 0.228 0.000409 0.000509 137 118 0.215 0.00868 0.041 0.085 0.114 0.224 0.247 0.0005 0.000551 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 137 100 0.16 0.00462 0.0675 0.0648 0.0829 0.31 0.317 0.000693 0.000707 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 137 28714.708 0.005 0.00708 0.0757 0.217 0.0771 0.103 0.261 0.336 0.000584 0.00075 ! Validation 137 28714.708 0.005 0.00765 0.259 0.412 0.0805 0.107 0.56 0.62 0.00125 0.00138 Wall time: 28714.708625470288 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 138 100 0.179 0.00673 0.0446 0.0757 0.1 0.209 0.257 0.000467 0.000575 138 118 0.482 0.00769 0.329 0.0802 0.107 0.626 0.699 0.0014 0.00156 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 138 100 0.176 0.00441 0.0876 0.0632 0.0809 0.357 0.361 0.000796 0.000806 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 138 28924.930 0.005 0.00696 0.0762 0.215 0.0765 0.102 0.274 0.333 0.000611 0.000743 ! 
Validation 138 28924.930 0.005 0.0074 0.285 0.433 0.0791 0.105 0.591 0.651 0.00132 0.00145 Wall time: 28924.930120720994 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 139 100 0.15 0.00646 0.0205 0.0738 0.098 0.126 0.175 0.000282 0.00039 139 118 0.243 0.00989 0.0453 0.0909 0.121 0.197 0.26 0.000439 0.000579 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 139 100 0.237 0.00577 0.122 0.0722 0.0926 0.42 0.426 0.000937 0.00095 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 139 29134.424 0.005 0.00698 0.091 0.231 0.0765 0.102 0.291 0.368 0.000649 0.000822 ! Validation 139 29134.424 0.005 0.0089 0.518 0.696 0.0871 0.115 0.818 0.878 0.00183 0.00196 Wall time: 29134.424111722037 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 140 100 0.204 0.00688 0.066 0.0763 0.101 0.279 0.313 0.000622 0.0007 140 118 0.151 0.00611 0.0291 0.072 0.0953 0.179 0.208 0.000399 0.000465 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 140 100 0.108 0.00446 0.0192 0.0635 0.0814 0.16 0.169 0.000358 0.000378 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 140 29343.725 0.005 0.00705 0.0857 0.227 0.0771 0.102 0.283 0.358 0.000632 0.000799 ! Validation 140 29343.725 0.005 0.00749 0.142 0.292 0.0795 0.106 0.387 0.46 0.000864 0.00103 Wall time: 29343.725562370382 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 141 100 0.168 0.00622 0.0434 0.0724 0.0962 0.213 0.254 0.000476 0.000567 141 118 0.121 0.00533 0.014 0.0681 0.089 0.119 0.144 0.000266 0.000322 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 141 100 0.172 0.00413 0.0898 0.0612 0.0784 0.363 0.365 0.00081 0.000816 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 141 29553.386 0.005 0.00675 0.055 0.19 0.0754 0.1 0.233 0.287 0.000519 0.00064 ! Validation 141 29553.386 0.005 0.00706 0.267 0.408 0.0772 0.102 0.553 0.63 0.00123 0.00141 Wall time: 29553.386903088074 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 142 100 0.303 0.00664 0.17 0.0749 0.0994 0.48 0.503 0.00107 0.00112 142 118 0.177 0.00646 0.0481 0.0742 0.098 0.227 0.268 0.000507 0.000597 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 142 100 0.233 0.00446 0.144 0.0635 0.0815 0.458 0.462 0.00102 0.00103 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 142 29762.708 0.005 0.00705 0.124 0.265 0.077 0.102 0.342 0.43 0.000764 0.00096 ! Validation 142 29762.708 0.005 0.00736 0.303 0.45 0.0789 0.105 0.62 0.671 0.00138 0.0015 Wall time: 29762.70897624921 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 143 100 0.242 0.00687 0.105 0.0764 0.101 0.333 0.395 0.000744 0.000882 143 118 0.151 0.006 0.0305 0.0714 0.0945 0.166 0.213 0.00037 0.000476 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 143 100 0.0951 0.00432 0.00867 0.0623 0.0802 0.104 0.114 0.000232 0.000253 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 143 29972.032 0.005 0.00676 0.078 0.213 0.0754 0.1 0.274 0.341 0.000611 0.000762 ! Validation 143 29972.032 0.005 0.00723 0.0542 0.199 0.0781 0.104 0.225 0.284 0.000502 0.000634 Wall time: 29972.032943899278 ! 
Best model 143 0.199 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 144 100 0.161 0.00674 0.0257 0.0752 0.1 0.153 0.196 0.000342 0.000437 144 118 0.15 0.00605 0.0286 0.0715 0.0949 0.164 0.206 0.000367 0.00046 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 144 100 0.0846 0.00405 0.00366 0.0604 0.0776 0.0588 0.0738 0.000131 0.000165 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 144 30181.374 0.005 0.00652 0.0525 0.183 0.074 0.0985 0.222 0.28 0.000495 0.000625 ! Validation 144 30181.374 0.005 0.00687 0.0826 0.22 0.0761 0.101 0.279 0.351 0.000622 0.000782 Wall time: 30181.37407871196 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 145 100 0.182 0.00658 0.0501 0.074 0.0989 0.238 0.273 0.000531 0.00061 145 118 0.229 0.00716 0.0856 0.0772 0.103 0.333 0.357 0.000743 0.000796 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 145 100 0.114 0.00446 0.0248 0.0631 0.0814 0.187 0.192 0.000418 0.000429 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 145 30390.818 0.005 0.00652 0.0928 0.223 0.0739 0.0984 0.291 0.372 0.000648 0.000829 ! Validation 145 30390.818 0.005 0.00741 0.0606 0.209 0.0792 0.105 0.241 0.3 0.000537 0.00067 Wall time: 30390.818093256094 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 146 100 0.146 0.00606 0.0245 0.0714 0.0949 0.155 0.191 0.000346 0.000426 146 118 0.171 0.00529 0.065 0.0671 0.0887 0.249 0.311 0.000555 0.000694 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 146 100 0.173 0.00403 0.0921 0.0604 0.0774 0.366 0.37 0.000818 0.000826 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 146 30600.589 0.005 0.00665 0.0793 0.212 0.0748 0.0995 0.272 0.344 0.000606 0.000767 ! Validation 146 30600.589 0.005 0.0068 0.072 0.208 0.0756 0.101 0.266 0.327 0.000595 0.000731 Wall time: 30600.589677792042 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 147 100 0.152 0.00661 0.0195 0.0741 0.0992 0.134 0.17 0.000298 0.00038 147 118 0.158 0.00566 0.0444 0.0695 0.0918 0.224 0.257 0.000499 0.000573 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 147 100 0.0843 0.00405 0.00328 0.0604 0.0776 0.0583 0.0698 0.00013 0.000156 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 147 30809.907 0.005 0.00638 0.0591 0.187 0.0732 0.0975 0.24 0.297 0.000536 0.000662 ! Validation 147 30809.907 0.005 0.0068 0.0998 0.236 0.0757 0.101 0.308 0.385 0.000688 0.00086 Wall time: 30809.907771282364 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 148 100 0.322 0.00671 0.187 0.0752 0.0999 0.499 0.528 0.00111 0.00118 148 118 0.128 0.00568 0.0142 0.0701 0.0919 0.112 0.145 0.000251 0.000324 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 148 100 0.126 0.00454 0.0353 0.0635 0.0822 0.222 0.229 0.000497 0.000511 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 148 31019.217 0.005 0.00644 0.0835 0.212 0.0736 0.0979 0.293 0.353 0.000654 0.000789 ! 
Validation 148 31019.217 0.005 0.00726 0.0988 0.244 0.0782 0.104 0.312 0.383 0.000697 0.000856 Wall time: 31019.217067226302 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 149 100 0.146 0.00653 0.0154 0.0736 0.0985 0.123 0.151 0.000275 0.000337 149 118 0.215 0.00558 0.103 0.0692 0.0911 0.332 0.392 0.00074 0.000875 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 149 100 0.116 0.00407 0.0344 0.0601 0.0778 0.214 0.226 0.000478 0.000505 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 149 31228.554 0.005 0.00627 0.0613 0.187 0.0725 0.0966 0.244 0.301 0.000544 0.000672 ! Validation 149 31228.554 0.005 0.00678 0.151 0.287 0.0755 0.1 0.408 0.474 0.000911 0.00106 Wall time: 31228.554606255144 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 150 100 0.25 0.0057 0.136 0.0691 0.092 0.422 0.45 0.000941 0.001 150 118 0.413 0.00713 0.27 0.0772 0.103 0.603 0.634 0.00135 0.00141 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 150 100 0.468 0.00442 0.38 0.0629 0.0811 0.75 0.751 0.00167 0.00168 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 150 31437.996 0.005 0.00631 0.0773 0.203 0.0727 0.0968 0.273 0.336 0.000609 0.00075 ! Validation 150 31437.996 0.005 0.00714 0.277 0.42 0.0779 0.103 0.585 0.642 0.00131 0.00143 Wall time: 31437.996199279092 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 151 100 0.155 0.00615 0.0322 0.0719 0.0956 0.183 0.219 0.00041 0.000488 151 118 0.167 0.00616 0.0433 0.0719 0.0957 0.209 0.254 0.000468 0.000567 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 151 100 0.131 0.0041 0.0489 0.0604 0.0781 0.263 0.27 0.000586 0.000602 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 151 31647.334 0.005 0.00616 0.0508 0.174 0.0718 0.0957 0.215 0.275 0.00048 0.000614 ! Validation 151 31647.334 0.005 0.00672 0.124 0.258 0.0752 0.1 0.37 0.429 0.000826 0.000958 Wall time: 31647.334366244264 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 152 100 0.209 0.00577 0.0937 0.0695 0.0927 0.341 0.373 0.000762 0.000833 152 118 0.211 0.00641 0.0832 0.0734 0.0976 0.315 0.352 0.000704 0.000785 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 152 100 0.151 0.00391 0.0732 0.0589 0.0763 0.325 0.33 0.000726 0.000736 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 152 31856.663 0.005 0.00615 0.0887 0.212 0.0718 0.0956 0.293 0.363 0.000654 0.000811 ! Validation 152 31856.663 0.005 0.00662 0.193 0.326 0.0747 0.0992 0.479 0.536 0.00107 0.0012 Wall time: 31856.663919300307 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 153 100 0.22 0.00597 0.1 0.0709 0.0942 0.347 0.386 0.000774 0.000862 153 118 0.133 0.0061 0.0112 0.0721 0.0952 0.107 0.129 0.00024 0.000288 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 153 100 0.0829 0.00389 0.00521 0.0589 0.076 0.0717 0.088 0.00016 0.000197 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 153 32065.950 0.005 0.00608 0.0645 0.186 0.0714 0.0951 0.253 0.311 0.000565 0.000693 ! Validation 153 32065.950 0.005 0.0065 0.0641 0.194 0.0739 0.0983 0.242 0.309 0.000539 0.000689 Wall time: 32065.951007053256 ! 
Best model 153 0.194 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 154 100 0.167 0.00659 0.0353 0.0743 0.099 0.198 0.229 0.000443 0.000512 154 118 0.156 0.0054 0.0482 0.0685 0.0897 0.249 0.268 0.000555 0.000598 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 154 100 0.123 0.00404 0.0427 0.0603 0.0775 0.246 0.252 0.000549 0.000563 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 154 32275.803 0.005 0.00657 0.133 0.264 0.0742 0.0989 0.352 0.446 0.000786 0.000995 ! Validation 154 32275.803 0.005 0.00661 0.0477 0.18 0.0746 0.0991 0.214 0.266 0.000477 0.000594 Wall time: 32275.803132087924 ! Best model 154 0.180 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 155 100 0.282 0.00692 0.143 0.0758 0.101 0.417 0.462 0.000932 0.00103 155 118 0.2 0.0061 0.0783 0.0726 0.0952 0.322 0.341 0.000718 0.000762 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 155 100 0.096 0.00401 0.0158 0.06 0.0772 0.139 0.153 0.00031 0.000342 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 155 32485.304 0.005 0.0061 0.0621 0.184 0.0715 0.0952 0.239 0.304 0.000532 0.000678 ! Validation 155 32485.304 0.005 0.00664 0.152 0.285 0.0747 0.0994 0.397 0.475 0.000887 0.00106 Wall time: 32485.30478351703 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 156 100 0.139 0.00611 0.0172 0.0711 0.0953 0.128 0.16 0.000285 0.000357 156 118 0.149 0.00483 0.0524 0.0644 0.0847 0.233 0.279 0.00052 0.000623 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 156 100 0.137 0.00388 0.0589 0.0591 0.076 0.29 0.296 0.000646 0.000661 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 156 32694.501 0.005 0.00588 0.0368 0.154 0.0702 0.0936 0.186 0.234 0.000416 0.000521 ! Validation 156 32694.501 0.005 0.00639 0.0493 0.177 0.0734 0.0975 0.22 0.271 0.00049 0.000604 Wall time: 32694.501052530017 ! Best model 156 0.177 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 157 100 0.137 0.00584 0.0204 0.0701 0.0932 0.142 0.174 0.000317 0.000389 157 118 0.177 0.00641 0.0484 0.0723 0.0977 0.233 0.268 0.000521 0.000599 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 157 100 0.101 0.00414 0.0182 0.0602 0.0784 0.152 0.165 0.000339 0.000368 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 157 32903.695 0.005 0.00585 0.0707 0.188 0.07 0.0933 0.261 0.325 0.000582 0.000725 ! Validation 157 32903.695 0.005 0.00686 0.0726 0.21 0.0756 0.101 0.263 0.329 0.000586 0.000734 Wall time: 32903.69553873129 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 158 100 0.136 0.00573 0.0217 0.0692 0.0923 0.143 0.179 0.00032 0.000401 158 118 0.217 0.00538 0.109 0.0681 0.0895 0.383 0.403 0.000856 0.0009 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 158 100 0.195 0.00366 0.121 0.0572 0.0738 0.422 0.425 0.000941 0.000948 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 158 33112.894 0.005 0.0058 0.0536 0.17 0.0697 0.0929 0.225 0.281 0.000501 0.000628 ! 
Validation 158 33112.894 0.005 0.00613 0.211 0.334 0.0717 0.0954 0.508 0.56 0.00113 0.00125 Wall time: 33112.89420687128 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 159 100 0.22 0.00594 0.101 0.0706 0.094 0.356 0.388 0.000794 0.000866 159 118 0.123 0.00568 0.0099 0.0687 0.0919 0.0961 0.121 0.000214 0.000271 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 159 100 0.0808 0.00357 0.00951 0.0564 0.0728 0.103 0.119 0.000231 0.000265 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 159 33322.083 0.005 0.00575 0.0655 0.18 0.0693 0.0925 0.251 0.313 0.000561 0.000699 ! Validation 159 33322.083 0.005 0.00608 0.0414 0.163 0.0714 0.0951 0.192 0.248 0.000429 0.000554 Wall time: 33322.08379783109 ! Best model 159 0.163 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 160 100 0.152 0.00585 0.0353 0.0702 0.0933 0.188 0.229 0.000419 0.000511 160 118 0.249 0.00629 0.124 0.0716 0.0967 0.409 0.429 0.000913 0.000957 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 160 100 0.307 0.00393 0.228 0.0594 0.0764 0.58 0.583 0.00129 0.0013 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 160 33531.686 0.005 0.00568 0.0556 0.169 0.0689 0.0919 0.233 0.286 0.000519 0.000639 ! Validation 160 33531.686 0.005 0.0064 0.193 0.321 0.0736 0.0975 0.488 0.536 0.00109 0.0012 Wall time: 33531.68606356718 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 161 100 0.151 0.00566 0.0382 0.0682 0.0918 0.199 0.238 0.000444 0.000532 161 118 0.118 0.00504 0.0171 0.065 0.0866 0.135 0.159 0.000301 0.000356 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 161 100 0.109 0.00348 0.0397 0.0558 0.072 0.237 0.243 0.000529 0.000543 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 161 33740.895 0.005 0.00564 0.0646 0.177 0.0687 0.0916 0.245 0.311 0.000546 0.000693 ! Validation 161 33740.895 0.005 0.00592 0.0479 0.166 0.0704 0.0938 0.214 0.267 0.000478 0.000595 Wall time: 33740.89534373302 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 162 100 0.126 0.00543 0.0172 0.0674 0.0899 0.132 0.16 0.000294 0.000357 162 118 0.133 0.00536 0.0259 0.0676 0.0893 0.174 0.196 0.000388 0.000438 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 162 100 0.223 0.0036 0.151 0.0567 0.0732 0.47 0.474 0.00105 0.00106 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 162 33950.114 0.005 0.00585 0.0768 0.194 0.07 0.0933 0.257 0.339 0.000573 0.000756 ! Validation 162 33950.114 0.005 0.00594 0.127 0.246 0.0706 0.094 0.385 0.435 0.000859 0.000972 Wall time: 33950.114620547276 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 163 100 0.156 0.0054 0.0481 0.0673 0.0896 0.224 0.267 0.0005 0.000597 163 118 0.224 0.00524 0.12 0.0654 0.0883 0.406 0.422 0.000906 0.000941 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 163 100 0.4 0.00349 0.33 0.0556 0.072 0.699 0.7 0.00156 0.00156 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 163 34159.336 0.005 0.0055 0.056 0.166 0.0678 0.0904 0.227 0.287 0.000508 0.000641 ! 
Validation 163 34159.336 0.005 0.00592 0.238 0.356 0.0705 0.0938 0.546 0.594 0.00122 0.00133 Wall time: 34159.33672608528 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 164 100 0.292 0.0072 0.148 0.078 0.103 0.427 0.47 0.000954 0.00105 164 118 0.125 0.00575 0.0102 0.0693 0.0925 0.11 0.123 0.000247 0.000275 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 164 100 0.0967 0.00376 0.0216 0.0581 0.0747 0.171 0.179 0.000382 0.0004 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 164 34368.774 0.005 0.00565 0.0951 0.208 0.0688 0.0917 0.3 0.377 0.000669 0.000842 ! Validation 164 34368.774 0.005 0.00616 0.045 0.168 0.072 0.0957 0.209 0.259 0.000467 0.000577 Wall time: 34368.77398878895 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 165 100 0.131 0.00526 0.026 0.0665 0.0884 0.167 0.197 0.000373 0.000439 165 118 0.152 0.00511 0.0494 0.0651 0.0871 0.244 0.271 0.000544 0.000605 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 165 100 0.215 0.00351 0.144 0.0559 0.0722 0.459 0.463 0.00102 0.00103 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 165 34577.765 0.005 0.00551 0.0581 0.168 0.0679 0.0906 0.24 0.294 0.000535 0.000656 ! Validation 165 34577.765 0.005 0.00585 0.288 0.405 0.07 0.0933 0.602 0.654 0.00134 0.00146 Wall time: 34577.76512708003 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 166 100 0.12 0.0054 0.0117 0.067 0.0896 0.11 0.132 0.000245 0.000295 166 118 0.123 0.00537 0.0159 0.067 0.0894 0.127 0.154 0.000284 0.000343 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 166 100 0.0737 0.00356 0.00252 0.0562 0.0728 0.059 0.0612 0.000132 0.000137 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 166 34786.760 0.005 0.00542 0.056 0.164 0.0672 0.0898 0.233 0.289 0.00052 0.000646 ! Validation 166 34786.760 0.005 0.00595 0.077 0.196 0.0706 0.0941 0.266 0.338 0.000595 0.000756 Wall time: 34786.76030735811 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 167 100 0.143 0.00575 0.0283 0.0697 0.0925 0.163 0.205 0.000364 0.000458 167 118 0.108 0.00507 0.00624 0.0658 0.0868 0.0851 0.0964 0.00019 0.000215 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 167 100 0.0737 0.00339 0.00593 0.0552 0.071 0.0781 0.0939 0.000174 0.00021 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 167 34995.752 0.005 0.00537 0.0641 0.171 0.0669 0.0894 0.228 0.31 0.00051 0.000691 ! Validation 167 34995.752 0.005 0.00569 0.0535 0.167 0.0691 0.092 0.217 0.282 0.000483 0.00063 Wall time: 34995.75202440098 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 168 100 0.162 0.00603 0.0409 0.0714 0.0947 0.203 0.247 0.000453 0.00055 168 118 0.114 0.00524 0.00947 0.0656 0.0883 0.0906 0.119 0.000202 0.000265 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 168 100 0.104 0.00366 0.0306 0.0573 0.0738 0.202 0.213 0.00045 0.000476 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 168 35204.737 0.005 0.00539 0.078 0.186 0.0671 0.0896 0.262 0.342 0.000584 0.000762 ! 
Validation 168 35204.737 0.005 0.00597 0.163 0.283 0.0708 0.0942 0.429 0.493 0.000958 0.0011 Wall time: 35204.737156054005 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 169 100 0.203 0.00568 0.0889 0.0686 0.0919 0.328 0.364 0.000733 0.000812 169 118 0.133 0.00574 0.0184 0.0693 0.0924 0.137 0.166 0.000307 0.000369 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 169 100 0.0742 0.00345 0.00516 0.0553 0.0717 0.0728 0.0876 0.000163 0.000196 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 169 35414.172 0.005 0.00544 0.0661 0.175 0.0674 0.0899 0.258 0.314 0.000577 0.000701 ! Validation 169 35414.172 0.005 0.00581 0.0488 0.165 0.0698 0.093 0.209 0.269 0.000467 0.000601 Wall time: 35414.17287452007 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 170 100 0.151 0.00546 0.042 0.0675 0.0901 0.221 0.25 0.000493 0.000558 170 118 0.144 0.00603 0.0231 0.0706 0.0947 0.15 0.185 0.000336 0.000413 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 170 100 0.078 0.00333 0.0115 0.0545 0.0703 0.116 0.131 0.00026 0.000291 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 170 35623.390 0.005 0.00521 0.0382 0.142 0.0659 0.088 0.19 0.239 0.000424 0.000533 ! Validation 170 35623.390 0.005 0.00559 0.0308 0.143 0.0685 0.0912 0.17 0.214 0.000379 0.000478 Wall time: 35623.390634647105 ! Best model 170 0.143 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 171 100 0.153 0.00538 0.0456 0.067 0.0895 0.215 0.26 0.000481 0.000581 171 118 0.139 0.00616 0.0161 0.0711 0.0957 0.114 0.155 0.000255 0.000346 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 171 100 0.0698 0.00337 0.00231 0.0549 0.0708 0.0562 0.0587 0.000125 0.000131 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 171 35832.567 0.005 0.00547 0.092 0.201 0.0676 0.0902 0.299 0.371 0.000668 0.000828 ! Validation 171 35832.567 0.005 0.00569 0.0558 0.17 0.0689 0.092 0.226 0.288 0.000505 0.000643 Wall time: 35832.56791372597 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 172 100 0.116 0.00502 0.0159 0.0645 0.0864 0.129 0.154 0.000288 0.000344 172 118 0.168 0.00477 0.0724 0.0639 0.0843 0.278 0.328 0.000621 0.000732 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 172 100 0.257 0.00329 0.191 0.0542 0.07 0.53 0.533 0.00118 0.00119 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 172 36044.164 0.005 0.00517 0.0425 0.146 0.0657 0.0877 0.2 0.251 0.000446 0.00056 ! Validation 172 36044.164 0.005 0.00548 0.136 0.245 0.0678 0.0903 0.399 0.449 0.00089 0.001 Wall time: 36044.16456115013 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 173 100 0.133 0.0051 0.0314 0.0654 0.0871 0.186 0.216 0.000416 0.000482 173 118 0.132 0.00614 0.00961 0.0699 0.0956 0.102 0.12 0.000227 0.000267 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 173 100 0.0676 0.00321 0.00348 0.0535 0.0691 0.0512 0.072 0.000114 0.000161 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 173 36253.398 0.005 0.00521 0.0631 0.167 0.0659 0.088 0.242 0.307 0.000541 0.000686 ! 
Validation 173 36253.398 0.005 0.00541 0.0467 0.155 0.0672 0.0897 0.202 0.263 0.000451 0.000588 Wall time: 36253.39814008307 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 174 100 0.223 0.00541 0.114 0.067 0.0897 0.386 0.412 0.000862 0.00092 174 118 0.237 0.00584 0.121 0.07 0.0932 0.377 0.424 0.000842 0.000946 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 174 100 0.166 0.00337 0.0981 0.0547 0.0708 0.377 0.382 0.000841 0.000853 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 174 36462.719 0.005 0.00508 0.0441 0.146 0.065 0.0869 0.204 0.255 0.000455 0.000568 ! Validation 174 36462.719 0.005 0.00553 0.303 0.414 0.0681 0.0907 0.612 0.671 0.00137 0.0015 Wall time: 36462.719591657165 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 175 100 0.113 0.00485 0.0156 0.064 0.0849 0.128 0.152 0.000286 0.00034 175 118 0.115 0.00479 0.019 0.0634 0.0844 0.127 0.168 0.000285 0.000376 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 175 100 0.0704 0.00328 0.00479 0.054 0.0698 0.0732 0.0844 0.000163 0.000188 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 175 36672.124 0.005 0.0052 0.0589 0.163 0.066 0.088 0.239 0.297 0.000534 0.000662 ! Validation 175 36672.124 0.005 0.00538 0.0697 0.177 0.0671 0.0894 0.245 0.322 0.000548 0.000719 Wall time: 36672.12447413523 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 176 100 0.133 0.00526 0.0281 0.0663 0.0885 0.164 0.204 0.000365 0.000456 176 118 0.172 0.00516 0.0687 0.0655 0.0876 0.24 0.32 0.000535 0.000713 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 176 100 0.0708 0.00334 0.00407 0.0549 0.0704 0.0604 0.0778 0.000135 0.000174 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 176 36883.722 0.005 0.005 0.0554 0.155 0.0646 0.0862 0.228 0.287 0.000508 0.00064 ! Validation 176 36883.722 0.005 0.00549 0.11 0.22 0.0681 0.0904 0.317 0.405 0.000708 0.000905 Wall time: 36883.722073753364 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 177 100 0.238 0.00472 0.144 0.0632 0.0838 0.435 0.463 0.000971 0.00103 177 118 0.235 0.00511 0.133 0.0663 0.0872 0.377 0.444 0.000841 0.000991 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 177 100 0.123 0.00385 0.0463 0.0585 0.0757 0.251 0.262 0.000559 0.000586 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 177 37092.907 0.005 0.00499 0.0554 0.155 0.0645 0.0861 0.23 0.286 0.000514 0.000638 ! Validation 177 37092.907 0.005 0.00601 0.136 0.257 0.0711 0.0946 0.387 0.45 0.000864 0.001 Wall time: 37092.907059856225 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 178 100 0.175 0.005 0.0747 0.0646 0.0862 0.299 0.333 0.000666 0.000744 178 118 0.177 0.00552 0.0664 0.067 0.0906 0.273 0.314 0.000609 0.000701 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 178 100 0.0989 0.00371 0.0247 0.0578 0.0743 0.184 0.192 0.00041 0.000428 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 178 37305.742 0.005 0.00505 0.0555 0.156 0.0649 0.0866 0.23 0.287 0.000513 0.000641 ! 
Validation 178 37305.742 0.005 0.00573 0.0849 0.2 0.0697 0.0923 0.294 0.355 0.000657 0.000793 Wall time: 37305.74268612033 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 179 100 0.123 0.00487 0.0255 0.0637 0.0851 0.164 0.195 0.000367 0.000434 179 118 0.0971 0.00458 0.00553 0.062 0.0825 0.0777 0.0907 0.000173 0.000202 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 179 100 0.0756 0.00305 0.0145 0.052 0.0674 0.136 0.147 0.000303 0.000328 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 179 37515.012 0.005 0.00495 0.04 0.139 0.0643 0.0858 0.194 0.245 0.000432 0.000546 ! Validation 179 37515.012 0.005 0.00517 0.0328 0.136 0.0657 0.0877 0.173 0.221 0.000386 0.000493 Wall time: 37515.01268892037 ! Best model 179 0.136 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 180 100 0.117 0.00501 0.0172 0.0644 0.0863 0.124 0.16 0.000277 0.000357 180 118 0.177 0.00538 0.0691 0.0663 0.0895 0.261 0.321 0.000583 0.000716 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 180 100 0.107 0.00327 0.0418 0.0538 0.0697 0.244 0.249 0.000545 0.000557 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 180 37724.218 0.005 0.0049 0.0678 0.166 0.0639 0.0853 0.26 0.318 0.000579 0.000709 ! Validation 180 37724.218 0.005 0.00542 0.0488 0.157 0.0674 0.0897 0.212 0.269 0.000474 0.000601 Wall time: 37724.21892667236 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 181 100 0.11 0.0047 0.016 0.0627 0.0836 0.118 0.154 0.000262 0.000344 181 118 0.119 0.00447 0.0299 0.0612 0.0815 0.185 0.211 0.000412 0.000471 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 181 100 0.074 0.00321 0.00975 0.0535 0.0691 0.105 0.12 0.000235 0.000269 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 181 37933.407 0.005 0.00571 0.0959 0.21 0.0688 0.0922 0.293 0.379 0.000653 0.000845 ! Validation 181 37933.407 0.005 0.00526 0.0895 0.195 0.0664 0.0885 0.283 0.365 0.000632 0.000814 Wall time: 37933.40718326904 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 182 100 0.105 0.00455 0.0142 0.0615 0.0823 0.113 0.145 0.000252 0.000324 182 118 0.118 0.00475 0.0227 0.0627 0.0841 0.16 0.184 0.000357 0.00041 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 182 100 0.158 0.00325 0.0928 0.0538 0.0695 0.366 0.372 0.000816 0.000829 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 182 38142.594 0.005 0.00478 0.0419 0.137 0.0631 0.0843 0.199 0.25 0.000445 0.000558 ! Validation 182 38142.594 0.005 0.00523 0.0598 0.164 0.0662 0.0882 0.247 0.298 0.000551 0.000665 Wall time: 38142.59456725838 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 183 100 0.218 0.00475 0.123 0.0632 0.0841 0.396 0.427 0.000883 0.000953 183 118 0.126 0.00493 0.0271 0.0628 0.0856 0.163 0.201 0.000364 0.000448 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 183 100 0.0687 0.00326 0.00349 0.0539 0.0696 0.0642 0.072 0.000143 0.000161 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 183 38351.897 0.005 0.00482 0.0429 0.139 0.0633 0.0846 0.202 0.253 0.000451 0.000564 ! 
Validation 183 38351.897 0.005 0.00521 0.029 0.133 0.0661 0.0881 0.163 0.208 0.000365 0.000463 Wall time: 38351.89770634798 ! Best model 183 0.133 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 184 100 0.307 0.00496 0.208 0.0643 0.0859 0.537 0.556 0.0012 0.00124 184 118 0.139 0.00504 0.0384 0.0655 0.0866 0.177 0.239 0.000396 0.000533 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 184 100 0.0757 0.00348 0.00609 0.0556 0.0719 0.0804 0.0952 0.000179 0.000212 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 184 38561.100 0.005 0.00483 0.0601 0.157 0.0635 0.0848 0.243 0.299 0.000543 0.000668 ! Validation 184 38561.100 0.005 0.00552 0.0676 0.178 0.0681 0.0906 0.25 0.317 0.000559 0.000708 Wall time: 38561.101003192365 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 185 100 0.177 0.00479 0.0816 0.063 0.0844 0.322 0.348 0.000718 0.000777 185 118 0.104 0.00442 0.0161 0.0608 0.0811 0.129 0.155 0.000288 0.000345 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 185 100 0.197 0.00303 0.136 0.052 0.0672 0.446 0.45 0.000995 0.00101 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 185 38770.286 0.005 0.00469 0.0382 0.132 0.0625 0.0836 0.191 0.239 0.000427 0.000533 ! Validation 185 38770.286 0.005 0.00501 0.207 0.308 0.0648 0.0864 0.511 0.555 0.00114 0.00124 Wall time: 38770.28609237401 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 186 100 0.126 0.00493 0.0279 0.0637 0.0856 0.167 0.204 0.000373 0.000455 186 118 0.0861 0.00398 0.00651 0.0578 0.0769 0.0776 0.0984 0.000173 0.00022 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 186 100 0.0665 0.00296 0.00725 0.0514 0.0664 0.0885 0.104 0.000197 0.000232 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 186 38979.486 0.005 0.00468 0.0521 0.146 0.0624 0.0834 0.224 0.279 0.0005 0.000624 ! Validation 186 38979.486 0.005 0.00492 0.0343 0.133 0.0641 0.0856 0.176 0.226 0.000392 0.000504 Wall time: 38979.48619559733 ! Best model 186 0.133 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 187 100 0.242 0.00472 0.148 0.0629 0.0838 0.449 0.468 0.001 0.00105 187 118 0.111 0.00469 0.0174 0.0624 0.0835 0.124 0.161 0.000277 0.000359 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 187 100 0.0662 0.00312 0.00376 0.0527 0.0681 0.0697 0.0748 0.000156 0.000167 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 187 39188.828 0.005 0.00476 0.0664 0.162 0.063 0.0842 0.251 0.315 0.000559 0.000703 ! Validation 187 39188.828 0.005 0.00509 0.0287 0.13 0.0652 0.087 0.164 0.207 0.000366 0.000461 Wall time: 39188.8281635982 ! Best model 187 0.130 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 188 100 0.0975 0.00416 0.0143 0.0591 0.0787 0.117 0.146 0.000261 0.000325 188 118 0.135 0.00518 0.0312 0.0646 0.0877 0.174 0.215 0.000388 0.000481 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 188 100 0.171 0.00302 0.11 0.0516 0.067 0.402 0.405 0.000897 0.000904 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 188 39398.281 0.005 0.00459 0.0497 0.142 0.0618 0.0826 0.215 0.272 0.000481 0.000608 ! 
Validation 188 39398.281 0.005 0.00504 0.0732 0.174 0.0649 0.0866 0.273 0.33 0.000608 0.000737 Wall time: 39398.28144237632 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 189 100 0.138 0.00445 0.0492 0.061 0.0813 0.23 0.271 0.000514 0.000604 189 118 0.0952 0.00427 0.00977 0.0607 0.0797 0.103 0.121 0.000229 0.000269 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 189 100 0.0828 0.00294 0.0241 0.0512 0.0661 0.179 0.189 0.000399 0.000422 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 189 39607.507 0.005 0.00459 0.0553 0.147 0.0618 0.0826 0.236 0.288 0.000527 0.000642 ! Validation 189 39607.507 0.005 0.0049 0.102 0.2 0.064 0.0854 0.331 0.39 0.000738 0.00087 Wall time: 39607.50740058813 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 190 100 0.101 0.00457 0.00979 0.0615 0.0825 0.0977 0.121 0.000218 0.000269 190 118 0.102 0.00426 0.0168 0.0581 0.0796 0.142 0.158 0.000318 0.000352 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 190 100 0.0652 0.003 0.00532 0.0517 0.0668 0.0664 0.0889 0.000148 0.000198 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 190 39816.696 0.005 0.00453 0.0374 0.128 0.0614 0.0821 0.189 0.236 0.000421 0.000528 ! Validation 190 39816.696 0.005 0.00489 0.0464 0.144 0.0638 0.0852 0.208 0.263 0.000465 0.000586 Wall time: 39816.69655463705 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 191 100 0.104 0.00427 0.0186 0.0597 0.0797 0.138 0.166 0.000307 0.000371 191 118 0.245 0.00424 0.161 0.0601 0.0794 0.476 0.489 0.00106 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 191 100 0.282 0.00294 0.223 0.0512 0.0661 0.572 0.576 0.00128 0.00128 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 191 40025.863 0.005 0.00446 0.0378 0.127 0.061 0.0815 0.186 0.234 0.000416 0.000523 ! Validation 191 40025.863 0.005 0.00481 0.404 0.5 0.0634 0.0846 0.701 0.775 0.00157 0.00173 Wall time: 40025.86379151931 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 192 100 0.12 0.00487 0.0229 0.0634 0.0851 0.148 0.185 0.00033 0.000412 192 118 0.0956 0.00439 0.00785 0.061 0.0808 0.0953 0.108 0.000213 0.000241 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 192 100 0.0701 0.00288 0.0125 0.0508 0.0655 0.119 0.136 0.000265 0.000304 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 192 40235.051 0.005 0.00463 0.0673 0.16 0.0621 0.083 0.246 0.317 0.000548 0.000708 ! Validation 192 40235.051 0.005 0.00483 0.0324 0.129 0.0636 0.0848 0.169 0.219 0.000377 0.00049 Wall time: 40235.05107222311 ! Best model 192 0.129 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 193 100 0.131 0.00471 0.0366 0.0631 0.0837 0.202 0.233 0.000451 0.000521 193 118 0.27 0.00508 0.169 0.0642 0.0869 0.477 0.501 0.00106 0.00112 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 193 100 0.184 0.00294 0.125 0.0514 0.0661 0.424 0.431 0.000946 0.000963 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 193 40444.321 0.005 0.00448 0.0543 0.144 0.0611 0.0816 0.232 0.282 0.000517 0.000629 ! 
Validation 193 40444.321 0.005 0.00489 0.14 0.238 0.064 0.0853 0.404 0.456 0.000902 0.00102 Wall time: 40444.321274698246 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 194 100 0.118 0.00429 0.0319 0.0598 0.0798 0.186 0.218 0.000415 0.000486 194 118 0.101 0.00455 0.00997 0.0611 0.0822 0.0987 0.122 0.00022 0.000272 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 194 100 0.0675 0.00318 0.0039 0.0533 0.0688 0.0639 0.0762 0.000143 0.00017 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 194 40653.492 0.005 0.00449 0.0491 0.139 0.0612 0.0817 0.218 0.271 0.000486 0.000605 ! Validation 194 40653.492 0.005 0.005 0.0504 0.15 0.0647 0.0862 0.21 0.274 0.00047 0.000611 Wall time: 40653.492069988046 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 195 100 0.093 0.00398 0.0134 0.0577 0.0769 0.107 0.141 0.00024 0.000315 195 118 0.211 0.00405 0.13 0.058 0.0776 0.402 0.439 0.000897 0.000981 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 195 100 0.0992 0.00286 0.042 0.0504 0.0653 0.241 0.25 0.000537 0.000558 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 195 40863.251 0.005 0.00431 0.0295 0.116 0.0599 0.0801 0.165 0.207 0.000368 0.000462 ! Validation 195 40863.251 0.005 0.00477 0.125 0.221 0.0631 0.0842 0.374 0.432 0.000835 0.000963 Wall time: 40863.25197339896 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 196 100 0.099 0.00424 0.0142 0.0594 0.0794 0.118 0.145 0.000263 0.000324 196 118 0.147 0.00378 0.0711 0.0568 0.075 0.291 0.325 0.000649 0.000726 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 196 100 0.22 0.00292 0.162 0.051 0.0659 0.487 0.49 0.00109 0.00109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 196 41072.489 0.005 0.00448 0.0538 0.143 0.0611 0.0817 0.22 0.283 0.000492 0.000631 ! Validation 196 41072.489 0.005 0.00471 0.196 0.291 0.0629 0.0837 0.491 0.54 0.0011 0.00121 Wall time: 41072.48987958301 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 197 100 0.282 0.00474 0.187 0.0628 0.084 0.508 0.527 0.00113 0.00118 197 118 0.111 0.0046 0.0191 0.0625 0.0827 0.14 0.168 0.000313 0.000376 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 197 100 0.095 0.00297 0.0355 0.0515 0.0665 0.223 0.23 0.000498 0.000513 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 197 41281.675 0.005 0.00468 0.0747 0.168 0.0625 0.0835 0.269 0.334 0.0006 0.000746 ! Validation 197 41281.675 0.005 0.00492 0.0402 0.139 0.0642 0.0855 0.202 0.245 0.000451 0.000546 Wall time: 41281.67576830508 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 198 100 0.142 0.0046 0.05 0.0615 0.0827 0.246 0.273 0.00055 0.000609 198 118 0.105 0.00429 0.0197 0.0594 0.0798 0.129 0.171 0.000287 0.000382 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 198 100 0.0895 0.00284 0.0327 0.0503 0.065 0.211 0.22 0.000471 0.000492 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 198 41490.935 0.005 0.00429 0.0421 0.128 0.0598 0.0799 0.201 0.251 0.000449 0.00056 ! 
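A note on the energy columns in these tables: e_mae and e_rmse are errors on the total energy of a configuration, while e/N_mae and e/N_rmse are the same errors divided by the number of atoms in the configuration. Dividing the paired values in any summary row gives a constant factor of roughly 448 (for example 0.404 / 0.000902 in the epoch 193 validation summary above), which implies about 448 atoms per configuration in this data set. A minimal arithmetic check, with the two numbers hard-coded from that row:

    # Check that the e/N_* columns are the per-atom versions of the e_* columns.
    # Both values are copied from the "! Validation 193 ..." summary row above.
    e_mae = 0.404              # total-energy MAE for a configuration
    e_per_atom_mae = 0.000902  # per-atom energy MAE
    print(f"implied atoms per configuration: {e_mae / e_per_atom_mae:.0f}")  # roughly 448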
Validation 198 41490.935 0.005 0.00467 0.056 0.149 0.0625 0.0833 0.241 0.289 0.000538 0.000644 Wall time: 41490.93602349423 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 199 100 0.106 0.00442 0.0173 0.0603 0.0811 0.126 0.16 0.000281 0.000358 199 118 0.117 0.00483 0.0202 0.0626 0.0848 0.148 0.173 0.000331 0.000387 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 199 100 0.0717 0.00278 0.0162 0.0498 0.0643 0.138 0.155 0.000308 0.000347 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 199 41700.943 0.005 0.00438 0.0621 0.15 0.0604 0.0807 0.247 0.305 0.000551 0.00068 ! Validation 199 41700.943 0.005 0.00458 0.03 0.122 0.0618 0.0825 0.168 0.211 0.000374 0.000471 Wall time: 41700.94394087838 ! Best model 199 0.122 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 200 100 0.101 0.00381 0.0248 0.0565 0.0752 0.164 0.192 0.000365 0.000428 200 118 0.156 0.00425 0.0708 0.0598 0.0795 0.298 0.324 0.000665 0.000724 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 200 100 0.0898 0.00314 0.0271 0.0527 0.0683 0.189 0.201 0.000423 0.000448 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 200 41909.969 0.005 0.00417 0.0229 0.106 0.0588 0.0787 0.146 0.183 0.000327 0.000409 ! Validation 200 41909.969 0.005 0.005 0.186 0.286 0.0647 0.0862 0.449 0.526 0.001 0.00117 Wall time: 41909.96938798018 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 201 100 0.0974 0.00396 0.0182 0.0576 0.0767 0.132 0.165 0.000296 0.000368 201 118 0.474 0.00505 0.373 0.0656 0.0866 0.734 0.745 0.00164 0.00166 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 201 100 0.232 0.00466 0.139 0.0636 0.0833 0.451 0.455 0.00101 0.00102 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 201 42119.295 0.005 0.0044 0.0647 0.153 0.0605 0.0808 0.226 0.305 0.000505 0.000681 ! Validation 201 42119.295 0.005 0.00649 0.186 0.316 0.0737 0.0982 0.456 0.526 0.00102 0.00117 Wall time: 42119.29579699924 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 202 100 0.101 0.00438 0.0138 0.0602 0.0807 0.114 0.143 0.000255 0.00032 202 118 0.0962 0.00432 0.00974 0.0608 0.0802 0.0875 0.12 0.000195 0.000269 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 202 100 0.0753 0.00275 0.0202 0.0499 0.064 0.162 0.173 0.000361 0.000387 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 202 42328.412 0.005 0.00437 0.0286 0.116 0.0603 0.0806 0.163 0.207 0.000363 0.000461 ! Validation 202 42328.412 0.005 0.00458 0.108 0.2 0.0621 0.0825 0.323 0.401 0.000721 0.000896 Wall time: 42328.41236513015 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 203 100 0.141 0.00415 0.0581 0.0589 0.0785 0.264 0.294 0.00059 0.000656 203 118 0.12 0.00471 0.0253 0.0617 0.0837 0.151 0.194 0.000336 0.000433 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 203 100 0.124 0.00301 0.0638 0.0516 0.0669 0.298 0.308 0.000664 0.000688 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 203 42537.809 0.005 0.00469 0.0824 0.176 0.0624 0.0835 0.275 0.351 0.000614 0.000783 ! 
Validation 203 42537.809 0.005 0.00484 0.0791 0.176 0.0636 0.0848 0.297 0.343 0.000664 0.000766 Wall time: 42537.80983998394 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 204 100 0.111 0.00414 0.0279 0.0585 0.0785 0.17 0.204 0.000379 0.000455 204 118 0.105 0.00441 0.0169 0.0603 0.081 0.119 0.159 0.000265 0.000354 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 204 100 0.0581 0.0028 0.00214 0.05 0.0645 0.046 0.0564 0.000103 0.000126 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 204 42746.875 0.005 0.00417 0.0365 0.12 0.0589 0.0788 0.191 0.234 0.000426 0.000521 ! Validation 204 42746.875 0.005 0.00459 0.0307 0.122 0.062 0.0826 0.164 0.214 0.000365 0.000477 Wall time: 42746.87536315899 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 205 100 0.136 0.00415 0.053 0.0583 0.0786 0.258 0.281 0.000575 0.000626 205 118 0.0957 0.00431 0.00941 0.0592 0.0801 0.0972 0.118 0.000217 0.000264 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 205 100 0.0722 0.0027 0.0182 0.0491 0.0633 0.145 0.165 0.000324 0.000367 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 205 42955.924 0.005 0.00403 0.0268 0.107 0.0579 0.0774 0.16 0.2 0.000357 0.000447 ! Validation 205 42955.924 0.005 0.00442 0.0586 0.147 0.0607 0.0811 0.242 0.295 0.00054 0.000659 Wall time: 42955.92409916222 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 206 100 0.105 0.00393 0.0265 0.0571 0.0764 0.163 0.198 0.000364 0.000443 206 118 0.205 0.00476 0.11 0.0624 0.0842 0.379 0.404 0.000845 0.000901 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 206 100 0.336 0.00308 0.274 0.0524 0.0677 0.634 0.639 0.00142 0.00143 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 206 43165.054 0.005 0.00406 0.0377 0.119 0.0581 0.0777 0.191 0.235 0.000426 0.000525 ! Validation 206 43165.054 0.005 0.00489 0.494 0.591 0.0643 0.0853 0.827 0.857 0.00185 0.00191 Wall time: 43165.054847069085 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 207 100 0.175 0.00375 0.0998 0.0561 0.0747 0.368 0.385 0.000821 0.00086 207 118 0.129 0.005 0.0293 0.0634 0.0862 0.168 0.209 0.000376 0.000466 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 207 100 0.0662 0.00266 0.013 0.0487 0.0629 0.124 0.139 0.000277 0.00031 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 207 43374.194 0.005 0.00416 0.0469 0.13 0.0588 0.0786 0.202 0.264 0.00045 0.00059 ! Validation 207 43374.194 0.005 0.00441 0.117 0.205 0.0607 0.081 0.338 0.417 0.000753 0.00093 Wall time: 43374.19416052895 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 208 100 0.1 0.00416 0.017 0.0586 0.0786 0.126 0.159 0.000282 0.000355 208 118 0.146 0.00379 0.0701 0.057 0.075 0.273 0.323 0.000609 0.000721 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 208 100 0.0729 0.00291 0.0147 0.0509 0.0658 0.128 0.148 0.000286 0.00033 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 208 43583.246 0.005 0.00409 0.0459 0.128 0.0584 0.078 0.209 0.261 0.000466 0.000582 ! 
Validation 208 43583.246 0.005 0.00467 0.102 0.195 0.0625 0.0833 0.328 0.389 0.000732 0.000869 Wall time: 43583.24626151007 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 209 100 0.149 0.00448 0.0595 0.0608 0.0817 0.261 0.298 0.000583 0.000664 209 118 0.103 0.00423 0.0188 0.0591 0.0793 0.128 0.167 0.000285 0.000373 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 209 100 0.11 0.00298 0.0502 0.0516 0.0666 0.261 0.273 0.000582 0.00061 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 209 43792.297 0.005 0.00406 0.0519 0.133 0.0581 0.0777 0.219 0.279 0.000489 0.000622 ! Validation 209 43792.297 0.005 0.00466 0.0541 0.147 0.0626 0.0833 0.237 0.284 0.000529 0.000633 Wall time: 43792.29738582298 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 210 100 0.0907 0.0037 0.0166 0.0553 0.0742 0.123 0.157 0.000274 0.000351 210 118 0.234 0.00365 0.161 0.0559 0.0737 0.441 0.489 0.000984 0.00109 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 210 100 0.0757 0.00293 0.017 0.051 0.066 0.138 0.159 0.000308 0.000355 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 210 44001.355 0.005 0.004 0.0397 0.12 0.0577 0.0771 0.187 0.24 0.000417 0.000536 ! Validation 210 44001.355 0.005 0.00465 0.07 0.163 0.0626 0.0832 0.257 0.323 0.000573 0.00072 Wall time: 44001.355525727384 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 211 100 0.0876 0.0039 0.00962 0.057 0.0762 0.0947 0.12 0.000211 0.000267 211 118 0.0773 0.00311 0.015 0.0519 0.068 0.132 0.149 0.000296 0.000334 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 211 100 0.0686 0.00265 0.0155 0.0487 0.0628 0.134 0.152 0.000298 0.000339 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 211 44210.408 0.005 0.00415 0.048 0.131 0.0589 0.0787 0.206 0.268 0.000461 0.000597 ! Validation 211 44210.408 0.005 0.00438 0.124 0.212 0.0605 0.0807 0.349 0.43 0.000779 0.000959 Wall time: 44210.40869040834 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 212 100 0.168 0.00406 0.0863 0.0582 0.0777 0.33 0.358 0.000738 0.0008 212 118 0.0833 0.00386 0.00616 0.0569 0.0757 0.0818 0.0957 0.000183 0.000214 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 212 100 0.105 0.0026 0.0531 0.0481 0.0622 0.271 0.281 0.000606 0.000627 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 212 44419.694 0.005 0.00418 0.0757 0.159 0.059 0.0789 0.267 0.336 0.000595 0.000751 ! Validation 212 44419.694 0.005 0.00432 0.044 0.13 0.0599 0.0801 0.21 0.256 0.000468 0.000571 Wall time: 44419.69448027527 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 213 100 0.0886 0.00388 0.011 0.0568 0.076 0.103 0.128 0.000229 0.000286 213 118 0.164 0.00487 0.0668 0.0624 0.0851 0.302 0.315 0.000674 0.000703 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 213 100 0.119 0.0026 0.0667 0.0481 0.0622 0.307 0.315 0.000686 0.000703 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 213 44629.070 0.005 0.00391 0.0293 0.107 0.0569 0.0762 0.168 0.208 0.000375 0.000464 ! 
Validation 213 44629.070 0.005 0.00425 0.191 0.276 0.0596 0.0795 0.47 0.533 0.00105 0.00119 Wall time: 44629.07064421894 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 214 100 0.124 0.00408 0.042 0.0583 0.0779 0.227 0.25 0.000506 0.000558 214 118 0.125 0.00475 0.03 0.0612 0.084 0.18 0.211 0.000401 0.000472 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 214 100 0.0712 0.00269 0.0173 0.049 0.0633 0.145 0.161 0.000325 0.000358 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 214 44838.245 0.005 0.00401 0.0435 0.124 0.0578 0.0771 0.206 0.255 0.00046 0.000568 ! Validation 214 44838.245 0.005 0.00439 0.102 0.19 0.0605 0.0808 0.327 0.389 0.00073 0.000868 Wall time: 44838.24532514298 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 215 100 0.0918 0.00391 0.0137 0.0571 0.0762 0.121 0.143 0.00027 0.000319 215 118 0.0825 0.00347 0.0131 0.0541 0.0719 0.0989 0.139 0.000221 0.000311 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 215 100 0.0774 0.00256 0.0262 0.0477 0.0617 0.186 0.197 0.000414 0.00044 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 215 45047.419 0.005 0.0039 0.0403 0.118 0.057 0.0762 0.203 0.245 0.000453 0.000548 ! Validation 215 45047.419 0.005 0.00421 0.113 0.197 0.0592 0.0792 0.344 0.41 0.000769 0.000914 Wall time: 45047.4198545753 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 216 100 0.0807 0.00365 0.00766 0.0552 0.0737 0.0821 0.107 0.000183 0.000238 216 118 0.247 0.00344 0.178 0.0543 0.0716 0.498 0.515 0.00111 0.00115 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 216 100 0.269 0.00258 0.218 0.048 0.0619 0.566 0.569 0.00126 0.00127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 216 45256.589 0.005 0.00385 0.0302 0.107 0.0566 0.0757 0.165 0.208 0.000369 0.000465 ! Validation 216 45256.589 0.005 0.00429 0.225 0.311 0.0601 0.0799 0.547 0.579 0.00122 0.00129 Wall time: 45256.58977065934 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 217 100 0.103 0.00358 0.0313 0.0546 0.073 0.172 0.216 0.000384 0.000481 217 118 0.104 0.00338 0.0367 0.0535 0.0708 0.175 0.234 0.00039 0.000521 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 217 100 0.0739 0.00253 0.0233 0.0475 0.0613 0.17 0.186 0.00038 0.000416 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 217 45465.851 0.005 0.00393 0.0494 0.128 0.0573 0.0765 0.221 0.271 0.000493 0.000606 ! Validation 217 45465.851 0.005 0.00419 0.0341 0.118 0.0591 0.0789 0.176 0.225 0.000394 0.000502 Wall time: 45465.851101832 ! Best model 217 0.118 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 218 100 0.0874 0.00358 0.0158 0.055 0.073 0.13 0.153 0.000291 0.000342 218 118 0.203 0.00364 0.13 0.0552 0.0736 0.404 0.44 0.000901 0.000983 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 218 100 0.0875 0.0026 0.0354 0.0485 0.0622 0.219 0.23 0.000489 0.000512 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 218 45680.188 0.005 0.00386 0.0422 0.119 0.0567 0.0758 0.19 0.249 0.000423 0.000555 ! 
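The "Wall time:" values printed after each epoch are cumulative seconds for the run so far, not per-epoch durations; consecutive differences in this stretch are about 209 s per epoch (for example 44629.07 - 44419.69 around epoch 213). A small sketch for recovering per-epoch durations from a list of these cumulative values; the two entries are copied from the epoch 212 and 213 summaries above, and in practice the full list would come from parsing the log:

    # Per-epoch duration from the cumulative "Wall time:" values.
    wall_times = [44419.69448027527, 44629.07064421894]
    durations = [later - earlier for earlier, later in zip(wall_times, wall_times[1:])]
    print(durations)  # about 209 seconds per epoch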
Validation 218 45680.188 0.005 0.00423 0.0837 0.168 0.0595 0.0793 0.291 0.353 0.000649 0.000788 Wall time: 45680.18808545731 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 219 100 0.124 0.00368 0.0501 0.0553 0.074 0.24 0.273 0.000536 0.000609 219 118 0.239 0.00448 0.149 0.0599 0.0816 0.422 0.471 0.000941 0.00105 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 219 100 0.0638 0.00284 0.00698 0.0502 0.065 0.0807 0.102 0.00018 0.000227 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 219 45889.432 0.005 0.00382 0.0296 0.106 0.0563 0.0753 0.162 0.207 0.000362 0.000462 ! Validation 219 45889.432 0.005 0.00445 0.214 0.303 0.061 0.0813 0.434 0.564 0.00097 0.00126 Wall time: 45889.43273709016 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 220 100 0.0917 0.00401 0.0115 0.057 0.0772 0.102 0.131 0.000229 0.000292 220 118 0.0914 0.00354 0.0206 0.0548 0.0726 0.153 0.175 0.000341 0.00039 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 220 100 0.0519 0.00241 0.00364 0.0465 0.0599 0.0628 0.0735 0.00014 0.000164 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 220 46098.789 0.005 0.00394 0.0573 0.136 0.0574 0.0766 0.215 0.293 0.000479 0.000653 ! Validation 220 46098.789 0.005 0.00405 0.0556 0.137 0.0582 0.0776 0.22 0.288 0.000491 0.000642 Wall time: 46098.78963109618 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 221 100 0.126 0.00375 0.0513 0.0555 0.0747 0.246 0.276 0.000549 0.000616 221 118 0.132 0.00464 0.0396 0.0612 0.083 0.225 0.243 0.000502 0.000541 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 221 100 0.0952 0.00252 0.0449 0.0473 0.0612 0.251 0.258 0.000561 0.000577 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 221 46308.083 0.005 0.00373 0.0373 0.112 0.0556 0.0744 0.187 0.236 0.000418 0.000526 ! Validation 221 46308.083 0.005 0.00414 0.0449 0.128 0.0588 0.0784 0.208 0.258 0.000465 0.000577 Wall time: 46308.08317198604 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 222 100 0.128 0.00357 0.0564 0.0541 0.0729 0.264 0.29 0.000588 0.000647 222 118 0.0775 0.00306 0.0164 0.0514 0.0674 0.139 0.156 0.000311 0.000348 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 222 100 0.051 0.00239 0.0032 0.0461 0.0596 0.0639 0.069 0.000143 0.000154 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 222 46517.291 0.005 0.00393 0.0486 0.127 0.0573 0.0765 0.209 0.27 0.000467 0.000602 ! Validation 222 46517.291 0.005 0.004 0.0421 0.122 0.0577 0.0771 0.191 0.25 0.000427 0.000559 Wall time: 46517.29166283738 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 223 100 0.12 0.00415 0.0368 0.0585 0.0785 0.196 0.234 0.000438 0.000522 223 118 0.111 0.00387 0.0333 0.0566 0.0759 0.185 0.222 0.000412 0.000496 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 223 100 0.0608 0.00284 0.00402 0.0498 0.065 0.0691 0.0773 0.000154 0.000173 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 223 46726.839 0.005 0.00391 0.0431 0.121 0.057 0.0762 0.19 0.253 0.000425 0.000565 ! 
Validation 223 46726.839 0.005 0.0044 0.0273 0.115 0.0606 0.0809 0.159 0.201 0.000354 0.000449 Wall time: 46726.83975395793 ! Best model 223 0.115 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 224 100 0.0811 0.00355 0.00998 0.0545 0.0727 0.0936 0.122 0.000209 0.000272 224 118 0.0746 0.00349 0.00489 0.0544 0.072 0.0742 0.0853 0.000166 0.00019 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 224 100 0.0569 0.00242 0.00859 0.0462 0.0599 0.0922 0.113 0.000206 0.000252 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 224 46936.102 0.005 0.00368 0.025 0.0986 0.0553 0.074 0.153 0.193 0.000341 0.000431 ! Validation 224 46936.102 0.005 0.00397 0.0381 0.118 0.0575 0.0769 0.183 0.238 0.000409 0.000531 Wall time: 46936.10280657699 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 225 100 0.0873 0.00372 0.0129 0.0559 0.0744 0.118 0.138 0.000262 0.000309 225 118 0.0807 0.00354 0.00995 0.0544 0.0725 0.0984 0.122 0.00022 0.000271 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 225 100 0.0505 0.00236 0.00338 0.0458 0.0592 0.0667 0.0709 0.000149 0.000158 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 225 47145.344 0.005 0.00366 0.0362 0.109 0.0551 0.0737 0.189 0.233 0.000423 0.000519 ! Validation 225 47145.344 0.005 0.00389 0.0385 0.116 0.0569 0.0761 0.182 0.239 0.000407 0.000534 Wall time: 47145.34472690709 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 226 100 0.087 0.0038 0.011 0.0563 0.0752 0.109 0.128 0.000244 0.000285 226 118 0.209 0.00374 0.134 0.0565 0.0746 0.423 0.446 0.000943 0.000997 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 226 100 0.211 0.00253 0.16 0.0474 0.0613 0.483 0.489 0.00108 0.00109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 226 47354.661 0.005 0.00364 0.0386 0.111 0.055 0.0735 0.19 0.238 0.000423 0.00053 ! Validation 226 47354.661 0.005 0.00408 0.205 0.287 0.0583 0.0779 0.511 0.552 0.00114 0.00123 Wall time: 47354.6610325193 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 227 100 0.0924 0.00345 0.0235 0.0538 0.0716 0.15 0.187 0.000336 0.000417 227 118 0.128 0.00321 0.0638 0.0525 0.0691 0.286 0.308 0.000638 0.000688 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 227 100 0.0647 0.00268 0.0111 0.0489 0.0631 0.108 0.129 0.000241 0.000287 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 227 47563.906 0.005 0.00378 0.042 0.118 0.0562 0.075 0.201 0.25 0.000448 0.000557 ! Validation 227 47563.906 0.005 0.0042 0.0441 0.128 0.0594 0.079 0.202 0.256 0.00045 0.000572 Wall time: 47563.90682039829 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 228 100 0.0927 0.00385 0.0157 0.0567 0.0757 0.125 0.153 0.000279 0.000341 228 118 0.0865 0.00373 0.012 0.0554 0.0744 0.121 0.133 0.00027 0.000298 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 228 100 0.0635 0.00248 0.0138 0.0467 0.0608 0.129 0.143 0.000288 0.00032 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 228 47773.116 0.005 0.00368 0.042 0.116 0.0554 0.074 0.202 0.251 0.000451 0.000559 ! 
Validation 228 47773.116 0.005 0.00399 0.037 0.117 0.0575 0.077 0.19 0.235 0.000424 0.000524 Wall time: 47773.11637092801 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 229 100 0.0769 0.00325 0.0119 0.0523 0.0696 0.106 0.133 0.000237 0.000297 229 118 0.0841 0.00385 0.00713 0.0564 0.0757 0.0883 0.103 0.000197 0.00023 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 229 100 0.0605 0.00227 0.0151 0.0449 0.0581 0.138 0.15 0.000308 0.000335 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 229 47983.133 0.005 0.00356 0.0299 0.101 0.0543 0.0727 0.169 0.212 0.000377 0.000472 ! Validation 229 47983.133 0.005 0.00382 0.0534 0.13 0.0563 0.0753 0.229 0.282 0.000512 0.000629 Wall time: 47983.133165777195 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 230 100 0.0961 0.00364 0.0233 0.0548 0.0735 0.162 0.186 0.000362 0.000416 230 118 0.147 0.00399 0.0667 0.0567 0.0771 0.282 0.315 0.00063 0.000703 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 230 100 0.0713 0.00249 0.0215 0.047 0.0608 0.16 0.179 0.000358 0.000399 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 230 48192.335 0.005 0.00365 0.0509 0.124 0.0552 0.0737 0.222 0.275 0.000495 0.000614 ! Validation 230 48192.335 0.005 0.004 0.0484 0.128 0.0577 0.0771 0.213 0.268 0.000476 0.000599 Wall time: 48192.33549796324 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 231 100 0.165 0.0036 0.0936 0.0549 0.0731 0.349 0.373 0.00078 0.000833 231 118 0.0734 0.00311 0.0112 0.0514 0.068 0.101 0.129 0.000226 0.000288 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 231 100 0.109 0.00225 0.0643 0.0447 0.0579 0.303 0.309 0.000675 0.00069 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 231 48401.619 0.005 0.00366 0.0408 0.114 0.0552 0.0738 0.203 0.247 0.000453 0.000551 ! Validation 231 48401.619 0.005 0.00381 0.128 0.204 0.0563 0.0753 0.392 0.437 0.000876 0.000974 Wall time: 48401.61918901838 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 232 100 0.111 0.00379 0.0357 0.0567 0.075 0.196 0.23 0.000438 0.000514 232 118 0.0795 0.00378 0.00387 0.0563 0.075 0.0636 0.0758 0.000142 0.000169 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 232 100 0.0509 0.0023 0.00483 0.0451 0.0585 0.0663 0.0848 0.000148 0.000189 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 232 48610.816 0.005 0.00369 0.0391 0.113 0.0555 0.0741 0.19 0.242 0.000423 0.00054 ! Validation 232 48610.816 0.005 0.00382 0.0506 0.127 0.0564 0.0754 0.219 0.274 0.000489 0.000612 Wall time: 48610.81673927838 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 233 100 0.0862 0.00363 0.0135 0.0551 0.0735 0.11 0.142 0.000245 0.000317 233 118 0.0904 0.0035 0.0205 0.0534 0.0721 0.144 0.175 0.000321 0.00039 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 233 100 0.0573 0.0023 0.0113 0.0453 0.0585 0.11 0.13 0.000246 0.00029 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 233 48820.022 0.005 0.00353 0.0368 0.107 0.0542 0.0725 0.181 0.234 0.000403 0.000523 ! 
Validation 233 48820.022 0.005 0.00383 0.0304 0.107 0.0566 0.0755 0.167 0.213 0.000373 0.000475 Wall time: 48820.02205134416 ! Best model 233 0.107 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 234 100 0.126 0.00391 0.0482 0.0574 0.0763 0.238 0.268 0.00053 0.000598 234 118 0.0721 0.00341 0.00393 0.053 0.0712 0.061 0.0765 0.000136 0.000171 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 234 100 0.0693 0.00232 0.0229 0.0456 0.0587 0.178 0.185 0.000398 0.000412 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 234 49029.242 0.005 0.00351 0.0336 0.104 0.054 0.0723 0.183 0.224 0.000409 0.000501 ! Validation 234 49029.242 0.005 0.00385 0.11 0.187 0.0568 0.0756 0.344 0.405 0.000769 0.000904 Wall time: 49029.24294852 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 235 100 0.0872 0.00357 0.0159 0.054 0.0728 0.126 0.154 0.000282 0.000343 235 118 0.0776 0.00339 0.00991 0.0529 0.071 0.102 0.121 0.000227 0.000271 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 235 100 0.0606 0.00242 0.0122 0.0464 0.06 0.116 0.135 0.000259 0.000301 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 235 49240.169 0.005 0.00362 0.0401 0.113 0.0549 0.0734 0.193 0.245 0.000431 0.000546 ! Validation 235 49240.169 0.005 0.00388 0.0323 0.11 0.0569 0.076 0.171 0.219 0.000382 0.000489 Wall time: 49240.16991235223 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 236 100 0.0797 0.00331 0.0135 0.0526 0.0702 0.12 0.142 0.000268 0.000317 236 118 0.0957 0.00331 0.0296 0.0529 0.0701 0.162 0.21 0.000363 0.000468 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 236 100 0.0503 0.00237 0.00295 0.0459 0.0593 0.0617 0.0662 0.000138 0.000148 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 236 49449.506 0.005 0.0034 0.0338 0.102 0.0532 0.0711 0.184 0.224 0.000412 0.000501 ! Validation 236 49449.506 0.005 0.00384 0.0419 0.119 0.0565 0.0755 0.189 0.25 0.000422 0.000557 Wall time: 49449.5061342041 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 237 100 0.12 0.00381 0.0432 0.0572 0.0753 0.214 0.254 0.000479 0.000566 237 118 0.0838 0.00383 0.00715 0.0556 0.0755 0.0805 0.103 0.00018 0.00023 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 237 100 0.0575 0.00236 0.0103 0.0459 0.0593 0.112 0.123 0.000249 0.000276 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 237 49658.758 0.005 0.00362 0.0359 0.108 0.0548 0.0734 0.175 0.232 0.000392 0.000517 ! Validation 237 49658.758 0.005 0.00388 0.0401 0.118 0.0571 0.076 0.187 0.244 0.000418 0.000545 Wall time: 49658.758847601246 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 238 100 0.0984 0.00361 0.0261 0.0551 0.0733 0.165 0.197 0.000369 0.00044 238 118 0.136 0.00397 0.0561 0.0573 0.0768 0.249 0.289 0.000555 0.000645 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 238 100 0.0509 0.0023 0.00498 0.0452 0.0585 0.0721 0.0861 0.000161 0.000192 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 238 49867.992 0.005 0.00344 0.0328 0.101 0.0534 0.0715 0.176 0.22 0.000394 0.000491 ! 
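The "! Best model <epoch> <loss>" lines record each epoch at which the validation loss improved on the previous best (0.118 at epoch 217, 0.115 at 223, 0.107 at 233 above), i.e. the points where a new checkpoint is kept. The trainer's own saving code is not shown in this log; the following is only a minimal sketch of that keep-the-best pattern, with save_checkpoint as a hypothetical callback:

    import math

    def track_best(epoch_val_losses, save_checkpoint):
        # epoch_val_losses: (epoch, validation loss) pairs in epoch order.
        # save_checkpoint:  hypothetical callback that persists the current model.
        best = math.inf
        for epoch, val_loss in epoch_val_losses:
            if val_loss < best:   # strictly better than every previous epoch
                best = val_loss
                save_checkpoint(epoch)
                print(f"! Best model {epoch} {val_loss:.3f}")  # same format as the log lines

    # With the validation losses of epochs 217, 222 and 223 from the summaries above,
    # only 217 and 223 trigger a save, matching the markers in the log:
    track_best([(217, 0.118), (222, 0.122), (223, 0.115)], save_checkpoint=lambda epoch: None)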
Validation 238 49867.992 0.005 0.00372 0.111 0.186 0.0558 0.0744 0.315 0.407 0.000704 0.000908 Wall time: 49867.99277734896 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 239 100 0.089 0.00351 0.0188 0.0538 0.0722 0.136 0.167 0.000303 0.000373 239 118 0.118 0.0035 0.0482 0.0548 0.0721 0.256 0.268 0.000572 0.000598 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 239 100 0.0664 0.00255 0.0153 0.0479 0.0616 0.13 0.151 0.00029 0.000336 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 239 50077.228 0.005 0.0036 0.0437 0.116 0.0548 0.0732 0.193 0.255 0.000431 0.000569 ! Validation 239 50077.228 0.005 0.00398 0.0298 0.109 0.0579 0.0769 0.166 0.211 0.000371 0.00047 Wall time: 50077.22887761425 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 240 100 0.0998 0.00333 0.0332 0.0527 0.0704 0.194 0.222 0.000434 0.000496 240 118 0.113 0.00328 0.0472 0.0524 0.0698 0.259 0.265 0.000578 0.000591 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 240 100 0.0621 0.00285 0.0052 0.0501 0.0651 0.0762 0.0879 0.00017 0.000196 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 240 50286.652 0.005 0.00342 0.0438 0.112 0.0534 0.0713 0.205 0.255 0.000458 0.00057 ! Validation 240 50286.652 0.005 0.0043 0.0822 0.168 0.0601 0.08 0.268 0.35 0.000598 0.00078 Wall time: 50286.65246950416 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 241 100 0.108 0.00342 0.0397 0.0533 0.0713 0.221 0.243 0.000493 0.000542 241 118 0.106 0.00353 0.0354 0.0543 0.0725 0.19 0.229 0.000424 0.000512 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 241 100 0.0488 0.00223 0.00422 0.0445 0.0576 0.07 0.0792 0.000156 0.000177 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 241 50496.234 0.005 0.00338 0.0271 0.0947 0.053 0.0709 0.159 0.201 0.000355 0.000448 ! Validation 241 50496.234 0.005 0.00368 0.0539 0.128 0.0554 0.074 0.211 0.283 0.000472 0.000632 Wall time: 50496.23465952603 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 242 100 0.13 0.00451 0.04 0.0618 0.0819 0.214 0.244 0.000478 0.000545 242 118 0.0725 0.00329 0.00665 0.0533 0.07 0.0837 0.0995 0.000187 0.000222 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 242 100 0.0607 0.0023 0.0148 0.045 0.0584 0.129 0.148 0.000289 0.000331 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 242 50706.617 0.005 0.0036 0.0437 0.116 0.0547 0.0732 0.2 0.256 0.000447 0.000571 ! Validation 242 50706.617 0.005 0.0037 0.0399 0.114 0.0555 0.0742 0.187 0.244 0.000418 0.000544 Wall time: 50706.61742201634 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 243 100 0.0822 0.0035 0.0122 0.0539 0.0722 0.111 0.134 0.000248 0.0003 243 118 0.0874 0.00352 0.017 0.0533 0.0723 0.13 0.159 0.000289 0.000355 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 243 100 0.125 0.00224 0.0798 0.0449 0.0578 0.341 0.344 0.000761 0.000769 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 243 50915.681 0.005 0.00336 0.0371 0.104 0.0529 0.0707 0.191 0.235 0.000427 0.000526 ! 
Validation 243 50915.681 0.005 0.00372 0.135 0.21 0.0558 0.0744 0.399 0.449 0.000892 0.001 Wall time: 50915.68147787312 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 244 100 0.0705 0.00312 0.00816 0.051 0.0681 0.0907 0.11 0.000202 0.000246 244 118 0.088 0.00369 0.0141 0.0557 0.0741 0.119 0.145 0.000265 0.000323 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 244 100 0.0813 0.00217 0.0379 0.0438 0.0568 0.231 0.238 0.000517 0.00053 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 244 51124.719 0.005 0.00327 0.0296 0.095 0.0521 0.0697 0.166 0.21 0.00037 0.000469 ! Validation 244 51124.719 0.005 0.00364 0.117 0.19 0.0551 0.0736 0.366 0.417 0.000817 0.000931 Wall time: 51124.719763243105 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 245 100 0.0833 0.00329 0.0175 0.0524 0.07 0.136 0.161 0.000304 0.00036 245 118 0.0731 0.00306 0.0118 0.0502 0.0675 0.106 0.132 0.000236 0.000296 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 245 100 0.0495 0.00212 0.00703 0.0434 0.0562 0.0867 0.102 0.000194 0.000228 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 245 51333.835 0.005 0.00328 0.0283 0.0939 0.0522 0.0698 0.169 0.206 0.000377 0.000459 ! Validation 245 51333.835 0.005 0.0035 0.0385 0.109 0.054 0.0722 0.181 0.239 0.000404 0.000534 Wall time: 51333.83551899204 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 246 100 0.0713 0.00324 0.00654 0.0515 0.0694 0.0795 0.0986 0.000178 0.00022 246 118 0.0922 0.0036 0.0202 0.0547 0.0731 0.133 0.173 0.000297 0.000387 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 246 100 0.0904 0.00211 0.0482 0.0433 0.056 0.26 0.268 0.000579 0.000598 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 246 51543.987 0.005 0.00342 0.0455 0.114 0.0533 0.0713 0.194 0.261 0.000434 0.000582 ! Validation 246 51543.987 0.005 0.00352 0.139 0.209 0.054 0.0723 0.388 0.455 0.000866 0.00101 Wall time: 51543.9871246242 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 247 100 0.0787 0.00339 0.0109 0.0531 0.071 0.105 0.127 0.000233 0.000284 247 118 0.105 0.00279 0.0489 0.0488 0.0644 0.258 0.27 0.000575 0.000602 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 247 100 0.0833 0.0022 0.0393 0.0443 0.0572 0.23 0.242 0.000514 0.000539 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 247 51753.040 0.005 0.00324 0.0301 0.0949 0.0519 0.0694 0.171 0.211 0.000382 0.000471 ! Validation 247 51753.040 0.005 0.00354 0.114 0.185 0.0543 0.0726 0.353 0.411 0.000788 0.000918 Wall time: 51753.040306219365 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 248 100 0.0708 0.00302 0.0103 0.0503 0.0671 0.0921 0.124 0.000206 0.000276 248 118 0.0761 0.00272 0.0218 0.0479 0.0636 0.159 0.18 0.000354 0.000402 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 248 100 0.0861 0.00205 0.0451 0.0426 0.0552 0.252 0.259 0.000563 0.000578 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 248 51962.103 0.005 0.00344 0.0418 0.111 0.0536 0.0716 0.201 0.25 0.00045 0.000558 ! 
Validation 248 51962.103 0.005 0.00349 0.0855 0.155 0.0538 0.072 0.309 0.357 0.00069 0.000796 Wall time: 51962.10367993731 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 249 100 0.187 0.00494 0.0884 0.064 0.0858 0.336 0.363 0.000749 0.00081 249 118 0.146 0.00385 0.0692 0.0572 0.0757 0.303 0.321 0.000676 0.000716 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 249 100 0.0945 0.00228 0.0489 0.0451 0.0582 0.263 0.27 0.000587 0.000602 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 249 52171.167 0.005 0.00355 0.0571 0.128 0.0543 0.0726 0.22 0.291 0.00049 0.00065 ! Validation 249 52171.167 0.005 0.00369 0.134 0.208 0.0555 0.074 0.396 0.447 0.000883 0.000997 Wall time: 52171.167588144075 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 250 100 0.0764 0.00328 0.0108 0.0522 0.0699 0.101 0.127 0.000226 0.000283 250 118 0.0642 0.00253 0.0136 0.0467 0.0613 0.116 0.142 0.000258 0.000317 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 250 100 0.0439 0.002 0.00385 0.0422 0.0546 0.0653 0.0757 0.000146 0.000169 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 250 52380.309 0.005 0.00319 0.0198 0.0835 0.0515 0.0689 0.136 0.172 0.000304 0.000383 ! Validation 250 52380.309 0.005 0.00341 0.0304 0.0985 0.0532 0.0712 0.164 0.213 0.000365 0.000474 Wall time: 52380.30959534412 ! Best model 250 0.099 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 251 100 0.079 0.00313 0.0164 0.0507 0.0682 0.133 0.156 0.000297 0.000348 251 118 0.0667 0.00313 0.00408 0.0512 0.0682 0.0715 0.0779 0.00016 0.000174 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 251 100 0.0461 0.00203 0.0056 0.0424 0.0549 0.0788 0.0912 0.000176 0.000204 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 251 52589.366 0.005 0.00311 0.0187 0.0808 0.0508 0.068 0.134 0.167 0.0003 0.000373 ! Validation 251 52589.366 0.005 0.00344 0.0255 0.0944 0.0534 0.0716 0.153 0.195 0.000341 0.000435 Wall time: 52589.366518795956 ! Best model 251 0.094 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 252 100 0.0753 0.00323 0.0107 0.0517 0.0693 0.0918 0.126 0.000205 0.000282 252 118 0.0815 0.00278 0.0259 0.0483 0.0643 0.164 0.196 0.000367 0.000438 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 252 100 0.0749 0.00229 0.0291 0.0453 0.0584 0.201 0.208 0.000448 0.000464 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 252 52798.844 0.005 0.00317 0.0261 0.0895 0.0514 0.0687 0.158 0.197 0.000353 0.000439 ! Validation 252 52798.844 0.005 0.00366 0.042 0.115 0.0555 0.0738 0.209 0.25 0.000466 0.000558 Wall time: 52798.84418634605 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 253 100 1.74 0.0715 0.31 0.247 0.326 0.573 0.679 0.00128 0.00152 253 118 0.904 0.0395 0.114 0.183 0.242 0.323 0.412 0.00072 0.000919 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 253 100 2.07 0.0314 1.44 0.166 0.216 1.45 1.46 0.00324 0.00327 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 253 53008.118 0.005 0.0701 1.83 3.23 0.189 0.323 0.972 1.66 0.00217 0.00369 ! 
Validation 253 53008.118 0.005 0.0387 0.501 1.27 0.182 0.24 0.75 0.863 0.00168 0.00193 Wall time: 53008.11808863096 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 254 100 0.342 0.0117 0.108 0.0981 0.132 0.336 0.401 0.000751 0.000894 254 118 0.324 0.0123 0.078 0.101 0.135 0.297 0.34 0.000662 0.00076 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 254 100 0.219 0.00847 0.0492 0.085 0.112 0.257 0.271 0.000573 0.000604 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 254 53217.654 0.005 0.0182 0.174 0.538 0.122 0.165 0.407 0.509 0.000907 0.00114 ! Validation 254 53217.654 0.005 0.012 0.0839 0.325 0.1 0.134 0.276 0.353 0.000616 0.000788 Wall time: 53217.654674977995 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 255 100 0.655 0.00929 0.47 0.0882 0.118 0.801 0.836 0.00179 0.00187 255 118 0.207 0.00774 0.0522 0.0807 0.107 0.238 0.279 0.000531 0.000622 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 255 100 0.134 0.00637 0.00606 0.0739 0.0974 0.0729 0.095 0.000163 0.000212 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 255 53427.392 0.005 0.0099 0.15 0.348 0.0908 0.121 0.373 0.473 0.000834 0.00106 ! Validation 255 53427.392 0.005 0.00914 0.117 0.3 0.0877 0.117 0.337 0.417 0.000753 0.000932 Wall time: 53427.39292117022 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 256 100 0.231 0.00742 0.0826 0.0793 0.105 0.296 0.351 0.00066 0.000782 256 118 0.382 0.00781 0.226 0.0808 0.108 0.544 0.579 0.00121 0.00129 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 256 100 0.486 0.00525 0.381 0.0675 0.0883 0.75 0.753 0.00167 0.00168 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 256 53637.371 0.005 0.00776 0.0919 0.247 0.0808 0.107 0.297 0.368 0.000663 0.000821 ! Validation 256 53637.371 0.005 0.00779 0.781 0.936 0.0815 0.108 1.04 1.08 0.00232 0.00241 Wall time: 53637.37180937594 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 257 100 0.196 0.00674 0.0609 0.0756 0.1 0.255 0.301 0.00057 0.000672 257 118 0.179 0.00676 0.0435 0.0757 0.1 0.197 0.254 0.00044 0.000568 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 257 100 0.0979 0.00436 0.0108 0.0622 0.0805 0.113 0.127 0.000253 0.000283 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 257 53846.619 0.005 0.00667 0.0612 0.195 0.0753 0.0996 0.233 0.302 0.00052 0.000674 ! Validation 257 53846.619 0.005 0.0066 0.109 0.241 0.0753 0.0991 0.333 0.404 0.000742 0.000901 Wall time: 53846.62020870624 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 258 100 0.129 0.00554 0.0178 0.0688 0.0908 0.136 0.163 0.000303 0.000363 258 118 0.13 0.00558 0.018 0.0697 0.0911 0.132 0.163 0.000294 0.000365 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 258 100 0.0803 0.00392 0.00183 0.0594 0.0764 0.0431 0.0522 9.62e-05 0.000116 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 258 54056.938 0.005 0.00592 0.0709 0.189 0.0712 0.0939 0.261 0.326 0.000583 0.000727 ! 
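The run is not monotone here: at epoch 253 the losses jump by more than an order of magnitude (validation loss 1.27 and force MAE 0.182 in the summary above, against values around 0.1-0.2 and 0.05-0.06 for the preceding epochs) and then decay back over the next several epochs. A self-contained sketch for flagging such excursions when scanning a log like this one, assuming the per-epoch validation losses have already been collected (for example with the parsing sketch a little further below):

    from statistics import median

    def flag_spikes(loss_by_epoch, window=4, factor=5.0):
        # Flag epochs whose loss exceeds `factor` times the median loss of the
        # previous `window` epochs -- a crude detector for transients like epoch 253.
        epochs = sorted(loss_by_epoch)
        spikes = []
        for i, epoch in enumerate(epochs):
            prev = [loss_by_epoch[e] for e in epochs[max(0, i - window):i]]
            if prev and loss_by_epoch[epoch] > factor * median(prev):
                spikes.append(epoch)
        return spikes

    # Validation losses around the excursion, read off the summaries above:
    val_loss = {249: 0.208, 250: 0.0985, 251: 0.0944, 252: 0.115, 253: 1.27, 254: 0.325}
    print(flag_spikes(val_loss))  # -> [253]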
Validation 258 54056.938 0.005 0.00604 0.0625 0.183 0.0721 0.0948 0.241 0.305 0.000538 0.00068 Wall time: 54056.93877907423 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 259 100 0.127 0.00525 0.022 0.0671 0.0884 0.148 0.181 0.00033 0.000403 259 118 0.158 0.0056 0.0462 0.0698 0.0913 0.229 0.262 0.000512 0.000585 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 259 100 0.0985 0.00366 0.0253 0.0577 0.0738 0.188 0.194 0.000419 0.000433 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 259 54266.523 0.005 0.00546 0.0632 0.172 0.0684 0.0901 0.248 0.307 0.000553 0.000685 ! Validation 259 54266.523 0.005 0.00568 0.039 0.153 0.07 0.0919 0.192 0.241 0.000429 0.000538 Wall time: 54266.52419150015 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 260 100 0.118 0.00492 0.0196 0.0649 0.0855 0.141 0.171 0.000315 0.000381 260 118 0.108 0.00454 0.0175 0.0629 0.0822 0.152 0.161 0.000338 0.00036 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 260 100 0.0733 0.00333 0.00666 0.0553 0.0704 0.0941 0.0995 0.00021 0.000222 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 260 54475.929 0.005 0.00501 0.0317 0.132 0.0656 0.0864 0.168 0.218 0.000375 0.000486 ! Validation 260 54475.929 0.005 0.00529 0.0843 0.19 0.0676 0.0887 0.288 0.354 0.000643 0.00079 Wall time: 54475.92944200337 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 261 100 0.147 0.00487 0.0499 0.0644 0.0851 0.227 0.273 0.000507 0.000608 261 118 0.107 0.00476 0.0115 0.0637 0.0841 0.104 0.131 0.000232 0.000292 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 261 100 0.153 0.00312 0.091 0.0535 0.0681 0.367 0.368 0.000819 0.000821 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 261 54689.613 0.005 0.00478 0.0583 0.154 0.064 0.0843 0.235 0.295 0.000525 0.000659 ! Validation 261 54689.613 0.005 0.00498 0.19 0.29 0.0655 0.086 0.482 0.532 0.00108 0.00119 Wall time: 54689.61364217894 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 262 100 0.117 0.00443 0.028 0.0618 0.0811 0.17 0.204 0.000379 0.000456 262 118 0.143 0.00541 0.0345 0.0672 0.0897 0.204 0.227 0.000456 0.000506 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 262 100 0.0662 0.00298 0.00652 0.0524 0.0666 0.0937 0.0985 0.000209 0.00022 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 262 54898.849 0.005 0.00463 0.0846 0.177 0.063 0.0829 0.285 0.355 0.000636 0.000793 ! Validation 262 54898.849 0.005 0.00484 0.0689 0.166 0.0645 0.0848 0.259 0.32 0.000578 0.000715 Wall time: 54898.84992222721 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 263 100 0.105 0.00423 0.0204 0.0604 0.0793 0.142 0.174 0.000318 0.000389 263 118 0.124 0.00411 0.042 0.0596 0.0782 0.217 0.25 0.000485 0.000558 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 263 100 0.0575 0.00285 0.000486 0.0513 0.0651 0.0198 0.0269 4.42e-05 6e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 263 55108.101 0.005 0.00436 0.0356 0.123 0.0611 0.0805 0.185 0.23 0.000412 0.000513 ! 
Validation 263 55108.101 0.005 0.00461 0.0399 0.132 0.0629 0.0828 0.189 0.244 0.000422 0.000544 Wall time: 55108.1010499713 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 264 100 0.122 0.00414 0.0392 0.0594 0.0785 0.214 0.242 0.000477 0.000539 264 118 0.0953 0.00391 0.017 0.0581 0.0763 0.129 0.159 0.000288 0.000355 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 264 100 0.0594 0.00276 0.00424 0.0504 0.0641 0.0749 0.0794 0.000167 0.000177 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 264 55317.424 0.005 0.00417 0.0382 0.122 0.0596 0.0787 0.192 0.239 0.000428 0.000533 ! Validation 264 55317.424 0.005 0.00448 0.074 0.163 0.0618 0.0816 0.268 0.332 0.000597 0.00074 Wall time: 55317.424316803925 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 265 100 0.0892 0.00388 0.0116 0.0579 0.076 0.0959 0.131 0.000214 0.000293 265 118 0.0752 0.00363 0.00261 0.0556 0.0735 0.0445 0.0623 9.94e-05 0.000139 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 265 100 0.054 0.00266 0.000765 0.0493 0.0629 0.0314 0.0337 7.01e-05 7.53e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 265 55526.653 0.005 0.00405 0.0503 0.131 0.0588 0.0777 0.216 0.274 0.000483 0.000612 ! Validation 265 55526.653 0.005 0.00436 0.0463 0.134 0.061 0.0806 0.201 0.262 0.000449 0.000586 Wall time: 55526.653678120114 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 266 100 0.0915 0.00395 0.0125 0.0579 0.0767 0.111 0.136 0.000249 0.000304 266 118 0.0817 0.00342 0.0134 0.0544 0.0713 0.106 0.141 0.000236 0.000315 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 266 100 0.0606 0.00254 0.00984 0.0481 0.0614 0.118 0.121 0.000264 0.00027 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 266 55735.886 0.005 0.00391 0.0262 0.104 0.0577 0.0763 0.157 0.198 0.000349 0.000441 ! Validation 266 55735.886 0.005 0.00419 0.0851 0.169 0.0597 0.079 0.289 0.356 0.000645 0.000794 Wall time: 55735.88668794138 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 267 100 0.101 0.00391 0.0232 0.0572 0.0762 0.159 0.186 0.000354 0.000415 267 118 0.134 0.00383 0.0578 0.0567 0.0754 0.239 0.293 0.000534 0.000655 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 267 100 0.0667 0.00256 0.0154 0.0483 0.0618 0.147 0.152 0.000328 0.000338 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 267 55945.299 0.005 0.00388 0.0678 0.145 0.0574 0.076 0.263 0.318 0.000586 0.000709 ! Validation 267 55945.299 0.005 0.00422 0.0739 0.158 0.0599 0.0792 0.275 0.331 0.000613 0.00074 Wall time: 55945.29929028312 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 268 100 0.234 0.00385 0.157 0.0567 0.0756 0.467 0.483 0.00104 0.00108 268 118 0.086 0.0038 0.00995 0.0566 0.0752 0.106 0.122 0.000236 0.000272 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 268 100 0.109 0.00251 0.0584 0.0476 0.0611 0.292 0.295 0.000652 0.000658 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 268 56154.508 0.005 0.00376 0.0367 0.112 0.0564 0.0748 0.188 0.234 0.000419 0.000523 ! 
Validation 268 56154.508 0.005 0.00412 0.153 0.236 0.059 0.0782 0.425 0.477 0.000948 0.00107 Wall time: 56154.508402316365 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 269 100 0.107 0.00361 0.0344 0.055 0.0732 0.198 0.226 0.000442 0.000505 269 118 0.135 0.00386 0.0579 0.0563 0.0757 0.274 0.293 0.000612 0.000655 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 269 100 0.0945 0.00238 0.0469 0.0464 0.0595 0.262 0.264 0.000584 0.000589 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 269 56363.951 0.005 0.00369 0.0365 0.11 0.0557 0.074 0.189 0.232 0.000421 0.000519 ! Validation 269 56363.951 0.005 0.00398 0.117 0.196 0.058 0.0769 0.367 0.417 0.000818 0.00093 Wall time: 56363.951921964064 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 270 100 0.102 0.00391 0.0233 0.0566 0.0763 0.152 0.186 0.00034 0.000415 270 118 0.109 0.00344 0.04 0.0544 0.0716 0.208 0.244 0.000464 0.000544 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 270 100 0.057 0.00236 0.00986 0.0461 0.0592 0.113 0.121 0.000252 0.00027 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 270 56573.194 0.005 0.0036 0.0269 0.099 0.055 0.0732 0.158 0.2 0.000352 0.000446 ! Validation 270 56573.194 0.005 0.00392 0.0286 0.107 0.0574 0.0763 0.163 0.206 0.000364 0.000461 Wall time: 56573.194818546996 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 271 100 0.0807 0.00356 0.00963 0.0545 0.0727 0.0952 0.12 0.000213 0.000267 271 118 0.0937 0.0036 0.0217 0.0554 0.0732 0.142 0.18 0.000318 0.000401 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 271 100 0.0518 0.00231 0.00557 0.0457 0.0586 0.0797 0.091 0.000178 0.000203 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 271 56784.053 0.005 0.00359 0.0434 0.115 0.0549 0.0731 0.197 0.254 0.000441 0.000568 ! Validation 271 56784.053 0.005 0.00386 0.0561 0.133 0.057 0.0758 0.233 0.289 0.00052 0.000645 Wall time: 56784.05310500134 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 272 100 0.0799 0.00334 0.013 0.0531 0.0705 0.109 0.139 0.000242 0.00031 272 118 0.0759 0.00348 0.00631 0.0536 0.0719 0.0802 0.0969 0.000179 0.000216 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 272 100 0.0755 0.00228 0.0298 0.0454 0.0583 0.206 0.211 0.000459 0.00047 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 272 56993.239 0.005 0.0035 0.0281 0.098 0.0541 0.0721 0.164 0.205 0.000366 0.000457 ! Validation 272 56993.239 0.005 0.00381 0.113 0.19 0.0565 0.0752 0.359 0.411 0.000802 0.000917 Wall time: 56993.23966233013 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 273 100 0.0913 0.00363 0.0188 0.0547 0.0734 0.142 0.167 0.000316 0.000374 273 118 0.143 0.00348 0.0738 0.0541 0.0719 0.288 0.331 0.000644 0.00074 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 273 100 0.0803 0.00224 0.0355 0.0449 0.0577 0.226 0.23 0.000505 0.000513 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 273 57202.422 0.005 0.00346 0.0306 0.0998 0.0538 0.0717 0.17 0.212 0.000381 0.000474 ! 
Validation 273 57202.422 0.005 0.00375 0.1 0.175 0.0561 0.0747 0.339 0.386 0.000758 0.000861 Wall time: 57202.42232769402 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 274 100 0.0941 0.00328 0.0285 0.0524 0.0698 0.178 0.206 0.000397 0.000459 274 118 0.0842 0.00329 0.0184 0.053 0.0699 0.142 0.165 0.000318 0.000369 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 274 100 0.0611 0.00222 0.0166 0.0446 0.0575 0.148 0.157 0.000331 0.000351 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 274 57411.741 0.005 0.00344 0.038 0.107 0.0537 0.0716 0.193 0.238 0.00043 0.000532 ! Validation 274 57411.741 0.005 0.00374 0.0518 0.127 0.056 0.0745 0.231 0.278 0.000516 0.00062 Wall time: 57411.74121343205 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 275 100 0.102 0.00334 0.0354 0.0526 0.0705 0.199 0.229 0.000445 0.000512 275 118 0.0793 0.00354 0.00858 0.0545 0.0725 0.0908 0.113 0.000203 0.000252 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 275 100 0.0461 0.0022 0.00219 0.0443 0.0571 0.0511 0.0571 0.000114 0.000127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 275 57620.841 0.005 0.00342 0.0368 0.105 0.0534 0.0713 0.189 0.234 0.000421 0.000523 ! Validation 275 57620.841 0.005 0.00367 0.0382 0.112 0.0554 0.0739 0.186 0.238 0.000416 0.000532 Wall time: 57620.84153510304 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 276 100 0.111 0.00317 0.048 0.0515 0.0687 0.245 0.267 0.000547 0.000597 276 118 0.106 0.00296 0.0473 0.0503 0.0663 0.257 0.265 0.000573 0.000592 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 276 100 0.0981 0.00215 0.055 0.0439 0.0566 0.281 0.286 0.000628 0.000638 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 276 57829.931 0.005 0.00334 0.0291 0.0959 0.0528 0.0706 0.169 0.207 0.000377 0.000463 ! Validation 276 57829.931 0.005 0.00362 0.0489 0.121 0.055 0.0734 0.227 0.27 0.000506 0.000602 Wall time: 57829.93190317834 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 277 100 0.222 0.00301 0.162 0.0507 0.067 0.48 0.491 0.00107 0.0011 277 118 0.0723 0.00335 0.00539 0.0536 0.0706 0.0706 0.0895 0.000158 0.0002 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 277 100 0.0468 0.00224 0.00195 0.0446 0.0577 0.0422 0.0538 9.42e-05 0.00012 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 277 58039.027 0.005 0.00334 0.0401 0.107 0.0527 0.0704 0.199 0.245 0.000445 0.000547 ! Validation 277 58039.027 0.005 0.0037 0.038 0.112 0.0557 0.0741 0.184 0.238 0.000411 0.000531 Wall time: 58039.027363477275 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 278 100 0.0746 0.0032 0.0105 0.0519 0.069 0.089 0.125 0.000199 0.000279 278 118 0.174 0.0037 0.0997 0.0538 0.0742 0.375 0.385 0.000837 0.000859 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 278 100 0.12 0.00217 0.077 0.0441 0.0569 0.335 0.338 0.000748 0.000755 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 278 58248.275 0.005 0.00329 0.0273 0.0932 0.0523 0.0699 0.16 0.2 0.000356 0.000446 ! 
Validation 278 58248.275 0.005 0.00361 0.0641 0.136 0.0549 0.0732 0.265 0.309 0.000592 0.000689 Wall time: 58248.27592425607 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 279 100 0.0736 0.00298 0.014 0.0501 0.0666 0.122 0.144 0.000273 0.000322 279 118 0.0905 0.00383 0.0138 0.0559 0.0755 0.118 0.143 0.000264 0.00032 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 279 100 0.0676 0.00209 0.0257 0.0433 0.0558 0.187 0.196 0.000417 0.000437 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 279 58457.482 0.005 0.00329 0.0347 0.101 0.0523 0.0699 0.178 0.228 0.000398 0.000508 ! Validation 279 58457.482 0.005 0.00356 0.0365 0.108 0.0546 0.0728 0.187 0.233 0.000417 0.00052 Wall time: 58457.48230661405 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 280 100 0.176 0.00325 0.111 0.0521 0.0695 0.394 0.406 0.000878 0.000905 280 118 0.18 0.00307 0.119 0.0506 0.0676 0.416 0.42 0.000929 0.000938 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 280 100 0.0933 0.00218 0.0498 0.0441 0.0569 0.266 0.272 0.000593 0.000607 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 280 58666.672 0.005 0.00322 0.0319 0.0963 0.0518 0.0692 0.174 0.216 0.000389 0.000482 ! Validation 280 58666.672 0.005 0.00359 0.14 0.211 0.0549 0.0731 0.41 0.456 0.000915 0.00102 Wall time: 58666.672264365014 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 281 100 0.106 0.00343 0.0368 0.053 0.0715 0.209 0.234 0.000466 0.000522 281 118 0.0758 0.00298 0.0163 0.0494 0.0665 0.143 0.155 0.00032 0.000347 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 281 100 0.0592 0.00208 0.0175 0.043 0.0557 0.151 0.162 0.000337 0.000361 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 281 58876.071 0.005 0.00326 0.0354 0.1 0.0521 0.0696 0.178 0.23 0.000398 0.000513 ! Validation 281 58876.071 0.005 0.00348 0.0747 0.144 0.0539 0.072 0.282 0.333 0.00063 0.000744 Wall time: 58876.07123522833 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 282 100 0.0747 0.00305 0.0137 0.0501 0.0673 0.108 0.143 0.000242 0.000319 282 118 0.0631 0.00263 0.0106 0.0473 0.0625 0.0986 0.125 0.00022 0.00028 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 282 100 0.0451 0.00211 0.00291 0.0434 0.056 0.0512 0.0658 0.000114 0.000147 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 282 59085.285 0.005 0.00319 0.0298 0.0935 0.0515 0.0689 0.162 0.211 0.000362 0.000471 ! Validation 282 59085.285 0.005 0.00349 0.0629 0.133 0.0539 0.072 0.24 0.306 0.000535 0.000682 Wall time: 59085.28542643925 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 283 100 0.183 0.00312 0.12 0.0507 0.0682 0.411 0.423 0.000918 0.000943 283 118 0.0822 0.0039 0.00425 0.0556 0.0761 0.0615 0.0795 0.000137 0.000178 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 283 100 0.0489 0.00212 0.00648 0.0436 0.0561 0.0868 0.0982 0.000194 0.000219 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 283 59294.606 0.005 0.00317 0.0286 0.0921 0.0513 0.0687 0.164 0.207 0.000366 0.000462 ! 
Validation 283 59294.606 0.005 0.0035 0.0404 0.11 0.0541 0.0721 0.187 0.245 0.000417 0.000547 Wall time: 59294.60658074636 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 284 100 0.141 0.00334 0.0743 0.0526 0.0705 0.317 0.332 0.000708 0.000742 284 118 0.0901 0.00304 0.0294 0.0503 0.0672 0.171 0.209 0.000382 0.000467 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 284 100 0.0452 0.00213 0.00249 0.0437 0.0563 0.0555 0.0608 0.000124 0.000136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 284 59503.836 0.005 0.00318 0.0463 0.11 0.0514 0.0688 0.208 0.263 0.000464 0.000587 ! Validation 284 59503.836 0.005 0.00352 0.0382 0.109 0.0542 0.0724 0.187 0.238 0.000417 0.000532 Wall time: 59503.836025908124 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 285 100 0.076 0.00307 0.0145 0.0503 0.0676 0.113 0.147 0.000253 0.000328 285 118 0.136 0.00272 0.082 0.048 0.0637 0.34 0.349 0.00076 0.000779 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 285 100 0.203 0.00217 0.159 0.0439 0.0568 0.483 0.487 0.00108 0.00109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 285 59713.059 0.005 0.00312 0.0226 0.085 0.051 0.0682 0.143 0.181 0.00032 0.000405 ! Validation 285 59713.059 0.005 0.00356 0.0989 0.17 0.0545 0.0727 0.335 0.384 0.000749 0.000856 Wall time: 59713.05930632632 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 286 100 0.141 0.00317 0.0778 0.0515 0.0687 0.32 0.34 0.000715 0.000759 286 118 0.159 0.00335 0.0918 0.0528 0.0706 0.34 0.37 0.00076 0.000825 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 286 100 0.0657 0.0022 0.0218 0.0441 0.0572 0.173 0.18 0.000386 0.000402 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 286 59922.280 0.005 0.00318 0.0431 0.107 0.0514 0.0687 0.206 0.252 0.000459 0.000563 ! Validation 286 59922.280 0.005 0.00359 0.034 0.106 0.0548 0.073 0.184 0.225 0.000411 0.000502 Wall time: 59922.28010093421 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 287 100 0.068 0.00303 0.00752 0.0502 0.0671 0.0814 0.106 0.000182 0.000236 287 118 0.142 0.0033 0.0759 0.0522 0.0701 0.324 0.336 0.000724 0.00075 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 287 100 0.065 0.00207 0.0235 0.043 0.0555 0.176 0.187 0.000393 0.000417 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 287 60131.679 0.005 0.0031 0.0225 0.0845 0.0507 0.0679 0.143 0.182 0.000319 0.000405 ! Validation 287 60131.679 0.005 0.0034 0.126 0.194 0.0532 0.0711 0.363 0.433 0.00081 0.000966 Wall time: 60131.67992033111 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 288 100 0.1 0.0029 0.042 0.0492 0.0657 0.228 0.25 0.000509 0.000558 288 118 0.0633 0.00269 0.00959 0.0478 0.0632 0.0939 0.119 0.00021 0.000267 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 288 100 0.0457 0.00211 0.00346 0.0432 0.056 0.0531 0.0718 0.000118 0.00016 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 288 60340.914 0.005 0.00308 0.0311 0.0927 0.0506 0.0677 0.17 0.215 0.00038 0.000481 ! 
Validation 288 60340.914 0.005 0.00346 0.0496 0.119 0.0536 0.0717 0.22 0.272 0.000492 0.000606 Wall time: 60340.91479382804 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 289 100 0.079 0.00324 0.0142 0.0515 0.0694 0.122 0.145 0.000273 0.000324 289 118 0.138 0.00337 0.0703 0.0524 0.0708 0.309 0.323 0.000689 0.000722 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 289 100 0.0436 0.00206 0.00244 0.0425 0.0553 0.0568 0.0602 0.000127 0.000134 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 289 60550.003 0.005 0.00308 0.0367 0.0983 0.0506 0.0677 0.186 0.233 0.000416 0.00052 ! Validation 289 60550.003 0.005 0.00341 0.0282 0.0964 0.0532 0.0712 0.164 0.205 0.000367 0.000457 Wall time: 60550.00392541103 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 290 100 0.0732 0.003 0.0132 0.0501 0.0668 0.12 0.14 0.000268 0.000312 290 118 0.0786 0.00342 0.0101 0.0536 0.0714 0.11 0.123 0.000247 0.000274 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 290 100 0.0447 0.00203 0.00401 0.0425 0.055 0.057 0.0773 0.000127 0.000172 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 290 60759.090 0.005 0.00309 0.0248 0.0865 0.0506 0.0677 0.158 0.193 0.000353 0.00043 ! Validation 290 60759.090 0.005 0.00334 0.0409 0.108 0.0527 0.0705 0.195 0.247 0.000436 0.00055 Wall time: 60759.09069976909 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 291 100 0.15 0.00402 0.0699 0.0583 0.0773 0.282 0.322 0.000629 0.000719 291 118 0.0984 0.0038 0.0224 0.0558 0.0752 0.155 0.183 0.000345 0.000407 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 291 100 0.113 0.00214 0.0698 0.0437 0.0565 0.314 0.322 0.000701 0.000719 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 291 60968.182 0.005 0.00319 0.0512 0.115 0.0514 0.0688 0.216 0.276 0.000482 0.000617 ! Validation 291 60968.182 0.005 0.00348 0.0517 0.121 0.0539 0.0719 0.231 0.277 0.000515 0.000619 Wall time: 60968.18243379099 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 292 100 0.0675 0.003 0.00745 0.0497 0.0668 0.0878 0.105 0.000196 0.000235 292 118 0.0668 0.00288 0.00929 0.0497 0.0654 0.102 0.118 0.000228 0.000262 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 292 100 0.0466 0.002 0.00667 0.042 0.0545 0.0857 0.0996 0.000191 0.000222 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 292 61177.270 0.005 0.00305 0.0292 0.0902 0.0503 0.0673 0.172 0.209 0.000384 0.000467 ! Validation 292 61177.270 0.005 0.0033 0.0315 0.0974 0.0523 0.07 0.167 0.216 0.000373 0.000483 Wall time: 61177.27055700729 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 293 100 0.0812 0.00309 0.0195 0.0502 0.0678 0.153 0.17 0.000342 0.00038 293 118 0.0776 0.00337 0.0102 0.0525 0.0708 0.101 0.123 0.000225 0.000275 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 293 100 0.0512 0.00198 0.0116 0.0418 0.0542 0.12 0.132 0.000267 0.000294 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 293 61386.428 0.005 0.00298 0.0195 0.0791 0.0497 0.0666 0.138 0.17 0.000307 0.000381 ! 
Validation 293 61386.428 0.005 0.00332 0.04 0.106 0.0526 0.0703 0.199 0.244 0.000443 0.000544 Wall time: 61386.42795209121 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 294 100 0.0786 0.00311 0.0165 0.0507 0.068 0.135 0.157 0.000301 0.000349 294 118 0.0886 0.00306 0.0274 0.051 0.0675 0.166 0.202 0.00037 0.00045 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 294 100 0.0678 0.00197 0.0284 0.0418 0.0541 0.196 0.206 0.000438 0.000459 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 294 61595.505 0.005 0.00301 0.0299 0.0901 0.05 0.0669 0.171 0.211 0.000382 0.000471 ! Validation 294 61595.505 0.005 0.00325 0.1 0.165 0.052 0.0696 0.336 0.386 0.000749 0.000861 Wall time: 61595.50540740229 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 295 100 0.0985 0.00301 0.0383 0.0497 0.0669 0.212 0.239 0.000474 0.000533 295 118 0.0627 0.00281 0.00647 0.0488 0.0647 0.0812 0.0981 0.000181 0.000219 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 295 100 0.0459 0.00195 0.00689 0.0416 0.0539 0.0751 0.101 0.000168 0.000226 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 295 61804.612 0.005 0.00298 0.0294 0.089 0.0498 0.0666 0.17 0.21 0.000379 0.000468 ! Validation 295 61804.612 0.005 0.00323 0.0421 0.107 0.0518 0.0693 0.196 0.25 0.000438 0.000558 Wall time: 61804.61259872606 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 296 100 0.0717 0.00319 0.00794 0.0515 0.0688 0.0861 0.109 0.000192 0.000243 296 118 0.0598 0.00274 0.00506 0.0481 0.0638 0.0692 0.0868 0.000155 0.000194 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 296 100 0.0492 0.002 0.00917 0.0421 0.0545 0.0997 0.117 0.000223 0.000261 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 296 62013.705 0.005 0.00297 0.0289 0.0882 0.0497 0.0664 0.167 0.208 0.000373 0.000464 ! Validation 296 62013.705 0.005 0.00329 0.0289 0.0946 0.0524 0.0699 0.163 0.207 0.000364 0.000462 Wall time: 62013.705891471356 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 297 100 0.109 0.00289 0.0511 0.0492 0.0655 0.248 0.276 0.000553 0.000615 297 118 0.175 0.00293 0.116 0.0495 0.066 0.399 0.415 0.00089 0.000927 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 297 100 0.113 0.00248 0.0638 0.0467 0.0607 0.298 0.308 0.000665 0.000687 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 297 62222.891 0.005 0.00292 0.0282 0.0866 0.0493 0.0659 0.155 0.202 0.000346 0.000452 ! Validation 297 62222.891 0.005 0.0037 0.0538 0.128 0.0556 0.0742 0.229 0.283 0.000512 0.000632 Wall time: 62222.89099347824 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 298 100 0.11 0.0034 0.0419 0.0531 0.0711 0.223 0.25 0.000497 0.000557 298 118 0.142 0.00285 0.0846 0.0492 0.0652 0.344 0.355 0.000769 0.000792 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 298 100 0.229 0.00225 0.184 0.0447 0.0578 0.519 0.523 0.00116 0.00117 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 298 62432.141 0.005 0.00304 0.0312 0.092 0.0503 0.0673 0.171 0.214 0.000383 0.000478 ! 
Validation 298 62432.141 0.005 0.00353 0.107 0.178 0.0545 0.0724 0.35 0.399 0.000781 0.00089 Wall time: 62432.14108947525 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 299 100 0.0799 0.00302 0.0195 0.0499 0.067 0.147 0.17 0.000327 0.000381 299 118 0.0921 0.00308 0.0305 0.0499 0.0677 0.194 0.213 0.000432 0.000475 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 299 100 0.0457 0.00201 0.00555 0.042 0.0547 0.0793 0.0909 0.000177 0.000203 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 299 62641.235 0.005 0.00307 0.0376 0.099 0.0505 0.0676 0.183 0.237 0.000408 0.000528 ! Validation 299 62641.235 0.005 0.00329 0.0237 0.0895 0.0523 0.0699 0.15 0.188 0.000334 0.000419 Wall time: 62641.23576998338 ! Best model 299 0.089 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 300 100 0.0583 0.00257 0.007 0.0466 0.0618 0.0803 0.102 0.000179 0.000228 300 118 0.107 0.00269 0.0533 0.0477 0.0632 0.273 0.282 0.000609 0.000628 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 300 100 0.0897 0.00195 0.0506 0.0415 0.0539 0.268 0.274 0.000598 0.000612 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 300 62850.325 0.005 0.00288 0.015 0.0726 0.0489 0.0655 0.118 0.148 0.000264 0.00033 ! Validation 300 62850.325 0.005 0.00322 0.0394 0.104 0.0519 0.0692 0.201 0.242 0.00045 0.000541 Wall time: 62850.325197908096 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 301 100 0.125 0.00297 0.0653 0.0496 0.0665 0.298 0.312 0.000665 0.000696 301 118 0.0689 0.00269 0.0152 0.0468 0.0632 0.131 0.15 0.000292 0.000335 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 301 100 0.0787 0.00191 0.0404 0.0413 0.0533 0.238 0.245 0.000531 0.000547 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 301 63061.696 0.005 0.0029 0.036 0.094 0.049 0.0657 0.187 0.232 0.000417 0.000518 ! Validation 301 63061.696 0.005 0.00319 0.0822 0.146 0.0516 0.0689 0.304 0.35 0.000678 0.000781 Wall time: 63061.69675356103 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 302 100 0.0988 0.00293 0.0402 0.0492 0.066 0.211 0.245 0.000471 0.000546 302 118 0.0866 0.0029 0.0287 0.0493 0.0656 0.177 0.206 0.000395 0.000461 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 302 100 0.0466 0.00191 0.0083 0.0411 0.0534 0.095 0.111 0.000212 0.000248 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 302 63270.903 0.005 0.00293 0.0295 0.088 0.0493 0.066 0.168 0.209 0.000375 0.000467 ! Validation 302 63270.903 0.005 0.00318 0.0278 0.0915 0.0515 0.0688 0.158 0.203 0.000352 0.000454 Wall time: 63270.90367253311 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 303 100 0.225 0.00289 0.167 0.0493 0.0656 0.485 0.498 0.00108 0.00111 303 118 0.077 0.00282 0.0207 0.0485 0.0647 0.16 0.176 0.000357 0.000392 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 303 100 0.0647 0.00249 0.0148 0.0463 0.0609 0.134 0.148 0.0003 0.000331 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 303 63480.018 0.005 0.00296 0.0376 0.0967 0.0496 0.0663 0.186 0.237 0.000415 0.000529 ! 
Validation 303 63480.018 0.005 0.0038 0.0551 0.131 0.0563 0.0752 0.235 0.286 0.000524 0.000639 Wall time: 63480.01869262196 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 304 100 0.143 0.00289 0.0847 0.0487 0.0656 0.341 0.355 0.00076 0.000792 304 118 0.0642 0.00265 0.0113 0.0474 0.0628 0.114 0.129 0.000255 0.000289 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 304 100 0.0453 0.00213 0.0027 0.0433 0.0563 0.0604 0.0633 0.000135 0.000141 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 304 63703.289 0.005 0.00292 0.0285 0.0869 0.0493 0.0659 0.166 0.206 0.000371 0.000461 ! Validation 304 63703.289 0.005 0.00339 0.0325 0.1 0.0531 0.071 0.173 0.22 0.000387 0.00049 Wall time: 63703.289227459114 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 305 100 0.0676 0.00296 0.00845 0.0495 0.0663 0.0915 0.112 0.000204 0.00025 305 118 0.33 0.004 0.25 0.0579 0.0771 0.606 0.61 0.00135 0.00136 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 305 100 0.06 0.00287 0.00248 0.0505 0.0654 0.0435 0.0608 9.72e-05 0.000136 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 305 63912.490 0.005 0.00284 0.0261 0.0828 0.0484 0.0649 0.145 0.191 0.000324 0.000426 ! Validation 305 63912.490 0.005 0.00419 0.168 0.252 0.0595 0.0789 0.391 0.5 0.000873 0.00112 Wall time: 63912.49012562819 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 306 100 0.061 0.00271 0.00683 0.048 0.0634 0.0832 0.101 0.000186 0.000225 306 118 0.0936 0.00291 0.0354 0.0488 0.0657 0.204 0.23 0.000456 0.000512 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 306 100 0.0414 0.0019 0.00344 0.0409 0.0531 0.053 0.0715 0.000118 0.00016 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 306 64121.706 0.005 0.00296 0.0372 0.0964 0.0497 0.0664 0.18 0.235 0.000402 0.000525 ! Validation 306 64121.706 0.005 0.00315 0.0363 0.0992 0.0512 0.0684 0.18 0.232 0.000402 0.000519 Wall time: 64121.706457147375 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 307 100 0.0726 0.00288 0.0151 0.0486 0.0654 0.125 0.15 0.000278 0.000334 307 118 0.0774 0.00338 0.0098 0.0527 0.0709 0.0951 0.121 0.000212 0.00027 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 307 100 0.0435 0.00198 0.00385 0.042 0.0543 0.0676 0.0756 0.000151 0.000169 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 307 64331.026 0.005 0.0028 0.0183 0.0743 0.0482 0.0645 0.134 0.165 0.0003 0.000369 ! Validation 307 64331.026 0.005 0.0032 0.024 0.0879 0.0517 0.069 0.149 0.189 0.000332 0.000421 Wall time: 64331.02628820995 ! Best model 307 0.088 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 308 100 0.0685 0.00292 0.0101 0.0487 0.0659 0.1 0.123 0.000224 0.000273 308 118 0.0771 0.00308 0.0154 0.0499 0.0677 0.113 0.151 0.000253 0.000338 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 308 100 0.0411 0.00186 0.00391 0.0406 0.0526 0.0561 0.0763 0.000125 0.00017 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 308 64540.241 0.005 0.0028 0.0181 0.0741 0.0482 0.0645 0.131 0.164 0.000292 0.000367 ! 
Validation 308 64540.241 0.005 0.00307 0.0638 0.125 0.0505 0.0676 0.242 0.308 0.000541 0.000688 Wall time: 64540.241087195 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 309 100 0.0641 0.00289 0.00631 0.0488 0.0656 0.0794 0.0968 0.000177 0.000216 309 118 0.0561 0.0025 0.00615 0.0457 0.0609 0.0726 0.0956 0.000162 0.000213 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 309 100 0.042 0.00185 0.00495 0.0404 0.0525 0.0593 0.0858 0.000132 0.000192 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 309 64749.440 0.005 0.00281 0.025 0.0813 0.0484 0.0647 0.152 0.193 0.000338 0.000432 ! Validation 309 64749.440 0.005 0.00307 0.0468 0.108 0.0505 0.0676 0.214 0.264 0.000477 0.000589 Wall time: 64749.44048169535 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 310 100 0.0654 0.00273 0.0108 0.0479 0.0637 0.103 0.127 0.00023 0.000283 310 118 0.104 0.00279 0.0483 0.0484 0.0644 0.249 0.268 0.000557 0.000598 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 310 100 0.06 0.00181 0.0238 0.0401 0.0519 0.179 0.188 0.0004 0.00042 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 310 64958.659 0.005 0.00277 0.0248 0.0802 0.048 0.0642 0.158 0.191 0.000353 0.000427 ! Validation 310 64958.659 0.005 0.00301 0.0833 0.144 0.0501 0.0669 0.298 0.352 0.000665 0.000786 Wall time: 64958.659438506234 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 311 100 0.0643 0.00279 0.0085 0.0484 0.0644 0.0885 0.112 0.000197 0.000251 311 118 0.089 0.00327 0.0235 0.0517 0.0698 0.182 0.187 0.000407 0.000418 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 311 100 0.0671 0.00192 0.0286 0.0413 0.0535 0.198 0.206 0.000441 0.00046 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 311 65167.843 0.005 0.00278 0.0251 0.0806 0.048 0.0642 0.155 0.193 0.000347 0.000431 ! Validation 311 65167.843 0.005 0.00317 0.0371 0.1 0.0514 0.0687 0.191 0.235 0.000427 0.000524 Wall time: 65167.84366336698 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 312 100 0.0848 0.00344 0.016 0.0534 0.0715 0.13 0.154 0.00029 0.000345 312 118 0.063 0.00289 0.00515 0.0493 0.0656 0.0731 0.0876 0.000163 0.000195 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 312 100 0.0411 0.00193 0.00244 0.041 0.0536 0.0488 0.0603 0.000109 0.000135 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 312 65377.119 0.005 0.00287 0.0312 0.0887 0.0488 0.0653 0.174 0.216 0.000388 0.000482 ! Validation 312 65377.119 0.005 0.00317 0.0351 0.0985 0.0513 0.0687 0.178 0.228 0.000398 0.00051 Wall time: 65377.11919796327 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 313 100 0.0799 0.00274 0.0251 0.0483 0.0638 0.157 0.193 0.000351 0.000431 313 118 0.0734 0.00319 0.00966 0.0512 0.0688 0.0989 0.12 0.000221 0.000267 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 313 100 0.0603 0.00212 0.0178 0.0436 0.0562 0.146 0.163 0.000325 0.000363 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 313 65586.324 0.005 0.00285 0.0396 0.0966 0.0487 0.0651 0.185 0.243 0.000413 0.000543 ! 
Validation 313 65586.324 0.005 0.00335 0.137 0.204 0.053 0.0705 0.396 0.452 0.000883 0.00101 Wall time: 65586.32492244104 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 314 100 0.087 0.00283 0.0303 0.0485 0.0649 0.192 0.212 0.000428 0.000474 314 118 0.0744 0.00305 0.0134 0.0503 0.0674 0.118 0.141 0.000262 0.000315 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 314 100 0.0457 0.00196 0.00651 0.0416 0.054 0.0752 0.0984 0.000168 0.00022 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 314 65795.524 0.005 0.0028 0.0254 0.0814 0.0482 0.0645 0.154 0.195 0.000344 0.000435 ! Validation 314 65795.524 0.005 0.00315 0.0427 0.106 0.0513 0.0684 0.196 0.252 0.000438 0.000563 Wall time: 65795.52479942422 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 315 100 0.0756 0.00304 0.0149 0.0505 0.0672 0.118 0.149 0.000263 0.000332 315 118 0.0682 0.00265 0.0152 0.047 0.0628 0.112 0.15 0.00025 0.000335 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 315 100 0.0655 0.00181 0.0293 0.0401 0.0519 0.2 0.209 0.000446 0.000466 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 315 66009.431 0.005 0.00278 0.0286 0.0843 0.0481 0.0643 0.162 0.207 0.000362 0.000461 ! Validation 315 66009.431 0.005 0.00299 0.0286 0.0884 0.0498 0.0667 0.172 0.206 0.000383 0.000461 Wall time: 66009.43199494015 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 316 100 0.0622 0.00239 0.0145 0.0449 0.0596 0.122 0.147 0.000272 0.000327 316 118 0.0586 0.00262 0.00617 0.0464 0.0624 0.0746 0.0958 0.000167 0.000214 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 316 100 0.0449 0.00176 0.00964 0.0396 0.0512 0.105 0.12 0.000234 0.000267 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 316 66218.729 0.005 0.00279 0.0304 0.0862 0.0482 0.0645 0.169 0.213 0.000377 0.000476 ! Validation 316 66218.729 0.005 0.00295 0.0286 0.0875 0.0495 0.0662 0.161 0.206 0.00036 0.00046 Wall time: 66218.72967053624 ! Best model 316 0.088 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 317 100 0.077 0.00286 0.0199 0.0486 0.0652 0.147 0.172 0.000328 0.000384 317 118 0.0612 0.00267 0.00778 0.0469 0.063 0.0939 0.108 0.00021 0.00024 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 317 100 0.0382 0.00183 0.00152 0.04 0.0522 0.0404 0.0475 9.03e-05 0.000106 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 317 66427.946 0.005 0.00271 0.0189 0.0732 0.0475 0.0635 0.134 0.168 0.0003 0.000375 ! Validation 317 66427.946 0.005 0.00304 0.0203 0.0811 0.0503 0.0673 0.141 0.174 0.000314 0.000388 Wall time: 66427.94672459923 ! Best model 317 0.081 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 318 100 0.063 0.0029 0.00497 0.049 0.0657 0.0649 0.0859 0.000145 0.000192 318 118 0.0602 0.00274 0.00553 0.0482 0.0638 0.0765 0.0907 0.000171 0.000202 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 318 100 0.0512 0.00199 0.0114 0.0418 0.0544 0.121 0.13 0.00027 0.000291 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 318 66637.160 0.005 0.0027 0.0255 0.0794 0.0473 0.0633 0.158 0.195 0.000353 0.000435 ! 
Validation 318 66637.160 0.005 0.00316 0.044 0.107 0.0514 0.0686 0.21 0.256 0.00047 0.000571 Wall time: 66637.16025195923 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 319 100 0.0638 0.00259 0.0121 0.0464 0.062 0.11 0.134 0.000245 0.0003 319 118 0.0583 0.00278 0.00281 0.0482 0.0642 0.0545 0.0646 0.000122 0.000144 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 319 100 0.0381 0.00171 0.00382 0.039 0.0505 0.0549 0.0753 0.000123 0.000168 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 319 66846.377 0.005 0.00276 0.0273 0.0825 0.0479 0.064 0.159 0.202 0.000356 0.000451 ! Validation 319 66846.377 0.005 0.00289 0.0326 0.0904 0.049 0.0656 0.176 0.22 0.000394 0.000492 Wall time: 66846.37789700925 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 320 100 0.101 0.00267 0.0473 0.047 0.063 0.251 0.265 0.000561 0.000592 320 118 0.142 0.0028 0.0857 0.0491 0.0645 0.338 0.357 0.000754 0.000797 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 320 100 0.0582 0.00229 0.0123 0.045 0.0584 0.124 0.135 0.000276 0.000302 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 320 67055.606 0.005 0.00278 0.0381 0.0937 0.0482 0.0643 0.191 0.237 0.000427 0.000529 ! Validation 320 67055.606 0.005 0.00346 0.12 0.189 0.0539 0.0717 0.358 0.422 0.0008 0.000941 Wall time: 67055.606870241 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 321 100 0.127 0.00301 0.0671 0.05 0.0669 0.301 0.316 0.000671 0.000705 321 118 0.0899 0.00333 0.0232 0.0526 0.0704 0.142 0.186 0.000318 0.000415 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 321 100 0.0639 0.00278 0.00819 0.0487 0.0643 0.0917 0.11 0.000205 0.000246 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 321 67264.906 0.005 0.00276 0.0204 0.0756 0.0479 0.064 0.138 0.174 0.000308 0.000388 ! Validation 321 67264.906 0.005 0.00398 0.0273 0.107 0.0577 0.0769 0.157 0.202 0.000351 0.00045 Wall time: 67264.90660601994 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 322 100 0.0594 0.0025 0.00946 0.0458 0.061 0.0968 0.119 0.000216 0.000265 322 118 0.0593 0.00235 0.0123 0.0442 0.0591 0.108 0.135 0.000241 0.000302 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 322 100 0.0459 0.00207 0.00442 0.0427 0.0555 0.066 0.0811 0.000147 0.000181 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 322 67474.113 0.005 0.00291 0.0391 0.0974 0.0493 0.0659 0.199 0.242 0.000444 0.00054 ! Validation 322 67474.113 0.005 0.00325 0.0477 0.113 0.0521 0.0695 0.212 0.266 0.000474 0.000595 Wall time: 67474.11315792194 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 323 100 0.102 0.00318 0.0381 0.0514 0.0688 0.208 0.238 0.000465 0.000531 323 118 0.104 0.00271 0.0503 0.0479 0.0635 0.256 0.274 0.000572 0.000611 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 323 100 0.119 0.00218 0.0756 0.0446 0.057 0.327 0.335 0.000731 0.000749 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 323 67683.323 0.005 0.00276 0.0405 0.0956 0.0479 0.064 0.181 0.245 0.000403 0.000547 ! 
Validation 323 67683.323 0.005 0.00336 0.0489 0.116 0.0535 0.0707 0.225 0.27 0.000503 0.000602 Wall time: 67683.32345841127 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 324 100 0.0629 0.00271 0.00869 0.0475 0.0635 0.0899 0.114 0.000201 0.000254 324 118 0.0717 0.00269 0.0179 0.0476 0.0633 0.137 0.163 0.000305 0.000364 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 324 100 0.0382 0.00177 0.00276 0.0398 0.0513 0.0595 0.0641 0.000133 0.000143 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 324 67892.521 0.005 0.00275 0.0206 0.0756 0.0479 0.0639 0.142 0.175 0.000317 0.000391 ! Validation 324 67892.521 0.005 0.00292 0.0268 0.0853 0.0494 0.0659 0.154 0.2 0.000343 0.000446 Wall time: 67892.52137999423 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 325 100 0.06 0.00265 0.00709 0.0466 0.0627 0.0797 0.103 0.000178 0.000229 325 118 0.0482 0.00225 0.00316 0.0437 0.0579 0.0521 0.0685 0.000116 0.000153 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 325 100 0.0365 0.00174 0.00173 0.0393 0.0509 0.0426 0.0507 9.52e-05 0.000113 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 325 68101.738 0.005 0.00258 0.0145 0.066 0.0463 0.0619 0.117 0.147 0.000261 0.000328 ! Validation 325 68101.738 0.005 0.00288 0.0438 0.101 0.049 0.0655 0.192 0.255 0.000428 0.00057 Wall time: 68101.73838896817 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 326 100 0.129 0.00263 0.0761 0.0467 0.0625 0.321 0.336 0.000718 0.000751 326 118 0.0597 0.00237 0.0124 0.0448 0.0593 0.108 0.136 0.000242 0.000303 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 326 100 0.0414 0.00187 0.00404 0.0408 0.0527 0.0539 0.0775 0.00012 0.000173 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 326 68311.022 0.005 0.00261 0.0252 0.0774 0.0466 0.0623 0.157 0.194 0.00035 0.000433 ! Validation 326 68311.022 0.005 0.003 0.0817 0.142 0.0501 0.0668 0.278 0.348 0.000621 0.000778 Wall time: 68311.0221080971 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 327 100 0.0669 0.00249 0.0171 0.0461 0.0609 0.136 0.159 0.000304 0.000355 327 118 0.0934 0.00301 0.0332 0.0492 0.0669 0.201 0.222 0.000449 0.000496 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 327 100 0.051 0.0022 0.00705 0.0441 0.0571 0.0903 0.102 0.000202 0.000229 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 327 68520.244 0.005 0.00265 0.0248 0.0778 0.047 0.0628 0.158 0.192 0.000352 0.000428 ! Validation 327 68520.244 0.005 0.00337 0.089 0.156 0.0533 0.0708 0.278 0.364 0.00062 0.000812 Wall time: 68520.24407151435 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 328 100 0.0878 0.00248 0.0383 0.0457 0.0607 0.222 0.239 0.000496 0.000533 328 118 0.0893 0.00277 0.034 0.0473 0.0641 0.209 0.225 0.000466 0.000502 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 328 100 0.0386 0.00176 0.00346 0.0398 0.0511 0.0555 0.0717 0.000124 0.00016 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 328 68729.449 0.005 0.00271 0.0262 0.0804 0.0475 0.0635 0.156 0.197 0.000349 0.00044 ! 
Validation 328 68729.449 0.005 0.00289 0.074 0.132 0.0491 0.0656 0.27 0.332 0.000603 0.000741 Wall time: 68729.4498518291 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 329 100 0.0575 0.00258 0.00592 0.0463 0.0619 0.0776 0.0938 0.000173 0.000209 329 118 0.0488 0.0022 0.00468 0.0432 0.0572 0.0706 0.0835 0.000158 0.000186 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 329 100 0.0427 0.00173 0.00818 0.0392 0.0507 0.0944 0.11 0.000211 0.000246 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 329 68938.656 0.005 0.00284 0.0453 0.102 0.0487 0.065 0.193 0.26 0.000431 0.000581 ! Validation 329 68938.656 0.005 0.00286 0.0253 0.0825 0.0488 0.0653 0.152 0.194 0.000339 0.000433 Wall time: 68938.65692276228 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 330 100 0.0692 0.00256 0.018 0.0461 0.0617 0.139 0.164 0.00031 0.000365 330 118 0.0632 0.00276 0.00793 0.0483 0.0641 0.0943 0.109 0.000211 0.000242 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 330 100 0.0378 0.00174 0.00303 0.0394 0.0509 0.0565 0.0672 0.000126 0.00015 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 330 69147.843 0.005 0.00257 0.018 0.0695 0.0462 0.0618 0.13 0.164 0.00029 0.000366 ! Validation 330 69147.843 0.005 0.00286 0.0294 0.0866 0.0488 0.0652 0.167 0.209 0.000373 0.000467 Wall time: 69147.84394898219 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 331 100 0.0816 0.00259 0.0298 0.0465 0.0621 0.194 0.21 0.000434 0.00047 331 118 0.112 0.00329 0.0461 0.0525 0.07 0.237 0.262 0.00053 0.000584 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 331 100 0.0428 0.00203 0.00215 0.0423 0.055 0.0491 0.0566 0.00011 0.000126 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 331 69357.123 0.005 0.00262 0.0267 0.0791 0.0467 0.0624 0.158 0.199 0.000352 0.000444 ! Validation 331 69357.123 0.005 0.00318 0.0334 0.097 0.0517 0.0688 0.179 0.223 0.0004 0.000497 Wall time: 69357.1235571932 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 332 100 0.059 0.00249 0.00925 0.0455 0.0608 0.0952 0.117 0.000212 0.000262 332 118 0.056 0.00252 0.00564 0.0458 0.0612 0.0827 0.0916 0.000185 0.000204 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 332 100 0.0344 0.00163 0.00187 0.0379 0.0492 0.0448 0.0527 9.99e-05 0.000118 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 332 69566.302 0.005 0.00257 0.0157 0.0672 0.0463 0.0619 0.124 0.153 0.000276 0.000342 ! Validation 332 69566.302 0.005 0.00274 0.027 0.0819 0.0477 0.0639 0.154 0.2 0.000343 0.000447 Wall time: 69566.30225337623 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 333 100 0.0647 0.0026 0.0128 0.0466 0.0621 0.114 0.138 0.000255 0.000308 333 118 0.0633 0.00232 0.017 0.0442 0.0587 0.143 0.159 0.000319 0.000354 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 333 100 0.0391 0.0017 0.00508 0.0389 0.0503 0.0625 0.0869 0.00014 0.000194 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 333 69775.494 0.005 0.00256 0.0244 0.0756 0.0462 0.0617 0.156 0.191 0.000348 0.000426 ! 
Validation 333 69775.494 0.005 0.0028 0.0519 0.108 0.0484 0.0646 0.223 0.278 0.000498 0.00062 Wall time: 69775.49452239322 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 334 100 0.0763 0.0023 0.0304 0.0439 0.0584 0.197 0.213 0.000439 0.000474 334 118 0.0569 0.00245 0.00796 0.0453 0.0603 0.0909 0.109 0.000203 0.000243 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 334 100 0.0639 0.00172 0.0294 0.0394 0.0506 0.204 0.209 0.000455 0.000467 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 334 69984.697 0.005 0.00275 0.0341 0.089 0.0477 0.0639 0.166 0.226 0.00037 0.000504 ! Validation 334 69984.697 0.005 0.00283 0.0326 0.0893 0.0488 0.0649 0.178 0.22 0.000398 0.000491 Wall time: 69984.69772143336 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 335 100 0.0689 0.00256 0.0177 0.046 0.0617 0.145 0.162 0.000324 0.000362 335 118 0.0696 0.00253 0.0191 0.046 0.0613 0.149 0.168 0.000333 0.000376 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 335 100 0.0946 0.00162 0.0622 0.0379 0.0491 0.299 0.304 0.000668 0.000679 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 335 70193.972 0.005 0.00264 0.0308 0.0836 0.047 0.0626 0.173 0.214 0.000386 0.000479 ! Validation 335 70193.972 0.005 0.00273 0.122 0.176 0.0476 0.0637 0.382 0.426 0.000853 0.00095 Wall time: 70193.97238899022 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 336 100 0.0619 0.0024 0.0138 0.0451 0.0598 0.119 0.143 0.000266 0.00032 336 118 0.0684 0.00269 0.0146 0.0474 0.0632 0.136 0.147 0.000305 0.000328 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 336 100 0.0857 0.00203 0.0452 0.0422 0.0549 0.256 0.259 0.000571 0.000579 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 336 70403.163 0.005 0.00263 0.0307 0.0833 0.0468 0.0625 0.17 0.214 0.00038 0.000478 ! Validation 336 70403.163 0.005 0.00314 0.0364 0.0992 0.0512 0.0683 0.186 0.233 0.000416 0.000519 Wall time: 70403.16387213022 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 337 100 0.0697 0.0026 0.0177 0.0463 0.0622 0.135 0.162 0.000302 0.000362 337 118 0.0518 0.00231 0.00547 0.0444 0.0587 0.0781 0.0902 0.000174 0.000201 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 337 100 0.0372 0.00169 0.00348 0.0386 0.0501 0.0527 0.0719 0.000118 0.000161 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 337 70612.360 0.005 0.00254 0.017 0.0678 0.046 0.0615 0.129 0.159 0.000289 0.000355 ! Validation 337 70612.360 0.005 0.00279 0.056 0.112 0.0482 0.0644 0.236 0.288 0.000526 0.000644 Wall time: 70612.36009793729 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 338 100 0.0721 0.00269 0.0184 0.0473 0.0632 0.136 0.165 0.000303 0.000369 338 118 0.0737 0.00266 0.0204 0.0466 0.0629 0.156 0.174 0.000348 0.000389 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 338 100 0.0402 0.00186 0.00297 0.0408 0.0526 0.0584 0.0664 0.00013 0.000148 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 338 70821.570 0.005 0.00254 0.0291 0.0799 0.046 0.0615 0.164 0.208 0.000366 0.000465 ! 
Validation 338 70821.570 0.005 0.00295 0.0229 0.0819 0.0498 0.0662 0.144 0.185 0.000323 0.000412 Wall time: 70821.57093266211 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 339 100 0.0498 0.00226 0.0046 0.0436 0.058 0.0664 0.0827 0.000148 0.000185 339 118 0.07 0.00252 0.0196 0.0456 0.0612 0.152 0.171 0.00034 0.000381 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 339 100 0.0456 0.0017 0.0117 0.0387 0.0502 0.118 0.132 0.000264 0.000295 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 339 71030.789 0.005 0.00245 0.0134 0.0624 0.0451 0.0603 0.114 0.141 0.000254 0.000315 ! Validation 339 71030.789 0.005 0.00274 0.052 0.107 0.0478 0.0639 0.228 0.278 0.000508 0.000621 Wall time: 71030.78999796417 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 340 100 0.0807 0.00238 0.033 0.0445 0.0595 0.204 0.222 0.000456 0.000495 340 118 0.104 0.00289 0.0462 0.0485 0.0656 0.229 0.262 0.000511 0.000585 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 340 100 0.055 0.00202 0.0147 0.0421 0.0548 0.136 0.148 0.000304 0.00033 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 340 71240.081 0.005 0.00278 0.0363 0.092 0.048 0.0643 0.181 0.232 0.000403 0.000518 ! Validation 340 71240.081 0.005 0.00311 0.058 0.12 0.0511 0.068 0.22 0.294 0.000491 0.000656 Wall time: 71240.08124142513 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 341 100 0.0517 0.00225 0.0066 0.0435 0.0579 0.0804 0.099 0.00018 0.000221 341 118 0.0522 0.00209 0.0103 0.042 0.0558 0.0951 0.124 0.000212 0.000277 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 341 100 0.0461 0.00168 0.0125 0.0387 0.05 0.128 0.136 0.000286 0.000304 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 341 71449.268 0.005 0.00244 0.0148 0.0637 0.0451 0.0603 0.118 0.148 0.000264 0.000331 ! Validation 341 71449.268 0.005 0.00278 0.113 0.168 0.0481 0.0643 0.348 0.409 0.000776 0.000914 Wall time: 71449.2688398771 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 342 100 0.153 0.00292 0.0949 0.0499 0.0659 0.343 0.376 0.000766 0.000838 342 118 0.067 0.00256 0.0158 0.0468 0.0617 0.109 0.153 0.000244 0.000342 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 342 100 0.0709 0.00177 0.0356 0.0396 0.0513 0.225 0.23 0.000503 0.000514 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 342 71658.480 0.005 0.0027 0.0371 0.0911 0.0475 0.0634 0.188 0.235 0.000419 0.000525 ! Validation 342 71658.480 0.005 0.00287 0.0682 0.126 0.049 0.0654 0.276 0.318 0.000617 0.000711 Wall time: 71658.48055219138 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 343 100 0.136 0.0026 0.0837 0.0466 0.0622 0.343 0.353 0.000765 0.000788 343 118 0.0895 0.00232 0.0431 0.0439 0.0587 0.242 0.253 0.000541 0.000565 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 343 100 0.0987 0.0016 0.0667 0.0376 0.0488 0.311 0.315 0.000695 0.000703 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 343 71867.696 0.005 0.00255 0.0246 0.0755 0.0461 0.0616 0.157 0.191 0.000351 0.000425 ! 
Validation 343 71867.696 0.005 0.00271 0.123 0.178 0.0474 0.0634 0.392 0.429 0.000875 0.000957 Wall time: 71867.69656547019 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 344 100 0.0629 0.00227 0.0175 0.0437 0.0581 0.145 0.161 0.000324 0.00036 344 118 0.0551 0.00187 0.0176 0.0403 0.0528 0.13 0.162 0.00029 0.000361 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 344 100 0.0353 0.0016 0.00328 0.0376 0.0488 0.053 0.0698 0.000118 0.000156 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 344 72076.908 0.005 0.00242 0.0154 0.0638 0.0449 0.06 0.123 0.151 0.000275 0.000338 ! Validation 344 72076.908 0.005 0.00265 0.0375 0.0904 0.047 0.0627 0.19 0.236 0.000424 0.000527 Wall time: 72076.90864896029 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 345 100 0.0562 0.0024 0.0082 0.0446 0.0597 0.0942 0.11 0.00021 0.000247 345 118 0.069 0.0028 0.013 0.0472 0.0645 0.123 0.139 0.000275 0.000311 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 345 100 0.0458 0.00171 0.0115 0.0391 0.0505 0.118 0.131 0.000263 0.000292 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 345 72286.183 0.005 0.00259 0.0276 0.0795 0.0464 0.0621 0.15 0.203 0.000335 0.000453 ! Validation 345 72286.183 0.005 0.00277 0.0848 0.14 0.0481 0.0642 0.304 0.355 0.000678 0.000793 Wall time: 72286.18370069796 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 346 100 0.0532 0.00231 0.00693 0.0441 0.0587 0.0845 0.102 0.000189 0.000227 346 118 0.194 0.00266 0.141 0.0473 0.0629 0.442 0.458 0.000986 0.00102 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 346 100 0.172 0.00238 0.125 0.0463 0.0595 0.429 0.431 0.000958 0.000961 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 346 72495.398 0.005 0.00238 0.0173 0.0649 0.0444 0.0594 0.124 0.157 0.000277 0.000349 ! Validation 346 72495.398 0.005 0.00349 0.078 0.148 0.0547 0.0721 0.284 0.341 0.000635 0.00076 Wall time: 72495.3982642563 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 347 100 0.255 0.0113 0.0291 0.0922 0.13 0.165 0.208 0.000369 0.000464 347 118 0.209 0.00895 0.0298 0.0832 0.115 0.184 0.211 0.000411 0.00047 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 347 100 0.163 0.00654 0.0318 0.0715 0.0986 0.204 0.218 0.000455 0.000486 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 347 72705.261 0.005 0.0295 1.08 1.67 0.136 0.21 0.768 1.27 0.00172 0.00284 ! Validation 347 72705.261 0.005 0.00902 0.0756 0.256 0.084 0.116 0.269 0.335 0.000601 0.000748 Wall time: 72705.26103047421 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 348 100 0.0946 0.00363 0.0221 0.0557 0.0734 0.149 0.181 0.000333 0.000404 348 118 0.0874 0.00379 0.0115 0.0562 0.0751 0.123 0.131 0.000274 0.000292 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 348 100 0.121 0.00239 0.0731 0.0461 0.0597 0.327 0.33 0.000729 0.000736 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 348 72914.458 0.005 0.0052 0.0303 0.134 0.0649 0.088 0.169 0.213 0.000376 0.000474 ! 
Validation 348 72914.458 0.005 0.00377 0.0465 0.122 0.0567 0.0749 0.217 0.263 0.000485 0.000587 Wall time: 72914.4586243513 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 349 100 0.0718 0.003 0.0118 0.0502 0.0668 0.102 0.132 0.000227 0.000295 349 118 0.0593 0.00268 0.00573 0.0477 0.0631 0.0751 0.0923 0.000168 0.000206 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 349 100 0.0437 0.00191 0.00537 0.0412 0.0534 0.0694 0.0894 0.000155 0.000199 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 349 73123.648 0.005 0.00309 0.0146 0.0764 0.051 0.0678 0.118 0.148 0.000265 0.00033 ! Validation 349 73123.648 0.005 0.00317 0.0449 0.108 0.0515 0.0686 0.208 0.258 0.000463 0.000576 Wall time: 73123.64804023504 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 350 100 0.0637 0.00279 0.00789 0.0481 0.0644 0.0882 0.108 0.000197 0.000242 350 118 0.0545 0.00233 0.00776 0.0441 0.0589 0.0839 0.107 0.000187 0.00024 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 350 100 0.04 0.0018 0.00403 0.0399 0.0517 0.0711 0.0774 0.000159 0.000173 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 350 73332.906 0.005 0.00278 0.00869 0.0643 0.0481 0.0643 0.0905 0.114 0.000202 0.000254 ! Validation 350 73332.906 0.005 0.00299 0.0278 0.0875 0.0499 0.0667 0.162 0.203 0.00036 0.000454 Wall time: 73332.90641207201 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 351 100 0.0548 0.00247 0.00545 0.0455 0.0606 0.0711 0.0901 0.000159 0.000201 351 118 0.0561 0.00235 0.00899 0.0444 0.0592 0.099 0.116 0.000221 0.000258 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 351 100 0.0412 0.00174 0.00633 0.0393 0.0509 0.0872 0.0971 0.000195 0.000217 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 351 73542.141 0.005 0.00265 0.00844 0.0614 0.0469 0.0628 0.0892 0.112 0.000199 0.00025 ! Validation 351 73542.141 0.005 0.00291 0.0269 0.085 0.0492 0.0657 0.158 0.2 0.000354 0.000446 Wall time: 73542.14176378213 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 352 100 0.057 0.00256 0.00575 0.0462 0.0618 0.0682 0.0925 0.000152 0.000206 352 118 0.0678 0.00222 0.0234 0.043 0.0574 0.175 0.187 0.000391 0.000417 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 352 100 0.039 0.00168 0.00537 0.0385 0.05 0.0647 0.0894 0.000144 0.0002 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 352 73751.385 0.005 0.00257 0.0088 0.0602 0.0462 0.0619 0.0906 0.114 0.000202 0.000254 ! Validation 352 73751.385 0.005 0.00281 0.0451 0.101 0.0483 0.0646 0.207 0.259 0.000461 0.000578 Wall time: 73751.38553724298 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 353 100 0.0596 0.00248 0.00994 0.0453 0.0607 0.103 0.122 0.000229 0.000271 353 118 0.0522 0.00244 0.0034 0.0448 0.0603 0.0565 0.0711 0.000126 0.000159 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 353 100 0.0368 0.00167 0.00347 0.0384 0.0498 0.0538 0.0719 0.00012 0.00016 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 353 73960.622 0.005 0.00251 0.00956 0.0598 0.0456 0.0611 0.0944 0.119 0.000211 0.000267 ! 
Validation 353 73960.622 0.005 0.00276 0.0319 0.0872 0.0479 0.0641 0.169 0.218 0.000378 0.000486 Wall time: 73960.62287542317 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 354 100 0.0588 0.00244 0.01 0.0449 0.0602 0.104 0.122 0.000233 0.000273 354 118 0.0502 0.00231 0.00404 0.0446 0.0586 0.0595 0.0775 0.000133 0.000173 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 354 100 0.0352 0.00162 0.00291 0.0378 0.049 0.0587 0.0658 0.000131 0.000147 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 354 74169.959 0.005 0.00247 0.0102 0.0595 0.0452 0.0606 0.0987 0.123 0.00022 0.000275 ! Validation 354 74169.959 0.005 0.00271 0.0235 0.0777 0.0474 0.0635 0.148 0.187 0.00033 0.000418 Wall time: 74169.95919443434 ! Best model 354 0.078 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 355 100 0.0794 0.00252 0.029 0.0454 0.0612 0.191 0.208 0.000426 0.000464 355 118 0.0695 0.00218 0.0259 0.0429 0.0569 0.18 0.196 0.000401 0.000438 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 355 100 0.0341 0.00159 0.00235 0.0375 0.0486 0.052 0.0591 0.000116 0.000132 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 355 74379.214 0.005 0.00243 0.00919 0.0577 0.0448 0.0601 0.0914 0.116 0.000204 0.000259 ! Validation 355 74379.214 0.005 0.00268 0.0243 0.078 0.0472 0.0632 0.149 0.19 0.000332 0.000424 Wall time: 74379.21496412996 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 356 100 0.053 0.00238 0.00535 0.0445 0.0596 0.0745 0.0892 0.000166 0.000199 356 118 0.0512 0.00236 0.00402 0.0446 0.0592 0.0595 0.0773 0.000133 0.000173 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 356 100 0.0536 0.00158 0.0221 0.0374 0.0484 0.172 0.181 0.000385 0.000404 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 356 74588.454 0.005 0.00241 0.0109 0.0591 0.0447 0.0599 0.102 0.128 0.000228 0.000285 ! Validation 356 74588.454 0.005 0.00265 0.0635 0.116 0.0469 0.0628 0.26 0.307 0.000581 0.000686 Wall time: 74588.45424556034 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 357 100 0.0557 0.00234 0.00897 0.0441 0.0589 0.101 0.116 0.000226 0.000258 357 118 0.0479 0.00212 0.00549 0.0423 0.0562 0.0694 0.0904 0.000155 0.000202 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 357 100 0.034 0.00157 0.00261 0.0373 0.0483 0.0543 0.0623 0.000121 0.000139 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 357 74814.817 0.005 0.00237 0.00996 0.0574 0.0443 0.0594 0.0972 0.122 0.000217 0.000272 ! Validation 357 74814.817 0.005 0.00263 0.0269 0.0795 0.0467 0.0625 0.154 0.2 0.000343 0.000446 Wall time: 74814.81794475997 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 358 100 0.0506 0.00233 0.004 0.0442 0.0589 0.0635 0.0771 0.000142 0.000172 358 118 0.0823 0.00187 0.0448 0.0401 0.0528 0.246 0.258 0.000548 0.000576 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 358 100 0.0411 0.00155 0.0101 0.0371 0.048 0.112 0.123 0.00025 0.000274 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 358 75024.024 0.005 0.00235 0.0134 0.0605 0.0442 0.0592 0.112 0.14 0.000249 0.000313 ! 
Validation 358 75024.024 0.005 0.00261 0.024 0.0762 0.0466 0.0623 0.151 0.189 0.000336 0.000422 Wall time: 75024.02465360006 ! Best model 358 0.076 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 359 100 0.0533 0.00238 0.00574 0.0444 0.0595 0.0722 0.0924 0.000161 0.000206 359 118 0.0496 0.00222 0.00514 0.0434 0.0575 0.0796 0.0875 0.000178 0.000195 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 359 100 0.0335 0.00157 0.00206 0.0374 0.0483 0.0506 0.0554 0.000113 0.000124 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 359 75233.324 0.005 0.00234 0.0105 0.0574 0.0441 0.0591 0.0988 0.125 0.000221 0.00028 ! Validation 359 75233.324 0.005 0.00261 0.0329 0.0851 0.0466 0.0623 0.17 0.221 0.000379 0.000494 Wall time: 75233.32457238715 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 360 100 0.0559 0.00245 0.00681 0.0452 0.0604 0.0814 0.101 0.000182 0.000225 360 118 0.0588 0.0025 0.00872 0.0456 0.061 0.0934 0.114 0.000208 0.000254 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 360 100 0.0496 0.00155 0.0187 0.0371 0.0479 0.158 0.167 0.000353 0.000372 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 360 75442.534 0.005 0.00232 0.0118 0.0581 0.0438 0.0587 0.105 0.132 0.000235 0.000295 ! Validation 360 75442.534 0.005 0.00259 0.0244 0.0762 0.0464 0.0621 0.154 0.19 0.000344 0.000425 Wall time: 75442.53437889321 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 361 100 0.101 0.00244 0.0522 0.0445 0.0603 0.269 0.279 0.0006 0.000622 361 118 0.0878 0.00234 0.0409 0.0447 0.059 0.223 0.247 0.000499 0.000551 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 361 100 0.122 0.00164 0.0895 0.0382 0.0494 0.36 0.365 0.000805 0.000814 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 361 75651.742 0.005 0.00233 0.0196 0.0662 0.044 0.0589 0.136 0.17 0.000304 0.000379 ! Validation 361 75651.742 0.005 0.00266 0.0603 0.113 0.0471 0.0628 0.257 0.299 0.000573 0.000668 Wall time: 75651.74202884315 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 362 100 0.0583 0.00238 0.0106 0.0443 0.0595 0.106 0.126 0.000238 0.00028 362 118 0.0744 0.00234 0.0275 0.0441 0.059 0.189 0.202 0.000422 0.000452 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 362 100 0.036 0.00153 0.00536 0.0368 0.0477 0.0722 0.0893 0.000161 0.000199 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 362 75861.026 0.005 0.00233 0.0165 0.063 0.0439 0.0588 0.122 0.156 0.000273 0.000348 ! Validation 362 75861.026 0.005 0.00256 0.0479 0.0991 0.0461 0.0617 0.213 0.267 0.000476 0.000596 Wall time: 75861.02676639333 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 363 100 0.0491 0.00233 0.0026 0.044 0.0588 0.05 0.0622 0.000112 0.000139 363 118 0.0585 0.00276 0.00332 0.0458 0.064 0.0595 0.0703 0.000133 0.000157 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 363 100 0.0554 0.0015 0.0254 0.0365 0.0473 0.187 0.194 0.000418 0.000434 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 363 76070.228 0.005 0.0023 0.013 0.0591 0.0437 0.0585 0.111 0.139 0.000247 0.000311 ! 
Validation 363 76070.228 0.005 0.00253 0.0288 0.0794 0.0458 0.0614 0.172 0.207 0.000384 0.000462 Wall time: 76070.22900570417 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 364 100 0.065 0.00235 0.018 0.0441 0.0591 0.137 0.164 0.000307 0.000366 364 118 0.0941 0.00217 0.0507 0.0427 0.0568 0.263 0.275 0.000588 0.000613 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 364 100 0.0545 0.00162 0.022 0.0377 0.0491 0.177 0.181 0.000394 0.000404 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 364 76279.505 0.005 0.00229 0.0185 0.0643 0.0436 0.0584 0.132 0.165 0.000296 0.000368 ! Validation 364 76279.505 0.005 0.00266 0.0623 0.115 0.047 0.0628 0.251 0.304 0.000561 0.000679 Wall time: 76279.50521329604 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 365 100 0.0501 0.00231 0.00387 0.0435 0.0586 0.0597 0.0759 0.000133 0.000169 365 118 0.0525 0.0022 0.00859 0.0431 0.0572 0.085 0.113 0.00019 0.000252 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 365 100 0.035 0.00151 0.00487 0.0365 0.0473 0.0739 0.0851 0.000165 0.00019 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 365 76488.695 0.005 0.00231 0.0196 0.0659 0.0438 0.0587 0.135 0.171 0.000301 0.000382 ! Validation 365 76488.695 0.005 0.00252 0.0206 0.0711 0.0457 0.0612 0.138 0.175 0.000309 0.000391 Wall time: 76488.69505782938 ! Best model 365 0.071 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 366 100 0.0475 0.00217 0.00409 0.0423 0.0568 0.0635 0.0779 0.000142 0.000174 366 118 0.0577 0.00271 0.00358 0.0465 0.0634 0.0586 0.073 0.000131 0.000163 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 366 100 0.0313 0.00149 0.00154 0.0363 0.047 0.0425 0.0479 9.48e-05 0.000107 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 366 76697.908 0.005 0.00229 0.015 0.0608 0.0436 0.0583 0.12 0.15 0.000267 0.000334 ! Validation 366 76697.908 0.005 0.00251 0.0417 0.0918 0.0456 0.0611 0.191 0.249 0.000426 0.000556 Wall time: 76697.90875172708 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 367 100 0.0529 0.00233 0.00625 0.044 0.0589 0.0773 0.0964 0.000173 0.000215 367 118 0.057 0.00216 0.0138 0.0423 0.0567 0.124 0.143 0.000278 0.000319 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 367 100 0.0761 0.0015 0.0461 0.0365 0.0472 0.257 0.262 0.000573 0.000585 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 367 76907.119 0.005 0.00227 0.0158 0.0612 0.0434 0.0581 0.117 0.153 0.00026 0.000343 ! Validation 367 76907.119 0.005 0.00251 0.105 0.155 0.0457 0.0611 0.358 0.395 0.000799 0.000881 Wall time: 76907.11908992892 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 368 100 0.0499 0.00223 0.00518 0.0431 0.0576 0.072 0.0878 0.000161 0.000196 368 118 0.062 0.00291 0.00379 0.0487 0.0658 0.0618 0.075 0.000138 0.000168 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 368 100 0.0392 0.00147 0.00967 0.0361 0.0468 0.113 0.12 0.000252 0.000268 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 368 77116.326 0.005 0.00228 0.0159 0.0614 0.0434 0.0581 0.122 0.154 0.000272 0.000344 ! 
Validation 368 77116.326 0.005 0.00248 0.0217 0.0713 0.0454 0.0607 0.144 0.18 0.000321 0.000401 Wall time: 77116.32658188837 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 369 100 0.171 0.00227 0.126 0.0438 0.0581 0.426 0.433 0.000952 0.000966 369 118 0.13 0.00253 0.0799 0.0464 0.0613 0.332 0.345 0.000742 0.000769 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 369 100 0.043 0.0019 0.005 0.0412 0.0531 0.0787 0.0862 0.000176 0.000192 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 369 77325.601 0.005 0.00231 0.0396 0.0859 0.0438 0.0586 0.188 0.242 0.000419 0.00054 ! Validation 369 77325.601 0.005 0.00296 0.0508 0.11 0.0502 0.0664 0.208 0.275 0.000465 0.000613 Wall time: 77325.60173568735 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 370 100 0.0474 0.00209 0.00549 0.0418 0.0558 0.0693 0.0904 0.000155 0.000202 370 118 0.0646 0.00264 0.0117 0.047 0.0627 0.111 0.132 0.000247 0.000295 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 370 100 0.0421 0.00159 0.0103 0.0377 0.0487 0.113 0.124 0.000253 0.000276 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 370 77534.782 0.005 0.00233 0.018 0.0646 0.044 0.0589 0.122 0.164 0.000272 0.000365 ! Validation 370 77534.782 0.005 0.00258 0.0751 0.127 0.0464 0.062 0.268 0.334 0.000598 0.000746 Wall time: 77534.78270546813 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 371 100 0.05 0.00219 0.00611 0.043 0.0571 0.0795 0.0954 0.000177 0.000213 371 118 0.0851 0.00274 0.0304 0.0467 0.0638 0.198 0.213 0.000443 0.000475 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 371 100 0.119 0.00157 0.0877 0.0374 0.0484 0.358 0.361 0.000798 0.000806 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 371 77743.977 0.005 0.00229 0.0251 0.0708 0.0436 0.0583 0.16 0.193 0.000358 0.000431 ! Validation 371 77743.977 0.005 0.00257 0.223 0.275 0.0463 0.0619 0.54 0.577 0.00121 0.00129 Wall time: 77743.97708098218 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 372 100 0.0594 0.00231 0.0132 0.0437 0.0586 0.119 0.14 0.000266 0.000313 372 118 0.0538 0.0021 0.0117 0.0417 0.0559 0.113 0.132 0.000252 0.000295 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 372 100 0.0665 0.00155 0.0355 0.0369 0.048 0.226 0.23 0.000503 0.000513 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 372 77953.186 0.005 0.0023 0.0191 0.065 0.0437 0.0585 0.133 0.169 0.000296 0.000377 ! Validation 372 77953.186 0.005 0.00254 0.0329 0.0837 0.046 0.0615 0.186 0.221 0.000415 0.000494 Wall time: 77953.1864174162 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 373 100 0.0801 0.00248 0.0305 0.0453 0.0607 0.2 0.213 0.000446 0.000476 373 118 0.0636 0.0025 0.0136 0.0452 0.061 0.101 0.142 0.000225 0.000318 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 373 100 0.0688 0.00161 0.0365 0.0376 0.049 0.227 0.233 0.000507 0.00052 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 373 78162.466 0.005 0.00222 0.0143 0.0588 0.0429 0.0575 0.115 0.146 0.000257 0.000326 ! 
Validation 373 78162.466 0.005 0.0026 0.0473 0.0993 0.0464 0.0622 0.222 0.265 0.000495 0.000592 Wall time: 78162.46658926504 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 374 100 0.0564 0.00235 0.00947 0.0444 0.0591 0.0951 0.119 0.000212 0.000265 374 118 0.0641 0.00234 0.0173 0.044 0.059 0.146 0.16 0.000325 0.000358 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 374 100 0.0489 0.00148 0.0193 0.0361 0.0469 0.164 0.17 0.000366 0.000378 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 374 78371.659 0.005 0.00227 0.0227 0.0681 0.0435 0.0581 0.143 0.184 0.000319 0.00041 ! Validation 374 78371.659 0.005 0.00247 0.0782 0.128 0.0453 0.0606 0.284 0.341 0.000635 0.000761 Wall time: 78371.65967688523 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 375 100 0.0743 0.00214 0.0314 0.0423 0.0565 0.202 0.216 0.000451 0.000482 375 118 0.0678 0.00292 0.00949 0.0481 0.0659 0.0999 0.119 0.000223 0.000265 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 375 100 0.0476 0.00149 0.0178 0.0363 0.047 0.156 0.163 0.000348 0.000363 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 375 78580.852 0.005 0.00226 0.0186 0.0638 0.0433 0.0579 0.136 0.167 0.000304 0.000372 ! Validation 375 78580.852 0.005 0.00247 0.0251 0.0746 0.0453 0.0607 0.159 0.193 0.000355 0.000432 Wall time: 78580.85224734526 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 376 100 0.0794 0.00224 0.0347 0.0429 0.0577 0.202 0.227 0.00045 0.000507 376 118 0.0754 0.00207 0.0341 0.042 0.0554 0.221 0.225 0.000492 0.000502 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 376 100 0.0334 0.0016 0.00133 0.0376 0.0488 0.0391 0.0445 8.74e-05 9.94e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 376 78790.062 0.005 0.00229 0.0273 0.073 0.0436 0.0583 0.163 0.201 0.000363 0.000449 ! Validation 376 78790.062 0.005 0.0026 0.0233 0.0754 0.0466 0.0622 0.152 0.186 0.000338 0.000416 Wall time: 78790.06267946213 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 377 100 0.0507 0.00204 0.0099 0.0412 0.0551 0.0998 0.121 0.000223 0.000271 377 118 0.0531 0.002 0.0131 0.041 0.0545 0.132 0.14 0.000295 0.000311 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 377 100 0.0486 0.00149 0.0188 0.0364 0.0471 0.158 0.167 0.000354 0.000373 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 377 78999.264 0.005 0.00229 0.0205 0.0662 0.0436 0.0583 0.141 0.175 0.000315 0.00039 ! Validation 377 78999.264 0.005 0.00247 0.0691 0.119 0.0453 0.0606 0.276 0.321 0.000616 0.000716 Wall time: 78999.26499468926 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 378 100 0.0452 0.0021 0.00318 0.042 0.0559 0.055 0.0688 0.000123 0.000154 378 118 0.0457 0.00194 0.00692 0.0407 0.0537 0.0892 0.101 0.000199 0.000226 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 378 100 0.0347 0.00148 0.00511 0.0362 0.0469 0.0732 0.0872 0.000163 0.000195 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 378 79208.550 0.005 0.00231 0.028 0.0741 0.0439 0.0586 0.163 0.205 0.000365 0.000456 ! 
Validation 378 79208.550 0.005 0.00245 0.0449 0.0939 0.0451 0.0604 0.21 0.258 0.000469 0.000577 Wall time: 79208.55034358101 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 379 100 0.0721 0.00206 0.0309 0.0415 0.0554 0.191 0.214 0.000427 0.000478 379 118 0.0692 0.00209 0.0273 0.0421 0.0558 0.186 0.202 0.000415 0.00045 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 379 100 0.0315 0.00153 0.000966 0.0368 0.0477 0.0332 0.0379 7.42e-05 8.46e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 379 79417.766 0.005 0.00222 0.0208 0.0652 0.043 0.0575 0.142 0.176 0.000317 0.000392 ! Validation 379 79417.766 0.005 0.0025 0.0386 0.0887 0.0459 0.061 0.183 0.24 0.000407 0.000535 Wall time: 79417.76690983633 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 380 100 0.0527 0.00224 0.00788 0.0433 0.0577 0.0867 0.108 0.000194 0.000242 380 118 0.0511 0.0022 0.00714 0.0425 0.0571 0.0896 0.103 0.0002 0.00023 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 380 100 0.058 0.00148 0.0283 0.036 0.047 0.202 0.205 0.00045 0.000458 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 380 79626.975 0.005 0.00228 0.0248 0.0703 0.0436 0.0582 0.16 0.192 0.000357 0.00043 ! Validation 380 79626.975 0.005 0.00248 0.0741 0.124 0.0454 0.0607 0.283 0.332 0.000632 0.000741 Wall time: 79626.97579074418 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 381 100 0.0724 0.00201 0.0323 0.0411 0.0546 0.198 0.219 0.000443 0.000489 381 118 0.0714 0.00255 0.0205 0.0455 0.0615 0.151 0.175 0.000337 0.00039 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 381 100 0.0407 0.0015 0.0106 0.0365 0.0473 0.117 0.126 0.000262 0.00028 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 381 79836.177 0.005 0.00219 0.0105 0.0542 0.0426 0.057 0.0986 0.125 0.00022 0.000278 ! Validation 381 79836.177 0.005 0.00249 0.11 0.16 0.0455 0.0608 0.335 0.405 0.000747 0.000904 Wall time: 79836.17777181696 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 382 100 0.0557 0.00248 0.00611 0.046 0.0607 0.0733 0.0953 0.000164 0.000213 382 118 0.0663 0.00267 0.0129 0.0471 0.0631 0.118 0.138 0.000263 0.000309 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 382 100 0.125 0.00153 0.094 0.0367 0.0477 0.372 0.374 0.000829 0.000834 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 382 80045.388 0.005 0.00227 0.0254 0.0708 0.0434 0.058 0.153 0.195 0.000342 0.000435 ! Validation 382 80045.388 0.005 0.0025 0.0711 0.121 0.0456 0.061 0.284 0.325 0.000633 0.000726 Wall time: 80045.38901742408 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 383 100 0.0518 0.00221 0.00765 0.0429 0.0573 0.0885 0.107 0.000197 0.000238 383 118 0.0508 0.00229 0.00497 0.0434 0.0584 0.0673 0.086 0.00015 0.000192 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 383 100 0.03 0.00144 0.0012 0.0356 0.0462 0.0368 0.0423 8.22e-05 9.43e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 383 80254.660 0.005 0.00223 0.0232 0.0677 0.043 0.0575 0.14 0.186 0.000314 0.000416 ! 
Validation 383 80254.660 0.005 0.00241 0.0349 0.0831 0.0447 0.0598 0.178 0.228 0.000397 0.000509 Wall time: 80254.66062747827 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 384 100 0.049 0.0022 0.00493 0.0426 0.0572 0.0694 0.0856 0.000155 0.000191 384 118 0.0423 0.00194 0.00346 0.0405 0.0538 0.0597 0.0717 0.000133 0.00016 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 384 100 0.0317 0.00151 0.00137 0.0366 0.0475 0.0389 0.0452 8.68e-05 0.000101 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 384 80463.876 0.005 0.00251 0.0322 0.0824 0.0455 0.0611 0.163 0.22 0.000364 0.00049 ! Validation 384 80463.876 0.005 0.00249 0.0456 0.0954 0.0455 0.0608 0.2 0.261 0.000446 0.000582 Wall time: 80463.87611945998 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 385 100 0.0628 0.00235 0.0159 0.0446 0.0591 0.122 0.154 0.000272 0.000343 385 118 0.0493 0.00217 0.00589 0.0427 0.0568 0.0701 0.0936 0.000156 0.000209 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 385 100 0.0486 0.00167 0.0152 0.0383 0.0499 0.144 0.15 0.000321 0.000336 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 385 80673.089 0.005 0.00222 0.0222 0.0666 0.043 0.0575 0.141 0.182 0.000314 0.000407 ! Validation 385 80673.089 0.005 0.00263 0.0532 0.106 0.0468 0.0626 0.213 0.281 0.000476 0.000628 Wall time: 80673.08930347906 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 386 100 0.0504 0.00224 0.00568 0.0434 0.0577 0.0743 0.0919 0.000166 0.000205 386 118 0.0515 0.00192 0.013 0.0407 0.0535 0.126 0.139 0.000282 0.000311 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 386 100 0.0358 0.00157 0.00439 0.0372 0.0483 0.0694 0.0808 0.000155 0.00018 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 386 80882.283 0.005 0.00221 0.0165 0.0607 0.0429 0.0574 0.125 0.157 0.00028 0.000349 ! Validation 386 80882.283 0.005 0.00252 0.0436 0.094 0.0459 0.0612 0.191 0.255 0.000427 0.000568 Wall time: 80882.28344420623 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 387 100 0.0528 0.00223 0.00824 0.0428 0.0576 0.0941 0.111 0.00021 0.000247 387 118 0.188 0.0024 0.14 0.0444 0.0597 0.443 0.457 0.000989 0.00102 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 387 100 0.143 0.00174 0.109 0.0389 0.0509 0.399 0.402 0.000891 0.000897 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 387 81091.483 0.005 0.0022 0.0182 0.0623 0.0428 0.0572 0.128 0.161 0.000285 0.000359 ! Validation 387 81091.483 0.005 0.00273 0.0876 0.142 0.0478 0.0637 0.32 0.361 0.000715 0.000806 Wall time: 81091.48332237313 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 388 100 0.0582 0.00202 0.0178 0.0413 0.0548 0.144 0.163 0.000321 0.000363 388 118 0.0454 0.00195 0.00642 0.0405 0.0538 0.0763 0.0977 0.00017 0.000218 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 388 100 0.0299 0.00143 0.00125 0.0356 0.0461 0.0369 0.0431 8.25e-05 9.62e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 388 81300.773 0.005 0.00227 0.0204 0.0658 0.0435 0.0581 0.137 0.175 0.000305 0.00039 ! 
Validation 388 81300.773 0.005 0.0024 0.029 0.0769 0.0447 0.0597 0.17 0.208 0.00038 0.000463 Wall time: 81300.77392035816 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 389 100 0.0543 0.00208 0.0127 0.0418 0.0556 0.118 0.138 0.000264 0.000307 389 118 0.0522 0.00213 0.0095 0.0421 0.0563 0.102 0.119 0.000227 0.000265 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 389 100 0.0302 0.0014 0.00216 0.0352 0.0456 0.0434 0.0567 9.69e-05 0.000127 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 389 81509.981 0.005 0.00219 0.0178 0.0615 0.0427 0.057 0.127 0.163 0.000284 0.000364 ! Validation 389 81509.981 0.005 0.00235 0.0342 0.0812 0.0443 0.0592 0.173 0.225 0.000387 0.000503 Wall time: 81509.98191994429 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 390 100 0.057 0.00214 0.0142 0.0421 0.0564 0.123 0.145 0.000275 0.000324 390 118 0.0558 0.00196 0.0166 0.0409 0.054 0.142 0.157 0.000317 0.00035 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 390 100 0.0337 0.00143 0.00514 0.0355 0.0461 0.0763 0.0874 0.00017 0.000195 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 390 81719.187 0.005 0.00226 0.0228 0.068 0.0434 0.058 0.131 0.184 0.000293 0.000411 ! Validation 390 81719.187 0.005 0.00238 0.0457 0.0932 0.0445 0.0594 0.212 0.261 0.000473 0.000582 Wall time: 81719.18705864111 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 391 100 0.0602 0.00232 0.0138 0.0439 0.0588 0.127 0.143 0.000283 0.000319 391 118 0.0936 0.00216 0.0503 0.0425 0.0567 0.269 0.274 0.000601 0.000611 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 391 100 0.0647 0.00147 0.0353 0.036 0.0467 0.226 0.229 0.000505 0.000511 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 391 81928.392 0.005 0.00216 0.0235 0.0667 0.0424 0.0566 0.152 0.186 0.000339 0.000416 ! Validation 391 81928.392 0.005 0.00242 0.0983 0.147 0.0449 0.0599 0.344 0.382 0.000768 0.000853 Wall time: 81928.39289686922 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 392 100 0.0591 0.00253 0.00858 0.0461 0.0613 0.087 0.113 0.000194 0.000252 392 118 0.0667 0.00292 0.00825 0.0488 0.0659 0.0922 0.111 0.000206 0.000247 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 392 100 0.0302 0.00144 0.00138 0.0358 0.0463 0.0331 0.0453 7.38e-05 0.000101 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 392 82137.659 0.005 0.00218 0.0212 0.0647 0.0425 0.0568 0.142 0.178 0.000316 0.000397 ! Validation 392 82137.659 0.005 0.00241 0.0513 0.0995 0.0449 0.0599 0.223 0.276 0.000498 0.000616 Wall time: 82137.65941560222 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 393 100 0.0624 0.0021 0.0204 0.0418 0.0559 0.152 0.174 0.00034 0.000389 393 118 0.0415 0.00189 0.00367 0.0404 0.053 0.0552 0.0739 0.000123 0.000165 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 393 100 0.0302 0.0014 0.00216 0.0352 0.0456 0.0424 0.0567 9.47e-05 0.000126 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 393 82346.841 0.005 0.00218 0.0198 0.0634 0.0426 0.0569 0.136 0.172 0.000304 0.000384 ! 
Validation 393 82346.841 0.005 0.00234 0.0366 0.0835 0.0442 0.059 0.186 0.233 0.000416 0.000521 Wall time: 82346.84110644506 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 394 100 0.0549 0.00231 0.00872 0.0436 0.0586 0.0962 0.114 0.000215 0.000254 394 118 0.0382 0.00167 0.00488 0.0377 0.0498 0.0589 0.0852 0.000132 0.00019 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 394 100 0.0463 0.00139 0.0186 0.035 0.0454 0.162 0.166 0.000361 0.000371 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 394 82556.009 0.005 0.00218 0.0255 0.0691 0.0427 0.0569 0.157 0.195 0.00035 0.000436 ! Validation 394 82556.009 0.005 0.00232 0.0722 0.119 0.044 0.0588 0.285 0.328 0.000637 0.000731 Wall time: 82556.00985324895 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 395 100 0.0472 0.00218 0.00367 0.0426 0.0569 0.0599 0.0739 0.000134 0.000165 395 118 0.0504 0.00238 0.0029 0.0444 0.0594 0.0559 0.0657 0.000125 0.000147 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 395 100 0.0291 0.00139 0.00125 0.035 0.0455 0.0332 0.0432 7.41e-05 9.64e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 395 82765.183 0.005 0.00213 0.0155 0.0582 0.0421 0.0563 0.122 0.152 0.000273 0.00034 ! Validation 395 82765.183 0.005 0.00231 0.0258 0.0719 0.0438 0.0586 0.158 0.196 0.000352 0.000437 Wall time: 82765.18382697692 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 396 100 0.0933 0.0026 0.0412 0.0466 0.0622 0.234 0.248 0.000523 0.000553 396 118 0.0518 0.00218 0.00815 0.0428 0.057 0.0887 0.11 0.000198 0.000246 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 396 100 0.0384 0.00137 0.011 0.0347 0.0452 0.123 0.128 0.000274 0.000286 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 396 82974.587 0.005 0.00229 0.0276 0.0734 0.0437 0.0584 0.155 0.203 0.000345 0.000453 ! Validation 396 82974.587 0.005 0.00231 0.0228 0.0691 0.0439 0.0586 0.145 0.184 0.000323 0.000411 Wall time: 82974.58756644 ! Best model 396 0.069 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 397 100 0.0567 0.00212 0.0143 0.0424 0.0562 0.131 0.146 0.000291 0.000326 397 118 0.107 0.00225 0.0622 0.0437 0.0579 0.287 0.304 0.00064 0.000679 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 397 100 0.091 0.00239 0.0433 0.0456 0.0596 0.251 0.254 0.000561 0.000567 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 397 83183.874 0.005 0.00208 0.018 0.0597 0.0417 0.0557 0.127 0.162 0.000284 0.000362 ! Validation 397 83183.874 0.005 0.00339 0.0735 0.141 0.0536 0.071 0.285 0.331 0.000635 0.000738 Wall time: 83183.8748910022 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 398 100 0.0847 0.00208 0.0431 0.0414 0.0556 0.242 0.253 0.00054 0.000565 398 118 0.0505 0.00231 0.00431 0.0439 0.0586 0.0634 0.0801 0.000141 0.000179 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 398 100 0.0299 0.00145 0.000901 0.0359 0.0465 0.0312 0.0366 6.96e-05 8.17e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 398 83393.086 0.005 0.00224 0.0188 0.0635 0.0432 0.0577 0.131 0.167 0.000292 0.000374 ! 
Validation 398 83393.086 0.005 0.00236 0.0293 0.0764 0.0443 0.0592 0.16 0.209 0.000358 0.000466 Wall time: 83393.08695426816 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 399 100 0.0466 0.00206 0.00529 0.0416 0.0554 0.0728 0.0887 0.000162 0.000198 399 118 0.129 0.00253 0.0782 0.0459 0.0614 0.325 0.341 0.000726 0.000761 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 399 100 0.0555 0.00162 0.0232 0.0372 0.049 0.183 0.186 0.000409 0.000414 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 399 83602.307 0.005 0.00206 0.0184 0.0597 0.0414 0.0553 0.129 0.164 0.000288 0.000365 ! Validation 399 83602.307 0.005 0.00255 0.0694 0.12 0.0461 0.0616 0.267 0.321 0.000595 0.000717 Wall time: 83602.30747210933 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 400 100 0.0531 0.00213 0.0106 0.0419 0.0563 0.105 0.125 0.000235 0.00028 400 118 0.0488 0.00211 0.00658 0.0415 0.0561 0.0849 0.0989 0.000189 0.000221 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 400 100 0.045 0.00145 0.016 0.0358 0.0464 0.148 0.154 0.00033 0.000344 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 400 83811.550 0.005 0.00307 0.0559 0.117 0.0498 0.0676 0.222 0.289 0.000495 0.000645 ! Validation 400 83811.550 0.005 0.00237 0.0773 0.125 0.0445 0.0594 0.29 0.339 0.000647 0.000757 Wall time: 83811.5501557393 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 401 100 0.0482 0.00211 0.00601 0.0417 0.056 0.0733 0.0945 0.000164 0.000211 401 118 0.0513 0.00238 0.00361 0.044 0.0595 0.063 0.0732 0.000141 0.000163 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 401 100 0.0317 0.00137 0.00431 0.0348 0.0452 0.0728 0.0801 0.000163 0.000179 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 401 84020.801 0.005 0.00206 0.00861 0.0498 0.0414 0.0553 0.0901 0.113 0.000201 0.000253 ! Validation 401 84020.801 0.005 0.0023 0.0385 0.0845 0.0438 0.0585 0.193 0.239 0.000432 0.000534 Wall time: 84020.80150764016 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 402 100 0.043 0.00192 0.00456 0.0401 0.0534 0.0655 0.0824 0.000146 0.000184 402 118 0.0485 0.00224 0.00366 0.0431 0.0577 0.0587 0.0738 0.000131 0.000165 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 402 100 0.0738 0.00134 0.047 0.0344 0.0446 0.262 0.264 0.000584 0.00059 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 402 84230.120 0.005 0.00204 0.0142 0.055 0.0412 0.0551 0.116 0.146 0.00026 0.000325 ! Validation 402 84230.120 0.005 0.00225 0.0971 0.142 0.0433 0.0579 0.342 0.38 0.000764 0.000848 Wall time: 84230.12097500637 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 403 100 0.0857 0.00221 0.0414 0.043 0.0574 0.236 0.248 0.000527 0.000554 403 118 0.0449 0.00205 0.00381 0.0412 0.0553 0.0644 0.0753 0.000144 0.000168 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 403 100 0.0483 0.0015 0.0184 0.0362 0.0472 0.163 0.165 0.000365 0.000369 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 403 84439.334 0.005 0.00208 0.0168 0.0585 0.0417 0.0557 0.128 0.159 0.000286 0.000354 ! 
Validation 403 84439.334 0.005 0.0024 0.0492 0.0971 0.0448 0.0597 0.229 0.27 0.00051 0.000604 Wall time: 84439.33409776399 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 404 100 0.0486 0.00213 0.00599 0.0418 0.0563 0.0778 0.0944 0.000174 0.000211 404 118 0.0529 0.00216 0.00973 0.0416 0.0566 0.11 0.12 0.000245 0.000269 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 404 100 0.0316 0.00133 0.00498 0.0343 0.0445 0.0808 0.0861 0.00018 0.000192 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 404 84648.563 0.005 0.00208 0.0187 0.0603 0.0417 0.0556 0.128 0.167 0.000287 0.000373 ! Validation 404 84648.563 0.005 0.00224 0.022 0.0668 0.0432 0.0577 0.144 0.181 0.000322 0.000404 Wall time: 84648.56368465303 ! Best model 404 0.067 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 405 100 0.0617 0.00206 0.0206 0.0413 0.0553 0.16 0.175 0.000357 0.000391 405 118 0.0617 0.00212 0.0193 0.0417 0.0561 0.147 0.169 0.000327 0.000378 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 405 100 0.0295 0.00135 0.00244 0.0345 0.0449 0.0544 0.0602 0.000121 0.000134 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 405 84857.771 0.005 0.00204 0.0171 0.0579 0.0412 0.055 0.13 0.159 0.000289 0.000356 ! Validation 405 84857.771 0.005 0.00226 0.0352 0.0803 0.0433 0.0579 0.171 0.229 0.000381 0.000511 Wall time: 84857.77141941199 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 406 100 0.0472 0.00216 0.00409 0.0421 0.0566 0.0621 0.0779 0.000139 0.000174 406 118 0.0581 0.00234 0.0112 0.044 0.059 0.106 0.129 0.000236 0.000288 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 406 100 0.0407 0.00132 0.0142 0.0342 0.0444 0.141 0.146 0.000315 0.000325 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 406 85066.990 0.005 0.002 0.0112 0.0512 0.0408 0.0545 0.103 0.129 0.000231 0.000288 ! Validation 406 85066.990 0.005 0.00223 0.019 0.0635 0.0431 0.0576 0.134 0.168 0.0003 0.000375 Wall time: 85066.990225974 ! Best model 406 0.064 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 407 100 0.0533 0.00216 0.0101 0.0426 0.0567 0.107 0.122 0.000238 0.000273 407 118 0.0652 0.00235 0.0183 0.0444 0.0591 0.153 0.165 0.000342 0.000368 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 407 100 0.0309 0.00139 0.00305 0.0349 0.0455 0.0619 0.0673 0.000138 0.00015 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 407 85276.301 0.005 0.00214 0.0241 0.0669 0.0422 0.0564 0.149 0.189 0.000333 0.000423 ! Validation 407 85276.301 0.005 0.00231 0.018 0.0643 0.0439 0.0586 0.129 0.164 0.000289 0.000366 Wall time: 85276.3017874402 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 408 100 0.047 0.0019 0.00898 0.0399 0.0532 0.097 0.116 0.000217 0.000258 408 118 0.0681 0.00256 0.0169 0.0456 0.0617 0.142 0.158 0.000317 0.000354 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 408 100 0.0267 0.00131 0.000398 0.0341 0.0442 0.0197 0.0243 4.4e-05 5.43e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 408 85485.526 0.005 0.00213 0.019 0.0616 0.0421 0.0562 0.131 0.168 0.000292 0.000375 ! 
Validation 408 85485.526 0.005 0.00222 0.0737 0.118 0.0431 0.0575 0.255 0.331 0.000569 0.000739 Wall time: 85485.52624318702 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 409 100 0.0471 0.00197 0.00769 0.0407 0.0541 0.0866 0.107 0.000193 0.000239 409 118 0.0448 0.00202 0.00446 0.0413 0.0548 0.0674 0.0814 0.00015 0.000182 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 409 100 0.0283 0.0013 0.00241 0.0337 0.0439 0.0517 0.0598 0.000115 0.000134 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 409 85694.759 0.005 0.0024 0.0403 0.0884 0.0446 0.0598 0.188 0.246 0.00042 0.000548 ! Validation 409 85694.759 0.005 0.00222 0.0401 0.0844 0.0429 0.0574 0.202 0.244 0.000451 0.000545 Wall time: 85694.75914237322 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 410 100 0.0431 0.00193 0.0045 0.0402 0.0536 0.0655 0.0818 0.000146 0.000183 410 118 0.0462 0.00182 0.00976 0.0388 0.052 0.0914 0.12 0.000204 0.000269 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 410 100 0.0321 0.00129 0.00627 0.0336 0.0438 0.0925 0.0966 0.000206 0.000216 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 410 85903.987 0.005 0.00197 0.00918 0.0485 0.0405 0.0541 0.0947 0.117 0.000211 0.000261 ! Validation 410 85903.987 0.005 0.00219 0.042 0.0858 0.0427 0.0571 0.2 0.25 0.000445 0.000558 Wall time: 85903.98779594107 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 411 100 0.0497 0.0022 0.00579 0.0429 0.0571 0.0772 0.0928 0.000172 0.000207 411 118 0.076 0.00205 0.035 0.0414 0.0552 0.197 0.228 0.00044 0.000509 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 411 100 0.0304 0.00136 0.00317 0.0349 0.045 0.0623 0.0687 0.000139 0.000153 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 411 86113.307 0.005 0.00208 0.021 0.0626 0.0416 0.0556 0.144 0.176 0.000321 0.000394 ! Validation 411 86113.307 0.005 0.00229 0.0285 0.0743 0.0438 0.0584 0.157 0.206 0.00035 0.000459 Wall time: 86113.30719028693 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 412 100 0.0501 0.00213 0.00751 0.042 0.0563 0.0869 0.106 0.000194 0.000236 412 118 0.046 0.0018 0.01 0.039 0.0517 0.107 0.122 0.000239 0.000273 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 412 100 0.0313 0.00136 0.00417 0.0346 0.0449 0.072 0.0787 0.000161 0.000176 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 412 86322.546 0.005 0.00221 0.0309 0.075 0.0431 0.0573 0.173 0.215 0.000386 0.00048 ! Validation 412 86322.546 0.005 0.00224 0.0507 0.0954 0.0432 0.0577 0.228 0.274 0.00051 0.000613 Wall time: 86322.5465100212 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 413 100 0.041 0.00184 0.00409 0.0391 0.0524 0.0612 0.078 0.000137 0.000174 413 118 0.0447 0.00179 0.00892 0.0391 0.0516 0.0865 0.115 0.000193 0.000257 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 413 100 0.0298 0.00133 0.00323 0.0343 0.0444 0.0612 0.0693 0.000137 0.000155 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 413 86531.780 0.005 0.00194 0.00661 0.0455 0.0402 0.0538 0.0785 0.0991 0.000175 0.000221 ! 
Validation 413 86531.780 0.005 0.0022 0.0356 0.0796 0.0428 0.0572 0.19 0.23 0.000425 0.000514 Wall time: 86531.78073047707 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 414 100 0.0563 0.00189 0.0185 0.0396 0.053 0.139 0.166 0.00031 0.000371 414 118 0.039 0.00172 0.00463 0.0381 0.0506 0.0679 0.083 0.000151 0.000185 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 414 100 0.033 0.00133 0.00649 0.0342 0.0444 0.0927 0.0982 0.000207 0.000219 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 414 86741.028 0.005 0.00201 0.0178 0.058 0.0409 0.0547 0.131 0.163 0.000292 0.000364 ! Validation 414 86741.028 0.005 0.00219 0.0661 0.11 0.0428 0.0571 0.261 0.314 0.000583 0.0007 Wall time: 86741.02877423912 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 415 100 0.0473 0.00196 0.00811 0.0404 0.054 0.0828 0.11 0.000185 0.000245 415 118 0.134 0.00256 0.0828 0.047 0.0617 0.344 0.351 0.000767 0.000783 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 415 100 0.0853 0.00189 0.0476 0.0407 0.053 0.265 0.266 0.00059 0.000594 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 415 86950.276 0.005 0.002 0.0239 0.064 0.0408 0.0545 0.147 0.187 0.000328 0.000417 ! Validation 415 86950.276 0.005 0.00284 0.0359 0.0928 0.049 0.065 0.186 0.231 0.000415 0.000516 Wall time: 86950.27665291028 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 416 100 0.0543 0.00208 0.0128 0.0417 0.0556 0.109 0.138 0.000243 0.000309 416 118 0.0441 0.00202 0.00361 0.0407 0.0549 0.0499 0.0732 0.000111 0.000163 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 416 100 0.034 0.00144 0.00515 0.0354 0.0463 0.0833 0.0875 0.000186 0.000195 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 416 87159.593 0.005 0.00208 0.0196 0.0612 0.0417 0.0556 0.134 0.171 0.000299 0.000382 ! Validation 416 87159.593 0.005 0.0023 0.0376 0.0836 0.0438 0.0584 0.191 0.237 0.000427 0.000528 Wall time: 87159.59326383704 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 417 100 0.0457 0.00213 0.00313 0.0422 0.0562 0.0548 0.0682 0.000122 0.000152 417 118 0.0613 0.0021 0.0194 0.0409 0.0558 0.148 0.17 0.000331 0.000379 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 417 100 0.0292 0.00139 0.0014 0.0349 0.0455 0.0343 0.0456 7.65e-05 0.000102 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 417 87368.841 0.005 0.00204 0.0166 0.0574 0.0412 0.0551 0.126 0.157 0.000281 0.000351 ! Validation 417 87368.841 0.005 0.00223 0.0387 0.0834 0.0432 0.0577 0.187 0.24 0.000418 0.000535 Wall time: 87368.84155178303 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 418 100 0.0428 0.00194 0.00405 0.04 0.0537 0.0622 0.0776 0.000139 0.000173 418 118 0.0528 0.0023 0.0069 0.0435 0.0585 0.0828 0.101 0.000185 0.000226 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 418 100 0.0289 0.0013 0.0029 0.0338 0.044 0.0574 0.0656 0.000128 0.000147 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 418 87578.096 0.005 0.00214 0.0243 0.0671 0.0422 0.0564 0.142 0.191 0.000316 0.000426 ! 
Validation 418 87578.096 0.005 0.00215 0.0277 0.0708 0.0423 0.0566 0.153 0.203 0.000342 0.000453 Wall time: 87578.0965668573 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 419 100 0.0441 0.00192 0.00563 0.0399 0.0535 0.0732 0.0915 0.000164 0.000204 419 118 0.047 0.00196 0.00775 0.0411 0.054 0.0887 0.107 0.000198 0.00024 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 419 100 0.056 0.00134 0.0293 0.0344 0.0446 0.206 0.209 0.00046 0.000466 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 419 87787.370 0.005 0.00203 0.0216 0.0621 0.0411 0.0549 0.143 0.179 0.000319 0.000401 ! Validation 419 87787.370 0.005 0.00223 0.0259 0.0705 0.0432 0.0576 0.161 0.196 0.00036 0.000438 Wall time: 87787.3701800541 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 420 100 0.0435 0.00198 0.00384 0.0407 0.0543 0.0598 0.0755 0.000133 0.000169 420 118 0.0894 0.00372 0.015 0.0546 0.0744 0.116 0.149 0.00026 0.000333 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 420 100 0.0451 0.00185 0.00805 0.0398 0.0525 0.101 0.109 0.000225 0.000244 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 420 87996.620 0.005 0.00203 0.0224 0.0629 0.0409 0.0548 0.137 0.183 0.000307 0.000407 ! Validation 420 87996.620 0.005 0.00273 0.0244 0.0789 0.0477 0.0637 0.154 0.19 0.000343 0.000425 Wall time: 87996.62049642531 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 421 100 0.0695 0.00205 0.0286 0.0414 0.0552 0.194 0.206 0.000433 0.000461 421 118 0.0531 0.00226 0.00781 0.0422 0.058 0.0999 0.108 0.000223 0.000241 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 421 100 0.0298 0.00135 0.00272 0.0347 0.0449 0.0591 0.0636 0.000132 0.000142 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 421 88206.034 0.005 0.00212 0.0154 0.0578 0.042 0.0561 0.12 0.152 0.000268 0.000339 ! Validation 421 88206.034 0.005 0.00223 0.0301 0.0747 0.0433 0.0576 0.166 0.211 0.000371 0.000472 Wall time: 88206.03461503424 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 422 100 0.0559 0.00206 0.0147 0.0413 0.0553 0.132 0.148 0.000294 0.00033 422 118 0.0474 0.00184 0.0107 0.0394 0.0523 0.107 0.126 0.000238 0.000281 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 422 100 0.0332 0.00129 0.00738 0.0338 0.0438 0.0993 0.105 0.000222 0.000234 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 422 88415.301 0.005 0.00191 0.011 0.0492 0.0399 0.0533 0.104 0.128 0.000231 0.000286 ! Validation 422 88415.301 0.005 0.00213 0.0192 0.0618 0.0421 0.0563 0.133 0.169 0.000297 0.000377 Wall time: 88415.30138317216 ! Best model 422 0.062 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 423 100 0.0792 0.00206 0.0381 0.0412 0.0553 0.222 0.238 0.000496 0.000531 423 118 0.0591 0.00231 0.0129 0.0441 0.0586 0.124 0.139 0.000277 0.000309 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 423 100 0.0369 0.0014 0.00884 0.0349 0.0457 0.107 0.115 0.000239 0.000256 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 423 88624.570 0.005 0.00196 0.0155 0.0547 0.0404 0.054 0.12 0.152 0.000267 0.000339 ! 
Validation 423 88624.570 0.005 0.00228 0.0313 0.0769 0.0436 0.0582 0.167 0.216 0.000372 0.000482 Wall time: 88624.57084751595 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 424 100 1.97 0.0755 0.462 0.253 0.335 0.668 0.829 0.00149 0.00185 424 118 1.17 0.0462 0.25 0.194 0.262 0.497 0.61 0.00111 0.00136 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 424 100 2.11 0.0353 1.41 0.172 0.229 1.44 1.45 0.00322 0.00323 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 424 88833.821 0.005 0.105 2.94 5.04 0.23 0.396 1.28 2.1 0.00286 0.00468 ! Validation 424 88833.821 0.005 0.0452 0.314 1.22 0.193 0.259 0.553 0.683 0.00124 0.00152 Wall time: 88833.82137365034 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 425 100 0.235 0.00934 0.0481 0.0879 0.118 0.215 0.268 0.000481 0.000597 425 118 0.222 0.0086 0.0504 0.0837 0.113 0.197 0.274 0.00044 0.000611 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 425 100 0.143 0.00584 0.0259 0.0714 0.0932 0.193 0.196 0.00043 0.000438 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 425 89043.068 0.005 0.0174 0.0942 0.441 0.116 0.161 0.291 0.375 0.000651 0.000837 ! Validation 425 89043.068 0.005 0.00853 0.102 0.273 0.0846 0.113 0.313 0.39 0.000698 0.00087 Wall time: 89043.06830719998 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 426 100 0.131 0.00516 0.0283 0.0666 0.0876 0.152 0.205 0.000339 0.000458 426 118 0.122 0.00518 0.0188 0.0666 0.0877 0.137 0.167 0.000305 0.000373 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 426 100 0.0935 0.00352 0.0231 0.0557 0.0724 0.179 0.185 0.000399 0.000413 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 426 89252.416 0.005 0.00611 0.0354 0.158 0.0719 0.0954 0.183 0.23 0.000409 0.000513 ! Validation 426 89252.416 0.005 0.00533 0.0439 0.151 0.0672 0.089 0.208 0.256 0.000464 0.000571 Wall time: 89252.41657798598 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 427 100 0.0955 0.00393 0.0169 0.0577 0.0765 0.122 0.158 0.000273 0.000354 427 118 0.102 0.00388 0.0241 0.0574 0.076 0.159 0.189 0.000354 0.000423 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 427 100 0.0706 0.00265 0.0176 0.0481 0.0627 0.157 0.162 0.000351 0.000362 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 427 89461.665 0.005 0.00431 0.0193 0.106 0.0604 0.0801 0.135 0.169 0.000301 0.000378 ! Validation 427 89461.665 0.005 0.00419 0.096 0.18 0.0593 0.0789 0.313 0.378 0.000699 0.000843 Wall time: 89461.66517181322 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 428 100 0.0856 0.00349 0.0159 0.0537 0.072 0.126 0.154 0.000281 0.000343 428 118 0.0839 0.00389 0.00608 0.0569 0.0761 0.0876 0.0951 0.000196 0.000212 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 428 100 0.0585 0.00225 0.0135 0.0442 0.0579 0.136 0.142 0.000305 0.000316 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 428 89670.973 0.005 0.00356 0.0182 0.0894 0.0545 0.0727 0.131 0.165 0.000292 0.000368 ! 
Validation 428 89670.973 0.005 0.00365 0.0701 0.143 0.0551 0.0737 0.265 0.323 0.000591 0.000721 Wall time: 89670.97347304737 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 429 100 0.072 0.00312 0.00962 0.0507 0.0681 0.0989 0.12 0.000221 0.000267 429 118 0.0666 0.003 0.00665 0.0499 0.0668 0.0871 0.0995 0.000194 0.000222 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 429 100 0.0415 0.00202 0.00107 0.0419 0.0549 0.0285 0.0398 6.37e-05 8.89e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 429 89880.220 0.005 0.00315 0.0159 0.079 0.0512 0.0685 0.123 0.154 0.000275 0.000344 ! Validation 429 89880.220 0.005 0.00332 0.0412 0.108 0.0525 0.0703 0.191 0.247 0.000427 0.000552 Wall time: 89880.22037905827 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 430 100 0.0818 0.00287 0.0243 0.0487 0.0654 0.161 0.19 0.00036 0.000425 430 118 0.0689 0.00299 0.00907 0.049 0.0667 0.0944 0.116 0.000211 0.000259 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 430 100 0.0399 0.00186 0.00265 0.0403 0.0527 0.0596 0.0628 0.000133 0.00014 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 430 90089.536 0.005 0.00291 0.0169 0.0752 0.0491 0.0658 0.127 0.159 0.000284 0.000355 ! Validation 430 90089.536 0.005 0.00311 0.0267 0.0889 0.0508 0.068 0.158 0.199 0.000353 0.000445 Wall time: 90089.536156205 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 431 100 0.0642 0.00283 0.00764 0.0484 0.0649 0.0829 0.107 0.000185 0.000238 431 118 0.0692 0.00272 0.0149 0.0473 0.0635 0.137 0.149 0.000306 0.000332 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 431 100 0.0403 0.00177 0.00497 0.0392 0.0512 0.0775 0.0859 0.000173 0.000192 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 431 90298.778 0.005 0.00274 0.0121 0.0668 0.0476 0.0638 0.108 0.134 0.00024 0.000299 ! Validation 431 90298.778 0.005 0.00295 0.0232 0.0821 0.0494 0.0662 0.15 0.186 0.000335 0.000414 Wall time: 90298.77865977911 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 432 100 0.0669 0.00269 0.0132 0.0471 0.0632 0.117 0.14 0.000262 0.000313 432 118 0.0604 0.00253 0.00986 0.0462 0.0613 0.0933 0.121 0.000208 0.00027 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 432 100 0.0534 0.00169 0.0195 0.0384 0.0502 0.166 0.17 0.000371 0.00038 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 432 90508.016 0.005 0.0026 0.00946 0.0615 0.0464 0.0622 0.0944 0.119 0.000211 0.000265 ! Validation 432 90508.016 0.005 0.00282 0.0635 0.12 0.0484 0.0648 0.257 0.307 0.000573 0.000686 Wall time: 90508.01656890102 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 433 100 0.0628 0.00239 0.0149 0.0447 0.0596 0.126 0.149 0.000281 0.000333 433 118 0.0534 0.00249 0.00354 0.0459 0.0609 0.0588 0.0725 0.000131 0.000162 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 433 100 0.0345 0.00163 0.00201 0.0377 0.0492 0.0521 0.0546 0.000116 0.000122 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 433 90717.278 0.005 0.00251 0.0103 0.0605 0.0456 0.0611 0.0993 0.124 0.000222 0.000277 ! 
Validation 433 90717.278 0.005 0.00273 0.0248 0.0794 0.0475 0.0637 0.15 0.192 0.000335 0.000429 Wall time: 90717.27855748637 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 434 100 0.0542 0.00242 0.00569 0.0448 0.06 0.0733 0.092 0.000164 0.000205 434 118 0.0581 0.00256 0.00688 0.0465 0.0617 0.0817 0.101 0.000182 0.000226 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 434 100 0.0546 0.00158 0.023 0.0371 0.0485 0.182 0.185 0.000406 0.000413 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 434 90926.512 0.005 0.00243 0.0076 0.0561 0.0448 0.06 0.0847 0.106 0.000189 0.000237 ! Validation 434 90926.512 0.005 0.00265 0.0278 0.0807 0.0469 0.0627 0.169 0.203 0.000378 0.000454 Wall time: 90926.51272113109 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 435 100 0.0605 0.00216 0.0173 0.0428 0.0567 0.134 0.161 0.000299 0.000358 435 118 0.0544 0.00267 0.000963 0.0466 0.063 0.0292 0.0379 6.52e-05 8.45e-05 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 435 100 0.0315 0.00153 0.000836 0.0366 0.0478 0.023 0.0353 5.14e-05 7.87e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 435 91135.813 0.005 0.00236 0.00922 0.0564 0.0442 0.0592 0.093 0.117 0.000208 0.000262 ! Validation 435 91135.813 0.005 0.00258 0.0283 0.0798 0.0462 0.0619 0.158 0.205 0.000353 0.000458 Wall time: 91135.81362522999 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 436 100 0.0544 0.00219 0.0106 0.0428 0.0571 0.101 0.126 0.000226 0.00028 436 118 0.0546 0.00236 0.0073 0.0445 0.0593 0.0823 0.104 0.000184 0.000233 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 436 100 0.0448 0.0015 0.0148 0.0363 0.0473 0.144 0.148 0.000322 0.000331 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 436 91345.048 0.005 0.0023 0.00977 0.0557 0.0436 0.0584 0.0972 0.121 0.000217 0.000269 ! Validation 436 91345.048 0.005 0.00252 0.0591 0.11 0.0457 0.0612 0.249 0.297 0.000557 0.000662 Wall time: 91345.0490013361 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 437 100 0.0534 0.00238 0.00581 0.0443 0.0595 0.073 0.093 0.000163 0.000208 437 118 0.0528 0.00198 0.0132 0.041 0.0543 0.123 0.14 0.000275 0.000312 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 437 100 0.031 0.00149 0.00129 0.0361 0.047 0.0421 0.0437 9.39e-05 9.76e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 437 91554.263 0.005 0.00225 0.0089 0.0538 0.0431 0.0578 0.0918 0.115 0.000205 0.000256 ! Validation 437 91554.263 0.005 0.00248 0.0237 0.0732 0.0453 0.0607 0.146 0.188 0.000326 0.000419 Wall time: 91554.26398741314 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 438 100 0.0517 0.00215 0.00877 0.0423 0.0565 0.0991 0.114 0.000221 0.000255 438 118 0.0453 0.00209 0.00357 0.0422 0.0557 0.0556 0.0728 0.000124 0.000163 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 438 100 0.0377 0.00144 0.00883 0.0356 0.0463 0.11 0.115 0.000246 0.000256 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 438 91763.555 0.005 0.0022 0.00624 0.0502 0.0427 0.0572 0.0771 0.0965 0.000172 0.000215 ! 
Validation 438 91763.555 0.005 0.00243 0.0512 0.0997 0.0449 0.0601 0.228 0.276 0.00051 0.000616 Wall time: 91763.5555252661 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 439 100 0.0632 0.00222 0.0188 0.0428 0.0575 0.154 0.167 0.000343 0.000373 439 118 0.0558 0.00244 0.00706 0.0449 0.0602 0.0842 0.103 0.000188 0.000229 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 439 100 0.0317 0.00143 0.0032 0.0355 0.046 0.0592 0.069 0.000132 0.000154 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 439 91972.786 0.005 0.00217 0.00989 0.0532 0.0423 0.0567 0.0981 0.121 0.000219 0.000271 ! Validation 439 91972.786 0.005 0.0024 0.0401 0.0881 0.0446 0.0597 0.193 0.244 0.000431 0.000545 Wall time: 91972.78620418208 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 440 100 0.0436 0.00195 0.00468 0.0404 0.0538 0.0643 0.0834 0.000143 0.000186 440 118 0.0448 0.002 0.00491 0.0405 0.0545 0.0661 0.0854 0.000148 0.000191 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 440 100 0.0287 0.0014 0.000695 0.0351 0.0456 0.0217 0.0322 4.84e-05 7.18e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 440 92182.103 0.005 0.00213 0.00954 0.0521 0.042 0.0563 0.0948 0.119 0.000212 0.000266 ! Validation 440 92182.103 0.005 0.00236 0.0273 0.0745 0.0442 0.0592 0.159 0.202 0.000354 0.00045 Wall time: 92182.10314665595 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 441 100 0.0479 0.00218 0.00428 0.0425 0.057 0.0619 0.0797 0.000138 0.000178 441 118 0.0499 0.00229 0.00408 0.0427 0.0583 0.0605 0.0779 0.000135 0.000174 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 441 100 0.0436 0.00138 0.016 0.0349 0.0453 0.15 0.154 0.000336 0.000344 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 441 92391.340 0.005 0.0021 0.00892 0.051 0.0417 0.0559 0.0916 0.115 0.000205 0.000258 ! Validation 441 92391.340 0.005 0.00233 0.0621 0.109 0.0439 0.0588 0.259 0.304 0.000578 0.000678 Wall time: 92391.34080546815 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 442 100 0.0445 0.00206 0.00322 0.0412 0.0554 0.0525 0.0692 0.000117 0.000154 442 118 0.0438 0.00208 0.00222 0.0415 0.0556 0.0484 0.0574 0.000108 0.000128 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 442 100 0.0281 0.00135 0.00106 0.0345 0.0448 0.0377 0.0397 8.41e-05 8.85e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 442 92600.595 0.005 0.00208 0.012 0.0536 0.0415 0.0556 0.106 0.134 0.000236 0.000299 ! Validation 442 92600.595 0.005 0.00229 0.0218 0.0677 0.0436 0.0584 0.14 0.18 0.000313 0.000402 Wall time: 92600.59555765009 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 443 100 0.0554 0.00202 0.0149 0.0411 0.0548 0.129 0.149 0.000289 0.000333 443 118 0.0469 0.0019 0.00882 0.0401 0.0532 0.0926 0.114 0.000207 0.000256 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 443 100 0.0402 0.00135 0.0133 0.0345 0.0448 0.136 0.141 0.000304 0.000314 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 443 92809.830 0.005 0.00206 0.012 0.0532 0.0413 0.0553 0.107 0.134 0.000239 0.000299 ! 
Validation 443 92809.830 0.005 0.00227 0.0544 0.0999 0.0434 0.0582 0.24 0.284 0.000537 0.000635 Wall time: 92809.83083061408 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 444 100 0.0443 0.002 0.00423 0.0406 0.0546 0.0611 0.0793 0.000136 0.000177 444 118 0.0592 0.0026 0.00724 0.0463 0.0621 0.0917 0.104 0.000205 0.000232 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 444 100 0.0291 0.00133 0.00242 0.0343 0.0445 0.05 0.06 0.000112 0.000134 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 444 93019.065 0.005 0.00204 0.0133 0.0541 0.0411 0.055 0.115 0.141 0.000257 0.000314 ! Validation 444 93019.065 0.005 0.00225 0.0391 0.0842 0.0432 0.0579 0.193 0.241 0.000432 0.000538 Wall time: 93019.06587027526 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 445 100 0.0521 0.00211 0.0098 0.0417 0.0561 0.1 0.121 0.000223 0.000269 445 118 0.0403 0.00192 0.00187 0.0398 0.0535 0.0438 0.0527 9.78e-05 0.000118 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 445 100 0.029 0.00137 0.00161 0.0348 0.0452 0.0454 0.0489 0.000101 0.000109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 445 93228.382 0.005 0.00203 0.0138 0.0544 0.041 0.0549 0.114 0.144 0.000255 0.000321 ! Validation 445 93228.382 0.005 0.00228 0.0255 0.071 0.0435 0.0582 0.148 0.195 0.000331 0.000435 Wall time: 93228.38268080493 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 446 100 0.0481 0.00205 0.00704 0.0409 0.0553 0.0845 0.102 0.000189 0.000228 446 118 0.0562 0.00234 0.00944 0.0425 0.0589 0.106 0.119 0.000236 0.000265 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 446 100 0.04 0.00134 0.0133 0.0344 0.0446 0.138 0.141 0.000308 0.000314 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 446 93437.749 0.005 0.002 0.00976 0.0497 0.0407 0.0545 0.097 0.12 0.000216 0.000269 ! Validation 446 93437.749 0.005 0.00223 0.0212 0.0659 0.043 0.0576 0.145 0.178 0.000324 0.000397 Wall time: 93437.7497463813 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 447 100 0.0439 0.00203 0.00338 0.0411 0.0549 0.0594 0.0709 0.000133 0.000158 447 118 0.0387 0.00167 0.00527 0.0376 0.0499 0.0714 0.0885 0.000159 0.000198 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 447 100 0.0448 0.00132 0.0184 0.0343 0.0444 0.161 0.165 0.00036 0.000369 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 447 93646.978 0.005 0.00198 0.0111 0.0508 0.0406 0.0543 0.104 0.129 0.000231 0.000288 ! Validation 447 93646.978 0.005 0.00221 0.0795 0.124 0.0428 0.0573 0.299 0.344 0.000668 0.000768 Wall time: 93646.97877445025 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 448 100 0.0449 0.00195 0.00595 0.0402 0.0538 0.0805 0.0941 0.00018 0.00021 448 118 0.0471 0.00179 0.0113 0.0386 0.0516 0.111 0.129 0.000248 0.000289 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 448 100 0.0285 0.00129 0.00272 0.0338 0.0438 0.0567 0.0637 0.000127 0.000142 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 448 93856.216 0.005 0.00197 0.0126 0.052 0.0405 0.0542 0.107 0.137 0.000238 0.000305 ! 
Validation 448 93856.216 0.005 0.00218 0.0203 0.0638 0.0425 0.0569 0.135 0.174 0.000302 0.000388 Wall time: 93856.21647942532 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 449 100 0.0449 0.00198 0.00539 0.0405 0.0542 0.07 0.0895 0.000156 0.0002 449 118 0.0444 0.00203 0.00377 0.041 0.055 0.0616 0.0749 0.000138 0.000167 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 449 100 0.0287 0.00132 0.0024 0.0343 0.0442 0.0545 0.0597 0.000122 0.000133 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 449 94065.546 0.005 0.00194 0.00896 0.0478 0.0401 0.0538 0.0911 0.116 0.000203 0.000258 ! Validation 449 94065.546 0.005 0.0022 0.0288 0.0728 0.0428 0.0572 0.164 0.207 0.000365 0.000462 Wall time: 94065.54671017313 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 450 100 0.0515 0.00194 0.0128 0.0401 0.0537 0.121 0.138 0.000269 0.000308 450 118 0.0406 0.00189 0.00275 0.0395 0.0531 0.0499 0.064 0.000111 0.000143 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 450 100 0.0327 0.00129 0.00697 0.0337 0.0438 0.0953 0.102 0.000213 0.000227 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 450 94274.787 0.005 0.00195 0.0129 0.0518 0.0402 0.0538 0.112 0.139 0.00025 0.00031 ! Validation 450 94274.787 0.005 0.00216 0.0416 0.0849 0.0424 0.0567 0.205 0.249 0.000458 0.000555 Wall time: 94274.78702217434 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 451 100 0.101 0.00203 0.0602 0.0412 0.0549 0.285 0.299 0.000637 0.000668 451 118 0.0371 0.00165 0.00422 0.037 0.0495 0.0643 0.0792 0.000144 0.000177 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 451 100 0.0288 0.00131 0.00271 0.034 0.0441 0.0576 0.0634 0.000129 0.000142 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 451 94484.037 0.005 0.00196 0.022 0.0611 0.0403 0.054 0.139 0.181 0.000311 0.000405 ! Validation 451 94484.037 0.005 0.00219 0.0232 0.067 0.0427 0.0571 0.147 0.186 0.000328 0.000415 Wall time: 94484.03724118508 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 452 100 0.0494 0.00208 0.00768 0.0414 0.0557 0.089 0.107 0.000199 0.000239 452 118 0.0432 0.00177 0.00786 0.0385 0.0512 0.0813 0.108 0.000181 0.000241 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 452 100 0.0353 0.00127 0.00991 0.0334 0.0434 0.117 0.121 0.000261 0.000271 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 452 94694.855 0.005 0.00193 0.0117 0.0502 0.04 0.0535 0.108 0.132 0.00024 0.000295 ! Validation 452 94694.855 0.005 0.00214 0.0195 0.0624 0.0422 0.0564 0.136 0.17 0.000304 0.000381 Wall time: 94694.85586646805 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 453 100 0.0474 0.00191 0.00911 0.0394 0.0533 0.0995 0.116 0.000222 0.00026 453 118 0.0552 0.00224 0.0105 0.0427 0.0577 0.0973 0.125 0.000217 0.000279 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 453 100 0.0374 0.00133 0.0108 0.0343 0.0445 0.12 0.127 0.000268 0.000283 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 453 94910.622 0.005 0.00193 0.0176 0.0563 0.0401 0.0536 0.131 0.162 0.000292 0.000362 ! 
Validation 453 94910.622 0.005 0.00218 0.0671 0.111 0.0426 0.057 0.264 0.316 0.000589 0.000705 Wall time: 94910.62252996815 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 454 100 0.0432 0.00186 0.00601 0.0392 0.0526 0.0774 0.0946 0.000173 0.000211 454 118 0.0464 0.00157 0.0151 0.0367 0.0483 0.132 0.15 0.000295 0.000334 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 454 100 0.0527 0.00127 0.0274 0.0334 0.0434 0.199 0.202 0.000443 0.00045 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 454 95119.827 0.005 0.00193 0.0154 0.054 0.0401 0.0536 0.118 0.151 0.000264 0.000338 ! Validation 454 95119.827 0.005 0.00213 0.0257 0.0683 0.042 0.0563 0.158 0.195 0.000353 0.000436 Wall time: 95119.82727241097 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 455 100 0.042 0.00193 0.00344 0.04 0.0536 0.057 0.0716 0.000127 0.00016 455 118 0.0617 0.00194 0.0228 0.04 0.0538 0.173 0.184 0.000386 0.000411 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 455 100 0.0548 0.00127 0.0294 0.0335 0.0435 0.206 0.209 0.00046 0.000466 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 455 95328.946 0.005 0.00189 0.0114 0.0492 0.0396 0.053 0.104 0.13 0.000232 0.00029 ! Validation 455 95328.946 0.005 0.00213 0.0309 0.0735 0.0421 0.0563 0.171 0.214 0.000382 0.000478 Wall time: 95328.94688260835 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 456 100 0.0485 0.00196 0.00923 0.0404 0.0541 0.101 0.117 0.000225 0.000262 456 118 0.0486 0.00223 0.00392 0.0429 0.0576 0.0618 0.0763 0.000138 0.00017 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 456 100 0.0353 0.00128 0.00972 0.0337 0.0436 0.115 0.12 0.000256 0.000268 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 456 95538.047 0.005 0.00194 0.0193 0.0581 0.0401 0.0537 0.132 0.17 0.000294 0.000379 ! Validation 456 95538.047 0.005 0.00213 0.0531 0.0956 0.042 0.0562 0.237 0.281 0.00053 0.000627 Wall time: 95538.047316805 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 457 100 0.0435 0.00192 0.00501 0.0402 0.0535 0.0704 0.0863 0.000157 0.000193 457 118 0.0434 0.00179 0.00751 0.0389 0.0516 0.0936 0.106 0.000209 0.000236 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 457 100 0.0278 0.00125 0.00274 0.0332 0.0432 0.0537 0.0638 0.00012 0.000142 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 457 95747.150 0.005 0.00191 0.0129 0.0511 0.0398 0.0533 0.113 0.139 0.000252 0.00031 ! Validation 457 95747.150 0.005 0.0021 0.0234 0.0654 0.0418 0.0559 0.149 0.187 0.000332 0.000416 Wall time: 95747.1508429111 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 458 100 0.0391 0.00175 0.00416 0.0384 0.051 0.0624 0.0786 0.000139 0.000175 458 118 0.0407 0.0018 0.00473 0.039 0.0517 0.0693 0.0839 0.000155 0.000187 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 458 100 0.0303 0.00122 0.00579 0.0329 0.0427 0.0876 0.0928 0.000195 0.000207 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 458 95956.269 0.005 0.00188 0.0136 0.0513 0.0395 0.0529 0.117 0.143 0.000261 0.000318 ! 
Validation 458 95956.269 0.005 0.00208 0.0501 0.0917 0.0415 0.0556 0.224 0.273 0.0005 0.000609 Wall time: 95956.26959668426 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 459 100 0.104 0.0019 0.0657 0.0398 0.0531 0.3 0.313 0.00067 0.000698 459 118 0.0814 0.00172 0.0469 0.0381 0.0506 0.247 0.264 0.000552 0.000589 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 459 100 0.1 0.00127 0.0751 0.0335 0.0434 0.333 0.334 0.000743 0.000746 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 459 96165.460 0.005 0.0019 0.025 0.0631 0.0398 0.0532 0.16 0.192 0.000358 0.000429 ! Validation 459 96165.460 0.005 0.00213 0.133 0.176 0.0422 0.0563 0.415 0.445 0.000927 0.000994 Wall time: 96165.46034800727 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 460 100 0.0392 0.00178 0.00363 0.0384 0.0514 0.0582 0.0735 0.00013 0.000164 460 118 0.0476 0.00181 0.0115 0.0394 0.0518 0.118 0.131 0.000263 0.000292 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 460 100 0.048 0.00127 0.0227 0.0334 0.0434 0.181 0.184 0.000403 0.00041 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 460 96374.555 0.005 0.00189 0.0139 0.0517 0.0396 0.053 0.118 0.144 0.000262 0.000322 ! Validation 460 96374.555 0.005 0.00211 0.0205 0.0626 0.0419 0.056 0.142 0.174 0.000317 0.000389 Wall time: 96374.55562343635 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 461 100 0.0483 0.00187 0.0109 0.0396 0.0527 0.107 0.127 0.00024 0.000284 461 118 0.0367 0.00166 0.00342 0.0375 0.0497 0.0568 0.0713 0.000127 0.000159 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 461 100 0.0279 0.00124 0.00321 0.0331 0.0429 0.0579 0.0691 0.000129 0.000154 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 461 96585.189 0.005 0.00188 0.0183 0.0559 0.0396 0.0529 0.137 0.165 0.000306 0.000369 ! Validation 461 96585.189 0.005 0.00209 0.0366 0.0785 0.0417 0.0558 0.191 0.233 0.000425 0.000521 Wall time: 96585.18899345398 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 462 100 0.0499 0.00199 0.01 0.0407 0.0545 0.104 0.122 0.000233 0.000273 462 118 0.0435 0.00197 0.00402 0.0408 0.0542 0.0592 0.0773 0.000132 0.000173 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 462 100 0.028 0.00124 0.00327 0.0331 0.0429 0.0608 0.0697 0.000136 0.000156 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 462 96794.256 0.005 0.00187 0.0139 0.0512 0.0394 0.0527 0.113 0.144 0.000252 0.000321 ! Validation 462 96794.256 0.005 0.00209 0.0278 0.0696 0.0417 0.0557 0.154 0.203 0.000345 0.000454 Wall time: 96794.25616919901 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 463 100 0.0676 0.00197 0.0282 0.0407 0.0541 0.193 0.205 0.000432 0.000457 463 118 0.0417 0.00151 0.0116 0.036 0.0473 0.115 0.131 0.000256 0.000293 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 463 100 0.0435 0.00128 0.018 0.0337 0.0436 0.158 0.164 0.000353 0.000365 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 463 97003.321 0.005 0.00184 0.00965 0.0464 0.0391 0.0523 0.0965 0.12 0.000215 0.000267 ! 
Validation 463 97003.321 0.005 0.00212 0.0647 0.107 0.0421 0.0561 0.268 0.31 0.000598 0.000692 Wall time: 97003.32176758209 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 464 100 0.129 0.002 0.0891 0.0409 0.0545 0.352 0.364 0.000786 0.000813 464 118 0.0468 0.0022 0.00285 0.0427 0.0571 0.05 0.0651 0.000112 0.000145 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 464 100 0.0465 0.00154 0.0158 0.0365 0.0478 0.151 0.153 0.000338 0.000342 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 464 97212.464 0.005 0.00193 0.0269 0.0655 0.0401 0.0536 0.162 0.201 0.000362 0.000448 ! Validation 464 97212.464 0.005 0.0024 0.0209 0.0689 0.045 0.0597 0.144 0.176 0.000321 0.000394 Wall time: 97212.46425317321 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 465 100 0.057 0.0018 0.0211 0.0387 0.0517 0.167 0.177 0.000374 0.000395 465 118 0.0517 0.00166 0.0185 0.0377 0.0497 0.15 0.166 0.000334 0.00037 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 465 100 0.0292 0.00141 0.00102 0.035 0.0458 0.032 0.039 7.15e-05 8.7e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 465 97421.527 0.005 0.00185 0.0136 0.0505 0.0392 0.0524 0.114 0.142 0.000254 0.000317 ! Validation 465 97421.527 0.005 0.00221 0.0216 0.0659 0.043 0.0574 0.145 0.179 0.000324 0.0004 Wall time: 97421.52716792328 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 466 100 0.0469 0.00169 0.0131 0.0377 0.0501 0.125 0.14 0.000279 0.000312 466 118 0.0372 0.00168 0.00351 0.038 0.05 0.0567 0.0722 0.000126 0.000161 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 466 100 0.0274 0.00132 0.00101 0.0341 0.0443 0.0361 0.0388 8.05e-05 8.67e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 466 97630.584 0.005 0.00183 0.0136 0.0502 0.039 0.0522 0.115 0.143 0.000258 0.000318 ! Validation 466 97630.584 0.005 0.00216 0.0269 0.0701 0.0424 0.0566 0.156 0.2 0.000348 0.000447 Wall time: 97630.58490484813 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 467 100 0.0401 0.00172 0.00581 0.038 0.0505 0.0734 0.093 0.000164 0.000207 467 118 0.0936 0.00179 0.0577 0.0388 0.0516 0.281 0.293 0.000628 0.000654 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 467 100 0.088 0.00124 0.0632 0.0334 0.0429 0.305 0.307 0.000681 0.000685 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 467 97839.734 0.005 0.00183 0.0116 0.0482 0.039 0.0521 0.105 0.13 0.000235 0.00029 ! Validation 467 97839.734 0.005 0.00206 0.0473 0.0885 0.0415 0.0553 0.226 0.265 0.000504 0.000592 Wall time: 97839.73476564512 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 468 100 0.048 0.00181 0.0118 0.0391 0.0519 0.117 0.132 0.00026 0.000296 468 118 0.0455 0.00185 0.00852 0.039 0.0524 0.0933 0.113 0.000208 0.000251 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 468 100 0.0254 0.00122 0.00093 0.0328 0.0426 0.0301 0.0372 6.73e-05 8.3e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 468 98048.857 0.005 0.00188 0.02 0.0575 0.0396 0.0529 0.138 0.173 0.000307 0.000385 ! 
Validation 468 98048.857 0.005 0.00205 0.0252 0.0662 0.0412 0.0552 0.158 0.194 0.000352 0.000432 Wall time: 98048.85732503515 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 469 100 0.0493 0.00184 0.0125 0.0395 0.0523 0.106 0.136 0.000236 0.000304 469 118 0.0402 0.00187 0.00286 0.0394 0.0527 0.0523 0.0652 0.000117 0.000146 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 469 100 0.0254 0.00124 0.000573 0.0332 0.043 0.0227 0.0292 5.07e-05 6.52e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 469 98257.890 0.005 0.00189 0.0202 0.058 0.0397 0.053 0.13 0.174 0.00029 0.000388 ! Validation 469 98257.890 0.005 0.00205 0.0332 0.0743 0.0413 0.0552 0.178 0.222 0.000397 0.000496 Wall time: 98257.89046984725 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 470 100 0.0496 0.00225 0.00459 0.0434 0.0578 0.0698 0.0826 0.000156 0.000184 470 118 0.0493 0.0016 0.0172 0.037 0.0488 0.142 0.16 0.000317 0.000357 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 470 100 0.0526 0.0013 0.0266 0.034 0.044 0.196 0.199 0.000438 0.000444 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 470 98475.751 0.005 0.00183 0.0108 0.0473 0.039 0.0521 0.0994 0.126 0.000222 0.000282 ! Validation 470 98475.751 0.005 0.00211 0.0677 0.11 0.042 0.0561 0.277 0.317 0.000619 0.000708 Wall time: 98475.7517467523 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 471 100 0.048 0.00186 0.0108 0.0394 0.0526 0.104 0.126 0.000233 0.000282 471 118 0.0377 0.00173 0.00303 0.0379 0.0508 0.0551 0.0671 0.000123 0.00015 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 471 100 0.0365 0.00121 0.0123 0.0325 0.0424 0.133 0.135 0.000296 0.000302 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 471 98685.022 0.005 0.00184 0.0141 0.051 0.0392 0.0523 0.115 0.145 0.000256 0.000324 ! Validation 471 98685.022 0.005 0.00204 0.0181 0.0589 0.0412 0.0551 0.131 0.164 0.000293 0.000367 Wall time: 98685.022064853 ! Best model 471 0.059 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 472 100 0.0578 0.00203 0.0172 0.0415 0.0549 0.142 0.16 0.000317 0.000357 472 118 0.0417 0.0019 0.00362 0.0403 0.0532 0.059 0.0733 0.000132 0.000164 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 472 100 0.0317 0.00139 0.00381 0.0346 0.0455 0.0695 0.0753 0.000155 0.000168 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 472 98894.306 0.005 0.00193 0.0249 0.0634 0.0401 0.0535 0.155 0.193 0.000345 0.000431 ! Validation 472 98894.306 0.005 0.0022 0.018 0.062 0.0429 0.0572 0.133 0.164 0.000296 0.000365 Wall time: 98894.30642981315 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 473 100 0.0402 0.00165 0.00715 0.0373 0.0496 0.0874 0.103 0.000195 0.00023 473 118 0.0755 0.0017 0.0416 0.0378 0.0502 0.243 0.249 0.000541 0.000555 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 473 100 0.104 0.00133 0.0771 0.0341 0.0445 0.338 0.339 0.000755 0.000756 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 473 99103.843 0.005 0.00181 0.0117 0.048 0.0389 0.052 0.105 0.131 0.000235 0.000291 ! 
Validation 473 99103.843 0.005 0.00213 0.0527 0.0952 0.0423 0.0562 0.242 0.28 0.00054 0.000625 Wall time: 99103.84349784534 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 474 100 0.041 0.00174 0.00628 0.0381 0.0508 0.0814 0.0967 0.000182 0.000216 474 118 0.0448 0.00183 0.00823 0.0392 0.0521 0.0947 0.111 0.000211 0.000247 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 474 100 0.0255 0.00123 0.000972 0.0329 0.0427 0.0288 0.038 6.43e-05 8.49e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 474 99313.118 0.005 0.00185 0.0149 0.0519 0.0393 0.0524 0.12 0.149 0.000268 0.000333 ! Validation 474 99313.118 0.005 0.00203 0.0285 0.069 0.0411 0.0549 0.162 0.206 0.000362 0.000459 Wall time: 99313.11803110922 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 475 100 0.0404 0.00171 0.00625 0.0381 0.0504 0.0786 0.0964 0.000176 0.000215 475 118 0.0529 0.00218 0.00935 0.0419 0.0569 0.0928 0.118 0.000207 0.000263 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 475 100 0.0255 0.00123 0.00091 0.0332 0.0428 0.0325 0.0368 7.25e-05 8.21e-05 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 475 99522.387 0.005 0.002 0.0334 0.0734 0.0409 0.0545 0.162 0.223 0.000362 0.000498 ! Validation 475 99522.387 0.005 0.00205 0.027 0.068 0.0413 0.0552 0.157 0.2 0.00035 0.000447 Wall time: 99522.38794784527 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 476 100 0.0511 0.00201 0.011 0.041 0.0546 0.106 0.128 0.000236 0.000285 476 118 0.0431 0.00186 0.00586 0.0393 0.0526 0.0789 0.0933 0.000176 0.000208 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 476 100 0.0345 0.00116 0.0112 0.0321 0.0416 0.124 0.129 0.000276 0.000288 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 476 99731.668 0.005 0.00181 0.0108 0.0469 0.0388 0.0518 0.0986 0.127 0.00022 0.000283 ! Validation 476 99731.668 0.005 0.00197 0.065 0.104 0.0405 0.0542 0.268 0.311 0.000599 0.000694 Wall time: 99731.66824795911 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 477 100 0.0547 0.00186 0.0175 0.0392 0.0526 0.149 0.161 0.000332 0.00036 477 118 0.0356 0.00142 0.00714 0.0348 0.046 0.0813 0.103 0.000181 0.00023 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 477 100 0.0254 0.00115 0.00239 0.0318 0.0414 0.0528 0.0596 0.000118 0.000133 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 477 99940.931 0.005 0.00179 0.0115 0.0472 0.0386 0.0516 0.103 0.131 0.00023 0.000292 ! Validation 477 99940.931 0.005 0.00197 0.0187 0.0581 0.0406 0.0542 0.135 0.167 0.000302 0.000372 Wall time: 99940.93144885218 ! Best model 477 0.058 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 478 100 0.058 0.00183 0.0215 0.0391 0.0521 0.158 0.179 0.000352 0.000399 478 118 0.0408 0.00174 0.00603 0.0382 0.0508 0.0806 0.0947 0.00018 0.000211 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 478 100 0.0656 0.00124 0.0409 0.0329 0.0429 0.245 0.247 0.000547 0.000551 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 478 100150.280 0.005 0.00193 0.0242 0.0628 0.0402 0.0536 0.146 0.19 0.000325 0.000424 ! 
Validation 478 100150.280 0.005 0.00204 0.0646 0.106 0.0412 0.0551 0.277 0.31 0.000618 0.000692 Wall time: 100150.28096195823 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 479 100 0.0452 0.00189 0.00733 0.0399 0.053 0.0861 0.104 0.000192 0.000233 479 118 0.0468 0.00207 0.0054 0.0405 0.0555 0.0707 0.0896 0.000158 0.0002 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 479 100 0.0478 0.00121 0.0236 0.0328 0.0424 0.185 0.187 0.000414 0.000418 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 479 100359.741 0.005 0.00182 0.0123 0.0487 0.039 0.052 0.11 0.135 0.000245 0.000302 ! Validation 479 100359.741 0.005 0.00204 0.113 0.154 0.0413 0.0551 0.366 0.411 0.000816 0.000916 Wall time: 100359.74129630905 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 480 100 0.0476 0.00168 0.014 0.0373 0.05 0.128 0.144 0.000286 0.000322 480 118 0.0406 0.00159 0.00866 0.0369 0.0487 0.0997 0.114 0.000223 0.000253 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 480 100 0.0256 0.00116 0.00241 0.0318 0.0415 0.0483 0.0599 0.000108 0.000134 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 480 100568.976 0.005 0.00177 0.0124 0.0478 0.0384 0.0513 0.105 0.136 0.000234 0.000303 ! Validation 480 100568.976 0.005 0.00196 0.0308 0.07 0.0404 0.054 0.173 0.214 0.000386 0.000478 Wall time: 100568.976452359 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 481 100 0.0478 0.00169 0.0139 0.0376 0.0502 0.121 0.144 0.00027 0.000321 481 118 0.0543 0.00183 0.0176 0.0393 0.0522 0.146 0.162 0.000326 0.000361 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 481 100 0.0583 0.0012 0.0344 0.0325 0.0422 0.224 0.226 0.0005 0.000505 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 481 100778.199 0.005 0.00181 0.0116 0.0478 0.0389 0.0519 0.105 0.131 0.000235 0.000292 ! Validation 481 100778.199 0.005 0.00199 0.113 0.152 0.0408 0.0544 0.372 0.409 0.00083 0.000914 Wall time: 100778.19989208318 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 482 100 0.0396 0.00182 0.00312 0.0389 0.0521 0.0539 0.0681 0.00012 0.000152 482 118 0.0417 0.00187 0.00423 0.0395 0.0528 0.0678 0.0794 0.000151 0.000177 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 482 100 0.0281 0.00125 0.00318 0.0332 0.0431 0.0648 0.0687 0.000145 0.000153 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 482 100987.432 0.005 0.00181 0.0158 0.052 0.0389 0.0519 0.126 0.154 0.000281 0.000343 ! Validation 482 100987.432 0.005 0.00205 0.0229 0.0638 0.0414 0.0552 0.14 0.185 0.000313 0.000412 Wall time: 100987.43217699416 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 483 100 0.0893 0.00206 0.048 0.0415 0.0554 0.254 0.267 0.000567 0.000596 483 118 0.0889 0.00247 0.0394 0.0452 0.0607 0.23 0.242 0.000513 0.00054 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 483 100 0.095 0.00142 0.0666 0.035 0.0459 0.312 0.315 0.000696 0.000703 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 483 101196.743 0.005 0.00188 0.0254 0.0629 0.0396 0.0528 0.158 0.194 0.000352 0.000433 ! 
Validation 483 101196.743 0.005 0.00221 0.18 0.224 0.043 0.0573 0.479 0.517 0.00107 0.00115 Wall time: 101196.74314945797 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 484 100 0.0655 0.00165 0.0325 0.0372 0.0495 0.209 0.22 0.000467 0.000491 484 118 0.0402 0.00173 0.00555 0.0375 0.0508 0.0773 0.0908 0.000173 0.000203 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 484 100 0.026 0.00113 0.00341 0.0315 0.041 0.0626 0.0712 0.00014 0.000159 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 484 101406.320 0.005 0.00192 0.0165 0.055 0.04 0.0535 0.125 0.157 0.00028 0.00035 ! Validation 484 101406.320 0.005 0.00193 0.02 0.0585 0.04 0.0535 0.134 0.172 0.000299 0.000385 Wall time: 101406.32087423513 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 485 100 0.0398 0.00166 0.0066 0.0373 0.0497 0.0787 0.0991 0.000176 0.000221 485 118 0.048 0.00193 0.00946 0.04 0.0535 0.1 0.119 0.000224 0.000265 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 485 100 0.0263 0.00122 0.00182 0.0327 0.0426 0.0479 0.0521 0.000107 0.000116 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 485 101615.541 0.005 0.00178 0.0152 0.0509 0.0386 0.0515 0.119 0.151 0.000266 0.000337 ! Validation 485 101615.541 0.005 0.00201 0.0172 0.0573 0.0409 0.0546 0.128 0.16 0.000286 0.000357 Wall time: 101615.54172098823 ! Best model 485 0.057 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 486 100 0.0454 0.00194 0.0067 0.0403 0.0537 0.083 0.0998 0.000185 0.000223 486 118 0.0445 0.00164 0.0116 0.0371 0.0495 0.118 0.131 0.000263 0.000293 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 486 100 0.0363 0.00131 0.01 0.0342 0.0442 0.111 0.122 0.000249 0.000273 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 486 101824.777 0.005 0.00178 0.0164 0.052 0.0386 0.0515 0.121 0.156 0.000271 0.000349 ! Validation 486 101824.777 0.005 0.00208 0.0218 0.0634 0.0418 0.0556 0.14 0.18 0.000311 0.000402 Wall time: 101824.77746400703 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 487 100 0.0375 0.00166 0.00426 0.0374 0.0497 0.0667 0.0796 0.000149 0.000178 487 118 0.0896 0.00187 0.0522 0.0397 0.0527 0.271 0.279 0.000605 0.000622 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 487 100 0.122 0.00119 0.0977 0.0324 0.0421 0.379 0.381 0.000846 0.000851 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 487 102034.070 0.005 0.00175 0.0132 0.0483 0.0383 0.0511 0.111 0.139 0.000248 0.00031 ! Validation 487 102034.070 0.005 0.00198 0.0539 0.0936 0.0406 0.0543 0.245 0.283 0.000547 0.000632 Wall time: 102034.0708522941 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 488 100 0.0471 0.00185 0.0101 0.0395 0.0524 0.101 0.122 0.000224 0.000273 488 118 0.115 0.0018 0.0793 0.039 0.0517 0.317 0.343 0.000708 0.000767 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 488 100 0.0668 0.00131 0.0406 0.0338 0.0442 0.244 0.246 0.000545 0.000548 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 488 102243.294 0.005 0.0018 0.0202 0.0561 0.0388 0.0517 0.135 0.171 0.000301 0.000383 ! 
Validation 488 102243.294 0.005 0.0021 0.0962 0.138 0.042 0.0559 0.334 0.378 0.000747 0.000844 Wall time: 102243.29414033098 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 489 100 0.0472 0.00177 0.0118 0.0385 0.0513 0.115 0.132 0.000256 0.000296 489 118 0.0477 0.00161 0.0155 0.0371 0.0489 0.136 0.152 0.000304 0.000339 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 489 100 0.0788 0.00122 0.0544 0.0325 0.0426 0.283 0.285 0.000632 0.000635 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 489 102452.517 0.005 0.00181 0.0148 0.0509 0.0388 0.0519 0.117 0.148 0.000261 0.000331 ! Validation 489 102452.517 0.005 0.00198 0.0456 0.0853 0.0407 0.0543 0.222 0.26 0.000496 0.000581 Wall time: 102452.51760658808 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 490 100 0.0426 0.00182 0.0062 0.0391 0.052 0.0803 0.096 0.000179 0.000214 490 118 0.0436 0.00172 0.00924 0.0381 0.0506 0.107 0.117 0.000239 0.000262 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 490 100 0.0576 0.00121 0.0334 0.0327 0.0425 0.222 0.223 0.000495 0.000497 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 490 102661.741 0.005 0.00185 0.0242 0.0612 0.0393 0.0525 0.148 0.19 0.000331 0.000424 ! Validation 490 102661.741 0.005 0.00201 0.0315 0.0717 0.041 0.0546 0.178 0.217 0.000398 0.000483 Wall time: 102661.74131036038 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 491 100 0.0378 0.00172 0.00346 0.0382 0.0506 0.0578 0.0717 0.000129 0.00016 491 118 0.032 0.00149 0.00217 0.0358 0.0471 0.0465 0.0568 0.000104 0.000127 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 491 100 0.0384 0.00125 0.0134 0.0329 0.0431 0.138 0.141 0.000308 0.000316 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 491 102870.963 0.005 0.0018 0.0157 0.0517 0.0388 0.0518 0.124 0.153 0.000277 0.000342 ! Validation 491 102870.963 0.005 0.00204 0.0289 0.0696 0.0412 0.055 0.171 0.207 0.000382 0.000463 Wall time: 102870.96308451612 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 492 100 0.0484 0.00164 0.0156 0.0371 0.0494 0.136 0.152 0.000304 0.00034 492 118 0.0392 0.00166 0.00613 0.0375 0.0496 0.077 0.0954 0.000172 0.000213 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 492 100 0.0481 0.00114 0.0252 0.0318 0.0412 0.19 0.194 0.000425 0.000432 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 492 103080.258 0.005 0.00171 0.0132 0.0474 0.0378 0.0504 0.112 0.14 0.00025 0.000313 ! Validation 492 103080.258 0.005 0.00193 0.0868 0.125 0.0401 0.0535 0.323 0.359 0.000722 0.000802 Wall time: 103080.25841139536 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 493 100 0.0398 0.00176 0.00452 0.0386 0.0512 0.0656 0.082 0.000146 0.000183 493 118 0.045 0.00191 0.00676 0.0402 0.0533 0.0857 0.1 0.000191 0.000224 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 493 100 0.0417 0.00115 0.0187 0.0318 0.0414 0.164 0.167 0.000366 0.000372 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 493 103289.501 0.005 0.00184 0.0224 0.0592 0.0393 0.0523 0.145 0.183 0.000323 0.000408 ! 
Validation 493 103289.501 0.005 0.0019 0.0716 0.11 0.0398 0.0532 0.288 0.326 0.000643 0.000728 Wall time: 103289.50136984698 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 494 100 0.0389 0.00159 0.00704 0.0366 0.0487 0.0877 0.102 0.000196 0.000228 494 118 0.0358 0.00149 0.00613 0.0355 0.047 0.0735 0.0955 0.000164 0.000213 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 494 100 0.0308 0.00111 0.00864 0.0313 0.0406 0.108 0.113 0.000242 0.000253 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 494 103498.733 0.005 0.00171 0.0109 0.0451 0.0378 0.0505 0.102 0.127 0.000227 0.000284 ! Validation 494 103498.733 0.005 0.00188 0.0547 0.0924 0.0396 0.0529 0.242 0.285 0.00054 0.000637 Wall time: 103498.73349174298 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 495 100 0.0471 0.00159 0.0152 0.0365 0.0487 0.136 0.15 0.000303 0.000335 495 118 0.0493 0.00167 0.0159 0.0372 0.0498 0.138 0.154 0.000309 0.000343 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 495 100 0.0257 0.0012 0.00161 0.0327 0.0423 0.0387 0.049 8.65e-05 0.000109 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 495 103707.985 0.005 0.00177 0.0164 0.0519 0.0385 0.0514 0.126 0.156 0.000281 0.000349 ! Validation 495 103707.985 0.005 0.00196 0.0254 0.0646 0.0406 0.054 0.152 0.194 0.000339 0.000434 Wall time: 103707.98581352597 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 496 100 0.0393 0.00158 0.00759 0.0366 0.0485 0.0886 0.106 0.000198 0.000237 496 118 0.0374 0.00169 0.00359 0.0377 0.0502 0.0618 0.0731 0.000138 0.000163 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 496 100 0.0312 0.00118 0.00755 0.0322 0.042 0.0987 0.106 0.00022 0.000236 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 496 103917.224 0.005 0.00173 0.0128 0.0474 0.038 0.0507 0.113 0.139 0.000251 0.000309 ! Validation 496 103917.224 0.005 0.00193 0.0509 0.0895 0.0402 0.0536 0.23 0.275 0.000514 0.000614 Wall time: 103917.2241106322 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 497 100 0.0425 0.00165 0.00948 0.0373 0.0495 0.103 0.119 0.000229 0.000265 497 118 0.0554 0.00167 0.0221 0.0373 0.0498 0.16 0.181 0.000356 0.000405 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 497 100 0.0384 0.0014 0.0103 0.0347 0.0457 0.112 0.124 0.00025 0.000276 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 497 104126.521 0.005 0.00177 0.0173 0.0527 0.0385 0.0513 0.128 0.16 0.000286 0.000358 ! Validation 497 104126.521 0.005 0.00215 0.0868 0.13 0.0423 0.0566 0.297 0.359 0.000663 0.000802 Wall time: 104126.52185020596 training # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 498 100 0.0349 0.00161 0.00276 0.0368 0.0489 0.0521 0.0641 0.000116 0.000143 498 118 0.0562 0.00161 0.0239 0.037 0.049 0.182 0.189 0.000406 0.000421 validation # Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse 498 100 0.0926 0.00123 0.068 0.033 0.0428 0.315 0.318 0.000703 0.00071 Train # Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse ! Train 498 104335.753 0.005 0.00183 0.0135 0.0501 0.0391 0.0522 0.108 0.141 0.000242 0.000315 ! 
Validation 498 104335.753 0.005 0.00202 0.0358 0.0761 0.041 0.0548 0.194 0.231 0.000433 0.000515
Wall time: 104335.75303818611

training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
499 100 0.037 0.00164 0.0042 0.037 0.0494 0.0621 0.079 0.000139 0.000176
499 118 0.0382 0.00177 0.0028 0.0378 0.0513 0.0529 0.0645 0.000118 0.000144

validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
499 100 0.0234 0.00109 0.00153 0.0312 0.0403 0.0388 0.0477 8.67e-05 0.000106

Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 499 104544.973 0.005 0.00189 0.0224 0.0601 0.0397 0.053 0.133 0.183 0.000297 0.000408
! Validation 499 104544.973 0.005 0.00186 0.0316 0.0689 0.0394 0.0526 0.176 0.217 0.000393 0.000484
Wall time: 104544.97383049736

training
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
500 100 0.0444 0.00204 0.00357 0.0417 0.0551 0.0599 0.0728 0.000134 0.000163
500 118 0.0542 0.00177 0.0189 0.0386 0.0513 0.155 0.168 0.000345 0.000374

validation
# Epoch batch loss loss_f loss_e f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
500 100 0.0252 0.0012 0.00118 0.0326 0.0423 0.0389 0.0419 8.69e-05 9.36e-05

Train
# Epoch wal LR loss_f loss_e loss f_mae f_rmse e_mae e_rmse e/N_mae e/N_rmse
! Train 500 104754.277 0.005 0.00169 0.014 0.0479 0.0376 0.0502 0.111 0.144 0.000248 0.000322
! Validation 500 104754.277 0.005 0.00194 0.0218 0.0607 0.0403 0.0537 0.146 0.18 0.000325 0.000402
Wall time: 104754.2776939501

! Stop training: max epochs
Wall time: 104754.29784301901
Cumulative wall time: 104754.29784301901

Using device: cuda
Please note that _all_ machine learning models running on CUDA hardware are generally somewhat nondeterministic and that this can manifest in small, generally unimportant variation in the final test errors.
Loading model...
loaded model
Loading dataset...
Processing dataset...
Done!
Loaded dataset specified in test_config.yaml.
Using all frames from the specified test dataset, yielding a test set size of 500 frames.
Starting...

--- Final result: ---
f_mae = 0.036640
f_rmse = 0.048407
e_mae = 0.129628
e_rmse = 0.159662
e/N_mae = 0.000289
e/N_rmse = 0.000356

Train end time: 2024-12-09_15:42:05
Training duration: 29h 10m 59s
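
Note on the final-result quantities: f_mae and f_rmse are force errors over all atoms in the 500 test frames, e_mae and e_rmse are total-energy errors per frame, and the e/N_* values are the same energy errors normalized by the number of atoms in each frame (here e_mae / e/N_mae = 0.129628 / 0.000289, roughly 448 atoms per frame). The following is a minimal sketch of how such metrics can be reproduced; the array names and shapes are illustrative assumptions, not outputs of this run.

# Minimal sketch (illustrative, not part of the original run) of the reported
# error metrics. Assumed inputs:
#   forces_pred, forces_ref : (n_atoms_total, 3) predicted / reference forces
#   energy_pred, energy_ref : (n_frames,) predicted / reference total energies
#   n_atoms                 : (n_frames,) number of atoms in each frame
import numpy as np

def mae(x, y):
    # Mean absolute error over all array components.
    return np.mean(np.abs(x - y))

def rmse(x, y):
    # Root-mean-square error over all array components.
    return np.sqrt(np.mean((x - y) ** 2))

def report(forces_pred, forces_ref, energy_pred, energy_ref, n_atoms):
    de = energy_pred - energy_ref      # per-frame total-energy error
    de_per_atom = de / n_atoms         # per-frame error divided by atom count
    print(f"f_mae = {mae(forces_pred, forces_ref):.6f}")
    print(f"f_rmse = {rmse(forces_pred, forces_ref):.6f}")
    print(f"e_mae = {np.mean(np.abs(de)):.6f}")
    print(f"e_rmse = {np.sqrt(np.mean(de ** 2)):.6f}")
    print(f"e/N_mae = {np.mean(np.abs(de_per_atom)):.6f}")
    print(f"e/N_rmse = {np.sqrt(np.mean(de_per_atom ** 2)):.6f}")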
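
For learning curves, the per-epoch summary records are the lines beginning with "! Train" and "! Validation", with columns Epoch, wal, LR, loss_f, loss_e, loss, f_mae, f_rmse, e_mae, e_rmse, e/N_mae, e/N_rmse. A minimal parsing sketch, assuming the log has been saved with one record per line to a hypothetical file named training.log:

# Minimal sketch (assumptions: the log text is stored in "training.log" with one
# record per line, in the "! Validation <epoch> <wal> <LR> <loss_f> <loss_e> <loss> ..." format).
import re

ROW = re.compile(
    r"^!\s+Validation\s+(\d+)\s+([\d.]+)\s+[\d.eE+-]+\s+"
    r"([\d.eE+-]+)\s+([\d.eE+-]+)\s+([\d.eE+-]+)"
)

def read_validation_curve(path="training.log"):
    # Return a list of (epoch, wall_time, loss_f, loss_e, loss) tuples.
    curve = []
    with open(path) as fh:
        for line in fh:
            m = ROW.match(line)
            if m:
                epoch, wall, loss_f, loss_e, loss = m.groups()
                curve.append((int(epoch), float(wall), float(loss_f), float(loss_e), float(loss)))
    return curve

if __name__ == "__main__":
    # Print the last few validation summaries as a quick sanity check.
    for record in read_validation_curve()[-3:]:
        print(record)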