2023-03-26 17:39:38,032 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 10, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kageaki': 0}, 'model_dir': './logs\\44k'}
2023-03-26 17:39:38,058 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current)
2023-03-26 17:39:40,379 44k INFO Loaded checkpoint './logs\44k\G_0.pth' (iteration 1)
2023-03-26 17:39:40,933 44k INFO Loaded checkpoint './logs\44k\D_0.pth' (iteration 1)
2023-03-26 17:39:56,094 44k INFO Train Epoch: 1 [0%]
2023-03-26 17:39:56,094 44k INFO Losses: [2.5377755165100098, 2.903019666671753, 10.91149616241455, 27.32355308532715, 4.235077857971191], step: 0, lr: 0.0001
2023-03-26 17:40:00,069 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\G_0.pth
2023-03-26 17:40:00,833 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\D_0.pth
2023-03-26 17:41:29,168 44k INFO Train Epoch: 1 [27%]
2023-03-26 17:41:29,169 44k INFO Losses: [2.4305684566497803, 2.2155966758728027, 10.047785758972168, 18.85099983215332, 1.3608992099761963], step: 200, lr: 0.0001
2023-03-26 17:42:53,530 44k INFO Train Epoch: 1 [55%]
2023-03-26 17:42:53,530 44k INFO Losses: [2.4205574989318848, 2.6263515949249268, 8.473373413085938, 15.497400283813477, 1.5911765098571777], step: 400, lr: 0.0001
2023-03-26 17:44:18,316 44k INFO Train Epoch: 1 [82%]
2023-03-26 17:44:18,317 44k INFO Losses: [2.2375619411468506, 2.511993646621704, 14.825691223144531, 23.141586303710938, 0.9749691486358643], step: 600, lr: 0.0001
2023-03-26 17:45:13,228 44k INFO ====> Epoch: 1, cost 335.20 s
2023-03-26 17:45:47,332 44k INFO Train Epoch: 2 [10%]
2023-03-26 17:45:47,333 44k INFO Losses: [2.397797107696533, 2.373901844024658, 8.271860122680664, 15.803826332092285, 0.9859277009963989], step: 800, lr: 9.99875e-05
2023-03-26 17:46:56,083 44k INFO Train Epoch: 2 [37%]
2023-03-26 17:46:56,084 44k INFO Losses: [2.59622859954834, 2.0695106983184814, 9.532870292663574, 15.16479778289795, 1.2511309385299683], step: 1000, lr: 9.99875e-05
2023-03-26 17:46:59,140 44k INFO Saving model and optimizer state at iteration 2 to ./logs\44k\G_1000.pth
2023-03-26 17:46:59,838 44k INFO Saving model and optimizer state at iteration 2 to ./logs\44k\D_1000.pth
2023-03-26 17:48:08,202 44k INFO Train Epoch: 2 [65%]
2023-03-26 17:48:08,202 44k INFO Losses: [2.5527453422546387, 1.9406085014343262, 11.667440414428711, 15.09459400177002, 1.3829652070999146], step: 1200, lr: 9.99875e-05
2023-03-26 17:49:16,170 44k INFO Train Epoch: 2 [92%]
2023-03-26 17:49:16,171 44k INFO Losses: [2.694300651550293, 2.3796257972717285, 9.117003440856934, 14.53477954864502, 1.4188555479049683], step: 1400, lr: 9.99875e-05
2023-03-26 17:49:34,893 44k INFO ====> Epoch: 2, cost 261.67 s
2023-03-26 17:50:33,180 44k INFO Train Epoch: 3 [20%]
2023-03-26 17:50:33,180 44k INFO Losses: [2.5938282012939453, 2.1179003715515137, 8.693191528320312, 14.279634475708008, 1.063474416732788], step: 1600, lr: 9.99750015625e-05
2023-03-26 17:51:40,718 44k INFO Train Epoch: 3 [47%]
2023-03-26 17:51:40,719 44k INFO Losses: [2.639939308166504, 2.2141642570495605, 14.255056381225586, 21.819019317626953, 1.4886901378631592], step: 1800, lr: 9.99750015625e-05
2023-03-26 17:52:48,902 44k INFO Train Epoch: 3 [75%]
2023-03-26 17:52:48,903 44k INFO Losses: [2.212458610534668, 2.7131903171539307, 12.33214282989502, 18.393356323242188, 1.5152887105941772], step: 2000, lr: 9.99750015625e-05
2023-03-26 17:52:51,956 44k INFO Saving model and optimizer state at iteration 3 to ./logs\44k\G_2000.pth
2023-03-26 17:52:52,656 44k INFO Saving model and optimizer state at iteration 3 to ./logs\44k\D_2000.pth
2023-03-26 17:53:55,294 44k INFO ====> Epoch: 3, cost 260.40 s
2023-03-26 17:54:10,066 44k INFO Train Epoch: 4 [2%]
2023-03-26 17:54:10,066 44k INFO Losses: [2.425558090209961, 2.2590694427490234, 11.511456489562988, 21.43431854248047, 1.5592743158340454], step: 2200, lr: 9.996250468730469e-05
2023-03-26 17:55:18,218 44k INFO Train Epoch: 4 [30%]
2023-03-26 17:55:18,219 44k INFO Losses: [2.645364999771118, 2.0146186351776123, 9.450831413269043, 17.087810516357422, 1.1853785514831543], step: 2400, lr: 9.996250468730469e-05
2023-03-26 17:56:27,341 44k INFO Train Epoch: 4 [57%]
2023-03-26 17:56:27,341 44k INFO Losses: [2.5699472427368164, 2.794123649597168, 6.703786849975586, 14.36314868927002, 1.4213221073150635], step: 2600, lr: 9.996250468730469e-05
2023-03-26 17:57:35,779 44k INFO Train Epoch: 4 [85%]
2023-03-26 17:57:35,780 44k INFO Losses: [2.6018178462982178, 2.334325075149536, 12.623315811157227, 20.649280548095703, 1.24234139919281], step: 2800, lr: 9.996250468730469e-05
2023-03-26 17:58:13,732 44k INFO ====> Epoch: 4, cost 258.44 s
2023-03-26 17:58:54,260 44k INFO Train Epoch: 5 [12%]
2023-03-26 17:58:54,261 44k INFO Losses: [2.7838244438171387, 2.1547186374664307, 6.306748390197754, 14.336325645446777, 1.6287577152252197], step: 3000, lr: 9.995000937421877e-05
2023-03-26 17:58:57,535 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\G_3000.pth
2023-03-26 17:58:58,351 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\D_3000.pth
2023-03-26 18:00:08,606 44k INFO Train Epoch: 5 [40%]
2023-03-26 18:00:08,606 44k INFO Losses: [2.3616766929626465, 2.534543037414551, 10.59256649017334, 16.077133178710938, 1.266877293586731], step: 3200, lr: 9.995000937421877e-05
2023-03-26 18:01:16,910 44k INFO Train Epoch: 5 [67%]
2023-03-26 18:01:16,911 44k INFO Losses: [2.5994486808776855, 2.2965142726898193, 8.335776329040527, 14.85177230834961, 1.1047139167785645], step: 3400, lr: 9.995000937421877e-05
2023-03-26 18:02:25,202 44k INFO Train Epoch: 5 [95%]
2023-03-26 18:02:25,202 44k INFO Losses: [2.473801374435425, 2.1478612422943115, 11.251014709472656, 18.507349014282227, 1.3262391090393066], step: 3600, lr: 9.995000937421877e-05
2023-03-26 18:02:38,679 44k INFO ====> Epoch: 5, cost 264.95 s
2023-03-26 18:03:43,268 44k INFO Train Epoch: 6 [22%]
2023-03-26 18:03:43,268 44k INFO Losses: [2.3330955505371094, 2.1014323234558105, 8.225790977478027, 13.843910217285156, 1.0384153127670288], step: 3800, lr: 9.993751562304699e-05
2023-03-26 18:04:51,317 44k INFO Train Epoch: 6 [49%]
2023-03-26 18:04:51,317 44k INFO Losses: [2.625917911529541, 2.358022689819336, 14.715231895446777, 20.801605224609375, 1.2775871753692627], step: 4000, lr: 9.993751562304699e-05
2023-03-26 18:04:54,566 44k INFO Saving model and optimizer state at iteration 6 to ./logs\44k\G_4000.pth
2023-03-26 18:04:55,271 44k INFO Saving model and optimizer state at iteration 6 to ./logs\44k\D_4000.pth
2023-03-26 18:04:55,968 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_1000.pth
2023-03-26 18:04:56,007 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_1000.pth
2023-03-26 18:06:04,309 44k INFO Train Epoch: 6 [77%]
2023-03-26 18:06:04,309 44k INFO Losses: [2.6342339515686035, 2.3278164863586426, 8.934106826782227, 17.502992630004883, 1.3025579452514648], step: 4200, lr: 9.993751562304699e-05
2023-03-26 18:07:01,147 44k INFO ====> Epoch: 6, cost 262.47 s
2023-03-26 18:07:21,228 44k INFO Train Epoch: 7 [4%]
2023-03-26 18:07:21,228 44k INFO Losses: [2.71872878074646, 2.0840580463409424, 9.441400527954102, 13.771171569824219, 2.0024354457855225], step: 4400, lr: 9.99250234335941e-05
2023-03-26 18:08:29,729 44k INFO Train Epoch: 7 [32%]
2023-03-26 18:08:29,729 44k INFO Losses: [2.683831214904785, 2.223313093185425, 9.49968147277832, 16.27731704711914, 1.6022688150405884], step: 4600, lr: 9.99250234335941e-05
2023-03-26 18:09:37,548 44k INFO Train Epoch: 7 [59%]
2023-03-26 18:09:37,549 44k INFO Losses: [2.5431575775146484, 2.251652717590332, 11.220416069030762, 14.8209228515625, 1.8512675762176514], step: 4800, lr: 9.99250234335941e-05
2023-03-26 18:10:45,529 44k INFO Train Epoch: 7 [87%]
2023-03-26 18:10:45,530 44k INFO Losses: [2.621312141418457, 2.5917396545410156, 10.091723442077637, 20.77967071533203, 1.9012998342514038], step: 5000, lr: 9.99250234335941e-05
2023-03-26 18:10:48,617 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\G_5000.pth
2023-03-26 18:10:49,344 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\D_5000.pth
2023-03-26 18:10:50,044 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_2000.pth
2023-03-26 18:10:50,088 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_2000.pth
2023-03-26 18:11:22,369 44k INFO ====> Epoch: 7, cost 261.22 s
2023-03-26 18:12:07,018 44k INFO Train Epoch: 8 [14%]
2023-03-26 18:12:07,018 44k INFO Losses: [2.364821434020996, 2.5530383586883545, 13.18740463256836, 20.40546417236328, 1.319463849067688], step: 5200, lr: 9.991253280566489e-05
2023-03-26 18:13:15,123 44k INFO Train Epoch: 8 [42%]
2023-03-26 18:13:15,124 44k INFO Losses: [2.7404699325561523, 1.8259596824645996, 6.623441696166992, 11.952010154724121, 1.8055057525634766], step: 5400, lr: 9.991253280566489e-05
2023-03-26 18:14:23,488 44k INFO Train Epoch: 8 [69%]
2023-03-26 18:14:23,488 44k INFO Losses: [2.615143299102783, 2.2321488857269287, 14.727906227111816, 21.101221084594727, 1.0454636812210083], step: 5600, lr: 9.991253280566489e-05
2023-03-26 18:15:31,397 44k INFO Train Epoch: 8 [97%]
2023-03-26 18:15:31,397 44k INFO Losses: [2.510842800140381, 2.3249058723449707, 12.244482040405273, 19.545791625976562, 1.3539787530899048], step: 5800, lr: 9.991253280566489e-05
2023-03-26 18:15:39,508 44k INFO ====> Epoch: 8, cost 257.14 s
2023-03-26 18:16:48,980 44k INFO Train Epoch: 9 [24%]
2023-03-26 18:16:48,980 44k INFO Losses: [2.5039987564086914, 2.5685911178588867, 7.713610649108887, 16.99083137512207, 1.3174961805343628], step: 6000, lr: 9.990004373906418e-05
2023-03-26 18:16:52,077 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\G_6000.pth
2023-03-26 18:16:52,794 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\D_6000.pth
2023-03-26 18:16:53,496 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_3000.pth
2023-03-26 18:16:53,528 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_3000.pth
2023-03-26 18:18:01,015 44k INFO Train Epoch: 9 [52%]
2023-03-26 18:18:01,015 44k INFO Losses: [2.7452139854431152, 2.009155511856079, 7.5243353843688965, 13.27002239227295, 1.6816840171813965], step: 6200, lr: 9.990004373906418e-05
2023-03-26 18:19:09,591 44k INFO Train Epoch: 9 [79%]
2023-03-26 18:19:09,591 44k INFO Losses: [2.6896116733551025, 2.317214012145996, 13.586854934692383, 21.118850708007812, 1.6939321756362915], step: 6400, lr: 9.990004373906418e-05
2023-03-26 18:20:00,969 44k INFO ====> Epoch: 9, cost 261.46 s
2023-03-26 18:20:26,612 44k INFO Train Epoch: 10 [7%]
2023-03-26 18:20:26,613 44k INFO Losses: [2.38116717338562, 2.4761710166931152, 10.26457405090332, 13.916464805603027, 1.423701524734497], step: 6600, lr: 9.98875562335968e-05
2023-03-26 18:21:36,408 44k INFO Train Epoch: 10 [34%]
2023-03-26 18:21:36,409 44k INFO Losses: [2.6033682823181152, 2.052231788635254, 7.61380672454834, 16.19424819946289, 1.3552354574203491], step: 6800, lr: 9.98875562335968e-05
2023-03-26 18:22:46,088 44k INFO Train Epoch: 10 [62%]
2023-03-26 18:22:46,089 44k INFO Losses: [2.8540406227111816, 2.013615131378174, 9.860207557678223, 16.183984756469727, 1.046617865562439], step: 7000, lr: 9.98875562335968e-05
2023-03-26 18:22:49,269 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\G_7000.pth
2023-03-26 18:22:50,010 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\D_7000.pth
2023-03-26 18:22:50,776 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_4000.pth
2023-03-26 18:22:50,819 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_4000.pth
2023-03-26 18:23:59,488 44k INFO Train Epoch: 10 [89%]
2023-03-26 18:23:59,488 44k INFO Losses: [2.7886390686035156, 2.085160493850708, 11.16639518737793, 20.36288070678711, 1.4053449630737305], step: 7200, lr: 9.98875562335968e-05
2023-03-26 18:24:26,900 44k INFO ====> Epoch: 10, cost 265.93 s
2023-03-26 18:34:47,550 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 30, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kageaki': 0}, 'model_dir': './logs\\44k'}
2023-03-26 18:34:47,579 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current)
2023-03-26 18:34:49,612 44k INFO Loaded checkpoint './logs\44k\G_7000.pth' (iteration 10)
2023-03-26 18:34:49,971 44k INFO Loaded checkpoint './logs\44k\D_7000.pth' (iteration 10)
2023-03-26 18:35:26,850 44k INFO Train Epoch: 10 [7%]
2023-03-26 18:35:26,851 44k INFO Losses: [2.5533604621887207, 2.188552141189575, 10.346734046936035, 17.699909210205078, 1.293749213218689], step: 6600, lr: 9.987507028906759e-05
2023-03-26 18:36:56,524 44k INFO Train Epoch: 10 [34%]
2023-03-26 18:36:56,524 44k INFO Losses: [2.3003833293914795, 2.4629688262939453, 7.944945812225342, 11.544402122497559, 1.2250933647155762], step: 6800, lr: 9.987507028906759e-05
2023-03-26 18:38:23,483 44k INFO Train Epoch: 10 [62%]
2023-03-26 18:38:23,483 44k INFO Losses: [2.40164852142334, 2.2444326877593994, 9.651368141174316, 16.686256408691406, 0.8747689723968506], step: 7000, lr: 9.987507028906759e-05
2023-03-26 18:38:27,446 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\G_7000.pth
2023-03-26 18:38:28,239 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\D_7000.pth
2023-03-26 18:40:00,614 44k INFO Train Epoch: 10 [89%]
2023-03-26 18:40:00,614 44k INFO Losses: [2.4287614822387695, 2.3594701290130615, 12.557588577270508, 20.547157287597656, 1.0031839609146118], step: 7200, lr: 9.987507028906759e-05
2023-03-26 18:41:11,212 44k INFO ====> Epoch: 10, cost 383.66 s
2023-03-26 18:42:02,702 44k INFO Train Epoch: 11 [16%]
2023-03-26 18:42:02,703 44k INFO Losses: [2.7678236961364746, 2.112701177597046, 8.429422378540039, 18.075698852539062, 1.3546189069747925], step: 7400, lr: 9.986258590528146e-05
2023-03-26 18:43:12,353 44k INFO Train Epoch: 11 [44%]
2023-03-26 18:43:12,353 44k INFO Losses: [2.4518330097198486, 2.41371488571167, 9.840558052062988, 13.752519607543945, 1.434598445892334], step: 7600, lr: 9.986258590528146e-05
2023-03-26 18:44:22,464 44k INFO Train Epoch: 11 [71%]
2023-03-26 18:44:22,465 44k INFO Losses: [2.507891893386841, 2.341181993484497, 10.208455085754395, 14.264325141906738, 1.0694255828857422], step: 7800, lr: 9.986258590528146e-05
2023-03-26 18:45:32,333 44k INFO Train Epoch: 11 [99%]
2023-03-26 18:45:32,333 44k INFO Losses: [2.123678207397461, 3.1937904357910156, 9.68159008026123, 16.58867645263672, 1.3500096797943115], step: 8000, lr: 9.986258590528146e-05
2023-03-26 18:45:35,497 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\G_8000.pth
2023-03-26 18:45:36,281 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\D_8000.pth
2023-03-26 18:45:36,963 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_5000.pth
2023-03-26 18:45:36,996 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_5000.pth
2023-03-26 18:45:39,810 44k INFO ====> Epoch: 11, cost 268.60 s
2023-03-26 18:46:55,791 44k INFO Train Epoch: 12 [26%]
2023-03-26 18:46:55,792 44k INFO Losses: [2.426657199859619, 2.042884111404419, 11.742992401123047, 18.878435134887695, 1.4948300123214722], step: 8200, lr: 9.98501030820433e-05
2023-03-26 18:48:03,829 44k INFO Train Epoch: 12 [54%]
2023-03-26 18:48:03,829 44k INFO Losses: [2.3794922828674316, 2.20752215385437, 11.286712646484375, 17.970600128173828, 1.1025506258010864], step: 8400, lr: 9.98501030820433e-05
2023-03-26 18:49:13,845 44k INFO Train Epoch: 12 [81%]
2023-03-26 18:49:13,845 44k INFO Losses: [2.699594020843506, 2.111863851547241, 7.350751876831055, 21.120838165283203, 1.6042152643203735], step: 8600, lr: 9.98501030820433e-05
2023-03-26 18:50:01,973 44k INFO ====> Epoch: 12, cost 262.16 s
2023-03-26 18:50:34,023 44k INFO Train Epoch: 13 [9%]
2023-03-26 18:50:34,024 44k INFO Losses: [2.8004846572875977, 2.1405537128448486, 5.862995624542236, 10.480680465698242, 1.271390438079834], step: 8800, lr: 9.983762181915804e-05
2023-03-26 18:51:44,536 44k INFO Train Epoch: 13 [36%]
2023-03-26 18:51:44,537 44k INFO Losses: [2.434276580810547, 2.4731249809265137, 11.914716720581055, 18.728864669799805, 0.9573069214820862], step: 9000, lr: 9.983762181915804e-05
2023-03-26 18:51:47,636 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\G_9000.pth
2023-03-26 18:51:48,360 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\D_9000.pth
2023-03-26 18:51:49,031 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_6000.pth
2023-03-26 18:51:49,072 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_6000.pth
2023-03-26 18:52:58,432 44k INFO Train Epoch: 13 [64%]
2023-03-26 18:52:58,433 44k INFO Losses: [2.540266990661621, 2.3790817260742188, 11.579080581665039, 16.329910278320312, 1.2830095291137695], step: 9200, lr: 9.983762181915804e-05
2023-03-26 18:54:08,227 44k INFO Train Epoch: 13 [91%]
2023-03-26 18:54:08,227 44k INFO Losses: [2.422616958618164, 2.354771375656128, 10.294193267822266, 16.54452133178711, 1.2835444211959839], step: 9400, lr: 9.983762181915804e-05
2023-03-26 18:54:30,315 44k INFO ====> Epoch: 13, cost 268.34 s
2023-03-26 18:55:27,748 44k INFO Train Epoch: 14 [19%]
2023-03-26 18:55:27,748 44k INFO Losses: [2.7023301124572754, 2.224945068359375, 9.906207084655762, 17.76188087463379, 0.881771981716156], step: 9600, lr: 9.982514211643064e-05
2023-03-26 18:56:36,426 44k INFO Train Epoch: 14 [46%]
2023-03-26 18:56:36,426 44k INFO Losses: [2.621765613555908, 2.3448433876037598, 11.42595386505127, 17.85344886779785, 0.9860710501670837], step: 9800, lr: 9.982514211643064e-05
2023-03-26 18:57:45,119 44k INFO Train Epoch: 14 [74%]
2023-03-26 18:57:45,120 44k INFO Losses: [2.376032829284668, 2.520024538040161, 11.349810600280762, 20.452016830444336, 1.183514952659607], step: 10000, lr: 9.982514211643064e-05
2023-03-26 18:57:48,255 44k INFO Saving model and optimizer state at iteration 14 to ./logs\44k\G_10000.pth
2023-03-26 18:57:48,965 44k INFO Saving model and optimizer state at iteration 14 to ./logs\44k\D_10000.pth
2023-03-26 18:57:49,630 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_7000.pth
2023-03-26 18:57:49,661 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_7000.pth
2023-03-26 18:58:54,794 44k INFO ====> Epoch: 14, cost 264.48 s
2023-03-26 18:59:07,069 44k INFO Train Epoch: 15 [1%]
2023-03-26 18:59:07,069 44k INFO Losses: [2.5483999252319336, 2.1715004444122314, 9.351938247680664, 13.837029457092285, 1.2241127490997314], step: 10200, lr: 9.981266397366609e-05
2023-03-26 19:00:15,943 44k INFO Train Epoch: 15 [29%]
2023-03-26 19:00:15,943 44k INFO Losses: [2.504495859146118, 2.5773801803588867, 11.583601951599121, 19.028501510620117, 1.3856337070465088], step: 10400, lr: 9.981266397366609e-05
2023-03-26 19:01:24,153 44k INFO Train Epoch: 15 [56%]
2023-03-26 19:01:24,153 44k INFO Losses: [2.5489957332611084, 2.3842098712921143, 7.237650394439697, 15.037101745605469, 1.4804646968841553], step: 10600, lr: 9.981266397366609e-05
2023-03-26 19:02:32,750 44k INFO Train Epoch: 15 [84%]
2023-03-26 19:02:32,750 44k INFO Losses: [2.5680196285247803, 2.1249020099639893, 5.117791652679443, 11.572392463684082, 0.9273977875709534], step: 10800, lr: 9.981266397366609e-05
2023-03-26 19:03:13,519 44k INFO ====> Epoch: 15, cost 258.72 s
2023-03-26 19:03:50,154 44k INFO Train Epoch: 16 [11%]
2023-03-26 19:03:50,154 44k INFO Losses: [2.508527994155884, 2.306581735610962, 10.516632080078125, 19.146289825439453, 1.234512448310852], step: 11000, lr: 9.980018739066937e-05
2023-03-26 19:03:53,273 44k INFO Saving model and optimizer state at iteration 16 to ./logs\44k\G_11000.pth
2023-03-26 19:03:53,998 44k INFO Saving model and optimizer state at iteration 16 to ./logs\44k\D_11000.pth
2023-03-26 19:03:54,672 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_8000.pth
2023-03-26 19:03:54,703 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_8000.pth
2023-03-26 19:05:03,574 44k INFO Train Epoch: 16 [38%]
2023-03-26 19:05:03,574 44k INFO Losses: [2.681306838989258, 1.9447368383407593, 9.382843971252441, 19.791671752929688, 1.1135826110839844], step: 11200, lr: 9.980018739066937e-05
2023-03-26 19:06:12,120 44k INFO Train Epoch: 16 [66%]
2023-03-26 19:06:12,120 44k INFO Losses: [2.779426336288452, 1.8779959678649902, 5.856019020080566, 12.40982437133789, 1.3707494735717773], step: 11400, lr: 9.980018739066937e-05
2023-03-26 19:07:20,636 44k INFO Train Epoch: 16 [93%]
2023-03-26 19:07:20,637 44k INFO Losses: [2.4304559230804443, 2.3551981449127197, 10.659273147583008, 16.688438415527344, 1.0421156883239746], step: 11600, lr: 9.980018739066937e-05
2023-03-26 19:07:36,827 44k INFO ====> Epoch: 16, cost 263.31 s
2023-03-26 19:08:38,579 44k INFO Train Epoch: 17 [21%]
2023-03-26 19:08:38,579 44k INFO Losses: [2.632368326187134, 2.3899827003479004, 8.575041770935059, 15.419698715209961, 1.3164335489273071], step: 11800, lr: 9.978771236724554e-05
2023-03-26 19:09:46,608 44k INFO Train Epoch: 17 [48%]
2023-03-26 19:09:46,608 44k INFO Losses: [2.5371720790863037, 2.409085512161255, 9.42126178741455, 13.208368301391602, 0.8270394206047058], step: 12000, lr: 9.978771236724554e-05
2023-03-26 19:09:49,748 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\G_12000.pth
2023-03-26 19:09:50,467 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\D_12000.pth
2023-03-26 19:09:51,130 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_9000.pth
2023-03-26 19:09:51,161 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_9000.pth
2023-03-26 19:11:00,090 44k INFO Train Epoch: 17 [76%]
2023-03-26 19:11:00,091 44k INFO Losses: [2.5986952781677246, 2.261148691177368, 9.256317138671875, 16.88166046142578, 0.9449052810668945], step: 12200, lr: 9.978771236724554e-05
2023-03-26 19:11:59,911 44k INFO ====> Epoch: 17, cost 263.08 s
2023-03-26 19:12:17,538 44k INFO Train Epoch: 18 [3%]
2023-03-26 19:12:17,539 44k INFO Losses: [2.5862667560577393, 2.404651165008545, 7.419188022613525, 15.658306121826172, 1.6325950622558594], step: 12400, lr: 9.977523890319963e-05
2023-03-26 19:13:26,408 44k INFO Train Epoch: 18 [31%]
2023-03-26 19:13:26,409 44k INFO Losses: [2.5046818256378174, 2.566683292388916, 11.40576457977295, 20.323307037353516, 0.7503674030303955], step: 12600, lr: 9.977523890319963e-05
2023-03-26 19:14:35,924 44k INFO Train Epoch: 18 [58%]
2023-03-26 19:14:35,924 44k INFO Losses: [2.342552423477173, 2.4409661293029785, 11.510695457458496, 14.227130889892578, 1.170577049255371], step: 12800, lr: 9.977523890319963e-05
2023-03-26 19:15:46,107 44k INFO Train Epoch: 18 [86%]
2023-03-26 19:15:46,107 44k INFO Losses: [2.318747043609619, 2.397803544998169, 14.357951164245605, 21.396282196044922, 1.337156891822815], step: 13000, lr: 9.977523890319963e-05
2023-03-26 19:15:49,305 44k INFO Saving model and optimizer state at iteration 18 to ./logs\44k\G_13000.pth
2023-03-26 19:15:50,034 44k INFO Saving model and optimizer state at iteration 18 to ./logs\44k\D_13000.pth
2023-03-26 19:15:50,709 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_10000.pth
2023-03-26 19:15:50,749 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_10000.pth
2023-03-26 19:16:26,771 44k INFO ====> Epoch: 18, cost 266.86 s
2023-03-26 19:17:09,924 44k INFO Train Epoch: 19 [13%]
2023-03-26 19:17:09,924 44k INFO Losses: [2.453674554824829, 2.147922992706299, 12.158828735351562, 19.503765106201172, 1.561596393585205], step: 13200, lr: 9.976276699833672e-05
2023-03-26 19:18:20,302 44k INFO Train Epoch: 19 [41%]
2023-03-26 19:18:20,302 44k INFO Losses: [2.651553153991699, 2.187086820602417, 6.514610767364502, 12.54338264465332, 1.6053649187088013], step: 13400, lr: 9.976276699833672e-05
2023-03-26 19:19:30,114 44k INFO Train Epoch: 19 [68%]
2023-03-26 19:19:30,114 44k INFO Losses: [2.2522342205047607, 2.427008867263794, 12.093758583068848, 19.923839569091797, 0.429791659116745], step: 13600, lr: 9.976276699833672e-05
2023-03-26 19:20:38,688 44k INFO Train Epoch: 19 [96%]
2023-03-26 19:20:38,688 44k INFO Losses: [2.721558094024658, 1.9361917972564697, 5.935189247131348, 14.535965919494629, 1.0994250774383545], step: 13800, lr: 9.976276699833672e-05
2023-03-26 19:20:49,538 44k INFO ====> Epoch: 19, cost 262.77 s
2023-03-26 19:21:56,853 44k INFO Train Epoch: 20 [23%]
2023-03-26 19:21:56,853 44k INFO Losses: [2.2145438194274902, 2.534202814102173, 11.711210250854492, 19.500167846679688, 1.9434586763381958], step: 14000, lr: 9.975029665246193e-05
2023-03-26 19:21:59,933 44k INFO Saving model and optimizer state at iteration 20 to ./logs\44k\G_14000.pth
2023-03-26 19:22:00,656 44k INFO Saving model and optimizer state at iteration 20 to ./logs\44k\D_14000.pth
2023-03-26 19:22:01,329 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_11000.pth
2023-03-26 19:22:01,371 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_11000.pth
2023-03-26 19:23:09,187 44k INFO Train Epoch: 20 [51%]
2023-03-26 19:23:09,188 44k INFO Losses: [2.522836685180664, 2.355647563934326, 10.368391036987305, 19.6075439453125, 1.434714913368225], step: 14200, lr: 9.975029665246193e-05
2023-03-26 19:24:18,175 44k INFO Train Epoch: 20 [78%]
2023-03-26 19:24:18,175 44k INFO Losses: [2.4127960205078125, 2.2381203174591064, 11.525047302246094, 20.181774139404297, 1.293044924736023], step: 14400, lr: 9.975029665246193e-05
2023-03-26 19:25:12,673 44k INFO ====> Epoch: 20, cost 263.13 s
2023-03-26 19:25:35,634 44k INFO Train Epoch: 21 [5%]
2023-03-26 19:25:35,635 44k INFO Losses: [2.256744384765625, 2.5320334434509277, 11.82845687866211, 20.79422950744629, 1.6302387714385986], step: 14600, lr: 9.973782786538036e-05
2023-03-26 19:26:45,939 44k INFO Train Epoch: 21 [33%]
2023-03-26 19:26:45,940 44k INFO Losses: [2.2314088344573975, 2.476482391357422, 13.351426124572754, 19.801959991455078, 1.2564705610275269], step: 14800, lr: 9.973782786538036e-05
2023-03-26 19:27:57,048 44k INFO Train Epoch: 21 [60%]
2023-03-26 19:27:57,049 44k INFO Losses: [2.4064671993255615, 2.1637048721313477, 10.250255584716797, 16.544618606567383, 1.50368070602417], step: 15000, lr: 9.973782786538036e-05
2023-03-26 19:28:00,452 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\G_15000.pth
2023-03-26 19:28:01,207 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\D_15000.pth
2023-03-26 19:28:01,887 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_12000.pth
2023-03-26 19:28:01,924 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_12000.pth
2023-03-26 19:29:13,747 44k INFO Train Epoch: 21 [88%]
2023-03-26 19:29:13,747 44k INFO Losses: [2.5564846992492676, 2.322659730911255, 12.607988357543945, 17.174489974975586, 1.3598898649215698], step: 15200, lr: 9.973782786538036e-05
2023-03-26 19:29:44,680 44k INFO ====> Epoch: 21, cost 272.01 s
2023-03-26 19:30:34,591 44k INFO Train Epoch: 22 [15%]
2023-03-26 19:30:34,592 44k INFO Losses: [2.803964138031006, 1.8290067911148071, 11.378033638000488, 18.429790496826172, 0.9149801731109619], step: 15400, lr: 9.972536063689719e-05
2023-03-26 19:31:45,027 44k INFO Train Epoch: 22 [43%]
2023-03-26 19:31:45,028 44k INFO Losses: [2.193533182144165, 2.4682841300964355, 13.08427906036377, 19.601545333862305, 1.1315336227416992], step: 15600, lr: 9.972536063689719e-05
2023-03-26 19:32:55,570 44k INFO Train Epoch: 22 [70%]
2023-03-26 19:32:55,570 44k INFO Losses: [2.635566473007202, 2.428760528564453, 9.543636322021484, 20.080408096313477, 1.233144760131836], step: 15800, lr: 9.972536063689719e-05
2023-03-26 19:34:05,920 44k INFO Train Epoch: 22 [98%]
2023-03-26 19:34:05,921 44k INFO Losses: [2.676663398742676, 2.063350200653076, 5.968891620635986, 12.396279335021973, 1.0327547788619995], step: 16000, lr: 9.972536063689719e-05
2023-03-26 19:34:09,104 44k INFO Saving model and optimizer state at iteration 22 to ./logs\44k\G_16000.pth
2023-03-26 19:34:09,836 44k INFO Saving model and optimizer state at iteration 22 to ./logs\44k\D_16000.pth
2023-03-26 19:34:10,534 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_13000.pth
2023-03-26 19:34:10,575 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_13000.pth
2023-03-26 19:34:16,117 44k INFO ====> Epoch: 22, cost 271.44 s
2023-03-26 19:35:31,344 44k INFO Train Epoch: 23 [25%]
2023-03-26 19:35:31,344 44k INFO Losses: [2.559652328491211, 2.3552708625793457, 6.873598098754883, 15.366195678710938, 1.1844950914382935], step: 16200, lr: 9.971289496681757e-05
2023-03-26 19:36:42,517 44k INFO Train Epoch: 23 [53%]
2023-03-26 19:36:42,518 44k INFO Losses: [2.428539991378784, 2.4367523193359375, 9.574040412902832, 19.215234756469727, 1.4362854957580566], step: 16400, lr: 9.971289496681757e-05
2023-03-26 19:37:53,037 44k INFO Train Epoch: 23 [80%]
2023-03-26 19:37:53,037 44k INFO Losses: [2.409686326980591, 2.3020808696746826, 12.331310272216797, 20.234670639038086, 1.1331175565719604], step: 16600, lr: 9.971289496681757e-05
2023-03-26 19:38:43,155 44k INFO ====> Epoch: 23, cost 267.04 s
2023-03-26 19:39:12,473 44k INFO Train Epoch: 24 [8%]
2023-03-26 19:39:12,474 44k INFO Losses: [2.4319372177124023, 2.559391975402832, 7.149979591369629, 14.216936111450195, 1.3410944938659668], step: 16800, lr: 9.970043085494672e-05
2023-03-26 19:40:24,696 44k INFO Train Epoch: 24 [35%]
2023-03-26 19:40:24,697 44k INFO Losses: [2.5724716186523438, 2.2109527587890625, 13.943380355834961, 19.470436096191406, 1.263765811920166], step: 17000, lr: 9.970043085494672e-05
2023-03-26 19:40:27,837 44k INFO Saving model and optimizer state at iteration 24 to ./logs\44k\G_17000.pth
2023-03-26 19:40:28,573 44k INFO Saving model and optimizer state at iteration 24 to ./logs\44k\D_17000.pth
2023-03-26 19:40:29,280 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_14000.pth
2023-03-26 19:40:29,320 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_14000.pth
2023-03-26 19:41:39,648 44k INFO Train Epoch: 24 [63%]
2023-03-26 19:41:39,648 44k INFO Losses: [2.3632164001464844, 2.2174670696258545, 14.03620719909668, 20.134252548217773, 1.1238229274749756], step: 17200, lr: 9.970043085494672e-05
2023-03-26 19:42:50,124 44k INFO Train Epoch: 24 [90%]
2023-03-26 19:42:50,124 44k INFO Losses: [2.4203712940216064, 2.272968292236328, 8.129199981689453, 18.001089096069336, 1.2764698266983032], step: 17400, lr: 9.970043085494672e-05
2023-03-26 19:43:15,354 44k INFO ====> Epoch: 24, cost 272.20 s
2023-03-26 19:44:33,092 44k INFO Train Epoch: 25 [18%]
2023-03-26 19:44:33,092 44k INFO Losses: [2.627995491027832, 1.990657091140747, 8.367237091064453, 13.38406753540039, 1.816210150718689], step: 17600, lr: 9.968796830108985e-05
2023-03-26 19:46:20,442 44k INFO Train Epoch: 25 [45%]
2023-03-26 19:46:20,442 44k INFO Losses: [2.4526002407073975, 2.442440986633301, 12.983800888061523, 19.65188217163086, 1.2659069299697876], step: 17800, lr: 9.968796830108985e-05
2023-03-26 19:48:05,774 44k INFO Train Epoch: 25 [73%]
2023-03-26 19:48:05,774 44k INFO Losses: [2.3064093589782715, 2.238471508026123, 11.532581329345703, 18.76743507385254, 1.8713524341583252], step: 18000, lr: 9.968796830108985e-05
2023-03-26 19:48:09,439 44k INFO Saving model and optimizer state at iteration 25 to ./logs\44k\G_18000.pth
2023-03-26 19:48:10,574 44k INFO Saving model and optimizer state at iteration 25 to ./logs\44k\D_18000.pth
2023-03-26 19:48:11,672 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_15000.pth
2023-03-26 19:48:11,718 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_15000.pth
2023-03-26 19:49:55,679 44k INFO ====> Epoch: 25, cost 400.33 s
2023-03-26 19:50:05,622 44k INFO Train Epoch: 26 [0%]
2023-03-26 19:50:05,623 44k INFO Losses: [2.5450186729431152, 2.6062145233154297, 11.213815689086914, 19.335420608520508, 1.6555109024047852], step: 18200, lr: 9.967550730505221e-05
2023-03-26 19:51:26,360 44k INFO Train Epoch: 26 [27%]
2023-03-26 19:51:26,361 44k INFO Losses: [2.311784505844116, 2.263667106628418, 8.29580307006836, 14.4016695022583, 1.2868053913116455], step: 18400, lr: 9.967550730505221e-05
2023-03-26 19:52:37,041 44k INFO Train Epoch: 26 [55%]
2023-03-26 19:52:37,041 44k INFO Losses: [2.5044937133789062, 2.6533823013305664, 10.89778995513916, 15.753846168518066, 1.1692155599594116], step: 18600, lr: 9.967550730505221e-05
2023-03-26 19:53:48,719 44k INFO Train Epoch: 26 [82%]
2023-03-26 19:53:48,720 44k INFO Losses: [2.5673725605010986, 2.0504236221313477, 8.545933723449707, 12.596656799316406, 0.950874924659729], step: 18800, lr: 9.967550730505221e-05
2023-03-26 19:54:33,689 44k INFO ====> Epoch: 26, cost 278.01 s
2023-03-26 19:55:08,576 44k INFO Train Epoch: 27 [10%]
2023-03-26 19:55:08,577 44k INFO Losses: [2.646456241607666, 2.1287734508514404, 12.250157356262207, 13.04813003540039, 1.4722940921783447], step: 19000, lr: 9.966304786663908e-05
2023-03-26 19:55:11,721 44k INFO Saving model and optimizer state at iteration 27 to ./logs\44k\G_19000.pth
2023-03-26 19:55:12,442 44k INFO Saving model and optimizer state at iteration 27 to ./logs\44k\D_19000.pth
2023-03-26 19:55:13,143 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_16000.pth
2023-03-26 19:55:13,173 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_16000.pth
2023-03-26 19:56:23,822 44k INFO Train Epoch: 27 [37%]
2023-03-26 19:56:23,822 44k INFO Losses: [2.7408814430236816, 2.0392229557037354, 13.874476432800293, 19.85894012451172, 1.4021087884902954], step: 19200, lr: 9.966304786663908e-05
2023-03-26 19:57:34,496 44k INFO Train Epoch: 27 [65%]
2023-03-26 19:57:34,497 44k INFO Losses: [2.3849587440490723, 2.4459309577941895, 12.022232055664062, 16.191383361816406, 1.4396075010299683], step: 19400, lr: 9.966304786663908e-05
2023-03-26 19:58:46,522 44k INFO Train Epoch: 27 [92%]
2023-03-26 19:58:46,522 44k INFO Losses: [2.5721755027770996, 2.3741061687469482, 10.125975608825684, 15.761423110961914, 1.5095386505126953], step: 19600, lr: 9.966304786663908e-05
2023-03-26 19:59:06,327 44k INFO ====> Epoch: 27, cost 272.64 s
2023-03-26 20:00:08,270 44k INFO Train Epoch: 28 [20%]
2023-03-26 20:00:08,271 44k INFO Losses: [2.6713407039642334, 2.08626127243042, 12.329195022583008, 16.386035919189453, 1.2455395460128784], step: 19800, lr: 9.965058998565574e-05
2023-03-26 20:01:20,601 44k INFO Train Epoch: 28 [47%]
2023-03-26 20:01:20,601 44k INFO Losses: [2.3901543617248535, 2.3985085487365723, 11.719197273254395, 18.378982543945312, 1.1770191192626953], step: 20000, lr: 9.965058998565574e-05
2023-03-26 20:01:23,899 44k INFO Saving model and optimizer state at iteration 28 to ./logs\44k\G_20000.pth
2023-03-26 20:01:24,637 44k INFO Saving model and optimizer state at iteration 28 to ./logs\44k\D_20000.pth
2023-03-26 20:01:25,340 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_17000.pth
2023-03-26 20:01:25,371 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_17000.pth 2023-03-26 20:02:36,668 44k INFO Train Epoch: 28 [75%] 2023-03-26 20:02:36,669 44k INFO Losses: [2.530269145965576, 2.4520785808563232, 13.67650032043457, 20.522214889526367, 1.2241243124008179], step: 20200, lr: 9.965058998565574e-05 2023-03-26 20:03:41,530 44k INFO ====> Epoch: 28, cost 275.20 s 2023-03-26 20:03:57,080 44k INFO Train Epoch: 29 [2%] 2023-03-26 20:03:57,081 44k INFO Losses: [2.7208266258239746, 1.8831546306610107, 8.64593505859375, 13.37922191619873, 1.6189135313034058], step: 20400, lr: 9.963813366190753e-05 2023-03-26 20:05:08,706 44k INFO Train Epoch: 29 [30%] 2023-03-26 20:05:08,707 44k INFO Losses: [2.4101641178131104, 2.4209752082824707, 10.895308494567871, 16.480730056762695, 0.8267279863357544], step: 20600, lr: 9.963813366190753e-05 2023-03-26 20:06:19,103 44k INFO Train Epoch: 29 [57%] 2023-03-26 20:06:19,103 44k INFO Losses: [2.302244186401367, 2.505117416381836, 11.002657890319824, 20.075153350830078, 1.1351284980773926], step: 20800, lr: 9.963813366190753e-05 2023-03-26 20:07:30,254 44k INFO Train Epoch: 29 [85%] 2023-03-26 20:07:30,254 44k INFO Losses: [2.515221118927002, 2.2566659450531006, 7.678509712219238, 13.22794246673584, 0.995908796787262], step: 21000, lr: 9.963813366190753e-05 2023-03-26 20:07:33,374 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\G_21000.pth 2023-03-26 20:07:34,209 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\D_21000.pth 2023-03-26 20:07:34,929 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_18000.pth 2023-03-26 20:07:34,961 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_18000.pth 2023-03-26 20:08:18,645 44k INFO ====> Epoch: 29, cost 277.12 s 2023-03-26 20:09:14,242 44k INFO Train Epoch: 30 [12%] 2023-03-26 20:09:14,243 44k INFO Losses: [2.5049850940704346, 2.070011615753174, 8.637166023254395, 16.170169830322266, 0.9112614393234253], step: 21200, lr: 9.962567889519979e-05 2023-03-26 20:11:00,343 44k INFO Train Epoch: 30 [40%] 2023-03-26 20:11:00,344 44k INFO Losses: [2.2979304790496826, 2.345335006713867, 12.233209609985352, 18.85540199279785, 1.0354423522949219], step: 21400, lr: 9.962567889519979e-05 2023-03-26 20:12:46,227 44k INFO Train Epoch: 30 [67%] 2023-03-26 20:12:46,228 44k INFO Losses: [2.483952283859253, 2.190819025039673, 6.3163042068481445, 12.24166202545166, 0.9871227741241455], step: 21600, lr: 9.962567889519979e-05 2023-03-26 20:14:31,271 44k INFO Train Epoch: 30 [95%] 2023-03-26 20:14:31,272 44k INFO Losses: [2.5674543380737305, 2.299410581588745, 11.40297794342041, 19.191362380981445, 1.645627498626709], step: 21800, lr: 9.962567889519979e-05 2023-03-26 20:14:52,071 44k INFO ====> Epoch: 30, cost 393.43 s 2023-03-26 20:21:58,724 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 50, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 
'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kageaki': 0}, 'model_dir': './logs\\44k'} 2023-03-26 20:21:58,750 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current) 2023-03-26 20:22:00,767 44k INFO Loaded checkpoint './logs\44k\G_21000.pth' (iteration 29) 2023-03-26 20:22:01,116 44k INFO Loaded checkpoint './logs\44k\D_21000.pth' (iteration 29) 2023-03-26 20:22:23,128 44k INFO Train Epoch: 29 [2%] 2023-03-26 20:22:23,129 44k INFO Losses: [2.1818861961364746, 2.5212302207946777, 11.428882598876953, 16.844091415405273, 1.090970754623413], step: 20400, lr: 9.962567889519979e-05 2023-03-26 20:23:52,726 44k INFO Train Epoch: 29 [30%] 2023-03-26 20:23:52,726 44k INFO Losses: [2.506913185119629, 2.2178587913513184, 10.033429145812988, 14.903549194335938, 0.8776416182518005], step: 20600, lr: 9.962567889519979e-05 2023-03-26 20:25:17,515 44k INFO Train Epoch: 29 [57%] 2023-03-26 20:25:17,515 44k INFO Losses: [2.3614919185638428, 2.357292413711548, 10.665281295776367, 19.459697723388672, 1.2023992538452148], step: 20800, lr: 9.962567889519979e-05 2023-03-26 20:26:44,436 44k INFO Train Epoch: 29 [85%] 2023-03-26 20:26:44,437 44k INFO Losses: [2.713144540786743, 2.6324923038482666, 6.779805660247803, 11.422992706298828, 1.450287938117981], step: 21000, lr: 9.962567889519979e-05 2023-03-26 20:26:48,221 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\G_21000.pth 2023-03-26 20:26:48,988 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\D_21000.pth 2023-03-26 20:27:38,151 44k INFO ====> Epoch: 29, cost 339.43 s 2023-03-26 20:28:18,032 44k INFO Train Epoch: 30 [12%] 2023-03-26 20:28:18,033 44k INFO Losses: [2.406369924545288, 2.177821636199951, 10.211670875549316, 
17.45622444152832, 0.887344241142273], step: 21200, lr: 9.961322568533789e-05 2023-03-26 20:29:27,639 44k INFO Train Epoch: 30 [40%] 2023-03-26 20:29:27,640 44k INFO Losses: [2.3278651237487793, 2.299908399581909, 11.354140281677246, 18.174701690673828, 1.1613898277282715], step: 21400, lr: 9.961322568533789e-05 2023-03-26 20:30:37,527 44k INFO Train Epoch: 30 [67%] 2023-03-26 20:30:37,527 44k INFO Losses: [2.475722312927246, 2.324679136276245, 7.398623466491699, 11.126361846923828, 0.7156153321266174], step: 21600, lr: 9.961322568533789e-05 2023-03-26 20:31:47,856 44k INFO Train Epoch: 30 [95%] 2023-03-26 20:31:47,856 44k INFO Losses: [2.4085958003997803, 2.380079984664917, 11.55059814453125, 17.69883155822754, 1.3650686740875244], step: 21800, lr: 9.961322568533789e-05 2023-03-26 20:32:01,545 44k INFO ====> Epoch: 30, cost 263.39 s 2023-03-26 20:33:07,044 44k INFO Train Epoch: 31 [22%] 2023-03-26 20:33:07,044 44k INFO Losses: [2.746565341949463, 2.101046562194824, 11.939288139343262, 18.45298194885254, 1.276940107345581], step: 22000, lr: 9.960077403212722e-05 2023-03-26 20:33:10,119 44k INFO Saving model and optimizer state at iteration 31 to ./logs\44k\G_22000.pth 2023-03-26 20:33:10,900 44k INFO Saving model and optimizer state at iteration 31 to ./logs\44k\D_22000.pth 2023-03-26 20:33:11,606 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_19000.pth 2023-03-26 20:33:11,637 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_19000.pth 2023-03-26 20:34:20,249 44k INFO Train Epoch: 31 [49%] 2023-03-26 20:34:20,250 44k INFO Losses: [2.622921943664551, 2.0744359493255615, 13.793534278869629, 17.437915802001953, 1.4368258714675903], step: 22200, lr: 9.960077403212722e-05 2023-03-26 20:35:29,850 44k INFO Train Epoch: 31 [77%] 2023-03-26 20:35:29,850 44k INFO Losses: [2.7193901538848877, 2.395512104034424, 8.611534118652344, 17.026023864746094, 1.804220199584961], step: 22400, lr: 9.960077403212722e-05 2023-03-26 20:36:27,662 44k INFO ====> Epoch: 31, cost 266.12 s 2023-03-26 20:36:48,354 44k INFO Train Epoch: 32 [4%] 2023-03-26 20:36:48,355 44k INFO Losses: [2.5295565128326416, 2.292466640472412, 13.563910484313965, 19.43305015563965, 1.5943859815597534], step: 22600, lr: 9.95883239353732e-05 2023-03-26 20:37:58,105 44k INFO Train Epoch: 32 [32%] 2023-03-26 20:37:58,106 44k INFO Losses: [2.6768863201141357, 2.205745220184326, 9.992913246154785, 17.748117446899414, 1.507771372795105], step: 22800, lr: 9.95883239353732e-05 2023-03-26 20:39:05,937 44k INFO Train Epoch: 32 [59%] 2023-03-26 20:39:05,938 44k INFO Losses: [2.531132698059082, 2.1875574588775635, 10.714558601379395, 15.263238906860352, 0.9883330464363098], step: 23000, lr: 9.95883239353732e-05 2023-03-26 20:39:08,984 44k INFO Saving model and optimizer state at iteration 32 to ./logs\44k\G_23000.pth 2023-03-26 20:39:09,751 44k INFO Saving model and optimizer state at iteration 32 to ./logs\44k\D_23000.pth 2023-03-26 20:39:10,439 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_20000.pth 2023-03-26 20:39:10,470 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_20000.pth 2023-03-26 20:40:18,245 44k INFO Train Epoch: 32 [87%] 2023-03-26 20:40:18,246 44k INFO Losses: [2.4366674423217773, 2.4468047618865967, 12.758763313293457, 20.2938175201416, 1.3057420253753662], step: 23200, lr: 9.95883239353732e-05 2023-03-26 20:40:50,525 44k INFO ====> Epoch: 32, cost 262.86 s 2023-03-26 20:41:35,186 44k INFO Train Epoch: 33 [14%] 2023-03-26 20:41:35,187 44k INFO Losses: [2.4389867782592773, 2.3163771629333496, 10.620076179504395, 18.002946853637695, 1.3941969871520996], step: 23400, lr: 9.957587539488128e-05 2023-03-26 20:42:43,145 44k INFO Train Epoch: 33 [42%] 2023-03-26 20:42:43,145 44k INFO Losses: [2.4462387561798096, 2.3331246376037598, 12.116209030151367, 18.596160888671875, 1.4406756162643433], step: 23600, lr: 9.957587539488128e-05 2023-03-26 20:43:51,420 44k INFO Train Epoch: 33 [69%] 2023-03-26 20:43:51,420 44k INFO Losses: [2.7295923233032227, 2.0312931537628174, 8.300894737243652, 11.675796508789062, 1.3705692291259766], step: 23800, lr: 9.957587539488128e-05 2023-03-26 20:44:59,297 44k INFO Train Epoch: 33 [97%] 2023-03-26 20:44:59,297 44k INFO Losses: [2.4058234691619873, 2.4040396213531494, 10.637872695922852, 16.23623275756836, 0.804890513420105], step: 24000, lr: 9.957587539488128e-05 2023-03-26 20:45:02,445 44k INFO Saving model and optimizer state at iteration 33 to ./logs\44k\G_24000.pth 2023-03-26 20:45:03,161 44k INFO Saving model and optimizer state at iteration 33 to ./logs\44k\D_24000.pth 2023-03-26 20:45:03,829 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_21000.pth 2023-03-26 20:45:03,859 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_21000.pth 2023-03-26 20:45:11,843 44k INFO ====> Epoch: 33, cost 261.32 s 2023-03-26 20:46:21,395 44k INFO Train Epoch: 34 [24%] 2023-03-26 20:46:21,395 44k INFO Losses: [2.4041364192962646, 2.6986656188964844, 12.151071548461914, 16.378358840942383, 1.0397751331329346], step: 24200, lr: 9.956342841045691e-05 2023-03-26 20:47:29,029 44k INFO Train Epoch: 34 [52%] 2023-03-26 20:47:29,029 44k INFO Losses: [2.4810895919799805, 2.3740761280059814, 8.478053092956543, 15.712090492248535, 0.899149477481842], step: 24400, lr: 9.956342841045691e-05 2023-03-26 20:48:45,633 44k INFO Train Epoch: 34 [79%] 2023-03-26 20:48:45,633 44k INFO Losses: [2.3709893226623535, 2.2401344776153564, 12.932862281799316, 18.402969360351562, 1.4470969438552856], step: 24600, lr: 9.956342841045691e-05 2023-03-26 20:50:03,643 44k INFO ====> Epoch: 34, cost 291.80 s 2023-03-26 20:50:37,796 44k INFO Train Epoch: 35 [7%] 2023-03-26 20:50:37,797 44k INFO Losses: [2.5201334953308105, 2.2755749225616455, 11.022621154785156, 16.03299331665039, 0.8324728608131409], step: 24800, lr: 9.95509829819056e-05 2023-03-26 20:52:20,340 44k INFO Train Epoch: 35 [34%] 2023-03-26 20:52:20,341 44k INFO Losses: [2.6387505531311035, 2.3263511657714844, 9.097269058227539, 15.535866737365723, 1.2891377210617065], step: 25000, lr: 9.95509829819056e-05 2023-03-26 20:52:23,933 44k INFO Saving model and optimizer state at iteration 35 to ./logs\44k\G_25000.pth 2023-03-26 20:52:25,157 44k INFO Saving model and optimizer state at iteration 35 to ./logs\44k\D_25000.pth 2023-03-26 20:52:26,064 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_22000.pth 2023-03-26 20:52:26,118 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_22000.pth 2023-03-26 20:54:08,890 44k INFO Train Epoch: 35 [62%] 2023-03-26 20:54:08,890 44k INFO Losses: [2.4386889934539795, 2.373467445373535, 12.214559555053711, 19.51681137084961, 1.087910771369934], step: 25200, lr: 9.95509829819056e-05 2023-03-26 20:55:52,327 44k INFO Train Epoch: 35 [89%] 2023-03-26 20:55:52,327 44k INFO Losses: [2.387831449508667, 2.203815460205078, 8.673959732055664, 15.515698432922363, 1.1257497072219849], step: 25400, lr: 9.95509829819056e-05 2023-03-26 20:56:33,687 44k INFO ====> Epoch: 35, cost 390.04 s 2023-03-26 20:57:46,815 44k INFO Train Epoch: 36 [16%] 2023-03-26 20:57:46,815 44k INFO Losses: [2.5127129554748535, 2.1894476413726807, 9.067672729492188, 17.541988372802734, 1.7558355331420898], step: 25600, lr: 9.953853910903285e-05 2023-03-26 20:59:30,554 44k INFO Train Epoch: 36 [44%] 2023-03-26 20:59:30,555 44k INFO Losses: [2.5485236644744873, 2.4422123432159424, 9.322126388549805, 14.556734085083008, 1.2394096851348877], step: 25800, lr: 9.953853910903285e-05 2023-03-26 21:00:49,541 44k INFO Train Epoch: 36 [71%] 2023-03-26 21:00:49,541 44k INFO Losses: [2.577131986618042, 2.3183388710021973, 12.595996856689453, 15.74543571472168, 1.211929440498352], step: 26000, lr: 9.953853910903285e-05 2023-03-26 21:00:52,715 44k INFO Saving model and optimizer state at iteration 36 to ./logs\44k\G_26000.pth 2023-03-26 21:00:53,493 44k INFO Saving model and optimizer state at iteration 36 to ./logs\44k\D_26000.pth 2023-03-26 21:00:54,168 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_23000.pth 2023-03-26 21:00:54,198 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_23000.pth 2023-03-26 21:02:03,353 44k INFO Train Epoch: 36 [99%] 2023-03-26 21:02:03,354 44k INFO Losses: [2.140305519104004, 2.6605606079101562, 10.437066078186035, 16.032678604125977, 1.0190494060516357], step: 26200, lr: 9.953853910903285e-05 2023-03-26 21:02:06,222 44k INFO ====> Epoch: 36, cost 332.53 s 2023-03-26 21:03:22,623 44k INFO Train Epoch: 37 [26%] 2023-03-26 21:03:22,624 44k INFO Losses: [2.320822238922119, 2.3764572143554688, 10.913688659667969, 15.41199016571045, 1.2600834369659424], step: 26400, lr: 9.952609679164422e-05 2023-03-26 21:04:31,720 44k INFO Train Epoch: 37 [54%] 2023-03-26 21:04:31,720 44k INFO Losses: [2.369985818862915, 2.22127628326416, 10.650030136108398, 16.435476303100586, 1.1520702838897705], step: 26600, lr: 9.952609679164422e-05 2023-03-26 21:05:41,408 44k INFO Train Epoch: 37 [81%] 2023-03-26 21:05:41,409 44k INFO Losses: [2.4230546951293945, 2.409635543823242, 8.992048263549805, 18.469785690307617, 1.1126004457473755], step: 26800, lr: 9.952609679164422e-05 2023-03-26 21:06:28,293 44k INFO ====> Epoch: 37, cost 262.07 s 2023-03-26 21:07:00,080 44k INFO Train Epoch: 38 [9%] 2023-03-26 21:07:00,081 44k INFO Losses: [2.3237335681915283, 2.4228711128234863, 14.604413032531738, 19.324264526367188, 1.3721626996994019], step: 27000, lr: 9.951365602954526e-05 2023-03-26 21:07:03,187 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\G_27000.pth 2023-03-26 21:07:03,970 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\D_27000.pth 2023-03-26 21:07:04,661 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_24000.pth 2023-03-26 21:07:04,695 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_24000.pth 2023-03-26 21:08:14,471 44k INFO Train Epoch: 38 [36%] 2023-03-26 21:08:14,472 44k INFO Losses: [2.3177566528320312, 2.325977325439453, 13.018202781677246, 16.41051483154297, 1.1823643445968628], step: 27200, lr: 9.951365602954526e-05 2023-03-26 21:09:23,465 44k INFO Train Epoch: 38 [64%] 2023-03-26 21:09:23,466 44k INFO Losses: [2.4561734199523926, 2.500232458114624, 12.181352615356445, 16.837472915649414, 1.3606674671173096], step: 27400, lr: 9.951365602954526e-05 2023-03-26 21:10:32,957 44k INFO Train Epoch: 38 [91%] 2023-03-26 21:10:32,957 44k INFO Losses: [2.460768461227417, 2.3304383754730225, 11.129576683044434, 16.081220626831055, 1.195226788520813], step: 27600, lr: 9.951365602954526e-05 2023-03-26 21:10:55,092 44k INFO ====> Epoch: 38, cost 266.80 s 2023-03-26 21:11:51,630 44k INFO Train Epoch: 39 [19%] 2023-03-26 21:11:51,631 44k INFO Losses: [2.5549423694610596, 2.332932710647583, 10.694438934326172, 17.75734519958496, 1.2634567022323608], step: 27800, lr: 9.950121682254156e-05 2023-03-26 21:13:00,348 44k INFO Train Epoch: 39 [46%] 2023-03-26 21:13:00,349 44k INFO Losses: [2.560208559036255, 2.319077491760254, 7.480192184448242, 17.929821014404297, 0.8479750156402588], step: 28000, lr: 9.950121682254156e-05 2023-03-26 21:13:03,496 44k INFO Saving model and optimizer state at iteration 39 to ./logs\44k\G_28000.pth 2023-03-26 21:13:04,233 44k INFO Saving model and optimizer state at iteration 39 to ./logs\44k\D_28000.pth 2023-03-26 21:13:04,921 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_25000.pth 2023-03-26 21:13:04,953 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_25000.pth 2023-03-26 21:14:15,399 44k INFO Train Epoch: 39 [74%] 2023-03-26 21:14:15,400 44k INFO Losses: [2.528662919998169, 2.359780788421631, 7.910620212554932, 15.978086471557617, 1.5573182106018066], step: 28200, lr: 9.950121682254156e-05 2023-03-26 21:15:20,264 44k INFO ====> Epoch: 39, cost 265.17 s 2023-03-26 21:15:32,493 44k INFO Train Epoch: 40 [1%] 2023-03-26 21:15:32,493 44k INFO Losses: [2.5307111740112305, 2.0676941871643066, 13.683015823364258, 18.8820743560791, 1.6224257946014404], step: 28400, lr: 9.948877917043875e-05 2023-03-26 21:16:41,916 44k INFO Train Epoch: 40 [29%] 2023-03-26 21:16:41,916 44k INFO Losses: [2.549201488494873, 2.1349565982818604, 8.524430274963379, 13.472315788269043, 1.2798707485198975], step: 28600, lr: 9.948877917043875e-05 2023-03-26 21:17:49,879 44k INFO Train Epoch: 40 [56%] 2023-03-26 21:17:49,879 44k INFO Losses: [2.635751485824585, 2.4183154106140137, 7.132954120635986, 12.586833953857422, 1.2585774660110474], step: 28800, lr: 9.948877917043875e-05 2023-03-26 21:18:58,017 44k INFO Train Epoch: 40 [84%] 2023-03-26 21:18:58,017 44k INFO Losses: [2.346043825149536, 2.3376169204711914, 11.767813682556152, 15.985593795776367, 1.3478666543960571], step: 29000, lr: 9.948877917043875e-05 2023-03-26 21:19:01,152 44k INFO Saving model and optimizer state at iteration 40 to ./logs\44k\G_29000.pth 2023-03-26 21:19:01,873 44k INFO Saving model and optimizer state at iteration 40 to ./logs\44k\D_29000.pth 2023-03-26 21:19:02,533 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_26000.pth 2023-03-26 21:19:02,574 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_26000.pth 2023-03-26 21:19:42,981 44k INFO ====> Epoch: 40, cost 262.72 s 2023-03-26 21:20:19,694 44k INFO Train Epoch: 41 [11%] 2023-03-26 21:20:19,695 44k INFO Losses: [2.565873622894287, 2.290444850921631, 8.034516334533691, 16.443622589111328, 1.4831417798995972], step: 29200, lr: 9.947634307304244e-05 2023-03-26 21:21:28,736 44k INFO Train Epoch: 41 [38%] 2023-03-26 21:21:28,736 44k INFO Losses: [2.4355294704437256, 2.1923205852508545, 9.081827163696289, 16.441974639892578, 1.3548475503921509], step: 29400, lr: 9.947634307304244e-05 2023-03-26 21:22:36,793 44k INFO Train Epoch: 41 [66%] 2023-03-26 21:22:36,794 44k INFO Losses: [2.6268556118011475, 2.113205909729004, 10.946870803833008, 16.89814567565918, 1.0009223222732544], step: 29600, lr: 9.947634307304244e-05 2023-03-26 21:23:44,952 44k INFO Train Epoch: 41 [93%] 2023-03-26 21:23:44,952 44k INFO Losses: [2.552823066711426, 2.2609336376190186, 7.76698637008667, 10.481614112854004, 1.6063477993011475], step: 29800, lr: 9.947634307304244e-05 2023-03-26 21:24:01,053 44k INFO ====> Epoch: 41, cost 258.07 s 2023-03-26 21:25:08,404 44k INFO Train Epoch: 42 [21%] 2023-03-26 21:25:08,404 44k INFO Losses: [2.613840103149414, 2.3464958667755127, 8.959070205688477, 14.424112319946289, 0.8349194526672363], step: 30000, lr: 9.94639085301583e-05 2023-03-26 21:25:11,429 44k INFO Saving model and optimizer state at iteration 42 to ./logs\44k\G_30000.pth 2023-03-26 21:25:12,145 44k INFO Saving model and optimizer state at iteration 42 to ./logs\44k\D_30000.pth 2023-03-26 21:25:12,814 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_27000.pth 2023-03-26 21:25:12,844 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_27000.pth 2023-03-26 21:26:20,242 44k INFO Train Epoch: 42 [48%] 2023-03-26 21:26:20,242 44k INFO Losses: [2.5178046226501465, 1.973719835281372, 9.889934539794922, 16.38356590270996, 1.1818913221359253], step: 30200, lr: 9.94639085301583e-05 2023-03-26 21:27:28,491 44k INFO Train Epoch: 42 [76%] 2023-03-26 21:27:28,492 44k INFO Losses: [2.6279070377349854, 2.250808000564575, 13.214252471923828, 19.34341812133789, 0.671466052532196], step: 30400, lr: 9.94639085301583e-05 2023-03-26 21:28:28,921 44k INFO ====> Epoch: 42, cost 267.87 s 2023-03-26 21:28:46,708 44k INFO Train Epoch: 43 [3%] 2023-03-26 21:28:46,708 44k INFO Losses: [2.517134189605713, 2.1201608180999756, 10.239909172058105, 18.890607833862305, 1.3374667167663574], step: 30600, lr: 9.945147554159202e-05 2023-03-26 21:29:55,054 44k INFO Train Epoch: 43 [31%] 2023-03-26 21:29:55,054 44k INFO Losses: [2.5170860290527344, 2.214848518371582, 13.340782165527344, 19.41912269592285, 1.0389739274978638], step: 30800, lr: 9.945147554159202e-05 2023-03-26 21:31:03,197 44k INFO Train Epoch: 43 [58%] 2023-03-26 21:31:03,198 44k INFO Losses: [2.4435911178588867, 2.495150566101074, 10.39663314819336, 19.238109588623047, 0.9299176335334778], step: 31000, lr: 9.945147554159202e-05 2023-03-26 21:31:06,281 44k INFO Saving model and optimizer state at iteration 43 to ./logs\44k\G_31000.pth 2023-03-26 21:31:07,060 44k INFO Saving model and optimizer state at iteration 43 to ./logs\44k\D_31000.pth 2023-03-26 21:31:07,738 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_28000.pth 2023-03-26 21:31:07,769 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_28000.pth 2023-03-26 21:32:15,961 44k INFO Train Epoch: 43 [86%] 2023-03-26 21:32:15,962 44k INFO Losses: [2.4328112602233887, 2.2417960166931152, 17.485950469970703, 21.571151733398438, 1.7340683937072754], step: 31200, lr: 9.945147554159202e-05 2023-03-26 21:32:51,186 44k INFO ====> Epoch: 43, cost 262.27 s 2023-03-26 21:33:33,308 44k INFO Train Epoch: 44 [13%] 2023-03-26 21:33:33,309 44k INFO Losses: [2.2247700691223145, 2.4395790100097656, 12.175507545471191, 17.37096405029297, 1.353575348854065], step: 31400, lr: 9.943904410714931e-05 2023-03-26 21:34:41,366 44k INFO Train Epoch: 44 [41%] 2023-03-26 21:34:41,366 44k INFO Losses: [2.3662893772125244, 2.7794904708862305, 9.40709400177002, 14.246566772460938, 1.2741729021072388], step: 31600, lr: 9.943904410714931e-05 2023-03-26 21:35:50,774 44k INFO Train Epoch: 44 [68%] 2023-03-26 21:35:50,775 44k INFO Losses: [2.3003039360046387, 2.4221243858337402, 11.224088668823242, 18.253585815429688, 1.1864084005355835], step: 31800, lr: 9.943904410714931e-05 2023-03-26 21:37:00,352 44k INFO Train Epoch: 44 [96%] 2023-03-26 21:37:00,353 44k INFO Losses: [2.63925838470459, 2.0777955055236816, 6.857548713684082, 15.075932502746582, 1.106040358543396], step: 32000, lr: 9.943904410714931e-05 2023-03-26 21:37:03,556 44k INFO Saving model and optimizer state at iteration 44 to ./logs\44k\G_32000.pth 2023-03-26 21:37:04,270 44k INFO Saving model and optimizer state at iteration 44 to ./logs\44k\D_32000.pth 2023-03-26 21:37:04,937 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_29000.pth 2023-03-26 21:37:04,971 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_29000.pth 2023-03-26 21:37:15,788 44k INFO ====> Epoch: 44, cost 264.60 s 2023-03-26 21:38:24,204 44k INFO Train Epoch: 45 [23%] 2023-03-26 21:38:24,204 44k INFO Losses: [2.1979780197143555, 2.6060190200805664, 12.857758522033691, 16.70841407775879, 0.9493151903152466], step: 32200, lr: 9.942661422663591e-05 2023-03-26 21:39:33,149 44k INFO Train Epoch: 45 [51%] 2023-03-26 21:39:33,149 44k INFO Losses: [2.4014205932617188, 2.219925880432129, 10.582074165344238, 15.304710388183594, 1.1473904848098755], step: 32400, lr: 9.942661422663591e-05 2023-03-26 21:40:43,439 44k INFO Train Epoch: 45 [78%] 2023-03-26 21:40:43,440 44k INFO Losses: [2.5148918628692627, 2.2440202236175537, 7.362368583679199, 13.719354629516602, 1.2587121725082397], step: 32600, lr: 9.942661422663591e-05 2023-03-26 21:41:53,654 44k INFO ====> Epoch: 45, cost 277.87 s 2023-03-26 21:42:24,856 44k INFO Train Epoch: 46 [5%] 2023-03-26 21:42:24,857 44k INFO Losses: [2.33835768699646, 2.281447649002075, 12.60938835144043, 18.733139038085938, 1.3569731712341309], step: 32800, lr: 9.941418589985758e-05 2023-03-26 21:44:08,297 44k INFO Train Epoch: 46 [33%] 2023-03-26 21:44:08,298 44k INFO Losses: [2.325303792953491, 2.715770721435547, 12.801007270812988, 16.551340103149414, 0.9130549430847168], step: 33000, lr: 9.941418589985758e-05 2023-03-26 21:44:11,862 44k INFO Saving model and optimizer state at iteration 46 to ./logs\44k\G_33000.pth 2023-03-26 21:44:13,019 44k INFO Saving model and optimizer state at iteration 46 to ./logs\44k\D_33000.pth 2023-03-26 21:44:13,930 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_30000.pth 2023-03-26 21:44:13,975 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_30000.pth 2023-03-26 21:45:36,118 44k INFO Train Epoch: 46 [60%] 2023-03-26 21:45:36,119 44k INFO Losses: [2.4854018688201904, 2.205888032913208, 10.59522819519043, 19.090288162231445, 1.1106901168823242], step: 33200, lr: 9.941418589985758e-05 2023-03-26 21:46:46,249 44k INFO Train Epoch: 46 [88%] 2023-03-26 21:46:46,249 44k INFO Losses: [2.722208261489868, 2.1175336837768555, 10.812979698181152, 16.05909538269043, 0.9372826814651489], step: 33400, lr: 9.941418589985758e-05 2023-03-26 21:47:17,411 44k INFO ====> Epoch: 46, cost 323.76 s 2023-03-26 21:48:08,513 44k INFO Train Epoch: 47 [15%] 2023-03-26 21:48:08,513 44k INFO Losses: [2.556161403656006, 1.921614170074463, 8.697649002075195, 13.587606430053711, 0.8473272919654846], step: 33600, lr: 9.940175912662009e-05 2023-03-26 21:49:20,351 44k INFO Train Epoch: 47 [43%] 2023-03-26 21:49:20,351 44k INFO Losses: [2.4933862686157227, 2.3994638919830322, 15.795209884643555, 19.68379020690918, 1.2790398597717285], step: 33800, lr: 9.940175912662009e-05 2023-03-26 21:50:31,356 44k INFO Train Epoch: 47 [70%] 2023-03-26 21:50:31,357 44k INFO Losses: [2.412247657775879, 2.1590089797973633, 9.830469131469727, 20.107654571533203, 1.542922019958496], step: 34000, lr: 9.940175912662009e-05 2023-03-26 21:50:34,590 44k INFO Saving model and optimizer state at iteration 47 to ./logs\44k\G_34000.pth 2023-03-26 21:50:35,354 44k INFO Saving model and optimizer state at iteration 47 to ./logs\44k\D_34000.pth 2023-03-26 21:50:36,137 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_31000.pth 2023-03-26 21:50:36,169 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_31000.pth 2023-03-26 21:51:47,189 44k INFO Train Epoch: 47 [98%] 2023-03-26 21:51:47,189 44k INFO Losses: [2.346374034881592, 2.297542095184326, 9.829960823059082, 18.47844696044922, 1.2716809511184692], step: 34200, lr: 9.940175912662009e-05 2023-03-26 21:51:52,859 44k INFO ====> Epoch: 47, cost 275.45 s 2023-03-26 21:53:09,004 44k INFO Train Epoch: 48 [25%] 2023-03-26 21:53:09,004 44k INFO Losses: [2.5941741466522217, 2.4173154830932617, 7.852332592010498, 17.6015625, 1.3165394067764282], step: 34400, lr: 9.938933390672926e-05 2023-03-26 21:54:19,624 44k INFO Train Epoch: 48 [53%] 2023-03-26 21:54:19,624 44k INFO Losses: [2.6824872493743896, 2.277327060699463, 10.386296272277832, 18.58452796936035, 1.4477074146270752], step: 34600, lr: 9.938933390672926e-05 2023-03-26 21:55:31,217 44k INFO Train Epoch: 48 [80%] 2023-03-26 21:55:31,218 44k INFO Losses: [2.539008140563965, 2.2044379711151123, 12.39027214050293, 19.257644653320312, 1.1891627311706543], step: 34800, lr: 9.938933390672926e-05 2023-03-26 21:56:22,892 44k INFO ====> Epoch: 48, cost 270.03 s 2023-03-26 21:56:52,841 44k INFO Train Epoch: 49 [8%] 2023-03-26 21:56:52,842 44k INFO Losses: [2.572256088256836, 2.224982261657715, 6.679467678070068, 16.84368896484375, 0.9692319631576538], step: 35000, lr: 9.937691023999092e-05 2023-03-26 21:56:56,206 44k INFO Saving model and optimizer state at iteration 49 to ./logs\44k\G_35000.pth 2023-03-26 21:56:56,962 44k INFO Saving model and optimizer state at iteration 49 to ./logs\44k\D_35000.pth 2023-03-26 21:56:57,684 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_32000.pth 2023-03-26 21:56:57,718 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_32000.pth 2023-03-26 21:58:11,575 44k INFO Train Epoch: 49 [35%] 2023-03-26 21:58:11,576 44k INFO Losses: [2.6269850730895996, 2.288778066635132, 12.382123947143555, 18.322433471679688, 1.1609718799591064], step: 35200, lr: 9.937691023999092e-05 2023-03-26 21:59:24,382 44k INFO Train Epoch: 49 [63%] 2023-03-26 21:59:24,382 44k INFO Losses: [2.4340531826019287, 2.453927993774414, 9.466423034667969, 19.50778579711914, 1.1316777467727661], step: 35400, lr: 9.937691023999092e-05 2023-03-26 22:00:36,339 44k INFO Train Epoch: 49 [90%] 2023-03-26 22:00:36,339 44k INFO Losses: [2.65840744972229, 2.1295597553253174, 10.344992637634277, 19.8116455078125, 1.8000327348709106], step: 35600, lr: 9.937691023999092e-05 2023-03-26 22:01:02,503 44k INFO ====> Epoch: 49, cost 279.61 s 2023-03-26 22:01:58,101 44k INFO Train Epoch: 50 [18%] 2023-03-26 22:01:58,102 44k INFO Losses: [2.1592166423797607, 3.1045475006103516, 8.961448669433594, 10.38120174407959, 1.0541951656341553], step: 35800, lr: 9.936448812621091e-05 2023-03-26 22:03:10,277 44k INFO Train Epoch: 50 [45%] 2023-03-26 22:03:10,277 44k INFO Losses: [2.4462239742279053, 2.322165012359619, 9.349515914916992, 13.102229118347168, 1.3284991979599], step: 36000, lr: 9.936448812621091e-05 2023-03-26 22:03:13,472 44k INFO Saving model and optimizer state at iteration 50 to ./logs\44k\G_36000.pth 2023-03-26 22:03:14,203 44k INFO Saving model and optimizer state at iteration 50 to ./logs\44k\D_36000.pth 2023-03-26 22:03:14,983 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_33000.pth 2023-03-26 22:03:15,013 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_33000.pth 2023-03-26 22:04:26,985 44k INFO Train Epoch: 50 [73%] 2023-03-26 22:04:26,985 44k INFO Losses: [2.4799132347106934, 2.2395243644714355, 11.022801399230957, 19.06214714050293, 1.249161720275879], step: 36200, lr: 9.936448812621091e-05 2023-03-26 22:05:38,022 44k INFO ====> Epoch: 50, cost 275.52 s 2023-03-27 07:15:15,710 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 100, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kageaki': 0}, 'model_dir': './logs\\44k'} 2023-03-27 07:15:15,742 44k WARNING git hash values are different. 
cea6df30(saved) != fd4d47fd(current) 2023-03-27 07:15:17,744 44k INFO Loaded checkpoint './logs\44k\G_36000.pth' (iteration 50) 2023-03-27 07:15:18,107 44k INFO Loaded checkpoint './logs\44k\D_36000.pth' (iteration 50) 2023-03-27 07:16:27,356 44k INFO Train Epoch: 50 [18%] 2023-03-27 07:16:27,356 44k INFO Losses: [2.439272880554199, 2.551793098449707, 9.877571105957031, 16.454505920410156, 0.7157087326049805], step: 35800, lr: 9.935206756519513e-05 2023-03-27 07:17:53,481 44k INFO Train Epoch: 50 [45%] 2023-03-27 07:17:53,482 44k INFO Losses: [2.5554986000061035, 2.16329026222229, 6.9857892990112305, 13.731989860534668, 1.0998468399047852], step: 36000, lr: 9.935206756519513e-05 2023-03-27 07:17:57,154 44k INFO Saving model and optimizer state at iteration 50 to ./logs\44k\G_36000.pth 2023-03-27 07:17:57,927 44k INFO Saving model and optimizer state at iteration 50 to ./logs\44k\D_36000.pth 2023-03-27 07:19:22,376 44k INFO Train Epoch: 50 [73%] 2023-03-27 07:19:22,376 44k INFO Losses: [2.8222479820251465, 1.8222508430480957, 11.116238594055176, 16.301206588745117, 1.2432457208633423], step: 36200, lr: 9.935206756519513e-05 2023-03-27 07:20:44,564 44k INFO ====> Epoch: 50, cost 328.85 s 2023-03-27 07:20:53,739 44k INFO Train Epoch: 51 [0%] 2023-03-27 07:20:53,739 44k INFO Losses: [2.408649206161499, 2.4921319484710693, 12.450469017028809, 18.094934463500977, 0.6916185617446899], step: 36400, lr: 9.933964855674948e-05 2023-03-27 07:22:01,051 44k INFO Train Epoch: 51 [27%] 2023-03-27 07:22:01,051 44k INFO Losses: [2.5425407886505127, 2.1861369609832764, 8.337100982666016, 10.447877883911133, 0.9519892930984497], step: 36600, lr: 9.933964855674948e-05 2023-03-27 07:23:07,673 44k INFO Train Epoch: 51 [55%] 2023-03-27 07:23:07,673 44k INFO Losses: [2.363527536392212, 2.3665409088134766, 7.563984394073486, 13.141250610351562, 1.0272268056869507], step: 36800, lr: 9.933964855674948e-05 2023-03-27 07:24:14,837 44k INFO Train Epoch: 51 [82%] 2023-03-27 07:24:14,837 44k INFO 
Losses: [2.4265291690826416, 2.72171688079834, 12.115377426147461, 16.756244659423828, 1.1188489198684692], step: 37000, lr: 9.933964855674948e-05 2023-03-27 07:24:17,819 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\G_37000.pth 2023-03-27 07:24:18,628 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\D_37000.pth 2023-03-27 07:24:19,433 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_34000.pth 2023-03-27 07:24:19,464 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_34000.pth 2023-03-27 07:25:01,766 44k INFO ====> Epoch: 51, cost 257.20 s 2023-03-27 07:25:35,034 44k INFO Train Epoch: 52 [10%] 2023-03-27 07:25:35,035 44k INFO Losses: [2.3869516849517822, 2.4119739532470703, 12.691248893737793, 18.778392791748047, 1.300383448600769], step: 37200, lr: 9.932723110067987e-05 2023-03-27 07:26:42,173 44k INFO Train Epoch: 52 [37%] 2023-03-27 07:26:42,174 44k INFO Losses: [2.6959099769592285, 1.9351104497909546, 9.62242603302002, 16.26683235168457, 1.2408384084701538], step: 37400, lr: 9.932723110067987e-05 2023-03-27 07:27:48,994 44k INFO Train Epoch: 52 [65%] 2023-03-27 07:27:48,994 44k INFO Losses: [2.379852056503296, 2.2753829956054688, 12.291888236999512, 18.11481475830078, 1.2890477180480957], step: 37600, lr: 9.932723110067987e-05 2023-03-27 07:28:56,054 44k INFO Train Epoch: 52 [92%] 2023-03-27 07:28:56,054 44k INFO Losses: [2.475571393966675, 2.110029935836792, 12.294746398925781, 18.15684700012207, 1.2353252172470093], step: 37800, lr: 9.932723110067987e-05 2023-03-27 07:29:14,493 44k INFO ====> Epoch: 52, cost 252.73 s 2023-03-27 07:30:12,034 44k INFO Train Epoch: 53 [20%] 2023-03-27 07:30:12,034 44k INFO Losses: [2.356746196746826, 2.5480940341949463, 9.407490730285645, 14.142950057983398, 1.0317974090576172], step: 38000, lr: 9.931481519679228e-05 2023-03-27 07:30:15,070 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\G_38000.pth 2023-03-27 07:30:15,825 44k INFO Saving model 
and optimizer state at iteration 53 to ./logs\44k\D_38000.pth 2023-03-27 07:30:16,506 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_35000.pth 2023-03-27 07:30:16,536 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_35000.pth 2023-03-27 07:31:22,834 44k INFO Train Epoch: 53 [47%] 2023-03-27 07:31:22,836 44k INFO Losses: [2.445138931274414, 2.1916863918304443, 10.534404754638672, 17.064529418945312, 1.0119489431381226], step: 38200, lr: 9.931481519679228e-05 2023-03-27 07:32:29,909 44k INFO Train Epoch: 53 [75%] 2023-03-27 07:32:29,909 44k INFO Losses: [2.4968106746673584, 2.2845804691314697, 11.782247543334961, 16.791181564331055, 0.9844977259635925], step: 38400, lr: 9.931481519679228e-05 2023-03-27 07:33:32,557 44k INFO ====> Epoch: 53, cost 258.06 s 2023-03-27 07:33:47,054 44k INFO Train Epoch: 54 [2%] 2023-03-27 07:33:47,054 44k INFO Losses: [2.4071342945098877, 2.0646770000457764, 7.031035900115967, 14.61463451385498, 1.2002465724945068], step: 38600, lr: 9.930240084489267e-05 2023-03-27 07:34:55,041 44k INFO Train Epoch: 54 [30%] 2023-03-27 07:34:55,041 44k INFO Losses: [2.423353672027588, 2.3260793685913086, 12.772974967956543, 19.127399444580078, 1.0438041687011719], step: 38800, lr: 9.930240084489267e-05 2023-03-27 07:36:02,242 44k INFO Train Epoch: 54 [57%] 2023-03-27 07:36:02,242 44k INFO Losses: [2.70426082611084, 2.2825329303741455, 7.793752193450928, 15.704229354858398, 1.069628357887268], step: 39000, lr: 9.930240084489267e-05 2023-03-27 07:36:05,304 44k INFO Saving model and optimizer state at iteration 54 to ./logs\44k\G_39000.pth 2023-03-27 07:36:06,069 44k INFO Saving model and optimizer state at iteration 54 to ./logs\44k\D_39000.pth 2023-03-27 07:36:06,765 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_36000.pth 2023-03-27 07:36:06,800 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_36000.pth 2023-03-27 07:37:13,642 44k INFO Train Epoch: 54 [85%] 2023-03-27 07:37:13,642 44k INFO Losses: [2.408324718475342, 2.4243626594543457, 6.805215835571289, 12.297639846801758, 0.9571473002433777], step: 39200, lr: 9.930240084489267e-05 2023-03-27 07:37:50,765 44k INFO ====> Epoch: 54, cost 258.21 s 2023-03-27 07:38:29,331 44k INFO Train Epoch: 55 [12%] 2023-03-27 07:38:29,331 44k INFO Losses: [2.608395576477051, 2.0660414695739746, 8.355164527893066, 15.556915283203125, 1.1915698051452637], step: 39400, lr: 9.928998804478705e-05 2023-03-27 07:39:36,478 44k INFO Train Epoch: 55 [40%] 2023-03-27 07:39:36,479 44k INFO Losses: [2.370467185974121, 2.3523178100585938, 12.437934875488281, 18.79640769958496, 0.8513891100883484], step: 39600, lr: 9.928998804478705e-05 2023-03-27 07:40:43,368 44k INFO Train Epoch: 55 [67%] 2023-03-27 07:40:43,369 44k INFO Losses: [2.408238649368286, 2.464332103729248, 7.424396514892578, 11.00966739654541, 1.0542269945144653], step: 39800, lr: 9.928998804478705e-05 2023-03-27 07:41:50,294 44k INFO Train Epoch: 55 [95%] 2023-03-27 07:41:50,294 44k INFO Losses: [2.2966928482055664, 2.361558437347412, 11.673095703125, 17.255613327026367, 1.6099590063095093], step: 40000, lr: 9.928998804478705e-05 2023-03-27 07:41:53,249 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\G_40000.pth 2023-03-27 07:41:54,010 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\D_40000.pth 2023-03-27 07:41:54,689 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_37000.pth 2023-03-27 07:41:54,733 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_37000.pth 2023-03-27 07:42:07,857 44k INFO ====> Epoch: 55, cost 257.09 s 2023-03-27 07:43:10,913 44k INFO Train Epoch: 56 [22%] 2023-03-27 07:43:10,914 44k INFO Losses: [2.548214912414551, 2.0930380821228027, 6.3152313232421875, 12.273805618286133, 0.9773663282394409], step: 40200, lr: 9.927757679628145e-05 2023-03-27 07:44:17,323 44k INFO Train Epoch: 56 [49%] 2023-03-27 07:44:17,324 44k INFO Losses: [2.6898324489593506, 2.370868682861328, 11.837089538574219, 16.003795623779297, 1.0271047353744507], step: 40400, lr: 9.927757679628145e-05 2023-03-27 07:45:24,690 44k INFO Train Epoch: 56 [77%] 2023-03-27 07:45:24,690 44k INFO Losses: [2.4483160972595215, 2.4367058277130127, 7.7856268882751465, 14.39894962310791, 1.3585786819458008], step: 40600, lr: 9.927757679628145e-05 2023-03-27 07:46:20,549 44k INFO ====> Epoch: 56, cost 252.69 s 2023-03-27 07:46:40,319 44k INFO Train Epoch: 57 [4%] 2023-03-27 07:46:40,319 44k INFO Losses: [2.2415366172790527, 2.3912246227264404, 9.676067352294922, 13.743635177612305, 1.2081562280654907], step: 40800, lr: 9.926516709918191e-05 2023-03-27 07:47:47,575 44k INFO Train Epoch: 57 [32%] 2023-03-27 07:47:47,576 44k INFO Losses: [2.8123528957366943, 2.225703477859497, 11.15468692779541, 17.746809005737305, 1.2952674627304077], step: 41000, lr: 9.926516709918191e-05 2023-03-27 07:47:50,541 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\G_41000.pth 2023-03-27 07:47:51,302 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\D_41000.pth 2023-03-27 07:47:51,985 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_38000.pth 2023-03-27 07:47:52,027 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_38000.pth 2023-03-27 07:48:58,728 44k INFO Train Epoch: 57 [59%] 2023-03-27 07:48:58,728 44k INFO Losses: [2.4293718338012695, 2.3511390686035156, 13.145758628845215, 17.73885726928711, 1.6850676536560059], step: 41200, lr: 9.926516709918191e-05 2023-03-27 07:50:05,578 44k INFO Train Epoch: 57 [87%] 2023-03-27 07:50:05,579 44k INFO Losses: [2.5185396671295166, 2.439281940460205, 10.314142227172852, 19.53677749633789, 1.2114888429641724], step: 41400, lr: 9.926516709918191e-05 2023-03-27 07:50:37,433 44k INFO ====> Epoch: 57, cost 256.88 s 2023-03-27 07:51:21,234 44k INFO Train Epoch: 58 [14%] 2023-03-27 07:51:21,234 44k INFO Losses: [2.5466856956481934, 2.365269899368286, 12.877598762512207, 19.400514602661133, 1.1210414171218872], step: 41600, lr: 9.92527589532945e-05 2023-03-27 07:52:28,164 44k INFO Train Epoch: 58 [42%] 2023-03-27 07:52:28,165 44k INFO Losses: [2.625436305999756, 2.200080633163452, 11.715675354003906, 18.738943099975586, 1.1321464776992798], step: 41800, lr: 9.92527589532945e-05 2023-03-27 07:53:35,336 44k INFO Train Epoch: 58 [69%] 2023-03-27 07:53:35,336 44k INFO Losses: [2.7836830615997314, 2.2318859100341797, 10.502516746520996, 14.056482315063477, 1.2004711627960205], step: 42000, lr: 9.92527589532945e-05 2023-03-27 07:53:38,377 44k INFO Saving model and optimizer state at iteration 58 to ./logs\44k\G_42000.pth 2023-03-27 07:53:39,085 44k INFO Saving model and optimizer state at iteration 58 to ./logs\44k\D_42000.pth 2023-03-27 07:53:39,766 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_39000.pth 2023-03-27 07:53:39,800 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_39000.pth 2023-03-27 07:54:46,446 44k INFO Train Epoch: 58 [97%] 2023-03-27 07:54:46,447 44k INFO Losses: [2.511121988296509, 2.1858129501342773, 10.640735626220703, 16.488994598388672, 0.9038709402084351], step: 42200, lr: 9.92527589532945e-05 2023-03-27 07:54:54,418 44k INFO ====> Epoch: 58, cost 256.98 s 2023-03-27 07:56:02,856 44k INFO Train Epoch: 59 [24%] 2023-03-27 07:56:02,857 44k INFO Losses: [2.3125972747802734, 2.6691434383392334, 9.921504974365234, 14.77564811706543, 1.3596560955047607], step: 42400, lr: 9.924035235842533e-05 2023-03-27 07:57:09,259 44k INFO Train Epoch: 59 [52%] 2023-03-27 07:57:09,259 44k INFO Losses: [2.2020654678344727, 2.584967613220215, 15.30741024017334, 18.90169906616211, 1.8213540315628052], step: 42600, lr: 9.924035235842533e-05 2023-03-27 07:58:16,599 44k INFO Train Epoch: 59 [79%] 2023-03-27 07:58:16,600 44k INFO Losses: [2.3684847354888916, 2.5879104137420654, 12.604324340820312, 19.400144577026367, 1.7623335123062134], step: 42800, lr: 9.924035235842533e-05 2023-03-27 07:59:07,211 44k INFO ====> Epoch: 59, cost 252.79 s 2023-03-27 07:59:32,450 44k INFO Train Epoch: 60 [7%] 2023-03-27 07:59:32,451 44k INFO Losses: [2.244823455810547, 2.333590507507324, 14.408945083618164, 15.799434661865234, 1.1035412549972534], step: 43000, lr: 9.922794731438052e-05 2023-03-27 07:59:35,459 44k INFO Saving model and optimizer state at iteration 60 to ./logs\44k\G_43000.pth 2023-03-27 07:59:36,225 44k INFO Saving model and optimizer state at iteration 60 to ./logs\44k\D_43000.pth 2023-03-27 07:59:36,926 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_40000.pth 2023-03-27 07:59:36,970 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_40000.pth 2023-03-27 08:00:44,303 44k INFO Train Epoch: 60 [34%] 2023-03-27 08:00:44,304 44k INFO Losses: [2.4016001224517822, 2.434709072113037, 9.852156639099121, 18.884023666381836, 1.2980703115463257], step: 43200, lr: 9.922794731438052e-05 2023-03-27 08:01:51,128 44k INFO Train Epoch: 60 [62%] 2023-03-27 08:01:51,129 44k INFO Losses: [2.3090014457702637, 2.4671285152435303, 11.094226837158203, 15.301861763000488, 1.113794207572937], step: 43400, lr: 9.922794731438052e-05 2023-03-27 08:02:57,964 44k INFO Train Epoch: 60 [89%] 2023-03-27 08:02:57,964 44k INFO Losses: [2.369256019592285, 2.246119737625122, 12.488296508789062, 16.282028198242188, 0.9009151458740234], step: 43600, lr: 9.922794731438052e-05 2023-03-27 08:03:24,635 44k INFO ====> Epoch: 60, cost 257.42 s 2023-03-27 08:04:14,104 44k INFO Train Epoch: 61 [16%] 2023-03-27 08:04:14,105 44k INFO Losses: [2.478708028793335, 2.108610153198242, 9.883505821228027, 18.352388381958008, 1.2135659456253052], step: 43800, lr: 9.921554382096622e-05 2023-03-27 08:05:20,970 44k INFO Train Epoch: 61 [44%] 2023-03-27 08:05:20,970 44k INFO Losses: [2.45816707611084, 2.4192912578582764, 9.402213096618652, 12.988810539245605, 1.2950477600097656], step: 44000, lr: 9.921554382096622e-05 2023-03-27 08:05:23,966 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\G_44000.pth 2023-03-27 08:05:24,664 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\D_44000.pth 2023-03-27 08:05:25,341 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_41000.pth 2023-03-27 08:05:25,384 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_41000.pth 2023-03-27 08:06:32,515 44k INFO Train Epoch: 61 [71%] 2023-03-27 08:06:32,515 44k INFO Losses: [2.6138203144073486, 2.613528251647949, 11.933440208435059, 16.9341983795166, 0.9944063425064087], step: 44200, lr: 9.921554382096622e-05 2023-03-27 08:07:39,304 44k INFO Train Epoch: 61 [99%] 2023-03-27 08:07:39,305 44k INFO Losses: [2.2650885581970215, 2.6486053466796875, 10.271162033081055, 14.67381477355957, 0.8736445307731628], step: 44400, lr: 9.921554382096622e-05 2023-03-27 08:07:42,108 44k INFO ====> Epoch: 61, cost 257.47 s 2023-03-27 08:08:55,958 44k INFO Train Epoch: 62 [26%] 2023-03-27 08:08:55,958 44k INFO Losses: [2.49867844581604, 2.6054153442382812, 11.679630279541016, 16.433103561401367, 1.1053733825683594], step: 44600, lr: 9.92031418779886e-05 2023-03-27 08:10:02,677 44k INFO Train Epoch: 62 [54%] 2023-03-27 08:10:02,677 44k INFO Losses: [2.633824348449707, 2.028231143951416, 10.854639053344727, 19.667694091796875, 0.9255536198616028], step: 44800, lr: 9.92031418779886e-05 2023-03-27 08:11:09,920 44k INFO Train Epoch: 62 [81%] 2023-03-27 08:11:09,921 44k INFO Losses: [2.7703325748443604, 2.029719829559326, 6.438015937805176, 13.480875968933105, 0.780036211013794], step: 45000, lr: 9.92031418779886e-05 2023-03-27 08:11:12,938 44k INFO Saving model and optimizer state at iteration 62 to ./logs\44k\G_45000.pth 2023-03-27 08:11:13,641 44k INFO Saving model and optimizer state at iteration 62 to ./logs\44k\D_45000.pth 2023-03-27 08:11:14,336 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_42000.pth 2023-03-27 08:11:14,375 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_42000.pth 2023-03-27 08:11:59,596 44k INFO ====> Epoch: 62, cost 257.49 s 2023-03-27 08:12:30,347 44k INFO Train Epoch: 63 [9%] 2023-03-27 08:12:30,347 44k INFO Losses: [2.577272415161133, 2.04129695892334, 6.650623798370361, 13.921662330627441, 0.8253363370895386], step: 45200, lr: 9.919074148525384e-05 2023-03-27 08:13:37,846 44k INFO Train Epoch: 63 [36%] 2023-03-27 08:13:37,846 44k INFO Losses: [2.4639358520507812, 2.2167930603027344, 10.774556159973145, 16.839433670043945, 1.022517204284668], step: 45400, lr: 9.919074148525384e-05 2023-03-27 08:14:44,593 44k INFO Train Epoch: 63 [64%] 2023-03-27 08:14:44,593 44k INFO Losses: [2.2040042877197266, 2.5396769046783447, 11.931198120117188, 17.912832260131836, 1.0549203157424927], step: 45600, lr: 9.919074148525384e-05 2023-03-27 08:15:51,758 44k INFO Train Epoch: 63 [91%] 2023-03-27 08:15:51,758 44k INFO Losses: [2.804457426071167, 1.9974802732467651, 10.67963981628418, 14.012529373168945, 1.4511919021606445], step: 45800, lr: 9.919074148525384e-05 2023-03-27 08:16:13,083 44k INFO ====> Epoch: 63, cost 253.49 s 2023-03-27 08:17:07,998 44k INFO Train Epoch: 64 [19%] 2023-03-27 08:17:07,998 44k INFO Losses: [2.355602979660034, 2.22060489654541, 11.917518615722656, 16.96002769470215, 0.8676804900169373], step: 46000, lr: 9.917834264256819e-05 2023-03-27 08:17:10,921 44k INFO Saving model and optimizer state at iteration 64 to ./logs\44k\G_46000.pth 2023-03-27 08:17:11,626 44k INFO Saving model and optimizer state at iteration 64 to ./logs\44k\D_46000.pth 2023-03-27 08:17:12,341 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_43000.pth 2023-03-27 08:17:12,379 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_43000.pth 2023-03-27 08:18:19,138 44k INFO Train Epoch: 64 [46%] 2023-03-27 08:18:19,138 44k INFO Losses: [2.590341091156006, 2.1801178455352783, 12.155670166015625, 17.905271530151367, 1.255314588546753], step: 46200, lr: 9.917834264256819e-05 2023-03-27 08:19:26,392 44k INFO Train Epoch: 64 [74%] 2023-03-27 08:19:26,392 44k INFO Losses: [2.574697494506836, 2.337441921234131, 9.293381690979004, 15.470038414001465, 1.1327518224716187], step: 46400, lr: 9.917834264256819e-05 2023-03-27 08:20:30,555 44k INFO ====> Epoch: 64, cost 257.47 s 2023-03-27 08:20:42,442 44k INFO Train Epoch: 65 [1%] 2023-03-27 08:20:42,442 44k INFO Losses: [2.417219638824463, 2.2131693363189697, 12.746596336364746, 19.03409767150879, 0.949775755405426], step: 46600, lr: 9.916594534973787e-05 2023-03-27 08:21:49,793 44k INFO Train Epoch: 65 [29%] 2023-03-27 08:21:49,794 44k INFO Losses: [2.751617431640625, 2.228302240371704, 9.095353126525879, 18.294748306274414, 1.3548369407653809], step: 46800, lr: 9.916594534973787e-05 2023-03-27 08:22:56,747 44k INFO Train Epoch: 65 [56%] 2023-03-27 08:22:56,748 44k INFO Losses: [2.7602882385253906, 1.9956519603729248, 5.618566513061523, 12.588948249816895, 1.1053184270858765], step: 47000, lr: 9.916594534973787e-05 2023-03-27 08:22:59,799 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\G_47000.pth 2023-03-27 08:23:00,501 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\D_47000.pth 2023-03-27 08:23:01,194 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_44000.pth 2023-03-27 08:23:01,238 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_44000.pth 2023-03-27 08:24:08,270 44k INFO Train Epoch: 65 [84%] 2023-03-27 08:24:08,270 44k INFO Losses: [2.9330339431762695, 1.6999547481536865, 2.4728968143463135, 7.893375873565674, 1.22569739818573], step: 47200, lr: 9.916594534973787e-05 2023-03-27 08:24:48,316 44k INFO ====> Epoch: 65, cost 257.76 s 2023-03-27 08:25:24,278 44k INFO Train Epoch: 66 [11%] 2023-03-27 08:25:24,278 44k INFO Losses: [2.2653191089630127, 2.2766952514648438, 13.437915802001953, 19.091768264770508, 1.3249320983886719], step: 47400, lr: 9.915354960656915e-05 2023-03-27 08:26:31,644 44k INFO Train Epoch: 66 [38%] 2023-03-27 08:26:31,644 44k INFO Losses: [2.3334031105041504, 2.3181309700012207, 10.529729843139648, 18.511070251464844, 0.6074761748313904], step: 47600, lr: 9.915354960656915e-05 2023-03-27 08:27:38,759 44k INFO Train Epoch: 66 [66%] 2023-03-27 08:27:38,759 44k INFO Losses: [2.6001267433166504, 2.0757830142974854, 6.5456109046936035, 14.123077392578125, 0.8816099166870117], step: 47800, lr: 9.915354960656915e-05 2023-03-27 08:28:45,841 44k INFO Train Epoch: 66 [93%] 2023-03-27 08:28:45,841 44k INFO Losses: [2.35400390625, 2.3757007122039795, 12.253948211669922, 17.04297637939453, 0.9455040097236633], step: 48000, lr: 9.915354960656915e-05 2023-03-27 08:28:48,853 44k INFO Saving model and optimizer state at iteration 66 to ./logs\44k\G_48000.pth 2023-03-27 08:28:49,554 44k INFO Saving model and optimizer state at iteration 66 to ./logs\44k\D_48000.pth 2023-03-27 08:28:50,230 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_45000.pth 2023-03-27 08:28:50,270 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_45000.pth 2023-03-27 08:29:06,060 44k INFO ====> Epoch: 66, cost 257.74 s 2023-03-27 08:30:06,554 44k INFO Train Epoch: 67 [21%] 2023-03-27 08:30:06,554 44k INFO Losses: [2.4292476177215576, 2.2898545265197754, 9.13322639465332, 19.369871139526367, 1.3948007822036743], step: 48200, lr: 9.914115541286833e-05 2023-03-27 08:31:13,137 44k INFO Train Epoch: 67 [48%] 2023-03-27 08:31:13,137 44k INFO Losses: [2.439253568649292, 2.4068827629089355, 10.119111061096191, 17.691139221191406, 1.0451233386993408], step: 48400, lr: 9.914115541286833e-05 2023-03-27 08:32:20,659 44k INFO Train Epoch: 67 [76%] 2023-03-27 08:32:20,659 44k INFO Losses: [2.3717217445373535, 2.5542309284210205, 11.383543014526367, 19.37965965270996, 0.9877816438674927], step: 48600, lr: 9.914115541286833e-05 2023-03-27 08:33:19,387 44k INFO ====> Epoch: 67, cost 253.33 s 2023-03-27 08:33:36,777 44k INFO Train Epoch: 68 [3%] 2023-03-27 08:33:36,778 44k INFO Losses: [2.4363417625427246, 1.970969319343567, 10.106698036193848, 16.813566207885742, 1.3568806648254395], step: 48800, lr: 9.912876276844171e-05 2023-03-27 08:34:44,192 44k INFO Train Epoch: 68 [31%] 2023-03-27 08:34:44,193 44k INFO Losses: [2.514279365539551, 2.2566275596618652, 14.26534366607666, 18.72598648071289, 1.4553413391113281], step: 49000, lr: 9.912876276844171e-05 2023-03-27 08:34:47,161 44k INFO Saving model and optimizer state at iteration 68 to ./logs\44k\G_49000.pth 2023-03-27 08:34:47,874 44k INFO Saving model and optimizer state at iteration 68 to ./logs\44k\D_49000.pth 2023-03-27 08:34:48,558 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_46000.pth 2023-03-27 08:34:48,601 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_46000.pth 2023-03-27 08:35:55,536 44k INFO Train Epoch: 68 [58%] 2023-03-27 08:35:55,536 44k INFO Losses: [2.324829339981079, 2.2598965167999268, 13.1254301071167, 19.971885681152344, 1.4181081056594849], step: 49200, lr: 9.912876276844171e-05 2023-03-27 08:37:02,530 44k INFO Train Epoch: 68 [86%] 2023-03-27 08:37:02,530 44k INFO Losses: [2.258756637573242, 2.5916836261749268, 16.431076049804688, 20.990314483642578, 1.4004255533218384], step: 49400, lr: 9.912876276844171e-05 2023-03-27 08:37:37,090 44k INFO ====> Epoch: 68, cost 257.70 s 2023-03-27 08:38:18,461 44k INFO Train Epoch: 69 [13%] 2023-03-27 08:38:18,461 44k INFO Losses: [2.5158984661102295, 2.223151683807373, 12.039804458618164, 17.176530838012695, 1.3132669925689697], step: 49600, lr: 9.911637167309565e-05 2023-03-27 08:39:25,534 44k INFO Train Epoch: 69 [41%] 2023-03-27 08:39:25,534 44k INFO Losses: [2.5750861167907715, 2.4297473430633545, 10.819536209106445, 17.961881637573242, 1.4946011304855347], step: 49800, lr: 9.911637167309565e-05 2023-03-27 08:40:32,509 44k INFO Train Epoch: 69 [68%] 2023-03-27 08:40:32,509 44k INFO Losses: [2.4331626892089844, 2.306232452392578, 12.642422676086426, 18.909610748291016, 1.103945016860962], step: 50000, lr: 9.911637167309565e-05 2023-03-27 08:40:35,568 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\G_50000.pth 2023-03-27 08:40:36,323 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\D_50000.pth 2023-03-27 08:40:37,000 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_47000.pth 2023-03-27 08:40:37,036 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_47000.pth 2023-03-27 08:41:43,882 44k INFO Train Epoch: 69 [96%] 2023-03-27 08:41:43,882 44k INFO Losses: [2.514343738555908, 2.2019259929656982, 4.496665954589844, 8.232398986816406, 0.6628571152687073], step: 50200, lr: 9.911637167309565e-05 2023-03-27 08:41:54,465 44k INFO ====> Epoch: 69, cost 257.38 s 2023-03-27 08:43:00,368 44k INFO Train Epoch: 70 [23%] 2023-03-27 08:43:00,369 44k INFO Losses: [2.3283638954162598, 2.360450029373169, 11.544236183166504, 17.859086990356445, 0.4431965947151184], step: 50400, lr: 9.910398212663652e-05 2023-03-27 08:44:06,663 44k INFO Train Epoch: 70 [51%] 2023-03-27 08:44:06,664 44k INFO Losses: [2.1192874908447266, 2.6293697357177734, 10.26603889465332, 15.43351936340332, 0.9887738823890686], step: 50600, lr: 9.910398212663652e-05 2023-03-27 08:45:13,890 44k INFO Train Epoch: 70 [78%] 2023-03-27 08:45:13,890 44k INFO Losses: [2.2911312580108643, 2.364351987838745, 10.513965606689453, 16.594308853149414, 1.1489366292953491], step: 50800, lr: 9.910398212663652e-05 2023-03-27 08:46:07,172 44k INFO ====> Epoch: 70, cost 252.71 s 2023-03-27 08:46:29,790 44k INFO Train Epoch: 71 [5%] 2023-03-27 08:46:29,790 44k INFO Losses: [2.380854606628418, 2.227107048034668, 12.712158203125, 16.282180786132812, 1.47393000125885], step: 51000, lr: 9.909159412887068e-05 2023-03-27 08:46:32,884 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\G_51000.pth 2023-03-27 08:46:33,600 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\D_51000.pth 2023-03-27 08:46:34,327 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_48000.pth 2023-03-27 08:46:34,371 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_48000.pth 2023-03-27 08:47:41,781 44k INFO Train Epoch: 71 [33%] 2023-03-27 08:47:41,782 44k INFO Losses: [2.245232105255127, 2.650056838989258, 10.456927299499512, 15.763895988464355, 1.509503960609436], step: 51200, lr: 9.909159412887068e-05 2023-03-27 08:48:48,760 44k INFO Train Epoch: 71 [60%] 2023-03-27 08:48:48,761 44k INFO Losses: [2.4527876377105713, 2.2343475818634033, 11.567914962768555, 18.552228927612305, 0.588832676410675], step: 51400, lr: 9.909159412887068e-05 2023-03-27 08:49:55,695 44k INFO Train Epoch: 71 [88%] 2023-03-27 08:49:55,695 44k INFO Losses: [2.4599967002868652, 2.8101401329040527, 13.028704643249512, 18.768552780151367, 1.428633451461792], step: 51600, lr: 9.909159412887068e-05 2023-03-27 08:50:25,149 44k INFO ====> Epoch: 71, cost 257.98 s 2023-03-27 08:51:12,071 44k INFO Train Epoch: 72 [15%] 2023-03-27 08:51:12,071 44k INFO Losses: [2.612532138824463, 1.9285584688186646, 7.47923469543457, 13.984415054321289, 0.7528865337371826], step: 51800, lr: 9.907920767960457e-05 2023-03-27 08:52:19,101 44k INFO Train Epoch: 72 [43%] 2023-03-27 08:52:19,101 44k INFO Losses: [2.1794114112854004, 2.6065738201141357, 15.174079895019531, 19.188138961791992, 1.353792428970337], step: 52000, lr: 9.907920767960457e-05 2023-03-27 08:52:22,161 44k INFO Saving model and optimizer state at iteration 72 to ./logs\44k\G_52000.pth 2023-03-27 08:52:22,874 44k INFO Saving model and optimizer state at iteration 72 to ./logs\44k\D_52000.pth 2023-03-27 08:52:23,614 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_49000.pth 2023-03-27 08:52:23,643 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_49000.pth 2023-03-27 08:53:30,801 44k INFO Train Epoch: 72 [70%] 2023-03-27 08:53:30,801 44k INFO Losses: [2.406280279159546, 2.214538097381592, 9.801506996154785, 19.0402774810791, 1.0820358991622925], step: 52200, lr: 9.907920767960457e-05 2023-03-27 08:54:37,657 44k INFO Train Epoch: 72 [98%] 2023-03-27 08:54:37,658 44k INFO Losses: [2.6462512016296387, 2.0470852851867676, 6.258289813995361, 12.651925086975098, 1.5252031087875366], step: 52400, lr: 9.907920767960457e-05 2023-03-27 08:54:43,100 44k INFO ====> Epoch: 72, cost 257.95 s 2023-03-27 08:55:54,357 44k INFO Train Epoch: 73 [25%] 2023-03-27 08:55:54,357 44k INFO Losses: [2.1774866580963135, 2.7009177207946777, 10.36209774017334, 14.559038162231445, 1.0799500942230225], step: 52600, lr: 9.906682277864462e-05 2023-03-27 08:57:00,952 44k INFO Train Epoch: 73 [53%] 2023-03-27 08:57:00,952 44k INFO Losses: [2.44010853767395, 2.3556549549102783, 10.437493324279785, 15.935498237609863, 1.107322096824646], step: 52800, lr: 9.906682277864462e-05 2023-03-27 08:58:08,333 44k INFO Train Epoch: 73 [80%] 2023-03-27 08:58:08,334 44k INFO Losses: [2.380119800567627, 2.2932958602905273, 8.417401313781738, 18.55266571044922, 1.1677244901657104], step: 53000, lr: 9.906682277864462e-05 2023-03-27 08:58:11,398 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\G_53000.pth 2023-03-27 08:58:12,120 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\D_53000.pth 2023-03-27 08:58:12,824 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_50000.pth 2023-03-27 08:58:12,862 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_50000.pth 2023-03-27 08:59:00,859 44k INFO ====> Epoch: 73, cost 257.76 s 2023-03-27 08:59:28,806 44k INFO Train Epoch: 74 [8%] 2023-03-27 08:59:28,806 44k INFO Losses: [2.4849491119384766, 2.3189971446990967, 10.498376846313477, 18.538747787475586, 1.192650318145752], step: 53200, lr: 9.905443942579728e-05 2023-03-27 09:00:36,271 44k INFO Train Epoch: 74 [35%] 2023-03-27 09:00:36,272 44k INFO Losses: [2.5191385746002197, 2.2135612964630127, 11.77253532409668, 16.3004207611084, 0.9401774406433105], step: 53400, lr: 9.905443942579728e-05 2023-03-27 09:01:43,200 44k INFO Train Epoch: 74 [63%] 2023-03-27 09:01:43,200 44k INFO Losses: [2.576526641845703, 2.1391115188598633, 10.905852317810059, 16.06879425048828, 0.9409028887748718], step: 53600, lr: 9.905443942579728e-05 2023-03-27 09:02:50,326 44k INFO Train Epoch: 74 [90%] 2023-03-27 09:02:50,326 44k INFO Losses: [2.6117124557495117, 2.2026305198669434, 9.521074295043945, 19.9237060546875, 1.4228111505508423], step: 53800, lr: 9.905443942579728e-05 2023-03-27 09:03:14,449 44k INFO ====> Epoch: 74, cost 253.59 s 2023-03-27 09:04:06,928 44k INFO Train Epoch: 75 [18%] 2023-03-27 09:04:06,929 44k INFO Losses: [2.6445083618164062, 2.309598207473755, 11.815102577209473, 16.004886627197266, 1.1979665756225586], step: 54000, lr: 9.904205762086905e-05 2023-03-27 09:04:10,100 44k INFO Saving model and optimizer state at iteration 75 to ./logs\44k\G_54000.pth 2023-03-27 09:04:10,833 44k INFO Saving model and optimizer state at iteration 75 to ./logs\44k\D_54000.pth 2023-03-27 09:04:11,516 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_51000.pth 2023-03-27 09:04:11,547 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_51000.pth
2023-03-27 09:05:18,318 44k INFO Train Epoch: 75 [45%]
2023-03-27 09:05:18,319 44k INFO Losses: [2.4360475540161133, 2.3844707012176514, 7.995604038238525, 13.082183837890625, 1.0375971794128418], step: 54200, lr: 9.904205762086905e-05
2023-03-27 09:06:25,696 44k INFO Train Epoch: 75 [73%]
2023-03-27 09:06:25,697 44k INFO Losses: [2.2689263820648193, 2.360481023788452, 14.13078498840332, 18.41478157043457, 0.592846155166626], step: 54400, lr: 9.904205762086905e-05
2023-03-27 09:07:32,745 44k INFO ====> Epoch: 75, cost 258.30 s
2023-03-27 09:07:42,142 44k INFO Train Epoch: 76 [0%]
2023-03-27 09:07:42,143 44k INFO Losses: [2.5342416763305664, 2.6253576278686523, 8.630722999572754, 14.879083633422852, 1.5597903728485107], step: 54600, lr: 9.902967736366644e-05
2023-03-27 09:08:49,771 44k INFO Train Epoch: 76 [27%]
2023-03-27 09:08:49,771 44k INFO Losses: [2.5071299076080322, 2.3492166996002197, 9.460559844970703, 16.63587760925293, 0.7479650378227234], step: 54800, lr: 9.902967736366644e-05
2023-03-27 09:09:56,674 44k INFO Train Epoch: 76 [55%]
2023-03-27 09:09:56,675 44k INFO Losses: [2.448258638381958, 2.424220085144043, 11.648221969604492, 18.77943992614746, 0.9916595816612244], step: 55000, lr: 9.902967736366644e-05
2023-03-27 09:09:59,640 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\G_55000.pth
2023-03-27 09:10:00,414 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\D_55000.pth
2023-03-27 09:10:01,098 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_52000.pth
2023-03-27 09:10:01,127 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_52000.pth
2023-03-27 09:11:08,398 44k INFO Train Epoch: 76 [82%]
2023-03-27 09:11:08,399 44k INFO Losses: [2.4801511764526367, 2.0579822063446045, 10.920303344726562, 14.480094909667969, 0.7020146250724792], step: 55200, lr: 9.902967736366644e-05
2023-03-27 09:11:51,209 44k INFO ====> Epoch: 76, cost 258.46 s
2023-03-27 09:12:24,740 44k INFO Train Epoch: 77 [10%]
2023-03-27 09:12:24,740 44k INFO Losses: [2.4954776763916016, 2.161871910095215, 13.287503242492676, 19.668434143066406, 1.184342622756958], step: 55400, lr: 9.901729865399597e-05
2023-03-27 09:13:32,346 44k INFO Train Epoch: 77 [37%]
2023-03-27 09:13:32,347 44k INFO Losses: [2.6736936569213867, 2.3389627933502197, 8.611727714538574, 13.857053756713867, 1.114434003829956], step: 55600, lr: 9.901729865399597e-05
2023-03-27 09:14:39,481 44k INFO Train Epoch: 77 [65%]
2023-03-27 09:14:39,481 44k INFO Losses: [2.6568355560302734, 2.1494619846343994, 9.386954307556152, 16.363704681396484, 1.428241491317749], step: 55800, lr: 9.901729865399597e-05
2023-03-27 09:15:46,951 44k INFO Train Epoch: 77 [92%]
2023-03-27 09:15:46,951 44k INFO Losses: [2.571241617202759, 2.1371710300445557, 10.406064987182617, 16.92513656616211, 1.3157871961593628], step: 56000, lr: 9.901729865399597e-05
2023-03-27 09:15:49,929 44k INFO Saving model and optimizer state at iteration 77 to ./logs\44k\G_56000.pth
2023-03-27 09:15:50,642 44k INFO Saving model and optimizer state at iteration 77 to ./logs\44k\D_56000.pth
2023-03-27 09:15:51,326 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_53000.pth
2023-03-27 09:15:51,356 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_53000.pth
2023-03-27 09:16:09,915 44k INFO ====> Epoch: 77, cost 258.71 s
2023-03-27 09:17:08,155 44k INFO Train Epoch: 78 [20%]
2023-03-27 09:17:08,156 44k INFO Losses: [2.4894423484802246, 2.0982768535614014, 8.572176933288574, 11.395413398742676, 1.2614377737045288], step: 56200, lr: 9.900492149166423e-05
2023-03-27 09:18:15,346 44k INFO Train Epoch: 78 [47%]
2023-03-27 09:18:15,346 44k INFO Losses: [2.4894745349884033, 2.033205032348633, 12.513121604919434, 17.252788543701172, 1.1480735540390015], step: 56400, lr: 9.900492149166423e-05
2023-03-27 09:19:23,147 44k INFO Train Epoch: 78 [75%]
2023-03-27 09:19:23,147 44k INFO Losses: [2.7561724185943604, 2.347557783126831, 8.912809371948242, 13.371015548706055, 1.0286234617233276], step: 56600, lr: 9.900492149166423e-05
2023-03-27 09:20:25,107 44k INFO ====> Epoch: 78, cost 255.19 s
2023-03-27 09:20:39,749 44k INFO Train Epoch: 79 [2%]
2023-03-27 09:20:39,749 44k INFO Losses: [2.480497360229492, 2.0194153785705566, 10.260211944580078, 15.519022941589355, 0.5704618692398071], step: 56800, lr: 9.899254587647776e-05
2023-03-27 09:21:47,593 44k INFO Train Epoch: 79 [30%]
2023-03-27 09:21:47,594 44k INFO Losses: [2.4960885047912598, 2.5035061836242676, 11.370020866394043, 15.287582397460938, 0.8488313555717468], step: 57000, lr: 9.899254587647776e-05
2023-03-27 09:21:50,612 44k INFO Saving model and optimizer state at iteration 79 to ./logs\44k\G_57000.pth
2023-03-27 09:21:51,326 44k INFO Saving model and optimizer state at iteration 79 to ./logs\44k\D_57000.pth
2023-03-27 09:21:52,003 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_54000.pth
2023-03-27 09:21:52,033 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_54000.pth
2023-03-27 09:22:59,560 44k INFO Train Epoch: 79 [57%]
2023-03-27 09:22:59,560 44k INFO Losses: [2.369487762451172, 2.401801109313965, 12.197699546813965, 19.666080474853516, 0.9156815409660339], step: 57200, lr: 9.899254587647776e-05
2023-03-27 09:24:07,231 44k INFO Train Epoch: 79 [85%]
2023-03-27 09:24:07,232 44k INFO Losses: [2.41656494140625, 2.434971570968628, 9.103594779968262, 15.229981422424316, 1.0728240013122559], step: 57400, lr: 9.899254587647776e-05
2023-03-27 09:24:44,905 44k INFO ====> Epoch: 79, cost 259.80 s
2023-03-27 09:25:23,825 44k INFO Train Epoch: 80 [12%]
2023-03-27 09:25:23,826 44k INFO Losses: [2.6013574600219727, 2.033143997192383, 9.87981128692627, 16.772701263427734, 0.5126125812530518], step: 57600, lr: 9.89801718082432e-05
2023-03-27 09:26:31,611 44k INFO Train Epoch: 80 [40%]
2023-03-27 09:26:31,611 44k INFO Losses: [2.437298536300659, 2.210724115371704, 11.168230056762695, 15.828474044799805, 1.754092812538147], step: 57800, lr: 9.89801718082432e-05
2023-03-27 09:27:39,197 44k INFO Train Epoch: 80 [67%]
2023-03-27 09:27:39,197 44k INFO Losses: [2.7317264080047607, 1.988978624343872, 8.552817344665527, 11.852904319763184, 1.2488106489181519], step: 58000, lr: 9.89801718082432e-05
2023-03-27 09:27:42,258 44k INFO Saving model and optimizer state at iteration 80 to ./logs\44k\G_58000.pth
2023-03-27 09:27:42,961 44k INFO Saving model and optimizer state at iteration 80 to ./logs\44k\D_58000.pth
2023-03-27 09:27:43,629 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_55000.pth
2023-03-27 09:27:43,658 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_55000.pth
2023-03-27 09:28:51,457 44k INFO Train Epoch: 80 [95%]
2023-03-27 09:28:51,457 44k INFO Losses: [2.6654438972473145, 1.8952109813690186, 11.002762794494629, 12.596260070800781, 1.3561766147613525], step: 58200, lr: 9.89801718082432e-05
2023-03-27 09:29:04,813 44k INFO ====> Epoch: 80, cost 259.91 s
2023-03-27 09:30:08,577 44k INFO Train Epoch: 81 [22%]
2023-03-27 09:30:08,577 44k INFO Losses: [2.757061004638672, 2.138958215713501, 8.559090614318848, 15.467161178588867, 0.7661004662513733], step: 58400, lr: 9.896779928676716e-05
2023-03-27 09:31:15,827 44k INFO Train Epoch: 81 [49%]
2023-03-27 09:31:15,828 44k INFO Losses: [2.3698627948760986, 2.256129741668701, 11.400449752807617, 13.041911125183105, 1.1506130695343018], step: 58600, lr: 9.896779928676716e-05
2023-03-27 09:32:23,762 44k INFO Train Epoch: 81 [77%]
2023-03-27 09:32:23,762 44k INFO Losses: [2.2278361320495605, 2.9400899410247803, 7.398653984069824, 11.099531173706055, 1.3681344985961914], step: 58800, lr: 9.896779928676716e-05
2023-03-27 09:33:20,289 44k INFO ====> Epoch: 81, cost 255.48 s
2023-03-27 09:33:40,299 44k INFO Train Epoch: 82 [4%]
2023-03-27 09:33:40,300 44k INFO Losses: [2.5044875144958496, 2.217015504837036, 10.401374816894531, 16.01725959777832, 0.881864070892334], step: 59000, lr: 9.895542831185631e-05
2023-03-27 09:33:43,350 44k INFO Saving model and optimizer state at iteration 82 to ./logs\44k\G_59000.pth
2023-03-27 09:33:44,064 44k INFO Saving model and optimizer state at iteration 82 to ./logs\44k\D_59000.pth
2023-03-27 09:33:44,752 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_56000.pth
2023-03-27 09:33:44,782 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_56000.pth
2023-03-27 09:34:52,751 44k INFO Train Epoch: 82 [32%]
2023-03-27 09:34:52,751 44k INFO Losses: [2.5263190269470215, 2.3537802696228027, 12.890419006347656, 19.255563735961914, 1.4706729650497437], step: 59200, lr: 9.895542831185631e-05
2023-03-27 09:36:00,435 44k INFO Train Epoch: 82 [59%]
2023-03-27 09:36:00,436 44k INFO Losses: [2.501584053039551, 2.37062406539917, 13.14730453491211, 18.04471206665039, 1.194952130317688], step: 59400, lr: 9.895542831185631e-05
2023-03-27 09:37:08,144 44k INFO Train Epoch: 82 [87%]
2023-03-27 09:37:08,144 44k INFO Losses: [2.5431067943573, 2.2430522441864014, 10.63089370727539, 17.376996994018555, 1.565305233001709], step: 59600, lr: 9.895542831185631e-05
2023-03-27 09:37:40,482 44k INFO ====> Epoch: 82, cost 260.19 s
2023-03-27 09:38:25,025 44k INFO Train Epoch: 83 [14%]
2023-03-27 09:38:25,025 44k INFO Losses: [2.6206650733947754, 2.159938097000122, 10.948968887329102, 15.664529800415039, 1.1302636861801147], step: 59800, lr: 9.894305888331732e-05
2023-03-27 09:39:32,738 44k INFO Train Epoch: 83 [42%]
2023-03-27 09:39:32,738 44k INFO Losses: [2.3239502906799316, 2.2825515270233154, 12.042638778686523, 18.646574020385742, 1.213418960571289], step: 60000, lr: 9.894305888331732e-05
2023-03-27 09:39:35,831 44k INFO Saving model and optimizer state at iteration 83 to ./logs\44k\G_60000.pth
2023-03-27 09:39:36,540 44k INFO Saving model and optimizer state at iteration 83 to ./logs\44k\D_60000.pth
2023-03-27 09:39:37,211 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_57000.pth
2023-03-27 09:39:37,257 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_57000.pth
2023-03-27 09:40:45,174 44k INFO Train Epoch: 83 [69%]
2023-03-27 09:40:45,174 44k INFO Losses: [2.3723936080932617, 2.2629446983337402, 15.236505508422852, 16.495887756347656, 1.253631353378296], step: 60200, lr: 9.894305888331732e-05
2023-03-27 09:41:52,769 44k INFO Train Epoch: 83 [97%]
2023-03-27 09:41:52,769 44k INFO Losses: [2.420388698577881, 2.220708131790161, 9.649846076965332, 15.504828453063965, 0.7926158308982849], step: 60400, lr: 9.894305888331732e-05
2023-03-27 09:42:00,900 44k INFO ====> Epoch: 83, cost 260.42 s
2023-03-27 09:43:10,259 44k INFO Train Epoch: 84 [24%]
2023-03-27 09:43:10,259 44k INFO Losses: [2.2579102516174316, 3.0494768619537354, 8.847410202026367, 14.881556510925293, 1.2498337030410767], step: 60600, lr: 9.89306910009569e-05
2023-03-27 09:44:17,534 44k INFO Train Epoch: 84 [52%]
2023-03-27 09:44:17,535 44k INFO Losses: [2.229739189147949, 2.603872537612915, 12.864727020263672, 16.156009674072266, 1.298993468284607], step: 60800, lr: 9.89306910009569e-05
2023-03-27 09:45:25,628 44k INFO Train Epoch: 84 [79%]
2023-03-27 09:45:25,629 44k INFO Losses: [2.0978479385375977, 2.632643699645996, 15.802327156066895, 20.593666076660156, 0.8586452007293701], step: 61000, lr: 9.89306910009569e-05
2023-03-27 09:45:28,656 44k INFO Saving model and optimizer state at iteration 84 to ./logs\44k\G_61000.pth
2023-03-27 09:45:29,359 44k INFO Saving model and optimizer state at iteration 84 to ./logs\44k\D_61000.pth
2023-03-27 09:45:29,991 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_58000.pth
2023-03-27 09:45:30,035 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_58000.pth
2023-03-27 09:46:21,246 44k INFO ====> Epoch: 84, cost 260.35 s
2023-03-27 09:46:46,721 44k INFO Train Epoch: 85 [7%]
2023-03-27 09:46:46,721 44k INFO Losses: [2.446881055831909, 2.3691234588623047, 13.274109840393066, 18.739173889160156, 0.9626705050468445], step: 61200, lr: 9.891832466458178e-05
2023-03-27 09:47:54,829 44k INFO Train Epoch: 85 [34%]
2023-03-27 09:47:54,829 44k INFO Losses: [2.6148881912231445, 2.008669853210449, 10.052166938781738, 15.988804817199707, 0.7912470698356628], step: 61400, lr: 9.891832466458178e-05
2023-03-27 09:49:02,547 44k INFO Train Epoch: 85 [62%]
2023-03-27 09:49:02,548 44k INFO Losses: [2.10653018951416, 2.7517142295837402, 10.341085433959961, 16.993736267089844, 1.4500058889389038], step: 61600, lr: 9.891832466458178e-05
2023-03-27 09:50:10,284 44k INFO Train Epoch: 85 [89%]
2023-03-27 09:50:10,284 44k INFO Losses: [2.5794146060943604, 2.1420674324035645, 9.77205753326416, 15.793490409851074, 1.2589272260665894], step: 61800, lr: 9.891832466458178e-05
2023-03-27 09:50:37,257 44k INFO ====> Epoch: 85, cost 256.01 s
2023-03-27 09:51:27,205 44k INFO Train Epoch: 86 [16%]
2023-03-27 09:51:27,205 44k INFO Losses: [2.645372152328491, 2.064566135406494, 6.887193202972412, 14.77082347869873, 1.1582878828048706], step: 62000, lr: 9.89059598739987e-05
2023-03-27 09:51:30,205 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\G_62000.pth
2023-03-27 09:51:30,916 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\D_62000.pth
2023-03-27 09:51:31,607 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_59000.pth
2023-03-27 09:51:31,650 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_59000.pth
2023-03-27 09:52:39,360 44k INFO Train Epoch: 86 [44%]
2023-03-27 09:52:39,360 44k INFO Losses: [2.9559242725372314, 1.8947367668151855, 8.411797523498535, 12.7291259765625, 1.0017719268798828], step: 62200, lr: 9.89059598739987e-05
2023-03-27 09:53:47,362 44k INFO Train Epoch: 86 [71%]
2023-03-27 09:53:47,363 44k INFO Losses: [2.5458288192749023, 2.639307975769043, 11.652161598205566, 12.950305938720703, 1.098060965538025], step: 62400, lr: 9.89059598739987e-05
2023-03-27 09:54:55,207 44k INFO Train Epoch: 86 [99%]
2023-03-27 09:54:55,207 44k INFO Losses: [2.4177370071411133, 2.3100101947784424, 7.988073348999023, 13.904374122619629, 1.2510274648666382], step: 62600, lr: 9.89059598739987e-05
2023-03-27 09:54:58,041 44k INFO ====> Epoch: 86, cost 260.78 s
2023-03-27 09:56:12,578 44k INFO Train Epoch: 87 [26%]
2023-03-27 09:56:12,578 44k INFO Losses: [2.698122978210449, 1.8715286254882812, 6.978188514709473, 15.623985290527344, 1.0246827602386475], step: 62800, lr: 9.889359662901445e-05
2023-03-27 09:57:20,060 44k INFO Train Epoch: 87 [54%]
2023-03-27 09:57:20,060 44k INFO Losses: [2.3434505462646484, 2.4575462341308594, 14.63865852355957, 19.44290542602539, 0.9221001267433167], step: 63000, lr: 9.889359662901445e-05
2023-03-27 09:57:23,076 44k INFO Saving model and optimizer state at iteration 87 to ./logs\44k\G_63000.pth
2023-03-27 09:57:23,776 44k INFO Saving model and optimizer state at iteration 87 to ./logs\44k\D_63000.pth
2023-03-27 09:57:24,458 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_60000.pth
2023-03-27 09:57:24,500 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_60000.pth
2023-03-27 09:58:32,425 44k INFO Train Epoch: 87 [81%]
2023-03-27 09:58:32,425 44k INFO Losses: [2.363724708557129, 2.596806049346924, 7.8439812660217285, 14.056066513061523, 1.124273419380188], step: 63200, lr: 9.889359662901445e-05
2023-03-27 09:59:18,404 44k INFO ====> Epoch: 87, cost 260.36 s
2023-03-27 09:59:49,244 44k INFO Train Epoch: 88 [9%]
2023-03-27 09:59:49,244 44k INFO Losses: [2.5618557929992676, 2.1410717964172363, 11.616302490234375, 18.56657600402832, 1.073987603187561], step: 63400, lr: 9.888123492943583e-05
2023-03-27 10:00:57,509 44k INFO Train Epoch: 88 [36%]
2023-03-27 10:00:57,509 44k INFO Losses: [2.5383176803588867, 2.1512532234191895, 10.708680152893066, 14.692358016967773, 0.8654391169548035], step: 63600, lr: 9.888123492943583e-05
2023-03-27 10:02:05,123 44k INFO Train Epoch: 88 [64%]
2023-03-27 10:02:05,123 44k INFO Losses: [2.676496982574463, 2.2927393913269043, 11.521885871887207, 18.887176513671875, 0.8159588575363159], step: 63800, lr: 9.888123492943583e-05
2023-03-27 10:03:13,007 44k INFO Train Epoch: 88 [91%]
2023-03-27 10:03:13,007 44k INFO Losses: [2.2897891998291016, 2.447648525238037, 6.326834201812744, 11.280462265014648, 1.217918038368225], step: 64000, lr: 9.888123492943583e-05
2023-03-27 10:03:16,107 44k INFO Saving model and optimizer state at iteration 88 to ./logs\44k\G_64000.pth
2023-03-27 10:03:16,820 44k INFO Saving model and optimizer state at iteration 88 to ./logs\44k\D_64000.pth
2023-03-27 10:03:17,510 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_61000.pth
2023-03-27 10:03:17,553 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_61000.pth
2023-03-27 10:03:39,009 44k INFO ====> Epoch: 88, cost 260.61 s
2023-03-27 10:04:34,669 44k INFO Train Epoch: 89 [19%]
2023-03-27 10:04:34,670 44k INFO Losses: [2.652923107147217, 2.200139045715332, 11.767621040344238, 15.182207107543945, 1.0573797225952148], step: 64200, lr: 9.886887477506964e-05
2023-03-27 10:05:42,202 44k INFO Train Epoch: 89 [46%]
2023-03-27 10:05:42,202 44k INFO Losses: [2.611407995223999, 2.226300001144409, 10.234874725341797, 18.68907928466797, 0.6211757063865662], step: 64400, lr: 9.886887477506964e-05
2023-03-27 10:06:50,268 44k INFO Train Epoch: 89 [74%]
2023-03-27 10:06:50,269 44k INFO Losses: [2.533033847808838, 2.7254953384399414, 9.827921867370605, 16.067947387695312, 1.2690056562423706], step: 64600, lr: 9.886887477506964e-05
2023-03-27 10:07:55,302 44k INFO ====> Epoch: 89, cost 256.29 s
2023-03-27 10:08:07,295 44k INFO Train Epoch: 90 [1%]
2023-03-27 10:08:07,295 44k INFO Losses: [2.5049080848693848, 2.2607452869415283, 12.30127239227295, 18.018543243408203, 1.2478681802749634], step: 64800, lr: 9.885651616572276e-05
2023-03-27 10:09:15,418 44k INFO Train Epoch: 90 [29%]
2023-03-27 10:09:15,419 44k INFO Losses: [2.4821276664733887, 2.1922714710235596, 9.178498268127441, 16.43462562561035, 0.6456231474876404], step: 65000, lr: 9.885651616572276e-05
2023-03-27 10:09:18,437 44k INFO Saving model and optimizer state at iteration 90 to ./logs\44k\G_65000.pth
2023-03-27 10:09:19,149 44k INFO Saving model and optimizer state at iteration 90 to ./logs\44k\D_65000.pth
2023-03-27 10:09:19,822 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_62000.pth
2023-03-27 10:09:19,864 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_62000.pth
2023-03-27 10:10:27,743 44k INFO Train Epoch: 90 [56%]
2023-03-27 10:10:27,743 44k INFO Losses: [2.428854465484619, 2.237983226776123, 11.192479133605957, 12.6817045211792, 0.9100015759468079], step: 65200, lr: 9.885651616572276e-05
2023-03-27 10:11:35,609 44k INFO Train Epoch: 90 [84%]
2023-03-27 10:11:35,610 44k INFO Losses: [2.4400575160980225, 2.6023688316345215, 9.13033390045166, 13.045367240905762, 1.1036146879196167], step: 65400, lr: 9.885651616572276e-05
2023-03-27 10:12:16,274 44k INFO ====> Epoch: 90, cost 260.97 s
2023-03-27 10:12:52,660 44k INFO Train Epoch: 91 [11%]
2023-03-27 10:12:52,660 44k INFO Losses: [2.458656072616577, 2.2818825244903564, 9.076275825500488, 17.8560791015625, 1.1643422842025757], step: 65600, lr: 9.884415910120204e-05
2023-03-27 10:14:00,867 44k INFO Train Epoch: 91 [38%]
2023-03-27 10:14:00,867 44k INFO Losses: [2.3943705558776855, 2.008321762084961, 9.396245002746582, 16.650938034057617, 1.015990138053894], step: 65800, lr: 9.884415910120204e-05
2023-03-27 10:15:08,729 44k INFO Train Epoch: 91 [66%]
2023-03-27 10:15:08,729 44k INFO Losses: [2.3966665267944336, 2.3784823417663574, 8.899850845336914, 18.146732330322266, 0.980642557144165], step: 66000, lr: 9.884415910120204e-05
2023-03-27 10:15:11,854 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\G_66000.pth
2023-03-27 10:15:12,565 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\D_66000.pth
2023-03-27 10:15:13,203 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_63000.pth
2023-03-27 10:15:13,247 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_63000.pth
2023-03-27 10:16:21,051 44k INFO Train Epoch: 91 [93%]
2023-03-27 10:16:21,051 44k INFO Losses: [2.401566505432129, 2.4048519134521484, 12.53559398651123, 17.551095962524414, 0.7241575717926025], step: 66200, lr: 9.884415910120204e-05
2023-03-27 10:16:37,092 44k INFO ====> Epoch: 91, cost 260.82 s
2023-03-27 10:17:38,379 44k INFO Train Epoch: 92 [21%]
2023-03-27 10:17:38,380 44k INFO Losses: [2.57940673828125, 2.2354397773742676, 10.238306999206543, 16.382198333740234, 1.1249817609786987], step: 66400, lr: 9.883180358131438e-05
2023-03-27 10:18:45,601 44k INFO Train Epoch: 92 [48%]
2023-03-27 10:18:45,601 44k INFO Losses: [2.5629360675811768, 2.3697333335876465, 7.760651588439941, 17.12743377685547, 1.1867032051086426], step: 66600, lr: 9.883180358131438e-05
2023-03-27 10:19:53,691 44k INFO Train Epoch: 92 [76%]
2023-03-27 10:19:53,692 44k INFO Losses: [2.30710768699646, 2.605785846710205, 11.880829811096191, 20.20801544189453, 0.9133543968200684], step: 66800, lr: 9.883180358131438e-05
2023-03-27 10:20:52,926 44k INFO ====> Epoch: 92, cost 255.83 s
2023-03-27 10:21:10,690 44k INFO Train Epoch: 93 [3%]
2023-03-27 10:21:10,691 44k INFO Losses: [2.368762969970703, 2.275001287460327, 8.692131042480469, 14.863801956176758, 1.3694063425064087], step: 67000, lr: 9.881944960586671e-05
2023-03-27 10:21:13,781 44k INFO Saving model and optimizer state at iteration 93 to ./logs\44k\G_67000.pth
2023-03-27 10:21:14,500 44k INFO Saving model and optimizer state at iteration 93 to ./logs\44k\D_67000.pth
2023-03-27 10:21:15,182 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_64000.pth
2023-03-27 10:21:15,211 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_64000.pth
2023-03-27 10:22:23,132 44k INFO Train Epoch: 93 [31%]
2023-03-27 10:22:23,132 44k INFO Losses: [2.4633541107177734, 2.5360605716705322, 12.693946838378906, 18.599523544311523, 1.53462553024292], step: 67200, lr: 9.881944960586671e-05
2023-03-27 10:23:30,942 44k INFO Train Epoch: 93 [58%]
2023-03-27 10:23:30,943 44k INFO Losses: [2.324860095977783, 2.4223039150238037, 13.684011459350586, 19.509849548339844, 1.481391429901123], step: 67400, lr: 9.881944960586671e-05
2023-03-27 10:24:38,805 44k INFO Train Epoch: 93 [86%]
2023-03-27 10:24:38,805 44k INFO Losses: [2.5111193656921387, 2.4296417236328125, 14.481210708618164, 19.52738380432129, 1.2184337377548218], step: 67600, lr: 9.881944960586671e-05
2023-03-27 10:25:13,738 44k INFO ====> Epoch: 93, cost 260.81 s
2023-03-27 10:25:55,738 44k INFO Train Epoch: 94 [13%]
2023-03-27 10:25:55,739 44k INFO Losses: [2.6084084510803223, 2.2933523654937744, 11.146780014038086, 17.421825408935547, 1.5685172080993652], step: 67800, lr: 9.880709717466598e-05
2023-03-27 10:27:03,662 44k INFO Train Epoch: 94 [41%]
2023-03-27 10:27:03,663 44k INFO Losses: [2.8471169471740723, 2.1127469539642334, 9.294934272766113, 16.581998825073242, 1.8182706832885742], step: 68000, lr: 9.880709717466598e-05
2023-03-27 10:27:06,727 44k INFO Saving model and optimizer state at iteration 94 to ./logs\44k\G_68000.pth
2023-03-27 10:27:07,437 44k INFO Saving model and optimizer state at iteration 94 to ./logs\44k\D_68000.pth
2023-03-27 10:27:08,126 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_65000.pth
2023-03-27 10:27:08,157 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_65000.pth
2023-03-27 10:28:16,048 44k INFO Train Epoch: 94 [68%]
2023-03-27 10:28:16,048 44k INFO Losses: [2.6941723823547363, 2.103513240814209, 8.850523948669434, 17.6321964263916, 0.6080607771873474], step: 68200, lr: 9.880709717466598e-05
2023-03-27 10:29:23,884 44k INFO Train Epoch: 94 [96%]
2023-03-27 10:29:23,884 44k INFO Losses: [2.4888651371002197, 2.2022500038146973, 8.31200885772705, 13.02242660522461, 1.215746283531189], step: 68400, lr: 9.880709717466598e-05
2023-03-27 10:29:34,580 44k INFO ====> Epoch: 94, cost 260.84 s
2023-03-27 10:30:41,275 44k INFO Train Epoch: 95 [23%]
2023-03-27 10:30:41,276 44k INFO Losses: [2.1030433177948, 2.4543075561523438, 12.44829273223877, 18.340776443481445, 1.222771406173706], step: 68600, lr: 9.879474628751914e-05
2023-03-27 10:31:48,485 44k INFO Train Epoch: 95 [51%]
2023-03-27 10:31:48,486 44k INFO Losses: [2.3531479835510254, 2.309704065322876, 11.59144401550293, 17.722187042236328, 1.3691909313201904], step: 68800, lr: 9.879474628751914e-05
2023-03-27 10:32:56,689 44k INFO Train Epoch: 95 [78%]
2023-03-27 10:32:56,690 44k INFO Losses: [2.389883518218994, 2.3950564861297607, 11.576066017150879, 19.307968139648438, 0.9323539137840271], step: 69000, lr: 9.879474628751914e-05
2023-03-27 10:32:59,936 44k INFO Saving model and optimizer state at iteration 95 to ./logs\44k\G_69000.pth
2023-03-27 10:33:00,655 44k INFO Saving model and optimizer state at iteration 95 to ./logs\44k\D_69000.pth
2023-03-27 10:33:01,339 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_66000.pth
2023-03-27 10:33:01,379 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_66000.pth
2023-03-27 10:33:55,441 44k INFO ====> Epoch: 95, cost 260.86 s
2023-03-27 10:34:18,465 44k INFO Train Epoch: 96 [5%]
2023-03-27 10:34:18,465 44k INFO Losses: [2.443260669708252, 2.2726147174835205, 12.488019943237305, 19.576217651367188, 1.2582616806030273], step: 69200, lr: 9.87823969442332e-05
2023-03-27 10:35:26,724 44k INFO Train Epoch: 96 [33%]
2023-03-27 10:35:26,725 44k INFO Losses: [2.1468734741210938, 2.5233166217803955, 12.892313957214355, 16.5212345123291, 1.1461820602416992], step: 69400, lr: 9.87823969442332e-05
2023-03-27 10:36:34,493 44k INFO Train Epoch: 96 [60%]
2023-03-27 10:36:34,493 44k INFO Losses: [2.3785758018493652, 2.255988836288452, 12.241409301757812, 18.599990844726562, 0.7781578898429871], step: 69600, lr: 9.87823969442332e-05
2023-03-27 10:37:42,318 44k INFO Train Epoch: 96 [88%]
2023-03-27 10:37:42,318 44k INFO Losses: [2.696678876876831, 2.2925474643707275, 11.508566856384277, 17.313579559326172, 1.0286533832550049], step: 69800, lr: 9.87823969442332e-05
2023-03-27 10:38:12,108 44k INFO ====> Epoch: 96, cost 256.67 s
2023-03-27 10:38:59,610 44k INFO Train Epoch: 97 [15%]
2023-03-27 10:38:59,610 44k INFO Losses: [2.643277406692505, 2.0512044429779053, 7.638540267944336, 12.838200569152832, 0.7692441940307617], step: 70000, lr: 9.877004914461517e-05
2023-03-27 10:39:02,672 44k INFO Saving model and optimizer state at iteration 97 to ./logs\44k\G_70000.pth
2023-03-27 10:39:03,377 44k INFO Saving model and optimizer state at iteration 97 to ./logs\44k\D_70000.pth
2023-03-27 10:39:04,053 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_67000.pth
2023-03-27 10:39:04,083 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_67000.pth
2023-03-27 10:40:11,914 44k INFO Train Epoch: 97 [43%]
2023-03-27 10:40:11,914 44k INFO Losses: [2.168562889099121, 2.708719491958618, 14.94810676574707, 17.953975677490234, 0.8051972985267639], step: 70200, lr: 9.877004914461517e-05
2023-03-27 10:41:20,069 44k INFO Train Epoch: 97 [70%]
2023-03-27 10:41:20,070 44k INFO Losses: [2.499677896499634, 2.1549808979034424, 7.863663673400879, 18.47486686706543, 1.098233699798584], step: 70400, lr: 9.877004914461517e-05
2023-03-27 10:42:27,958 44k INFO Train Epoch: 97 [98%]
2023-03-27 10:42:27,958 44k INFO Losses: [2.8005690574645996, 2.240535259246826, 6.180616855621338, 9.663869857788086, 0.7978716492652893], step: 70600, lr: 9.877004914461517e-05
2023-03-27 10:42:33,424 44k INFO ====> Epoch: 97, cost 261.32 s
2023-03-27 10:43:45,478 44k INFO Train Epoch: 98 [25%]
2023-03-27 10:43:45,478 44k INFO Losses: [2.19970703125, 2.8031203746795654, 6.03167724609375, 11.370527267456055, 1.09547758102417], step: 70800, lr: 9.875770288847208e-05
2023-03-27 10:44:52,899 44k INFO Train Epoch: 98 [53%]
2023-03-27 10:44:52,900 44k INFO Losses: [2.294466972351074, 2.4905385971069336, 12.073990821838379, 17.916563034057617, 1.0834671258926392], step: 71000, lr: 9.875770288847208e-05
2023-03-27 10:44:55,933 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\G_71000.pth
2023-03-27 10:44:56,637 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\D_71000.pth
2023-03-27 10:44:57,305 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_68000.pth
2023-03-27 10:44:57,334 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_68000.pth
2023-03-27 10:46:05,264 44k INFO Train Epoch: 98 [80%]
2023-03-27 10:46:05,264 44k INFO Losses: [2.4300363063812256, 2.497838020324707, 9.2059965133667, 18.86980628967285, 1.0573924779891968], step: 71200, lr: 9.875770288847208e-05
2023-03-27 10:46:53,949 44k INFO ====> Epoch: 98, cost 260.52 s
2023-03-27 10:47:22,115 44k INFO Train Epoch: 99 [8%]
2023-03-27 10:47:22,116 44k INFO Losses: [2.602912425994873, 2.3608086109161377, 10.209246635437012, 16.99795913696289, 1.1690219640731812], step: 71400, lr: 9.874535817561101e-05
2023-03-27 10:48:30,223 44k INFO Train Epoch: 99 [35%]
2023-03-27 10:48:30,224 44k INFO Losses: [2.4729363918304443, 2.2970941066741943, 10.366796493530273, 16.536209106445312, 1.2900009155273438], step: 71600, lr: 9.874535817561101e-05
2023-03-27 10:49:37,956 44k INFO Train Epoch: 99 [63%]
2023-03-27 10:49:37,956 44k INFO Losses: [2.5005741119384766, 2.462015151977539, 11.020744323730469, 18.92160415649414, 0.6700186729431152], step: 71800, lr: 9.874535817561101e-05
2023-03-27 10:50:45,847 44k INFO Train Epoch: 99 [90%]
2023-03-27 10:50:45,848 44k INFO Losses: [2.3498570919036865, 2.5182924270629883, 15.790327072143555, 23.053510665893555, 1.3967245817184448], step: 72000, lr: 9.874535817561101e-05
2023-03-27 10:50:48,890 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\G_72000.pth
2023-03-27 10:50:49,640 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\D_72000.pth
2023-03-27 10:50:50,298 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_69000.pth
2023-03-27 10:50:50,330 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_69000.pth
2023-03-27 10:51:14,551 44k INFO ====> Epoch: 99, cost 260.60 s
2023-03-27 10:52:07,442 44k INFO Train Epoch: 100 [18%]
2023-03-27 10:52:07,442 44k INFO Losses: [2.833078384399414, 1.8384217023849487, 9.7644624710083, 13.030872344970703, 0.9471670389175415], step: 72200, lr: 9.873301500583906e-05
2023-03-27 10:53:15,109 44k INFO Train Epoch: 100 [45%]
2023-03-27 10:53:15,109 44k INFO Losses: [2.7899234294891357, 1.903559684753418, 8.450926780700684, 15.736007690429688, 0.8415020108222961], step: 72400, lr: 9.873301500583906e-05
2023-03-27 10:54:23,035 44k INFO Train Epoch: 100 [73%]
2023-03-27 10:54:23,035 44k INFO Losses: [2.52764892578125, 1.9666798114776611, 13.632672309875488, 18.309648513793945, 1.66559898853302], step: 72600, lr: 9.873301500583906e-05
2023-03-27 10:55:30,673 44k INFO ====> Epoch: 100, cost 256.12 s
2023-03-27 12:13:35,360 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 200, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kageaki': 0}, 'model_dir': './logs\\44k'}
2023-03-27 12:13:35,387 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current)
2023-03-27 12:13:37,389 44k INFO Loaded checkpoint './logs\44k\G_72000.pth' (iteration 99)
2023-03-27 12:13:37,799 44k INFO Loaded checkpoint './logs\44k\D_72000.pth' (iteration 99)
2023-03-27 12:14:17,317 44k INFO Train Epoch: 99 [8%]
2023-03-27 12:14:17,317 44k INFO Losses: [2.4348437786102295, 2.5712356567382812, 8.39177131652832, 12.634210586547852, 0.8774119019508362], step: 71400, lr: 9.873301500583906e-05
2023-03-27 12:15:39,401 44k INFO Train Epoch: 99 [35%]
2023-03-27 12:15:39,402 44k INFO Losses: [2.651897430419922, 2.146714687347412, 14.591571807861328, 18.341400146484375, 0.7078648209571838], step: 71600, lr: 9.873301500583906e-05
2023-03-27 12:17:01,236 44k INFO Train Epoch: 99 [63%]
2023-03-27 12:17:01,237 44k INFO Losses: [2.2096171379089355, 2.514815330505371, 11.411490440368652, 17.86860466003418, 1.3949413299560547], step: 71800, lr: 9.873301500583906e-05
2023-03-27 12:19:03,016 44k INFO Train Epoch: 99 [90%]
2023-03-27 12:19:03,016 44k INFO Losses: [2.5020246505737305, 2.4017527103424072, 11.527349472045898, 16.090713500976562, 0.9901983141899109], step: 72000, lr: 9.873301500583906e-05
2023-03-27 12:19:07,423 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\G_72000.pth
2023-03-27 12:19:08,736 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\D_72000.pth
2023-03-27 12:19:59,550 44k INFO ====> Epoch: 99, cost 384.19 s
2023-03-27 12:21:15,781 44k INFO Train Epoch: 100 [18%]
2023-03-27 12:21:15,782 44k INFO Losses: [2.6060869693756104, 2.0447590351104736, 8.84882640838623, 12.711616516113281, 1.2293591499328613], step: 72200, lr: 9.872067337896332e-05
2023-03-27 12:22:45,289 44k INFO Train Epoch: 100 [45%]
2023-03-27 12:22:45,290 44k INFO Losses: [2.4637222290039062, 2.3659462928771973, 10.681584358215332, 16.34398651123047, 0.8613430261611938], step: 72400, lr: 9.872067337896332e-05
2023-03-27 12:23:54,567 44k INFO Train Epoch: 100 [73%]
2023-03-27 12:23:54,567 44k INFO Losses: [2.2245213985443115, 2.206394672393799, 14.194622993469238, 18.18129539489746, 1.1893965005874634], step: 72600, lr: 9.872067337896332e-05
2023-03-27 12:25:02,303 44k INFO ====> Epoch: 100, cost 302.75 s
2023-03-27 12:25:11,509 44k INFO Train Epoch: 101 [0%]
2023-03-27 12:25:11,509 44k INFO Losses: [2.4752614498138428, 2.4758574962615967, 12.119150161743164, 18.3714599609375, 0.8370764851570129], step: 72800, lr: 9.870833329479095e-05
2023-03-27 12:26:19,019 44k INFO Train Epoch: 101 [27%]
2023-03-27 12:26:19,019 44k INFO Losses: [2.4892377853393555, 2.5481040477752686, 9.334031105041504, 12.267107009887695, 1.1779216527938843], step: 73000, lr: 9.870833329479095e-05
2023-03-27 12:26:22,015 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\G_73000.pth
2023-03-27 12:26:22,766 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\D_73000.pth
2023-03-27 12:26:23,432 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_70000.pth
2023-03-27 12:26:23,462 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_70000.pth 2023-03-27 12:27:30,117 44k INFO Train Epoch: 101 [55%] 2023-03-27 12:27:30,118 44k INFO Losses: [2.3131895065307617, 2.472618341445923, 12.410420417785645, 16.442771911621094, 0.862964391708374], step: 73200, lr: 9.870833329479095e-05 2023-03-27 12:28:37,712 44k INFO Train Epoch: 101 [82%] 2023-03-27 12:28:37,712 44k INFO Losses: [2.59165620803833, 2.0692942142486572, 8.069720268249512, 13.383352279663086, 0.7775800824165344], step: 73400, lr: 9.870833329479095e-05 2023-03-27 12:29:21,660 44k INFO ====> Epoch: 101, cost 259.36 s 2023-03-27 12:29:55,157 44k INFO Train Epoch: 102 [10%] 2023-03-27 12:29:55,157 44k INFO Losses: [2.7009117603302, 2.3749582767486572, 11.927902221679688, 15.293622970581055, 1.0188430547714233], step: 73600, lr: 9.86959947531291e-05 2023-03-27 12:31:03,038 44k INFO Train Epoch: 102 [37%] 2023-03-27 12:31:03,038 44k INFO Losses: [2.5282649993896484, 2.0014209747314453, 11.034998893737793, 14.594316482543945, 1.3118880987167358], step: 73800, lr: 9.86959947531291e-05 2023-03-27 12:32:09,746 44k INFO Train Epoch: 102 [65%] 2023-03-27 12:32:09,747 44k INFO Losses: [2.2230703830718994, 2.7719953060150146, 15.792462348937988, 21.89268684387207, 0.8152710199356079], step: 74000, lr: 9.86959947531291e-05 2023-03-27 12:32:12,831 44k INFO Saving model and optimizer state at iteration 102 to ./logs\44k\G_74000.pth 2023-03-27 12:32:13,550 44k INFO Saving model and optimizer state at iteration 102 to ./logs\44k\D_74000.pth 2023-03-27 12:32:14,231 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_71000.pth 2023-03-27 12:32:14,261 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_71000.pth 2023-03-27 12:33:21,489 44k INFO Train Epoch: 102 [92%] 2023-03-27 12:33:21,489 44k INFO Losses: [2.3974928855895996, 2.1928038597106934, 14.533015251159668, 19.143226623535156, 1.3842853307724], step: 74200, lr: 9.86959947531291e-05 2023-03-27 12:33:40,074 44k INFO ====> Epoch: 102, cost 258.42 s 2023-03-27 12:34:38,554 44k INFO Train Epoch: 103 [20%] 2023-03-27 12:34:38,555 44k INFO Losses: [2.443582057952881, 2.2670466899871826, 10.582295417785645, 15.578861236572266, 1.322725772857666], step: 74400, lr: 9.868365775378495e-05 2023-03-27 12:35:46,688 44k INFO Train Epoch: 103 [47%] 2023-03-27 12:35:46,688 44k INFO Losses: [2.5593314170837402, 2.337940216064453, 12.922138214111328, 18.703842163085938, 1.0991365909576416], step: 74600, lr: 9.868365775378495e-05 2023-03-27 12:36:54,702 44k INFO Train Epoch: 103 [75%] 2023-03-27 12:36:54,702 44k INFO Losses: [2.471512794494629, 2.6648130416870117, 11.05189037322998, 18.465909957885742, 0.7434163689613342], step: 74800, lr: 9.868365775378495e-05 2023-03-27 12:37:56,539 44k INFO ====> Epoch: 103, cost 256.47 s 2023-03-27 12:38:11,073 44k INFO Train Epoch: 104 [2%] 2023-03-27 12:38:11,073 44k INFO Losses: [2.465465545654297, 2.0937652587890625, 10.516608238220215, 16.3874568939209, 0.8988189697265625], step: 75000, lr: 9.867132229656573e-05 2023-03-27 12:38:13,992 44k INFO Saving model and optimizer state at iteration 104 to ./logs\44k\G_75000.pth 2023-03-27 12:38:14,751 44k INFO Saving model and optimizer state at iteration 104 to ./logs\44k\D_75000.pth 2023-03-27 12:38:15,427 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_72000.pth 2023-03-27 12:38:15,457 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_72000.pth 2023-03-27 12:39:22,933 44k INFO Train Epoch: 104 [30%] 2023-03-27 12:39:22,933 44k INFO Losses: [2.5167758464813232, 2.2083561420440674, 12.65423583984375, 17.339153289794922, 1.2356699705123901], step: 75200, lr: 9.867132229656573e-05 2023-03-27 12:40:30,294 44k INFO Train Epoch: 104 [57%] 2023-03-27 12:40:30,295 44k INFO Losses: [2.3871705532073975, 2.480431079864502, 12.6452054977417, 15.679166793823242, 0.843224048614502], step: 75400, lr: 9.867132229656573e-05 2023-03-27 12:41:38,023 44k INFO Train Epoch: 104 [85%] 2023-03-27 12:41:38,024 44k INFO Losses: [2.420592784881592, 2.4057936668395996, 11.165764808654785, 16.020112991333008, 1.1586799621582031], step: 75600, lr: 9.867132229656573e-05 2023-03-27 12:42:15,491 44k INFO ====> Epoch: 104, cost 258.95 s 2023-03-27 12:42:54,189 44k INFO Train Epoch: 105 [12%] 2023-03-27 12:42:54,190 44k INFO Losses: [2.6001663208007812, 2.202397108078003, 9.97301959991455, 14.058331489562988, 0.9148542284965515], step: 75800, lr: 9.865898838127865e-05 2023-03-27 12:44:01,903 44k INFO Train Epoch: 105 [40%] 2023-03-27 12:44:01,904 44k INFO Losses: [2.463834524154663, 2.2824764251708984, 13.756756782531738, 18.136884689331055, 0.5447065234184265], step: 76000, lr: 9.865898838127865e-05 2023-03-27 12:44:04,875 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\G_76000.pth 2023-03-27 12:44:05,636 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\D_76000.pth 2023-03-27 12:44:06,311 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_73000.pth 2023-03-27 12:44:06,346 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_73000.pth 2023-03-27 12:45:13,660 44k INFO Train Epoch: 105 [67%] 2023-03-27 12:45:13,660 44k INFO Losses: [2.601439952850342, 2.2808635234832764, 7.377594470977783, 11.197066307067871, 1.1623420715332031], step: 76200, lr: 9.865898838127865e-05 2023-03-27 12:46:21,329 44k INFO Train Epoch: 105 [95%] 2023-03-27 12:46:21,330 44k INFO Losses: [2.529184579849243, 2.2835798263549805, 11.05025577545166, 14.239054679870605, 1.2312272787094116], step: 76400, lr: 9.865898838127865e-05 2023-03-27 12:46:34,645 44k INFO ====> Epoch: 105, cost 259.15 s 2023-03-27 12:47:37,977 44k INFO Train Epoch: 106 [22%] 2023-03-27 12:47:37,978 44k INFO Losses: [2.798792839050293, 1.92172110080719, 7.6208696365356445, 13.695590019226074, 0.9409084320068359], step: 76600, lr: 9.864665600773098e-05 2023-03-27 12:48:44,960 44k INFO Train Epoch: 106 [49%] 2023-03-27 12:48:44,960 44k INFO Losses: [2.343873977661133, 2.353515148162842, 15.509693145751953, 19.006458282470703, 1.1046106815338135], step: 76800, lr: 9.864665600773098e-05 2023-03-27 12:49:52,790 44k INFO Train Epoch: 106 [77%] 2023-03-27 12:49:52,791 44k INFO Losses: [2.6160166263580322, 2.2875919342041016, 7.154178619384766, 15.258353233337402, 1.0502036809921265], step: 77000, lr: 9.864665600773098e-05 2023-03-27 12:49:55,743 44k INFO Saving model and optimizer state at iteration 106 to ./logs\44k\G_77000.pth 2023-03-27 12:49:56,488 44k INFO Saving model and optimizer state at iteration 106 to ./logs\44k\D_77000.pth 2023-03-27 12:49:57,154 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_74000.pth 2023-03-27 12:49:57,185 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_74000.pth 2023-03-27 12:50:53,471 44k INFO ====> Epoch: 106, cost 258.83 s 2023-03-27 12:51:13,370 44k INFO Train Epoch: 107 [4%] 2023-03-27 12:51:13,370 44k INFO Losses: [2.203268527984619, 2.5181057453155518, 13.976646423339844, 16.563934326171875, 0.6826151013374329], step: 77200, lr: 9.863432517573002e-05 2023-03-27 12:52:21,204 44k INFO Train Epoch: 107 [32%] 2023-03-27 12:52:21,204 44k INFO Losses: [2.27400279045105, 2.5690054893493652, 12.847528457641602, 16.360177993774414, 1.2894262075424194], step: 77400, lr: 9.863432517573002e-05 2023-03-27 12:53:28,489 44k INFO Train Epoch: 107 [59%] 2023-03-27 12:53:28,489 44k INFO Losses: [2.303175210952759, 2.4103751182556152, 13.342178344726562, 17.28183937072754, 0.836655855178833], step: 77600, lr: 9.863432517573002e-05 2023-03-27 12:54:36,049 44k INFO Train Epoch: 107 [87%] 2023-03-27 12:54:36,049 44k INFO Losses: [2.503974437713623, 2.163163423538208, 8.760103225708008, 15.991340637207031, 0.9702242612838745], step: 77800, lr: 9.863432517573002e-05 2023-03-27 12:55:08,164 44k INFO ====> Epoch: 107, cost 254.69 s 2023-03-27 12:55:52,394 44k INFO Train Epoch: 108 [14%] 2023-03-27 12:55:52,395 44k INFO Losses: [2.1355862617492676, 2.834092140197754, 10.547039031982422, 13.025786399841309, 1.1245698928833008], step: 78000, lr: 9.862199588508305e-05 2023-03-27 12:55:55,415 44k INFO Saving model and optimizer state at iteration 108 to ./logs\44k\G_78000.pth 2023-03-27 12:55:56,119 44k INFO Saving model and optimizer state at iteration 108 to ./logs\44k\D_78000.pth 2023-03-27 12:55:56,797 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_75000.pth 2023-03-27 12:55:56,838 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_75000.pth 2023-03-27 12:57:04,226 44k INFO Train Epoch: 108 [42%] 2023-03-27 12:57:04,226 44k INFO Losses: [2.378483295440674, 2.3517823219299316, 14.680780410766602, 17.86983871459961, 1.4253157377243042], step: 78200, lr: 9.862199588508305e-05 2023-03-27 12:58:11,965 44k INFO Train Epoch: 108 [69%] 2023-03-27 12:58:11,966 44k INFO Losses: [2.3196725845336914, 2.5335030555725098, 10.268716812133789, 12.672445297241211, 1.127027988433838], step: 78400, lr: 9.862199588508305e-05 2023-03-27 12:59:19,389 44k INFO Train Epoch: 108 [97%] 2023-03-27 12:59:19,389 44k INFO Losses: [2.5724217891693115, 2.0518925189971924, 11.528395652770996, 17.576967239379883, 1.791426658630371], step: 78600, lr: 9.862199588508305e-05 2023-03-27 12:59:27,462 44k INFO ====> Epoch: 108, cost 259.30 s 2023-03-27 13:00:36,280 44k INFO Train Epoch: 109 [24%] 2023-03-27 13:00:36,281 44k INFO Losses: [2.6780686378479004, 1.990158200263977, 4.01690673828125, 9.786018371582031, 1.6045328378677368], step: 78800, lr: 9.86096681355974e-05 2023-03-27 13:01:43,253 44k INFO Train Epoch: 109 [52%] 2023-03-27 13:01:43,253 44k INFO Losses: [2.414707660675049, 2.227842330932617, 10.66486930847168, 15.470245361328125, 0.5932280421257019], step: 79000, lr: 9.86096681355974e-05 2023-03-27 13:01:46,266 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\G_79000.pth 2023-03-27 13:01:46,970 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\D_79000.pth 2023-03-27 13:01:47,651 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_76000.pth 2023-03-27 13:01:47,689 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_76000.pth 2023-03-27 13:02:55,472 44k INFO Train Epoch: 109 [79%] 2023-03-27 13:02:55,473 44k INFO Losses: [2.2089273929595947, 2.6002044677734375, 15.727290153503418, 19.947200775146484, 1.6000703573226929], step: 79200, lr: 9.86096681355974e-05 2023-03-27 13:03:46,447 44k INFO ====> Epoch: 109, cost 258.99 s 2023-03-27 13:04:11,799 44k INFO Train Epoch: 110 [7%] 2023-03-27 13:04:11,799 44k INFO Losses: [2.127159357070923, 2.5274534225463867, 15.780406951904297, 17.78264808654785, 0.952700138092041], step: 79400, lr: 9.859734192708044e-05 2023-03-27 13:05:19,729 44k INFO Train Epoch: 110 [34%] 2023-03-27 13:05:19,729 44k INFO Losses: [2.556774139404297, 1.9934130907058716, 11.546521186828613, 16.752378463745117, 0.8224717378616333], step: 79600, lr: 9.859734192708044e-05 2023-03-27 13:06:27,124 44k INFO Train Epoch: 110 [62%] 2023-03-27 13:06:27,124 44k INFO Losses: [2.34139084815979, 2.2031123638153076, 11.949182510375977, 15.673786163330078, 0.5135354399681091], step: 79800, lr: 9.859734192708044e-05 2023-03-27 13:07:34,733 44k INFO Train Epoch: 110 [89%] 2023-03-27 13:07:34,733 44k INFO Losses: [2.5691781044006348, 1.862890362739563, 10.39355754852295, 13.057476043701172, 0.5731720924377441], step: 80000, lr: 9.859734192708044e-05 2023-03-27 13:07:37,723 44k INFO Saving model and optimizer state at iteration 110 to ./logs\44k\G_80000.pth 2023-03-27 13:07:38,428 44k INFO Saving model and optimizer state at iteration 110 to ./logs\44k\D_80000.pth 2023-03-27 13:07:39,133 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_77000.pth 2023-03-27 13:07:39,177 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_77000.pth 2023-03-27 13:08:05,992 44k INFO ====> Epoch: 110, cost 259.54 s 2023-03-27 13:08:56,001 44k INFO Train Epoch: 111 [16%] 2023-03-27 13:08:56,001 44k INFO Losses: [2.474480152130127, 2.3595356941223145, 6.8975300788879395, 13.602337837219238, 1.1703765392303467], step: 80200, lr: 9.858501725933955e-05 2023-03-27 13:10:03,349 44k INFO Train Epoch: 111 [44%] 2023-03-27 13:10:03,349 44k INFO Losses: [2.64355731010437, 2.337001085281372, 5.414426326751709, 9.962859153747559, 0.8919627666473389], step: 80400, lr: 9.858501725933955e-05 2023-03-27 13:11:11,027 44k INFO Train Epoch: 111 [71%] 2023-03-27 13:11:11,028 44k INFO Losses: [2.679555892944336, 2.162147045135498, 11.415311813354492, 15.856695175170898, 0.6383696794509888], step: 80600, lr: 9.858501725933955e-05 2023-03-27 13:12:18,483 44k INFO Train Epoch: 111 [99%] 2023-03-27 13:12:18,484 44k INFO Losses: [2.6848974227905273, 2.112835168838501, 12.171782493591309, 15.803391456604004, 1.357503890991211], step: 80800, lr: 9.858501725933955e-05 2023-03-27 13:12:21,286 44k INFO ====> Epoch: 111, cost 255.29 s 2023-03-27 13:13:36,182 44k INFO Train Epoch: 112 [26%] 2023-03-27 13:13:36,183 44k INFO Losses: [2.724102735519409, 2.054863691329956, 8.025656700134277, 13.621991157531738, 1.3960614204406738], step: 81000, lr: 9.857269413218213e-05 2023-03-27 13:13:39,190 44k INFO Saving model and optimizer state at iteration 112 to ./logs\44k\G_81000.pth 2023-03-27 13:13:39,902 44k INFO Saving model and optimizer state at iteration 112 to ./logs\44k\D_81000.pth 2023-03-27 13:13:40,591 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_78000.pth 2023-03-27 13:13:40,634 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_78000.pth 2023-03-27 13:14:47,775 44k INFO Train Epoch: 112 [54%] 2023-03-27 13:14:47,775 44k INFO Losses: [2.3164775371551514, 2.278571367263794, 11.871475219726562, 16.55125617980957, 1.3624759912490845], step: 81200, lr: 9.857269413218213e-05 2023-03-27 13:15:55,675 44k INFO Train Epoch: 112 [81%] 2023-03-27 13:15:55,676 44k INFO Losses: [2.6310672760009766, 2.0589427947998047, 11.208613395690918, 18.50446128845215, 0.9529449939727783], step: 81400, lr: 9.857269413218213e-05 2023-03-27 13:16:41,434 44k INFO ====> Epoch: 112, cost 260.15 s 2023-03-27 13:17:12,073 44k INFO Train Epoch: 113 [9%] 2023-03-27 13:17:12,073 44k INFO Losses: [2.180023193359375, 2.5004353523254395, 15.920842170715332, 18.626564025878906, 0.8610014319419861], step: 81600, lr: 9.85603725454156e-05 2023-03-27 13:18:20,237 44k INFO Train Epoch: 113 [36%] 2023-03-27 13:18:20,238 44k INFO Losses: [2.456219434738159, 1.9856081008911133, 11.5902681350708, 14.547880172729492, 0.8561496734619141], step: 81800, lr: 9.85603725454156e-05 2023-03-27 13:19:27,656 44k INFO Train Epoch: 113 [64%] 2023-03-27 13:19:27,656 44k INFO Losses: [2.10225248336792, 2.781777858734131, 11.171092987060547, 17.04320526123047, 0.6312467455863953], step: 82000, lr: 9.85603725454156e-05 2023-03-27 13:19:30,681 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\G_82000.pth 2023-03-27 13:19:31,388 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\D_82000.pth 2023-03-27 13:19:32,075 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_79000.pth 2023-03-27 13:19:32,115 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_79000.pth 2023-03-27 13:20:39,777 44k INFO Train Epoch: 113 [91%] 2023-03-27 13:20:39,777 44k INFO Losses: [2.373352527618408, 2.666152000427246, 8.027796745300293, 13.553764343261719, 1.1292203664779663], step: 82200, lr: 9.85603725454156e-05 2023-03-27 13:21:01,281 44k INFO ====> Epoch: 113, cost 259.85 s 2023-03-27 13:21:56,565 44k INFO Train Epoch: 114 [19%] 2023-03-27 13:21:56,565 44k INFO Losses: [2.5420944690704346, 2.1668472290039062, 12.619447708129883, 18.443052291870117, 0.9266313910484314], step: 82400, lr: 9.854805249884741e-05 2023-03-27 13:23:03,946 44k INFO Train Epoch: 114 [46%] 2023-03-27 13:23:03,947 44k INFO Losses: [2.6177918910980225, 2.119377374649048, 10.283655166625977, 15.406288146972656, 1.2504183053970337], step: 82600, lr: 9.854805249884741e-05 2023-03-27 13:24:11,884 44k INFO Train Epoch: 114 [74%] 2023-03-27 13:24:11,884 44k INFO Losses: [2.6256606578826904, 2.3324456214904785, 7.041474342346191, 14.254242897033691, 1.3699219226837158], step: 82800, lr: 9.854805249884741e-05 2023-03-27 13:25:16,615 44k INFO ====> Epoch: 114, cost 255.33 s 2023-03-27 13:25:28,451 44k INFO Train Epoch: 115 [1%] 2023-03-27 13:25:28,451 44k INFO Losses: [2.541729688644409, 2.0798444747924805, 12.779916763305664, 18.400514602661133, 1.1198002099990845], step: 83000, lr: 9.853573399228505e-05 2023-03-27 13:25:31,407 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\G_83000.pth 2023-03-27 13:25:32,116 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\D_83000.pth 2023-03-27 13:25:32,802 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_80000.pth 2023-03-27 13:25:32,843 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_80000.pth 2023-03-27 13:26:40,796 44k INFO Train Epoch: 115 [29%] 2023-03-27 13:26:40,797 44k INFO Losses: [2.42712664604187, 2.162043333053589, 10.940171241760254, 16.15842056274414, 0.6202219724655151], step: 83200, lr: 9.853573399228505e-05 2023-03-27 13:27:48,369 44k INFO Train Epoch: 115 [56%] 2023-03-27 13:27:48,369 44k INFO Losses: [2.708317995071411, 1.9868583679199219, 9.093711853027344, 15.559638023376465, 1.1399279832839966], step: 83400, lr: 9.853573399228505e-05 2023-03-27 13:28:56,178 44k INFO Train Epoch: 115 [84%] 2023-03-27 13:28:56,178 44k INFO Losses: [2.623553991317749, 2.1352577209472656, 9.555756568908691, 15.787287712097168, 0.9375064969062805], step: 83600, lr: 9.853573399228505e-05 2023-03-27 13:29:36,651 44k INFO ====> Epoch: 115, cost 260.04 s 2023-03-27 13:30:12,862 44k INFO Train Epoch: 116 [11%] 2023-03-27 13:30:12,863 44k INFO Losses: [2.6370203495025635, 2.4510903358459473, 8.803317070007324, 18.191526412963867, 0.9313756823539734], step: 83800, lr: 9.8523417025536e-05 2023-03-27 13:31:20,895 44k INFO Train Epoch: 116 [38%] 2023-03-27 13:31:20,896 44k INFO Losses: [2.524714469909668, 2.030845880508423, 11.53816032409668, 17.30068588256836, 0.6842089891433716], step: 84000, lr: 9.8523417025536e-05 2023-03-27 13:31:23,884 44k INFO Saving model and optimizer state at iteration 116 to ./logs\44k\G_84000.pth 2023-03-27 13:31:24,599 44k INFO Saving model and optimizer state at iteration 116 to ./logs\44k\D_84000.pth 2023-03-27 13:31:25,273 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_81000.pth 2023-03-27 13:31:25,316 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_81000.pth 2023-03-27 13:32:33,040 44k INFO Train Epoch: 116 [66%] 2023-03-27 13:32:33,041 44k INFO Losses: [2.194868564605713, 2.435614824295044, 8.63675594329834, 13.368159294128418, 1.2561767101287842], step: 84200, lr: 9.8523417025536e-05 2023-03-27 13:33:40,888 44k INFO Train Epoch: 116 [93%] 2023-03-27 13:33:40,889 44k INFO Losses: [2.1960082054138184, 2.6319186687469482, 9.576705932617188, 13.304611206054688, 1.2090452909469604], step: 84400, lr: 9.8523417025536e-05 2023-03-27 13:33:56,965 44k INFO ====> Epoch: 116, cost 260.31 s 2023-03-27 13:34:57,931 44k INFO Train Epoch: 117 [21%] 2023-03-27 13:34:57,931 44k INFO Losses: [2.649101734161377, 2.078822135925293, 7.6171417236328125, 15.24785041809082, 0.8831215500831604], step: 84600, lr: 9.851110159840781e-05 2023-03-27 13:36:05,171 44k INFO Train Epoch: 117 [48%] 2023-03-27 13:36:05,171 44k INFO Losses: [2.640836238861084, 2.282090902328491, 8.354618072509766, 15.465442657470703, 1.3392205238342285], step: 84800, lr: 9.851110159840781e-05 2023-03-27 13:37:13,377 44k INFO Train Epoch: 117 [76%] 2023-03-27 13:37:13,378 44k INFO Losses: [2.664289712905884, 2.157564401626587, 11.378355979919434, 19.247217178344727, 0.8662973046302795], step: 85000, lr: 9.851110159840781e-05 2023-03-27 13:37:16,364 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\G_85000.pth 2023-03-27 13:37:17,066 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\D_85000.pth 2023-03-27 13:37:17,745 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_82000.pth 2023-03-27 13:37:17,787 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_82000.pth 2023-03-27 13:38:17,034 44k INFO ====> Epoch: 117, cost 260.07 s 2023-03-27 13:38:34,258 44k INFO Train Epoch: 118 [3%] 2023-03-27 13:38:34,258 44k INFO Losses: [2.611931562423706, 2.088862895965576, 10.430831909179688, 15.212746620178223, 0.8558792471885681], step: 85200, lr: 9.8498787710708e-05 2023-03-27 13:39:42,171 44k INFO Train Epoch: 118 [31%] 2023-03-27 13:39:42,171 44k INFO Losses: [2.5694851875305176, 2.528306007385254, 13.160137176513672, 18.801305770874023, 1.0350357294082642], step: 85400, lr: 9.8498787710708e-05 2023-03-27 13:40:50,007 44k INFO Train Epoch: 118 [58%] 2023-03-27 13:40:50,007 44k INFO Losses: [2.5096335411071777, 2.3541483879089355, 9.525830268859863, 19.643125534057617, 1.0550159215927124], step: 85600, lr: 9.8498787710708e-05 2023-03-27 13:41:58,091 44k INFO Train Epoch: 118 [86%] 2023-03-27 13:41:58,091 44k INFO Losses: [2.176217794418335, 2.6089282035827637, 18.118877410888672, 20.49241828918457, 1.1078695058822632], step: 85800, lr: 9.8498787710708e-05 2023-03-27 13:42:33,083 44k INFO ====> Epoch: 118, cost 256.05 s 2023-03-27 13:43:14,669 44k INFO Train Epoch: 119 [13%] 2023-03-27 13:43:14,669 44k INFO Losses: [2.274916887283325, 2.4879000186920166, 10.44037914276123, 15.744054794311523, 1.4524390697479248], step: 86000, lr: 9.848647536224416e-05 2023-03-27 13:43:17,633 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\G_86000.pth 2023-03-27 13:43:18,350 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\D_86000.pth 2023-03-27 13:43:19,024 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_83000.pth 2023-03-27 13:43:19,062 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_83000.pth 2023-03-27 13:44:26,986 44k INFO Train Epoch: 119 [41%] 2023-03-27 13:44:26,986 44k INFO Losses: [2.5259358882904053, 2.6662046909332275, 12.670214653015137, 19.52736473083496, 1.0412936210632324], step: 86200, lr: 9.848647536224416e-05 2023-03-27 13:45:35,009 44k INFO Train Epoch: 119 [68%] 2023-03-27 13:45:35,009 44k INFO Losses: [2.2850208282470703, 2.3220601081848145, 11.486074447631836, 15.63641357421875, 0.7805141806602478], step: 86400, lr: 9.848647536224416e-05 2023-03-27 13:46:42,938 44k INFO Train Epoch: 119 [96%] 2023-03-27 13:46:42,939 44k INFO Losses: [2.57535982131958, 2.1463816165924072, 7.440435886383057, 12.506307601928711, 1.1697274446487427], step: 86600, lr: 9.848647536224416e-05 2023-03-27 13:46:53,715 44k INFO ====> Epoch: 119, cost 260.63 s 2023-03-27 13:48:00,206 44k INFO Train Epoch: 120 [23%] 2023-03-27 13:48:00,206 44k INFO Losses: [2.4117231369018555, 2.178804874420166, 12.348126411437988, 16.350168228149414, 1.253532886505127], step: 86800, lr: 9.847416455282387e-05 2023-03-27 13:49:07,493 44k INFO Train Epoch: 120 [51%] 2023-03-27 13:49:07,493 44k INFO Losses: [2.425507068634033, 2.2284810543060303, 11.459692001342773, 15.559046745300293, 1.1339112520217896], step: 87000, lr: 9.847416455282387e-05 2023-03-27 13:49:10,513 44k INFO Saving model and optimizer state at iteration 120 to ./logs\44k\G_87000.pth 2023-03-27 13:49:11,261 44k INFO Saving model and optimizer state at iteration 120 to ./logs\44k\D_87000.pth 2023-03-27 13:49:11,935 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_84000.pth 2023-03-27 13:49:11,977 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_84000.pth 2023-03-27 13:50:20,081 44k INFO Train Epoch: 120 [78%] 2023-03-27 13:50:20,081 44k INFO Losses: [2.313100576400757, 2.414890766143799, 7.014580726623535, 16.158422470092773, 1.2937185764312744], step: 87200, lr: 9.847416455282387e-05 2023-03-27 13:51:14,102 44k INFO ====> Epoch: 120, cost 260.39 s 2023-03-27 13:51:36,745 44k INFO Train Epoch: 121 [5%] 2023-03-27 13:51:36,745 44k INFO Losses: [2.3715641498565674, 2.143541097640991, 9.41915225982666, 14.613849639892578, 1.320333480834961], step: 87400, lr: 9.846185528225477e-05 2023-03-27 13:52:45,065 44k INFO Train Epoch: 121 [33%] 2023-03-27 13:52:45,066 44k INFO Losses: [2.417620897293091, 2.5016586780548096, 14.392127990722656, 19.61948585510254, 0.8386576771736145], step: 87600, lr: 9.846185528225477e-05 2023-03-27 13:53:52,951 44k INFO Train Epoch: 121 [60%] 2023-03-27 13:53:52,952 44k INFO Losses: [2.303069591522217, 2.4034230709075928, 13.988245010375977, 18.910375595092773, 1.2991782426834106], step: 87800, lr: 9.846185528225477e-05 2023-03-27 13:55:00,845 44k INFO Train Epoch: 121 [88%] 2023-03-27 13:55:00,845 44k INFO Losses: [2.3199329376220703, 2.5918655395507812, 13.291627883911133, 15.782567977905273, 1.4391109943389893], step: 88000, lr: 9.846185528225477e-05 2023-03-27 13:55:03,815 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\G_88000.pth 2023-03-27 13:55:04,530 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\D_88000.pth 2023-03-27 13:55:05,169 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_85000.pth 2023-03-27 13:55:05,211 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_85000.pth 2023-03-27 13:55:34,913 44k INFO ====> Epoch: 121, cost 260.81 s 2023-03-27 13:56:22,115 44k INFO Train Epoch: 122 [15%] 2023-03-27 13:56:22,116 44k INFO Losses: [2.7308802604675293, 2.061307191848755, 10.29356575012207, 17.962575912475586, 0.591837465763092], step: 88200, lr: 9.84495475503445e-05 2023-03-27 13:57:30,061 44k INFO Train Epoch: 122 [43%] 2023-03-27 13:57:30,061 44k INFO Losses: [2.241570234298706, 2.654221296310425, 15.993969917297363, 19.82196807861328, 0.9829413294792175], step: 88400, lr: 9.84495475503445e-05 2023-03-27 13:58:38,153 44k INFO Train Epoch: 122 [70%] 2023-03-27 13:58:38,153 44k INFO Losses: [2.3974883556365967, 2.300828695297241, 10.449115753173828, 18.890296936035156, 1.666900396347046], step: 88600, lr: 9.84495475503445e-05 2023-03-27 13:59:46,075 44k INFO Train Epoch: 122 [98%] 2023-03-27 13:59:46,076 44k INFO Losses: [2.517932891845703, 2.024247407913208, 6.202577114105225, 10.079996109008789, 0.970980167388916], step: 88800, lr: 9.84495475503445e-05 2023-03-27 13:59:51,554 44k INFO ====> Epoch: 122, cost 256.64 s 2023-03-27 14:01:03,378 44k INFO Train Epoch: 123 [25%] 2023-03-27 14:01:03,379 44k INFO Losses: [2.7180895805358887, 2.0692272186279297, 9.984378814697266, 20.102041244506836, 0.9706941843032837], step: 89000, lr: 9.84372413569007e-05 2023-03-27 14:01:06,382 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\G_89000.pth 2023-03-27 14:01:07,139 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\D_89000.pth 2023-03-27 14:01:07,825 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_86000.pth 2023-03-27 14:01:07,867 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_86000.pth 2023-03-27 14:02:15,316 44k INFO Train Epoch: 123 [53%] 2023-03-27 14:02:15,316 44k INFO Losses: [2.143726348876953, 2.94023060798645, 8.493382453918457, 12.534914016723633, 1.4559874534606934], step: 89200, lr: 9.84372413569007e-05 2023-03-27 14:03:23,642 44k INFO Train Epoch: 123 [80%] 2023-03-27 14:03:23,643 44k INFO Losses: [2.6396396160125732, 2.0959489345550537, 7.3073649406433105, 12.132244110107422, 1.4792531728744507], step: 89400, lr: 9.84372413569007e-05 2023-03-27 14:04:12,328 44k INFO ====> Epoch: 123, cost 260.77 s 2023-03-27 14:04:40,427 44k INFO Train Epoch: 124 [8%] 2023-03-27 14:04:40,427 44k INFO Losses: [2.2964463233947754, 2.4600906372070312, 8.195866584777832, 13.579864501953125, 1.2955306768417358], step: 89600, lr: 9.842493670173108e-05 2023-03-27 14:05:48,879 44k INFO Train Epoch: 124 [35%] 2023-03-27 14:05:48,880 44k INFO Losses: [2.623645305633545, 2.445340394973755, 11.764336585998535, 18.0286808013916, 1.386080265045166], step: 89800, lr: 9.842493670173108e-05 2023-03-27 14:06:56,732 44k INFO Train Epoch: 124 [63%] 2023-03-27 14:06:56,733 44k INFO Losses: [2.4131460189819336, 2.335277557373047, 9.214767456054688, 16.533306121826172, 0.810753345489502], step: 90000, lr: 9.842493670173108e-05 2023-03-27 14:06:59,740 44k INFO Saving model and optimizer state at iteration 124 to ./logs\44k\G_90000.pth 2023-03-27 14:07:00,500 44k INFO Saving model and optimizer state at iteration 124 to ./logs\44k\D_90000.pth 2023-03-27 14:07:01,175 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_87000.pth 2023-03-27 14:07:01,217 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_87000.pth
2023-03-27 14:08:09,175 44k INFO Train Epoch: 124 [90%]
2023-03-27 14:08:09,175 44k INFO Losses: [2.605581045150757, 2.1606476306915283, 6.098758220672607, 10.581480026245117, 1.205618143081665], step: 90200, lr: 9.842493670173108e-05
2023-03-27 14:08:33,542 44k INFO ====> Epoch: 124, cost 261.21 s
2023-03-27 14:09:26,327 44k INFO Train Epoch: 125 [18%]
2023-03-27 14:09:26,328 44k INFO Losses: [2.3343300819396973, 2.400332450866699, 10.80459976196289, 15.311936378479004, 1.1067113876342773], step: 90400, lr: 9.841263358464336e-05
2023-03-27 14:10:34,076 44k INFO Train Epoch: 125 [45%]
2023-03-27 14:10:34,077 44k INFO Losses: [2.468352794647217, 2.077848196029663, 8.279953002929688, 15.702730178833008, 1.082648515701294], step: 90600, lr: 9.841263358464336e-05
2023-03-27 14:11:42,312 44k INFO Train Epoch: 125 [73%]
2023-03-27 14:11:42,312 44k INFO Losses: [2.2950875759124756, 2.340846061706543, 13.293840408325195, 19.21088409423828, 0.3491358160972595], step: 90800, lr: 9.841263358464336e-05
2023-03-27 14:12:50,172 44k INFO ====> Epoch: 125, cost 256.63 s
2023-03-27 14:12:59,505 44k INFO Train Epoch: 126 [0%]
2023-03-27 14:12:59,505 44k INFO Losses: [2.4738352298736572, 2.7565836906433105, 14.72641658782959, 18.949485778808594, 1.366944670677185], step: 91000, lr: 9.840033200544528e-05
2023-03-27 14:13:02,580 44k INFO Saving model and optimizer state at iteration 126 to ./logs\44k\G_91000.pth
2023-03-27 14:13:03,308 44k INFO Saving model and optimizer state at iteration 126 to ./logs\44k\D_91000.pth
2023-03-27 14:13:04,001 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_88000.pth
2023-03-27 14:13:04,045 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_88000.pth
2023-03-27 14:14:12,200 44k INFO Train Epoch: 126 [27%]
2023-03-27 14:14:12,201 44k INFO Losses: [2.6147589683532715, 2.1034915447235107, 9.158388137817383, 13.210701942443848, 0.995079517364502], step: 91200, lr: 9.840033200544528e-05
2023-03-27 14:15:19,753 44k INFO Train Epoch: 126 [55%]
2023-03-27 14:15:19,753 44k INFO Losses: [2.6738967895507812, 2.2981300354003906, 10.085954666137695, 12.697712898254395, 1.0554875135421753], step: 91400, lr: 9.840033200544528e-05
2023-03-27 14:16:27,946 44k INFO Train Epoch: 126 [82%]
2023-03-27 14:16:27,946 44k INFO Losses: [2.3990859985351562, 2.2725510597229004, 12.445283889770508, 16.0645751953125, 0.6943820118904114], step: 91600, lr: 9.840033200544528e-05
2023-03-27 14:17:11,223 44k INFO ====> Epoch: 126, cost 261.05 s
2023-03-27 14:17:44,983 44k INFO Train Epoch: 127 [10%]
2023-03-27 14:17:44,984 44k INFO Losses: [2.278993606567383, 2.648637056350708, 13.523165702819824, 16.500654220581055, 1.2258127927780151], step: 91800, lr: 9.838803196394459e-05
2023-03-27 14:18:53,298 44k INFO Train Epoch: 127 [37%]
2023-03-27 14:18:53,298 44k INFO Losses: [2.504955768585205, 2.1485142707824707, 11.31916618347168, 14.286796569824219, 1.490210771560669], step: 92000, lr: 9.838803196394459e-05
2023-03-27 14:18:56,317 44k INFO Saving model and optimizer state at iteration 127 to ./logs\44k\G_92000.pth
2023-03-27 14:18:57,045 44k INFO Saving model and optimizer state at iteration 127 to ./logs\44k\D_92000.pth
2023-03-27 14:18:57,721 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_89000.pth
2023-03-27 14:18:57,751 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_89000.pth
2023-03-27 14:20:05,589 44k INFO Train Epoch: 127 [65%]
2023-03-27 14:20:05,590 44k INFO Losses: [2.2720296382904053, 2.650409460067749, 10.728671073913574, 13.801231384277344, 1.238929033279419], step: 92200, lr: 9.838803196394459e-05
2023-03-27 14:21:13,672 44k INFO Train Epoch: 127 [92%]
2023-03-27 14:21:13,672 44k INFO Losses: [2.451129913330078, 2.323255777359009, 12.86911678314209, 19.724218368530273, 1.2852063179016113], step: 92400, lr: 9.838803196394459e-05
2023-03-27 14:21:32,388 44k INFO ====> Epoch: 127, cost 261.16 s
2023-03-27 14:22:30,803 44k INFO Train Epoch: 128 [20%]
2023-03-27 14:22:30,804 44k INFO Losses: [2.441765785217285, 2.3499560356140137, 12.566608428955078, 16.090864181518555, 1.2203800678253174], step: 92600, lr: 9.837573345994909e-05
2023-03-27 14:23:38,430 44k INFO Train Epoch: 128 [47%]
2023-03-27 14:23:38,430 44k INFO Losses: [2.247427225112915, 2.602201461791992, 17.82349395751953, 20.305980682373047, 1.3381398916244507], step: 92800, lr: 9.837573345994909e-05
2023-03-27 14:24:46,555 44k INFO Train Epoch: 128 [75%]
2023-03-27 14:24:46,555 44k INFO Losses: [2.577996253967285, 2.1698861122131348, 12.894185066223145, 16.62539291381836, 1.0367746353149414], step: 93000, lr: 9.837573345994909e-05
2023-03-27 14:24:49,585 44k INFO Saving model and optimizer state at iteration 128 to ./logs\44k\G_93000.pth
2023-03-27 14:24:50,297 44k INFO Saving model and optimizer state at iteration 128 to ./logs\44k\D_93000.pth
2023-03-27 14:24:50,959 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_90000.pth
2023-03-27 14:24:50,996 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_90000.pth
2023-03-27 14:25:53,175 44k INFO ====> Epoch: 128, cost 260.79 s
2023-03-27 14:26:07,948 44k INFO Train Epoch: 129 [2%]
2023-03-27 14:26:07,948 44k INFO Losses: [2.2519760131835938, 2.4910011291503906, 9.473901748657227, 13.260887145996094, 1.3339678049087524], step: 93200, lr: 9.836343649326659e-05
2023-03-27 14:27:15,973 44k INFO Train Epoch: 129 [30%]
2023-03-27 14:27:15,974 44k INFO Losses: [2.2672696113586426, 2.346916913986206, 13.35938549041748, 14.922430038452148, 0.7104300260543823], step: 93400, lr: 9.836343649326659e-05
2023-03-27 14:28:23,848 44k INFO Train Epoch: 129 [57%]
2023-03-27 14:28:23,849 44k INFO Losses: [2.322690963745117, 2.811371088027954, 14.771958351135254, 19.046003341674805, 0.9448843002319336], step: 93600, lr: 9.836343649326659e-05
2023-03-27 14:29:31,952 44k INFO Train Epoch: 129 [85%]
2023-03-27 14:29:31,952 44k INFO Losses: [2.6401140689849854, 2.1454553604125977, 9.756328582763672, 13.059168815612793, 0.8442462086677551], step: 93800, lr: 9.836343649326659e-05
2023-03-27 14:30:09,723 44k INFO ====> Epoch: 129, cost 256.55 s
2023-03-27 14:30:48,845 44k INFO Train Epoch: 130 [12%]
2023-03-27 14:30:48,845 44k INFO Losses: [2.637706756591797, 1.9094399213790894, 9.75123119354248, 14.142217636108398, 1.0872833728790283], step: 94000, lr: 9.835114106370493e-05
2023-03-27 14:30:51,874 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\G_94000.pth
2023-03-27 14:30:52,587 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\D_94000.pth
2023-03-27 14:30:53,268 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_91000.pth
2023-03-27 14:30:53,297 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_91000.pth
2023-03-27 14:32:01,341 44k INFO Train Epoch: 130 [40%]
2023-03-27 14:32:01,341 44k INFO Losses: [2.4986302852630615, 2.113473892211914, 11.459731101989746, 17.573223114013672, 1.0043880939483643], step: 94200, lr: 9.835114106370493e-05
2023-03-27 14:33:09,186 44k INFO Train Epoch: 130 [67%]
2023-03-27 14:33:09,186 44k INFO Losses: [2.1630566120147705, 2.5992746353149414, 10.064453125, 11.365838050842285, 1.1850638389587402], step: 94400, lr: 9.835114106370493e-05
2023-03-27 14:34:17,232 44k INFO Train Epoch: 130 [95%]
2023-03-27 14:34:17,232 44k INFO Losses: [2.6287331581115723, 2.105196952819824, 11.576847076416016, 17.186288833618164, 1.1629074811935425], step: 94600, lr: 9.835114106370493e-05
2023-03-27 14:34:30,581 44k INFO ====> Epoch: 130, cost 260.86 s
2023-03-27 14:35:34,755 44k INFO Train Epoch: 131 [22%]
2023-03-27 14:35:34,756 44k INFO Losses: [2.5036888122558594, 2.251817226409912, 10.505562782287598, 15.144889831542969, 1.0145012140274048], step: 94800, lr: 9.833884717107196e-05
2023-03-27 14:36:42,196 44k INFO Train Epoch: 131 [49%]
2023-03-27 14:36:42,197 44k INFO Losses: [2.348914861679077, 2.392517566680908, 14.428376197814941, 19.006141662597656, 1.054394006729126], step: 95000, lr: 9.833884717107196e-05
2023-03-27 14:36:45,309 44k INFO Saving model and optimizer state at iteration 131 to ./logs\44k\G_95000.pth
2023-03-27 14:36:46,027 44k INFO Saving model and optimizer state at iteration 131 to ./logs\44k\D_95000.pth
2023-03-27 14:36:46,695 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_92000.pth
2023-03-27 14:36:46,724 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_92000.pth
2023-03-27 14:37:54,807 44k INFO Train Epoch: 131 [77%]
2023-03-27 14:37:54,807 44k INFO Losses: [2.561729907989502, 2.4310176372528076, 8.27837085723877, 13.641626358032227, 1.368639588356018], step: 95200, lr: 9.833884717107196e-05
2023-03-27 14:38:51,586 44k INFO ====> Epoch: 131, cost 261.01 s
2023-03-27 14:39:11,830 44k INFO Train Epoch: 132 [4%]
2023-03-27 14:39:11,830 44k INFO Losses: [2.365891933441162, 2.4172942638397217, 9.230785369873047, 12.603485107421875, 1.1986470222473145], step: 95400, lr: 9.832655481517557e-05
2023-03-27 14:40:20,187 44k INFO Train Epoch: 132 [32%]
2023-03-27 14:40:20,187 44k INFO Losses: [2.21093487739563, 2.392054796218872, 13.396657943725586, 19.0899658203125, 1.1902186870574951], step: 95600, lr: 9.832655481517557e-05
2023-03-27 14:41:28,153 44k INFO Train Epoch: 132 [59%]
2023-03-27 14:41:28,153 44k INFO Losses: [2.3397505283355713, 2.4707789421081543, 10.663190841674805, 12.734728813171387, 1.0408005714416504], step: 95800, lr: 9.832655481517557e-05
2023-03-27 14:42:36,289 44k INFO Train Epoch: 132 [87%]
2023-03-27 14:42:36,289 44k INFO Losses: [2.387188673019409, 2.308434009552002, 13.141822814941406, 20.08868408203125, 1.1010388135910034], step: 96000, lr: 9.832655481517557e-05
2023-03-27 14:42:39,373 44k INFO Saving model and optimizer state at iteration 132 to ./logs\44k\G_96000.pth
2023-03-27 14:42:40,087 44k INFO Saving model and optimizer state at iteration 132 to ./logs\44k\D_96000.pth
2023-03-27 14:42:40,759 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_93000.pth
2023-03-27 14:42:40,788 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_93000.pth
2023-03-27 14:43:13,090 44k INFO ====> Epoch: 132, cost 261.50 s
2023-03-27 14:43:57,873 44k INFO Train Epoch: 133 [14%]
2023-03-27 14:43:57,874 44k INFO Losses: [2.4841949939727783, 2.118575096130371, 8.97131061553955, 14.887880325317383, 0.759651243686676], step: 96200, lr: 9.831426399582366e-05
2023-03-27 14:45:05,859 44k INFO Train Epoch: 133 [42%]
2023-03-27 14:45:05,860 44k INFO Losses: [2.403714179992676, 2.265246868133545, 12.613387107849121, 16.49553680419922, 1.3381483554840088], step: 96400, lr: 9.831426399582366e-05
2023-03-27 14:46:14,245 44k INFO Train Epoch: 133 [69%]
2023-03-27 14:46:14,246 44k INFO Losses: [2.1047136783599854, 2.7479944229125977, 11.447468757629395, 14.411041259765625, 0.7167878150939941], step: 96600, lr: 9.831426399582366e-05
2023-03-27 14:47:22,349 44k INFO Train Epoch: 133 [97%]
2023-03-27 14:47:22,349 44k INFO Losses: [2.166965961456299, 2.523128032684326, 14.369072914123535, 18.409772872924805, 1.1904950141906738], step: 96800, lr: 9.831426399582366e-05
2023-03-27 14:47:30,470 44k INFO ====> Epoch: 133, cost 257.38 s
2023-03-27 14:48:39,789 44k INFO Train Epoch: 134 [24%]
2023-03-27 14:48:39,789 44k INFO Losses: [2.305598735809326, 2.683852434158325, 13.826458930969238, 19.00025749206543, 1.3864483833312988], step: 97000, lr: 9.830197471282419e-05
2023-03-27 14:48:42,869 44k INFO Saving model and optimizer state at iteration 134 to ./logs\44k\G_97000.pth
2023-03-27 14:48:43,598 44k INFO Saving model and optimizer state at iteration 134 to ./logs\44k\D_97000.pth
2023-03-27 14:48:44,307 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_94000.pth
2023-03-27 14:48:44,338 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_94000.pth
2023-03-27 14:49:51,866 44k INFO Train Epoch: 134 [52%]
2023-03-27 14:49:51,867 44k INFO Losses: [2.46956205368042, 2.2305238246917725, 10.669975280761719, 16.3850154876709, 0.6860055923461914], step: 97200, lr: 9.830197471282419e-05
2023-03-27 14:51:00,166 44k INFO Train Epoch: 134 [79%]
2023-03-27 14:51:00,166 44k INFO Losses: [2.504387855529785, 2.320173501968384, 16.26110076904297, 20.069215774536133, 1.466880440711975], step: 97400, lr: 9.830197471282419e-05
2023-03-27 14:51:51,761 44k INFO ====> Epoch: 134, cost 261.29 s
2023-03-27 14:52:17,468 44k INFO Train Epoch: 135 [7%]
2023-03-27 14:52:17,469 44k INFO Losses: [2.6631360054016113, 2.145369529724121, 12.21800708770752, 14.610208511352539, 1.6118963956832886], step: 97600, lr: 9.828968696598508e-05
2023-03-27 14:53:25,940 44k INFO Train Epoch: 135 [34%]
2023-03-27 14:53:25,941 44k INFO Losses: [2.4203219413757324, 2.4113621711730957, 9.930030822753906, 12.872572898864746, 0.6050727963447571], step: 97800, lr: 9.828968696598508e-05
2023-03-27 14:54:33,812 44k INFO Train Epoch: 135 [62%]
2023-03-27 14:54:33,813 44k INFO Losses: [2.4057984352111816, 2.1684584617614746, 10.371155738830566, 12.391551971435547, 1.008943796157837], step: 98000, lr: 9.828968696598508e-05
2023-03-27 14:54:36,936 44k INFO Saving model and optimizer state at iteration 135 to ./logs\44k\G_98000.pth
2023-03-27 14:54:37,649 44k INFO Saving model and optimizer state at iteration 135 to ./logs\44k\D_98000.pth
2023-03-27 14:54:38,319 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_95000.pth
2023-03-27 14:54:38,348 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_95000.pth
2023-03-27 14:55:46,431 44k INFO Train Epoch: 135 [89%]
2023-03-27 14:55:46,431 44k INFO Losses: [2.403571128845215, 2.290073871612549, 14.366744041442871, 19.334867477416992, 1.361986517906189], step: 98200, lr: 9.828968696598508e-05
2023-03-27 14:56:13,502 44k INFO ====> Epoch: 135, cost 261.74 s
2023-03-27 14:57:03,741 44k INFO Train Epoch: 136 [16%]
2023-03-27 14:57:03,742 44k INFO Losses: [2.6262168884277344, 2.1490728855133057, 8.030757904052734, 12.999443054199219, 1.2259801626205444], step: 98400, lr: 9.827740075511432e-05
2023-03-27 14:58:11,716 44k INFO Train Epoch: 136 [44%]
2023-03-27 14:58:11,716 44k INFO Losses: [2.5191750526428223, 2.411257266998291, 9.710187911987305, 15.61648178100586, 0.9205350279808044], step: 98600, lr: 9.827740075511432e-05
2023-03-27 14:59:19,954 44k INFO Train Epoch: 136 [71%]
2023-03-27 14:59:19,955 44k INFO Losses: [2.6556243896484375, 2.386894702911377, 11.972926139831543, 18.036529541015625, 0.27573081851005554], step: 98800, lr: 9.827740075511432e-05
2023-03-27 15:00:28,121 44k INFO Train Epoch: 136 [99%]
2023-03-27 15:00:28,121 44k INFO Losses: [2.533910036087036, 1.9907898902893066, 10.633753776550293, 14.944938659667969, 1.1219426393508911], step: 99000, lr: 9.827740075511432e-05
2023-03-27 15:00:31,207 44k INFO Saving model and optimizer state at iteration 136 to ./logs\44k\G_99000.pth
2023-03-27 15:00:31,915 44k INFO Saving model and optimizer state at iteration 136 to ./logs\44k\D_99000.pth
2023-03-27 15:00:32,596 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_96000.pth
2023-03-27 15:00:32,626 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_96000.pth
2023-03-27 15:00:35,302 44k INFO ====> Epoch: 136, cost 261.80 s
2023-03-27 15:01:50,028 44k INFO Train Epoch: 137 [26%]
2023-03-27 15:01:50,029 44k INFO Losses: [2.797995090484619, 2.2133543491363525, 10.750372886657715, 16.179981231689453, 1.1933153867721558], step: 99200, lr: 9.826511608001993e-05
2023-03-27 15:02:57,836 44k INFO Train Epoch: 137 [54%]
2023-03-27 15:02:57,836 44k INFO Losses: [2.691668748855591, 2.0290465354919434, 13.734187126159668, 19.059707641601562, 1.1915942430496216], step: 99400, lr: 9.826511608001993e-05
2023-03-27 15:04:06,151 44k INFO Train Epoch: 137 [81%]
2023-03-27 15:04:06,151 44k INFO Losses: [2.5505921840667725, 2.1899213790893555, 10.821756362915039, 19.117794036865234, 1.147972822189331], step: 99600, lr: 9.826511608001993e-05
2023-03-27 15:04:52,236 44k INFO ====> Epoch: 137, cost 256.93 s
2023-03-27 15:05:23,452 44k INFO Train Epoch: 138 [9%]
2023-03-27 15:05:23,452 44k INFO Losses: [2.5873024463653564, 2.2780096530914307, 7.501673698425293, 13.136208534240723, 1.199578881263733], step: 99800, lr: 9.825283294050992e-05
2023-03-27 15:06:31,959 44k INFO Train Epoch: 138 [36%]
2023-03-27 15:06:31,960 44k INFO Losses: [2.320906400680542, 2.3057990074157715, 10.177536964416504, 13.376052856445312, 0.7447084784507751], step: 100000, lr: 9.825283294050992e-05
2023-03-27 15:06:34,939 44k INFO Saving model and optimizer state at iteration 138 to ./logs\44k\G_100000.pth
2023-03-27 15:06:35,697 44k INFO Saving model and optimizer state at iteration 138 to ./logs\44k\D_100000.pth
2023-03-27 15:06:36,383 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_97000.pth
2023-03-27 15:06:36,412 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_97000.pth
2023-03-27 15:07:44,147 44k INFO Train Epoch: 138 [64%]
2023-03-27 15:07:44,147 44k INFO Losses: [2.4789953231811523, 2.4276041984558105, 9.278261184692383, 15.1497163772583, 1.1019043922424316], step: 100200, lr: 9.825283294050992e-05
2023-03-27 15:08:52,463 44k INFO Train Epoch: 138 [91%]
2023-03-27 15:08:52,464 44k INFO Losses: [2.441234827041626, 2.7757809162139893, 11.05512809753418, 15.480584144592285, 1.1469541788101196], step: 100400, lr: 9.825283294050992e-05
2023-03-27 15:09:14,112 44k INFO ====> Epoch: 138, cost 261.88 s
2023-03-27 15:10:09,856 44k INFO Train Epoch: 139 [19%]
2023-03-27 15:10:09,857 44k INFO Losses: [2.560626983642578, 2.223137378692627, 11.110403060913086, 15.485923767089844, 0.6847054958343506], step: 100600, lr: 9.824055133639235e-05
2023-03-27 15:11:17,679 44k INFO Train Epoch: 139 [46%]
2023-03-27 15:11:17,680 44k INFO Losses: [2.497349739074707, 2.347006320953369, 9.21784496307373, 16.132675170898438, 0.8796485662460327], step: 100800, lr: 9.824055133639235e-05
2023-03-27 15:12:26,031 44k INFO Train Epoch: 139 [74%]
2023-03-27 15:12:26,032 44k INFO Losses: [2.8209919929504395, 2.5743134021759033, 8.535096168518066, 14.310879707336426, 1.112034797668457], step: 101000, lr: 9.824055133639235e-05
2023-03-27 15:12:29,056 44k INFO Saving model and optimizer state at iteration 139 to ./logs\44k\G_101000.pth
2023-03-27 15:12:29,773 44k INFO Saving model and optimizer state at iteration 139 to ./logs\44k\D_101000.pth
2023-03-27 15:12:30,458 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_98000.pth
2023-03-27 15:12:30,486 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_98000.pth
2023-03-27 15:13:35,639 44k INFO ====> Epoch: 139, cost 261.53 s
2023-03-27 15:13:47,666 44k INFO Train Epoch: 140 [1%]
2023-03-27 15:13:47,666 44k INFO Losses: [2.3303937911987305, 2.327293872833252, 14.093713760375977, 18.407052993774414, 1.0323868989944458], step: 101200, lr: 9.822827126747529e-05
2023-03-27 15:14:56,030 44k INFO Train Epoch: 140 [29%]
2023-03-27 15:14:56,030 44k INFO Losses: [2.3175666332244873, 2.6379330158233643, 11.994147300720215, 16.47373390197754, 1.1077686548233032], step: 101400, lr: 9.822827126747529e-05
2023-03-27 15:16:05,596 44k INFO Train Epoch: 140 [56%]
2023-03-27 15:16:05,597 44k INFO Losses: [2.8513920307159424, 2.3540453910827637, 7.544042587280273, 14.959235191345215, 1.0039589405059814], step: 101600, lr: 9.822827126747529e-05
2023-03-27 15:17:14,223 44k INFO Train Epoch: 140 [84%]
2023-03-27 15:17:14,223 44k INFO Losses: [2.564058780670166, 2.176462173461914, 6.91490364074707, 10.18822193145752, 0.8111037015914917], step: 101800, lr: 9.822827126747529e-05
2023-03-27 15:17:55,257 44k INFO ====> Epoch: 140, cost 259.62 s
2023-03-27 15:18:37,652 44k INFO Train Epoch: 141 [11%]
2023-03-27 15:18:37,652 44k INFO Losses: [2.446866750717163, 2.268315315246582, 12.65282154083252, 17.996355056762695, 0.7340155243873596], step: 102000, lr: 9.821599273356685e-05
2023-03-27 15:18:40,654 44k INFO Saving model and optimizer state at iteration 141 to ./logs\44k\G_102000.pth
2023-03-27 15:18:41,393 44k INFO Saving model and optimizer state at iteration 141 to ./logs\44k\D_102000.pth
2023-03-27 15:18:42,060 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_99000.pth
2023-03-27 15:18:42,093 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_99000.pth
2023-03-27 15:19:50,707 44k INFO Train Epoch: 141 [38%]
2023-03-27 15:19:50,707 44k INFO Losses: [2.4789106845855713, 2.1786108016967773, 10.740713119506836, 16.614931106567383, 0.823322594165802], step: 102200, lr: 9.821599273356685e-05
2023-03-27 15:20:59,198 44k INFO Train Epoch: 141 [66%]
2023-03-27 15:20:59,199 44k INFO Losses: [2.7804951667785645, 1.963358759880066, 7.151495456695557, 14.601998329162598, 1.1938724517822266], step: 102400, lr: 9.821599273356685e-05
2023-03-27 15:22:07,884 44k INFO Train Epoch: 141 [93%]
2023-03-27 15:22:07,884 44k INFO Losses: [2.3062028884887695, 2.3293066024780273, 12.568824768066406, 19.241811752319336, 0.8996400833129883], step: 102600, lr: 9.821599273356685e-05
2023-03-27 15:22:24,140 44k INFO ====> Epoch: 141, cost 268.88 s
2023-03-27 15:23:25,666 44k INFO Train Epoch: 142 [21%]
2023-03-27 15:23:25,666 44k INFO Losses: [2.3966143131256104, 2.0955870151519775, 11.193705558776855, 13.73548412322998, 1.0542786121368408], step: 102800, lr: 9.820371573447515e-05
2023-03-27 15:24:33,651 44k INFO Train Epoch: 142 [48%]
2023-03-27 15:24:33,651 44k INFO Losses: [2.052743434906006, 2.8343052864074707, 12.61574649810791, 17.90175437927246, 0.8378251194953918], step: 103000, lr: 9.820371573447515e-05
2023-03-27 15:24:36,577 44k INFO Saving model and optimizer state at iteration 142 to ./logs\44k\G_103000.pth
2023-03-27 15:24:37,345 44k INFO Saving model and optimizer state at iteration 142 to ./logs\44k\D_103000.pth
2023-03-27 15:24:38,018 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_100000.pth
2023-03-27 15:24:38,051 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_100000.pth
2023-03-27 15:25:46,811 44k INFO Train Epoch: 142 [76%]
2023-03-27 15:25:46,812 44k INFO Losses: [2.4170782566070557, 2.393254280090332, 11.33652400970459, 16.777761459350586, 1.2143515348434448], step: 103200, lr: 9.820371573447515e-05
2023-03-27 15:26:46,916 44k INFO ====> Epoch: 142, cost 262.78 s
2023-03-27 15:27:04,326 44k INFO Train Epoch: 143 [3%]
2023-03-27 15:27:04,327 44k INFO Losses: [2.515658140182495, 1.994367003440857, 10.241500854492188, 15.472397804260254, 0.9583818316459656], step: 103400, lr: 9.819144027000834e-05
2023-03-27 15:28:13,081 44k INFO Train Epoch: 143 [31%]
2023-03-27 15:28:13,082 44k INFO Losses: [2.4173216819763184, 2.165848970413208, 15.156213760375977, 18.625574111938477, 0.8276651501655579], step: 103600, lr: 9.819144027000834e-05
2023-03-27 15:29:21,540 44k INFO Train Epoch: 143 [58%]
2023-03-27 15:29:21,540 44k INFO Losses: [2.2089574337005615, 2.2145512104034424, 14.092761039733887, 19.316118240356445, 1.0718265771865845], step: 103800, lr: 9.819144027000834e-05
2023-03-27 15:30:30,251 44k INFO Train Epoch: 143 [86%]
2023-03-27 15:30:30,252 44k INFO Losses: [2.476694107055664, 2.0918447971343994, 15.288888931274414, 16.877153396606445, 1.1814228296279907], step: 104000, lr: 9.819144027000834e-05
2023-03-27 15:30:33,244 44k INFO Saving model and optimizer state at iteration 143 to ./logs\44k\G_104000.pth
2023-03-27 15:30:33,997 44k INFO Saving model and optimizer state at iteration 143 to ./logs\44k\D_104000.pth
2023-03-27 15:30:34,660 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_101000.pth
2023-03-27 15:30:34,690 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_101000.pth
2023-03-27 15:31:09,894 44k INFO ====> Epoch: 143, cost 262.98 s
2023-03-27 15:31:51,751 44k INFO Train Epoch: 144 [13%]
2023-03-27 15:31:51,751 44k INFO Losses: [2.392612934112549, 2.335249900817871, 11.470584869384766, 18.331056594848633, 0.9374159574508667], step: 104200, lr: 9.817916633997459e-05
2023-03-27 15:33:00,503 44k INFO Train Epoch: 144 [41%]
2023-03-27 15:33:00,504 44k INFO Losses: [2.566739559173584, 2.281898260116577, 12.114194869995117, 15.239818572998047, 1.4907586574554443], step: 104400, lr: 9.817916633997459e-05
2023-03-27 15:34:09,344 44k INFO Train Epoch: 144 [68%]
2023-03-27 15:34:09,345 44k INFO Losses: [2.5958800315856934, 2.234344005584717, 10.780561447143555, 15.696884155273438, 1.2032078504562378], step: 104600, lr: 9.817916633997459e-05
2023-03-27 15:35:18,351 44k INFO Train Epoch: 144 [96%]
2023-03-27 15:35:18,352 44k INFO Losses: [2.5928730964660645, 2.2744438648223877, 10.834430694580078, 16.79573631286621, 1.4878917932510376], step: 104800, lr: 9.817916633997459e-05
2023-03-27 15:35:29,205 44k INFO ====> Epoch: 144, cost 259.31 s
2023-03-27 15:36:36,388 44k INFO Train Epoch: 145 [23%]
2023-03-27 15:36:36,388 44k INFO Losses: [2.281496524810791, 2.3641138076782227, 12.91144847869873, 16.09404945373535, 0.7579247355461121], step: 105000, lr: 9.816689394418209e-05
2023-03-27 15:36:39,334 44k INFO Saving model and optimizer state at iteration 145 to ./logs\44k\G_105000.pth
2023-03-27 15:36:40,097 44k INFO Saving model and optimizer state at iteration 145 to ./logs\44k\D_105000.pth
2023-03-27 15:36:40,783 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_102000.pth
2023-03-27 15:36:40,825 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_102000.pth
2023-03-27 15:37:48,824 44k INFO Train Epoch: 145 [51%]
2023-03-27 15:37:48,824 44k INFO Losses: [2.461190938949585, 2.4796411991119385, 12.19224739074707, 15.984720230102539, 1.1716458797454834], step: 105200, lr: 9.816689394418209e-05
2023-03-27 15:38:57,861 44k INFO Train Epoch: 145 [78%]
2023-03-27 15:38:57,861 44k INFO Losses: [2.281503200531006, 2.310351610183716, 11.135293006896973, 16.44614028930664, 0.9785498380661011], step: 105400, lr: 9.816689394418209e-05
2023-03-27 15:39:52,653 44k INFO ====> Epoch: 145, cost 263.45 s
2023-03-27 15:40:15,485 44k INFO Train Epoch: 146 [5%]
2023-03-27 15:40:15,486 44k INFO Losses: [2.411257266998291, 2.3170292377471924, 12.845120429992676, 18.278425216674805, 0.7444893717765808], step: 105600, lr: 9.815462308243906e-05
2023-03-27 15:41:24,440 44k INFO Train Epoch: 146 [33%]
2023-03-27 15:41:24,440 44k INFO Losses: [2.1618168354034424, 2.4192562103271484, 14.243244171142578, 18.255889892578125, 1.2335437536239624], step: 105800, lr: 9.815462308243906e-05
2023-03-27 15:42:33,042 44k INFO Train Epoch: 146 [60%]
2023-03-27 15:42:33,042 44k INFO Losses: [2.6400723457336426, 2.123934507369995, 11.498104095458984, 19.26065444946289, 1.118171215057373], step: 106000, lr: 9.815462308243906e-05
2023-03-27 15:42:36,022 44k INFO Saving model and optimizer state at iteration 146 to ./logs\44k\G_106000.pth
2023-03-27 15:42:36,780 44k INFO Saving model and optimizer state at iteration 146 to ./logs\44k\D_106000.pth
2023-03-27 15:42:37,457 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_103000.pth
2023-03-27 15:42:37,501 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_103000.pth
2023-03-27 15:43:46,133 44k INFO Train Epoch: 146 [88%]
2023-03-27 15:43:46,134 44k INFO Losses: [2.430358648300171, 2.7110471725463867, 15.19665241241455, 18.553945541381836, 0.9975407719612122], step: 106200, lr: 9.815462308243906e-05
2023-03-27 15:44:16,278 44k INFO ====> Epoch: 146, cost 263.62 s
2023-03-27 15:45:03,883 44k INFO Train Epoch: 147 [15%]
2023-03-27 15:45:03,883 44k INFO Losses: [2.231302261352539, 2.4412341117858887, 11.63550090789795, 15.827920913696289, 1.0656617879867554], step: 106400, lr: 9.814235375455375e-05
2023-03-27 15:46:12,483 44k INFO Train Epoch: 147 [43%]
2023-03-27 15:46:12,484 44k INFO Losses: [2.1474103927612305, 2.816983222961426, 15.528166770935059, 18.543962478637695, 1.0800002813339233], step: 106600, lr: 9.814235375455375e-05
2023-03-27 15:47:21,358 44k INFO Train Epoch: 147 [70%]
2023-03-27 15:47:21,359 44k INFO Losses: [2.484678030014038, 2.356426477432251, 12.3342924118042, 18.855270385742188, 0.8206634521484375], step: 106800, lr: 9.814235375455375e-05
2023-03-27 15:48:30,029 44k INFO Train Epoch: 147 [98%]
2023-03-27 15:48:30,029 44k INFO Losses: [2.53023362159729, 2.1864187717437744, 4.323704719543457, 7.623577117919922, 0.48531046509742737], step: 107000, lr: 9.814235375455375e-05
2023-03-27 15:48:33,053 44k INFO Saving model and optimizer state at iteration 147 to ./logs\44k\G_107000.pth
2023-03-27 15:48:33,823 44k INFO Saving model and optimizer state at iteration 147 to ./logs\44k\D_107000.pth
2023-03-27 15:48:34,510 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_104000.pth
2023-03-27 15:48:34,554 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_104000.pth
2023-03-27 15:48:39,997 44k INFO ====> Epoch: 147, cost 263.72 s
2023-03-27 15:49:52,550 44k INFO Train Epoch: 148 [25%]
2023-03-27 15:49:52,551 44k INFO Losses: [2.511340379714966, 2.3214828968048096, 9.588933944702148, 13.52132511138916, 1.171046495437622], step: 107200, lr: 9.813008596033443e-05
2023-03-27 15:51:01,116 44k INFO Train Epoch: 148 [53%]
2023-03-27 15:51:01,116 44k INFO Losses: [2.279045581817627, 2.453948736190796, 12.130851745605469, 18.342464447021484, 1.2860060930252075], step: 107400, lr: 9.813008596033443e-05
2023-03-27 15:52:10,248 44k INFO Train Epoch: 148 [80%]
2023-03-27 15:52:10,248 44k INFO Losses: [2.652639389038086, 2.0672097206115723, 10.794867515563965, 16.171913146972656, 1.1010161638259888], step: 107600, lr: 9.813008596033443e-05
2023-03-27 15:52:59,819 44k INFO ====> Epoch: 148, cost 259.82 s
2023-03-27 15:53:28,053 44k INFO Train Epoch: 149 [8%]
2023-03-27 15:53:28,053 44k INFO Losses: [2.4835262298583984, 2.5635528564453125, 10.919281005859375, 16.072265625, 1.2529665231704712], step: 107800, lr: 9.811781969958938e-05
2023-03-27 15:54:37,197 44k INFO Train Epoch: 149 [35%]
2023-03-27 15:54:37,197 44k INFO Losses: [2.482320547103882, 2.376978635787964, 12.63044261932373, 16.570560455322266, 0.9747631549835205], step: 108000, lr: 9.811781969958938e-05
2023-03-27 15:54:40,169 44k INFO Saving model and optimizer state at iteration 149 to ./logs\44k\G_108000.pth
2023-03-27 15:54:40,887 44k INFO Saving model and optimizer state at iteration 149 to ./logs\44k\D_108000.pth
2023-03-27 15:54:41,580 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_105000.pth
2023-03-27 15:54:41,624 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_105000.pth
2023-03-27 15:55:50,349 44k INFO Train Epoch: 149 [63%]
2023-03-27 15:55:50,349 44k INFO Losses: [2.673086643218994, 2.208462715148926, 9.384316444396973, 12.75721549987793, 1.1423656940460205], step: 108200, lr: 9.811781969958938e-05
2023-03-27 15:56:59,407 44k INFO Train Epoch: 149 [90%]
2023-03-27 15:56:59,407 44k INFO Losses: [2.495377540588379, 2.3748793601989746, 9.517926216125488, 16.50485610961914, 1.0121296644210815], step: 108400, lr: 9.811781969958938e-05
2023-03-27 15:57:24,059 44k INFO ====> Epoch: 149, cost 264.24 s
2023-03-27 15:58:17,302 44k INFO Train Epoch: 150 [18%]
2023-03-27 15:58:17,302 44k INFO Losses: [2.744767904281616, 2.101043939590454, 11.088508605957031, 13.527839660644531, 0.35443115234375], step: 108600, lr: 9.810555497212693e-05
2023-03-27 15:59:25,964 44k INFO Train Epoch: 150 [45%]
2023-03-27 15:59:25,964 44k INFO Losses: [2.511256694793701, 2.2848715782165527, 8.688575744628906, 16.163558959960938, 0.8986338973045349], step: 108800, lr: 9.810555497212693e-05
2023-03-27 16:00:35,131 44k INFO Train Epoch: 150 [73%]
2023-03-27 16:00:35,131 44k INFO Losses: [2.196725845336914, 2.4140071868896484, 14.374187469482422, 18.607765197753906, 0.956157386302948], step: 109000, lr: 9.810555497212693e-05
2023-03-27 16:00:38,133 44k INFO Saving model and optimizer state at iteration 150 to ./logs\44k\G_109000.pth
2023-03-27 16:00:38,890 44k INFO Saving model and optimizer state at iteration 150 to ./logs\44k\D_109000.pth
2023-03-27 16:00:39,580 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_106000.pth
2023-03-27 16:00:39,618 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_106000.pth
2023-03-27 16:01:48,179 44k INFO ====> Epoch: 150, cost 264.12 s
2023-03-27 16:01:57,278 44k INFO Train Epoch: 151 [0%]
2023-03-27 16:01:57,278 44k INFO Losses: [2.3804078102111816, 2.445009469985962, 11.998247146606445, 17.334800720214844, 1.0470854043960571], step: 109200, lr: 9.809329177775541e-05
2023-03-27 16:03:06,507 44k INFO Train Epoch: 151 [27%]
2023-03-27 16:03:06,507 44k INFO Losses: [2.382734775543213, 2.3333182334899902, 13.34786605834961, 15.998337745666504, 0.9428315758705139], step: 109400, lr: 9.809329177775541e-05
2023-03-27 16:04:15,137 44k INFO Train Epoch: 151 [55%]
2023-03-27 16:04:15,138 44k INFO Losses: [2.3502371311187744, 2.7872936725616455, 9.348569869995117, 12.911959648132324, 0.7766223549842834], step: 109600, lr: 9.809329177775541e-05
2023-03-27 16:05:24,284 44k INFO Train Epoch: 151 [82%]
2023-03-27 16:05:24,284 44k INFO Losses: [2.2946367263793945, 2.346892833709717, 10.290151596069336, 15.537335395812988, 0.8673840165138245], step: 109800, lr: 9.809329177775541e-05
2023-03-27 16:06:08,306 44k INFO ====> Epoch: 151, cost 260.13 s
2023-03-27 16:06:42,209 44k INFO Train Epoch: 152 [10%]
2023-03-27 16:06:42,209 44k INFO Losses: [2.5822649002075195, 1.9618663787841797, 7.820509910583496, 13.877532005310059, 0.757935643196106], step: 110000, lr: 9.808103011628319e-05
2023-03-27 16:06:45,204 44k INFO Saving model and optimizer state at iteration 152 to ./logs\44k\G_110000.pth
2023-03-27 16:06:45,917 44k INFO Saving model and optimizer state at iteration 152 to ./logs\44k\D_110000.pth
2023-03-27 16:06:46,608 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_107000.pth
2023-03-27 16:06:46,651 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_107000.pth
2023-03-27 16:07:55,760 44k INFO Train Epoch: 152 [37%]
2023-03-27 16:07:55,761 44k INFO Losses: [2.284034490585327, 2.4276137351989746, 12.943780899047852, 18.83637809753418, 1.0574368238449097], step: 110200, lr: 9.808103011628319e-05
2023-03-27 16:09:04,688 44k INFO Train Epoch: 152 [65%]
2023-03-27 16:09:04,688 44k INFO Losses: [2.3385541439056396, 2.517451047897339, 12.560903549194336, 16.122268676757812, 1.0021511316299438], step: 110400, lr: 9.808103011628319e-05
2023-03-27 16:10:14,022 44k INFO Train Epoch: 152 [92%]
2023-03-27 16:10:14,022 44k INFO Losses: [2.5504608154296875, 2.594095230102539, 12.882997512817383, 18.916473388671875, 0.8530874252319336], step: 110600, lr: 9.808103011628319e-05
2023-03-27 16:10:33,129 44k INFO ====> Epoch: 152, cost 264.82 s
2023-03-27 16:11:32,114 44k INFO Train Epoch: 153 [20%]
2023-03-27 16:11:32,115 44k INFO Losses: [2.6256964206695557, 2.437201499938965, 8.929479598999023, 12.713617324829102, 1.3047256469726562], step: 110800, lr: 9.806876998751865e-05
2023-03-27 16:12:40,688 44k INFO Train Epoch: 153 [47%]
2023-03-27 16:12:40,688 44k INFO Losses: [2.6090869903564453, 2.3571605682373047, 10.890995025634766, 16.272247314453125, 0.6007969975471497], step: 111000, lr: 9.806876998751865e-05
2023-03-27 16:12:43,682 44k INFO Saving model and optimizer state at iteration 153 to ./logs\44k\G_111000.pth
2023-03-27 16:12:44,394 44k INFO Saving model and optimizer state at iteration 153 to ./logs\44k\D_111000.pth
2023-03-27 16:12:45,073 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_108000.pth
2023-03-27 16:12:45,113 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_108000.pth
2023-03-27 16:13:54,232 44k INFO Train Epoch: 153 [75%]
2023-03-27 16:13:54,232 44k INFO Losses: [2.475864887237549, 2.4089221954345703, 12.019197463989258, 15.507795333862305, 0.8701485991477966], step: 111200, lr: 9.806876998751865e-05
2023-03-27 16:14:57,496 44k INFO ====> Epoch: 153, cost 264.37 s
2023-03-27 16:15:12,183 44k INFO Train Epoch: 154 [2%]
2023-03-27 16:15:12,183 44k INFO Losses: [2.1250665187835693, 2.6811563968658447, 14.211736679077148, 18.996070861816406, 1.074386477470398], step: 111400, lr: 9.80565113912702e-05
2023-03-27 16:16:21,909 44k INFO Train Epoch: 154 [30%]
2023-03-27 16:16:21,909 44k INFO Losses: [2.4001052379608154, 2.467453718185425, 12.708344459533691, 17.890933990478516, 1.5374265909194946], step: 111600, lr: 9.80565113912702e-05
2023-03-27 16:17:30,880 44k INFO Train Epoch: 154 [57%]
2023-03-27 16:17:30,880 44k INFO Losses: [2.468012809753418, 2.386470079421997, 13.21467399597168, 18.877986907958984, 0.5791723132133484], step: 111800, lr: 9.80565113912702e-05
2023-03-27 16:18:39,755 44k INFO Train Epoch: 154 [85%]
2023-03-27 16:18:39,756 44k INFO Losses: [2.5672690868377686, 2.1408092975616455, 7.5616912841796875, 14.040592193603516, 0.9744880199432373], step: 112000, lr: 9.80565113912702e-05
2023-03-27 16:18:42,851 44k INFO Saving model and optimizer state at iteration 154 to ./logs\44k\G_112000.pth
2023-03-27 16:18:43,568 44k INFO Saving model and optimizer state at iteration 154 to ./logs\44k\D_112000.pth
2023-03-27 16:18:44,237 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_109000.pth
2023-03-27 16:18:44,276 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_109000.pth
2023-03-27 16:19:22,435 44k INFO ====> Epoch: 154, cost 264.94 s
2023-03-27 16:20:01,864 44k INFO Train Epoch: 155 [12%]
2023-03-27 16:20:01,864 44k INFO Losses: [2.433595657348633, 2.0929839611053467, 12.648330688476562, 17.326133728027344, 1.195742130279541], step: 112200, lr: 9.804425432734629e-05
2023-03-27 16:21:10,716 44k INFO Train Epoch: 155 [40%]
2023-03-27 16:21:10,716 44k INFO Losses: [2.6927924156188965, 1.9279417991638184, 9.105682373046875, 12.121459007263184, 0.6283277273178101], step: 112400, lr: 9.804425432734629e-05
2023-03-27 16:22:19,450 44k INFO Train Epoch: 155 [67%]
2023-03-27 16:22:19,451 44k INFO Losses: [2.8824753761291504, 2.0762221813201904, 7.149835109710693, 12.536945343017578, 1.1095268726348877], step: 112600, lr: 9.804425432734629e-05
2023-03-27 16:23:28,418 44k INFO Train Epoch: 155 [95%]
2023-03-27 16:23:28,419 44k INFO Losses: [2.5198771953582764, 2.203423023223877, 9.960841178894043, 16.504091262817383, 1.2638556957244873], step: 112800, lr: 9.804425432734629e-05
2023-03-27 16:23:41,928 44k INFO ====> Epoch: 155, cost 259.49 s
2023-03-27 16:24:46,591 44k INFO Train Epoch: 156 [22%]
2023-03-27 16:24:46,591 44k INFO Losses: [2.4950125217437744, 2.4631261825561523, 9.943055152893066, 13.437765121459961, 0.5246902704238892], step: 113000, lr: 9.803199879555537e-05
2023-03-27 16:24:49,541 44k INFO Saving model and optimizer state at iteration 156 to ./logs\44k\G_113000.pth
2023-03-27 16:24:50,245 44k INFO Saving model and optimizer state at iteration 156 to ./logs\44k\D_113000.pth
2023-03-27 16:24:50,916 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_110000.pth
2023-03-27 16:24:50,953 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_110000.pth
2023-03-27 16:25:59,167 44k INFO Train Epoch: 156 [49%]
2023-03-27 16:25:59,167 44k INFO Losses: [2.3707869052886963, 2.3826370239257812, 13.220686912536621, 18.130834579467773, 1.0481154918670654], step: 113200, lr: 9.803199879555537e-05
2023-03-27 16:27:08,285 44k INFO Train Epoch: 156 [77%]
2023-03-27 16:27:08,285 44k INFO Losses: [2.701129674911499, 2.4030613899230957, 7.959865093231201, 16.456981658935547, 0.9539077281951904], step: 113400, lr: 9.803199879555537e-05
2023-03-27 16:28:05,825 44k INFO ====> Epoch: 156, cost 263.90 s
2023-03-27 16:28:26,158 44k INFO Train Epoch: 157 [4%]
2023-03-27 16:28:26,159 44k INFO Losses: [2.5178310871124268, 2.3295929431915283, 11.983153343200684, 18.58550453186035, 0.8681645393371582], step: 113600, lr: 9.801974479570593e-05
2023-03-27 16:29:35,225 44k INFO Train Epoch: 157 [32%]
2023-03-27 16:29:35,226 44k INFO Losses: [2.3837788105010986, 2.467172145843506, 11.634822845458984, 12.780050277709961, 1.1831272840499878], step: 113800, lr: 9.801974479570593e-05
2023-03-27 16:30:43,911 44k INFO Train Epoch: 157 [59%]
2023-03-27 16:30:43,912 44k INFO Losses: [2.3638339042663574, 2.5710644721984863, 10.028963088989258, 13.999186515808105, 1.0320866107940674], step: 114000, lr: 9.801974479570593e-05
2023-03-27 16:30:46,839 44k INFO Saving model and optimizer state at iteration 157 to ./logs\44k\G_114000.pth
2023-03-27 16:30:47,544 44k INFO Saving model and optimizer state at iteration 157 to ./logs\44k\D_114000.pth
2023-03-27 16:30:48,217 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_111000.pth
2023-03-27 16:30:48,253 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_108000.pth 2023-03-27 16:13:54,232 44k INFO Train Epoch: 153 [75%] 2023-03-27 16:13:54,232 44k INFO Losses: [2.475864887237549, 2.4089221954345703, 12.019197463989258, 15.507795333862305, 0.8701485991477966], step: 111200, lr: 9.806876998751865e-05 2023-03-27 16:14:57,496 44k INFO ====> Epoch: 153, cost 264.37 s 2023-03-27 16:15:12,183 44k INFO Train Epoch: 154 [2%] 2023-03-27 16:15:12,183 44k INFO Losses: [2.1250665187835693, 2.6811563968658447, 14.211736679077148, 18.996070861816406, 1.074386477470398], step: 111400, lr: 9.80565113912702e-05 2023-03-27 16:16:21,909 44k INFO Train Epoch: 154 [30%] 2023-03-27 16:16:21,909 44k INFO Losses: [2.4001052379608154, 2.467453718185425, 12.708344459533691, 17.890933990478516, 1.5374265909194946], step: 111600, lr: 9.80565113912702e-05 2023-03-27 16:17:30,880 44k INFO Train Epoch: 154 [57%] 2023-03-27 16:17:30,880 44k INFO Losses: [2.468012809753418, 2.386470079421997, 13.21467399597168, 18.877986907958984, 0.5791723132133484], step: 111800, lr: 9.80565113912702e-05 2023-03-27 16:18:39,755 44k INFO Train Epoch: 154 [85%] 2023-03-27 16:18:39,756 44k INFO Losses: [2.5672690868377686, 2.1408092975616455, 7.5616912841796875, 14.040592193603516, 0.9744880199432373], step: 112000, lr: 9.80565113912702e-05 2023-03-27 16:18:42,851 44k INFO Saving model and optimizer state at iteration 154 to ./logs\44k\G_112000.pth 2023-03-27 16:18:43,568 44k INFO Saving model and optimizer state at iteration 154 to ./logs\44k\D_112000.pth 2023-03-27 16:18:44,237 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_109000.pth 2023-03-27 16:18:44,276 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_109000.pth 2023-03-27 16:19:22,435 44k INFO ====> Epoch: 154, cost 264.94 s 2023-03-27 16:20:01,864 44k INFO Train Epoch: 155 [12%] 2023-03-27 16:20:01,864 44k INFO Losses: [2.433595657348633, 2.0929839611053467, 12.648330688476562, 17.326133728027344, 1.195742130279541], step: 112200, lr: 9.804425432734629e-05 2023-03-27 16:21:10,716 44k INFO Train Epoch: 155 [40%] 2023-03-27 16:21:10,716 44k INFO Losses: [2.6927924156188965, 1.9279417991638184, 9.105682373046875, 12.121459007263184, 0.6283277273178101], step: 112400, lr: 9.804425432734629e-05 2023-03-27 16:22:19,450 44k INFO Train Epoch: 155 [67%] 2023-03-27 16:22:19,451 44k INFO Losses: [2.8824753761291504, 2.0762221813201904, 7.149835109710693, 12.536945343017578, 1.1095268726348877], step: 112600, lr: 9.804425432734629e-05 2023-03-27 16:23:28,418 44k INFO Train Epoch: 155 [95%] 2023-03-27 16:23:28,419 44k INFO Losses: [2.5198771953582764, 2.203423023223877, 9.960841178894043, 16.504091262817383, 1.2638556957244873], step: 112800, lr: 9.804425432734629e-05 2023-03-27 16:23:41,928 44k INFO ====> Epoch: 155, cost 259.49 s 2023-03-27 16:24:46,591 44k INFO Train Epoch: 156 [22%] 2023-03-27 16:24:46,591 44k INFO Losses: [2.4950125217437744, 2.4631261825561523, 9.943055152893066, 13.437765121459961, 0.5246902704238892], step: 113000, lr: 9.803199879555537e-05 2023-03-27 16:24:49,541 44k INFO Saving model and optimizer state at iteration 156 to ./logs\44k\G_113000.pth 2023-03-27 16:24:50,245 44k INFO Saving model and optimizer state at iteration 156 to ./logs\44k\D_113000.pth 2023-03-27 16:24:50,916 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_110000.pth 2023-03-27 16:24:50,953 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_110000.pth 2023-03-27 16:25:59,167 44k INFO Train Epoch: 156 [49%] 2023-03-27 16:25:59,167 44k INFO Losses: [2.3707869052886963, 2.3826370239257812, 13.220686912536621, 18.130834579467773, 1.0481154918670654], step: 113200, lr: 9.803199879555537e-05 2023-03-27 16:27:08,285 44k INFO Train Epoch: 156 [77%] 2023-03-27 16:27:08,285 44k INFO Losses: [2.701129674911499, 2.4030613899230957, 7.959865093231201, 16.456981658935547, 0.9539077281951904], step: 113400, lr: 9.803199879555537e-05 2023-03-27 16:28:05,825 44k INFO ====> Epoch: 156, cost 263.90 s 2023-03-27 16:28:26,158 44k INFO Train Epoch: 157 [4%] 2023-03-27 16:28:26,159 44k INFO Losses: [2.5178310871124268, 2.3295929431915283, 11.983153343200684, 18.58550453186035, 0.8681645393371582], step: 113600, lr: 9.801974479570593e-05 2023-03-27 16:29:35,225 44k INFO Train Epoch: 157 [32%] 2023-03-27 16:29:35,226 44k INFO Losses: [2.3837788105010986, 2.467172145843506, 11.634822845458984, 12.780050277709961, 1.1831272840499878], step: 113800, lr: 9.801974479570593e-05 2023-03-27 16:30:43,911 44k INFO Train Epoch: 157 [59%] 2023-03-27 16:30:43,912 44k INFO Losses: [2.3638339042663574, 2.5710644721984863, 10.028963088989258, 13.999186515808105, 1.0320866107940674], step: 114000, lr: 9.801974479570593e-05 2023-03-27 16:30:46,839 44k INFO Saving model and optimizer state at iteration 157 to ./logs\44k\G_114000.pth 2023-03-27 16:30:47,544 44k INFO Saving model and optimizer state at iteration 157 to ./logs\44k\D_114000.pth 2023-03-27 16:30:48,217 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_111000.pth 2023-03-27 16:30:48,253 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_111000.pth 2023-03-27 16:31:57,184 44k INFO Train Epoch: 157 [87%] 2023-03-27 16:31:57,184 44k INFO Losses: [2.334481716156006, 2.4236536026000977, 11.641983032226562, 18.942459106445312, 0.8031088709831238], step: 114200, lr: 9.801974479570593e-05 2023-03-27 16:32:29,958 44k INFO ====> Epoch: 157, cost 264.13 s 2023-03-27 16:33:14,935 44k INFO Train Epoch: 158 [14%] 2023-03-27 16:33:14,936 44k INFO Losses: [2.3327016830444336, 2.3628854751586914, 12.163458824157715, 18.25168228149414, 1.5844008922576904], step: 114400, lr: 9.800749232760646e-05 2023-03-27 16:34:23,584 44k INFO Train Epoch: 158 [42%] 2023-03-27 16:34:23,585 44k INFO Losses: [2.3533105850219727, 2.4780383110046387, 11.684494018554688, 15.262700080871582, 0.9038111567497253], step: 114600, lr: 9.800749232760646e-05 2023-03-27 16:35:32,692 44k INFO Train Epoch: 158 [69%] 2023-03-27 16:35:32,692 44k INFO Losses: [2.232180595397949, 2.5337305068969727, 11.66378402709961, 13.831449508666992, 0.8648684620857239], step: 114800, lr: 9.800749232760646e-05 2023-03-27 16:36:41,561 44k INFO Train Epoch: 158 [97%] 2023-03-27 16:36:41,561 44k INFO Losses: [2.544600248336792, 2.2293753623962402, 12.649467468261719, 14.502257347106934, 1.4933068752288818], step: 115000, lr: 9.800749232760646e-05 2023-03-27 16:36:44,489 44k INFO Saving model and optimizer state at iteration 158 to ./logs\44k\G_115000.pth 2023-03-27 16:36:45,203 44k INFO Saving model and optimizer state at iteration 158 to ./logs\44k\D_115000.pth 2023-03-27 16:36:45,876 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_112000.pth 2023-03-27 16:36:45,904 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_112000.pth 2023-03-27 16:36:53,960 44k INFO ====> Epoch: 158, cost 264.00 s 2023-03-27 16:38:04,179 44k INFO Train Epoch: 159 [24%] 2023-03-27 16:38:04,179 44k INFO Losses: [2.6496686935424805, 2.256444215774536, 5.808878421783447, 11.181488990783691, 1.1807869672775269], step: 115200, lr: 9.79952413910655e-05 2023-03-27 16:39:12,574 44k INFO Train Epoch: 159 [52%] 2023-03-27 16:39:12,574 44k INFO Losses: [2.5296685695648193, 2.221635341644287, 10.030228614807129, 15.825188636779785, 1.0115402936935425], step: 115400, lr: 9.79952413910655e-05 2023-03-27 16:40:21,720 44k INFO Train Epoch: 159 [79%] 2023-03-27 16:40:21,720 44k INFO Losses: [2.2269299030303955, 2.6199495792388916, 14.676745414733887, 19.993534088134766, 1.6519849300384521], step: 115600, lr: 9.79952413910655e-05 2023-03-27 16:41:13,774 44k INFO ====> Epoch: 159, cost 259.81 s 2023-03-27 16:41:39,719 44k INFO Train Epoch: 160 [7%] 2023-03-27 16:41:39,719 44k INFO Losses: [2.3218743801116943, 2.6530637741088867, 12.838098526000977, 16.33716583251953, 1.3555442094802856], step: 115800, lr: 9.798299198589162e-05 2023-03-27 16:42:48,860 44k INFO Train Epoch: 160 [34%] 2023-03-27 16:42:48,861 44k INFO Losses: [2.3468191623687744, 2.6511406898498535, 9.792174339294434, 14.329109191894531, 1.328987956047058], step: 116000, lr: 9.798299198589162e-05 2023-03-27 16:42:51,937 44k INFO Saving model and optimizer state at iteration 160 to ./logs\44k\G_116000.pth 2023-03-27 16:42:52,647 44k INFO Saving model and optimizer state at iteration 160 to ./logs\44k\D_116000.pth 2023-03-27 16:42:53,323 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_113000.pth 2023-03-27 16:42:53,351 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_113000.pth 2023-03-27 16:44:02,170 44k INFO Train Epoch: 160 [62%] 2023-03-27 16:44:02,170 44k INFO Losses: [2.4162545204162598, 2.0930349826812744, 6.160330295562744, 12.295461654663086, 1.1804018020629883], step: 116200, lr: 9.798299198589162e-05 2023-03-27 16:45:11,144 44k INFO Train Epoch: 160 [89%] 2023-03-27 16:45:11,144 44k INFO Losses: [2.652506113052368, 2.3407464027404785, 10.284538269042969, 17.62289047241211, 1.325880527496338], step: 116400, lr: 9.798299198589162e-05 2023-03-27 16:45:38,499 44k INFO ====> Epoch: 160, cost 264.72 s 2023-03-27 16:46:29,104 44k INFO Train Epoch: 161 [16%] 2023-03-27 16:46:29,104 44k INFO Losses: [2.3178303241729736, 2.429457664489746, 13.014402389526367, 19.14322280883789, 1.0692616701126099], step: 116600, lr: 9.797074411189339e-05 2023-03-27 16:47:37,882 44k INFO Train Epoch: 161 [44%] 2023-03-27 16:47:37,882 44k INFO Losses: [2.773195266723633, 2.3085920810699463, 12.912271499633789, 18.421642303466797, 1.0790684223175049], step: 116800, lr: 9.797074411189339e-05 2023-03-27 16:48:46,925 44k INFO Train Epoch: 161 [71%] 2023-03-27 16:48:46,926 44k INFO Losses: [2.4024434089660645, 2.383526086807251, 13.659016609191895, 19.74610710144043, 0.8940369486808777], step: 117000, lr: 9.797074411189339e-05 2023-03-27 16:48:50,016 44k INFO Saving model and optimizer state at iteration 161 to ./logs\44k\G_117000.pth 2023-03-27 16:48:50,725 44k INFO Saving model and optimizer state at iteration 161 to ./logs\44k\D_117000.pth 2023-03-27 16:48:51,410 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_114000.pth 2023-03-27 16:48:51,438 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_114000.pth 2023-03-27 16:50:00,421 44k INFO Train Epoch: 161 [99%] 2023-03-27 16:50:00,422 44k INFO Losses: [2.2828264236450195, 2.372591733932495, 11.03242015838623, 11.36934757232666, 1.0112143754959106], step: 117200, lr: 9.797074411189339e-05 2023-03-27 16:50:03,284 44k INFO ====> Epoch: 161, cost 264.79 s 2023-03-27 16:51:18,979 44k INFO Train Epoch: 162 [26%] 2023-03-27 16:51:18,979 44k INFO Losses: [2.499332904815674, 2.1111326217651367, 13.488838195800781, 18.210800170898438, 1.5277645587921143], step: 117400, lr: 9.795849776887939e-05 2023-03-27 16:52:27,693 44k INFO Train Epoch: 162 [54%] 2023-03-27 16:52:27,693 44k INFO Losses: [2.451474189758301, 2.0135891437530518, 9.861040115356445, 12.613106727600098, 0.9305912256240845], step: 117600, lr: 9.795849776887939e-05 2023-03-27 16:53:36,760 44k INFO Train Epoch: 162 [81%] 2023-03-27 16:53:36,760 44k INFO Losses: [2.6232969760894775, 2.117617130279541, 7.409360885620117, 14.254253387451172, 0.4483109414577484], step: 117800, lr: 9.795849776887939e-05 2023-03-27 16:54:23,574 44k INFO ====> Epoch: 162, cost 260.29 s 2023-03-27 16:54:55,008 44k INFO Train Epoch: 163 [9%] 2023-03-27 16:54:55,009 44k INFO Losses: [2.5209648609161377, 1.8980255126953125, 6.665624141693115, 13.864020347595215, 0.6684970855712891], step: 118000, lr: 9.794625295665828e-05 2023-03-27 16:54:58,046 44k INFO Saving model and optimizer state at iteration 163 to ./logs\44k\G_118000.pth 2023-03-27 16:54:58,805 44k INFO Saving model and optimizer state at iteration 163 to ./logs\44k\D_118000.pth 2023-03-27 16:54:59,485 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_115000.pth 2023-03-27 16:54:59,514 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_115000.pth 2023-03-27 16:56:08,721 44k INFO Train Epoch: 163 [36%] 2023-03-27 16:56:08,722 44k INFO Losses: [2.4563117027282715, 2.3114709854125977, 11.963663101196289, 13.152925491333008, 0.716852605342865], step: 118200, lr: 9.794625295665828e-05 2023-03-27 16:57:17,623 44k INFO Train Epoch: 163 [64%] 2023-03-27 16:57:17,624 44k INFO Losses: [2.5046780109405518, 2.3791632652282715, 12.391881942749023, 18.84956932067871, 1.1806079149246216], step: 118400, lr: 9.794625295665828e-05 2023-03-27 16:58:26,959 44k INFO Train Epoch: 163 [91%] 2023-03-27 16:58:26,960 44k INFO Losses: [2.334045886993408, 2.466439962387085, 10.11211109161377, 13.395720481872559, 1.1789485216140747], step: 118600, lr: 9.794625295665828e-05 2023-03-27 16:58:48,799 44k INFO ====> Epoch: 163, cost 265.22 s 2023-03-27 16:59:45,232 44k INFO Train Epoch: 164 [19%] 2023-03-27 16:59:45,232 44k INFO Losses: [2.433868169784546, 2.0612871646881104, 12.54976749420166, 15.925246238708496, 1.0242600440979004], step: 118800, lr: 9.79340096750387e-05 2023-03-27 17:00:54,035 44k INFO Train Epoch: 164 [46%] 2023-03-27 17:00:54,036 44k INFO Losses: [2.255990743637085, 2.285569906234741, 10.534886360168457, 16.137237548828125, 1.1109412908554077], step: 119000, lr: 9.79340096750387e-05 2023-03-27 17:00:57,006 44k INFO Saving model and optimizer state at iteration 164 to ./logs\44k\G_119000.pth 2023-03-27 17:00:57,775 44k INFO Saving model and optimizer state at iteration 164 to ./logs\44k\D_119000.pth 2023-03-27 17:00:58,453 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_116000.pth 2023-03-27 17:00:58,482 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_116000.pth 2023-03-27 17:02:07,689 44k INFO Train Epoch: 164 [74%] 2023-03-27 17:02:07,689 44k INFO Losses: [2.60512638092041, 2.3231475353240967, 7.133685111999512, 12.931039810180664, 1.1963117122650146], step: 119200, lr: 9.79340096750387e-05 2023-03-27 17:03:13,880 44k INFO ====> Epoch: 164, cost 265.08 s 2023-03-27 17:03:26,038 44k INFO Train Epoch: 165 [1%] 2023-03-27 17:03:26,038 44k INFO Losses: [2.392080307006836, 2.212521553039551, 9.963323593139648, 17.81024169921875, 1.3723399639129639], step: 119400, lr: 9.792176792382932e-05 2023-03-27 17:04:35,328 44k INFO Train Epoch: 165 [29%] 2023-03-27 17:04:35,328 44k INFO Losses: [2.3926830291748047, 2.582815647125244, 8.388181686401367, 14.656987190246582, 1.0874896049499512], step: 119600, lr: 9.792176792382932e-05 2023-03-27 17:05:44,506 44k INFO Train Epoch: 165 [56%] 2023-03-27 17:05:44,507 44k INFO Losses: [2.382122039794922, 2.638789415359497, 12.865196228027344, 17.069091796875, 1.0855484008789062], step: 119800, lr: 9.792176792382932e-05 2023-03-27 17:06:53,806 44k INFO Train Epoch: 165 [84%] 2023-03-27 17:06:53,807 44k INFO Losses: [2.640019178390503, 2.1623170375823975, 8.032410621643066, 9.617674827575684, 0.7818123698234558], step: 120000, lr: 9.792176792382932e-05 2023-03-27 17:06:56,861 44k INFO Saving model and optimizer state at iteration 165 to ./logs\44k\G_120000.pth 2023-03-27 17:06:57,648 44k INFO Saving model and optimizer state at iteration 165 to ./logs\44k\D_120000.pth 2023-03-27 17:06:58,319 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_117000.pth 2023-03-27 17:06:58,348 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_117000.pth 2023-03-27 17:07:39,670 44k INFO ====> Epoch: 165, cost 265.79 s 2023-03-27 17:08:16,693 44k INFO Train Epoch: 166 [11%] 2023-03-27 17:08:16,693 44k INFO Losses: [2.2795042991638184, 2.483602523803711, 11.202448844909668, 18.18661880493164, 1.1602277755737305], step: 120200, lr: 9.790952770283884e-05 2023-03-27 17:09:26,078 44k INFO Train Epoch: 166 [38%] 2023-03-27 17:09:26,079 44k INFO Losses: [2.281266212463379, 2.461491346359253, 10.251667022705078, 16.385879516601562, 0.7740108966827393], step: 120400, lr: 9.790952770283884e-05 2023-03-27 17:10:35,300 44k INFO Train Epoch: 166 [66%] 2023-03-27 17:10:35,300 44k INFO Losses: [2.5046725273132324, 2.3374907970428467, 9.342535972595215, 15.615907669067383, 0.7310588359832764], step: 120600, lr: 9.790952770283884e-05 2023-03-27 17:11:45,064 44k INFO Train Epoch: 166 [93%] 2023-03-27 17:11:45,064 44k INFO Losses: [2.270218849182129, 2.742818832397461, 11.890448570251465, 15.785032272338867, 0.7089086771011353], step: 120800, lr: 9.790952770283884e-05 2023-03-27 17:12:01,363 44k INFO ====> Epoch: 166, cost 261.69 s 2023-03-27 17:13:03,818 44k INFO Train Epoch: 167 [21%] 2023-03-27 17:13:03,819 44k INFO Losses: [2.2709102630615234, 2.4068543910980225, 11.670205116271973, 16.23775291442871, 1.0331850051879883], step: 121000, lr: 9.789728901187598e-05 2023-03-27 17:13:06,766 44k INFO Saving model and optimizer state at iteration 167 to ./logs\44k\G_121000.pth 2023-03-27 17:13:07,468 44k INFO Saving model and optimizer state at iteration 167 to ./logs\44k\D_121000.pth 2023-03-27 17:13:08,162 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_118000.pth 2023-03-27 17:13:08,194 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_118000.pth 2023-03-27 17:14:16,897 44k INFO Train Epoch: 167 [48%] 2023-03-27 17:14:16,897 44k INFO Losses: [2.439810276031494, 2.3134193420410156, 13.116922378540039, 18.7906494140625, 1.1338294744491577], step: 121200, lr: 9.789728901187598e-05 2023-03-27 17:15:26,603 44k INFO Train Epoch: 167 [76%] 2023-03-27 17:15:26,603 44k INFO Losses: [2.248420238494873, 2.605456590652466, 14.427824020385742, 18.773836135864258, 1.3133102655410767], step: 121400, lr: 9.789728901187598e-05 2023-03-27 17:16:27,668 44k INFO ====> Epoch: 167, cost 266.31 s 2023-03-27 17:16:45,235 44k INFO Train Epoch: 168 [3%] 2023-03-27 17:16:45,235 44k INFO Losses: [2.348447322845459, 2.34781551361084, 12.898009300231934, 18.49281120300293, 0.7429022192955017], step: 121600, lr: 9.78850518507495e-05 2023-03-27 17:17:55,147 44k INFO Train Epoch: 168 [31%] 2023-03-27 17:17:55,147 44k INFO Losses: [2.321514129638672, 2.3792498111724854, 11.352128982543945, 17.82683563232422, 1.3914108276367188], step: 121800, lr: 9.78850518507495e-05 2023-03-27 17:19:04,966 44k INFO Train Epoch: 168 [58%] 2023-03-27 17:19:04,967 44k INFO Losses: [2.4761006832122803, 2.2350354194641113, 13.706618309020996, 17.42738151550293, 1.3565607070922852], step: 122000, lr: 9.78850518507495e-05 2023-03-27 17:19:07,994 44k INFO Saving model and optimizer state at iteration 168 to ./logs\44k\G_122000.pth 2023-03-27 17:19:08,714 44k INFO Saving model and optimizer state at iteration 168 to ./logs\44k\D_122000.pth 2023-03-27 17:19:09,396 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_119000.pth 2023-03-27 17:19:09,427 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_119000.pth 2023-03-27 17:20:19,276 44k INFO Train Epoch: 168 [86%] 2023-03-27 17:20:19,277 44k INFO Losses: [2.3246729373931885, 2.2970693111419678, 17.1214656829834, 20.695051193237305, 1.01203453540802], step: 122200, lr: 9.78850518507495e-05 2023-03-27 17:20:55,183 44k INFO ====> Epoch: 168, cost 267.52 s 2023-03-27 17:21:37,807 44k INFO Train Epoch: 169 [13%] 2023-03-27 17:21:37,807 44k INFO Losses: [2.3365182876586914, 2.3369956016540527, 13.16711139678955, 18.009037017822266, 0.604544997215271], step: 122400, lr: 9.787281621926815e-05 2023-03-27 17:22:47,496 44k INFO Train Epoch: 169 [41%] 2023-03-27 17:22:47,496 44k INFO Losses: [2.4500207901000977, 2.3934779167175293, 12.591423034667969, 18.97565460205078, 1.2623945474624634], step: 122600, lr: 9.787281621926815e-05 2023-03-27 17:23:57,390 44k INFO Train Epoch: 169 [68%] 2023-03-27 17:23:57,390 44k INFO Losses: [2.470968723297119, 2.193331241607666, 13.620561599731445, 17.663997650146484, 1.0860627889633179], step: 122800, lr: 9.787281621926815e-05 2023-03-27 17:25:07,324 44k INFO Train Epoch: 169 [96%] 2023-03-27 17:25:07,324 44k INFO Losses: [2.4222545623779297, 2.6926076412200928, 8.6229829788208, 13.866355895996094, 1.087327480316162], step: 123000, lr: 9.787281621926815e-05 2023-03-27 17:25:10,287 44k INFO Saving model and optimizer state at iteration 169 to ./logs\44k\G_123000.pth 2023-03-27 17:25:10,987 44k INFO Saving model and optimizer state at iteration 169 to ./logs\44k\D_123000.pth 2023-03-27 17:25:11,657 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_120000.pth 2023-03-27 17:25:11,698 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_120000.pth 2023-03-27 17:25:22,611 44k INFO ====> Epoch: 169, cost 267.43 s 2023-03-27 17:26:30,713 44k INFO Train Epoch: 170 [23%] 2023-03-27 17:26:30,714 44k INFO Losses: [2.1514499187469482, 2.702333688735962, 12.8300142288208, 16.440980911254883, 0.8341614603996277], step: 123200, lr: 9.786058211724074e-05 2023-03-27 17:27:40,138 44k INFO Train Epoch: 170 [51%] 2023-03-27 17:27:40,138 44k INFO Losses: [2.314253807067871, 2.4751172065734863, 13.762581825256348, 18.53729820251465, 0.9141523241996765], step: 123400, lr: 9.786058211724074e-05 2023-03-27 17:28:50,090 44k INFO Train Epoch: 170 [78%] 2023-03-27 17:28:50,090 44k INFO Losses: [2.4484832286834717, 2.1457958221435547, 10.035672187805176, 15.781368255615234, 1.6299636363983154], step: 123600, lr: 9.786058211724074e-05 2023-03-27 17:29:45,604 44k INFO ====> Epoch: 170, cost 262.99 s 2023-03-27 17:30:08,761 44k INFO Train Epoch: 171 [5%] 2023-03-27 17:30:08,762 44k INFO Losses: [2.396064043045044, 2.3398525714874268, 11.433820724487305, 16.398733139038086, 1.2280956506729126], step: 123800, lr: 9.784834954447608e-05 2023-03-27 17:31:18,729 44k INFO Train Epoch: 171 [33%] 2023-03-27 17:31:18,729 44k INFO Losses: [2.333228349685669, 2.3897271156311035, 13.123848915100098, 17.76328468322754, 0.6831649541854858], step: 124000, lr: 9.784834954447608e-05 2023-03-27 17:31:21,766 44k INFO Saving model and optimizer state at iteration 171 to ./logs\44k\G_124000.pth 2023-03-27 17:31:22,523 44k INFO Saving model and optimizer state at iteration 171 to ./logs\44k\D_124000.pth 2023-03-27 17:31:23,203 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_121000.pth 2023-03-27 17:31:23,232 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_121000.pth 2023-03-27 17:32:32,894 44k INFO Train Epoch: 171 [60%] 2023-03-27 17:32:32,894 44k INFO Losses: [2.4463307857513428, 2.3815464973449707, 13.541386604309082, 17.60829734802246, 1.0806419849395752], step: 124200, lr: 9.784834954447608e-05 2023-03-27 17:33:42,869 44k INFO Train Epoch: 171 [88%] 2023-03-27 17:33:42,869 44k INFO Losses: [2.607841968536377, 2.305100917816162, 9.218168258666992, 14.15418815612793, 0.9442626237869263], step: 124400, lr: 9.784834954447608e-05 2023-03-27 17:34:13,455 44k INFO ====> Epoch: 171, cost 267.85 s 2023-03-27 17:35:01,768 44k INFO Train Epoch: 172 [15%] 2023-03-27 17:35:01,768 44k INFO Losses: [2.3940911293029785, 2.3329145908355713, 11.99533462524414, 15.887142181396484, 1.278151273727417], step: 124600, lr: 9.783611850078301e-05 2023-03-27 17:36:11,526 44k INFO Train Epoch: 172 [43%] 2023-03-27 17:36:11,526 44k INFO Losses: [2.3444530963897705, 2.467311382293701, 15.392029762268066, 18.86194610595703, 0.7741926312446594], step: 124800, lr: 9.783611850078301e-05 2023-03-27 17:37:21,548 44k INFO Train Epoch: 172 [70%] 2023-03-27 17:37:21,549 44k INFO Losses: [2.37176513671875, 2.2932634353637695, 11.418245315551758, 18.618946075439453, 0.9160822033882141], step: 125000, lr: 9.783611850078301e-05 2023-03-27 17:37:24,558 44k INFO Saving model and optimizer state at iteration 172 to ./logs\44k\G_125000.pth 2023-03-27 17:37:25,259 44k INFO Saving model and optimizer state at iteration 172 to ./logs\44k\D_125000.pth 2023-03-27 17:37:25,940 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_122000.pth 2023-03-27 17:37:25,984 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_122000.pth 2023-03-27 17:38:35,922 44k INFO Train Epoch: 172 [98%] 2023-03-27 17:38:35,922 44k INFO Losses: [2.620575428009033, 2.069877862930298, 7.715834617614746, 12.704996109008789, 0.949543833732605], step: 125200, lr: 9.783611850078301e-05 2023-03-27 17:38:41,479 44k INFO ====> Epoch: 172, cost 268.02 s 2023-03-27 17:39:55,055 44k INFO Train Epoch: 173 [25%] 2023-03-27 17:39:55,056 44k INFO Losses: [2.4854207038879395, 2.177079200744629, 10.556364059448242, 16.082748413085938, 0.9007883071899414], step: 125400, lr: 9.782388898597041e-05 2023-03-27 17:41:04,757 44k INFO Train Epoch: 173 [53%] 2023-03-27 17:41:04,758 44k INFO Losses: [2.430957794189453, 2.3716232776641846, 10.708355903625488, 17.339527130126953, 0.8509302735328674], step: 125600, lr: 9.782388898597041e-05 2023-03-27 17:42:14,913 44k INFO Train Epoch: 173 [80%] 2023-03-27 17:42:14,914 44k INFO Losses: [2.456007480621338, 2.4925320148468018, 10.195554733276367, 15.725909233093262, 0.953844428062439], step: 125800, lr: 9.782388898597041e-05 2023-03-27 17:43:05,144 44k INFO ====> Epoch: 173, cost 263.67 s 2023-03-27 17:43:33,944 44k INFO Train Epoch: 174 [8%] 2023-03-27 17:43:33,945 44k INFO Losses: [2.362027406692505, 2.58577561378479, 9.372467041015625, 12.480069160461426, 0.9993513822555542], step: 126000, lr: 9.781166099984716e-05 2023-03-27 17:43:36,915 44k INFO Saving model and optimizer state at iteration 174 to ./logs\44k\G_126000.pth 2023-03-27 17:43:37,629 44k INFO Saving model and optimizer state at iteration 174 to ./logs\44k\D_126000.pth 2023-03-27 17:43:38,307 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_123000.pth 2023-03-27 17:43:38,349 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_123000.pth 2023-03-27 17:44:48,464 44k INFO Train Epoch: 174 [35%] 2023-03-27 17:44:48,464 44k INFO Losses: [2.4812581539154053, 2.3348302841186523, 15.185503959655762, 17.68130874633789, 1.0304592847824097], step: 126200, lr: 9.781166099984716e-05 2023-03-27 17:45:58,446 44k INFO Train Epoch: 174 [63%] 2023-03-27 17:45:58,447 44k INFO Losses: [2.5544302463531494, 2.3469033241271973, 6.7269182205200195, 12.20604419708252, 1.1267578601837158], step: 126400, lr: 9.781166099984716e-05 2023-03-27 17:47:08,656 44k INFO Train Epoch: 174 [90%] 2023-03-27 17:47:08,656 44k INFO Losses: [2.569550037384033, 2.1751744747161865, 9.88553237915039, 16.264352798461914, 1.4756594896316528], step: 126600, lr: 9.781166099984716e-05 2023-03-27 17:47:33,652 44k INFO ====> Epoch: 174, cost 268.51 s 2023-03-27 17:48:27,681 44k INFO Train Epoch: 175 [18%] 2023-03-27 17:48:27,682 44k INFO Losses: [2.316260814666748, 2.3986012935638428, 9.081997871398926, 12.279876708984375, 1.0394350290298462], step: 126800, lr: 9.779943454222217e-05 2023-03-27 17:49:37,534 44k INFO Train Epoch: 175 [45%] 2023-03-27 17:49:37,535 44k INFO Losses: [2.343385934829712, 2.199869155883789, 12.122344970703125, 14.794171333312988, 1.0282261371612549], step: 127000, lr: 9.779943454222217e-05 2023-03-27 17:49:40,588 44k INFO Saving model and optimizer state at iteration 175 to ./logs\44k\G_127000.pth 2023-03-27 17:49:41,291 44k INFO Saving model and optimizer state at iteration 175 to ./logs\44k\D_127000.pth 2023-03-27 17:49:41,966 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_124000.pth 2023-03-27 17:49:42,009 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_124000.pth 2023-03-27 17:50:52,100 44k INFO Train Epoch: 175 [73%] 2023-03-27 17:50:52,101 44k INFO Losses: [2.4607133865356445, 2.2723968029022217, 11.82693862915039, 17.377431869506836, 0.3970387876033783], step: 127200, lr: 9.779943454222217e-05 2023-03-27 17:52:02,042 44k INFO ====> Epoch: 175, cost 268.39 s 2023-03-27 17:52:11,028 44k INFO Train Epoch: 176 [0%] 2023-03-27 17:52:11,028 44k INFO Losses: [2.5906434059143066, 2.3625576496124268, 13.82233715057373, 17.260345458984375, 1.3217264413833618], step: 127400, lr: 9.778720961290439e-05 2023-03-27 17:53:21,597 44k INFO Train Epoch: 176 [27%] 2023-03-27 17:53:21,597 44k INFO Losses: [2.602332830429077, 2.0217111110687256, 5.510940074920654, 6.844113826751709, 1.2394295930862427], step: 127600, lr: 9.778720961290439e-05 2023-03-27 17:54:31,624 44k INFO Train Epoch: 176 [55%] 2023-03-27 17:54:31,624 44k INFO Losses: [2.4241154193878174, 2.2270467281341553, 11.048941612243652, 16.956989288330078, 0.672456681728363], step: 127800, lr: 9.778720961290439e-05 2023-03-27 17:55:41,824 44k INFO Train Epoch: 176 [82%] 2023-03-27 17:55:41,824 44k INFO Losses: [2.434438705444336, 2.3750834465026855, 10.168817520141602, 14.761211395263672, 1.2984319925308228], step: 128000, lr: 9.778720961290439e-05 2023-03-27 17:55:44,872 44k INFO Saving model and optimizer state at iteration 176 to ./logs\44k\G_128000.pth 2023-03-27 17:55:45,587 44k INFO Saving model and optimizer state at iteration 176 to ./logs\44k\D_128000.pth 2023-03-27 17:55:46,263 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_125000.pth 2023-03-27 17:55:46,305 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_125000.pth 2023-03-27 17:56:31,183 44k INFO ====> Epoch: 176, cost 269.14 s 2023-03-27 17:57:05,718 44k INFO Train Epoch: 177 [10%] 2023-03-27 17:57:05,718 44k INFO Losses: [2.568655014038086, 2.3006162643432617, 9.99456787109375, 15.842120170593262, 1.3628286123275757], step: 128200, lr: 9.777498621170277e-05 2023-03-27 17:58:16,017 44k INFO Train Epoch: 177 [37%] 2023-03-27 17:58:16,018 44k INFO Losses: [2.4327876567840576, 2.3575992584228516, 13.707687377929688, 16.87925910949707, 0.9115961790084839], step: 128400, lr: 9.777498621170277e-05 2023-03-27 17:59:26,266 44k INFO Train Epoch: 177 [65%] 2023-03-27 17:59:26,266 44k INFO Losses: [2.4877729415893555, 2.3270065784454346, 13.358565330505371, 19.410602569580078, 0.8932791352272034], step: 128600, lr: 9.777498621170277e-05 2023-03-27 18:00:36,762 44k INFO Train Epoch: 177 [92%] 2023-03-27 18:00:36,763 44k INFO Losses: [2.5053699016571045, 2.307882308959961, 13.377015113830566, 17.47154426574707, 1.246735692024231], step: 128800, lr: 9.777498621170277e-05 2023-03-27 18:00:56,126 44k INFO ====> Epoch: 177, cost 264.94 s 2023-03-27 18:01:56,177 44k INFO Train Epoch: 178 [20%] 2023-03-27 18:01:56,177 44k INFO Losses: [2.6582717895507812, 1.9379496574401855, 5.979320526123047, 9.493206977844238, 0.497969388961792], step: 129000, lr: 9.776276433842631e-05 2023-03-27 18:01:59,273 44k INFO Saving model and optimizer state at iteration 178 to ./logs\44k\G_129000.pth 2023-03-27 18:01:59,984 44k INFO Saving model and optimizer state at iteration 178 to ./logs\44k\D_129000.pth 2023-03-27 18:02:00,672 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_126000.pth 2023-03-27 18:02:00,714 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_126000.pth 2023-03-27 18:03:10,698 44k INFO Train Epoch: 178 [47%] 2023-03-27 18:03:10,699 44k INFO Losses: [2.602003574371338, 2.176957130432129, 11.236597061157227, 15.685440063476562, 1.445418357849121], step: 129200, lr: 9.776276433842631e-05 2023-03-27 18:04:21,068 44k INFO Train Epoch: 178 [75%] 2023-03-27 18:04:21,068 44k INFO Losses: [2.5215792655944824, 2.1710410118103027, 11.20630931854248, 12.705233573913574, 1.0001511573791504], step: 129400, lr: 9.776276433842631e-05 2023-03-27 18:05:25,545 44k INFO ====> Epoch: 178, cost 269.42 s 2023-03-27 18:05:40,319 44k INFO Train Epoch: 179 [2%] 2023-03-27 18:05:40,320 44k INFO Losses: [2.27616810798645, 2.4831902980804443, 12.284912109375, 18.29532814025879, 1.132308006286621], step: 129600, lr: 9.7750543992884e-05 2023-03-27 18:06:50,875 44k INFO Train Epoch: 179 [30%] 2023-03-27 18:06:50,875 44k INFO Losses: [2.665318012237549, 2.1516685485839844, 12.165548324584961, 15.486702919006348, 0.6542673707008362], step: 129800, lr: 9.7750543992884e-05 2023-03-27 18:08:01,247 44k INFO Train Epoch: 179 [57%] 2023-03-27 18:08:01,247 44k INFO Losses: [2.3628604412078857, 2.7889623641967773, 11.206024169921875, 16.6512508392334, 0.7762317061424255], step: 130000, lr: 9.7750543992884e-05 2023-03-27 18:08:04,241 44k INFO Saving model and optimizer state at iteration 179 to ./logs\44k\G_130000.pth 2023-03-27 18:08:04,947 44k INFO Saving model and optimizer state at iteration 179 to ./logs\44k\D_130000.pth 2023-03-27 18:08:05,625 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_127000.pth 2023-03-27 18:08:05,667 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_127000.pth 2023-03-27 18:09:16,007 44k INFO Train Epoch: 179 [85%] 2023-03-27 18:09:16,007 44k INFO Losses: [2.3443596363067627, 2.938110113143921, 6.402492523193359, 7.909160614013672, 1.1665546894073486], step: 130200, lr: 9.7750543992884e-05 2023-03-27 18:09:55,119 44k INFO ====> Epoch: 179, cost 269.57 s 2023-03-27 18:10:35,261 44k INFO Train Epoch: 180 [12%] 2023-03-27 18:10:35,261 44k INFO Losses: [2.4856107234954834, 2.484408378601074, 13.133910179138184, 16.327804565429688, 0.9414535760879517], step: 130400, lr: 9.773832517488488e-05 2023-03-27 18:11:45,770 44k INFO Train Epoch: 180 [40%] 2023-03-27 18:11:45,770 44k INFO Losses: [2.4860846996307373, 1.8760896921157837, 11.627530097961426, 14.046818733215332, 1.0686498880386353], step: 130600, lr: 9.773832517488488e-05 2023-03-27 18:12:56,175 44k INFO Train Epoch: 180 [67%] 2023-03-27 18:12:56,175 44k INFO Losses: [2.538848638534546, 1.990905523300171, 9.775440216064453, 14.279139518737793, 1.0238522291183472], step: 130800, lr: 9.773832517488488e-05 2023-03-27 18:14:06,868 44k INFO Train Epoch: 180 [95%] 2023-03-27 18:14:06,869 44k INFO Losses: [2.4545440673828125, 2.1755945682525635, 12.586639404296875, 16.226110458374023, 1.1417278051376343], step: 131000, lr: 9.773832517488488e-05 2023-03-27 18:14:09,898 44k INFO Saving model and optimizer state at iteration 180 to ./logs\44k\G_131000.pth 2023-03-27 18:14:10,610 44k INFO Saving model and optimizer state at iteration 180 to ./logs\44k\D_131000.pth 2023-03-27 18:14:11,257 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_128000.pth 2023-03-27 18:14:11,293 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_128000.pth 2023-03-27 18:14:25,031 44k INFO ====> Epoch: 180, cost 269.91 s 2023-03-27 18:15:30,917 44k INFO Train Epoch: 181 [22%] 2023-03-27 18:15:30,917 44k INFO Losses: [2.7005715370178223, 2.137972593307495, 8.888958930969238, 12.668450355529785, 0.8319202065467834], step: 131200, lr: 9.772610788423802e-05 2023-03-27 18:16:40,802 44k INFO Train Epoch: 181 [49%] 2023-03-27 18:16:40,802 44k INFO Losses: [2.561494827270508, 2.324634075164795, 13.682188987731934, 16.33731460571289, 1.3251464366912842], step: 131400, lr: 9.772610788423802e-05 2023-03-27 18:17:51,301 44k INFO Train Epoch: 181 [77%] 2023-03-27 18:17:51,301 44k INFO Losses: [2.3655714988708496, 2.3318958282470703, 10.114705085754395, 15.899140357971191, 1.209835410118103], step: 131600, lr: 9.772610788423802e-05 2023-03-27 18:18:50,189 44k INFO ====> Epoch: 181, cost 265.16 s 2023-03-27 18:19:10,834 44k INFO Train Epoch: 182 [4%] 2023-03-27 18:19:10,834 44k INFO Losses: [2.6111881732940674, 2.1698477268218994, 6.693398475646973, 10.287968635559082, 1.085585117340088], step: 131800, lr: 9.771389212075249e-05 2023-03-27 18:20:21,327 44k INFO Train Epoch: 182 [32%] 2023-03-27 18:20:21,328 44k INFO Losses: [2.4188759326934814, 2.392732620239258, 13.795336723327637, 17.36394500732422, 0.9453993439674377], step: 132000, lr: 9.771389212075249e-05 2023-03-27 18:20:24,366 44k INFO Saving model and optimizer state at iteration 182 to ./logs\44k\G_132000.pth 2023-03-27 18:20:25,085 44k INFO Saving model and optimizer state at iteration 182 to ./logs\44k\D_132000.pth 2023-03-27 18:20:25,758 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_129000.pth 2023-03-27 18:20:25,793 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_129000.pth 2023-03-27 18:21:36,149 44k INFO Train Epoch: 182 [59%] 2023-03-27 18:21:36,150 44k INFO Losses: [2.112567186355591, 3.2829229831695557, 14.035988807678223, 15.20745849609375, 0.9430005550384521], step: 132200, lr: 9.771389212075249e-05 2023-03-27 18:22:46,649 44k INFO Train Epoch: 182 [87%] 2023-03-27 18:22:46,649 44k INFO Losses: [2.4086008071899414, 2.4841761589050293, 11.395578384399414, 15.37138557434082, 1.064834713935852], step: 132400, lr: 9.771389212075249e-05 2023-03-27 18:23:21,444 44k INFO ====> Epoch: 182, cost 271.25 s 2023-03-27 18:24:07,617 44k INFO Train Epoch: 183 [14%] 2023-03-27 18:24:07,617 44k INFO Losses: [2.611403226852417, 2.105809211730957, 10.442655563354492, 15.772554397583008, 1.0080938339233398], step: 132600, lr: 9.77016778842374e-05 2023-03-27 18:25:17,892 44k INFO Train Epoch: 183 [42%] 2023-03-27 18:25:17,893 44k INFO Losses: [2.3997159004211426, 2.189664363861084, 11.091224670410156, 15.468740463256836, 1.2481204271316528], step: 132800, lr: 9.77016778842374e-05 2023-03-27 18:26:28,662 44k INFO Train Epoch: 183 [69%] 2023-03-27 18:26:28,663 44k INFO Losses: [2.3255233764648438, 2.273611068725586, 10.180839538574219, 13.347036361694336, 0.7448719143867493], step: 133000, lr: 9.77016778842374e-05 2023-03-27 18:26:31,773 44k INFO Saving model and optimizer state at iteration 183 to ./logs\44k\G_133000.pth 2023-03-27 18:26:32,493 44k INFO Saving model and optimizer state at iteration 183 to ./logs\44k\D_133000.pth 2023-03-27 18:26:33,175 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_130000.pth 2023-03-27 18:26:33,217 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_130000.pth 2023-03-27 18:27:44,155 44k INFO Train Epoch: 183 [97%] 2023-03-27 18:27:44,155 44k INFO Losses: [2.5576655864715576, 2.335876941680908, 11.218441009521484, 16.46881103515625, 1.164275884628296], step: 133200, lr: 9.77016778842374e-05 2023-03-27 18:27:52,682 44k INFO ====> Epoch: 183, cost 271.24 s 2023-03-27 18:29:41,851 44k INFO Train Epoch: 184 [24%] 2023-03-27 18:29:41,852 44k INFO Losses: [2.4022409915924072, 2.353070020675659, 10.246682167053223, 16.159765243530273, 0.6519449353218079], step: 133400, lr: 9.768946517450186e-05 2023-03-27 18:31:42,721 44k INFO Train Epoch: 184 [52%] 2023-03-27 18:31:42,721 44k INFO Losses: [1.8662364482879639, 3.2477855682373047, 11.753564834594727, 14.737924575805664, 0.7032389640808105], step: 133600, lr: 9.768946517450186e-05 2023-03-27 18:33:40,405 44k INFO Train Epoch: 184 [79%] 2023-03-27 18:33:40,406 44k INFO Losses: [2.4017605781555176, 2.4001569747924805, 16.337886810302734, 19.515825271606445, 0.7387650609016418], step: 133800, lr: 9.768946517450186e-05 2023-03-27 18:35:09,972 44k INFO ====> Epoch: 184, cost 437.29 s 2023-03-27 18:35:48,223 44k INFO Train Epoch: 185 [7%] 2023-03-27 18:35:48,224 44k INFO Losses: [2.171499490737915, 2.733581781387329, 10.629803657531738, 13.150168418884277, 1.4239521026611328], step: 134000, lr: 9.767725399135504e-05 2023-03-27 18:35:51,680 44k INFO Saving model and optimizer state at iteration 185 to ./logs\44k\G_134000.pth 2023-03-27 18:35:52,920 44k INFO Saving model and optimizer state at iteration 185 to ./logs\44k\D_134000.pth 2023-03-27 18:35:53,908 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_131000.pth 2023-03-27 18:35:53,968 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_131000.pth 2023-03-27 18:37:51,433 44k INFO Train Epoch: 185 [34%] 2023-03-27 18:37:51,433 44k INFO Losses: [2.4641056060791016, 2.8395533561706543, 9.019354820251465, 12.724245071411133, 1.0646169185638428], step: 134200, lr: 9.767725399135504e-05 2023-03-27 18:39:50,551 44k INFO Train Epoch: 185 [62%] 2023-03-27 18:39:50,552 44k INFO Losses: [2.3749678134918213, 2.456462860107422, 14.637182235717773, 20.57145881652832, 1.3534231185913086], step: 134400, lr: 9.767725399135504e-05 2023-03-27 18:41:48,504 44k INFO Train Epoch: 185 [89%] 2023-03-27 18:41:48,505 44k INFO Losses: [2.614518404006958, 2.1720948219299316, 11.737131118774414, 18.159448623657227, 0.6881380677223206], step: 134600, lr: 9.767725399135504e-05 2023-03-27 18:42:35,369 44k INFO ====> Epoch: 185, cost 445.40 s 2023-03-27 18:43:55,916 44k INFO Train Epoch: 186 [16%] 2023-03-27 18:43:55,916 44k INFO Losses: [2.344783067703247, 2.391119956970215, 11.666471481323242, 15.507871627807617, 1.4854692220687866], step: 134800, lr: 9.766504433460612e-05 2023-03-27 18:45:54,819 44k INFO Train Epoch: 186 [44%] 2023-03-27 18:45:54,819 44k INFO Losses: [2.794485569000244, 1.953847885131836, 5.246312618255615, 10.23617935180664, 0.7440398931503296], step: 135000, lr: 9.766504433460612e-05 2023-03-27 18:45:58,251 44k INFO Saving model and optimizer state at iteration 186 to ./logs\44k\G_135000.pth 2023-03-27 18:45:59,516 44k INFO Saving model and optimizer state at iteration 186 to ./logs\44k\D_135000.pth 2023-03-27 18:46:00,459 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_132000.pth 2023-03-27 18:46:00,518 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_132000.pth 2023-03-27 18:47:58,911 44k INFO Train Epoch: 186 [71%] 2023-03-27 18:47:58,911 44k INFO Losses: [2.4106063842773438, 2.430926561355591, 13.35350513458252, 17.54926109313965, 0.8882709741592407], step: 135200, lr: 9.766504433460612e-05 2023-03-27 18:49:57,863 44k INFO Train Epoch: 186 [99%] 2023-03-27 18:49:57,863 44k INFO Losses: [2.5052895545959473, 2.090672492980957, 9.3340482711792, 15.751490592956543, 0.8864313364028931], step: 135400, lr: 9.766504433460612e-05 2023-03-27 18:50:02,811 44k INFO ====> Epoch: 186, cost 447.44 s 2023-03-27 18:52:06,096 44k INFO Train Epoch: 187 [26%] 2023-03-27 18:52:06,097 44k INFO Losses: [2.457874059677124, 2.3619155883789062, 12.709022521972656, 18.864398956298828, 0.9187091588973999], step: 135600, lr: 9.765283620406429e-05 2023-03-27 18:54:06,213 44k INFO Train Epoch: 187 [54%] 2023-03-27 18:54:06,214 44k INFO Losses: [2.6637744903564453, 2.0744848251342773, 12.244755744934082, 17.70392417907715, 1.1547750234603882], step: 135800, lr: 9.765283620406429e-05 2023-03-27 18:56:03,999 44k INFO Train Epoch: 187 [81%] 2023-03-27 18:56:03,999 44k INFO Losses: [2.519225597381592, 2.3907387256622314, 7.636226177215576, 18.44424057006836, 0.9992062449455261], step: 136000, lr: 9.765283620406429e-05 2023-03-27 18:56:07,451 44k INFO Saving model and optimizer state at iteration 187 to ./logs\44k\G_136000.pth 2023-03-27 18:56:08,647 44k INFO Saving model and optimizer state at iteration 187 to ./logs\44k\D_136000.pth 2023-03-27 18:56:09,534 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_133000.pth 2023-03-27 18:56:09,589 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_133000.pth 2023-03-27 18:57:30,656 44k INFO ====> Epoch: 187, cost 447.84 s 2023-03-27 18:58:18,693 44k INFO Train Epoch: 188 [9%] 2023-03-27 18:58:18,693 44k INFO Losses: [2.762451648712158, 2.1222169399261475, 3.5949108600616455, 6.408213138580322, 0.7731075882911682], step: 136200, lr: 9.764062959953878e-05 2023-03-27 19:00:17,931 44k INFO Train Epoch: 188 [36%] 2023-03-27 19:00:17,931 44k INFO Losses: [2.212099075317383, 2.386256694793701, 17.12603187561035, 19.449317932128906, 0.9064273238182068], step: 136400, lr: 9.764062959953878e-05 2023-03-27 19:02:18,223 44k INFO Train Epoch: 188 [64%] 2023-03-27 19:02:18,224 44k INFO Losses: [2.457453966140747, 2.3389828205108643, 14.96295166015625, 18.248695373535156, 0.8560264110565186], step: 136600, lr: 9.764062959953878e-05 2023-03-27 19:04:11,102 44k INFO Train Epoch: 188 [91%] 2023-03-27 19:04:11,102 44k INFO Losses: [2.380077600479126, 2.389202117919922, 12.926817893981934, 16.646018981933594, 1.1096956729888916], step: 136800, lr: 9.764062959953878e-05 2023-03-27 19:04:34,039 44k INFO ====> Epoch: 188, cost 423.38 s 2023-03-27 19:05:59,347 44k INFO Train Epoch: 189 [19%] 2023-03-27 19:05:59,347 44k INFO Losses: [2.4508824348449707, 2.3083102703094482, 13.859821319580078, 18.2222957611084, 1.063199758529663], step: 137000, lr: 9.762842452083883e-05 2023-03-27 19:06:03,055 44k INFO Saving model and optimizer state at iteration 189 to ./logs\44k\G_137000.pth 2023-03-27 19:06:04,383 44k INFO Saving model and optimizer state at iteration 189 to ./logs\44k\D_137000.pth 2023-03-27 19:06:05,402 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_134000.pth 2023-03-27 19:06:05,461 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_134000.pth 2023-03-27 19:08:10,184 44k INFO Train Epoch: 189 [46%] 2023-03-27 19:08:10,185 44k INFO Losses: [2.5186033248901367, 2.257539987564087, 11.743875503540039, 15.75949764251709, 1.1062092781066895], step: 137200, lr: 9.762842452083883e-05 2023-03-27 19:10:09,243 44k INFO Train Epoch: 189 [74%] 2023-03-27 19:10:09,243 44k INFO Losses: [2.5647528171539307, 2.6023049354553223, 7.874855995178223, 12.7445068359375, 1.003122329711914], step: 137400, lr: 9.762842452083883e-05 2023-03-27 19:12:03,968 44k INFO ====> Epoch: 189, cost 449.93 s 2023-03-27 19:12:18,458 44k INFO Train Epoch: 190 [1%] 2023-03-27 19:12:18,459 44k INFO Losses: [2.3579704761505127, 2.143578052520752, 12.160978317260742, 18.103628158569336, 1.344075083732605], step: 137600, lr: 9.761622096777372e-05 2023-03-27 19:14:18,988 44k INFO Train Epoch: 190 [29%] 2023-03-27 19:14:18,989 44k INFO Losses: [2.3292367458343506, 2.4283018112182617, 10.187582969665527, 13.868294715881348, 0.7467020750045776], step: 137800, lr: 9.761622096777372e-05 2023-03-27 19:16:20,055 44k INFO Train Epoch: 190 [56%] 2023-03-27 19:16:20,055 44k INFO Losses: [2.2837259769439697, 2.4549899101257324, 13.193291664123535, 17.45569610595703, 0.7637121081352234], step: 138000, lr: 9.761622096777372e-05 2023-03-27 19:16:23,594 44k INFO Saving model and optimizer state at iteration 190 to ./logs\44k\G_138000.pth 2023-03-27 19:16:24,816 44k INFO Saving model and optimizer state at iteration 190 to ./logs\44k\D_138000.pth 2023-03-27 19:16:25,826 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_135000.pth 2023-03-27 19:16:25,881 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_135000.pth 2023-03-27 19:18:25,012 44k INFO Train Epoch: 190 [84%] 2023-03-27 19:18:25,012 44k INFO Losses: [2.717865467071533, 2.0687553882598877, 6.770575523376465, 12.204886436462402, 0.5999082326889038], step: 138200, lr: 9.761622096777372e-05 2023-03-27 19:19:37,018 44k INFO ====> Epoch: 190, cost 453.05 s 2023-03-27 19:20:35,394 44k INFO Train Epoch: 191 [11%] 2023-03-27 19:20:35,395 44k INFO Losses: [2.5068352222442627, 2.308448553085327, 10.919577598571777, 17.932260513305664, 1.4263895750045776], step: 138400, lr: 9.760401894015275e-05 2023-03-27 19:22:35,340 44k INFO Train Epoch: 191 [38%] 2023-03-27 19:22:35,340 44k INFO Losses: [2.2786097526550293, 2.2209770679473877, 9.656487464904785, 18.11559295654297, 1.3045752048492432], step: 138600, lr: 9.760401894015275e-05 2023-03-27 19:24:36,238 44k INFO Train Epoch: 191 [66%] 2023-03-27 19:24:36,238 44k INFO Losses: [2.67101788520813, 2.064164876937866, 8.61523151397705, 14.85780143737793, 0.9501770734786987], step: 138800, lr: 9.760401894015275e-05 2023-03-27 19:26:37,103 44k INFO Train Epoch: 191 [93%] 2023-03-27 19:26:37,104 44k INFO Losses: [2.558389186859131, 2.175037145614624, 8.562606811523438, 11.976133346557617, 1.1441177129745483], step: 139000, lr: 9.760401894015275e-05 2023-03-27 19:26:40,563 44k INFO Saving model and optimizer state at iteration 191 to ./logs\44k\G_139000.pth 2023-03-27 19:26:41,777 44k INFO Saving model and optimizer state at iteration 191 to ./logs\44k\D_139000.pth 2023-03-27 19:26:42,655 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_136000.pth 2023-03-27 19:26:42,716 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_136000.pth 2023-03-27 19:27:11,215 44k INFO ====> Epoch: 191, cost 454.20 s 2023-03-27 19:28:52,429 44k INFO Train Epoch: 192 [21%] 2023-03-27 19:28:52,429 44k INFO Losses: [2.2611024379730225, 2.3733296394348145, 12.18364143371582, 14.79821491241455, 0.23727697134017944], step: 139200, lr: 9.759181843778522e-05 2023-03-27 19:30:53,710 44k INFO Train Epoch: 192 [48%] 2023-03-27 19:30:53,711 44k INFO Losses: [2.708000659942627, 2.295978307723999, 11.40211296081543, 14.875353813171387, 0.683947741985321], step: 139400, lr: 9.759181843778522e-05 2023-03-27 19:32:54,017 44k INFO Train Epoch: 192 [76%] 2023-03-27 19:32:54,017 44k INFO Losses: [2.2797460556030273, 2.5308191776275635, 12.309176445007324, 15.805910110473633, 0.8755463361740112], step: 139600, lr: 9.759181843778522e-05 2023-03-27 19:34:40,398 44k INFO ====> Epoch: 192, cost 449.18 s 2023-03-27 19:35:04,776 44k INFO Train Epoch: 193 [3%] 2023-03-27 19:35:04,777 44k INFO Losses: [2.395631790161133, 2.3058242797851562, 10.332178115844727, 15.966340065002441, 1.4357479810714722], step: 139800, lr: 9.757961946048049e-05 2023-03-27 19:37:07,518 44k INFO Train Epoch: 193 [31%] 2023-03-27 19:37:07,519 44k INFO Losses: [2.406252861022949, 2.339590072631836, 13.349493026733398, 19.099979400634766, 0.8507081866264343], step: 140000, lr: 9.757961946048049e-05 2023-03-27 19:37:16,934 44k INFO Saving model and optimizer state at iteration 193 to ./logs\44k\G_140000.pth 2023-03-27 19:37:18,138 44k INFO Saving model and optimizer state at iteration 193 to ./logs\44k\D_140000.pth 2023-03-27 19:37:19,145 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_137000.pth 2023-03-27 19:37:19,190 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_137000.pth 2023-03-27 19:39:21,466 44k INFO Train Epoch: 193 [58%] 2023-03-27 19:39:21,466 44k INFO Losses: [2.331926107406616, 2.2670581340789795, 16.31711196899414, 19.426807403564453, 1.165295124053955], step: 140200, lr: 9.757961946048049e-05 2023-03-27 19:41:23,649 44k INFO Train Epoch: 193 [86%] 2023-03-27 19:41:23,650 44k INFO Losses: [2.7779183387756348, 2.540911912918091, 18.165939331054688, 18.961484909057617, 1.1274611949920654], step: 140400, lr: 9.757961946048049e-05 2023-03-27 19:42:26,771 44k INFO ====> Epoch: 193, cost 466.37 s 2023-03-27 19:43:35,764 44k INFO Train Epoch: 194 [13%] 2023-03-27 19:43:35,764 44k INFO Losses: [2.3911075592041016, 2.500920534133911, 11.169350624084473, 17.27707290649414, 1.2337517738342285], step: 140600, lr: 9.756742200804793e-05 2023-03-27 19:45:36,782 44k INFO Train Epoch: 194 [41%] 2023-03-27 19:45:36,782 44k INFO Losses: [2.51815128326416, 2.315237283706665, 15.758282661437988, 17.378787994384766, 1.7161879539489746], step: 140800, lr: 9.756742200804793e-05 2023-03-27 19:47:38,487 44k INFO Train Epoch: 194 [68%] 2023-03-27 19:47:38,488 44k INFO Losses: [2.5225439071655273, 2.2937750816345215, 7.947906494140625, 13.809956550598145, 1.1029837131500244], step: 141000, lr: 9.756742200804793e-05 2023-03-27 19:47:41,993 44k INFO Saving model and optimizer state at iteration 194 to ./logs\44k\G_141000.pth 2023-03-27 19:47:43,166 44k INFO Saving model and optimizer state at iteration 194 to ./logs\44k\D_141000.pth 2023-03-27 19:47:44,126 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_138000.pth 2023-03-27 19:47:44,182 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_138000.pth 2023-03-27 19:49:46,482 44k INFO Train Epoch: 194 [96%] 2023-03-27 19:49:46,482 44k INFO Losses: [2.406101942062378, 2.3243350982666016, 8.236207008361816, 12.970823287963867, 0.6001091599464417], step: 141200, lr: 9.756742200804793e-05 2023-03-27 19:50:05,747 44k INFO ====> Epoch: 194, cost 458.98 s 2023-03-27 19:51:57,934 44k INFO Train Epoch: 195 [23%] 2023-03-27 19:51:57,934 44k INFO Losses: [2.1626389026641846, 2.612274408340454, 14.415027618408203, 19.070098876953125, 0.3283042311668396], step: 141400, lr: 9.755522608029692e-05 2023-03-27 19:54:00,056 44k INFO Train Epoch: 195 [51%] 2023-03-27 19:54:00,057 44k INFO Losses: [2.5563974380493164, 2.2144289016723633, 11.154911994934082, 13.501533508300781, 1.065426230430603], step: 141600, lr: 9.755522608029692e-05 2023-03-27 19:55:33,737 44k INFO Train Epoch: 195 [78%] 2023-03-27 19:55:33,737 44k INFO Losses: [2.333996534347534, 2.4732425212860107, 10.201565742492676, 15.393624305725098, 1.1897131204605103], step: 141800, lr: 9.755522608029692e-05 2023-03-27 19:56:31,357 44k INFO ====> Epoch: 195, cost 385.61 s 2023-03-27 19:56:55,108 44k INFO Train Epoch: 196 [5%] 2023-03-27 19:56:55,108 44k INFO Losses: [2.425924301147461, 2.368091583251953, 11.31054401397705, 16.61667251586914, 1.2943885326385498], step: 142000, lr: 9.754303167703689e-05 2023-03-27 19:56:58,170 44k INFO Saving model and optimizer state at iteration 196 to ./logs\44k\G_142000.pth 2023-03-27 19:56:58,895 44k INFO Saving model and optimizer state at iteration 196 to ./logs\44k\D_142000.pth 2023-03-27 19:56:59,587 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_139000.pth 2023-03-27 19:56:59,623 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_139000.pth 2023-03-27 19:58:12,120 44k INFO Train Epoch: 196 [33%] 2023-03-27 19:58:12,120 44k INFO Losses: [2.3957157135009766, 2.4963765144348145, 10.86655044555664, 17.056705474853516, 1.0954058170318604], step: 142200, lr: 9.754303167703689e-05 2023-03-27 19:59:25,230 44k INFO Train Epoch: 196 [60%] 2023-03-27 19:59:25,231 44k INFO Losses: [2.2853784561157227, 2.453718900680542, 15.369231224060059, 17.99281120300293, 1.2726362943649292], step: 142400, lr: 9.754303167703689e-05 2023-03-27 20:00:37,829 44k INFO Train Epoch: 196 [88%] 2023-03-27 20:00:37,829 44k INFO Losses: [2.5181896686553955, 2.236295700073242, 11.772687911987305, 16.5927677154541, 0.6095744967460632], step: 142600, lr: 9.754303167703689e-05 2023-03-27 20:01:09,633 44k INFO ====> Epoch: 196, cost 278.28 s 2023-03-27 20:01:59,516 44k INFO Train Epoch: 197 [15%] 2023-03-27 20:01:59,516 44k INFO Losses: [2.578033208847046, 1.8322361707687378, 9.245861053466797, 15.060131072998047, 1.3539458513259888], step: 142800, lr: 9.753083879807726e-05 2023-03-27 20:03:12,148 44k INFO Train Epoch: 197 [43%] 2023-03-27 20:03:12,149 44k INFO Losses: [2.3513896465301514, 2.5542171001434326, 15.272041320800781, 18.901708602905273, 0.9450777769088745], step: 143000, lr: 9.753083879807726e-05 2023-03-27 20:03:15,098 44k INFO Saving model and optimizer state at iteration 197 to ./logs\44k\G_143000.pth 2023-03-27 20:03:15,869 44k INFO Saving model and optimizer state at iteration 197 to ./logs\44k\D_143000.pth 2023-03-27 20:03:16,555 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_140000.pth 2023-03-27 20:03:16,593 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_140000.pth 2023-03-27 20:04:29,230 44k INFO Train Epoch: 197 [70%] 2023-03-27 20:04:29,230 44k INFO Losses: [2.2096316814422607, 2.6833879947662354, 10.701552391052246, 18.427349090576172, 1.7417802810668945], step: 143200, lr: 9.753083879807726e-05 2023-03-27 20:05:42,155 44k INFO Train Epoch: 197 [98%] 2023-03-27 20:05:42,155 44k INFO Losses: [2.6104671955108643, 1.972489356994629, 7.714804172515869, 13.311616897583008, 0.9258630871772766], step: 143400, lr: 9.753083879807726e-05 2023-03-27 20:05:47,913 44k INFO ====> Epoch: 197, cost 278.28 s 2023-03-27 20:07:04,498 44k INFO Train Epoch: 198 [25%] 2023-03-27 20:07:04,499 44k INFO Losses: [2.577582597732544, 2.3066415786743164, 9.341288566589355, 15.835091590881348, 1.2013955116271973], step: 143600, lr: 9.75186474432275e-05 2023-03-27 20:08:17,691 44k INFO Train Epoch: 198 [53%] 2023-03-27 20:08:17,691 44k INFO Losses: [2.4827418327331543, 2.1423988342285156, 11.042562484741211, 16.920835494995117, 1.010384440422058], step: 143800, lr: 9.75186474432275e-05 2023-03-27 20:09:30,672 44k INFO Train Epoch: 198 [80%] 2023-03-27 20:09:30,673 44k INFO Losses: [2.2888846397399902, 2.518862247467041, 13.30599594116211, 16.139951705932617, 1.1573389768600464], step: 144000, lr: 9.75186474432275e-05 2023-03-27 20:09:33,707 44k INFO Saving model and optimizer state at iteration 198 to ./logs\44k\G_144000.pth 2023-03-27 20:09:34,474 44k INFO Saving model and optimizer state at iteration 198 to ./logs\44k\D_144000.pth 2023-03-27 20:09:35,142 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_141000.pth 2023-03-27 20:09:35,180 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_141000.pth 2023-03-27 20:10:27,740 44k INFO ====> Epoch: 198, cost 279.83 s 2023-03-27 20:10:57,689 44k INFO Train Epoch: 199 [8%] 2023-03-27 20:10:57,689 44k INFO Losses: [2.5150864124298096, 2.205507755279541, 10.99285888671875, 16.062257766723633, 1.1212217807769775], step: 144200, lr: 9.750645761229709e-05 2023-03-27 20:12:10,654 44k INFO Train Epoch: 199 [35%] 2023-03-27 20:12:10,655 44k INFO Losses: [2.4877192974090576, 2.3777170181274414, 13.299330711364746, 17.281198501586914, 1.340556263923645], step: 144400, lr: 9.750645761229709e-05 2023-03-27 20:13:24,013 44k INFO Train Epoch: 199 [63%] 2023-03-27 20:13:24,014 44k INFO Losses: [2.612548828125, 2.3656506538391113, 10.026275634765625, 13.32587718963623, 0.7123855352401733], step: 144600, lr: 9.750645761229709e-05 2023-03-27 20:14:37,202 44k INFO Train Epoch: 199 [90%] 2023-03-27 20:14:37,203 44k INFO Losses: [2.395843744277954, 2.7118046283721924, 17.16822052001953, 22.106035232543945, 1.110530972480774], step: 144800, lr: 9.750645761229709e-05 2023-03-27 20:15:03,453 44k INFO ====> Epoch: 199, cost 275.71 s 2023-03-27 20:15:59,826 44k INFO Train Epoch: 200 [18%] 2023-03-27 20:15:59,826 44k INFO Losses: [2.492321491241455, 2.2742919921875, 9.012547492980957, 15.917848587036133, 0.7083293795585632], step: 145000, lr: 9.749426930509556e-05 2023-03-27 20:16:02,829 44k INFO Saving model and optimizer state at iteration 200 to ./logs\44k\G_145000.pth 2023-03-27 20:16:03,608 44k INFO Saving model and optimizer state at iteration 200 to ./logs\44k\D_145000.pth 2023-03-27 20:16:04,290 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_142000.pth 2023-03-27 20:16:04,324 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_142000.pth 2023-03-27 20:17:17,880 44k INFO Train Epoch: 200 [45%] 2023-03-27 20:17:17,881 44k INFO Losses: [2.574789524078369, 1.9460774660110474, 10.922989845275879, 14.888471603393555, 0.9487313032150269], step: 145200, lr: 9.749426930509556e-05 2023-03-27 20:18:31,061 44k INFO Train Epoch: 200 [73%] 2023-03-27 20:18:31,062 44k INFO Losses: [2.493708848953247, 2.2140517234802246, 12.346317291259766, 17.394468307495117, 0.8876485824584961], step: 145400, lr: 9.749426930509556e-05 2023-03-27 20:19:44,687 44k INFO ====> Epoch: 200, cost 281.23 s 2023-03-27 23:14:27,624 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 300, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kageaki': 0}, 'model_dir': './logs\\44k'} 2023-03-27 23:14:27,653 44k WARNING git hash values are different. 
cea6df30(saved) != fd4d47fd(current) 2023-03-27 23:14:29,664 44k INFO Loaded checkpoint './logs\44k\G_145000.pth' (iteration 200) 2023-03-27 23:14:30,006 44k INFO Loaded checkpoint './logs\44k\D_145000.pth' (iteration 200) 2023-03-27 23:17:31,393 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 300, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kageaki': 0}, 'model_dir': './logs\\44k'} 2023-03-27 23:17:31,419 44k WARNING git hash values are different. 
cea6df30(saved) != fd4d47fd(current) 2023-03-27 23:17:33,437 44k INFO Loaded checkpoint './logs\44k\G_145000.pth' (iteration 200) 2023-03-27 23:17:33,785 44k INFO Loaded checkpoint './logs\44k\D_145000.pth' (iteration 200) 2023-03-27 23:18:41,373 44k INFO Train Epoch: 200 [18%] 2023-03-27 23:18:41,374 44k INFO Losses: [2.6295700073242188, 2.2805099487304688, 10.520780563354492, 15.424920082092285, 0.5140539407730103], step: 145000, lr: 9.748208252143241e-05 2023-03-27 23:18:45,068 44k INFO Saving model and optimizer state at iteration 200 to ./logs\44k\G_145000.pth 2023-03-27 23:18:45,825 44k INFO Saving model and optimizer state at iteration 200 to ./logs\44k\D_145000.pth 2023-03-27 23:20:12,124 44k INFO Train Epoch: 200 [45%] 2023-03-27 23:20:12,125 44k INFO Losses: [2.8118667602539062, 1.9169654846191406, 4.102195739746094, 13.320213317871094, 1.3006114959716797], step: 145200, lr: 9.748208252143241e-05 2023-03-27 23:21:34,824 44k INFO Train Epoch: 200 [73%] 2023-03-27 23:21:34,824 44k INFO Losses: [2.4362378120422363, 2.229292869567871, 12.326255798339844, 14.870326042175293, 1.0914028882980347], step: 145400, lr: 9.748208252143241e-05 2023-03-27 23:22:56,971 44k INFO ====> Epoch: 200, cost 325.58 s 2023-03-27 23:23:06,076 44k INFO Train Epoch: 201 [0%] 2023-03-27 23:23:06,076 44k INFO Losses: [2.522116184234619, 2.315206289291382, 13.33376693725586, 17.440086364746094, 0.6209852695465088], step: 145600, lr: 9.746989726111722e-05 2023-03-27 23:24:13,638 44k INFO Train Epoch: 201 [27%] 2023-03-27 23:24:13,639 44k INFO Losses: [2.450591564178467, 2.1720428466796875, 8.87289047241211, 10.257354736328125, 0.8083175420761108], step: 145800, lr: 9.746989726111722e-05 2023-03-27 23:25:20,352 44k INFO Train Epoch: 201 [55%] 2023-03-27 23:25:20,353 44k INFO Losses: [2.549207925796509, 2.0439255237579346, 7.795276641845703, 12.503348350524902, 0.9623158574104309], step: 146000, lr: 9.746989726111722e-05 2023-03-27 23:25:23,334 44k INFO Saving model and optimizer state at 
iteration 201 to ./logs\44k\G_146000.pth 2023-03-27 23:25:24,187 44k INFO Saving model and optimizer state at iteration 201 to ./logs\44k\D_146000.pth 2023-03-27 23:25:25,045 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_143000.pth 2023-03-27 23:25:25,078 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_143000.pth 2023-03-27 23:26:32,363 44k INFO Train Epoch: 201 [82%] 2023-03-27 23:26:32,364 44k INFO Losses: [2.4395885467529297, 2.2506091594696045, 11.0452299118042, 16.23900604248047, 1.1028486490249634], step: 146200, lr: 9.746989726111722e-05 2023-03-27 23:27:14,949 44k INFO ====> Epoch: 201, cost 257.98 s 2023-03-27 23:27:48,095 44k INFO Train Epoch: 202 [10%] 2023-03-27 23:27:48,095 44k INFO Losses: [2.2367684841156006, 2.4364233016967773, 13.821547508239746, 17.376903533935547, 1.2147448062896729], step: 146400, lr: 9.745771352395957e-05 2023-03-27 23:28:55,633 44k INFO Train Epoch: 202 [37%] 2023-03-27 23:28:55,634 44k INFO Losses: [2.5322256088256836, 2.065255641937256, 10.902167320251465, 15.912827491760254, 1.0995049476623535], step: 146600, lr: 9.745771352395957e-05 2023-03-27 23:30:02,604 44k INFO Train Epoch: 202 [65%] 2023-03-27 23:30:02,604 44k INFO Losses: [2.266462802886963, 2.298971176147461, 13.311721801757812, 17.37950325012207, 1.125520944595337], step: 146800, lr: 9.745771352395957e-05 2023-03-27 23:31:09,700 44k INFO Train Epoch: 202 [92%] 2023-03-27 23:31:09,701 44k INFO Losses: [2.4120049476623535, 2.3233296871185303, 12.571407318115234, 17.20088005065918, 1.0911911725997925], step: 147000, lr: 9.745771352395957e-05 2023-03-27 23:31:12,652 44k INFO Saving model and optimizer state at iteration 202 to ./logs\44k\G_147000.pth 2023-03-27 23:31:13,398 44k INFO Saving model and optimizer state at iteration 202 to ./logs\44k\D_147000.pth 2023-03-27 23:31:14,032 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_144000.pth 2023-03-27 23:31:14,076 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_144000.pth 2023-03-27 23:31:32,439 44k INFO ====> Epoch: 202, cost 257.49 s 2023-03-27 23:32:30,009 44k INFO Train Epoch: 203 [20%] 2023-03-27 23:32:30,009 44k INFO Losses: [2.363846778869629, 2.640907049179077, 7.763689994812012, 11.70701789855957, 1.0778807401657104], step: 147200, lr: 9.744553130976908e-05 2023-03-27 23:33:36,786 44k INFO Train Epoch: 203 [47%] 2023-03-27 23:33:36,787 44k INFO Losses: [2.575544834136963, 2.212747812271118, 10.958637237548828, 15.520188331604004, 0.8300553560256958], step: 147400, lr: 9.744553130976908e-05 2023-03-27 23:34:44,469 44k INFO Train Epoch: 203 [75%] 2023-03-27 23:34:44,469 44k INFO Losses: [2.629086494445801, 2.192251205444336, 13.077086448669434, 16.11085319519043, 0.8167790770530701], step: 147600, lr: 9.744553130976908e-05 2023-03-27 23:35:45,788 44k INFO ====> Epoch: 203, cost 253.35 s 2023-03-27 23:36:00,227 44k INFO Train Epoch: 204 [2%] 2023-03-27 23:36:00,228 44k INFO Losses: [2.255709409713745, 2.4295876026153564, 8.336153984069824, 14.115283012390137, 1.1108721494674683], step: 147800, lr: 9.743335061835535e-05 2023-03-27 23:37:07,486 44k INFO Train Epoch: 204 [30%] 2023-03-27 23:37:07,486 44k INFO Losses: [2.4584031105041504, 2.740936040878296, 13.37163257598877, 17.885610580444336, 0.9221645593643188], step: 148000, lr: 9.743335061835535e-05 2023-03-27 23:37:10,525 44k INFO Saving model and optimizer state at iteration 204 to ./logs\44k\G_148000.pth 2023-03-27 23:37:11,229 44k INFO Saving model and optimizer state at iteration 204 to ./logs\44k\D_148000.pth 2023-03-27 23:37:11,899 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_145000.pth 2023-03-27 23:37:11,929 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_145000.pth 2023-03-27 23:38:18,777 44k INFO Train Epoch: 204 [57%] 2023-03-27 23:38:18,777 44k INFO Losses: [2.637502670288086, 2.094529628753662, 11.544476509094238, 14.857647895812988, 0.8502796292304993], step: 148200, lr: 9.743335061835535e-05 2023-03-27 23:39:25,992 44k INFO Train Epoch: 204 [85%] 2023-03-27 23:39:25,992 44k INFO Losses: [2.4629175662994385, 2.180807113647461, 6.815831184387207, 11.886754035949707, 0.8205798268318176], step: 148400, lr: 9.743335061835535e-05 2023-03-27 23:40:03,189 44k INFO ====> Epoch: 204, cost 257.40 s 2023-03-27 23:40:41,556 44k INFO Train Epoch: 205 [12%] 2023-03-27 23:40:41,556 44k INFO Losses: [2.1450371742248535, 2.6513679027557373, 10.038383483886719, 15.067492485046387, 1.0561209917068481], step: 148600, lr: 9.742117144952805e-05 2023-03-27 23:41:48,901 44k INFO Train Epoch: 205 [40%] 2023-03-27 23:41:48,902 44k INFO Losses: [2.405231237411499, 2.254415512084961, 14.282098770141602, 18.814910888671875, 0.7454627752304077], step: 148800, lr: 9.742117144952805e-05 2023-03-27 23:42:56,132 44k INFO Train Epoch: 205 [67%] 2023-03-27 23:42:56,133 44k INFO Losses: [2.529339075088501, 2.2790770530700684, 7.676848888397217, 10.605709075927734, 0.9104920029640198], step: 149000, lr: 9.742117144952805e-05 2023-03-27 23:42:59,184 44k INFO Saving model and optimizer state at iteration 205 to ./logs\44k\G_149000.pth 2023-03-27 23:42:59,896 44k INFO Saving model and optimizer state at iteration 205 to ./logs\44k\D_149000.pth 2023-03-27 23:43:00,581 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_146000.pth 2023-03-27 23:43:00,623 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_146000.pth
2023-03-27 23:44:07,665 44k INFO Train Epoch: 205 [95%]
2023-03-27 23:44:07,665 44k INFO Losses: [2.470421314239502, 1.936044692993164, 13.083686828613281, 15.41603946685791, 1.0213648080825806], step: 149200, lr: 9.742117144952805e-05
2023-03-27 23:44:20,926 44k INFO ====> Epoch: 205, cost 257.74 s
2023-03-27 23:45:24,058 44k INFO Train Epoch: 206 [22%]
2023-03-27 23:45:24,059 44k INFO Losses: [2.66035532951355, 2.1134185791015625, 6.474897384643555, 12.481965065002441, 0.8770398497581482], step: 149400, lr: 9.740899380309685e-05
2023-03-27 23:46:30,841 44k INFO Train Epoch: 206 [49%]
2023-03-27 23:46:30,841 44k INFO Losses: [2.3613548278808594, 2.7724051475524902, 13.944478988647461, 15.426020622253418, 0.9012429714202881], step: 149600, lr: 9.740899380309685e-05
2023-03-27 23:47:38,441 44k INFO Train Epoch: 206 [77%]
2023-03-27 23:47:38,441 44k INFO Losses: [2.0758347511291504, 2.8793811798095703, 9.310397148132324, 13.703941345214844, 1.2380566596984863], step: 149800, lr: 9.740899380309685e-05
2023-03-27 23:48:34,532 44k INFO ====> Epoch: 206, cost 253.61 s
2023-03-27 23:48:54,327 44k INFO Train Epoch: 207 [4%]
2023-03-27 23:48:54,327 44k INFO Losses: [2.6456007957458496, 1.9377191066741943, 9.993732452392578, 12.637792587280273, 1.042197823524475], step: 150000, lr: 9.739681767887146e-05
2023-03-27 23:48:57,294 44k INFO Saving model and optimizer state at iteration 207 to ./logs\44k\G_150000.pth
2023-03-27 23:48:57,996 44k INFO Saving model and optimizer state at iteration 207 to ./logs\44k\D_150000.pth
2023-03-27 23:48:58,689 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_147000.pth
2023-03-27 23:48:58,727 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_147000.pth
2023-03-27 23:50:06,528 44k INFO Train Epoch: 207 [32%]
2023-03-27 23:50:06,529 44k INFO Losses: [2.4724981784820557, 2.3944177627563477, 13.510619163513184, 18.625497817993164, 1.3260475397109985], step: 150200, lr: 9.739681767887146e-05
2023-03-27 23:51:13,656 44k INFO Train Epoch: 207 [59%]
2023-03-27 23:51:13,657 44k INFO Losses: [2.3675479888916016, 2.7005295753479004, 15.070945739746094, 17.289161682128906, 1.5206645727157593], step: 150400, lr: 9.739681767887146e-05
2023-03-27 23:52:20,880 44k INFO Train Epoch: 207 [87%]
2023-03-27 23:52:20,881 44k INFO Losses: [2.283729076385498, 2.478341817855835, 11.865056991577148, 18.56964683532715, 1.1079576015472412], step: 150600, lr: 9.739681767887146e-05
2023-03-27 23:52:52,934 44k INFO ====> Epoch: 207, cost 258.40 s
2023-03-27 23:53:36,946 44k INFO Train Epoch: 208 [14%]
2023-03-27 23:53:36,946 44k INFO Losses: [2.468014717102051, 2.1440467834472656, 13.99179744720459, 18.317758560180664, 1.0128626823425293], step: 150800, lr: 9.73846430766616e-05
2023-03-27 23:54:44,228 44k INFO Train Epoch: 208 [42%]
2023-03-27 23:54:44,228 44k INFO Losses: [2.326314687728882, 2.5509400367736816, 13.284374237060547, 17.851470947265625, 0.9738100171089172], step: 151000, lr: 9.73846430766616e-05
2023-03-27 23:54:47,265 44k INFO Saving model and optimizer state at iteration 208 to ./logs\44k\G_151000.pth
2023-03-27 23:54:47,967 44k INFO Saving model and optimizer state at iteration 208 to ./logs\44k\D_151000.pth
2023-03-27 23:54:48,682 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_148000.pth
2023-03-27 23:54:48,725 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_148000.pth
2023-03-27 23:55:56,120 44k INFO Train Epoch: 208 [69%]
2023-03-27 23:55:56,121 44k INFO Losses: [2.5814497470855713, 2.1347274780273438, 9.818440437316895, 10.426322937011719, 0.9300828576087952], step: 151200, lr: 9.73846430766616e-05
2023-03-27 23:57:03,193 44k INFO Train Epoch: 208 [97%]
2023-03-27 23:57:03,193 44k INFO Losses: [2.4104652404785156, 2.3328123092651367, 12.264719009399414, 15.848502159118652, 0.7210817933082581], step: 151400, lr: 9.73846430766616e-05
2023-03-27 23:57:11,224 44k INFO ====> Epoch: 208, cost 258.29 s
2023-03-27 23:58:19,788 44k INFO Train Epoch: 209 [24%]
2023-03-27 23:58:19,788 44k INFO Losses: [2.4045581817626953, 2.4170846939086914, 10.660704612731934, 14.126261711120605, 1.2506674528121948], step: 151600, lr: 9.7372469996277e-05
2023-03-27 23:59:26,483 44k INFO Train Epoch: 209 [52%]
2023-03-27 23:59:26,483 44k INFO Losses: [2.3686234951019287, 2.357727527618408, 15.784347534179688, 18.02891731262207, 1.6830520629882812], step: 151800, lr: 9.7372469996277e-05
2023-03-28 00:00:34,196 44k INFO Train Epoch: 209 [79%]
2023-03-28 00:00:34,196 44k INFO Losses: [2.3907430171966553, 2.573622465133667, 14.375849723815918, 19.472980499267578, 1.5437452793121338], step: 152000, lr: 9.7372469996277e-05
2023-03-28 00:00:37,187 44k INFO Saving model and optimizer state at iteration 209 to ./logs\44k\G_152000.pth
2023-03-28 00:00:37,887 44k INFO Saving model and optimizer state at iteration 209 to ./logs\44k\D_152000.pth
2023-03-28 00:00:38,549 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_149000.pth
2023-03-28 00:00:38,579 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_149000.pth
2023-03-28 00:01:29,184 44k INFO ====> Epoch: 209, cost 257.96 s
2023-03-28 00:01:54,399 44k INFO Train Epoch: 210 [7%]
2023-03-28 00:01:54,400 44k INFO Losses: [2.5155112743377686, 2.4151365756988525, 13.548646926879883, 16.588010787963867, 0.4522778391838074], step: 152200, lr: 9.736029843752747e-05
2023-03-28 00:03:02,164 44k INFO Train Epoch: 210 [34%]
2023-03-28 00:03:02,165 44k INFO Losses: [2.280620813369751, 2.497539520263672, 10.677286148071289, 18.399188995361328, 1.212050199508667], step: 152400, lr: 9.736029843752747e-05
2023-03-28 00:04:09,186 44k INFO Train Epoch: 210 [62%]
2023-03-28 00:04:09,187 44k INFO Losses: [2.4744181632995605, 2.0138535499572754, 11.112613677978516, 14.335602760314941, 0.9818848371505737], step: 152600, lr: 9.736029843752747e-05
2023-03-28 00:05:16,365 44k INFO Train Epoch: 210 [89%]
2023-03-28 00:05:16,365 44k INFO Losses: [2.5711238384246826, 2.0526387691497803, 12.204059600830078, 15.291352272033691, 0.734311580657959], step: 152800, lr: 9.736029843752747e-05
2023-03-28 00:05:43,144 44k INFO ====> Epoch: 210, cost 253.96 s
2023-03-28 00:06:32,616 44k INFO Train Epoch: 211 [16%]
2023-03-28 00:06:32,617 44k INFO Losses: [2.436908721923828, 2.205681085586548, 10.792365074157715, 17.584890365600586, 1.0386147499084473], step: 153000, lr: 9.734812840022278e-05
2023-03-28 00:06:35,523 44k INFO Saving model and optimizer state at iteration 211 to ./logs\44k\G_153000.pth
2023-03-28 00:06:36,271 44k INFO Saving model and optimizer state at iteration 211 to ./logs\44k\D_153000.pth
2023-03-28 00:06:36,941 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_150000.pth
2023-03-28 00:06:36,977 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_150000.pth
2023-03-28 00:07:44,085 44k INFO Train Epoch: 211 [44%]
2023-03-28 00:07:44,086 44k INFO Losses: [2.4976298809051514, 2.408097982406616, 9.864250183105469, 15.682534217834473, 1.1292909383773804], step: 153200, lr: 9.734812840022278e-05
2023-03-28 00:08:51,645 44k INFO Train Epoch: 211 [71%]
2023-03-28 00:08:51,645 44k INFO Losses: [2.351140260696411, 2.521085262298584, 13.289148330688477, 16.05813980102539, 0.8326223492622375], step: 153400, lr: 9.734812840022278e-05
2023-03-28 00:09:58,798 44k INFO Train Epoch: 211 [99%]
2023-03-28 00:09:58,798 44k INFO Losses: [2.3977696895599365, 2.5468673706054688, 10.52647876739502, 13.951887130737305, 0.7874658703804016], step: 153600, lr: 9.734812840022278e-05
2023-03-28 00:10:01,593 44k INFO ====> Epoch: 211, cost 258.45 s
2023-03-28 00:11:15,544 44k INFO Train Epoch: 212 [26%]
2023-03-28 00:11:15,544 44k INFO Losses: [2.2894797325134277, 2.502462387084961, 12.46194076538086, 15.949249267578125, 1.0450725555419922], step: 153800, lr: 9.733595988417275e-05
2023-03-28 00:12:22,553 44k INFO Train Epoch: 212 [54%]
2023-03-28 00:12:22,553 44k INFO Losses: [2.5823259353637695, 2.2345869541168213, 11.868338584899902, 18.998409271240234, 0.778569757938385], step: 154000, lr: 9.733595988417275e-05
2023-03-28 00:12:25,572 44k INFO Saving model and optimizer state at iteration 212 to ./logs\44k\G_154000.pth
2023-03-28 00:12:26,325 44k INFO Saving model and optimizer state at iteration 212 to ./logs\44k\D_154000.pth
2023-03-28 00:12:27,004 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_151000.pth
2023-03-28 00:12:27,041 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_151000.pth
2023-03-28 00:13:34,556 44k INFO Train Epoch: 212 [81%]
2023-03-28 00:13:34,556 44k INFO Losses: [2.33954119682312, 2.4840879440307617, 11.824288368225098, 16.697946548461914, 1.2479517459869385], step: 154200, lr: 9.733595988417275e-05
2023-03-28 00:14:20,023 44k INFO ====> Epoch: 212, cost 258.43 s
2023-03-28 00:14:50,798 44k INFO Train Epoch: 213 [9%]
2023-03-28 00:14:50,798 44k INFO Losses: [2.3250162601470947, 2.2692208290100098, 7.755817413330078, 13.222332954406738, 0.6755272150039673], step: 154400, lr: 9.732379288918723e-05
2023-03-28 00:15:58,674 44k INFO Train Epoch: 213 [36%]
2023-03-28 00:15:58,674 44k INFO Losses: [2.381943464279175, 2.488792896270752, 12.294549942016602, 15.587014198303223, 0.9304174780845642], step: 154600, lr: 9.732379288918723e-05
2023-03-28 00:17:05,734 44k INFO Train Epoch: 213 [64%]
2023-03-28 00:17:05,735 44k INFO Losses: [2.1908175945281982, 2.5602898597717285, 13.113363265991211, 17.293581008911133, 0.9503764510154724], step: 154800, lr: 9.732379288918723e-05
2023-03-28 00:18:13,114 44k INFO Train Epoch: 213 [91%]
2023-03-28 00:18:13,115 44k INFO Losses: [2.5153114795684814, 1.9680471420288086, 12.00586223602295, 13.548628807067871, 1.3579400777816772], step: 155000, lr: 9.732379288918723e-05
2023-03-28 00:18:16,070 44k INFO Saving model and optimizer state at iteration 213 to ./logs\44k\G_155000.pth
2023-03-28 00:18:16,765 44k INFO Saving model and optimizer state at iteration 213 to ./logs\44k\D_155000.pth
2023-03-28 00:18:17,407 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_152000.pth
2023-03-28 00:18:17,439 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_152000.pth
2023-03-28 00:18:38,721 44k INFO ====> Epoch: 213, cost 258.70 s
2023-03-28 00:19:33,756 44k INFO Train Epoch: 214 [19%]
2023-03-28 00:19:33,757 44k INFO Losses: [2.629568099975586, 2.095581531524658, 11.583584785461426, 15.085766792297363, 0.6082473397254944], step: 155200, lr: 9.731162741507607e-05
2023-03-28 00:20:40,879 44k INFO Train Epoch: 214 [46%]
2023-03-28 00:20:40,880 44k INFO Losses: [2.4399778842926025, 2.2290942668914795, 12.975153923034668, 16.977619171142578, 1.158812165260315], step: 155400, lr: 9.731162741507607e-05
2023-03-28 00:21:48,562 44k INFO Train Epoch: 214 [74%]
2023-03-28 00:21:48,563 44k INFO Losses: [2.434842109680176, 2.6532695293426514, 9.544766426086426, 14.982544898986816, 0.9968459606170654], step: 155600, lr: 9.731162741507607e-05
2023-03-28 00:22:52,923 44k INFO ====> Epoch: 214, cost 254.20 s
2023-03-28 00:23:04,983 44k INFO Train Epoch: 215 [1%]
2023-03-28 00:23:04,984 44k INFO Losses: [2.368793487548828, 2.3242812156677246, 13.05549430847168, 18.844221115112305, 0.8095265030860901], step: 155800, lr: 9.729946346164919e-05
2023-03-28 00:24:12,303 44k INFO Train Epoch: 215 [29%]
2023-03-28 00:24:12,303 44k INFO Losses: [2.543311357498169, 2.371553897857666, 9.593096733093262, 17.398452758789062, 1.1716976165771484], step: 156000, lr: 9.729946346164919e-05
2023-03-28 00:24:15,329 44k INFO Saving model and optimizer state at iteration 215 to ./logs\44k\G_156000.pth
2023-03-28 00:24:16,020 44k INFO Saving model and optimizer state at iteration 215 to ./logs\44k\D_156000.pth
2023-03-28 00:24:16,697 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_153000.pth
2023-03-28 00:24:16,737 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_153000.pth
2023-03-28 00:25:23,678 44k INFO Train Epoch: 215 [56%]
2023-03-28 00:25:23,678 44k INFO Losses: [2.524212598800659, 2.336064577102661, 9.742196083068848, 14.131464958190918, 1.0461419820785522], step: 156200, lr: 9.729946346164919e-05
2023-03-28 00:26:30,939 44k INFO Train Epoch: 215 [84%]
2023-03-28 00:26:30,939 44k INFO Losses: [2.7559475898742676, 2.104506015777588, 4.1693034172058105, 8.035669326782227, 1.1267285346984863], step: 156400, lr: 9.729946346164919e-05
2023-03-28 00:27:10,939 44k INFO ====> Epoch: 215, cost 258.02 s
2023-03-28 00:27:46,900 44k INFO Train Epoch: 216 [11%]
2023-03-28 00:27:46,901 44k INFO Losses: [2.348644733428955, 2.2504332065582275, 14.190638542175293, 17.92361068725586, 1.2300751209259033], step: 156600, lr: 9.728730102871649e-05
2023-03-28 00:28:54,323 44k INFO Train Epoch: 216 [38%]
2023-03-28 00:28:54,324 44k INFO Losses: [2.2533490657806396, 2.3713786602020264, 11.136995315551758, 18.18634605407715, 0.54134601354599], step: 156800, lr: 9.728730102871649e-05
2023-03-28 00:30:01,293 44k INFO Train Epoch: 216 [66%]
2023-03-28 00:30:01,293 44k INFO Losses: [2.7698559761047363, 2.2357373237609863, 8.065740585327148, 13.901379585266113, 0.8024920225143433], step: 157000, lr: 9.728730102871649e-05
2023-03-28 00:30:04,385 44k INFO Saving model and optimizer state at iteration 216 to ./logs\44k\G_157000.pth
2023-03-28 00:30:05,142 44k INFO Saving model and optimizer state at iteration 216 to ./logs\44k\D_157000.pth
2023-03-28 00:30:05,811 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_154000.pth
2023-03-28 00:30:05,852 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_154000.pth
2023-03-28 00:31:12,907 44k INFO Train Epoch: 216 [93%]
2023-03-28 00:31:12,908 44k INFO Losses: [2.3941574096679688, 2.393864393234253, 13.235722541809082, 16.48375129699707, 0.8456860184669495], step: 157200, lr: 9.728730102871649e-05
2023-03-28 00:31:28,826 44k INFO ====> Epoch: 216, cost 257.89 s
2023-03-28 00:32:29,208 44k INFO Train Epoch: 217 [21%]
2023-03-28 00:32:29,209 44k INFO Losses: [2.344559907913208, 2.2787160873413086, 10.095307350158691, 18.83137321472168, 1.2564560174942017], step: 157400, lr: 9.727514011608789e-05
2023-03-28 00:33:35,702 44k INFO Train Epoch: 217 [48%]
2023-03-28 00:33:35,702 44k INFO Losses: [2.364703893661499, 2.4957683086395264, 11.603338241577148, 16.846017837524414, 0.936181902885437], step: 157600, lr: 9.727514011608789e-05
2023-03-28 00:34:43,218 44k INFO Train Epoch: 217 [76%]
2023-03-28 00:34:43,218 44k INFO Losses: [2.341438055038452, 2.6017568111419678, 12.609556198120117, 18.84037971496582, 0.8819213509559631], step: 157800, lr: 9.727514011608789e-05
2023-03-28 00:35:41,944 44k INFO ====> Epoch: 217, cost 253.12 s
2023-03-28 00:35:59,332 44k INFO Train Epoch: 218 [3%]
2023-03-28 00:35:59,332 44k INFO Losses: [2.4556632041931152, 1.9676597118377686, 10.612035751342773, 16.09865951538086, 1.183340311050415], step: 158000, lr: 9.726298072357337e-05
2023-03-28 00:36:02,316 44k INFO Saving model and optimizer state at iteration 218 to ./logs\44k\G_158000.pth
2023-03-28 00:36:03,064 44k INFO Saving model and optimizer state at iteration 218 to ./logs\44k\D_158000.pth
2023-03-28 00:36:03,741 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_155000.pth
2023-03-28 00:36:03,784 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_155000.pth
2023-03-28 00:37:11,136 44k INFO Train Epoch: 218 [31%]
2023-03-28 00:37:11,136 44k INFO Losses: [2.404050588607788, 2.268989324569702, 13.041009902954102, 17.594804763793945, 0.8388211727142334], step: 158200, lr: 9.726298072357337e-05
2023-03-28 00:38:18,363 44k INFO Train Epoch: 218 [58%]
2023-03-28 00:38:18,363 44k INFO Losses: [2.3283474445343018, 2.278191328048706, 14.81308650970459, 18.552318572998047, 1.325757384300232], step: 158400, lr: 9.726298072357337e-05
2023-03-28 00:39:25,714 44k INFO Train Epoch: 218 [86%]
2023-03-28 00:39:25,715 44k INFO Losses: [2.2847518920898438, 2.481142282485962, 17.1648006439209, 19.261241912841797, 1.3317395448684692], step: 158600, lr: 9.726298072357337e-05
2023-03-28 00:40:00,268 44k INFO ====> Epoch: 218, cost 258.32 s
2023-03-28 00:40:41,653 44k INFO Train Epoch: 219 [13%]
2023-03-28 00:40:41,653 44k INFO Losses: [2.369666814804077, 2.381175994873047, 12.058052062988281, 16.264074325561523, 1.1925729513168335], step: 158800, lr: 9.725082285098293e-05
2023-03-28 00:41:48,903 44k INFO Train Epoch: 219 [41%]
2023-03-28 00:41:48,904 44k INFO Losses: [2.5712814331054688, 2.6317825317382812, 12.461825370788574, 17.924219131469727, 1.3118122816085815], step: 159000, lr: 9.725082285098293e-05
2023-03-28 00:41:51,936 44k INFO Saving model and optimizer state at iteration 219 to ./logs\44k\G_159000.pth
2023-03-28 00:41:52,681 44k INFO Saving model and optimizer state at iteration 219 to ./logs\44k\D_159000.pth
2023-03-28 00:41:53,363 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_156000.pth
2023-03-28 00:41:53,392 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_156000.pth
2023-03-28 00:43:00,568 44k INFO Train Epoch: 219 [68%]
2023-03-28 00:43:00,568 44k INFO Losses: [2.427757501602173, 2.3118865489959717, 12.655099868774414, 16.06748390197754, 0.9897794723510742], step: 159200, lr: 9.725082285098293e-05
2023-03-28 00:44:07,688 44k INFO Train Epoch: 219 [96%]
2023-03-28 00:44:07,688 44k INFO Losses: [2.2903361320495605, 3.023547887802124, 6.04966926574707, 8.058762550354004, 0.5347076654434204], step: 159400, lr: 9.725082285098293e-05
2023-03-28 00:44:18,268 44k INFO ====> Epoch: 219, cost 258.00 s
2023-03-28 00:45:24,137 44k INFO Train Epoch: 220 [23%]
2023-03-28 00:45:24,137 44k INFO Losses: [2.20373272895813, 2.6352672576904297, 13.357023239135742, 17.071640014648438, 0.30776938796043396], step: 159600, lr: 9.723866649812655e-05
2023-03-28 00:46:30,782 44k INFO Train Epoch: 220 [51%]
2023-03-28 00:46:30,782 44k INFO Losses: [2.3584508895874023, 2.2395007610321045, 9.70170783996582, 14.314813613891602, 0.9075280427932739], step: 159800, lr: 9.723866649812655e-05
2023-03-28 00:47:38,322 44k INFO Train Epoch: 220 [78%]
2023-03-28 00:47:38,323 44k INFO Losses: [2.309356927871704, 2.3737082481384277, 11.236969947814941, 15.833280563354492, 1.0339505672454834], step: 160000, lr: 9.723866649812655e-05
2023-03-28 00:47:41,290 44k INFO Saving model and optimizer state at iteration 220 to ./logs\44k\G_160000.pth
2023-03-28 00:47:42,004 44k INFO Saving model and optimizer state at iteration 220 to ./logs\44k\D_160000.pth
2023-03-28 00:47:42,693 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_157000.pth
2023-03-28 00:47:42,722 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_157000.pth
2023-03-28 00:48:36,214 44k INFO ====> Epoch: 220, cost 257.95 s
2023-03-28 00:48:58,961 44k INFO Train Epoch: 221 [5%]
2023-03-28 00:48:58,961 44k INFO Losses: [2.329592704772949, 2.454169511795044, 11.93664264678955, 15.903014183044434, 1.2026816606521606], step: 160200, lr: 9.722651166481428e-05
2023-03-28 00:50:06,563 44k INFO Train Epoch: 221 [33%]
2023-03-28 00:50:06,563 44k INFO Losses: [2.616584300994873, 2.3818418979644775, 10.217843055725098, 14.285658836364746, 1.3114185333251953], step: 160400, lr: 9.722651166481428e-05
2023-03-28 00:51:13,592 44k INFO Train Epoch: 221 [60%]
2023-03-28 00:51:13,592 44k INFO Losses: [2.2732560634613037, 2.172494649887085, 13.437975883483887, 18.305326461791992, 0.4106939435005188], step: 160600, lr: 9.722651166481428e-05
2023-03-28 00:52:20,824 44k INFO Train Epoch: 221 [88%]
2023-03-28 00:52:20,825 44k INFO Losses: [2.39888596534729, 2.4117183685302734, 13.914600372314453, 18.24996566772461, 1.2198164463043213], step: 160800, lr: 9.722651166481428e-05
2023-03-28 00:52:50,318 44k INFO ====> Epoch: 221, cost 254.10 s
2023-03-28 00:53:37,336 44k INFO Train Epoch: 222 [15%]
2023-03-28 00:53:37,336 44k INFO Losses: [2.3303062915802, 2.6669375896453857, 9.278690338134766, 13.538252830505371, 0.6540575623512268], step: 161000, lr: 9.721435835085619e-05
2023-03-28 00:53:40,340 44k INFO Saving model and optimizer state at iteration 222 to ./logs\44k\G_161000.pth
2023-03-28 00:53:41,042 44k INFO Saving model and optimizer state at iteration 222 to ./logs\44k\D_161000.pth
2023-03-28 00:53:41,717 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_158000.pth
2023-03-28 00:53:41,746 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_158000.pth
2023-03-28 00:54:48,855 44k INFO Train Epoch: 222 [43%]
2023-03-28 00:54:48,856 44k INFO Losses: [2.2848379611968994, 2.582167863845825, 16.745500564575195, 18.650951385498047, 0.9852872490882874], step: 161200, lr: 9.721435835085619e-05
2023-03-28 00:55:56,284 44k INFO Train Epoch: 222 [70%]
2023-03-28 00:55:56,284 44k INFO Losses: [2.3918590545654297, 2.195383071899414, 10.662549018859863, 18.152263641357422, 0.8566103577613831], step: 161400, lr: 9.721435835085619e-05
2023-03-28 00:57:03,323 44k INFO Train Epoch: 222 [98%]
2023-03-28 00:57:03,324 44k INFO Losses: [2.6102283000946045, 2.0714316368103027, 6.696726322174072, 11.734916687011719, 1.3753793239593506], step: 161600, lr: 9.721435835085619e-05
2023-03-28 00:57:08,775 44k INFO ====> Epoch: 222, cost 258.46 s
2023-03-28 00:58:20,064 44k INFO Train Epoch: 223 [25%]
2023-03-28 00:58:20,065 44k INFO Losses: [2.5618059635162354, 2.335240602493286, 9.121556282043457, 12.973976135253906, 0.9503701329231262], step: 161800, lr: 9.720220655606233e-05
2023-03-28 00:59:26,944 44k INFO Train Epoch: 223 [53%]
2023-03-28 00:59:26,944 44k INFO Losses: [2.4499611854553223, 2.2927374839782715, 11.355008125305176, 15.53852367401123, 1.0645198822021484], step: 162000, lr: 9.720220655606233e-05
2023-03-28 00:59:29,986 44k INFO Saving model and optimizer state at iteration 223 to ./logs\44k\G_162000.pth
2023-03-28 00:59:30,692 44k INFO Saving model and optimizer state at iteration 223 to ./logs\44k\D_162000.pth
2023-03-28 00:59:31,382 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_159000.pth
2023-03-28 00:59:31,412 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_159000.pth
2023-03-28 01:00:38,953 44k INFO Train Epoch: 223 [80%]
2023-03-28 01:00:38,954 44k INFO Losses: [2.5092825889587402, 2.440598726272583, 9.557066917419434, 16.368379592895508, 0.723616898059845], step: 162200, lr: 9.720220655606233e-05
2023-03-28 01:01:27,236 44k INFO ====> Epoch: 223, cost 258.46 s
2023-03-28 01:01:55,254 44k INFO Train Epoch: 224 [8%]
2023-03-28 01:01:55,254 44k INFO Losses: [2.4408698081970215, 2.3408288955688477, 11.456347465515137, 17.92219352722168, 1.0851608514785767], step: 162400, lr: 9.719005628024282e-05
2023-03-28 01:03:02,971 44k INFO Train Epoch: 224 [35%]
2023-03-28 01:03:02,972 44k INFO Losses: [2.383344888687134, 2.3278543949127197, 11.997212409973145, 15.540338516235352, 0.8255746364593506], step: 162600, lr: 9.719005628024282e-05
2023-03-28 01:04:10,149 44k INFO Train Epoch: 224 [63%]
2023-03-28 01:04:10,149 44k INFO Losses: [2.558523654937744, 2.448843479156494, 11.732719421386719, 15.975915908813477, 0.7894180417060852], step: 162800, lr: 9.719005628024282e-05
2023-03-28 01:05:17,439 44k INFO Train Epoch: 224 [90%]
2023-03-28 01:05:17,439 44k INFO Losses: [2.5776212215423584, 2.4149699211120605, 10.464372634887695, 19.283185958862305, 1.2830396890640259], step: 163000, lr: 9.719005628024282e-05
2023-03-28 01:05:20,467 44k INFO Saving model and optimizer state at iteration 224 to ./logs\44k\G_163000.pth
2023-03-28 01:05:21,174 44k INFO Saving model and optimizer state at iteration 224 to ./logs\44k\D_163000.pth
2023-03-28 01:05:21,845 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_160000.pth
2023-03-28 01:05:21,873 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_160000.pth
2023-03-28 01:05:45,970 44k INFO ====> Epoch: 224, cost 258.73 s
2023-03-28 01:06:38,386 44k INFO Train Epoch: 225 [18%]
2023-03-28 01:06:38,387 44k INFO Losses: [2.3843815326690674, 2.436028003692627, 10.251869201660156, 15.770206451416016, 0.8364884257316589], step: 163200, lr: 9.717790752320778e-05
2023-03-28 01:07:45,559 44k INFO Train Epoch: 225 [45%]
2023-03-28 01:07:45,560 44k INFO Losses: [2.4193685054779053, 2.5855588912963867, 8.926447868347168, 12.53689956665039, 0.9551175832748413], step: 163400, lr: 9.717790752320778e-05
2023-03-28 01:08:53,203 44k INFO Train Epoch: 225 [73%]
2023-03-28 01:08:53,203 44k INFO Losses: [2.11983323097229, 2.6021289825439453, 14.845989227294922, 17.863162994384766, 0.4296134412288666], step: 163600, lr: 9.717790752320778e-05
2023-03-28 01:10:00,343 44k INFO ====> Epoch: 225, cost 254.37 s
2023-03-28 01:10:09,480 44k INFO Train Epoch: 226 [0%]
2023-03-28 01:10:09,480 44k INFO Losses: [2.750969886779785, 2.261580467224121, 9.057339668273926, 14.535750389099121, 1.5136398077011108], step: 163800, lr: 9.716576028476738e-05
2023-03-28 01:11:17,142 44k INFO Train Epoch: 226 [27%]
2023-03-28 01:11:17,143 44k INFO Losses: [2.5343496799468994, 2.507807970046997, 10.452397346496582, 16.277629852294922, 0.692219078540802], step: 164000, lr: 9.716576028476738e-05
2023-03-28 01:11:20,175 44k INFO Saving model and optimizer state at iteration 226 to ./logs\44k\G_164000.pth
2023-03-28 01:11:20,877 44k INFO Saving model and optimizer state at iteration 226 to ./logs\44k\D_164000.pth
2023-03-28 01:11:21,604 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_161000.pth
2023-03-28 01:11:21,633 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_161000.pth
2023-03-28 01:12:28,509 44k INFO Train Epoch: 226 [55%]
2023-03-28 01:12:28,510 44k INFO Losses: [2.6553471088409424, 2.236541748046875, 11.8535737991333, 15.369684219360352, 1.3894336223602295], step: 164200, lr: 9.716576028476738e-05
2023-03-28 01:13:36,039 44k INFO Train Epoch: 226 [82%]
2023-03-28 01:13:36,039 44k INFO Losses: [2.646812915802002, 1.9501280784606934, 10.896429061889648, 14.095290184020996, 0.587001383304596], step: 164400, lr: 9.716576028476738e-05
2023-03-28 01:14:18,853 44k INFO ====> Epoch: 226, cost 258.51 s
2023-03-28 01:14:52,372 44k INFO Train Epoch: 227 [10%]
2023-03-28 01:14:52,373 44k INFO Losses: [2.1730074882507324, 2.5681936740875244, 15.240942001342773, 18.352190017700195, 1.057045340538025], step: 164600, lr: 9.715361456473177e-05
2023-03-28 01:16:00,382 44k INFO Train Epoch: 227 [37%]
2023-03-28 01:16:00,383 44k INFO Losses: [2.7293288707733154, 2.209444284439087, 8.631799697875977, 12.688267707824707, 0.9987932443618774], step: 164800, lr: 9.715361456473177e-05
2023-03-28 01:17:08,036 44k INFO Train Epoch: 227 [65%]
2023-03-28 01:17:08,036 44k INFO Losses: [2.6139163970947266, 2.301684856414795, 11.000359535217285, 15.449405670166016, 1.2748652696609497], step: 165000, lr: 9.715361456473177e-05
2023-03-28 01:17:11,047 44k INFO Saving model and optimizer state at iteration 227 to ./logs\44k\G_165000.pth
2023-03-28 01:17:11,764 44k INFO Saving model and optimizer state at iteration 227 to ./logs\44k\D_165000.pth
2023-03-28 01:17:12,439 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_162000.pth
2023-03-28 01:17:12,469 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_162000.pth
2023-03-28 01:18:20,245 44k INFO Train Epoch: 227 [92%]
2023-03-28 01:18:20,246 44k INFO Losses: [2.4463958740234375, 2.4713382720947266, 15.482124328613281, 18.91501235961914, 0.7822593450546265], step: 165200, lr: 9.715361456473177e-05
2023-03-28 01:18:38,932 44k INFO ====> Epoch: 227, cost 260.08 s
2023-03-28 01:19:36,976 44k INFO Train Epoch: 228 [20%]
2023-03-28 01:19:36,976 44k INFO Losses: [2.4857373237609863, 2.0918030738830566, 8.780402183532715, 11.250349998474121, 1.1519620418548584], step: 165400, lr: 9.714147036291117e-05
2023-03-28 01:20:44,429 44k INFO Train Epoch: 228 [47%]
2023-03-28 01:20:44,430 44k INFO Losses: [2.4518187046051025, 2.204535961151123, 13.782402992248535, 16.593149185180664, 1.0346651077270508], step: 165600, lr: 9.714147036291117e-05
2023-03-28 01:21:52,421 44k INFO Train Epoch: 228 [75%]
2023-03-28 01:21:52,421 44k INFO Losses: [2.5869665145874023, 2.2857418060302734, 9.786745071411133, 12.892786026000977, 0.9085033535957336], step: 165800, lr: 9.714147036291117e-05
2023-03-28 01:22:54,539 44k INFO ====> Epoch: 228, cost 255.61 s
2023-03-28 01:23:09,093 44k INFO Train Epoch: 229 [2%]
2023-03-28 01:23:09,093 44k INFO Losses: [2.3838205337524414, 2.1102871894836426, 11.411391258239746, 14.934593200683594, 0.4242997467517853], step: 166000, lr: 9.71293276791158e-05
2023-03-28 01:23:12,087 44k INFO Saving model and optimizer state at iteration 229 to ./logs\44k\G_166000.pth
2023-03-28 01:23:12,787 44k INFO Saving model and optimizer state at iteration 229 to ./logs\44k\D_166000.pth
2023-03-28 01:23:13,464 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_163000.pth
2023-03-28 01:23:13,493 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_163000.pth
2023-03-28 01:24:21,418 44k INFO Train Epoch: 229 [30%]
2023-03-28 01:24:21,419 44k INFO Losses: [2.5651891231536865, 2.6353402137756348, 15.301815032958984, 17.11634063720703, 1.1759192943572998], step: 166200, lr: 9.71293276791158e-05
2023-03-28 01:25:29,094 44k INFO Train Epoch: 229 [57%]
2023-03-28 01:25:29,095 44k INFO Losses: [2.4334962368011475, 2.421905040740967, 13.740138053894043, 19.194475173950195, 0.7933359146118164], step: 166400, lr: 9.71293276791158e-05
2023-03-28 01:26:36,995 44k INFO Train Epoch: 229 [85%]
2023-03-28 01:26:36,996 44k INFO Losses: [2.5449371337890625, 2.3645482063293457, 9.298897743225098, 14.968024253845215, 0.9615478515625], step: 166600, lr: 9.71293276791158e-05
2023-03-28 01:27:14,678 44k INFO ====> Epoch: 229, cost 260.14 s
2023-03-28 01:27:53,525 44k INFO Train Epoch: 230 [12%]
2023-03-28 01:27:53,525 44k INFO Losses: [2.3510799407958984, 2.3402068614959717, 10.422197341918945, 16.359634399414062, 0.4248294234275818], step: 166800, lr: 9.711718651315591e-05
2023-03-28 01:29:01,615 44k INFO Train Epoch: 230 [40%]
2023-03-28 01:29:01,616 44k INFO Losses: [2.3500654697418213, 2.3212685585021973, 11.898679733276367, 15.244062423706055, 1.6114964485168457], step: 167000, lr: 9.711718651315591e-05
2023-03-28 01:29:04,614 44k INFO Saving model and optimizer state at iteration 230 to ./logs\44k\G_167000.pth
2023-03-28 01:29:05,313 44k INFO Saving model and optimizer state at iteration 230 to ./logs\44k\D_167000.pth
2023-03-28 01:29:05,997 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_164000.pth
2023-03-28 01:29:06,027 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_164000.pth
2023-03-28 01:30:13,788 44k INFO Train Epoch: 230 [67%]
2023-03-28 01:30:13,788 44k INFO Losses: [2.795795440673828, 2.0109269618988037, 8.394283294677734, 14.799938201904297, 1.2923723459243774], step: 167200, lr: 9.711718651315591e-05
2023-03-28 01:31:21,781 44k INFO Train Epoch: 230 [95%]
2023-03-28 01:31:21,781 44k INFO Losses: [2.4127655029296875, 2.1294593811035156, 11.725732803344727, 12.848069190979004, 1.1961135864257812], step: 167400, lr: 9.711718651315591e-05
2023-03-28 01:31:35,228 44k INFO ====> Epoch: 230, cost 260.55 s
2023-03-28 01:32:38,811 44k INFO Train Epoch: 231 [22%]
2023-03-28 01:32:38,811 44k INFO Losses: [2.5984325408935547, 2.19071102142334, 9.819446563720703, 15.007372856140137, 0.6679907441139221], step: 167600, lr: 9.710504686484176e-05
2023-03-28 01:33:46,266 44k INFO Train Epoch: 231 [49%]
2023-03-28 01:33:46,266 44k INFO Losses: [2.5112931728363037, 2.468823194503784, 11.944804191589355, 13.264846801757812, 0.9854975342750549], step: 167800, lr: 9.710504686484176e-05
2023-03-28 01:34:54,451 44k INFO Train Epoch: 231 [77%]
2023-03-28 01:34:54,452 44k INFO Losses: [2.7259833812713623, 2.2438769340515137, 6.057558536529541, 10.248835563659668, 1.2538334131240845], step: 168000, lr: 9.710504686484176e-05
2023-03-28 01:34:57,491 44k INFO Saving model and optimizer state at iteration 231 to ./logs\44k\G_168000.pth
2023-03-28 01:34:58,187 44k INFO Saving model and optimizer state at iteration 231 to ./logs\44k\D_168000.pth
2023-03-28 01:34:58,863 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_165000.pth
2023-03-28 01:34:58,905 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_165000.pth
2023-03-28 01:35:55,554 44k INFO ====> Epoch: 231, cost 260.33 s
2023-03-28 01:36:15,450 44k INFO Train Epoch: 232 [4%]
2023-03-28 01:36:15,450 44k INFO Losses: [2.3262534141540527, 2.1276607513427734, 12.804216384887695, 13.85434341430664, 1.1956967115402222], step: 168200, lr: 9.709290873398365e-05
2023-03-28 01:37:23,675 44k INFO Train Epoch: 232 [32%]
2023-03-28 01:37:23,676 44k INFO Losses: [2.325432538986206, 2.603527545928955, 12.823297500610352, 18.991535186767578, 1.2462177276611328], step: 168400, lr: 9.709290873398365e-05
2023-03-28 01:38:31,557 44k INFO Train Epoch: 232 [59%]
2023-03-28 01:38:31,558 44k INFO Losses: [2.4294495582580566, 2.308703899383545, 14.234713554382324, 17.02052879333496, 1.0108951330184937], step: 168600, lr: 9.709290873398365e-05
2023-03-28 01:39:39,519 44k INFO Train Epoch: 232 [87%]
2023-03-28 01:39:39,519 44k INFO Losses: [2.4532570838928223, 2.2687652111053467, 11.606612205505371, 16.11750602722168, 1.4106141328811646], step: 168800, lr: 9.709290873398365e-05
2023-03-28 01:40:11,870 44k INFO ====> Epoch: 232, cost 256.32 s
2023-03-28 01:40:56,195 44k INFO Train Epoch: 233 [14%]
2023-03-28 01:40:56,196 44k INFO Losses: [2.650139093399048, 2.0096280574798584, 12.028899192810059, 15.328922271728516, 0.9565348029136658], step: 169000, lr: 9.70807721203919e-05
2023-03-28 01:40:59,201 44k INFO Saving model and optimizer state at iteration 233 to ./logs\44k\G_169000.pth
2023-03-28 01:40:59,902 44k INFO Saving model and optimizer state at iteration 233 to ./logs\44k\D_169000.pth
2023-03-28 01:41:00,583 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_166000.pth
2023-03-28 01:41:00,618 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_166000.pth 2023-03-28 01:42:08,453 44k INFO Train Epoch: 233 [42%] 2023-03-28 01:42:08,453 44k INFO Losses: [2.5723979473114014, 2.3011233806610107, 11.306856155395508, 18.01384925842285, 0.815255880355835], step: 169200, lr: 9.70807721203919e-05 2023-03-28 01:43:16,757 44k INFO Train Epoch: 233 [69%] 2023-03-28 01:43:16,757 44k INFO Losses: [2.127253532409668, 2.442934036254883, 17.28144073486328, 15.870183944702148, 1.123315453529358], step: 169400, lr: 9.70807721203919e-05 2023-03-28 01:44:24,705 44k INFO Train Epoch: 233 [97%] 2023-03-28 01:44:24,706 44k INFO Losses: [2.3185038566589355, 2.270463705062866, 10.437347412109375, 14.862586975097656, 0.6049783825874329], step: 169600, lr: 9.70807721203919e-05 2023-03-28 01:44:32,788 44k INFO ====> Epoch: 233, cost 260.92 s 2023-03-28 01:45:41,913 44k INFO Train Epoch: 234 [24%] 2023-03-28 01:45:41,914 44k INFO Losses: [2.588482141494751, 2.105128049850464, 7.7143073081970215, 13.956334114074707, 1.1278889179229736], step: 169800, lr: 9.706863702387684e-05 2023-03-28 01:46:49,447 44k INFO Train Epoch: 234 [52%] 2023-03-28 01:46:49,447 44k INFO Losses: [2.3480560779571533, 2.478860855102539, 13.952293395996094, 15.32504653930664, 1.1787818670272827], step: 170000, lr: 9.706863702387684e-05 2023-03-28 01:46:52,501 44k INFO Saving model and optimizer state at iteration 234 to ./logs\44k\G_170000.pth 2023-03-28 01:46:53,203 44k INFO Saving model and optimizer state at iteration 234 to ./logs\44k\D_170000.pth 2023-03-28 01:46:53,867 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_167000.pth 2023-03-28 01:46:53,909 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_167000.pth 2023-03-28 01:48:02,114 44k INFO Train Epoch: 234 [79%] 2023-03-28 01:48:02,114 44k INFO Losses: [2.439085006713867, 2.5748839378356934, 13.857667922973633, 16.593141555786133, 1.3403735160827637], step: 170200, lr: 9.706863702387684e-05 2023-03-28 01:48:53,566 44k INFO ====> Epoch: 234, cost 260.78 s 2023-03-28 01:49:18,906 44k INFO Train Epoch: 235 [7%] 2023-03-28 01:49:18,907 44k INFO Losses: [2.4385626316070557, 2.3224399089813232, 13.635377883911133, 17.815792083740234, 0.8769676089286804], step: 170400, lr: 9.705650344424885e-05 2023-03-28 01:50:27,205 44k INFO Train Epoch: 235 [34%] 2023-03-28 01:50:27,205 44k INFO Losses: [2.5705974102020264, 1.953465223312378, 10.081927299499512, 15.94638729095459, 0.6853747367858887], step: 170600, lr: 9.705650344424885e-05 2023-03-28 01:51:35,094 44k INFO Train Epoch: 235 [62%] 2023-03-28 01:51:35,095 44k INFO Losses: [2.384981632232666, 2.377631425857544, 10.50052261352539, 16.276710510253906, 1.2457606792449951], step: 170800, lr: 9.705650344424885e-05 2023-03-28 01:52:43,052 44k INFO Train Epoch: 235 [89%] 2023-03-28 01:52:43,053 44k INFO Losses: [2.7615199089050293, 1.9998562335968018, 11.891149520874023, 15.496355056762695, 1.070400595664978], step: 171000, lr: 9.705650344424885e-05 2023-03-28 01:52:45,989 44k INFO Saving model and optimizer state at iteration 235 to ./logs\44k\G_171000.pth 2023-03-28 01:52:46,734 44k INFO Saving model and optimizer state at iteration 235 to ./logs\44k\D_171000.pth 2023-03-28 01:52:47,396 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_168000.pth 2023-03-28 01:52:47,435 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_168000.pth 2023-03-28 01:53:14,461 44k INFO ====> Epoch: 235, cost 260.89 s 2023-03-28 01:54:04,265 44k INFO Train Epoch: 236 [16%] 2023-03-28 01:54:04,265 44k INFO Losses: [2.5054261684417725, 2.155773162841797, 12.66735553741455, 15.441973686218262, 0.9911179542541504], step: 171200, lr: 9.704437138131832e-05 2023-03-28 01:55:12,103 44k INFO Train Epoch: 236 [44%] 2023-03-28 01:55:12,103 44k INFO Losses: [2.66507887840271, 2.1371867656707764, 8.756752014160156, 12.630935668945312, 0.9152307510375977], step: 171400, lr: 9.704437138131832e-05 2023-03-28 01:56:20,274 44k INFO Train Epoch: 236 [71%] 2023-03-28 01:56:20,274 44k INFO Losses: [2.501739501953125, 2.355682134628296, 13.230114936828613, 12.40927505493164, 0.9849969148635864], step: 171600, lr: 9.704437138131832e-05 2023-03-28 01:57:28,226 44k INFO Train Epoch: 236 [99%] 2023-03-28 01:57:28,227 44k INFO Losses: [2.4024267196655273, 2.4215617179870605, 9.345523834228516, 13.771957397460938, 1.135597825050354], step: 171800, lr: 9.704437138131832e-05 2023-03-28 01:57:31,050 44k INFO ====> Epoch: 236, cost 256.59 s 2023-03-28 01:58:45,695 44k INFO Train Epoch: 237 [26%] 2023-03-28 01:58:45,695 44k INFO Losses: [2.608912944793701, 2.017448902130127, 7.927137851715088, 14.989805221557617, 0.8653097152709961], step: 172000, lr: 9.703224083489565e-05 2023-03-28 01:58:48,636 44k INFO Saving model and optimizer state at iteration 237 to ./logs\44k\G_172000.pth 2023-03-28 01:58:49,386 44k INFO Saving model and optimizer state at iteration 237 to ./logs\44k\D_172000.pth 2023-03-28 01:58:50,111 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_169000.pth 2023-03-28 01:58:50,146 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_169000.pth 2023-03-28 01:59:57,742 44k INFO Train Epoch: 237 [54%] 2023-03-28 01:59:57,742 44k INFO Losses: [2.4390504360198975, 2.1808085441589355, 9.674973487854004, 11.567317008972168, 0.9530776739120483], step: 172200, lr: 9.703224083489565e-05 2023-03-28 02:01:06,031 44k INFO Train Epoch: 237 [81%] 2023-03-28 02:01:06,031 44k INFO Losses: [2.4667210578918457, 2.5819787979125977, 8.458169937133789, 13.712631225585938, 1.045716404914856], step: 172400, lr: 9.703224083489565e-05 2023-03-28 02:01:52,090 44k INFO ====> Epoch: 237, cost 261.04 s 2023-03-28 02:02:22,829 44k INFO Train Epoch: 238 [9%] 2023-03-28 02:02:22,829 44k INFO Losses: [2.3949992656707764, 2.310032844543457, 12.673617362976074, 17.777376174926758, 1.0064042806625366], step: 172600, lr: 9.702011180479129e-05 2023-03-28 02:03:31,388 44k INFO Train Epoch: 238 [36%] 2023-03-28 02:03:31,388 44k INFO Losses: [2.442535638809204, 2.143587589263916, 11.860711097717285, 14.186671257019043, 0.7503846287727356], step: 172800, lr: 9.702011180479129e-05 2023-03-28 02:04:39,238 44k INFO Train Epoch: 238 [64%] 2023-03-28 02:04:39,238 44k INFO Losses: [2.3349642753601074, 2.538036346435547, 12.024328231811523, 18.215072631835938, 0.6811875104904175], step: 173000, lr: 9.702011180479129e-05 2023-03-28 02:04:42,248 44k INFO Saving model and optimizer state at iteration 238 to ./logs\44k\G_173000.pth 2023-03-28 02:04:42,998 44k INFO Saving model and optimizer state at iteration 238 to ./logs\44k\D_173000.pth 2023-03-28 02:04:43,669 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_170000.pth 2023-03-28 02:04:43,713 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_170000.pth 2023-03-28 02:05:51,797 44k INFO Train Epoch: 238 [91%] 2023-03-28 02:05:51,797 44k INFO Losses: [2.7827212810516357, 2.021937131881714, 4.775430679321289, 11.314640045166016, 1.1536381244659424], step: 173200, lr: 9.702011180479129e-05 2023-03-28 02:06:13,417 44k INFO ====> Epoch: 238, cost 261.33 s 2023-03-28 02:07:09,029 44k INFO Train Epoch: 239 [19%] 2023-03-28 02:07:09,030 44k INFO Losses: [2.5268783569335938, 2.2764036655426025, 12.748993873596191, 15.05023193359375, 0.9533264636993408], step: 173400, lr: 9.700798429081568e-05 2023-03-28 02:08:16,863 44k INFO Train Epoch: 239 [46%] 2023-03-28 02:08:16,863 44k INFO Losses: [2.549506664276123, 2.5692663192749023, 11.552533149719238, 17.995365142822266, 0.5189872980117798], step: 173600, lr: 9.700798429081568e-05 2023-03-28 02:09:25,167 44k INFO Train Epoch: 239 [74%] 2023-03-28 02:09:25,167 44k INFO Losses: [2.2786307334899902, 2.6002306938171387, 11.224221229553223, 15.06608772277832, 1.202688217163086], step: 173800, lr: 9.700798429081568e-05 2023-03-28 02:10:30,368 44k INFO ====> Epoch: 239, cost 256.95 s 2023-03-28 02:10:42,233 44k INFO Train Epoch: 240 [1%] 2023-03-28 02:10:42,233 44k INFO Losses: [2.454895496368408, 2.2620933055877686, 13.381155014038086, 17.198457717895508, 1.075763463973999], step: 174000, lr: 9.699585829277933e-05 2023-03-28 02:10:45,171 44k INFO Saving model and optimizer state at iteration 240 to ./logs\44k\G_174000.pth 2023-03-28 02:10:45,931 44k INFO Saving model and optimizer state at iteration 240 to ./logs\44k\D_174000.pth 2023-03-28 02:10:46,596 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_171000.pth 2023-03-28 02:10:46,629 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_171000.pth 2023-03-28 02:11:55,001 44k INFO Train Epoch: 240 [29%] 2023-03-28 02:11:55,002 44k INFO Losses: [2.421445846557617, 2.789440393447876, 9.925230026245117, 15.033824920654297, 0.8320799469947815], step: 174200, lr: 9.699585829277933e-05 2023-03-28 02:13:03,230 44k INFO Train Epoch: 240 [56%] 2023-03-28 02:13:03,230 44k INFO Losses: [2.4149134159088135, 2.2284510135650635, 10.83193588256836, 12.746230125427246, 0.8141133189201355], step: 174400, lr: 9.699585829277933e-05 2023-03-28 02:14:11,368 44k INFO Train Epoch: 240 [84%] 2023-03-28 02:14:11,369 44k INFO Losses: [2.2227139472961426, 2.8542227745056152, 10.348478317260742, 12.815756797790527, 0.99773108959198], step: 174600, lr: 9.699585829277933e-05 2023-03-28 02:14:52,123 44k INFO ====> Epoch: 240, cost 261.76 s 2023-03-28 02:15:28,421 44k INFO Train Epoch: 241 [11%] 2023-03-28 02:15:28,421 44k INFO Losses: [2.4871060848236084, 2.0896835327148438, 9.709208488464355, 17.412050247192383, 0.9854288697242737], step: 174800, lr: 9.698373381049272e-05 2023-03-28 02:16:36,557 44k INFO Train Epoch: 241 [38%] 2023-03-28 02:16:36,558 44k INFO Losses: [2.299419403076172, 2.1669907569885254, 10.134954452514648, 16.331396102905273, 0.8006454110145569], step: 175000, lr: 9.698373381049272e-05 2023-03-28 02:16:39,450 44k INFO Saving model and optimizer state at iteration 241 to ./logs\44k\G_175000.pth 2023-03-28 02:16:40,206 44k INFO Saving model and optimizer state at iteration 241 to ./logs\44k\D_175000.pth 2023-03-28 02:16:40,884 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_172000.pth 2023-03-28 02:16:40,914 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_172000.pth 2023-03-28 02:17:48,807 44k INFO Train Epoch: 241 [66%] 2023-03-28 02:17:48,807 44k INFO Losses: [2.4171504974365234, 2.2884678840637207, 8.900922775268555, 11.648716926574707, 0.6651352643966675], step: 175200, lr: 9.698373381049272e-05 2023-03-28 02:18:56,892 44k INFO Train Epoch: 241 [93%] 2023-03-28 02:18:56,892 44k INFO Losses: [2.5352842807769775, 2.3641929626464844, 13.589726448059082, 16.662099838256836, 0.6047176122665405], step: 175400, lr: 9.698373381049272e-05 2023-03-28 02:19:12,944 44k INFO ====> Epoch: 241, cost 260.82 s 2023-03-28 02:20:13,915 44k INFO Train Epoch: 242 [21%] 2023-03-28 02:20:13,915 44k INFO Losses: [2.54333758354187, 2.2519774436950684, 11.838475227355957, 15.545459747314453, 1.0696502923965454], step: 175600, lr: 9.69716108437664e-05 2023-03-28 02:21:21,286 44k INFO Train Epoch: 242 [48%] 2023-03-28 02:21:21,287 44k INFO Losses: [2.641335964202881, 2.1586484909057617, 9.027826309204102, 16.743865966796875, 1.0626534223556519], step: 175800, lr: 9.69716108437664e-05 2023-03-28 02:22:29,585 44k INFO Train Epoch: 242 [76%] 2023-03-28 02:22:29,585 44k INFO Losses: [2.2187721729278564, 2.6387064456939697, 13.344440460205078, 19.1540470123291, 0.7999750375747681], step: 176000, lr: 9.69716108437664e-05 2023-03-28 02:22:32,544 44k INFO Saving model and optimizer state at iteration 242 to ./logs\44k\G_176000.pth 2023-03-28 02:22:33,252 44k INFO Saving model and optimizer state at iteration 242 to ./logs\44k\D_176000.pth 2023-03-28 02:22:33,918 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_173000.pth 2023-03-28 02:22:33,959 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_173000.pth 2023-03-28 02:23:33,388 44k INFO ====> Epoch: 242, cost 260.44 s 2023-03-28 02:23:50,822 44k INFO Train Epoch: 243 [3%] 2023-03-28 02:23:50,822 44k INFO Losses: [2.373426914215088, 2.3073642253875732, 11.42915153503418, 17.623449325561523, 0.9771531224250793], step: 176200, lr: 9.695948939241093e-05 2023-03-28 02:24:58,938 44k INFO Train Epoch: 243 [31%] 2023-03-28 02:24:58,938 44k INFO Losses: [2.3176560401916504, 2.549562931060791, 14.743273735046387, 17.655664443969727, 1.4177019596099854], step: 176400, lr: 9.695948939241093e-05 2023-03-28 02:26:06,960 44k INFO Train Epoch: 243 [58%] 2023-03-28 02:26:06,960 44k INFO Losses: [2.2378077507019043, 2.5454366207122803, 14.636201858520508, 19.267343521118164, 1.3861892223358154], step: 176600, lr: 9.695948939241093e-05 2023-03-28 02:27:15,081 44k INFO Train Epoch: 243 [86%] 2023-03-28 02:27:15,082 44k INFO Losses: [2.503922939300537, 2.463235378265381, 18.113771438598633, 19.559232711791992, 1.2592309713363647], step: 176800, lr: 9.695948939241093e-05 2023-03-28 02:27:50,105 44k INFO ====> Epoch: 243, cost 256.72 s 2023-03-28 02:28:31,923 44k INFO Train Epoch: 244 [13%] 2023-03-28 02:28:31,923 44k INFO Losses: [2.3177852630615234, 2.5620787143707275, 12.577972412109375, 16.474802017211914, 1.453508734703064], step: 177000, lr: 9.694736945623688e-05 2023-03-28 02:28:34,948 44k INFO Saving model and optimizer state at iteration 244 to ./logs\44k\G_177000.pth 2023-03-28 02:28:35,650 44k INFO Saving model and optimizer state at iteration 244 to ./logs\44k\D_177000.pth 2023-03-28 02:28:36,332 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_174000.pth 2023-03-28 02:28:36,365 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_174000.pth 2023-03-28 02:29:44,457 44k INFO Train Epoch: 244 [41%] 2023-03-28 02:29:44,458 44k INFO Losses: [2.4312736988067627, 2.4743614196777344, 14.253067016601562, 15.027427673339844, 1.162600040435791], step: 177200, lr: 9.694736945623688e-05 2023-03-28 02:30:52,580 44k INFO Train Epoch: 244 [68%] 2023-03-28 02:30:52,580 44k INFO Losses: [2.473856210708618, 2.242067813873291, 10.105144500732422, 16.403024673461914, 0.5212377309799194], step: 177400, lr: 9.694736945623688e-05 2023-03-28 02:32:00,651 44k INFO Train Epoch: 244 [96%] 2023-03-28 02:32:00,651 44k INFO Losses: [2.314690589904785, 2.895296573638916, 9.456132888793945, 12.926642417907715, 1.1687753200531006], step: 177600, lr: 9.694736945623688e-05 2023-03-28 02:32:11,396 44k INFO ====> Epoch: 244, cost 261.29 s 2023-03-28 02:33:18,034 44k INFO Train Epoch: 245 [23%] 2023-03-28 02:33:18,034 44k INFO Losses: [2.0768086910247803, 2.5507309436798096, 13.679519653320312, 17.70592498779297, 1.13560950756073], step: 177800, lr: 9.693525103505484e-05 2023-03-28 02:34:25,385 44k INFO Train Epoch: 245 [51%] 2023-03-28 02:34:25,386 44k INFO Losses: [2.384892702102661, 2.5981292724609375, 12.949722290039062, 16.964994430541992, 1.25724458694458], step: 178000, lr: 9.693525103505484e-05 2023-03-28 02:34:28,420 44k INFO Saving model and optimizer state at iteration 245 to ./logs\44k\G_178000.pth 2023-03-28 02:34:29,132 44k INFO Saving model and optimizer state at iteration 245 to ./logs\44k\D_178000.pth 2023-03-28 02:34:29,786 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_175000.pth 2023-03-28 02:34:29,816 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_175000.pth 2023-03-28 02:35:37,970 44k INFO Train Epoch: 245 [78%] 2023-03-28 02:35:37,970 44k INFO Losses: [2.2958943843841553, 2.613189697265625, 11.444589614868164, 19.011436462402344, 1.0177844762802124], step: 178200, lr: 9.693525103505484e-05 2023-03-28 02:36:32,010 44k INFO ====> Epoch: 245, cost 260.61 s 2023-03-28 02:36:54,978 44k INFO Train Epoch: 246 [5%] 2023-03-28 02:36:54,979 44k INFO Losses: [2.2806293964385986, 2.4469475746154785, 13.55453872680664, 18.47661590576172, 1.1764559745788574], step: 178400, lr: 9.692313412867544e-05 2023-03-28 02:38:03,324 44k INFO Train Epoch: 246 [33%] 2023-03-28 02:38:03,325 44k INFO Losses: [2.334369659423828, 2.357508420944214, 13.241400718688965, 15.744019508361816, 1.036879539489746], step: 178600, lr: 9.692313412867544e-05 2023-03-28 02:39:11,319 44k INFO Train Epoch: 246 [60%] 2023-03-28 02:39:11,319 44k INFO Losses: [2.344015121459961, 2.3363170623779297, 13.002141952514648, 17.862520217895508, 0.7181509733200073], step: 178800, lr: 9.692313412867544e-05 2023-03-28 02:40:19,338 44k INFO Train Epoch: 246 [88%] 2023-03-28 02:40:19,339 44k INFO Losses: [2.573349952697754, 2.3700850009918213, 12.56455135345459, 16.98846435546875, 0.9264994859695435], step: 179000, lr: 9.692313412867544e-05 2023-03-28 02:40:22,343 44k INFO Saving model and optimizer state at iteration 246 to ./logs\44k\G_179000.pth 2023-03-28 02:40:23,103 44k INFO Saving model and optimizer state at iteration 246 to ./logs\44k\D_179000.pth 2023-03-28 02:40:23,772 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_176000.pth 2023-03-28 02:40:23,800 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_176000.pth 2023-03-28 02:40:53,498 44k INFO ====> Epoch: 246, cost 261.49 s 2023-03-28 02:41:40,825 44k INFO Train Epoch: 247 [15%] 2023-03-28 02:41:40,825 44k INFO Losses: [2.619222402572632, 2.337575912475586, 13.001132011413574, 15.580093383789062, 0.5822882056236267], step: 179200, lr: 9.691101873690936e-05 2023-03-28 02:42:48,789 44k INFO Train Epoch: 247 [43%] 2023-03-28 02:42:48,789 44k INFO Losses: [1.9810248613357544, 2.7878005504608154, 17.050189971923828, 17.071252822875977, 0.6441420912742615], step: 179400, lr: 9.691101873690936e-05 2023-03-28 02:43:57,117 44k INFO Train Epoch: 247 [70%] 2023-03-28 02:43:57,117 44k INFO Losses: [2.3797898292541504, 2.3415896892547607, 9.38374137878418, 17.546281814575195, 0.9464096426963806], step: 179600, lr: 9.691101873690936e-05 2023-03-28 02:45:05,117 44k INFO Train Epoch: 247 [98%] 2023-03-28 02:45:05,118 44k INFO Losses: [2.5596914291381836, 2.2477731704711914, 6.920080661773682, 9.657902717590332, 0.6947763562202454], step: 179800, lr: 9.691101873690936e-05 2023-03-28 02:45:10,560 44k INFO ====> Epoch: 247, cost 257.06 s 2023-03-28 02:46:22,665 44k INFO Train Epoch: 248 [25%] 2023-03-28 02:46:22,666 44k INFO Losses: [1.759533405303955, 3.350287437438965, 7.667304039001465, 10.843032836914062, 0.8799906373023987], step: 180000, lr: 9.689890485956725e-05 2023-03-28 02:46:25,627 44k INFO Saving model and optimizer state at iteration 248 to ./logs\44k\G_180000.pth 2023-03-28 02:46:26,332 44k INFO Saving model and optimizer state at iteration 248 to ./logs\44k\D_180000.pth 2023-03-28 02:46:26,992 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_177000.pth 2023-03-28 02:46:27,022 44k INFO .. 
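Each "Free up space by deleting ckpt" pair above reflects the `keep_ckpts: 3` setting from the run config: right after `G_180000.pth`/`D_180000.pth` are saved, the oldest surviving pair (`G_177000.pth`/`D_177000.pth`) is removed, so only the three most recent checkpoints remain on disk. A minimal sketch of that keep-last-N policy, assuming the `G_<step>.pth` naming seen in this log (this is not the project's actual cleanup code):

```python
import os
import re
import tempfile

def clean_checkpoints(model_dir: str, prefix: str = "G_", keep: int = 3):
    """Delete all but the `keep` highest-step files named <prefix><step>.pth; return removed names."""
    pat = re.compile(rf"{re.escape(prefix)}(\d+)\.pth")
    ckpts = sorted(
        (int(m.group(1)), name)
        for name in os.listdir(model_dir)
        if (m := pat.fullmatch(name))
    )
    removed = []
    for _, name in ckpts[:-keep]:
        os.remove(os.path.join(model_dir, name))
        removed.append(name)
    return removed

# Demo on a throwaway directory mimicking the checkpoints in this log.
with tempfile.TemporaryDirectory() as d:
    for step in (165000, 166000, 167000, 168000):
        open(os.path.join(d, f"G_{step}.pth"), "w").close()
    removed = clean_checkpoints(d)
    survivors = sorted(os.listdir(d))
```

In the real run the same pruning is applied to both the `G_*.pth` and `D_*.pth` series, which is why the deletions always appear in pairs.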
Free up space by deleting ckpt ./logs\44k\D_177000.pth 2023-03-28 02:47:34,729 44k INFO Train Epoch: 248 [53%] 2023-03-28 02:47:34,729 44k INFO Losses: [2.4733498096466064, 2.2916975021362305, 7.9366135597229, 13.805622100830078, 0.7653065919876099], step: 180200, lr: 9.689890485956725e-05 2023-03-28 02:48:43,048 44k INFO Train Epoch: 248 [80%] 2023-03-28 02:48:43,048 44k INFO Losses: [2.4525794982910156, 2.524242401123047, 10.575217247009277, 17.94314193725586, 0.9132097363471985], step: 180400, lr: 9.689890485956725e-05 2023-03-28 02:49:31,899 44k INFO ====> Epoch: 248, cost 261.34 s 2023-03-28 02:50:00,098 44k INFO Train Epoch: 249 [8%] 2023-03-28 02:50:00,099 44k INFO Losses: [2.322924852371216, 2.4430296421051025, 11.389654159545898, 16.478788375854492, 1.0878193378448486], step: 180600, lr: 9.68867924964598e-05 2023-03-28 02:51:08,533 44k INFO Train Epoch: 249 [35%] 2023-03-28 02:51:08,533 44k INFO Losses: [2.624846935272217, 2.252068281173706, 10.76242733001709, 16.64252281188965, 1.144072413444519], step: 180800, lr: 9.68867924964598e-05 2023-03-28 02:52:16,552 44k INFO Train Epoch: 249 [63%] 2023-03-28 02:52:16,552 44k INFO Losses: [2.4649882316589355, 2.372361421585083, 12.14194107055664, 18.34842300415039, 0.5717077851295471], step: 181000, lr: 9.68867924964598e-05 2023-03-28 02:52:19,523 44k INFO Saving model and optimizer state at iteration 249 to ./logs\44k\G_181000.pth 2023-03-28 02:52:20,274 44k INFO Saving model and optimizer state at iteration 249 to ./logs\44k\D_181000.pth 2023-03-28 02:52:20,946 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_178000.pth 2023-03-28 02:52:20,976 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_178000.pth 2023-03-28 02:53:29,069 44k INFO Train Epoch: 249 [90%] 2023-03-28 02:53:29,070 44k INFO Losses: [2.765155792236328, 1.9206360578536987, 6.5153021812438965, 10.887517929077148, 1.0684281587600708], step: 181200, lr: 9.68867924964598e-05 2023-03-28 02:53:53,415 44k INFO ====> Epoch: 249, cost 261.52 s 2023-03-28 02:54:46,331 44k INFO Train Epoch: 250 [18%] 2023-03-28 02:54:46,331 44k INFO Losses: [2.2347564697265625, 2.568427324295044, 10.540266036987305, 12.71943473815918, 0.8477076888084412], step: 181400, lr: 9.687468164739773e-05 2023-03-28 02:55:54,295 44k INFO Train Epoch: 250 [45%] 2023-03-28 02:55:54,295 44k INFO Losses: [2.509294271469116, 2.0410993099212646, 10.020994186401367, 14.845791816711426, 0.7513571381568909], step: 181600, lr: 9.687468164739773e-05 2023-03-28 02:57:02,654 44k INFO Train Epoch: 250 [73%] 2023-03-28 02:57:02,655 44k INFO Losses: [2.404297113418579, 2.3246707916259766, 13.664403915405273, 17.668174743652344, 1.5716431140899658], step: 181800, lr: 9.687468164739773e-05 2023-03-28 02:58:10,735 44k INFO ====> Epoch: 250, cost 257.32 s 2023-03-28 02:58:19,876 44k INFO Train Epoch: 251 [0%] 2023-03-28 02:58:19,877 44k INFO Losses: [2.3421967029571533, 2.5864386558532715, 14.028287887573242, 17.42034339904785, 0.7536882758140564], step: 182000, lr: 9.68625723121918e-05 2023-03-28 02:58:22,807 44k INFO Saving model and optimizer state at iteration 251 to ./logs\44k\G_182000.pth 2023-03-28 02:58:23,514 44k INFO Saving model and optimizer state at iteration 251 to ./logs\44k\D_182000.pth 2023-03-28 02:58:24,188 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_179000.pth 2023-03-28 02:58:24,220 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_179000.pth 2023-03-28 02:59:32,727 44k INFO Train Epoch: 251 [27%] 2023-03-28 02:59:32,728 44k INFO Losses: [2.4460906982421875, 2.3737730979919434, 11.54042911529541, 18.213075637817383, 0.7411572337150574], step: 182200, lr: 9.68625723121918e-05 2023-03-28 03:00:40,750 44k INFO Train Epoch: 251 [55%] 2023-03-28 03:00:40,751 44k INFO Losses: [2.5631747245788574, 2.4700543880462646, 10.728297233581543, 14.809806823730469, 0.9913558959960938], step: 182400, lr: 9.68625723121918e-05 2023-03-28 03:01:49,053 44k INFO Train Epoch: 251 [82%] 2023-03-28 03:01:49,054 44k INFO Losses: [2.5595250129699707, 2.3342814445495605, 10.991732597351074, 12.76771068572998, 0.733020007610321], step: 182600, lr: 9.68625723121918e-05 2023-03-28 03:02:32,549 44k INFO ====> Epoch: 251, cost 261.81 s 2023-03-28 03:03:06,450 44k INFO Train Epoch: 252 [10%] 2023-03-28 03:03:06,451 44k INFO Losses: [2.455989122390747, 2.331876754760742, 12.908832550048828, 15.730009078979492, 0.6221986413002014], step: 182800, lr: 9.685046449065278e-05 2023-03-28 03:04:15,457 44k INFO Train Epoch: 252 [37%] 2023-03-28 03:04:15,457 44k INFO Losses: [2.8594274520874023, 2.0876574516296387, 9.175971031188965, 10.665321350097656, 0.8020965456962585], step: 183000, lr: 9.685046449065278e-05 2023-03-28 03:04:18,482 44k INFO Saving model and optimizer state at iteration 252 to ./logs\44k\G_183000.pth 2023-03-28 03:04:19,183 44k INFO Saving model and optimizer state at iteration 252 to ./logs\44k\D_183000.pth 2023-03-28 03:04:19,870 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_180000.pth 2023-03-28 03:04:19,900 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_180000.pth 2023-03-28 03:05:28,080 44k INFO Train Epoch: 252 [65%] 2023-03-28 03:05:28,080 44k INFO Losses: [2.60054612159729, 2.2207772731781006, 9.696186065673828, 14.49778938293457, 1.3247261047363281], step: 183200, lr: 9.685046449065278e-05 2023-03-28 03:06:36,471 44k INFO Train Epoch: 252 [92%] 2023-03-28 03:06:36,471 44k INFO Losses: [2.4614787101745605, 2.227555274963379, 13.386817932128906, 17.942302703857422, 1.1843181848526], step: 183400, lr: 9.685046449065278e-05 2023-03-28 03:06:55,315 44k INFO ====> Epoch: 252, cost 262.77 s 2023-03-28 03:07:53,942 44k INFO Train Epoch: 253 [20%] 2023-03-28 03:07:53,942 44k INFO Losses: [2.637248992919922, 1.9423623085021973, 8.633707046508789, 12.801521301269531, 0.9351770281791687], step: 183600, lr: 9.683835818259144e-05 2023-03-28 03:09:01,937 44k INFO Train Epoch: 253 [47%] 2023-03-28 03:09:01,938 44k INFO Losses: [2.203828811645508, 2.2891132831573486, 13.437457084655762, 15.792583465576172, 0.8305380940437317], step: 183800, lr: 9.683835818259144e-05 2023-03-28 03:10:10,481 44k INFO Train Epoch: 253 [75%] 2023-03-28 03:10:10,481 44k INFO Losses: [2.454540729522705, 2.455825090408325, 10.743790626525879, 15.495064735412598, 0.35588863492012024], step: 184000, lr: 9.683835818259144e-05 2023-03-28 03:10:13,479 44k INFO Saving model and optimizer state at iteration 253 to ./logs\44k\G_184000.pth 2023-03-28 03:10:14,183 44k INFO Saving model and optimizer state at iteration 253 to ./logs\44k\D_184000.pth 2023-03-28 03:10:14,861 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_181000.pth 2023-03-28 03:10:14,891 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_181000.pth 2023-03-28 03:11:17,396 44k INFO ====> Epoch: 253, cost 262.08 s 2023-03-28 03:11:32,149 44k INFO Train Epoch: 254 [2%] 2023-03-28 03:11:32,149 44k INFO Losses: [2.491953134536743, 2.258136749267578, 12.649773597717285, 17.141910552978516, 1.2565209865570068], step: 184200, lr: 9.68262533878186e-05 2023-03-28 03:12:40,629 44k INFO Train Epoch: 254 [30%] 2023-03-28 03:12:40,630 44k INFO Losses: [2.4882020950317383, 2.1775104999542236, 10.425522804260254, 11.981096267700195, 0.6959567666053772], step: 184400, lr: 9.68262533878186e-05 2023-03-28 03:13:48,939 44k INFO Train Epoch: 254 [57%] 2023-03-28 03:13:48,940 44k INFO Losses: [2.3791933059692383, 2.427067756652832, 14.887238502502441, 19.062192916870117, 1.1983966827392578], step: 184600, lr: 9.68262533878186e-05 2023-03-28 03:14:57,347 44k INFO Train Epoch: 254 [85%] 2023-03-28 03:14:57,348 44k INFO Losses: [2.5449814796447754, 2.2695817947387695, 9.70676326751709, 12.352617263793945, 0.9313132166862488], step: 184800, lr: 9.68262533878186e-05 2023-03-28 03:15:35,422 44k INFO ====> Epoch: 254, cost 258.03 s 2023-03-28 03:16:14,839 44k INFO Train Epoch: 255 [12%] 2023-03-28 03:16:14,840 44k INFO Losses: [2.4625062942504883, 2.2839553356170654, 9.100360870361328, 15.33544921875, 0.8989543318748474], step: 185000, lr: 9.681415010614512e-05 2023-03-28 03:16:17,829 44k INFO Saving model and optimizer state at iteration 255 to ./logs\44k\G_185000.pth 2023-03-28 03:16:18,540 44k INFO Saving model and optimizer state at iteration 255 to ./logs\44k\D_185000.pth 2023-03-28 03:16:19,242 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_182000.pth 2023-03-28 03:16:19,271 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_182000.pth 2023-03-28 03:17:28,127 44k INFO Train Epoch: 255 [40%] 2023-03-28 03:17:28,128 44k INFO Losses: [2.4759016036987305, 2.096416711807251, 10.28193473815918, 14.319622039794922, 1.8842377662658691], step: 185200, lr: 9.681415010614512e-05 2023-03-28 03:18:36,837 44k INFO Train Epoch: 255 [67%] 2023-03-28 03:18:36,837 44k INFO Losses: [2.3618297576904297, 2.450127601623535, 13.37822437286377, 18.8831729888916, 1.0393983125686646], step: 185400, lr: 9.681415010614512e-05 2023-03-28 03:19:45,683 44k INFO Train Epoch: 255 [95%] 2023-03-28 03:19:45,684 44k INFO Losses: [2.374920129776001, 2.1820969581604004, 12.545459747314453, 18.43819236755371, 1.2389761209487915], step: 185600, lr: 9.681415010614512e-05 2023-03-28 03:19:59,249 44k INFO ====> Epoch: 255, cost 263.83 s 2023-03-28 03:21:03,593 44k INFO Train Epoch: 256 [22%] 2023-03-28 03:21:03,594 44k INFO Losses: [2.722456932067871, 2.092240571975708, 8.248664855957031, 10.226183891296387, 0.887154221534729], step: 185800, lr: 9.680204833738185e-05 2023-03-28 03:22:11,939 44k INFO Train Epoch: 256 [49%] 2023-03-28 03:22:11,940 44k INFO Losses: [2.472343683242798, 2.3072080612182617, 12.388075828552246, 16.11260414123535, 1.071854591369629], step: 186000, lr: 9.680204833738185e-05 2023-03-28 03:22:14,904 44k INFO Saving model and optimizer state at iteration 256 to ./logs\44k\G_186000.pth 2023-03-28 03:22:15,654 44k INFO Saving model and optimizer state at iteration 256 to ./logs\44k\D_186000.pth 2023-03-28 03:22:16,318 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_183000.pth 2023-03-28 03:22:16,349 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_183000.pth 2023-03-28 03:23:25,312 44k INFO Train Epoch: 256 [77%] 2023-03-28 03:23:25,313 44k INFO Losses: [2.727585792541504, 1.9423431158065796, 7.591000556945801, 10.001090049743652, 1.1904314756393433], step: 186200, lr: 9.680204833738185e-05 2023-03-28 03:24:22,805 44k INFO ====> Epoch: 256, cost 263.56 s 2023-03-28 03:24:42,821 44k INFO Train Epoch: 257 [4%] 2023-03-28 03:24:42,821 44k INFO Losses: [2.3386008739471436, 2.599985122680664, 10.682660102844238, 15.034215927124023, 0.442556232213974], step: 186400, lr: 9.678994808133967e-05 2023-03-28 03:25:51,839 44k INFO Train Epoch: 257 [32%] 2023-03-28 03:25:51,839 44k INFO Losses: [2.60235595703125, 2.3481545448303223, 11.064467430114746, 14.668078422546387, 0.978481650352478], step: 186600, lr: 9.678994808133967e-05 2023-03-28 03:27:00,657 44k INFO Train Epoch: 257 [59%] 2023-03-28 03:27:00,657 44k INFO Losses: [2.5548486709594727, 2.2598445415496826, 15.190043449401855, 16.351444244384766, 1.03973388671875], step: 186800, lr: 9.678994808133967e-05 2023-03-28 03:28:09,538 44k INFO Train Epoch: 257 [87%] 2023-03-28 03:28:09,538 44k INFO Losses: [2.2773349285125732, 2.442462921142578, 16.007829666137695, 17.919498443603516, 1.324083685874939], step: 187000, lr: 9.678994808133967e-05 2023-03-28 03:28:12,566 44k INFO Saving model and optimizer state at iteration 257 to ./logs\44k\G_187000.pth 2023-03-28 03:28:13,273 44k INFO Saving model and optimizer state at iteration 257 to ./logs\44k\D_187000.pth 2023-03-28 03:28:13,958 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_184000.pth 2023-03-28 03:28:14,001 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_184000.pth
2023-03-28 03:28:46,785 44k INFO ====> Epoch: 257, cost 263.98 s
2023-03-28 03:29:31,753 44k INFO Train Epoch: 258 [14%]
2023-03-28 03:29:31,753 44k INFO Losses: [2.4396331310272217, 2.1270570755004883, 9.761764526367188, 12.318076133728027, 1.145167350769043], step: 187200, lr: 9.67778493378295e-05
2023-03-28 03:30:40,450 44k INFO Train Epoch: 258 [42%]
2023-03-28 03:30:40,450 44k INFO Losses: [2.65573787689209, 2.469160318374634, 11.631431579589844, 14.806086540222168, 1.6309187412261963], step: 187400, lr: 9.67778493378295e-05
2023-03-28 03:31:49,489 44k INFO Train Epoch: 258 [69%]
2023-03-28 03:31:49,489 44k INFO Losses: [2.6360373497009277, 2.267324209213257, 10.561835289001465, 14.644715309143066, 0.9276282787322998], step: 187600, lr: 9.67778493378295e-05
2023-03-28 03:32:58,420 44k INFO Train Epoch: 258 [97%]
2023-03-28 03:32:58,420 44k INFO Losses: [2.329685688018799, 2.3311047554016113, 12.20538330078125, 14.89622688293457, 0.523094654083252], step: 187800, lr: 9.67778493378295e-05
2023-03-28 03:33:06,598 44k INFO ====> Epoch: 258, cost 259.81 s
2023-03-28 03:34:16,498 44k INFO Train Epoch: 259 [24%]
2023-03-28 03:34:16,499 44k INFO Losses: [2.5317955017089844, 2.0478811264038086, 10.136266708374023, 14.823957443237305, 1.098646879196167], step: 188000, lr: 9.676575210666227e-05
2023-03-28 03:34:19,480 44k INFO Saving model and optimizer state at iteration 259 to ./logs\44k\G_188000.pth
2023-03-28 03:34:20,189 44k INFO Saving model and optimizer state at iteration 259 to ./logs\44k\D_188000.pth
2023-03-28 03:34:20,882 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_185000.pth
2023-03-28 03:34:20,917 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_185000.pth
2023-03-28 03:35:29,395 44k INFO Train Epoch: 259 [52%]
2023-03-28 03:35:29,395 44k INFO Losses: [2.131598472595215, 2.5535354614257812, 11.856616973876953, 13.634905815124512, 1.0338629484176636], step: 188200, lr: 9.676575210666227e-05
2023-03-28 03:36:38,675 44k INFO Train Epoch: 259 [79%]
2023-03-28 03:36:38,675 44k INFO Losses: [2.125795364379883, 2.7319376468658447, 17.247392654418945, 19.3321475982666, 0.8270903825759888], step: 188400, lr: 9.676575210666227e-05
2023-03-28 03:37:30,803 44k INFO ====> Epoch: 259, cost 264.20 s
2023-03-28 03:37:56,487 44k INFO Train Epoch: 260 [7%]
2023-03-28 03:37:56,488 44k INFO Losses: [2.4954681396484375, 2.1780009269714355, 10.63471508026123, 15.60483455657959, 1.2888938188552856], step: 188600, lr: 9.675365638764893e-05
2023-03-28 03:39:05,775 44k INFO Train Epoch: 260 [34%]
2023-03-28 03:39:05,775 44k INFO Losses: [2.4120519161224365, 2.0074501037597656, 9.393383026123047, 13.315536499023438, 0.8927484154701233], step: 188800, lr: 9.675365638764893e-05
2023-03-28 03:40:14,732 44k INFO Train Epoch: 260 [62%]
2023-03-28 03:40:14,732 44k INFO Losses: [2.232126474380493, 2.442530632019043, 14.387221336364746, 17.719810485839844, 0.8817055225372314], step: 189000, lr: 9.675365638764893e-05
2023-03-28 03:40:17,742 44k INFO Saving model and optimizer state at iteration 260 to ./logs\44k\G_189000.pth
2023-03-28 03:40:18,454 44k INFO Saving model and optimizer state at iteration 260 to ./logs\44k\D_189000.pth
2023-03-28 03:40:19,126 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_186000.pth
2023-03-28 03:40:19,163 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_186000.pth
2023-03-28 03:41:28,053 44k INFO Train Epoch: 260 [89%]
2023-03-28 03:41:28,053 44k INFO Losses: [2.321671485900879, 2.422691822052002, 11.2778902053833, 15.870330810546875, 1.1073284149169922], step: 189200, lr: 9.675365638764893e-05
2023-03-28 03:41:55,497 44k INFO ====> Epoch: 260, cost 264.69 s
2023-03-28 03:42:46,034 44k INFO Train Epoch: 261 [16%]
2023-03-28 03:42:46,035 44k INFO Losses: [2.455357074737549, 2.3130712509155273, 9.940912246704102, 14.614503860473633, 1.0073816776275635], step: 189400, lr: 9.674156218060047e-05
2023-03-28 03:43:54,954 44k INFO Train Epoch: 261 [44%]
2023-03-28 03:43:54,954 44k INFO Losses: [2.460982322692871, 2.6145401000976562, 5.966538429260254, 13.054708480834961, 0.8216966986656189], step: 189600, lr: 9.674156218060047e-05
2023-03-28 03:45:04,124 44k INFO Train Epoch: 261 [71%]
2023-03-28 03:45:04,124 44k INFO Losses: [2.6356325149536133, 2.243803024291992, 11.633989334106445, 12.499107360839844, 0.977484941482544], step: 189800, lr: 9.674156218060047e-05
2023-03-28 03:46:13,200 44k INFO Train Epoch: 261 [99%]
2023-03-28 03:46:13,200 44k INFO Losses: [2.5733187198638916, 2.345782995223999, 13.910623550415039, 15.8872709274292, 0.8456734418869019], step: 190000, lr: 9.674156218060047e-05
2023-03-28 03:46:16,252 44k INFO Saving model and optimizer state at iteration 261 to ./logs\44k\G_190000.pth
2023-03-28 03:46:16,958 44k INFO Saving model and optimizer state at iteration 261 to ./logs\44k\D_190000.pth
2023-03-28 03:46:17,627 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_187000.pth
2023-03-28 03:46:17,665 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_187000.pth
2023-03-28 03:46:20,441 44k INFO ====> Epoch: 261, cost 264.94 s
2023-03-28 03:47:35,994 44k INFO Train Epoch: 262 [26%]
2023-03-28 03:47:35,995 44k INFO Losses: [2.445991039276123, 2.5408852100372314, 12.811150550842285, 14.546107292175293, 0.7756758332252502], step: 190200, lr: 9.67294694853279e-05
2023-03-28 03:48:44,791 44k INFO Train Epoch: 262 [54%]
2023-03-28 03:48:44,791 44k INFO Losses: [2.205418825149536, 2.6449365615844727, 11.83169174194336, 13.690535545349121, 0.6960495710372925], step: 190400, lr: 9.67294694853279e-05
2023-03-28 03:49:53,924 44k INFO Train Epoch: 262 [81%]
2023-03-28 03:49:53,924 44k INFO Losses: [2.518665313720703, 2.3471240997314453, 8.003206253051758, 15.28868293762207, 1.2787448167800903], step: 190600, lr: 9.67294694853279e-05
2023-03-28 03:50:40,682 44k INFO ====> Epoch: 262, cost 260.24 s
2023-03-28 03:51:11,867 44k INFO Train Epoch: 263 [9%]
2023-03-28 03:51:11,867 44k INFO Losses: [2.772414207458496, 1.9704740047454834, 5.8803815841674805, 11.170614242553711, 1.1672495603561401], step: 190800, lr: 9.671737830164223e-05
2023-03-28 03:52:21,278 44k INFO Train Epoch: 263 [36%]
2023-03-28 03:52:21,278 44k INFO Losses: [2.225895643234253, 2.4052281379699707, 15.69821834564209, 17.810325622558594, 1.565525770187378], step: 191000, lr: 9.671737830164223e-05
2023-03-28 03:52:24,306 44k INFO Saving model and optimizer state at iteration 263 to ./logs\44k\G_191000.pth
2023-03-28 03:52:25,013 44k INFO Saving model and optimizer state at iteration 263 to ./logs\44k\D_191000.pth
2023-03-28 03:52:25,681 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_188000.pth
2023-03-28 03:52:25,723 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_188000.pth
2023-03-28 03:53:34,697 44k INFO Train Epoch: 263 [64%]
2023-03-28 03:53:34,697 44k INFO Losses: [2.3698909282684326, 2.5194318294525146, 14.668045043945312, 17.41701889038086, 0.7560698390007019], step: 191200, lr: 9.671737830164223e-05
2023-03-28 03:54:43,983 44k INFO Train Epoch: 263 [91%]
2023-03-28 03:54:43,983 44k INFO Losses: [2.3320822715759277, 2.4759521484375, 11.8408784866333, 13.904598236083984, 1.4099934101104736], step: 191400, lr: 9.671737830164223e-05
2023-03-28 03:55:05,866 44k INFO ====> Epoch: 263, cost 265.18 s
2023-03-28 03:56:02,279 44k INFO Train Epoch: 264 [19%]
2023-03-28 03:56:02,280 44k INFO Losses: [2.130587100982666, 3.2074897289276123, 12.462244987487793, 14.024124145507812, 1.1230342388153076], step: 191600, lr: 9.670528862935451e-05
2023-03-28 03:57:11,096 44k INFO Train Epoch: 264 [46%]
2023-03-28 03:57:11,097 44k INFO Losses: [2.2658348083496094, 2.3544023036956787, 12.941790580749512, 18.059814453125, 1.1652499437332153], step: 191800, lr: 9.670528862935451e-05
2023-03-28 03:58:20,458 44k INFO Train Epoch: 264 [74%]
2023-03-28 03:58:20,459 44k INFO Losses: [2.5711207389831543, 2.4087772369384766, 12.491219520568848, 16.38353157043457, 0.9937915205955505], step: 192000, lr: 9.670528862935451e-05
2023-03-28 03:58:23,432 44k INFO Saving model and optimizer state at iteration 264 to ./logs\44k\G_192000.pth
2023-03-28 03:58:24,139 44k INFO Saving model and optimizer state at iteration 264 to ./logs\44k\D_192000.pth
2023-03-28 03:58:24,783 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_189000.pth
2023-03-28 03:58:24,816 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_189000.pth
2023-03-28 03:59:30,988 44k INFO ====> Epoch: 264, cost 265.12 s
2023-03-28 03:59:42,870 44k INFO Train Epoch: 265 [1%]
2023-03-28 03:59:42,871 44k INFO Losses: [2.3677706718444824, 2.112205743789673, 14.71024227142334, 17.4189510345459, 0.6675645709037781], step: 192200, lr: 9.669320046827584e-05
2023-03-28 04:00:52,332 44k INFO Train Epoch: 265 [29%]
2023-03-28 04:00:52,333 44k INFO Losses: [2.29407000541687, 2.532107353210449, 10.7650728225708, 15.04749870300293, 0.6541774272918701], step: 192400, lr: 9.669320046827584e-05
2023-03-28 04:02:01,567 44k INFO Train Epoch: 265 [56%]
2023-03-28 04:02:01,567 44k INFO Losses: [2.2691502571105957, 2.72623872756958, 9.732566833496094, 10.605862617492676, 0.6971271634101868], step: 192600, lr: 9.669320046827584e-05
2023-03-28 04:03:10,837 44k INFO Train Epoch: 265 [84%]
2023-03-28 04:03:10,837 44k INFO Losses: [2.492008686065674, 2.4177420139312744, 10.580049514770508, 12.701763153076172, 0.7785888314247131], step: 192800, lr: 9.669320046827584e-05
2023-03-28 04:03:52,247 44k INFO ====> Epoch: 265, cost 261.26 s
2023-03-28 04:04:29,076 44k INFO Train Epoch: 266 [11%]
2023-03-28 04:04:29,076 44k INFO Losses: [2.215855598449707, 2.5175247192382812, 10.464691162109375, 17.711841583251953, 0.712358295917511], step: 193000, lr: 9.668111381821731e-05
2023-03-28 04:04:32,039 44k INFO Saving model and optimizer state at iteration 266 to ./logs\44k\G_193000.pth
2023-03-28 04:04:32,793 44k INFO Saving model and optimizer state at iteration 266 to ./logs\44k\D_193000.pth
2023-03-28 04:04:33,465 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_190000.pth
2023-03-28 04:04:33,504 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_190000.pth
2023-03-28 04:05:42,909 44k INFO Train Epoch: 266 [38%]
2023-03-28 04:05:42,909 44k INFO Losses: [2.5463709831237793, 2.3520426750183105, 11.629732131958008, 15.809866905212402, 1.4057222604751587], step: 193200, lr: 9.668111381821731e-05
2023-03-28 04:06:52,146 44k INFO Train Epoch: 266 [66%]
2023-03-28 04:06:52,147 44k INFO Losses: [1.8705207109451294, 2.8551888465881348, 10.540060043334961, 11.978002548217773, 0.9693655371665955], step: 193400, lr: 9.668111381821731e-05
2023-03-28 04:08:01,679 44k INFO Train Epoch: 266 [93%]
2023-03-28 04:08:01,679 44k INFO Losses: [2.5803725719451904, 2.3876736164093018, 13.681758880615234, 16.02491569519043, 0.9760072827339172], step: 193600, lr: 9.668111381821731e-05
2023-03-28 04:08:18,109 44k INFO ====> Epoch: 266, cost 265.86 s
2023-03-28 04:09:20,172 44k INFO Train Epoch: 267 [21%]
2023-03-28 04:09:20,173 44k INFO Losses: [2.6584935188293457, 1.9244179725646973, 9.58896255493164, 11.576878547668457, 0.6479546427726746], step: 193800, lr: 9.666902867899003e-05
2023-03-28 04:10:28,941 44k INFO Train Epoch: 267 [48%]
2023-03-28 04:10:28,942 44k INFO Losses: [2.2644970417022705, 2.4973549842834473, 10.91313648223877, 14.903483390808105, 0.9679855704307556], step: 194000, lr: 9.666902867899003e-05
2023-03-28 04:10:31,907 44k INFO Saving model and optimizer state at iteration 267 to ./logs\44k\G_194000.pth
2023-03-28 04:10:32,656 44k INFO Saving model and optimizer state at iteration 267 to ./logs\44k\D_194000.pth
2023-03-28 04:10:33,316 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_191000.pth
2023-03-28 04:10:33,355 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_191000.pth
2023-03-28 04:11:42,965 44k INFO Train Epoch: 267 [76%]
2023-03-28 04:11:42,966 44k INFO Losses: [2.1731503009796143, 2.5235795974731445, 11.47872543334961, 18.611759185791016, 1.3444031476974487], step: 194200, lr: 9.666902867899003e-05
2023-03-28 04:12:43,790 44k INFO ====> Epoch: 267, cost 265.68 s
2023-03-28 04:13:01,183 44k INFO Train Epoch: 268 [3%]
2023-03-28 04:13:01,184 44k INFO Losses: [2.1827433109283447, 2.4825687408447266, 8.993408203125, 16.133546829223633, 1.125809669494629], step: 194400, lr: 9.665694505040515e-05
2023-03-28 04:14:10,716 44k INFO Train Epoch: 268 [31%]
2023-03-28 04:14:10,717 44k INFO Losses: [2.253509998321533, 2.31909441947937, 16.088714599609375, 17.910062789916992, 0.8671415448188782], step: 194600, lr: 9.665694505040515e-05
2023-03-28 04:15:20,270 44k INFO Train Epoch: 268 [58%]
2023-03-28 04:15:20,270 44k INFO Losses: [2.5568134784698486, 2.077049493789673, 13.63888931274414, 17.865493774414062, 1.0837408304214478], step: 194800, lr: 9.665694505040515e-05
2023-03-28 04:16:29,642 44k INFO Train Epoch: 268 [86%]
2023-03-28 04:16:29,643 44k INFO Losses: [2.378945827484131, 2.5943102836608887, 18.633068084716797, 18.60083770751953, 1.4156346321105957], step: 195000, lr: 9.665694505040515e-05
2023-03-28 04:16:32,731 44k INFO Saving model and optimizer state at iteration 268 to ./logs\44k\G_195000.pth
2023-03-28 04:16:33,444 44k INFO Saving model and optimizer state at iteration 268 to ./logs\44k\D_195000.pth
2023-03-28 04:16:34,114 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_192000.pth
2023-03-28 04:16:34,154 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_192000.pth
2023-03-28 04:17:09,661 44k INFO ====> Epoch: 268, cost 265.87 s
2023-03-28 04:17:52,092 44k INFO Train Epoch: 269 [13%]
2023-03-28 04:17:52,092 44k INFO Losses: [2.5575523376464844, 2.1585593223571777, 9.076245307922363, 12.664651870727539, 0.8137844800949097], step: 195200, lr: 9.664486293227385e-05
2023-03-28 04:19:01,214 44k INFO Train Epoch: 269 [41%]
2023-03-28 04:19:01,214 44k INFO Losses: [2.643162488937378, 2.197464942932129, 11.986513137817383, 15.682941436767578, 0.9005952477455139], step: 195400, lr: 9.664486293227385e-05
2023-03-28 04:20:10,463 44k INFO Train Epoch: 269 [68%]
2023-03-28 04:20:10,464 44k INFO Losses: [2.407435417175293, 2.6825478076934814, 12.931807518005371, 19.27619743347168, 0.9032448530197144], step: 195600, lr: 9.664486293227385e-05
2023-03-28 04:21:19,802 44k INFO Train Epoch: 269 [96%]
2023-03-28 04:21:19,802 44k INFO Losses: [2.7236616611480713, 2.2400412559509277, 9.667281150817871, 13.972807884216309, 0.7027422189712524], step: 195800, lr: 9.664486293227385e-05
2023-03-28 04:21:30,638 44k INFO ====> Epoch: 269, cost 260.98 s
2023-03-28 04:22:38,333 44k INFO Train Epoch: 270 [23%]
2023-03-28 04:22:38,333 44k INFO Losses: [2.3532330989837646, 2.3622140884399414, 13.186447143554688, 17.76287841796875, 0.724066436290741], step: 196000, lr: 9.663278232440732e-05
2023-03-28 04:22:41,413 44k INFO Saving model and optimizer state at iteration 270 to ./logs\44k\G_196000.pth
2023-03-28 04:22:42,127 44k INFO Saving model and optimizer state at iteration 270 to ./logs\44k\D_196000.pth
2023-03-28 04:22:42,791 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_193000.pth
2023-03-28 04:22:42,826 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_193000.pth
2023-03-28 04:23:51,526 44k INFO Train Epoch: 270 [51%]
2023-03-28 04:23:51,527 44k INFO Losses: [2.40840220451355, 2.260169506072998, 8.795619010925293, 14.126694679260254, 1.073422908782959], step: 196200, lr: 9.663278232440732e-05
2023-03-28 04:25:01,110 44k INFO Train Epoch: 270 [78%]
2023-03-28 04:25:01,110 44k INFO Losses: [2.5106682777404785, 2.5021941661834717, 9.909200668334961, 18.26336097717285, 0.977453351020813], step: 196400, lr: 9.663278232440732e-05
2023-03-28 04:25:56,275 44k INFO ====> Epoch: 270, cost 265.64 s
2023-03-28 04:26:19,410 44k INFO Train Epoch: 271 [5%]
2023-03-28 04:26:19,411 44k INFO Losses: [2.096144199371338, 2.6977672576904297, 10.668758392333984, 15.772719383239746, 0.7150923609733582], step: 196600, lr: 9.662070322661676e-05
2023-03-28 04:27:28,968 44k INFO Train Epoch: 271 [33%]
2023-03-28 04:27:28,968 44k INFO Losses: [1.9684059619903564, 2.931845188140869, 13.403560638427734, 16.45646095275879, 0.6032599806785583], step: 196800, lr: 9.662070322661676e-05
2023-03-28 04:28:38,379 44k INFO Train Epoch: 271 [60%]
2023-03-28 04:28:38,379 44k INFO Losses: [2.334475040435791, 2.4849233627319336, 13.661833763122559, 18.189332962036133, 0.8287876844406128], step: 197000, lr: 9.662070322661676e-05
2023-03-28 04:28:41,351 44k INFO Saving model and optimizer state at iteration 271 to ./logs\44k\G_197000.pth
2023-03-28 04:28:42,062 44k INFO Saving model and optimizer state at iteration 271 to ./logs\44k\D_197000.pth
2023-03-28 04:28:42,736 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_194000.pth
2023-03-28 04:28:42,783 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_194000.pth
2023-03-28 04:29:52,067 44k INFO Train Epoch: 271 [88%]
2023-03-28 04:29:52,067 44k INFO Losses: [2.35487699508667, 2.6348657608032227, 14.91211986541748, 18.533388137817383, 1.1622872352600098], step: 197200, lr: 9.662070322661676e-05
2023-03-28 04:30:22,439 44k INFO ====> Epoch: 271, cost 266.16 s
2023-03-28 04:31:10,493 44k INFO Train Epoch: 272 [15%]
2023-03-28 04:31:10,494 44k INFO Losses: [1.9651436805725098, 2.8589823246002197, 10.072610855102539, 11.121920585632324, 0.9420615434646606], step: 197400, lr: 9.660862563871342e-05
2023-03-28 04:32:19,644 44k INFO Train Epoch: 272 [43%]
2023-03-28 04:32:19,645 44k INFO Losses: [2.3678650856018066, 2.4393672943115234, 15.166237831115723, 17.53807258605957, 1.1759573221206665], step: 197600, lr: 9.660862563871342e-05
2023-03-28 04:33:29,128 44k INFO Train Epoch: 272 [70%]
2023-03-28 04:33:29,128 44k INFO Losses: [2.2512400150299072, 2.605560302734375, 12.91100025177002, 18.662782669067383, 0.9627946615219116], step: 197800, lr: 9.660862563871342e-05
2023-03-28 04:34:38,496 44k INFO Train Epoch: 272 [98%]
2023-03-28 04:34:38,496 44k INFO Losses: [2.050487756729126, 3.2750399112701416, 8.996234893798828, 8.800654411315918, 0.7794025540351868], step: 198000, lr: 9.660862563871342e-05
2023-03-28 04:34:41,467 44k INFO Saving model and optimizer state at iteration 272 to ./logs\44k\G_198000.pth
2023-03-28 04:34:42,174 44k INFO Saving model and optimizer state at iteration 272 to ./logs\44k\D_198000.pth
2023-03-28 04:34:42,854 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_195000.pth
2023-03-28 04:34:42,883 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_195000.pth
2023-03-28 04:34:48,359 44k INFO ====> Epoch: 272, cost 265.92 s
2023-03-28 04:36:01,615 44k INFO Train Epoch: 273 [25%]
2023-03-28 04:36:01,616 44k INFO Losses: [2.503955841064453, 2.3337275981903076, 11.608861923217773, 15.387184143066406, 0.7047857046127319], step: 198200, lr: 9.659654956050859e-05
2023-03-28 04:37:10,732 44k INFO Train Epoch: 273 [53%]
2023-03-28 04:37:10,733 44k INFO Losses: [2.362663984298706, 2.3012232780456543, 13.305954933166504, 17.361417770385742, 0.8476890921592712], step: 198400, lr: 9.659654956050859e-05
2023-03-28 04:38:20,226 44k INFO Train Epoch: 273 [80%]
2023-03-28 04:38:20,226 44k INFO Losses: [2.73233699798584, 2.32773756980896, 11.575587272644043, 18.051294326782227, 1.291455626487732], step: 198600, lr: 9.659654956050859e-05
2023-03-28 04:39:10,034 44k INFO ====> Epoch: 273, cost 261.67 s
2023-03-28 04:39:38,737 44k INFO Train Epoch: 274 [8%]
2023-03-28 04:39:38,737 44k INFO Losses: [2.3545241355895996, 2.5483858585357666, 8.605266571044922, 10.619508743286133, 0.9092255234718323], step: 198800, lr: 9.658447499181352e-05
2023-03-28 04:40:48,362 44k INFO Train Epoch: 274 [35%]
2023-03-28 04:40:48,362 44k INFO Losses: [2.498325824737549, 2.4084279537200928, 9.201712608337402, 15.680262565612793, 0.8603453040122986], step: 199000, lr: 9.658447499181352e-05
2023-03-28 04:40:51,299 44k INFO Saving model and optimizer state at iteration 274 to ./logs\44k\G_199000.pth
2023-03-28 04:40:52,061 44k INFO Saving model and optimizer state at iteration 274 to ./logs\44k\D_199000.pth
2023-03-28 04:40:52,743 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_196000.pth
2023-03-28 04:40:52,772 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_196000.pth
2023-03-28 04:42:02,298 44k INFO Train Epoch: 274 [63%]
2023-03-28 04:42:02,298 44k INFO Losses: [2.357163190841675, 2.3617606163024902, 13.848677635192871, 16.65113639831543, 0.9496592879295349], step: 199200, lr: 9.658447499181352e-05
2023-03-28 04:43:12,012 44k INFO Train Epoch: 274 [90%]
2023-03-28 04:43:12,013 44k INFO Losses: [2.3527932167053223, 2.576408863067627, 14.263056755065918, 19.880643844604492, 1.134117603302002], step: 199400, lr: 9.658447499181352e-05
2023-03-28 04:43:36,877 44k INFO ====> Epoch: 274, cost 266.84 s
2023-03-28 04:44:30,869 44k INFO Train Epoch: 275 [18%]
2023-03-28 04:44:30,869 44k INFO Losses: [2.251126289367676, 2.543705940246582, 13.027837753295898, 15.883216857910156, 0.8770124316215515], step: 199600, lr: 9.657240193243954e-05
2023-03-28 04:45:40,402 44k INFO Train Epoch: 275 [45%]
2023-03-28 04:45:40,403 44k INFO Losses: [2.540471315383911, 2.2822299003601074, 10.64939022064209, 17.200244903564453, 0.9570655822753906], step: 199800, lr: 9.657240193243954e-05
2023-03-28 04:46:50,216 44k INFO Train Epoch: 275 [73%]
2023-03-28 04:46:50,217 44k INFO Losses: [2.379667282104492, 2.2554190158843994, 16.218610763549805, 17.845775604248047, 0.9798967838287354], step: 200000, lr: 9.657240193243954e-05
2023-03-28 04:46:53,193 44k INFO Saving model and optimizer state at iteration 275 to ./logs\44k\G_200000.pth
2023-03-28 04:46:53,903 44k INFO Saving model and optimizer state at iteration 275 to ./logs\44k\D_200000.pth
2023-03-28 04:46:54,588 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_197000.pth
2023-03-28 04:46:54,618 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_197000.pth
2023-03-28 04:48:04,187 44k INFO ====> Epoch: 275, cost 267.31 s
2023-03-28 04:48:13,421 44k INFO Train Epoch: 276 [0%]
2023-03-28 04:48:13,421 44k INFO Losses: [2.350393295288086, 2.9228668212890625, 16.23847770690918, 18.030630111694336, 1.5371601581573486], step: 200200, lr: 9.656033038219798e-05
2023-03-28 04:49:23,613 44k INFO Train Epoch: 276 [27%]
2023-03-28 04:49:23,614 44k INFO Losses: [2.378777027130127, 2.0751001834869385, 9.41419792175293, 11.498671531677246, 1.0545626878738403], step: 200400, lr: 9.656033038219798e-05
2023-03-28 04:50:33,200 44k INFO Train Epoch: 276 [55%]
2023-03-28 04:50:33,200 44k INFO Losses: [2.539686679840088, 2.3139090538024902, 14.314101219177246, 17.701683044433594, 0.8595965504646301], step: 200600, lr: 9.656033038219798e-05
2023-03-28 04:51:43,044 44k INFO Train Epoch: 276 [82%]
2023-03-28 04:51:43,044 44k INFO Losses: [2.525813579559326, 2.2622361183166504, 13.019614219665527, 16.116840362548828, 0.7668913006782532], step: 200800, lr: 9.656033038219798e-05
2023-03-28 04:52:27,594 44k INFO ====> Epoch: 276, cost 263.41 s
2023-03-28 04:53:02,216 44k INFO Train Epoch: 277 [10%]
2023-03-28 04:53:02,216 44k INFO Losses: [2.3911657333374023, 2.331515073776245, 15.147747993469238, 19.23215675354004, 0.9887709617614746], step: 201000, lr: 9.65482603409002e-05
2023-03-28 04:53:05,231 44k INFO Saving model and optimizer state at iteration 277 to ./logs\44k\G_201000.pth
2023-03-28 04:53:05,953 44k INFO Saving model and optimizer state at iteration 277 to ./logs\44k\D_201000.pth
2023-03-28 04:53:06,622 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_198000.pth
2023-03-28 04:53:06,651 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_198000.pth
2023-03-28 04:54:16,536 44k INFO Train Epoch: 277 [37%]
2023-03-28 04:54:16,536 44k INFO Losses: [2.6023833751678467, 2.276693820953369, 10.20368766784668, 13.671032905578613, 0.6593167185783386], step: 201200, lr: 9.65482603409002e-05
2023-03-28 04:55:26,337 44k INFO Train Epoch: 277 [65%]
2023-03-28 04:55:26,338 44k INFO Losses: [2.465221643447876, 2.524965763092041, 14.617851257324219, 20.155027389526367, 1.244929313659668], step: 201400, lr: 9.65482603409002e-05
2023-03-28 04:56:36,388 44k INFO Train Epoch: 277 [92%]
2023-03-28 04:56:36,389 44k INFO Losses: [2.4580249786376953, 2.3191301822662354, 11.035228729248047, 16.356657028198242, 1.2073358297348022], step: 201600, lr: 9.65482603409002e-05
2023-03-28 04:56:55,669 44k INFO ====> Epoch: 277, cost 268.08 s
2023-03-28 04:57:55,317 44k INFO Train Epoch: 278 [20%]
2023-03-28 04:57:55,317 44k INFO Losses: [2.3662140369415283, 2.1651759147644043, 12.01344108581543, 14.798890113830566, 1.1087651252746582], step: 201800, lr: 9.653619180835758e-05
2023-03-28 04:59:04,586 44k INFO Train Epoch: 278 [47%]
2023-03-28 04:59:04,587 44k INFO Losses: [2.515076160430908, 2.16445255279541, 8.919788360595703, 12.518628120422363, 0.7881181836128235], step: 202000, lr: 9.653619180835758e-05
2023-03-28 04:59:07,535 44k INFO Saving model and optimizer state at iteration 278 to ./logs\44k\G_202000.pth
2023-03-28 04:59:08,241 44k INFO Saving model and optimizer state at iteration 278 to ./logs\44k\D_202000.pth
2023-03-28 04:59:08,909 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_199000.pth
2023-03-28 04:59:08,939 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_199000.pth
2023-03-28 05:00:18,715 44k INFO Train Epoch: 278 [75%]
2023-03-28 05:00:18,716 44k INFO Losses: [2.2898106575012207, 2.6022815704345703, 12.744950294494629, 15.501956939697266, 1.047310471534729], step: 202200, lr: 9.653619180835758e-05
2023-03-28 05:01:22,569 44k INFO ====> Epoch: 278, cost 266.90 s
2023-03-28 05:01:37,437 44k INFO Train Epoch: 279 [2%]
2023-03-28 05:01:37,438 44k INFO Losses: [2.336411476135254, 2.299649715423584, 11.605513572692871, 15.633959770202637, 1.0710512399673462], step: 202400, lr: 9.652412478438153e-05
2023-03-28 05:02:47,399 44k INFO Train Epoch: 279 [30%]
2023-03-28 05:02:47,400 44k INFO Losses: [2.272223949432373, 2.5401158332824707, 16.40498161315918, 17.02773666381836, 1.1969197988510132], step: 202600, lr: 9.652412478438153e-05
2023-03-28 05:03:57,240 44k INFO Train Epoch: 279 [57%]
2023-03-28 05:03:57,240 44k INFO Losses: [2.494779109954834, 2.3887696266174316, 10.78056812286377, 15.576316833496094, 0.9431734681129456], step: 202800, lr: 9.652412478438153e-05
2023-03-28 05:05:07,121 44k INFO Train Epoch: 279 [85%]
2023-03-28 05:05:07,121 44k INFO Losses: [2.90548038482666, 1.9203147888183594, 6.824577808380127, 9.056280136108398, 0.7964504957199097], step: 203000, lr: 9.652412478438153e-05
2023-03-28 05:05:10,182 44k INFO Saving model and optimizer state at iteration 279 to ./logs\44k\G_203000.pth
2023-03-28 05:05:10,895 44k INFO Saving model and optimizer state at iteration 279 to ./logs\44k\D_203000.pth
2023-03-28 05:05:11,578 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_200000.pth
2023-03-28 05:05:11,607 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_200000.pth
2023-03-28 05:05:50,368 44k INFO ====> Epoch: 279, cost 267.80 s
2023-03-28 05:06:30,495 44k INFO Train Epoch: 280 [12%]
2023-03-28 05:06:30,495 44k INFO Losses: [2.428819179534912, 2.30607533454895, 12.388322830200195, 17.897722244262695, 0.8078226447105408], step: 203200, lr: 9.651205926878348e-05
2023-03-28 05:07:40,354 44k INFO Train Epoch: 280 [40%]
2023-03-28 05:07:40,354 44k INFO Losses: [2.4140336513519287, 2.5200953483581543, 8.878617286682129, 13.72534465789795, 1.1305820941925049], step: 203400, lr: 9.651205926878348e-05
2023-03-28 05:08:50,205 44k INFO Train Epoch: 280 [67%]
2023-03-28 05:08:50,206 44k INFO Losses: [2.5989582538604736, 2.4887850284576416, 12.655619621276855, 15.820817947387695, 1.2733381986618042], step: 203600, lr: 9.651205926878348e-05
2023-03-28 05:10:00,349 44k INFO Train Epoch: 280 [95%]
2023-03-28 05:10:00,350 44k INFO Losses: [2.582841396331787, 2.113783359527588, 12.358428001403809, 15.852889060974121, 0.852738082408905], step: 203800, lr: 9.651205926878348e-05
2023-03-28 05:10:14,091 44k INFO ====> Epoch: 280, cost 263.72 s
2023-03-28 05:11:19,632 44k INFO Train Epoch: 281 [22%]
2023-03-28 05:11:19,632 44k INFO Losses: [2.6550092697143555, 2.2212729454040527, 7.553178787231445, 11.846031188964844, 0.7085527181625366], step: 204000, lr: 9.649999526137489e-05
2023-03-28 05:11:22,672 44k INFO Saving model and optimizer state at iteration 281 to ./logs\44k\G_204000.pth
2023-03-28 05:11:23,382 44k INFO Saving model and optimizer state at iteration 281 to ./logs\44k\D_204000.pth
2023-03-28 05:11:24,071 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_201000.pth
2023-03-28 05:11:24,117 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_201000.pth
2023-03-28 05:12:33,576 44k INFO Train Epoch: 281 [49%]
2023-03-28 05:12:33,576 44k INFO Losses: [2.258439540863037, 2.370858669281006, 16.43749237060547, 18.575712203979492, 1.575095295906067], step: 204200, lr: 9.649999526137489e-05
2023-03-28 05:13:43,669 44k INFO Train Epoch: 281 [77%]
2023-03-28 05:13:43,670 44k INFO Losses: [2.4298019409179688, 2.4676942825317383, 11.439505577087402, 13.943495750427246, 1.5814073085784912], step: 204400, lr: 9.649999526137489e-05
2023-03-28 05:14:42,185 44k INFO ====> Epoch: 281, cost 268.09 s
2023-03-28 05:15:02,720 44k INFO Train Epoch: 282 [4%]
2023-03-28 05:15:02,720 44k INFO Losses: [2.3625354766845703, 2.21283221244812, 11.5896577835083, 15.687667846679688, 0.5748199820518494], step: 204600, lr: 9.64879327619672e-05
2023-03-28 05:16:13,261 44k INFO Train Epoch: 282 [32%]
2023-03-28 05:16:13,261 44k INFO Losses: [2.2964718341827393, 2.454068660736084, 12.31747817993164, 18.453824996948242, 1.2132827043533325], step: 204800, lr: 9.64879327619672e-05
2023-03-28 05:17:23,646 44k INFO Train Epoch: 282 [59%]
2023-03-28 05:17:23,646 44k INFO Losses: [2.0021889209747314, 2.8235788345336914, 15.508423805236816, 16.8978328704834, 0.9028505682945251], step: 205000, lr: 9.64879327619672e-05
2023-03-28 05:17:26,694 44k INFO Saving model and optimizer state at iteration 282 to ./logs\44k\G_205000.pth
2023-03-28 05:17:27,404 44k INFO Saving model and optimizer state at iteration 282 to ./logs\44k\D_205000.pth
2023-03-28 05:17:28,118 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_202000.pth
2023-03-28 05:17:28,148 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_202000.pth
2023-03-28 05:18:38,432 44k INFO Train Epoch: 282 [87%]
2023-03-28 05:18:38,432 44k INFO Losses: [2.3709921836853027, 2.470949411392212, 16.29347038269043, 18.404680252075195, 1.4168444871902466], step: 205200, lr: 9.64879327619672e-05
2023-03-28 05:19:11,881 44k INFO ====> Epoch: 282, cost 269.70 s
2023-03-28 05:19:57,658 44k INFO Train Epoch: 283 [14%]
2023-03-28 05:19:57,659 44k INFO Losses: [2.49792218208313, 2.22166109085083, 9.786541938781738, 17.30653190612793, 0.7549281120300293], step: 205400, lr: 9.647587177037196e-05
2023-03-28 05:21:07,657 44k INFO Train Epoch: 283 [42%]
2023-03-28 05:21:07,657 44k INFO Losses: [2.480185031890869, 2.250767230987549, 9.336939811706543, 14.246247291564941, 1.2030670642852783], step: 205600, lr: 9.647587177037196e-05
2023-03-28 05:22:18,120 44k INFO Train Epoch: 283 [69%]
2023-03-28 05:22:18,121 44k INFO Losses: [2.6186625957489014, 2.224289655685425, 12.628334999084473, 15.962329864501953, 0.945244550704956], step: 205800, lr: 9.647587177037196e-05
2023-03-28 05:23:28,636 44k INFO Train Epoch: 283 [97%]
2023-03-28 05:23:28,636 44k INFO Losses: [2.6407485008239746, 2.076115608215332, 11.100504875183105, 15.5497407913208, 0.6511049866676331], step: 206000, lr: 9.647587177037196e-05
2023-03-28 05:23:31,615 44k INFO Saving model and optimizer state at iteration 283 to ./logs\44k\G_206000.pth
2023-03-28 05:23:32,330 44k INFO Saving model and optimizer state at iteration 283 to ./logs\44k\D_206000.pth
2023-03-28 05:23:33,054 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_203000.pth
2023-03-28 05:23:33,091 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_203000.pth
2023-03-28 05:23:41,354 44k INFO ====> Epoch: 283, cost 269.47 s
2023-03-28 05:24:52,747 44k INFO Train Epoch: 284 [24%]
2023-03-28 05:24:52,748 44k INFO Losses: [2.5833446979522705, 2.4117088317871094, 8.776638984680176, 12.452888488769531, 1.225576400756836], step: 206200, lr: 9.646381228640066e-05
2023-03-28 05:26:02,867 44k INFO Train Epoch: 284 [52%]
2023-03-28 05:26:02,867 44k INFO Losses: [2.305196762084961, 2.2725136280059814, 11.89309310913086, 15.14962100982666, 0.8300768733024597], step: 206400, lr: 9.646381228640066e-05
2023-03-28 05:27:13,421 44k INFO Train Epoch: 284 [79%]
2023-03-28 05:27:13,421 44k INFO Losses: [2.407525062561035, 2.4193127155303955, 14.915199279785156, 17.345794677734375, 1.3772149085998535], step: 206600, lr: 9.646381228640066e-05
2023-03-28 05:28:06,702 44k INFO ====> Epoch: 284, cost 265.35 s
2023-03-28 05:28:32,870 44k INFO Train Epoch: 285 [7%]
2023-03-28 05:28:32,871 44k INFO Losses: [2.593308687210083, 2.1461751461029053, 11.046256065368652, 13.985614776611328, 1.182348608970642], step: 206800, lr: 9.645175430986486e-05
2023-03-28 05:29:43,457 44k INFO Train Epoch: 285 [34%]
2023-03-28 05:29:43,457 44k INFO Losses: [2.5467729568481445, 2.3274085521698, 12.487383842468262, 16.631702423095703, 1.5293219089508057], step: 207000, lr: 9.645175430986486e-05
2023-03-28 05:29:46,467 44k INFO Saving model and optimizer state at iteration 285 to ./logs\44k\G_207000.pth
2023-03-28 05:29:47,217 44k INFO Saving model and optimizer state at iteration 285 to ./logs\44k\D_207000.pth
2023-03-28 05:29:47,880 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_204000.pth
2023-03-28 05:29:47,909 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_204000.pth
2023-03-28 05:30:58,300 44k INFO Train Epoch: 285 [62%]
2023-03-28 05:30:58,300 44k INFO Losses: [2.493130683898926, 2.296337604522705, 13.100621223449707, 13.038569450378418, 0.7659416198730469], step: 207200, lr: 9.645175430986486e-05
2023-03-28 05:32:08,834 44k INFO Train Epoch: 285 [89%]
2023-03-28 05:32:08,834 44k INFO Losses: [2.7557930946350098, 1.910664439201355, 5.692241668701172, 13.300250053405762, 0.7480587959289551], step: 207400, lr: 9.645175430986486e-05
2023-03-28 05:32:36,806 44k INFO ====> Epoch: 285, cost 270.10 s
2023-03-28 05:33:28,392 44k INFO Train Epoch: 286 [16%]
2023-03-28 05:33:28,392 44k INFO Losses: [2.5185117721557617, 2.1715171337127686, 8.592020988464355, 13.086750984191895, 1.3414157629013062], step: 207600, lr: 9.643969784057613e-05
2023-03-28 05:34:38,853 44k INFO Train Epoch: 286 [44%]
2023-03-28 05:34:38,853 44k INFO Losses: [2.407095432281494, 2.4392409324645996, 10.074203491210938, 16.23967933654785, 0.9857593178749084], step: 207800, lr: 9.643969784057613e-05
2023-03-28 05:35:49,519 44k INFO Train Epoch: 286 [71%]
2023-03-28 05:35:49,520 44k INFO Losses: [2.3129184246063232, 2.6718010902404785, 12.505002975463867, 15.685410499572754, 0.7614068984985352], step: 208000, lr: 9.643969784057613e-05
2023-03-28 05:35:52,537 44k INFO Saving model and optimizer state at iteration 286 to ./logs\44k\G_208000.pth
2023-03-28 05:35:53,248 44k INFO Saving model and optimizer state at iteration 286 to ./logs\44k\D_208000.pth
2023-03-28 05:35:53,922 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_205000.pth
2023-03-28 05:35:53,965 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_205000.pth
2023-03-28 05:37:04,687 44k INFO Train Epoch: 286 [99%]
2023-03-28 05:37:04,687 44k INFO Losses: [2.696178436279297, 2.4124698638916016, 15.006477355957031, 18.124710083007812, 0.9613689184188843], step: 208200, lr: 9.643969784057613e-05
2023-03-28 05:37:07,558 44k INFO ====> Epoch: 286, cost 270.75 s
2023-03-28 05:38:24,775 44k INFO Train Epoch: 287 [26%]
2023-03-28 05:38:24,776 44k INFO Losses: [2.3887035846710205, 2.4628872871398926, 10.980188369750977, 14.641170501708984, 0.8609405159950256], step: 208400, lr: 9.642764287834605e-05
2023-03-28 05:39:35,228 44k INFO Train Epoch: 287 [54%]
2023-03-28 05:39:35,229 44k INFO Losses: [2.6768720149993896, 1.9018417596817017, 11.537473678588867, 13.299372673034668, 0.8278539776802063], step: 208600, lr: 9.642764287834605e-05
2023-03-28 05:40:45,784 44k INFO Train Epoch: 287 [81%]
2023-03-28 05:40:45,784 44k INFO Losses: [2.376481533050537, 2.4081249237060547, 11.759926795959473, 16.552675247192383, 0.6098418831825256], step: 208800, lr: 9.642764287834605e-05
2023-03-28 05:41:33,779 44k INFO ====> Epoch: 287, cost 266.22 s
2023-03-28 05:42:05,657 44k INFO Train Epoch: 288 [9%]
2023-03-28 05:42:05,658 44k INFO Losses: [2.456772804260254, 2.2160873413085938, 9.489032745361328, 13.505303382873535, 1.0383493900299072], step: 209000, lr: 9.641558942298625e-05
2023-03-28 05:42:08,633 44k INFO Saving model and optimizer state at iteration 288 to ./logs\44k\G_209000.pth
2023-03-28 05:42:09,344 44k INFO Saving model and optimizer state at iteration 288 to ./logs\44k\D_209000.pth
2023-03-28 05:42:10,033 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_206000.pth
2023-03-28 05:42:10,074 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_206000.pth
2023-03-28 05:43:20,933 44k INFO Train Epoch: 288 [36%]
2023-03-28 05:43:20,934 44k INFO Losses: [2.456937551498413, 2.241584539413452, 13.872239112854004, 15.366440773010254, 1.1618925333023071], step: 209200, lr: 9.641558942298625e-05
2023-03-28 05:44:31,615 44k INFO Train Epoch: 288 [64%]
2023-03-28 05:44:31,615 44k INFO Losses: [2.5272908210754395, 2.0233113765716553, 11.304187774658203, 11.501331329345703, 0.47437652945518494], step: 209400, lr: 9.641558942298625e-05
2023-03-28 05:45:42,618 44k INFO Train Epoch: 288 [91%]
2023-03-28 05:45:42,619 44k INFO Losses: [2.4628894329071045, 2.5866334438323975, 11.610809326171875, 15.409666061401367, 1.5031287670135498], step: 209600, lr: 9.641558942298625e-05
2023-03-28 05:46:05,078 44k INFO ====> Epoch: 288, cost 271.30 s
2023-03-28 05:47:02,562 44k INFO Train Epoch: 289 [19%]
2023-03-28 05:47:02,562 44k INFO Losses: [2.4574055671691895, 2.1851184368133545, 9.210878372192383, 12.976217269897461, 0.7808789014816284], step: 209800, lr: 9.640353747430838e-05
2023-03-28 05:48:13,410 44k INFO Train Epoch: 289 [46%]
2023-03-28 05:48:13,411 44k INFO Losses: [2.365555763244629, 2.3453478813171387, 13.62033748626709, 16.83810806274414, 0.6058693528175354], step: 210000, lr: 9.640353747430838e-05
2023-03-28 05:48:16,344 44k INFO Saving model and optimizer state at iteration 289 to ./logs\44k\G_210000.pth
2023-03-28 05:48:17,054 44k INFO Saving model and optimizer state at iteration 289 to ./logs\44k\D_210000.pth
2023-03-28 05:48:17,720 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_207000.pth
2023-03-28 05:48:17,760 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_207000.pth
2023-03-28 05:49:28,670 44k INFO Train Epoch: 289 [74%]
2023-03-28 05:49:28,670 44k INFO Losses: [2.393150568008423, 2.5334715843200684, 13.497852325439453, 18.939899444580078, 1.0299856662750244], step: 210200, lr: 9.640353747430838e-05
2023-03-28 05:50:36,575 44k INFO ====> Epoch: 289, cost 271.50 s
2023-03-28 05:50:48,542 44k INFO Train Epoch: 290 [1%]
2023-03-28 05:50:48,542 44k INFO Losses: [2.4617700576782227, 2.128793478012085, 12.802021980285645, 16.631420135498047, 0.9946692585945129], step: 210400, lr: 9.639148703212408e-05
2023-03-28 05:51:59,747 44k INFO Train Epoch: 290 [29%]
2023-03-28 05:51:59,748 44k INFO Losses: [2.38358736038208, 2.4359822273254395, 10.463275909423828, 15.05078411102295, 0.9008592963218689], step: 210600, lr: 9.639148703212408e-05
2023-03-28 05:53:11,239 44k INFO Train Epoch: 290 [56%]
2023-03-28 05:53:11,240 44k INFO Losses: [2.2083637714385986, 2.9505577087402344, 9.521239280700684, 10.897721290588379, 1.177430272102356], step: 210800, lr: 9.639148703212408e-05
2023-03-28 05:54:22,239 44k INFO Train Epoch: 290 [84%]
2023-03-28 05:54:22,239 44k INFO Losses: [2.3677759170532227, 2.527074098587036, 11.183000564575195, 18.09722900390625, 0.817413866519928], step: 211000, lr: 9.639148703212408e-05
2023-03-28 05:54:25,320 44k INFO Saving model and optimizer state at iteration 290 to ./logs\44k\G_211000.pth
2023-03-28 05:54:26,036 44k INFO Saving model and optimizer state at iteration 290 to ./logs\44k\D_211000.pth
2023-03-28 05:54:26,691 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_208000.pth
2023-03-28 05:54:26,728 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_208000.pth 2023-03-28 05:55:09,069 44k INFO ====> Epoch: 290, cost 272.49 s 2023-03-28 05:55:46,742 44k INFO Train Epoch: 291 [11%] 2023-03-28 05:55:46,743 44k INFO Losses: [2.6244399547576904, 2.336239814758301, 9.441961288452148, 18.412981033325195, 1.489073634147644], step: 211200, lr: 9.637943809624507e-05 2023-03-28 05:56:57,781 44k INFO Train Epoch: 291 [38%] 2023-03-28 05:56:57,782 44k INFO Losses: [2.514711856842041, 2.12247633934021, 12.201294898986816, 17.242786407470703, 1.0061821937561035], step: 211400, lr: 9.637943809624507e-05 2023-03-28 05:58:08,860 44k INFO Train Epoch: 291 [66%] 2023-03-28 05:58:08,861 44k INFO Losses: [2.6047067642211914, 1.979585886001587, 8.740665435791016, 13.600090026855469, 1.0038498640060425], step: 211600, lr: 9.637943809624507e-05 2023-03-28 05:59:20,222 44k INFO Train Epoch: 291 [93%] 2023-03-28 05:59:20,222 44k INFO Losses: [2.539656639099121, 2.083559513092041, 10.19057846069336, 13.080375671386719, 1.3683733940124512], step: 211800, lr: 9.637943809624507e-05 2023-03-28 05:59:37,059 44k INFO ====> Epoch: 291, cost 267.99 s 2023-03-28 06:00:40,456 44k INFO Train Epoch: 292 [21%] 2023-03-28 06:00:40,457 44k INFO Losses: [2.507915496826172, 2.2686729431152344, 10.99413013458252, 14.744837760925293, 0.7885242700576782], step: 212000, lr: 9.636739066648303e-05 2023-03-28 06:00:43,472 44k INFO Saving model and optimizer state at iteration 292 to ./logs\44k\G_212000.pth 2023-03-28 06:00:44,189 44k INFO Saving model and optimizer state at iteration 292 to ./logs\44k\D_212000.pth 2023-03-28 06:00:44,858 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_209000.pth 2023-03-28 06:00:44,903 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_209000.pth 2023-03-28 06:01:55,645 44k INFO Train Epoch: 292 [48%] 2023-03-28 06:01:55,645 44k INFO Losses: [2.5926952362060547, 2.2156927585601807, 8.167703628540039, 14.524674415588379, 0.619996964931488], step: 212200, lr: 9.636739066648303e-05 2023-03-28 06:03:07,039 44k INFO Train Epoch: 292 [76%] 2023-03-28 06:03:07,040 44k INFO Losses: [2.491163730621338, 2.5760908126831055, 11.653103828430176, 17.974008560180664, 0.8450263738632202], step: 212400, lr: 9.636739066648303e-05 2023-03-28 06:04:09,569 44k INFO ====> Epoch: 292, cost 272.51 s 2023-03-28 06:04:27,368 44k INFO Train Epoch: 293 [3%] 2023-03-28 06:04:27,368 44k INFO Losses: [2.3396222591400146, 2.4390807151794434, 12.045276641845703, 16.69121742248535, 1.2830411195755005], step: 212600, lr: 9.635534474264972e-05 2023-03-28 06:05:38,848 44k INFO Train Epoch: 293 [31%] 2023-03-28 06:05:38,848 44k INFO Losses: [2.403202533721924, 2.6687357425689697, 13.82166576385498, 18.4494571685791, 0.7442072629928589], step: 212800, lr: 9.635534474264972e-05 2023-03-28 06:06:50,332 44k INFO Train Epoch: 293 [58%] 2023-03-28 06:06:50,332 44k INFO Losses: [2.6287434101104736, 2.142791271209717, 10.470479965209961, 13.498828887939453, 1.1979069709777832], step: 213000, lr: 9.635534474264972e-05 2023-03-28 06:06:53,363 44k INFO Saving model and optimizer state at iteration 293 to ./logs\44k\G_213000.pth 2023-03-28 06:06:54,071 44k INFO Saving model and optimizer state at iteration 293 to ./logs\44k\D_213000.pth 2023-03-28 06:06:54,696 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_210000.pth 2023-03-28 06:06:54,734 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_210000.pth 2023-03-28 06:08:06,229 44k INFO Train Epoch: 293 [86%] 2023-03-28 06:08:06,230 44k INFO Losses: [2.1887974739074707, 2.5030529499053955, 15.09897518157959, 18.081661224365234, 1.027443289756775], step: 213200, lr: 9.635534474264972e-05 2023-03-28 06:08:43,167 44k INFO ====> Epoch: 293, cost 273.60 s 2023-03-28 06:09:26,664 44k INFO Train Epoch: 294 [13%] 2023-03-28 06:09:26,664 44k INFO Losses: [2.192896842956543, 2.5180211067199707, 15.760004997253418, 17.839139938354492, 0.7925189733505249], step: 213400, lr: 9.634330032455689e-05 2023-03-28 06:10:37,893 44k INFO Train Epoch: 294 [41%] 2023-03-28 06:10:37,893 44k INFO Losses: [2.1793386936187744, 2.7479872703552246, 14.777464866638184, 19.20068359375, 1.2114428281784058], step: 213600, lr: 9.634330032455689e-05 2023-03-28 06:11:49,365 44k INFO Train Epoch: 294 [68%] 2023-03-28 06:11:49,365 44k INFO Losses: [2.607598304748535, 2.2226295471191406, 9.293622970581055, 15.708688735961914, 1.1404801607131958], step: 213800, lr: 9.634330032455689e-05 2023-03-28 06:13:01,231 44k INFO Train Epoch: 294 [96%] 2023-03-28 06:13:01,231 44k INFO Losses: [2.3408594131469727, 2.257526397705078, 7.942895889282227, 11.029743194580078, 0.9897887706756592], step: 214000, lr: 9.634330032455689e-05 2023-03-28 06:13:04,231 44k INFO Saving model and optimizer state at iteration 294 to ./logs\44k\G_214000.pth 2023-03-28 06:13:04,929 44k INFO Saving model and optimizer state at iteration 294 to ./logs\44k\D_214000.pth 2023-03-28 06:13:05,560 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_211000.pth 2023-03-28 06:13:05,602 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_211000.pth 2023-03-28 06:13:16,752 44k INFO ====> Epoch: 294, cost 273.58 s 2023-03-28 06:14:26,286 44k INFO Train Epoch: 295 [23%] 2023-03-28 06:14:26,286 44k INFO Losses: [2.287306308746338, 2.426697015762329, 12.012343406677246, 15.31406021118164, 1.0825767517089844], step: 214200, lr: 9.633125741201631e-05 2023-03-28 06:15:37,449 44k INFO Train Epoch: 295 [51%] 2023-03-28 06:15:37,450 44k INFO Losses: [2.155381202697754, 2.9790163040161133, 14.84803295135498, 15.041644096374512, 0.9343889355659485], step: 214400, lr: 9.633125741201631e-05 2023-03-28 06:16:49,043 44k INFO Train Epoch: 295 [78%] 2023-03-28 06:16:49,044 44k INFO Losses: [2.499316453933716, 2.446193218231201, 9.715065956115723, 17.962177276611328, 1.1325294971466064], step: 214600, lr: 9.633125741201631e-05 2023-03-28 06:17:45,952 44k INFO ====> Epoch: 295, cost 269.20 s 2023-03-28 06:18:09,643 44k INFO Train Epoch: 296 [5%] 2023-03-28 06:18:09,643 44k INFO Losses: [2.19596266746521, 2.524354934692383, 15.602398872375488, 18.01761245727539, 1.4498417377471924], step: 214800, lr: 9.631921600483981e-05 2023-03-28 06:19:21,215 44k INFO Train Epoch: 296 [33%] 2023-03-28 06:19:21,216 44k INFO Losses: [2.38533878326416, 2.4272780418395996, 12.998536109924316, 15.939623832702637, 0.9719899892807007], step: 215000, lr: 9.631921600483981e-05 2023-03-28 06:19:24,258 44k INFO Saving model and optimizer state at iteration 296 to ./logs\44k\G_215000.pth 2023-03-28 06:19:25,026 44k INFO Saving model and optimizer state at iteration 296 to ./logs\44k\D_215000.pth 2023-03-28 06:19:25,694 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_212000.pth 2023-03-28 06:19:25,735 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_212000.pth 2023-03-28 06:20:37,119 44k INFO Train Epoch: 296 [60%] 2023-03-28 06:20:37,119 44k INFO Losses: [2.463762044906616, 2.0924108028411865, 10.73410701751709, 15.796463966369629, 1.2171363830566406], step: 215200, lr: 9.631921600483981e-05 2023-03-28 06:21:48,621 44k INFO Train Epoch: 296 [88%] 2023-03-28 06:21:48,621 44k INFO Losses: [2.4654407501220703, 2.5754597187042236, 10.79159164428711, 15.557373046875, 1.0981453657150269], step: 215400, lr: 9.631921600483981e-05 2023-03-28 06:22:19,895 44k INFO ====> Epoch: 296, cost 273.94 s 2023-03-28 06:23:09,419 44k INFO Train Epoch: 297 [15%] 2023-03-28 06:23:09,420 44k INFO Losses: [2.3480231761932373, 2.2031619548797607, 12.392474174499512, 15.420476913452148, 0.7937003374099731], step: 215600, lr: 9.63071761028392e-05 2023-03-28 06:24:20,702 44k INFO Train Epoch: 297 [43%] 2023-03-28 06:24:20,702 44k INFO Losses: [2.3478920459747314, 2.555845022201538, 17.30434799194336, 19.148340225219727, 1.0757416486740112], step: 215800, lr: 9.63071761028392e-05 2023-03-28 06:25:32,353 44k INFO Train Epoch: 297 [70%] 2023-03-28 06:25:32,354 44k INFO Losses: [2.149202823638916, 2.412588596343994, 13.552013397216797, 18.468793869018555, 0.8318763375282288], step: 216000, lr: 9.63071761028392e-05 2023-03-28 06:25:35,344 44k INFO Saving model and optimizer state at iteration 297 to ./logs\44k\G_216000.pth 2023-03-28 06:25:36,098 44k INFO Saving model and optimizer state at iteration 297 to ./logs\44k\D_216000.pth 2023-03-28 06:25:36,769 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_213000.pth 2023-03-28 06:25:36,807 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_213000.pth 2023-03-28 06:26:48,549 44k INFO Train Epoch: 297 [98%] 2023-03-28 06:26:48,550 44k INFO Losses: [2.6569807529449463, 2.2128942012786865, 8.059557914733887, 10.956095695495605, 0.47089940309524536], step: 216200, lr: 9.63071761028392e-05 2023-03-28 06:26:54,195 44k INFO ====> Epoch: 297, cost 274.30 s 2023-03-28 06:28:09,545 44k INFO Train Epoch: 298 [25%] 2023-03-28 06:28:09,546 44k INFO Losses: [2.5089645385742188, 2.318554639816284, 11.459346771240234, 14.507411003112793, 0.6950560808181763], step: 216400, lr: 9.629513770582634e-05 2023-03-28 06:29:21,023 44k INFO Train Epoch: 298 [53%] 2023-03-28 06:29:21,023 44k INFO Losses: [2.518892288208008, 2.3124234676361084, 8.514399528503418, 15.014688491821289, 1.2446657419204712], step: 216600, lr: 9.629513770582634e-05 2023-03-28 06:30:32,465 44k INFO Train Epoch: 298 [80%] 2023-03-28 06:30:32,465 44k INFO Losses: [2.495898723602295, 2.283698558807373, 11.707755088806152, 15.052217483520508, 0.9407528638839722], step: 216800, lr: 9.629513770582634e-05 2023-03-28 06:31:23,992 44k INFO ====> Epoch: 298, cost 269.80 s 2023-03-28 06:31:53,455 44k INFO Train Epoch: 299 [8%] 2023-03-28 06:31:53,455 44k INFO Losses: [2.5210320949554443, 2.2367050647735596, 11.055953979492188, 16.151796340942383, 1.2053335905075073], step: 217000, lr: 9.628310081361311e-05 2023-03-28 06:31:56,450 44k INFO Saving model and optimizer state at iteration 299 to ./logs\44k\G_217000.pth 2023-03-28 06:31:57,160 44k INFO Saving model and optimizer state at iteration 299 to ./logs\44k\D_217000.pth 2023-03-28 06:31:57,818 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_214000.pth 2023-03-28 06:31:57,861 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_214000.pth 2023-03-28 06:33:09,393 44k INFO Train Epoch: 299 [35%] 2023-03-28 06:33:09,394 44k INFO Losses: [2.3019297122955322, 2.3416059017181396, 14.206493377685547, 15.633063316345215, 1.2747288942337036], step: 217200, lr: 9.628310081361311e-05 2023-03-28 06:34:21,303 44k INFO Train Epoch: 299 [63%] 2023-03-28 06:34:21,304 44k INFO Losses: [2.617668628692627, 2.063072443008423, 8.55488395690918, 12.438496589660645, 0.8531717658042908], step: 217400, lr: 9.628310081361311e-05 2023-03-28 06:35:33,314 44k INFO Train Epoch: 299 [90%] 2023-03-28 06:35:33,315 44k INFO Losses: [2.2996764183044434, 2.431049108505249, 14.111663818359375, 18.154422760009766, 1.5938401222229004], step: 217600, lr: 9.628310081361311e-05 2023-03-28 06:35:58,839 44k INFO ====> Epoch: 299, cost 274.85 s 2023-03-28 06:36:54,319 44k INFO Train Epoch: 300 [18%] 2023-03-28 06:36:54,319 44k INFO Losses: [2.5779240131378174, 2.2556965351104736, 12.042736053466797, 13.529996871948242, 0.8563831448554993], step: 217800, lr: 9.627106542601141e-05 2023-03-28 06:38:06,312 44k INFO Train Epoch: 300 [45%] 2023-03-28 06:38:06,313 44k INFO Losses: [2.3716068267822266, 2.54937481880188, 13.15407943725586, 18.11361312866211, 1.2915258407592773], step: 218000, lr: 9.627106542601141e-05 2023-03-28 06:38:09,277 44k INFO Saving model and optimizer state at iteration 300 to ./logs\44k\G_218000.pth 2023-03-28 06:38:10,040 44k INFO Saving model and optimizer state at iteration 300 to ./logs\44k\D_218000.pth 2023-03-28 06:38:10,717 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_215000.pth 2023-03-28 06:38:10,745 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_215000.pth 2023-03-28 06:39:22,329 44k INFO Train Epoch: 300 [73%] 2023-03-28 06:39:22,330 44k INFO Losses: [2.224750518798828, 2.2626852989196777, 14.99294662475586, 17.771516799926758, 0.8224271535873413], step: 218200, lr: 9.627106542601141e-05 2023-03-28 06:40:34,195 44k INFO ====> Epoch: 300, cost 275.36 s 2023-03-28 08:56:09,201 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 100, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kanae': 0}, 'model_dir': './logs\\44k'} 2023-03-28 08:56:09,227 44k WARNING git hash values are different. 
cea6df30(saved) != fd4d47fd(current) 2023-03-28 08:56:11,286 44k INFO Loaded checkpoint './logs\44k\G_0.pth' (iteration 1) 2023-03-28 08:56:11,667 44k INFO Loaded checkpoint './logs\44k\D_0.pth' (iteration 1) 2023-03-28 08:56:25,533 44k INFO Train Epoch: 1 [0%] 2023-03-28 08:56:25,533 44k INFO Losses: [2.8024466037750244, 2.6371166706085205, 8.450650215148926, 21.69586753845215, 3.122756004333496], step: 0, lr: 0.0001 2023-03-28 08:56:31,220 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\G_0.pth 2023-03-28 08:56:31,982 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\D_0.pth 2023-03-28 08:57:57,685 44k INFO Train Epoch: 1 [95%] 2023-03-28 08:57:57,685 44k INFO Losses: [2.4879415035247803, 2.1469335556030273, 13.043086051940918, 21.879709243774414, 1.6740301847457886], step: 200, lr: 0.0001 2023-03-28 08:58:03,345 44k INFO ====> Epoch: 1, cost 114.14 s 2023-03-28 08:59:18,670 44k INFO Train Epoch: 2 [90%] 2023-03-28 08:59:18,671 44k INFO Losses: [2.3614845275878906, 2.604933738708496, 13.154882431030273, 22.82014274597168, 1.1853067874908447], step: 400, lr: 9.99875e-05 2023-03-28 08:59:26,500 44k INFO ====> Epoch: 2, cost 83.15 s 2023-03-28 09:00:37,821 44k INFO Train Epoch: 3 [84%] 2023-03-28 09:00:37,821 44k INFO Losses: [2.5533061027526855, 2.487410306930542, 12.031618118286133, 20.38658332824707, 1.6997976303100586], step: 600, lr: 9.99750015625e-05 2023-03-28 09:00:49,564 44k INFO ====> Epoch: 3, cost 83.06 s 2023-03-28 09:01:56,872 44k INFO Train Epoch: 4 [79%] 2023-03-28 09:01:56,872 44k INFO Losses: [2.2886593341827393, 2.5300748348236084, 12.630410194396973, 24.846975326538086, 1.8681604862213135], step: 800, lr: 9.996250468730469e-05 2023-03-28 09:02:12,433 44k INFO ====> Epoch: 4, cost 82.87 s 2023-03-28 09:03:16,119 44k INFO Train Epoch: 5 [74%] 2023-03-28 09:03:16,119 44k INFO Losses: [2.4898107051849365, 2.207388401031494, 10.046737670898438, 20.074716567993164, 1.5375231504440308], step: 1000, lr: 
9.995000937421877e-05 2023-03-28 09:03:18,913 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\G_1000.pth 2023-03-28 09:03:19,611 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\D_1000.pth 2023-03-28 09:03:39,267 44k INFO ====> Epoch: 5, cost 86.83 s 2023-03-28 09:04:39,109 44k INFO Train Epoch: 6 [69%] 2023-03-28 09:04:39,109 44k INFO Losses: [2.398709774017334, 2.325619697570801, 14.505758285522461, 21.649160385131836, 1.7953550815582275], step: 1200, lr: 9.993751562304699e-05 2023-03-28 09:05:02,152 44k INFO ====> Epoch: 6, cost 82.89 s 2023-03-28 09:05:58,166 44k INFO Train Epoch: 7 [64%] 2023-03-28 09:05:58,166 44k INFO Losses: [2.5193021297454834, 2.5625765323638916, 10.881331443786621, 16.968896865844727, 1.2341899871826172], step: 1400, lr: 9.99250234335941e-05 2023-03-28 09:06:24,923 44k INFO ====> Epoch: 7, cost 82.77 s 2023-03-28 09:07:17,094 44k INFO Train Epoch: 8 [58%] 2023-03-28 09:07:17,095 44k INFO Losses: [2.3055388927459717, 2.2741246223449707, 11.614468574523926, 21.05830192565918, 1.5516400337219238], step: 1600, lr: 9.991253280566489e-05 2023-03-28 09:07:47,772 44k INFO ====> Epoch: 8, cost 82.85 s 2023-03-28 09:08:36,088 44k INFO Train Epoch: 9 [53%] 2023-03-28 09:08:36,088 44k INFO Losses: [2.635373830795288, 2.1600751876831055, 13.979146957397461, 19.58698081970215, 1.640193223953247], step: 1800, lr: 9.990004373906418e-05 2023-03-28 09:09:10,676 44k INFO ====> Epoch: 9, cost 82.90 s 2023-03-28 09:09:54,857 44k INFO Train Epoch: 10 [48%] 2023-03-28 09:09:54,857 44k INFO Losses: [2.690178155899048, 1.8358863592147827, 8.60083293914795, 17.8512020111084, 0.9774131178855896], step: 2000, lr: 9.98875562335968e-05 2023-03-28 09:09:57,678 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\G_2000.pth 2023-03-28 09:09:58,393 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\D_2000.pth 2023-03-28 09:10:37,286 44k INFO ====> Epoch: 10, cost 86.61 s 2023-03-28 
09:11:17,687 44k INFO Train Epoch: 11 [43%] 2023-03-28 09:11:17,687 44k INFO Losses: [2.6054635047912598, 2.286773204803467, 13.544784545898438, 20.78312873840332, 1.2377525568008423], step: 2200, lr: 9.987507028906759e-05 2023-03-28 09:12:00,099 44k INFO ====> Epoch: 11, cost 82.81 s 2023-03-28 09:12:36,664 44k INFO Train Epoch: 12 [37%] 2023-03-28 09:12:36,664 44k INFO Losses: [2.4896647930145264, 2.22086763381958, 10.488682746887207, 16.725893020629883, 1.3987797498703003], step: 2400, lr: 9.986258590528146e-05 2023-03-28 09:13:23,089 44k INFO ====> Epoch: 12, cost 82.99 s 2023-03-28 09:13:55,690 44k INFO Train Epoch: 13 [32%] 2023-03-28 09:13:55,691 44k INFO Losses: [2.2576935291290283, 2.4395525455474854, 13.835577964782715, 22.797231674194336, 1.271624207496643], step: 2600, lr: 9.98501030820433e-05 2023-03-28 09:14:45,943 44k INFO ====> Epoch: 13, cost 82.85 s 2023-03-28 09:15:14,689 44k INFO Train Epoch: 14 [27%] 2023-03-28 09:15:14,689 44k INFO Losses: [2.4706485271453857, 2.3273844718933105, 9.516743659973145, 17.892873764038086, 1.1601622104644775], step: 2800, lr: 9.983762181915804e-05 2023-03-28 09:16:08,787 44k INFO ====> Epoch: 14, cost 82.84 s 2023-03-28 09:16:33,762 44k INFO Train Epoch: 15 [22%] 2023-03-28 09:16:33,762 44k INFO Losses: [2.3968753814697266, 2.1937553882598877, 10.061984062194824, 18.389217376708984, 1.4637523889541626], step: 3000, lr: 9.982514211643064e-05 2023-03-28 09:16:36,681 44k INFO Saving model and optimizer state at iteration 15 to ./logs\44k\G_3000.pth 2023-03-28 09:16:37,322 44k INFO Saving model and optimizer state at iteration 15 to ./logs\44k\D_3000.pth 2023-03-28 09:17:35,702 44k INFO ====> Epoch: 15, cost 86.92 s 2023-03-28 09:17:56,811 44k INFO Train Epoch: 16 [17%] 2023-03-28 09:17:56,812 44k INFO Losses: [2.3120646476745605, 2.4351794719696045, 11.372838020324707, 24.37337875366211, 1.1809744834899902], step: 3200, lr: 9.981266397366609e-05 2023-03-28 09:18:58,592 44k INFO ====> Epoch: 16, cost 82.89 s 2023-03-28 
09:19:15,811 44k INFO Train Epoch: 17 [11%] 2023-03-28 09:19:15,812 44k INFO Losses: [2.231419086456299, 2.5729246139526367, 11.0763578414917, 23.102697372436523, 1.2078046798706055], step: 3400, lr: 9.980018739066937e-05 2023-03-28 09:20:21,520 44k INFO ====> Epoch: 17, cost 82.93 s 2023-03-28 09:20:34,798 44k INFO Train Epoch: 18 [6%] 2023-03-28 09:20:34,798 44k INFO Losses: [2.369490623474121, 2.3970954418182373, 13.43011474609375, 20.653648376464844, 1.4127885103225708], step: 3600, lr: 9.978771236724554e-05 2023-03-28 09:21:44,366 44k INFO ====> Epoch: 18, cost 82.85 s 2023-03-28 09:21:54,032 44k INFO Train Epoch: 19 [1%] 2023-03-28 09:21:54,032 44k INFO Losses: [2.244265079498291, 2.762638568878174, 13.047948837280273, 20.697715759277344, 1.568071722984314], step: 3800, lr: 9.977523890319963e-05 2023-03-28 09:23:04,302 44k INFO Train Epoch: 19 [96%] 2023-03-28 09:23:04,302 44k INFO Losses: [2.473236083984375, 2.3437018394470215, 11.210517883300781, 17.26759147644043, 1.1788188219070435], step: 4000, lr: 9.977523890319963e-05 2023-03-28 09:23:07,107 44k INFO Saving model and optimizer state at iteration 19 to ./logs\44k\G_4000.pth 2023-03-28 09:23:07,830 44k INFO Saving model and optimizer state at iteration 19 to ./logs\44k\D_4000.pth 2023-03-28 09:23:08,648 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_1000.pth 2023-03-28 09:23:08,690 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_1000.pth 2023-03-28 09:23:11,766 44k INFO ====> Epoch: 19, cost 87.40 s 2023-03-28 09:24:27,518 44k INFO Train Epoch: 20 [91%] 2023-03-28 09:24:27,518 44k INFO Losses: [2.5084452629089355, 2.165191650390625, 10.365538597106934, 22.176239013671875, 1.3180714845657349], step: 4200, lr: 9.976276699833672e-05 2023-03-28 09:24:34,625 44k INFO ====> Epoch: 20, cost 82.86 s 2023-03-28 09:25:46,540 44k INFO Train Epoch: 21 [85%] 2023-03-28 09:25:46,540 44k INFO Losses: [2.56294322013855, 2.179583787918091, 10.495933532714844, 19.481332778930664, 1.0683414936065674], step: 4400, lr: 9.975029665246193e-05 2023-03-28 09:25:57,538 44k INFO ====> Epoch: 21, cost 82.91 s 2023-03-28 09:27:05,541 44k INFO Train Epoch: 22 [80%] 2023-03-28 09:27:05,542 44k INFO Losses: [2.294149160385132, 2.0827529430389404, 14.916914939880371, 21.64295196533203, 1.3355668783187866], step: 4600, lr: 9.973782786538036e-05 2023-03-28 09:27:20,494 44k INFO ====> Epoch: 22, cost 82.96 s 2023-03-28 09:28:24,907 44k INFO Train Epoch: 23 [75%] 2023-03-28 09:28:24,907 44k INFO Losses: [2.4025864601135254, 2.469059467315674, 10.732029914855957, 19.80544662475586, 1.3768725395202637], step: 4800, lr: 9.972536063689719e-05 2023-03-28 09:28:43,442 44k INFO ====> Epoch: 23, cost 82.95 s 2023-03-28 09:29:43,970 44k INFO Train Epoch: 24 [70%] 2023-03-28 09:29:43,970 44k INFO Losses: [2.4460155963897705, 2.334439516067505, 11.12521743774414, 20.1656436920166, 1.4586079120635986], step: 5000, lr: 9.971289496681757e-05 2023-03-28 09:29:46,746 44k INFO Saving model and optimizer state at iteration 24 to ./logs\44k\G_5000.pth 2023-03-28 09:29:47,491 44k INFO Saving model and optimizer state at iteration 24 to ./logs\44k\D_5000.pth 2023-03-28 09:29:48,179 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_2000.pth 2023-03-28 09:29:48,221 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_2000.pth 2023-03-28 09:30:10,317 44k INFO ====> Epoch: 24, cost 86.87 s 2023-03-28 09:31:07,177 44k INFO Train Epoch: 25 [64%] 2023-03-28 09:31:07,178 44k INFO Losses: [2.6431937217712402, 2.2141077518463135, 11.873444557189941, 17.942405700683594, 1.3292723894119263], step: 5200, lr: 9.970043085494672e-05 2023-03-28 09:31:33,230 44k INFO ====> Epoch: 25, cost 82.91 s 2023-03-28 09:32:26,092 44k INFO Train Epoch: 26 [59%] 2023-03-28 09:32:26,093 44k INFO Losses: [2.457573413848877, 2.4640634059906006, 14.159332275390625, 19.641138076782227, 1.5334349870681763], step: 5400, lr: 9.968796830108985e-05 2023-03-28 09:32:56,129 44k INFO ====> Epoch: 26, cost 82.90 s 2023-03-28 09:33:45,226 44k INFO Train Epoch: 27 [54%] 2023-03-28 09:33:45,226 44k INFO Losses: [2.4751996994018555, 2.2247629165649414, 10.798998832702637, 19.52364158630371, 0.9849862456321716], step: 5600, lr: 9.967550730505221e-05 2023-03-28 09:34:19,124 44k INFO ====> Epoch: 27, cost 83.00 s 2023-03-28 09:35:04,154 44k INFO Train Epoch: 28 [49%] 2023-03-28 09:35:04,154 44k INFO Losses: [2.287099838256836, 2.4557645320892334, 13.71335220336914, 21.125595092773438, 1.1721099615097046], step: 5800, lr: 9.966304786663908e-05 2023-03-28 09:35:42,048 44k INFO ====> Epoch: 28, cost 82.92 s 2023-03-28 09:36:23,233 44k INFO Train Epoch: 29 [44%] 2023-03-28 09:36:23,234 44k INFO Losses: [2.489405393600464, 2.386409044265747, 12.116105079650879, 21.260990142822266, 1.3932442665100098], step: 6000, lr: 9.965058998565574e-05 2023-03-28 09:36:25,928 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\G_6000.pth 2023-03-28 09:36:26,704 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\D_6000.pth 2023-03-28 09:36:27,402 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_3000.pth 2023-03-28 09:36:27,439 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_3000.pth 2023-03-28 09:37:08,946 44k INFO ====> Epoch: 29, cost 86.90 s 2023-03-28 09:37:46,326 44k INFO Train Epoch: 30 [38%] 2023-03-28 09:37:46,326 44k INFO Losses: [2.277132511138916, 2.382495641708374, 10.650903701782227, 20.29062271118164, 1.3970662355422974], step: 6200, lr: 9.963813366190753e-05 2023-03-28 09:38:31,985 44k INFO ====> Epoch: 30, cost 83.04 s 2023-03-28 09:39:05,298 44k INFO Train Epoch: 31 [33%] 2023-03-28 09:39:05,299 44k INFO Losses: [2.2745189666748047, 2.632072925567627, 14.321088790893555, 24.333778381347656, 1.7556630373001099], step: 6400, lr: 9.962567889519979e-05 2023-03-28 09:39:54,865 44k INFO ====> Epoch: 31, cost 82.88 s 2023-03-28 09:40:24,313 44k INFO Train Epoch: 32 [28%] 2023-03-28 09:40:24,313 44k INFO Losses: [2.591717004776001, 2.2135612964630127, 12.400398254394531, 21.900619506835938, 1.0426092147827148], step: 6600, lr: 9.961322568533789e-05 2023-03-28 09:41:17,766 44k INFO ====> Epoch: 32, cost 82.90 s 2023-03-28 09:41:43,471 44k INFO Train Epoch: 33 [23%] 2023-03-28 09:41:43,471 44k INFO Losses: [2.2956812381744385, 2.5500457286834717, 14.490544319152832, 17.432268142700195, 1.4740771055221558], step: 6800, lr: 9.960077403212722e-05 2023-03-28 09:42:40,719 44k INFO ====> Epoch: 33, cost 82.95 s 2023-03-28 09:43:02,630 44k INFO Train Epoch: 34 [18%] 2023-03-28 09:43:02,630 44k INFO Losses: [2.306264638900757, 2.367051839828491, 14.200858116149902, 20.238079071044922, 1.2983323335647583], step: 7000, lr: 9.95883239353732e-05 2023-03-28 09:43:05,399 44k INFO Saving model and optimizer state at iteration 34 to ./logs\44k\G_7000.pth 2023-03-28 09:43:06,108 44k INFO Saving model and optimizer state at iteration 34 to ./logs\44k\D_7000.pth 2023-03-28 09:43:06,757 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_4000.pth 2023-03-28 09:43:06,801 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_4000.pth
2023-03-28 09:44:07,722 44k INFO ====> Epoch: 34, cost 87.00 s
2023-03-28 09:44:25,668 44k INFO Train Epoch: 35 [12%]
2023-03-28 09:44:25,669 44k INFO Losses: [2.3553805351257324, 2.359895706176758, 12.416683197021484, 20.12995147705078, 1.6941442489624023], step: 7200, lr: 9.957587539488128e-05
2023-03-28 09:45:30,610 44k INFO ====> Epoch: 35, cost 82.89 s
2023-03-28 09:45:44,720 44k INFO Train Epoch: 36 [7%]
2023-03-28 09:45:44,720 44k INFO Losses: [2.2137274742126465, 2.3571105003356934, 16.27707862854004, 23.022695541381836, 1.501561164855957], step: 7400, lr: 9.956342841045691e-05
2023-03-28 09:46:53,481 44k INFO ====> Epoch: 36, cost 82.87 s
2023-03-28 09:47:03,872 44k INFO Train Epoch: 37 [2%]
2023-03-28 09:47:03,872 44k INFO Losses: [2.750792980194092, 2.0908029079437256, 6.688190937042236, 13.815529823303223, 1.2397081851959229], step: 7600, lr: 9.95509829819056e-05
2023-03-28 09:48:14,211 44k INFO Train Epoch: 37 [97%]
2023-03-28 09:48:14,211 44k INFO Losses: [2.4305291175842285, 2.4876697063446045, 18.084123611450195, 21.742525100708008, 0.7576390504837036], step: 7800, lr: 9.95509829819056e-05
2023-03-28 09:48:16,792 44k INFO ====> Epoch: 37, cost 83.31 s
2023-03-28 09:49:33,406 44k INFO Train Epoch: 38 [91%]
2023-03-28 09:49:33,406 44k INFO Losses: [2.435333728790283, 2.3317019939422607, 15.040323257446289, 20.90863037109375, 1.1637485027313232], step: 8000, lr: 9.953853910903285e-05
2023-03-28 09:49:36,159 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\G_8000.pth
2023-03-28 09:49:36,855 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\D_8000.pth
2023-03-28 09:49:37,483 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_5000.pth
2023-03-28 09:49:37,513 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_5000.pth
2023-03-28 09:49:43,695 44k INFO ====> Epoch: 38, cost 86.90 s
2023-03-28 09:50:56,251 44k INFO Train Epoch: 39 [86%]
2023-03-28 09:50:56,252 44k INFO Losses: [2.2215428352355957, 2.3882200717926025, 11.37747573852539, 20.310306549072266, 1.1735423803329468], step: 8200, lr: 9.952609679164422e-05
2023-03-28 09:51:06,552 44k INFO ====> Epoch: 39, cost 82.86 s
2023-03-28 09:52:15,256 44k INFO Train Epoch: 40 [81%]
2023-03-28 09:52:15,256 44k INFO Losses: [2.5731265544891357, 2.1798739433288574, 14.73376750946045, 20.873863220214844, 1.240007996559143], step: 8400, lr: 9.951365602954526e-05
2023-03-28 09:52:29,536 44k INFO ====> Epoch: 40, cost 82.98 s
2023-03-28 09:53:34,568 44k INFO Train Epoch: 41 [76%]
2023-03-28 09:53:34,569 44k INFO Losses: [2.49125075340271, 2.159935474395752, 11.442143440246582, 18.432706832885742, 0.8401350975036621], step: 8600, lr: 9.950121682254156e-05
2023-03-28 09:53:52,465 44k INFO ====> Epoch: 41, cost 82.93 s
2023-03-28 09:54:53,773 44k INFO Train Epoch: 42 [71%]
2023-03-28 09:54:53,774 44k INFO Losses: [2.6530537605285645, 2.053344488143921, 9.69186782836914, 18.844676971435547, 0.8268674612045288], step: 8800, lr: 9.948877917043875e-05
2023-03-28 09:55:15,353 44k INFO ====> Epoch: 42, cost 82.89 s
2023-03-28 09:56:12,852 44k INFO Train Epoch: 43 [65%]
2023-03-28 09:56:12,852 44k INFO Losses: [2.573873519897461, 2.218325614929199, 10.700815200805664, 16.4735164642334, 1.1407667398452759], step: 9000, lr: 9.947634307304244e-05
2023-03-28 09:56:15,602 44k INFO Saving model and optimizer state at iteration 43 to ./logs\44k\G_9000.pth
2023-03-28 09:56:16,266 44k INFO Saving model and optimizer state at iteration 43 to ./logs\44k\D_9000.pth
2023-03-28 09:56:16,894 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_6000.pth
2023-03-28 09:56:16,931 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_6000.pth
2023-03-28 09:56:42,078 44k INFO ====> Epoch: 43, cost 86.73 s
2023-03-28 09:57:35,684 44k INFO Train Epoch: 44 [60%]
2023-03-28 09:57:35,684 44k INFO Losses: [2.5806922912597656, 2.0218288898468018, 4.70736026763916, 13.963470458984375, 0.850321352481842], step: 9200, lr: 9.94639085301583e-05
2023-03-28 09:58:04,942 44k INFO ====> Epoch: 44, cost 82.86 s
2023-03-28 09:58:54,901 44k INFO Train Epoch: 45 [55%]
2023-03-28 09:58:54,901 44k INFO Losses: [2.458188533782959, 2.1044235229492188, 11.05639362335205, 18.141769409179688, 1.35617196559906], step: 9400, lr: 9.945147554159202e-05
2023-03-28 09:59:27,990 44k INFO ====> Epoch: 45, cost 83.05 s
2023-03-28 10:00:13,895 44k INFO Train Epoch: 46 [50%]
2023-03-28 10:00:13,895 44k INFO Losses: [2.664116382598877, 1.8874454498291016, 11.865761756896973, 19.308103561401367, 1.3692843914031982], step: 9600, lr: 9.943904410714931e-05
2023-03-28 10:00:50,970 44k INFO ====> Epoch: 46, cost 82.98 s
2023-03-28 10:01:33,055 44k INFO Train Epoch: 47 [45%]
2023-03-28 10:01:33,055 44k INFO Losses: [2.482840061187744, 2.401728868484497, 9.985258102416992, 22.319854736328125, 1.3407087326049805], step: 9800, lr: 9.942661422663591e-05
2023-03-28 10:02:13,983 44k INFO ====> Epoch: 47, cost 83.01 s
2023-03-28 10:02:52,123 44k INFO Train Epoch: 48 [39%]
2023-03-28 10:02:52,123 44k INFO Losses: [2.5515687465667725, 2.2584636211395264, 12.712625503540039, 16.2366886138916, 1.0784324407577515], step: 10000, lr: 9.941418589985758e-05
2023-03-28 10:02:54,918 44k INFO Saving model and optimizer state at iteration 48 to ./logs\44k\G_10000.pth
2023-03-28 10:02:55,630 44k INFO Saving model and optimizer state at iteration 48 to ./logs\44k\D_10000.pth
2023-03-28 10:02:56,267 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_7000.pth
2023-03-28 10:02:56,308 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_7000.pth
2023-03-28 10:03:40,973 44k INFO ====> Epoch: 48, cost 86.99 s
2023-03-28 10:04:15,233 44k INFO Train Epoch: 49 [34%]
2023-03-28 10:04:15,233 44k INFO Losses: [2.3276400566101074, 2.4833688735961914, 15.710030555725098, 22.27151107788086, 0.7628688812255859], step: 10200, lr: 9.940175912662009e-05
2023-03-28 10:05:04,016 44k INFO ====> Epoch: 49, cost 83.04 s
2023-03-28 10:05:34,411 44k INFO Train Epoch: 50 [29%]
2023-03-28 10:05:34,412 44k INFO Losses: [2.5988683700561523, 2.153642177581787, 9.23619270324707, 17.616212844848633, 1.3573955297470093], step: 10400, lr: 9.938933390672926e-05
2023-03-28 10:06:27,133 44k INFO ====> Epoch: 50, cost 83.12 s
2023-03-28 10:06:53,699 44k INFO Train Epoch: 51 [24%]
2023-03-28 10:06:53,700 44k INFO Losses: [2.716041088104248, 1.9487696886062622, 9.79574203491211, 19.046483993530273, 1.0773154497146606], step: 10600, lr: 9.937691023999092e-05
2023-03-28 10:07:50,086 44k INFO ====> Epoch: 51, cost 82.95 s
2023-03-28 10:08:12,838 44k INFO Train Epoch: 52 [18%]
2023-03-28 10:08:12,838 44k INFO Losses: [2.5847926139831543, 2.381870985031128, 11.737020492553711, 16.884185791015625, 0.9313250780105591], step: 10800, lr: 9.936448812621091e-05
2023-03-28 10:09:13,058 44k INFO ====> Epoch: 52, cost 82.97 s
2023-03-28 10:09:31,898 44k INFO Train Epoch: 53 [13%]
2023-03-28 10:09:31,899 44k INFO Losses: [2.526897668838501, 2.263620138168335, 12.575631141662598, 17.8885498046875, 1.3091402053833008], step: 11000, lr: 9.935206756519513e-05
2023-03-28 10:09:34,795 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\G_11000.pth
2023-03-28 10:09:35,444 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\D_11000.pth
2023-03-28 10:09:36,076 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_8000.pth
2023-03-28 10:09:36,117 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_8000.pth
2023-03-28 10:10:40,101 44k INFO ====> Epoch: 53, cost 87.04 s
2023-03-28 10:10:55,046 44k INFO Train Epoch: 54 [8%]
2023-03-28 10:10:55,046 44k INFO Losses: [2.1734747886657715, 2.3184146881103516, 11.5369234085083, 18.734569549560547, 1.145734190940857], step: 11200, lr: 9.933964855674948e-05
2023-03-28 10:12:03,079 44k INFO ====> Epoch: 54, cost 82.98 s
2023-03-28 10:12:14,163 44k INFO Train Epoch: 55 [3%]
2023-03-28 10:12:14,163 44k INFO Losses: [2.574740171432495, 2.1138153076171875, 10.993480682373047, 16.404699325561523, 0.8234654664993286], step: 11400, lr: 9.932723110067987e-05
2023-03-28 10:13:24,313 44k INFO Train Epoch: 55 [98%]
2023-03-28 10:13:24,314 44k INFO Losses: [2.3664636611938477, 2.1895620822906494, 10.519318580627441, 17.577640533447266, 1.3853198289871216], step: 11600, lr: 9.932723110067987e-05
2023-03-28 10:13:26,213 44k INFO ====> Epoch: 55, cost 83.13 s
2023-03-28 10:14:43,339 44k INFO Train Epoch: 56 [92%]
2023-03-28 10:14:43,340 44k INFO Losses: [2.3884456157684326, 2.4193990230560303, 14.632052421569824, 19.608535766601562, 1.1055355072021484], step: 11800, lr: 9.931481519679228e-05
2023-03-28 10:14:48,945 44k INFO ====> Epoch: 56, cost 82.73 s
2023-03-28 10:16:02,161 44k INFO Train Epoch: 57 [87%]
2023-03-28 10:16:02,161 44k INFO Losses: [2.2795231342315674, 2.459282159805298, 11.410663604736328, 17.16574478149414, 1.4307525157928467], step: 12000, lr: 9.930240084489267e-05
2023-03-28 10:16:04,965 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\G_12000.pth
2023-03-28 10:16:05,647 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\D_12000.pth
2023-03-28 10:16:06,295 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_9000.pth
2023-03-28 10:16:06,337 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_9000.pth
2023-03-28 10:16:15,654 44k INFO ====> Epoch: 57, cost 86.71 s
2023-03-28 10:17:24,983 44k INFO Train Epoch: 58 [82%]
2023-03-28 10:17:24,984 44k INFO Losses: [2.3032000064849854, 2.341647148132324, 12.26695442199707, 20.93543243408203, 1.2026079893112183], step: 12200, lr: 9.928998804478705e-05
2023-03-28 10:17:38,464 44k INFO ====> Epoch: 58, cost 82.81 s
2023-03-28 10:18:44,268 44k INFO Train Epoch: 59 [77%]
2023-03-28 10:18:44,269 44k INFO Losses: [2.6148743629455566, 2.0951426029205322, 8.657525062561035, 11.250064849853516, 1.1354877948760986], step: 12400, lr: 9.927757679628145e-05
2023-03-28 10:19:01,347 44k INFO ====> Epoch: 59, cost 82.88 s
2023-03-28 10:20:03,331 44k INFO Train Epoch: 60 [72%]
2023-03-28 10:20:03,332 44k INFO Losses: [2.462462902069092, 2.0845632553100586, 15.740416526794434, 20.309894561767578, 0.8178136944770813], step: 12600, lr: 9.926516709918191e-05
2023-03-28 10:20:24,189 44k INFO ====> Epoch: 60, cost 82.84 s
2023-03-28 10:21:22,415 44k INFO Train Epoch: 61 [66%]
2023-03-28 10:21:22,415 44k INFO Losses: [2.294085741043091, 2.7454962730407715, 7.962324142456055, 16.713045120239258, 1.387993574142456], step: 12800, lr: 9.92527589532945e-05
2023-03-28 10:21:47,064 44k INFO ====> Epoch: 61, cost 82.88 s
2023-03-28 10:22:41,411 44k INFO Train Epoch: 62 [61%]
2023-03-28 10:22:41,411 44k INFO Losses: [2.5031204223632812, 2.2695791721343994, 10.357699394226074, 17.411956787109375, 1.0085800886154175], step: 13000, lr: 9.924035235842533e-05
2023-03-28 10:22:44,233 44k INFO Saving model and optimizer state at iteration 62 to ./logs\44k\G_13000.pth
2023-03-28 10:22:44,888 44k INFO Saving model and optimizer state at iteration 62 to ./logs\44k\D_13000.pth
2023-03-28 10:22:45,527 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_10000.pth
2023-03-28 10:22:45,557 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_10000.pth
2023-03-28 10:23:13,928 44k INFO ====> Epoch: 62, cost 86.86 s
2023-03-28 10:24:04,558 44k INFO Train Epoch: 63 [56%]
2023-03-28 10:24:04,558 44k INFO Losses: [2.469881534576416, 2.3685452938079834, 10.93106746673584, 19.386653900146484, 1.2033615112304688], step: 13200, lr: 9.922794731438052e-05
2023-03-28 10:24:37,025 44k INFO ====> Epoch: 63, cost 83.10 s
2023-03-28 10:25:23,462 44k INFO Train Epoch: 64 [51%]
2023-03-28 10:25:23,462 44k INFO Losses: [2.5238125324249268, 2.3159899711608887, 8.146437644958496, 18.73712730407715, 1.3897887468338013], step: 13400, lr: 9.921554382096622e-05
2023-03-28 10:25:59,862 44k INFO ====> Epoch: 64, cost 82.84 s
2023-03-28 10:26:42,428 44k INFO Train Epoch: 65 [45%]
2023-03-28 10:26:42,428 44k INFO Losses: [2.664973735809326, 2.3884191513061523, 10.496109008789062, 17.678016662597656, 1.198121190071106], step: 13600, lr: 9.92031418779886e-05
2023-03-28 10:27:22,736 44k INFO ====> Epoch: 65, cost 82.88 s
2023-03-28 10:28:01,366 44k INFO Train Epoch: 66 [40%]
2023-03-28 10:28:01,366 44k INFO Losses: [2.4568827152252197, 2.2003185749053955, 9.31553840637207, 19.13401222229004, 0.8375404477119446], step: 13800, lr: 9.919074148525384e-05
2023-03-28 10:28:45,576 44k INFO ====> Epoch: 66, cost 82.84 s
2023-03-28 10:29:20,348 44k INFO Train Epoch: 67 [35%]
2023-03-28 10:29:20,349 44k INFO Losses: [2.3411026000976562, 2.3800594806671143, 12.116939544677734, 18.54276466369629, 0.9208908677101135], step: 14000, lr: 9.917834264256819e-05
2023-03-28 10:29:23,123 44k INFO Saving model and optimizer state at iteration 67 to ./logs\44k\G_14000.pth
2023-03-28 10:29:23,759 44k INFO Saving model and optimizer state at iteration 67 to ./logs\44k\D_14000.pth
2023-03-28 10:29:24,390 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_11000.pth
2023-03-28 10:29:24,425 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_11000.pth
2023-03-28 10:30:12,430 44k INFO ====> Epoch: 67, cost 86.85 s
2023-03-28 10:30:43,309 44k INFO Train Epoch: 68 [30%]
2023-03-28 10:30:43,309 44k INFO Losses: [2.3311827182769775, 2.256192922592163, 12.440742492675781, 28.76460838317871, 1.1964049339294434], step: 14200, lr: 9.916594534973787e-05
2023-03-28 10:31:35,370 44k INFO ====> Epoch: 68, cost 82.94 s
2023-03-28 10:32:02,414 44k INFO Train Epoch: 69 [25%]
2023-03-28 10:32:02,414 44k INFO Losses: [2.729653835296631, 1.9864410161972046, 4.416131973266602, 11.883074760437012, 1.3673940896987915], step: 14400, lr: 9.915354960656915e-05
2023-03-28 10:32:58,273 44k INFO ====> Epoch: 69, cost 82.90 s
2023-03-28 10:33:21,455 44k INFO Train Epoch: 70 [19%]
2023-03-28 10:33:21,455 44k INFO Losses: [2.53609299659729, 2.3046457767486572, 9.988428115844727, 18.85641098022461, 1.5341283082962036], step: 14600, lr: 9.914115541286833e-05
2023-03-28 10:34:21,123 44k INFO ====> Epoch: 70, cost 82.85 s
2023-03-28 10:34:40,565 44k INFO Train Epoch: 71 [14%]
2023-03-28 10:34:40,565 44k INFO Losses: [2.224116802215576, 2.2493319511413574, 16.79281997680664, 20.33685874938965, 1.3926193714141846], step: 14800, lr: 9.912876276844171e-05
2023-03-28 10:35:43,995 44k INFO ====> Epoch: 71, cost 82.87 s
2023-03-28 10:35:59,422 44k INFO Train Epoch: 72 [9%]
2023-03-28 10:35:59,422 44k INFO Losses: [2.606065273284912, 2.2185122966766357, 8.839770317077637, 17.195877075195312, 1.1975902318954468], step: 15000, lr: 9.911637167309565e-05
2023-03-28 10:36:02,247 44k INFO Saving model and optimizer state at iteration 72 to ./logs\44k\G_15000.pth
2023-03-28 10:36:02,886 44k INFO Saving model and optimizer state at iteration 72 to ./logs\44k\D_15000.pth
2023-03-28 10:36:03,525 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_12000.pth
2023-03-28 10:36:03,555 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_12000.pth
2023-03-28 10:37:10,805 44k INFO ====> Epoch: 72, cost 86.81 s
2023-03-28 10:37:22,457 44k INFO Train Epoch: 73 [4%]
2023-03-28 10:37:22,457 44k INFO Losses: [2.514878273010254, 2.220170259475708, 9.544334411621094, 18.218982696533203, 0.9055336117744446], step: 15200, lr: 9.910398212663652e-05
2023-03-28 10:38:32,861 44k INFO Train Epoch: 73 [99%]
2023-03-28 10:38:32,861 44k INFO Losses: [2.307556390762329, 2.3452346324920654, 12.88539981842041, 20.351900100708008, 1.2490304708480835], step: 15400, lr: 9.910398212663652e-05
2023-03-28 10:38:34,093 44k INFO ====> Epoch: 73, cost 83.29 s
2023-03-28 10:39:51,985 44k INFO Train Epoch: 74 [93%]
2023-03-28 10:39:51,985 44k INFO Losses: [2.474641799926758, 2.6618528366088867, 9.565529823303223, 19.401565551757812, 0.9816445708274841], step: 15600, lr: 9.909159412887068e-05
2023-03-28 10:39:57,028 44k INFO ====> Epoch: 74, cost 82.94 s
2023-03-28 10:41:10,929 44k INFO Train Epoch: 75 [88%]
2023-03-28 10:41:10,929 44k INFO Losses: [2.6436195373535156, 2.1421775817871094, 14.095719337463379, 19.249990463256836, 0.7461941242218018], step: 15800, lr: 9.907920767960457e-05
2023-03-28 10:41:19,866 44k INFO ====> Epoch: 75, cost 82.84 s
2023-03-28 10:42:29,989 44k INFO Train Epoch: 76 [83%]
2023-03-28 10:42:29,990 44k INFO Losses: [2.549525022506714, 2.20845365524292, 10.206828117370605, 17.338624954223633, 1.3204456567764282], step: 16000, lr: 9.906682277864462e-05
2023-03-28 10:42:32,748 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\G_16000.pth
2023-03-28 10:42:33,384 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\D_16000.pth
2023-03-28 10:42:34,031 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_13000.pth
2023-03-28 10:42:34,060 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_13000.pth
2023-03-28 10:42:46,643 44k INFO ====> Epoch: 76, cost 86.78 s
2023-03-28 10:43:53,169 44k INFO Train Epoch: 77 [78%]
2023-03-28 10:43:53,169 44k INFO Losses: [2.1324071884155273, 2.646571159362793, 14.87631607055664, 22.636234283447266, 0.8589655160903931], step: 16200, lr: 9.905443942579728e-05
2023-03-28 10:44:09,681 44k INFO ====> Epoch: 77, cost 83.04 s
2023-03-28 10:45:12,400 44k INFO Train Epoch: 78 [73%]
2023-03-28 10:45:12,401 44k INFO Losses: [2.597229480743408, 2.440584659576416, 14.633549690246582, 21.766454696655273, 1.339699625968933], step: 16400, lr: 9.904205762086905e-05
2023-03-28 10:45:32,619 44k INFO ====> Epoch: 78, cost 82.94 s
2023-03-28 10:46:31,494 44k INFO Train Epoch: 79 [67%]
2023-03-28 10:46:31,495 44k INFO Losses: [2.8782193660736084, 1.793989896774292, 7.274224281311035, 9.446621894836426, 0.8608368635177612], step: 16600, lr: 9.902967736366644e-05
2023-03-28 10:46:55,584 44k INFO ====> Epoch: 79, cost 82.97 s
2023-03-28 10:47:50,662 44k INFO Train Epoch: 80 [62%]
2023-03-28 10:47:50,662 44k INFO Losses: [2.3982744216918945, 2.325754165649414, 14.79326057434082, 20.72083854675293, 0.9020143151283264], step: 16800, lr: 9.901729865399597e-05
2023-03-28 10:48:18,544 44k INFO ====> Epoch: 80, cost 82.96 s
2023-03-28 10:49:09,744 44k INFO Train Epoch: 81 [57%]
2023-03-28 10:49:09,744 44k INFO Losses: [2.640958070755005, 1.9588408470153809, 9.724273681640625, 14.538920402526855, 1.0269485712051392], step: 17000, lr: 9.900492149166423e-05
2023-03-28 10:49:12,521 44k INFO Saving model and optimizer state at iteration 81 to ./logs\44k\G_17000.pth
2023-03-28 10:49:13,207 44k INFO Saving model and optimizer state at iteration 81 to ./logs\44k\D_17000.pth
2023-03-28 10:49:13,844 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_14000.pth
2023-03-28 10:49:13,887 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_14000.pth
2023-03-28 10:49:45,446 44k INFO ====> Epoch: 81, cost 86.90 s
2023-03-28 10:50:32,591 44k INFO Train Epoch: 82 [52%]
2023-03-28 10:50:32,591 44k INFO Losses: [2.7287282943725586, 2.1063828468322754, 11.844592094421387, 15.383955001831055, 0.8866045475006104], step: 17200, lr: 9.899254587647776e-05
2023-03-28 10:51:08,239 44k INFO ====> Epoch: 82, cost 82.79 s
2023-03-28 10:51:51,497 44k INFO Train Epoch: 83 [46%]
2023-03-28 10:51:51,497 44k INFO Losses: [2.4690308570861816, 2.2282514572143555, 13.741714477539062, 18.34269142150879, 0.9448859095573425], step: 17400, lr: 9.89801718082432e-05
2023-03-28 10:52:31,136 44k INFO ====> Epoch: 83, cost 82.90 s
2023-03-28 10:53:10,568 44k INFO Train Epoch: 84 [41%]
2023-03-28 10:53:10,568 44k INFO Losses: [2.6833608150482178, 2.024296522140503, 8.003212928771973, 17.290067672729492, 1.2571971416473389], step: 17600, lr: 9.896779928676716e-05
2023-03-28 10:53:54,067 44k INFO ====> Epoch: 84, cost 82.93 s
2023-03-28 10:54:29,502 44k INFO Train Epoch: 85 [36%]
2023-03-28 10:54:29,503 44k INFO Losses: [2.6400790214538574, 2.20237135887146, 7.610630512237549, 13.690743446350098, 1.105350136756897], step: 17800, lr: 9.895542831185631e-05
2023-03-28 10:55:16,927 44k INFO ====> Epoch: 85, cost 82.86 s
2023-03-28 10:55:48,484 44k INFO Train Epoch: 86 [31%]
2023-03-28 10:55:48,484 44k INFO Losses: [2.358750581741333, 2.3130667209625244, 15.25722599029541, 21.996007919311523, 1.1671128273010254], step: 18000, lr: 9.894305888331732e-05
2023-03-28 10:55:51,270 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\G_18000.pth
2023-03-28 10:55:51,904 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\D_18000.pth
2023-03-28 10:55:52,562 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_15000.pth
2023-03-28 10:55:52,600 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_15000.pth
2023-03-28 10:56:43,719 44k INFO ====> Epoch: 86, cost 86.79 s
2023-03-28 10:57:11,473 44k INFO Train Epoch: 87 [26%]
2023-03-28 10:57:11,473 44k INFO Losses: [2.4193663597106934, 2.3969385623931885, 12.18989086151123, 18.47564125061035, 1.0810320377349854], step: 18200, lr: 9.89306910009569e-05
2023-03-28 10:58:06,596 44k INFO ====> Epoch: 87, cost 82.88 s
2023-03-28 10:58:30,611 44k INFO Train Epoch: 88 [20%]
2023-03-28 10:58:30,611 44k INFO Losses: [2.3844947814941406, 2.388604164123535, 12.059229850769043, 20.39312171936035, 0.8964427709579468], step: 18400, lr: 9.891832466458178e-05
2023-03-28 10:59:29,611 44k INFO ====> Epoch: 88, cost 83.02 s
2023-03-28 10:59:49,742 44k INFO Train Epoch: 89 [15%]
2023-03-28 10:59:49,742 44k INFO Losses: [2.506246328353882, 2.3942623138427734, 15.50088119506836, 21.470155715942383, 1.0502718687057495], step: 18600, lr: 9.89059598739987e-05
2023-03-28 11:00:52,460 44k INFO ====> Epoch: 89, cost 82.85 s
2023-03-28 11:01:08,591 44k INFO Train Epoch: 90 [10%]
2023-03-28 11:01:08,592 44k INFO Losses: [2.6148130893707275, 2.2562215328216553, 11.208698272705078, 18.248958587646484, 1.1483412981033325], step: 18800, lr: 9.889359662901445e-05
2023-03-28 11:02:15,325 44k INFO ====> Epoch: 90, cost 82.86 s
2023-03-28 11:02:27,605 44k INFO Train Epoch: 91 [5%]
2023-03-28 11:02:27,605 44k INFO Losses: [2.647922992706299, 2.11379337310791, 11.617773056030273, 18.886320114135742, 0.9938008189201355], step: 19000, lr: 9.888123492943583e-05
2023-03-28 11:02:30,312 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\G_19000.pth
2023-03-28 11:02:31,000 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\D_19000.pth
2023-03-28 11:02:31,632 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_16000.pth
2023-03-28 11:02:31,677 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_16000.pth
2023-03-28 11:03:41,825 44k INFO Train Epoch: 91 [100%]
2023-03-28 11:03:41,826 44k INFO Losses: [2.2973430156707764, 2.924412727355957, 13.073881149291992, 23.208454132080078, 1.0283572673797607], step: 19200, lr: 9.888123492943583e-05
2023-03-28 11:03:42,419 44k INFO ====> Epoch: 91, cost 87.09 s
2023-03-28 11:05:01,025 44k INFO Train Epoch: 92 [94%]
2023-03-28 11:05:01,025 44k INFO Losses: [2.2320103645324707, 2.5698602199554443, 13.871572494506836, 21.521202087402344, 0.6322752237319946], step: 19400, lr: 9.886887477506964e-05
2023-03-28 11:05:05,334 44k INFO ====> Epoch: 92, cost 82.91 s
2023-03-28 11:06:19,980 44k INFO Train Epoch: 93 [89%]
2023-03-28 11:06:19,981 44k INFO Losses: [2.5593631267547607, 2.1301889419555664, 11.452144622802734, 17.862661361694336, 1.2217137813568115], step: 19600, lr: 9.885651616572276e-05
2023-03-28 11:06:28,216 44k INFO ====> Epoch: 93, cost 82.88 s
2023-03-28 11:07:39,037 44k INFO Train Epoch: 94 [84%]
2023-03-28 11:07:39,038 44k INFO Losses: [2.7138657569885254, 1.9207767248153687, 11.655595779418945, 12.806342124938965, 0.8422942757606506], step: 19800, lr: 9.884415910120204e-05
2023-03-28 11:07:51,184 44k INFO ====> Epoch: 94, cost 82.97 s
2023-03-28 11:08:58,248 44k INFO Train Epoch: 95 [79%]
2023-03-28 11:08:58,248 44k INFO Losses: [2.2528085708618164, 2.6685023307800293, 13.04450511932373, 18.5692138671875, 1.4049336910247803], step: 20000, lr: 9.883180358131438e-05
2023-03-28 11:09:01,001 44k INFO Saving model and optimizer state at iteration 95 to ./logs\44k\G_20000.pth
2023-03-28 11:09:01,690 44k INFO Saving model and optimizer state at iteration 95 to ./logs\44k\D_20000.pth
2023-03-28 11:09:02,316 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_17000.pth
2023-03-28 11:09:02,356 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_17000.pth
2023-03-28 11:09:18,002 44k INFO ====> Epoch: 95, cost 86.82 s
2023-03-28 11:10:21,448 44k INFO Train Epoch: 96 [73%]
2023-03-28 11:10:21,449 44k INFO Losses: [2.5572915077209473, 2.435312509536743, 11.24231243133545, 20.69990348815918, 1.123771071434021], step: 20200, lr: 9.881944960586671e-05
2023-03-28 11:10:40,984 44k INFO ====> Epoch: 96, cost 82.98 s
2023-03-28 11:11:40,443 44k INFO Train Epoch: 97 [68%]
2023-03-28 11:11:40,443 44k INFO Losses: [2.4975264072418213, 2.232818365097046, 14.391799926757812, 20.118379592895508, 1.2595881223678589], step: 20400, lr: 9.880709717466598e-05
2023-03-28 11:12:03,785 44k INFO ====> Epoch: 97, cost 82.80 s
2023-03-28 11:12:59,683 44k INFO Train Epoch: 98 [63%]
2023-03-28 11:12:59,683 44k INFO Losses: [2.432237386703491, 2.3928518295288086, 14.675285339355469, 20.635652542114258, 1.1201059818267822], step: 20600, lr: 9.879474628751914e-05
2023-03-28 11:13:26,850 44k INFO ====> Epoch: 98, cost 83.07 s
2023-03-28 11:14:18,679 44k INFO Train Epoch: 99 [58%]
2023-03-28 11:14:18,680 44k INFO Losses: [2.585559844970703, 2.0732383728027344, 14.041743278503418, 20.115596771240234, 0.892521858215332], step: 20800, lr: 9.87823969442332e-05
2023-03-28 11:14:49,741 44k INFO ====> Epoch: 99, cost 82.89 s
2023-03-28 11:15:37,713 44k INFO Train Epoch: 100 [53%]
2023-03-28 11:15:37,713 44k INFO Losses: [2.6276934146881104, 2.2114856243133545, 14.01760196685791, 20.55073356628418, 1.1633940935134888], step: 21000, lr: 9.877004914461517e-05
2023-03-28 11:15:40,483 44k INFO Saving model and optimizer state at iteration 100 to ./logs\44k\G_21000.pth
2023-03-28 11:15:41,129 44k INFO Saving model and optimizer state at iteration 100 to ./logs\44k\D_21000.pth
2023-03-28 11:15:41,788 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_18000.pth
2023-03-28 11:15:41,828 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_18000.pth
2023-03-28 11:16:16,631 44k INFO ====> Epoch: 100, cost 86.89 s
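The per-epoch lr values in this log follow an exponential schedule: the learning rate is multiplied by a fixed decay factor once per epoch and held constant within an epoch (note that both steps inside epoch 37 log the same lr, 9.95509829819056e-05). A minimal sketch that reproduces the logged values, assuming a base learning rate of 0.0001 and a per-epoch decay of 0.999875 as this run appears to use; lr_at_epoch is a hypothetical helper, not part of the training code:

```python
BASE_LR = 1e-4       # learning rate logged at epoch 1
LR_DECAY = 0.999875  # multiplicative decay applied once per epoch

def lr_at_epoch(epoch: int) -> float:
    """Closed form of the exponential per-epoch decay:
    lr(epoch) = BASE_LR * LR_DECAY ** (epoch - 1)."""
    return BASE_LR * LR_DECAY ** (epoch - 1)
```

Checking against the log, lr_at_epoch(35) comes out near 9.9575875e-05 and lr_at_epoch(100) near 9.8770049e-05, matching the logged lr fields up to floating-point rounding.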
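The paired "Saving model and optimizer state ..." and ".. Free up space by deleting ckpt ..." entries show a rotation that keeps only the three most recent checkpoint pairs: every 1000 steps a new G_/D_{step}.pth pair is written and the pair from 3000 steps earlier is removed. A minimal sketch of that rotation under those assumptions; prune_ckpts is a hypothetical helper, not the project's actual implementation:

```python
import os

def prune_ckpts(model_dir: str, current_step: int,
                keep_ckpts: int = 3, save_interval: int = 1000) -> list:
    """After saving G_/D_{current_step}.pth, delete the pair that is
    keep_ckpts saves older, mirroring the rotation seen in the log."""
    old_step = current_step - keep_ckpts * save_interval
    removed = []
    for prefix in ("G", "D"):
        path = os.path.join(model_dir, f"{prefix}_{old_step}.pth")
        if os.path.exists(path):
            os.remove(path)
            removed.append(path)
    return removed
```

At step 21000 this would remove G_18000.pth and D_18000.pth, which is exactly the pair deleted at the end of the log above.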