2023-03-09 22:07:15,606 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 100, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'aisa': 0}, 'model_dir': './logs\\44k'}
2023-03-09 22:07:17,956 44k INFO Loaded checkpoint './logs\44k\G_0.pth' (iteration 1)
2023-03-09 22:07:18,492 44k INFO Loaded checkpoint './logs\44k\D_0.pth' (iteration 1)
2023-03-09 22:07:33,232 44k INFO Train Epoch: 1 [0%]
2023-03-09 22:07:33,233 44k INFO Losses: [2.7532143592834473, 2.1225199699401855, 6.865627765655518, 23.084224700927734, 2.0994279384613037], step: 0, lr: 0.0001
2023-03-09 22:07:36,723 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\G_0.pth
2023-03-09 22:07:37,415 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\D_0.pth
2023-03-09 22:08:52,957 44k INFO Train Epoch: 1 [42%]
2023-03-09 22:08:52,958 44k INFO Losses: [2.4421563148498535, 2.2961127758026123, 10.422914505004883, 24.77933120727539, 2.0568974018096924], step: 200, lr: 0.0001
2023-03-09 22:10:08,890 44k INFO Train Epoch: 1 [83%]
2023-03-09 22:10:08,891 44k INFO Losses: [2.648120880126953, 2.1064438819885254, 7.289734363555908, 19.598392486572266, 1.5763448476791382], step: 400, lr: 0.0001
2023-03-09 22:10:40,667 44k INFO ====> Epoch: 1, cost 205.06 s
2023-03-09 22:11:30,699 44k INFO Train Epoch: 2 [25%]
2023-03-09 22:11:30,700 44k INFO Losses: [2.48688006401062, 2.3656461238861084, 8.08664321899414, 23.009075164794922, 1.5792394876480103], step: 600, lr: 9.99875e-05
2023-03-09 22:12:36,551 44k INFO Train Epoch: 2 [67%]
2023-03-09 22:12:36,551 44k INFO Losses: [2.819711208343506, 2.475625514984131, 7.877129554748535, 22.234840393066406, 1.6142663955688477], step: 800, lr: 9.99875e-05
2023-03-09 22:13:29,967 44k INFO ====> Epoch: 2, cost 169.30 s
2023-03-09 22:13:53,281 44k INFO Train Epoch: 3 [8%]
2023-03-09 22:13:53,282 44k INFO Losses: [2.6841185092926025, 2.074568748474121, 7.123725891113281, 18.286457061767578, 1.198595643043518], step: 1000, lr: 9.99750015625e-05
2023-03-09 22:13:56,355 44k INFO Saving model and optimizer state at iteration 3 to ./logs\44k\G_1000.pth
2023-03-09 22:13:57,037 44k INFO Saving model and optimizer state at iteration 3 to ./logs\44k\D_1000.pth
2023-03-09 22:15:06,006 44k INFO Train Epoch: 3 [50%]
2023-03-09 22:15:06,006 44k INFO Losses: [2.2005116939544678, 2.316885471343994, 9.727132797241211, 25.34775733947754, 1.7601525783538818], step: 1200, lr: 9.99750015625e-05
2023-03-09 22:16:13,275 44k INFO Train Epoch: 3 [92%]
2023-03-09 22:16:13,276 44k INFO Losses: [2.70918869972229, 2.1484742164611816, 9.208160400390625, 21.771608352661133, 1.8784080743789673], step: 1400, lr: 9.99750015625e-05
2023-03-09 22:16:27,476 44k INFO ====> Epoch: 3, cost 177.51 s
2023-03-09 22:17:34,023 44k INFO Train Epoch: 4 [33%]
2023-03-09 22:17:34,023 44k INFO Losses: [2.629467725753784, 2.145282745361328, 5.309225559234619, 16.422060012817383, 1.6363394260406494], step: 1600, lr: 9.996250468730469e-05
2023-03-09 22:18:44,320 44k INFO Train Epoch: 4 [75%]
2023-03-09 22:18:44,320 44k INFO Losses: [2.359875440597534, 2.6663260459899902, 10.675108909606934, 22.943822860717773, 1.4112910032272339], step: 1800, lr: 9.996250468730469e-05
2023-03-09 22:19:26,339 44k INFO ====> Epoch: 4, cost 178.86 s
2023-03-09 22:20:05,291 44k INFO Train Epoch: 5 [17%]
2023-03-09 22:20:05,292 44k INFO Losses: [2.4184532165527344, 2.202078342437744, 10.578102111816406, 20.634305953979492, 1.78025221824646], step: 2000, lr: 9.995000937421877e-05
2023-03-09 22:20:08,393 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\G_2000.pth
2023-03-09 22:20:09,225 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\D_2000.pth
2023-03-09 22:21:20,064 44k INFO Train Epoch: 5 [58%]
2023-03-09 22:21:20,065 44k INFO Losses: [2.784832239151001, 1.8311623334884644, 9.575661659240723, 20.739303588867188, 1.4636569023132324], step: 2200, lr: 9.995000937421877e-05
2023-03-09 22:22:27,870 44k INFO ====> Epoch: 5, cost 181.53 s
2023-03-09 22:22:37,704 44k INFO Train Epoch: 6 [0%]
2023-03-09 22:22:37,705 44k INFO Losses: [2.751085042953491, 2.104429244995117, 10.574271202087402, 18.58207893371582, 1.2397831678390503], step: 2400, lr: 9.993751562304699e-05
2023-03-09 22:23:48,015 44k INFO Train Epoch: 6 [42%]
2023-03-09 22:23:48,015 44k INFO Losses: [2.56874680519104, 2.4305896759033203, 10.210469245910645, 24.242820739746094, 1.555556058883667], step: 2600, lr: 9.993751562304699e-05
2023-03-09 22:24:56,321 44k INFO Train Epoch: 6 [83%]
2023-03-09 22:24:56,322 44k INFO Losses: [2.872218370437622, 2.2893431186676025, 8.205846786499023, 19.27981948852539, 1.2091784477233887], step: 2800, lr: 9.993751562304699e-05
2023-03-09 22:25:24,331 44k INFO ====> Epoch: 6, cost 176.46 s
2023-03-09 22:26:15,877 44k INFO Train Epoch: 7 [25%]
2023-03-09 22:26:15,878 44k INFO Losses: [2.398118495941162, 2.2882518768310547, 8.92576789855957, 23.146297454833984, 1.445586919784546], step: 3000, lr: 9.99250234335941e-05
2023-03-09 22:26:18,873 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\G_3000.pth
2023-03-09 22:26:19,601 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\D_3000.pth
2023-03-09 22:27:28,314 44k INFO Train Epoch: 7 [67%]
2023-03-09 22:27:28,315 44k INFO Losses: [2.6192994117736816, 2.012535572052002, 10.190000534057617, 20.864042282104492, 1.5078903436660767], step: 3200, lr: 9.99250234335941e-05
2023-03-09 22:28:23,287 44k INFO ====> Epoch: 7, cost 178.96 s
2023-03-09 22:28:47,111 44k INFO Train Epoch: 8 [8%]
2023-03-09 22:28:47,111 44k INFO Losses: [2.3008713722229004, 2.3889431953430176, 11.322406768798828, 22.719818115234375, 1.4016550779342651], step: 3400, lr: 9.991253280566489e-05
2023-03-09 22:29:55,858 44k INFO Train Epoch: 8 [50%]
2023-03-09 22:29:55,859 44k INFO Losses: [2.2141306400299072, 2.5543158054351807, 13.307168960571289, 26.148569107055664, 1.8661701679229736], step: 3600, lr: 9.991253280566489e-05
2023-03-09 22:31:04,323 44k INFO Train Epoch: 8 [92%]
2023-03-09 22:31:04,323 44k INFO Losses: [2.6786041259765625, 2.179410696029663, 10.544212341308594, 22.434480667114258, 1.278396725654602], step: 3800, lr: 9.991253280566489e-05
2023-03-09 22:31:18,492 44k INFO ====> Epoch: 8, cost 175.21 s
2023-03-09 22:32:23,836 44k INFO Train Epoch: 9 [33%]
2023-03-09 22:32:23,837 44k INFO Losses: [2.529768228530884, 2.1434335708618164, 7.9929022789001465, 20.58642578125, 0.9003176689147949], step: 4000, lr: 9.990004373906418e-05
2023-03-09 22:32:26,758 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\G_4000.pth
2023-03-09 22:32:27,457 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\D_4000.pth
2023-03-09 22:32:28,084 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_1000.pth
2023-03-09 22:32:28,124 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_1000.pth
2023-03-09 22:33:36,481 44k INFO Train Epoch: 9 [75%]
2023-03-09 22:33:36,482 44k INFO Losses: [2.443915367126465, 2.1057639122009277, 9.841815948486328, 21.705745697021484, 1.4547427892684937], step: 4200, lr: 9.990004373906418e-05
2023-03-09 22:34:17,858 44k INFO ====> Epoch: 9, cost 179.37 s
2023-03-09 22:34:55,538 44k INFO Train Epoch: 10 [17%]
2023-03-09 22:34:55,539 44k INFO Losses: [2.3070597648620605, 2.4193506240844727, 12.322043418884277, 22.36187744140625, 1.4455803632736206], step: 4400, lr: 9.98875562335968e-05
2023-03-09 22:36:04,004 44k INFO Train Epoch: 10 [58%]
2023-03-09 22:36:04,004 44k INFO Losses: [2.621814489364624, 1.9164066314697266, 12.128844261169434, 18.02870750427246, 1.2619836330413818], step: 4600, lr: 9.98875562335968e-05
2023-03-09 22:37:12,968 44k INFO ====> Epoch: 10, cost 175.11 s
2023-03-09 22:37:22,837 44k INFO Train Epoch: 11 [0%]
2023-03-09 22:37:22,838 44k INFO Losses: [2.3263368606567383, 2.5736262798309326, 5.8480119705200195, 17.11333656311035, 1.1771115064620972], step: 4800, lr: 9.987507028906759e-05
2023-03-09 22:38:32,659 44k INFO Train Epoch: 11 [42%]
2023-03-09 22:38:32,659 44k INFO Losses: [2.5110223293304443, 2.317819833755493, 7.410079479217529, 23.0472354888916, 1.5325629711151123], step: 5000, lr: 9.987507028906759e-05
2023-03-09 22:38:35,624 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\G_5000.pth
2023-03-09 22:38:36,357 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\D_5000.pth
2023-03-09 22:38:37,047 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_2000.pth
2023-03-09 22:38:37,084 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_2000.pth
2023-03-09 22:39:45,442 44k INFO Train Epoch: 11 [83%]
2023-03-09 22:39:45,443 44k INFO Losses: [2.7240819931030273, 2.549437999725342, 6.221593856811523, 15.562834739685059, 1.3884763717651367], step: 5200, lr: 9.987507028906759e-05
2023-03-09 22:40:13,542 44k INFO ====> Epoch: 11, cost 180.57 s
2023-03-09 22:41:05,050 44k INFO Train Epoch: 12 [25%]
2023-03-09 22:41:05,050 44k INFO Losses: [2.5932059288024902, 2.160379648208618, 7.225218772888184, 23.120893478393555, 1.476784586906433], step: 5400, lr: 9.986258590528146e-05
2023-03-09 22:42:13,322 44k INFO Train Epoch: 12 [67%]
2023-03-09 22:42:13,323 44k INFO Losses: [2.898857593536377, 2.2347097396850586, 10.12026596069336, 21.01511573791504, 1.414766550064087], step: 5600, lr: 9.986258590528146e-05
2023-03-09 22:43:11,625 44k INFO ====> Epoch: 12, cost 178.08 s
2023-03-09 22:43:35,490 44k INFO Train Epoch: 13 [8%]
2023-03-09 22:43:35,490 44k INFO Losses: [2.5458338260650635, 2.402634382247925, 8.277725219726562, 21.97381019592285, 1.335066556930542], step: 5800, lr: 9.98501030820433e-05
2023-03-09 22:44:43,468 44k INFO Train Epoch: 13 [50%]
2023-03-09 22:44:43,468 44k INFO Losses: [2.6143736839294434, 1.9827158451080322, 8.365862846374512, 20.510074615478516, 1.3840632438659668], step: 6000, lr: 9.98501030820433e-05
2023-03-09 22:44:46,493 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\G_6000.pth
2023-03-09 22:44:47,222 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\D_6000.pth
2023-03-09 22:44:47,898 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_3000.pth
2023-03-09 22:44:47,944 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_3000.pth
2023-03-09 22:45:54,264 44k INFO Train Epoch: 13 [92%]
2023-03-09 22:45:54,264 44k INFO Losses: [2.5723958015441895, 2.407287120819092, 10.035711288452148, 18.892135620117188, 1.5453276634216309], step: 6200, lr: 9.98501030820433e-05
2023-03-09 22:46:07,916 44k INFO ====> Epoch: 13, cost 176.29 s
2023-03-09 22:47:11,509 44k INFO Train Epoch: 14 [33%]
2023-03-09 22:47:11,510 44k INFO Losses: [2.36806058883667, 2.2312498092651367, 8.099906921386719, 19.888795852661133, 1.1237491369247437], step: 6400, lr: 9.983762181915804e-05
2023-03-09 22:48:17,659 44k INFO Train Epoch: 14 [75%]
2023-03-09 22:48:17,659 44k INFO Losses: [2.6920676231384277, 2.126502513885498, 8.870818138122559, 19.55061149597168, 1.7066129446029663], step: 6600, lr: 9.983762181915804e-05
2023-03-09 22:48:57,688 44k INFO ====> Epoch: 14, cost 169.77 s
2023-03-09 22:49:34,205 44k INFO Train Epoch: 15 [17%]
2023-03-09 22:49:34,205 44k INFO Losses: [2.484997034072876, 2.1649365425109863, 9.29818058013916, 21.262903213500977, 1.5659104585647583], step: 6800, lr: 9.982514211643064e-05
2023-03-09 22:50:40,378 44k INFO Train Epoch: 15 [58%]
2023-03-09 22:50:40,379 44k INFO Losses: [2.6519980430603027, 2.251462459564209, 10.612017631530762, 20.631542205810547, 1.7596162557601929], step: 7000, lr: 9.982514211643064e-05
2023-03-09 22:50:43,320 44k INFO Saving model and optimizer state at iteration 15 to ./logs\44k\G_7000.pth
2023-03-09 22:50:43,989 44k INFO Saving model and optimizer state at iteration 15 to ./logs\44k\D_7000.pth
2023-03-09 22:50:44,665 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_4000.pth
2023-03-09 22:50:44,693 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_4000.pth
2023-03-09 22:51:51,602 44k INFO ====> Epoch: 15, cost 173.91 s
2023-03-09 22:52:01,297 44k INFO Train Epoch: 16 [0%]
2023-03-09 22:52:01,297 44k INFO Losses: [2.584434986114502, 2.8690125942230225, 8.623632431030273, 19.25146484375, 1.6848952770233154], step: 7200, lr: 9.981266397366609e-05
2023-03-09 22:53:08,805 44k INFO Train Epoch: 16 [42%]
2023-03-09 22:53:08,806 44k INFO Losses: [2.4831955432891846, 2.2092037200927734, 8.89979076385498, 21.747295379638672, 1.8002870082855225], step: 7400, lr: 9.981266397366609e-05
2023-03-09 22:54:15,179 44k INFO Train Epoch: 16 [83%]
2023-03-09 22:54:15,179 44k INFO Losses: [2.550377368927002, 2.139691114425659, 8.239158630371094, 18.619892120361328, 1.1555097103118896], step: 7600, lr: 9.981266397366609e-05
2023-03-09 22:54:42,179 44k INFO ====> Epoch: 16, cost 170.58 s
2023-03-09 22:55:32,201 44k INFO Train Epoch: 17 [25%]
2023-03-09 22:55:32,201 44k INFO Losses: [2.6224045753479004, 2.003127098083496, 5.673051357269287, 18.703475952148438, 1.4593850374221802], step: 7800, lr: 9.980018739066937e-05
2023-03-09 22:56:38,453 44k INFO Train Epoch: 17 [67%]
2023-03-09 22:56:38,453 44k INFO Losses: [2.5512290000915527, 2.491487979888916, 8.692081451416016, 21.967350006103516, 1.230060338973999], step: 8000, lr: 9.980018739066937e-05
2023-03-09 22:56:41,418 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\G_8000.pth
2023-03-09 22:56:42,069 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\D_8000.pth
2023-03-09 22:56:42,713 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_5000.pth
2023-03-09 22:56:42,749 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_5000.pth
2023-03-09 22:57:36,369 44k INFO ====> Epoch: 17, cost 174.19 s
2023-03-09 22:57:59,625 44k INFO Train Epoch: 18 [8%]
2023-03-09 22:57:59,626 44k INFO Losses: [2.681464195251465, 2.0676474571228027, 8.194007873535156, 18.121309280395508, 0.9629524946212769], step: 8200, lr: 9.978771236724554e-05
2023-03-09 22:59:06,766 44k INFO Train Epoch: 18 [50%]
2023-03-09 22:59:06,767 44k INFO Losses: [2.4612860679626465, 1.9128518104553223, 10.773208618164062, 21.291955947875977, 1.9208170175552368], step: 8400, lr: 9.978771236724554e-05
2023-03-09 23:00:13,278 44k INFO Train Epoch: 18 [92%]
2023-03-09 23:00:13,279 44k INFO Losses: [2.3332037925720215, 2.527757167816162, 10.766252517700195, 20.27721405029297, 1.6251143217086792], step: 8600, lr: 9.978771236724554e-05
2023-03-09 23:00:27,015 44k INFO ====> Epoch: 18, cost 170.65 s
2023-03-09 23:01:30,387 44k INFO Train Epoch: 19 [33%]
2023-03-09 23:01:30,387 44k INFO Losses: [2.6890785694122314, 2.048175573348999, 7.796837329864502, 17.393211364746094, 1.0918943881988525], step: 8800, lr: 9.977523890319963e-05
2023-03-09 23:02:36,951 44k INFO Train Epoch: 19 [75%]
2023-03-09 23:02:36,951 44k INFO Losses: [2.446784257888794, 2.24282169342041, 7.400733470916748, 19.949542999267578, 1.4323956966400146], step: 9000, lr: 9.977523890319963e-05
2023-03-09 23:02:39,875 44k INFO Saving model and optimizer state at iteration 19 to ./logs\44k\G_9000.pth
2023-03-09 23:02:40,549 44k INFO Saving model and optimizer state at iteration 19 to ./logs\44k\D_9000.pth
2023-03-09 23:02:41,199 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_6000.pth
2023-03-09 23:02:41,230 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_6000.pth
2023-03-09 23:03:21,475 44k INFO ====> Epoch: 19, cost 174.46 s
2023-03-09 23:03:58,103 44k INFO Train Epoch: 20 [17%]
2023-03-09 23:03:58,103 44k INFO Losses: [2.499037981033325, 2.2154788970947266, 9.236119270324707, 19.01803970336914, 1.029329776763916], step: 9200, lr: 9.976276699833672e-05
2023-03-09 23:05:04,654 44k INFO Train Epoch: 20 [58%]
2023-03-09 23:05:04,654 44k INFO Losses: [2.707878589630127, 2.379702091217041, 9.363155364990234, 21.514728546142578, 1.0883721113204956], step: 9400, lr: 9.976276699833672e-05
2023-03-09 23:06:11,584 44k INFO ====> Epoch: 20, cost 170.11 s
2023-03-09 23:06:21,395 44k INFO Train Epoch: 21 [0%]
2023-03-09 23:06:21,395 44k INFO Losses: [2.6456973552703857, 2.2951180934906006, 7.92765998840332, 16.195425033569336, 1.277409315109253], step: 9600, lr: 9.975029665246193e-05
2023-03-09 23:07:28,401 44k INFO Train Epoch: 21 [42%]
2023-03-09 23:07:28,402 44k INFO Losses: [2.240675926208496, 2.32568097114563, 10.639389038085938, 23.330589294433594, 1.4125117063522339], step: 9800, lr: 9.975029665246193e-05
2023-03-09 23:08:34,540 44k INFO Train Epoch: 21 [83%]
2023-03-09 23:08:34,540 44k INFO Losses: [2.6852498054504395, 2.272972345352173, 9.825559616088867, 21.62499237060547, 1.1525027751922607], step: 10000, lr: 9.975029665246193e-05
2023-03-09 23:08:37,490 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\G_10000.pth
2023-03-09 23:08:38,177 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\D_10000.pth
2023-03-09 23:08:38,826 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_7000.pth
2023-03-09 23:08:38,855 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_7000.pth
2023-03-09 23:09:05,706 44k INFO ====> Epoch: 21, cost 174.12 s
2023-03-09 23:09:56,858 44k INFO Train Epoch: 22 [25%]
2023-03-09 23:09:56,859 44k INFO Losses: [2.238344192504883, 2.6883773803710938, 9.67750358581543, 22.289474487304688, 1.5644885301589966], step: 10200, lr: 9.973782786538036e-05
2023-03-09 23:11:03,799 44k INFO Train Epoch: 22 [67%]
2023-03-09 23:11:03,799 44k INFO Losses: [2.553804397583008, 2.063002586364746, 8.806561470031738, 20.5889949798584, 1.483044981956482], step: 10400, lr: 9.973782786538036e-05
2023-03-09 23:11:58,673 44k INFO ====> Epoch: 22, cost 172.97 s
2023-03-09 23:12:22,758 44k INFO Train Epoch: 23 [8%]
2023-03-09 23:12:22,759 44k INFO Losses: [2.6479604244232178, 2.37148380279541, 5.944035053253174, 19.03931427001953, 0.8333165645599365], step: 10600, lr: 9.972536063689719e-05
2023-03-09 23:13:29,695 44k INFO Train Epoch: 23 [50%]
2023-03-09 23:13:29,695 44k INFO Losses: [2.4879696369171143, 2.3861594200134277, 9.920605659484863, 21.222816467285156, 1.5764923095703125], step: 10800, lr: 9.972536063689719e-05
2023-03-09 23:14:36,814 44k INFO Train Epoch: 23 [92%]
2023-03-09 23:14:36,815 44k INFO Losses: [2.3822216987609863, 2.1951522827148438, 9.054584503173828, 21.033418655395508, 1.300223708152771], step: 11000, lr: 9.972536063689719e-05
2023-03-09 23:14:39,782 44k INFO Saving model and optimizer state at iteration 23 to ./logs\44k\G_11000.pth
2023-03-09 23:14:40,521 44k INFO Saving model and optimizer state at iteration 23 to ./logs\44k\D_11000.pth
2023-03-09 23:14:41,229 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_8000.pth
2023-03-09 23:14:41,275 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_8000.pth
2023-03-09 23:14:55,049 44k INFO ====> Epoch: 23, cost 176.38 s
2023-03-09 23:16:03,865 44k INFO Train Epoch: 24 [33%]
2023-03-09 23:16:03,866 44k INFO Losses: [2.527130365371704, 2.1844637393951416, 9.962337493896484, 19.235326766967773, 1.5654696226119995], step: 11200, lr: 9.971289496681757e-05
2023-03-09 23:17:12,570 44k INFO Train Epoch: 24 [75%]
2023-03-09 23:17:12,571 44k INFO Losses: [2.464890480041504, 1.9783828258514404, 9.894397735595703, 19.97930335998535, 1.1896278858184814], step: 11400, lr: 9.971289496681757e-05
2023-03-09 23:17:53,847 44k INFO ====> Epoch: 24, cost 178.80 s
2023-03-09 23:18:32,893 44k INFO Train Epoch: 25 [17%]
2023-03-09 23:18:32,893 44k INFO Losses: [2.3611230850219727, 2.045161485671997, 11.447516441345215, 20.11899757385254, 0.9721967577934265], step: 11600, lr: 9.970043085494672e-05
2023-03-09 23:19:41,104 44k INFO Train Epoch: 25 [58%]
2023-03-09 23:19:41,105 44k INFO Losses: [2.6343815326690674, 2.0267837047576904, 8.24674129486084, 18.057497024536133, 0.9985759854316711], step: 11800, lr: 9.970043085494672e-05
2023-03-09 23:20:50,566 44k INFO ====> Epoch: 25, cost 176.72 s
2023-03-09 23:21:00,706 44k INFO Train Epoch: 26 [0%]
2023-03-09 23:21:00,706 44k INFO Losses: [2.108116388320923, 2.732480049133301, 6.870335578918457, 16.327394485473633, 1.0612876415252686], step: 12000, lr: 9.968796830108985e-05
2023-03-09 23:21:03,750 44k INFO Saving model and optimizer state at iteration 26 to ./logs\44k\G_12000.pth
2023-03-09 23:21:04,453 44k INFO Saving model and optimizer state at iteration 26 to ./logs\44k\D_12000.pth
2023-03-09 23:21:05,077 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_9000.pth
2023-03-09 23:21:05,120 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_9000.pth
2023-03-09 23:22:14,609 44k INFO Train Epoch: 26 [42%]
2023-03-09 23:22:14,610 44k INFO Losses: [2.523098945617676, 2.2026989459991455, 9.008049011230469, 21.576215744018555, 1.4500067234039307], step: 12200, lr: 9.968796830108985e-05
2023-03-09 23:23:21,857 44k INFO Train Epoch: 26 [83%]
2023-03-09 23:23:21,857 44k INFO Losses: [2.6288836002349854, 2.0201313495635986, 7.990760803222656, 14.86129379272461, 1.378536343574524], step: 12400, lr: 9.968796830108985e-05
2023-03-09 23:23:49,151 44k INFO ====> Epoch: 26, cost 178.59 s
2023-03-09 23:24:40,579 44k INFO Train Epoch: 27 [25%]
2023-03-09 23:24:40,580 44k INFO Losses: [2.413933277130127, 2.261289358139038, 6.761435031890869, 19.1599178314209, 1.4506725072860718], step: 12600, lr: 9.967550730505221e-05
2023-03-09 23:25:48,998 44k INFO Train Epoch: 27 [67%]
2023-03-09 23:25:48,998 44k INFO Losses: [2.7390246391296387, 2.257850170135498, 8.31356430053711, 21.033832550048828, 1.130155086517334], step: 12800, lr: 9.967550730505221e-05
2023-03-09 23:26:44,379 44k INFO ====> Epoch: 27, cost 175.23 s
2023-03-09 23:27:08,014 44k INFO Train Epoch: 28 [8%]
2023-03-09 23:27:08,014 44k INFO Losses: [2.575988292694092, 2.568122625350952, 9.701324462890625, 21.5109806060791, 1.560456395149231], step: 13000, lr: 9.966304786663908e-05
2023-03-09 23:27:11,017 44k INFO Saving model and optimizer state at iteration 28 to ./logs\44k\G_13000.pth
2023-03-09 23:27:11,687 44k INFO Saving model and optimizer state at iteration 28 to ./logs\44k\D_13000.pth
2023-03-09 23:27:12,306 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_10000.pth
2023-03-09 23:27:12,351 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_10000.pth
2023-03-09 23:28:18,669 44k INFO Train Epoch: 28 [50%]
2023-03-09 23:28:18,670 44k INFO Losses: [2.396379232406616, 2.5859997272491455, 8.974349975585938, 21.1258487701416, 1.407135009765625], step: 13200, lr: 9.966304786663908e-05
2023-03-09 23:29:29,448 44k INFO Train Epoch: 28 [92%]
2023-03-09 23:29:29,448 44k INFO Losses: [2.6594862937927246, 1.9680594205856323, 8.101217269897461, 15.953340530395508, 0.9844779968261719], step: 13400, lr: 9.966304786663908e-05
2023-03-09 23:29:43,841 44k INFO ====> Epoch: 28, cost 179.46 s
2023-03-09 23:30:54,018 44k INFO Train Epoch: 29 [33%]
2023-03-09 23:30:54,019 44k INFO Losses: [2.4747812747955322, 2.02663254737854, 10.244929313659668, 21.2783145904541, 1.326253890991211], step: 13600, lr: 9.965058998565574e-05
2023-03-09 23:32:04,615 44k INFO Train Epoch: 29 [75%]
2023-03-09 23:32:04,616 44k INFO Losses: [2.3540537357330322, 2.3179657459259033, 10.381710052490234, 20.783424377441406, 1.2607688903808594], step: 13800, lr: 9.965058998565574e-05
2023-03-09 23:32:47,467 44k INFO ====> Epoch: 29, cost 183.63 s
2023-03-09 23:33:26,410 44k INFO Train Epoch: 30 [17%]
2023-03-09 23:33:26,411 44k INFO Losses: [2.474283218383789, 2.4970040321350098, 8.310688972473145, 21.354806900024414, 1.3590052127838135], step: 14000, lr: 9.963813366190753e-05
2023-03-09 23:33:29,452 44k INFO Saving model and optimizer state at iteration 30 to ./logs\44k\G_14000.pth
2023-03-09 23:33:30,221 44k INFO Saving model and optimizer state at iteration 30 to ./logs\44k\D_14000.pth
2023-03-09 23:33:30,839 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_11000.pth
2023-03-09 23:33:30,869 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_11000.pth
2023-03-09 23:34:39,157 44k INFO Train Epoch: 30 [58%]
2023-03-09 23:34:39,158 44k INFO Losses: [2.6338324546813965, 2.0900826454162598, 8.30550479888916, 19.974449157714844, 1.6999881267547607], step: 14200, lr: 9.963813366190753e-05
2023-03-09 23:35:45,919 44k INFO ====> Epoch: 30, cost 178.45 s
2023-03-09 23:35:55,590 44k INFO Train Epoch: 31 [0%]
2023-03-09 23:35:55,591 44k INFO Losses: [2.589956521987915, 2.11285662651062, 6.709275245666504, 14.461118698120117, 1.5959913730621338], step: 14400, lr: 9.962567889519979e-05
2023-03-09 23:37:02,721 44k INFO Train Epoch: 31 [42%]
2023-03-09 23:37:02,722 44k INFO Losses: [2.5060269832611084, 2.0969247817993164, 8.611684799194336, 20.40202522277832, 1.4926910400390625], step: 14600, lr: 9.962567889519979e-05
2023-03-09 23:38:09,303 44k INFO Train Epoch: 31 [83%]
2023-03-09 23:38:09,304 44k INFO Losses: [2.44972562789917, 2.209604263305664, 8.374757766723633, 15.323009490966797, 1.1994714736938477], step: 14800, lr: 9.962567889519979e-05
2023-03-09 23:38:37,051 44k INFO ====> Epoch: 31, cost 171.13 s
2023-03-09 23:39:27,683 44k INFO Train Epoch: 32 [25%]
2023-03-09 23:39:27,684 44k INFO Losses: [3.007453680038452, 2.0412755012512207, 8.696616172790527, 22.206615447998047, 1.6789324283599854], step: 15000, lr: 9.961322568533789e-05
2023-03-09 23:39:30,555 44k INFO Saving model and optimizer state at iteration 32 to ./logs\44k\G_15000.pth
2023-03-09 23:39:31,269 44k INFO Saving model and optimizer state at iteration 32 to ./logs\44k\D_15000.pth
2023-03-09 23:39:31,905 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_12000.pth
2023-03-09 23:39:31,946 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_12000.pth
2023-03-09 23:40:38,063 44k INFO Train Epoch: 32 [67%]
2023-03-09 23:40:38,063 44k INFO Losses: [2.580566883087158, 2.168311834335327, 10.336328506469727, 21.996471405029297, 1.1694210767745972], step: 15200, lr: 9.961322568533789e-05
2023-03-09 23:41:31,924 44k INFO ====> Epoch: 32, cost 174.87 s
2023-03-09 23:41:55,139 44k INFO Train Epoch: 33 [8%]
2023-03-09 23:41:55,139 44k INFO Losses: [2.5315604209899902, 2.3219544887542725, 7.4105305671691895, 18.05023956298828, 1.3385566473007202], step: 15400, lr: 9.960077403212722e-05
2023-03-09 23:43:01,737 44k INFO Train Epoch: 33 [50%]
2023-03-09 23:43:01,737 44k INFO Losses: [2.178466558456421, 2.4093775749206543, 8.204710006713867, 19.970046997070312, 1.2561421394348145], step: 15600, lr: 9.960077403212722e-05
2023-03-09 23:44:09,178 44k INFO Train Epoch: 33 [92%]
2023-03-09 23:44:09,179 44k INFO Losses: [2.4576892852783203, 2.340421199798584, 9.03681755065918, 19.720226287841797, 1.4429585933685303], step: 15800, lr: 9.960077403212722e-05
2023-03-09 23:44:23,625 44k INFO ====> Epoch: 33, cost 171.70 s
2023-03-09 23:45:41,952 44k INFO Train Epoch: 34 [33%]
2023-03-09 23:45:41,952 44k INFO Losses: [2.541362762451172, 2.1348211765289307, 8.14700984954834, 17.03746223449707, 1.528643250465393], step: 16000, lr: 9.95883239353732e-05
2023-03-09 23:45:45,488 44k INFO Saving model and optimizer state at iteration 34 to ./logs\44k\G_16000.pth
2023-03-09 23:45:46,269 44k INFO Saving model and optimizer state at iteration 34 to ./logs\44k\D_16000.pth
2023-03-09 23:45:46,971 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_13000.pth
2023-03-09 23:45:47,003 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_13000.pth
2023-03-09 23:46:55,293 44k INFO Train Epoch: 34 [75%]
2023-03-09 23:46:55,294 44k INFO Losses: [2.2557361125946045, 2.1361310482025146, 11.321039199829102, 22.42060661315918, 1.6674526929855347], step: 16200, lr: 9.95883239353732e-05
2023-03-09 23:47:36,379 44k INFO ====> Epoch: 34, cost 192.75 s
2023-03-09 23:48:13,981 44k INFO Train Epoch: 35 [17%]
2023-03-09 23:48:13,982 44k INFO Losses: [2.5682592391967773, 2.5494468212127686, 10.363664627075195, 20.090839385986328, 1.3562754392623901], step: 16400, lr: 9.957587539488128e-05
2023-03-09 23:49:21,678 44k INFO Train Epoch: 35 [58%]
2023-03-09 23:49:21,678 44k INFO Losses: [2.6154041290283203, 1.9948923587799072, 8.994477272033691, 17.46767234802246, 1.2940129041671753], step: 16600, lr: 9.957587539488128e-05
2023-03-09 23:50:30,138 44k INFO ====> Epoch: 35, cost 173.76 s
2023-03-09 23:50:40,102 44k INFO Train Epoch: 36 [0%]
2023-03-09 23:50:40,102 44k INFO Losses: [2.4660754203796387, 1.9663159847259521, 5.942327499389648, 14.918792724609375, 1.3304672241210938], step: 16800, lr: 9.956342841045691e-05
2023-03-09 23:51:49,804 44k INFO Train Epoch: 36 [42%]
2023-03-09 23:51:49,804 44k INFO Losses: [2.4021859169006348, 2.3796520233154297, 10.489449501037598, 21.498687744140625, 1.2586421966552734], step: 17000, lr: 9.956342841045691e-05
2023-03-09 23:51:52,692 44k INFO Saving model and optimizer state at iteration 36 to ./logs\44k\G_17000.pth
2023-03-09 23:51:53,378 44k INFO Saving model and optimizer state at iteration 36 to ./logs\44k\D_17000.pth
2023-03-09 23:51:54,026 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_14000.pth
2023-03-09 23:51:54,064 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_14000.pth
2023-03-09 23:53:02,828 44k INFO Train Epoch: 36 [83%]
2023-03-09 23:53:02,828 44k INFO Losses: [2.530822277069092, 2.359158754348755, 5.985983371734619, 17.314420700073242, 1.2479362487792969], step: 17200, lr: 9.956342841045691e-05
2023-03-09 23:53:30,740 44k INFO ====> Epoch: 36, cost 180.60 s
2023-03-09 23:54:22,487 44k INFO Train Epoch: 37 [25%]
2023-03-09 23:54:22,487 44k INFO Losses: [2.468174457550049, 2.5823373794555664, 7.509805202484131, 20.679941177368164, 1.1722922325134277], step: 17400, lr: 9.95509829819056e-05
2023-03-09 23:55:30,989 44k INFO Train Epoch: 37 [67%]
2023-03-09 23:55:30,989 44k INFO Losses: [2.440070152282715, 2.3759515285491943, 7.461362838745117, 20.658775329589844, 1.294507384300232], step: 17600, lr: 9.95509829819056e-05
2023-03-09 23:56:27,303 44k INFO ====> Epoch: 37, cost 176.56 s
2023-03-09 23:56:51,642 44k INFO Train Epoch: 38 [8%]
2023-03-09 23:56:51,643 44k INFO Losses: [2.51751708984375, 2.3274128437042236, 8.263319969177246, 20.75530433654785, 1.1685583591461182], step: 17800, lr: 9.953853910903285e-05
2023-03-09 23:58:01,096 44k INFO Train Epoch: 38 [50%]
2023-03-09 23:58:01,097 44k INFO Losses: [2.5445449352264404, 2.6543474197387695, 11.771062850952148, 23.546571731567383, 1.391394019126892], step: 18000, lr: 9.953853910903285e-05
2023-03-09 23:58:04,148 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\G_18000.pth
2023-03-09 23:58:04,840 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\D_18000.pth
2023-03-09 23:58:05,487 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_15000.pth
2023-03-09 23:58:05,532 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_15000.pth
2023-03-09 23:59:14,115 44k INFO Train Epoch: 38 [92%]
2023-03-09 23:59:14,115 44k INFO Losses: [2.6619672775268555, 2.097445249557495, 6.994243621826172, 20.368392944335938, 1.6910580396652222], step: 18200, lr: 9.953853910903285e-05
2023-03-09 23:59:28,319 44k INFO ====> Epoch: 38, cost 181.02 s
2023-03-10 00:00:34,196 44k INFO Train Epoch: 39 [33%]
2023-03-10 00:00:34,196 44k INFO Losses: [2.468571662902832, 2.37003231048584, 10.23530387878418, 19.475954055786133, 1.09866464138031], step: 18400, lr: 9.952609679164422e-05
2023-03-10 00:01:42,770 44k INFO Train Epoch: 39 [75%]
2023-03-10 00:01:42,771 44k INFO Losses: [2.2688961029052734, 2.219367742538452, 10.984037399291992, 19.067935943603516, 1.0729482173919678], step: 18600, lr: 9.952609679164422e-05
2023-03-10 00:02:24,217 44k INFO ====> Epoch: 39, cost 175.90 s
2023-03-10 00:03:02,512 44k INFO Train Epoch: 40 [17%]
2023-03-10 00:03:02,512 44k INFO Losses: [2.4043712615966797, 2.078387498855591, 7.829358100891113, 17.97997283935547, 1.2149159908294678], step: 18800, lr: 9.951365602954526e-05
2023-03-10 00:04:10,942 44k INFO Train Epoch: 40 [58%]
2023-03-10 00:04:10,943 44k INFO Losses: [2.7210350036621094, 1.999556541442871, 8.090421676635742, 19.198190689086914, 1.1153842210769653], step: 19000, lr: 9.951365602954526e-05
2023-03-10 00:04:14,095 44k INFO Saving model and optimizer state at iteration 40 to ./logs\44k\G_19000.pth
2023-03-10 00:04:14,775 44k INFO Saving model and optimizer state at iteration 40 to ./logs\44k\D_19000.pth
2023-03-10 00:04:15,443 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_16000.pth
2023-03-10 00:04:15,472 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_16000.pth
2023-03-10 00:05:24,294 44k INFO ====> Epoch: 40, cost 180.08 s
2023-03-10 00:05:34,389 44k INFO Train Epoch: 41 [0%]
2023-03-10 00:05:34,390 44k INFO Losses: [2.2023415565490723, 2.5993332862854004, 10.31195068359375, 21.043968200683594, 1.18059504032135], step: 19200, lr: 9.950121682254156e-05
2023-03-10 00:06:43,986 44k INFO Train Epoch: 41 [42%]
2023-03-10 00:06:43,987 44k INFO Losses: [2.6001029014587402, 1.9843547344207764, 7.296051502227783, 20.282991409301758, 1.4453257322311401], step: 19400, lr: 9.950121682254156e-05
2023-03-10 00:07:52,373 44k INFO Train Epoch: 41 [83%]
2023-03-10 00:07:52,373 44k INFO Losses: [2.8434841632843018, 2.0593206882476807, 7.50341272354126, 15.828989028930664, 1.1855241060256958], step: 19600, lr: 9.950121682254156e-05
2023-03-10 00:08:20,504 44k INFO ====> Epoch: 41, cost 176.21 s
2023-03-10 00:09:11,847 44k INFO Train Epoch: 42 [25%]
2023-03-10 00:09:11,847 44k INFO Losses: [2.3398730754852295, 2.205005645751953, 8.405223846435547, 21.237323760986328, 1.3119484186172485], step: 19800, lr: 9.948877917043875e-05
2023-03-10 00:10:21,325 44k INFO Train Epoch: 42 [67%]
2023-03-10 00:10:21,326 44k INFO Losses: [2.564417600631714, 2.2666187286376953, 7.732751369476318, 20.512413024902344, 1.5454299449920654], step: 20000, lr: 9.948877917043875e-05
2023-03-10 00:10:24,293 44k INFO Saving model and optimizer state at iteration 42 to ./logs\44k\G_20000.pth
2023-03-10 00:10:25,030 44k INFO Saving model and optimizer state at iteration 42 to ./logs\44k\D_20000.pth
2023-03-10 00:10:25,678 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_17000.pth
2023-03-10 00:10:25,715 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_17000.pth
2023-03-10 00:11:21,584 44k INFO ====> Epoch: 42, cost 181.08 s
2023-03-10 00:11:45,561 44k INFO Train Epoch: 43 [8%]
2023-03-10 00:11:45,562 44k INFO Losses: [2.582317352294922, 2.2756662368774414, 7.690517425537109, 19.34727668762207, 1.3748066425323486], step: 20200, lr: 9.947634307304244e-05
2023-03-10 00:12:54,895 44k INFO Train Epoch: 43 [50%]
2023-03-10 00:12:54,896 44k INFO Losses: [2.4092278480529785, 2.3263907432556152, 10.733685493469238, 21.365232467651367, 1.4302860498428345], step: 20400, lr: 9.947634307304244e-05
2023-03-10 00:14:04,154 44k INFO Train Epoch: 43 [92%]
2023-03-10 00:14:04,154 44k INFO Losses: [2.6259164810180664, 2.1064963340759277, 7.4166154861450195, 17.447696685791016, 1.1309740543365479], step: 20600, lr: 9.947634307304244e-05
2023-03-10 00:14:18,308 44k INFO ====> Epoch: 43, cost 176.72 s
2023-03-10 00:15:24,905 44k INFO Train Epoch: 44 [33%]
2023-03-10 00:15:24,905 44k INFO Losses: [2.539393186569214, 2.2365403175354004, 9.663187980651855, 19.52463150024414, 1.3997358083724976], step: 20800, lr: 9.94639085301583e-05
2023-03-10 00:16:34,403 44k INFO Train Epoch: 44 [75%]
2023-03-10 00:16:34,403 44k INFO Losses: [2.2573349475860596, 2.207044839859009, 9.318681716918945, 19.270915985107422, 1.3963450193405151], step: 21000, lr: 9.94639085301583e-05
2023-03-10 00:16:37,591 44k INFO Saving model and optimizer state at iteration 44 to ./logs\44k\G_21000.pth
2023-03-10 00:16:38,371 44k INFO Saving model and optimizer state at iteration 44 to ./logs\44k\D_21000.pth
2023-03-10 00:16:39,037 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_18000.pth
2023-03-10 00:16:39,069 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_18000.pth 2023-03-10 00:17:21,565 44k INFO ====> Epoch: 44, cost 183.26 s 2023-03-10 00:17:59,913 44k INFO Train Epoch: 45 [17%] 2023-03-10 00:17:59,913 44k INFO Losses: [2.533785820007324, 2.254905939102173, 11.04792308807373, 21.98154640197754, 1.696187973022461], step: 21200, lr: 9.945147554159202e-05 2023-03-10 00:19:08,333 44k INFO Train Epoch: 45 [58%] 2023-03-10 00:19:08,333 44k INFO Losses: [2.7740299701690674, 1.8208223581314087, 8.827348709106445, 18.517635345458984, 1.6870830059051514], step: 21400, lr: 9.945147554159202e-05 2023-03-10 00:20:16,091 44k INFO ====> Epoch: 45, cost 174.53 s 2023-03-10 00:20:25,665 44k INFO Train Epoch: 46 [0%] 2023-03-10 00:20:25,666 44k INFO Losses: [2.31622052192688, 2.598419427871704, 9.055929183959961, 21.344236373901367, 1.2862190008163452], step: 21600, lr: 9.943904410714931e-05 2023-03-10 00:21:33,562 44k INFO Train Epoch: 46 [42%] 2023-03-10 00:21:33,562 44k INFO Losses: [2.6425986289978027, 2.0487513542175293, 7.534739971160889, 21.130722045898438, 1.5636035203933716], step: 21800, lr: 9.943904410714931e-05 2023-03-10 00:22:44,470 44k INFO Train Epoch: 46 [83%] 2023-03-10 00:22:44,471 44k INFO Losses: [2.6624033451080322, 2.5943055152893066, 7.559395790100098, 16.688627243041992, 1.3365025520324707], step: 22000, lr: 9.943904410714931e-05 2023-03-10 00:22:47,426 44k INFO Saving model and optimizer state at iteration 46 to ./logs\44k\G_22000.pth 2023-03-10 00:22:48,192 44k INFO Saving model and optimizer state at iteration 46 to ./logs\44k\D_22000.pth 2023-03-10 00:22:48,838 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_19000.pth 2023-03-10 00:22:48,868 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_19000.pth 2023-03-10 00:23:16,351 44k INFO ====> Epoch: 46, cost 180.26 s 2023-03-10 00:24:07,162 44k INFO Train Epoch: 47 [25%] 2023-03-10 00:24:07,162 44k INFO Losses: [2.2071139812469482, 2.408928871154785, 10.053417205810547, 23.653078079223633, 1.3212175369262695], step: 22200, lr: 9.942661422663591e-05 2023-03-10 00:25:14,806 44k INFO Train Epoch: 47 [67%] 2023-03-10 00:25:14,807 44k INFO Losses: [2.555342197418213, 2.053321361541748, 7.891706466674805, 20.104381561279297, 1.2838119268417358], step: 22400, lr: 9.942661422663591e-05 2023-03-10 00:26:09,103 44k INFO ====> Epoch: 47, cost 172.75 s 2023-03-10 00:26:32,469 44k INFO Train Epoch: 48 [8%] 2023-03-10 00:26:32,469 44k INFO Losses: [2.4753148555755615, 2.5164923667907715, 8.472489356994629, 19.89665412902832, 1.5288535356521606], step: 22600, lr: 9.941418589985758e-05 2023-03-10 00:27:40,382 44k INFO Train Epoch: 48 [50%] 2023-03-10 00:27:40,382 44k INFO Losses: [2.1079154014587402, 2.5065455436706543, 12.964305877685547, 22.81366539001465, 1.487200379371643], step: 22800, lr: 9.941418589985758e-05 2023-03-10 00:28:48,163 44k INFO Train Epoch: 48 [92%] 2023-03-10 00:28:48,163 44k INFO Losses: [2.1043496131896973, 2.3964056968688965, 12.112120628356934, 22.33637809753418, 1.2500946521759033], step: 23000, lr: 9.941418589985758e-05 2023-03-10 00:28:51,075 44k INFO Saving model and optimizer state at iteration 48 to ./logs\44k\G_23000.pth 2023-03-10 00:28:51,741 44k INFO Saving model and optimizer state at iteration 48 to ./logs\44k\D_23000.pth 2023-03-10 00:28:52,375 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_20000.pth 2023-03-10 00:28:52,405 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_20000.pth 2023-03-10 00:29:06,093 44k INFO ====> Epoch: 48, cost 176.99 s 2023-03-10 00:30:10,750 44k INFO Train Epoch: 49 [33%] 2023-03-10 00:30:10,751 44k INFO Losses: [2.5487921237945557, 2.2012343406677246, 10.218581199645996, 17.530025482177734, 1.2369277477264404], step: 23200, lr: 9.940175912662009e-05 2023-03-10 00:31:18,383 44k INFO Train Epoch: 49 [75%] 2023-03-10 00:31:18,384 44k INFO Losses: [2.455705165863037, 2.133345603942871, 9.95699405670166, 19.278167724609375, 1.496594786643982], step: 23400, lr: 9.940175912662009e-05 2023-03-10 00:31:59,189 44k INFO ====> Epoch: 49, cost 173.10 s 2023-03-10 00:32:36,403 44k INFO Train Epoch: 50 [17%] 2023-03-10 00:32:36,403 44k INFO Losses: [2.5535242557525635, 2.162964105606079, 10.07349681854248, 20.256912231445312, 0.8696603178977966], step: 23600, lr: 9.938933390672926e-05 2023-03-10 00:33:44,030 44k INFO Train Epoch: 50 [58%] 2023-03-10 00:33:44,030 44k INFO Losses: [2.6740314960479736, 2.082170248031616, 6.957486629486084, 17.2308292388916, 1.3569444417953491], step: 23800, lr: 9.938933390672926e-05 2023-03-10 00:34:52,123 44k INFO ====> Epoch: 50, cost 172.93 s 2023-03-10 00:35:01,761 44k INFO Train Epoch: 51 [0%] 2023-03-10 00:35:01,761 44k INFO Losses: [2.55359148979187, 2.621748924255371, 8.933265686035156, 20.598188400268555, 1.4574673175811768], step: 24000, lr: 9.937691023999092e-05 2023-03-10 00:35:04,630 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\G_24000.pth 2023-03-10 00:35:05,301 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\D_24000.pth 2023-03-10 00:35:05,939 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_21000.pth 2023-03-10 00:35:05,970 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_21000.pth 2023-03-10 00:36:14,180 44k INFO Train Epoch: 51 [42%] 2023-03-10 00:36:14,180 44k INFO Losses: [2.3972549438476562, 2.119649648666382, 10.199462890625, 21.869224548339844, 1.401321291923523], step: 24200, lr: 9.937691023999092e-05 2023-03-10 00:37:21,752 44k INFO Train Epoch: 51 [83%] 2023-03-10 00:37:21,752 44k INFO Losses: [2.5760087966918945, 2.5573372840881348, 8.357288360595703, 17.33403968811035, 1.2820732593536377], step: 24400, lr: 9.937691023999092e-05 2023-03-10 00:37:49,272 44k INFO ====> Epoch: 51, cost 177.15 s 2023-03-10 00:38:39,998 44k INFO Train Epoch: 52 [25%] 2023-03-10 00:38:39,999 44k INFO Losses: [2.3621816635131836, 2.3909835815429688, 9.685585975646973, 20.309288024902344, 1.4920895099639893], step: 24600, lr: 9.936448812621091e-05 2023-03-10 00:39:47,559 44k INFO Train Epoch: 52 [67%] 2023-03-10 00:39:47,559 44k INFO Losses: [2.430614709854126, 2.1781272888183594, 11.714396476745605, 20.615680694580078, 1.3542485237121582], step: 24800, lr: 9.936448812621091e-05 2023-03-10 00:40:42,014 44k INFO ====> Epoch: 52, cost 172.74 s 2023-03-10 00:41:05,433 44k INFO Train Epoch: 53 [8%] 2023-03-10 00:41:05,434 44k INFO Losses: [2.4666404724121094, 2.3001811504364014, 9.228809356689453, 19.622970581054688, 0.9920151829719543], step: 25000, lr: 9.935206756519513e-05 2023-03-10 00:41:08,286 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\G_25000.pth 2023-03-10 00:41:08,945 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\D_25000.pth 2023-03-10 00:41:09,594 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_22000.pth 2023-03-10 00:41:09,624 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_22000.pth 2023-03-10 00:42:17,451 44k INFO Train Epoch: 53 [50%] 2023-03-10 00:42:17,452 44k INFO Losses: [2.3194563388824463, 2.355863571166992, 9.475533485412598, 20.43379783630371, 1.24263334274292], step: 25200, lr: 9.935206756519513e-05 2023-03-10 00:43:25,297 44k INFO Train Epoch: 53 [92%] 2023-03-10 00:43:25,297 44k INFO Losses: [2.374337911605835, 2.590486764907837, 8.226838111877441, 20.308446884155273, 1.3287748098373413], step: 25400, lr: 9.935206756519513e-05 2023-03-10 00:43:39,314 44k INFO ====> Epoch: 53, cost 177.30 s 2023-03-10 00:44:43,712 44k INFO Train Epoch: 54 [33%] 2023-03-10 00:44:43,713 44k INFO Losses: [2.559992551803589, 2.1120166778564453, 5.345562934875488, 15.033105850219727, 1.0177208185195923], step: 25600, lr: 9.933964855674948e-05 2023-03-10 00:45:51,998 44k INFO Train Epoch: 54 [75%] 2023-03-10 00:45:51,998 44k INFO Losses: [2.465874671936035, 2.1739163398742676, 8.172489166259766, 19.456560134887695, 1.1147669553756714], step: 25800, lr: 9.933964855674948e-05 2023-03-10 00:46:32,512 44k INFO ====> Epoch: 54, cost 173.20 s 2023-03-10 00:47:09,387 44k INFO Train Epoch: 55 [17%] 2023-03-10 00:47:09,387 44k INFO Losses: [2.1682655811309814, 2.5019171237945557, 12.015508651733398, 22.504173278808594, 1.5386526584625244], step: 26000, lr: 9.932723110067987e-05 2023-03-10 00:47:12,320 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\G_26000.pth 2023-03-10 00:47:12,978 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\D_26000.pth 2023-03-10 00:47:13,630 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_23000.pth 2023-03-10 00:47:13,667 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_23000.pth 2023-03-10 00:48:20,450 44k INFO Train Epoch: 55 [58%] 2023-03-10 00:48:20,451 44k INFO Losses: [2.4462766647338867, 2.1311614513397217, 9.153489112854004, 19.178327560424805, 1.0876898765563965], step: 26200, lr: 9.932723110067987e-05 2023-03-10 00:49:28,045 44k INFO ====> Epoch: 55, cost 175.53 s 2023-03-10 00:49:37,659 44k INFO Train Epoch: 56 [0%] 2023-03-10 00:49:37,659 44k INFO Losses: [2.460718870162964, 2.5246050357818604, 9.879639625549316, 21.140241622924805, 1.204176664352417], step: 26400, lr: 9.931481519679228e-05 2023-03-10 00:50:45,534 44k INFO Train Epoch: 56 [42%] 2023-03-10 00:50:45,535 44k INFO Losses: [2.464543104171753, 2.255500078201294, 10.844827651977539, 22.04778480529785, 1.099898099899292], step: 26600, lr: 9.931481519679228e-05 2023-03-10 00:51:52,596 44k INFO Train Epoch: 56 [83%] 2023-03-10 00:51:52,596 44k INFO Losses: [2.6117959022521973, 2.0736007690429688, 7.773540019989014, 17.324932098388672, 1.5685585737228394], step: 26800, lr: 9.931481519679228e-05 2023-03-10 00:52:19,851 44k INFO ====> Epoch: 56, cost 171.81 s 2023-03-10 00:53:10,189 44k INFO Train Epoch: 57 [25%] 2023-03-10 00:53:10,190 44k INFO Losses: [2.593026876449585, 2.2518861293792725, 9.6329345703125, 22.174802780151367, 1.5273586511611938], step: 27000, lr: 9.930240084489267e-05 2023-03-10 00:53:13,069 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\G_27000.pth 2023-03-10 00:53:13,725 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\D_27000.pth 2023-03-10 00:53:14,366 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_24000.pth 2023-03-10 00:53:14,407 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_24000.pth 2023-03-10 00:54:21,355 44k INFO Train Epoch: 57 [67%] 2023-03-10 00:54:21,355 44k INFO Losses: [2.7753677368164062, 1.9239017963409424, 7.772047519683838, 18.454837799072266, 1.2699477672576904], step: 27200, lr: 9.930240084489267e-05 2023-03-10 00:55:15,371 44k INFO ====> Epoch: 57, cost 175.52 s 2023-03-10 00:55:38,511 44k INFO Train Epoch: 58 [8%] 2023-03-10 00:55:38,511 44k INFO Losses: [2.4450769424438477, 2.4737496376037598, 7.607985973358154, 19.5660343170166, 1.430598497390747], step: 27400, lr: 9.928998804478705e-05 2023-03-10 00:56:45,665 44k INFO Train Epoch: 58 [50%] 2023-03-10 00:56:45,665 44k INFO Losses: [2.4399118423461914, 2.476273775100708, 11.084647178649902, 22.028345108032227, 1.5207157135009766], step: 27600, lr: 9.928998804478705e-05 2023-03-10 00:57:53,004 44k INFO Train Epoch: 58 [92%] 2023-03-10 00:57:53,004 44k INFO Losses: [2.402250289916992, 2.2848896980285645, 8.98740005493164, 19.900625228881836, 1.143722653388977], step: 27800, lr: 9.928998804478705e-05 2023-03-10 00:58:06,832 44k INFO ====> Epoch: 58, cost 171.46 s 2023-03-10 00:59:10,565 44k INFO Train Epoch: 59 [33%] 2023-03-10 00:59:10,565 44k INFO Losses: [2.5950214862823486, 1.9849491119384766, 8.57032299041748, 16.760623931884766, 1.0661135911941528], step: 28000, lr: 9.927757679628145e-05 2023-03-10 00:59:13,467 44k INFO Saving model and optimizer state at iteration 59 to ./logs\44k\G_28000.pth 2023-03-10 00:59:14,123 44k INFO Saving model and optimizer state at iteration 59 to ./logs\44k\D_28000.pth 2023-03-10 00:59:14,774 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_25000.pth 2023-03-10 00:59:14,820 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_25000.pth 2023-03-10 01:00:21,733 44k INFO Train Epoch: 59 [75%] 2023-03-10 01:00:21,733 44k INFO Losses: [2.396491050720215, 2.4650373458862305, 11.211841583251953, 20.439992904663086, 0.9188471436500549], step: 28200, lr: 9.927757679628145e-05 2023-03-10 01:01:02,256 44k INFO ====> Epoch: 59, cost 175.42 s 2023-03-10 01:01:39,054 44k INFO Train Epoch: 60 [17%] 2023-03-10 01:01:39,054 44k INFO Losses: [2.65049409866333, 2.0966310501098633, 6.8534016609191895, 18.760677337646484, 1.1398675441741943], step: 28400, lr: 9.926516709918191e-05 2023-03-10 01:02:46,150 44k INFO Train Epoch: 60 [58%] 2023-03-10 01:02:46,150 44k INFO Losses: [2.438035726547241, 2.3348889350891113, 7.82972526550293, 18.5557804107666, 1.4146393537521362], step: 28600, lr: 9.926516709918191e-05 2023-03-10 01:03:53,789 44k INFO ====> Epoch: 60, cost 171.53 s 2023-03-10 01:04:03,375 44k INFO Train Epoch: 61 [0%] 2023-03-10 01:04:03,375 44k INFO Losses: [2.6219024658203125, 2.8293468952178955, 9.905922889709473, 18.843236923217773, 1.6379427909851074], step: 28800, lr: 9.92527589532945e-05 2023-03-10 01:05:11,111 44k INFO Train Epoch: 61 [42%] 2023-03-10 01:05:11,111 44k INFO Losses: [2.584650754928589, 2.2321972846984863, 6.915933609008789, 20.58075714111328, 0.8695663213729858], step: 29000, lr: 9.92527589532945e-05 2023-03-10 01:05:13,998 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\G_29000.pth 2023-03-10 01:05:14,657 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\D_29000.pth 2023-03-10 01:05:15,300 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_26000.pth 2023-03-10 01:05:15,347 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_26000.pth 2023-03-10 01:06:22,282 44k INFO Train Epoch: 61 [83%] 2023-03-10 01:06:22,282 44k INFO Losses: [2.747096538543701, 2.007692575454712, 6.0306878089904785, 15.881680488586426, 0.5645329356193542], step: 29200, lr: 9.92527589532945e-05 2023-03-10 01:06:50,105 44k INFO ====> Epoch: 61, cost 176.32 s 2023-03-10 01:07:40,532 44k INFO Train Epoch: 62 [25%] 2023-03-10 01:07:40,533 44k INFO Losses: [2.195636034011841, 2.7493903636932373, 8.355330467224121, 20.621910095214844, 1.1680060625076294], step: 29400, lr: 9.924035235842533e-05 2023-03-10 01:08:47,371 44k INFO Train Epoch: 62 [67%] 2023-03-10 01:08:47,371 44k INFO Losses: [2.3570761680603027, 2.258380651473999, 9.165984153747559, 18.321725845336914, 1.105709195137024], step: 29600, lr: 9.924035235842533e-05 2023-03-10 01:09:41,281 44k INFO ====> Epoch: 62, cost 171.18 s 2023-03-10 01:10:04,590 44k INFO Train Epoch: 63 [8%] 2023-03-10 01:10:04,590 44k INFO Losses: [2.534261465072632, 1.9493284225463867, 6.108685493469238, 17.727191925048828, 1.1462334394454956], step: 29800, lr: 9.922794731438052e-05 2023-03-10 01:11:11,867 44k INFO Train Epoch: 63 [50%] 2023-03-10 01:11:11,868 44k INFO Losses: [2.1466357707977295, 2.441751003265381, 11.623617172241211, 21.063779830932617, 1.1211217641830444], step: 30000, lr: 9.922794731438052e-05 2023-03-10 01:11:14,667 44k INFO Saving model and optimizer state at iteration 63 to ./logs\44k\G_30000.pth 2023-03-10 01:11:15,383 44k INFO Saving model and optimizer state at iteration 63 to ./logs\44k\D_30000.pth 2023-03-10 01:11:16,027 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_27000.pth 2023-03-10 01:11:16,069 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_27000.pth 2023-03-10 01:12:23,327 44k INFO Train Epoch: 63 [92%] 2023-03-10 01:12:23,327 44k INFO Losses: [2.320936679840088, 2.342817544937134, 11.871529579162598, 20.014780044555664, 1.2792378664016724], step: 30200, lr: 9.922794731438052e-05 2023-03-10 01:12:37,139 44k INFO ====> Epoch: 63, cost 175.86 s 2023-03-10 01:13:41,024 44k INFO Train Epoch: 64 [33%] 2023-03-10 01:13:41,024 44k INFO Losses: [2.411961078643799, 2.2892696857452393, 9.06312084197998, 19.208341598510742, 1.4149917364120483], step: 30400, lr: 9.921554382096622e-05 2023-03-10 01:14:48,092 44k INFO Train Epoch: 64 [75%] 2023-03-10 01:14:48,093 44k INFO Losses: [2.5061354637145996, 2.2133126258850098, 10.020742416381836, 19.887685775756836, 1.369322657585144], step: 30600, lr: 9.921554382096622e-05 2023-03-10 01:15:28,820 44k INFO ====> Epoch: 64, cost 171.68 s 2023-03-10 01:16:05,628 44k INFO Train Epoch: 65 [17%] 2023-03-10 01:16:05,628 44k INFO Losses: [2.4507529735565186, 2.1126151084899902, 13.933971405029297, 21.99840545654297, 1.1083381175994873], step: 30800, lr: 9.92031418779886e-05 2023-03-10 01:17:13,068 44k INFO Train Epoch: 65 [58%] 2023-03-10 01:17:13,069 44k INFO Losses: [2.437870502471924, 2.2035629749298096, 8.825096130371094, 18.92424774169922, 0.8800116181373596], step: 31000, lr: 9.92031418779886e-05 2023-03-10 01:17:15,928 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\G_31000.pth 2023-03-10 01:17:16,586 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\D_31000.pth 2023-03-10 01:17:17,229 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_28000.pth 2023-03-10 01:17:17,274 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_28000.pth 2023-03-10 01:18:24,893 44k INFO ====> Epoch: 65, cost 176.07 s 2023-03-10 01:18:34,460 44k INFO Train Epoch: 66 [0%] 2023-03-10 01:18:34,460 44k INFO Losses: [2.5885767936706543, 2.5255954265594482, 10.28254508972168, 19.816980361938477, 0.7707668542861938], step: 31200, lr: 9.919074148525384e-05 2023-03-10 01:19:42,281 44k INFO Train Epoch: 66 [42%] 2023-03-10 01:19:42,281 44k INFO Losses: [2.5087437629699707, 2.5391972064971924, 9.325486183166504, 20.830184936523438, 1.2771245241165161], step: 31400, lr: 9.919074148525384e-05 2023-03-10 01:20:49,507 44k INFO Train Epoch: 66 [83%] 2023-03-10 01:20:49,508 44k INFO Losses: [2.615724563598633, 2.324063301086426, 7.402339458465576, 15.822165489196777, 0.9446194171905518], step: 31600, lr: 9.919074148525384e-05 2023-03-10 01:21:16,869 44k INFO ====> Epoch: 66, cost 171.98 s 2023-03-10 01:22:07,246 44k INFO Train Epoch: 67 [25%] 2023-03-10 01:22:07,247 44k INFO Losses: [2.5019991397857666, 2.2032084465026855, 6.534299850463867, 20.32775115966797, 1.291076421737671], step: 31800, lr: 9.917834264256819e-05 2023-03-10 01:23:14,443 44k INFO Train Epoch: 67 [67%] 2023-03-10 01:23:14,443 44k INFO Losses: [2.4641759395599365, 2.3515498638153076, 7.272819995880127, 21.31987190246582, 1.2571789026260376], step: 32000, lr: 9.917834264256819e-05 2023-03-10 01:23:17,313 44k INFO Saving model and optimizer state at iteration 67 to ./logs\44k\G_32000.pth 2023-03-10 01:23:17,975 44k INFO Saving model and optimizer state at iteration 67 to ./logs\44k\D_32000.pth 2023-03-10 01:23:18,616 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_29000.pth 2023-03-10 01:23:18,658 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_29000.pth 2023-03-10 01:24:12,693 44k INFO ====> Epoch: 67, cost 175.82 s 2023-03-10 01:24:35,789 44k INFO Train Epoch: 68 [8%] 2023-03-10 01:24:35,789 44k INFO Losses: [2.3548192977905273, 2.2308290004730225, 9.776409149169922, 21.54472541809082, 1.1720070838928223], step: 32200, lr: 9.916594534973787e-05 2023-03-10 01:25:43,308 44k INFO Train Epoch: 68 [50%] 2023-03-10 01:25:43,308 44k INFO Losses: [2.7263193130493164, 2.142730951309204, 11.529346466064453, 23.22295379638672, 1.3292169570922852], step: 32400, lr: 9.916594534973787e-05 2023-03-10 01:26:50,995 44k INFO Train Epoch: 68 [92%] 2023-03-10 01:26:50,996 44k INFO Losses: [2.4169955253601074, 2.222931385040283, 8.180932998657227, 20.370588302612305, 1.267864465713501], step: 32600, lr: 9.916594534973787e-05 2023-03-10 01:27:04,867 44k INFO ====> Epoch: 68, cost 172.17 s 2023-03-10 01:28:08,938 44k INFO Train Epoch: 69 [33%] 2023-03-10 01:28:08,939 44k INFO Losses: [2.2154951095581055, 2.570366382598877, 11.885047912597656, 20.242231369018555, 1.0402462482452393], step: 32800, lr: 9.915354960656915e-05 2023-03-10 01:29:16,159 44k INFO Train Epoch: 69 [75%] 2023-03-10 01:29:16,159 44k INFO Losses: [2.5609591007232666, 2.1993024349212646, 7.085938930511475, 18.688953399658203, 1.3814340829849243], step: 33000, lr: 9.915354960656915e-05 2023-03-10 01:29:19,088 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\G_33000.pth 2023-03-10 01:29:19,750 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\D_33000.pth 2023-03-10 01:29:20,381 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_30000.pth 2023-03-10 01:29:20,421 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_30000.pth 2023-03-10 01:30:00,961 44k INFO ====> Epoch: 69, cost 176.09 s 2023-03-10 01:30:37,709 44k INFO Train Epoch: 70 [17%] 2023-03-10 01:30:37,709 44k INFO Losses: [2.489190101623535, 2.3778793811798096, 8.592529296875, 17.722820281982422, 1.3032668828964233], step: 33200, lr: 9.914115541286833e-05 2023-03-10 01:31:44,980 44k INFO Train Epoch: 70 [58%] 2023-03-10 01:31:44,981 44k INFO Losses: [2.6922950744628906, 2.2586448192596436, 7.873056888580322, 19.389326095581055, 1.0352110862731934], step: 33400, lr: 9.914115541286833e-05 2023-03-10 01:32:52,735 44k INFO ====> Epoch: 70, cost 171.77 s 2023-03-10 01:33:02,561 44k INFO Train Epoch: 71 [0%] 2023-03-10 01:33:02,562 44k INFO Losses: [2.6629834175109863, 2.2483139038085938, 6.274883270263672, 21.076974868774414, 1.1676338911056519], step: 33600, lr: 9.912876276844171e-05 2023-03-10 01:34:10,718 44k INFO Train Epoch: 71 [42%] 2023-03-10 01:34:10,719 44k INFO Losses: [2.40325665473938, 2.2859597206115723, 8.061351776123047, 19.120229721069336, 1.45626699924469], step: 33800, lr: 9.912876276844171e-05 2023-03-10 01:35:18,233 44k INFO Train Epoch: 71 [83%] 2023-03-10 01:35:18,233 44k INFO Losses: [2.4639079570770264, 2.7155954837799072, 6.236012935638428, 13.29322338104248, 1.030239462852478], step: 34000, lr: 9.912876276844171e-05 2023-03-10 01:35:21,108 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\G_34000.pth 2023-03-10 01:35:21,817 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\D_34000.pth 2023-03-10 01:35:22,457 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_31000.pth 2023-03-10 01:35:22,502 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_31000.pth 2023-03-10 01:35:49,743 44k INFO ====> Epoch: 71, cost 177.01 s 2023-03-10 01:36:40,245 44k INFO Train Epoch: 72 [25%] 2023-03-10 01:36:40,245 44k INFO Losses: [2.442927837371826, 2.3261184692382812, 7.148874282836914, 20.592294692993164, 1.2950329780578613], step: 34200, lr: 9.911637167309565e-05 2023-03-10 01:37:47,534 44k INFO Train Epoch: 72 [67%] 2023-03-10 01:37:47,535 44k INFO Losses: [2.361867666244507, 2.5377039909362793, 9.16838264465332, 20.23963737487793, 1.267628788948059], step: 34400, lr: 9.911637167309565e-05 2023-03-10 01:38:41,790 44k INFO ====> Epoch: 72, cost 172.05 s 2023-03-10 01:39:05,113 44k INFO Train Epoch: 73 [8%] 2023-03-10 01:39:05,113 44k INFO Losses: [2.377958297729492, 2.407437324523926, 7.49679708480835, 19.603193283081055, 1.3932220935821533], step: 34600, lr: 9.910398212663652e-05 2023-03-10 01:40:12,752 44k INFO Train Epoch: 73 [50%] 2023-03-10 01:40:12,752 44k INFO Losses: [2.8247716426849365, 1.983457326889038, 7.667083740234375, 18.327037811279297, 1.186850666999817], step: 34800, lr: 9.910398212663652e-05 2023-03-10 01:41:20,634 44k INFO Train Epoch: 73 [92%] 2023-03-10 01:41:20,634 44k INFO Losses: [2.676158905029297, 2.190196990966797, 8.072098731994629, 20.96558952331543, 1.487762451171875], step: 35000, lr: 9.910398212663652e-05 2023-03-10 01:41:23,452 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\G_35000.pth 2023-03-10 01:41:24,107 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\D_35000.pth 2023-03-10 01:41:24,753 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_32000.pth 2023-03-10 01:41:24,796 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_32000.pth 2023-03-10 01:41:38,571 44k INFO ====> Epoch: 73, cost 176.78 s 2023-03-10 01:42:42,735 44k INFO Train Epoch: 74 [33%] 2023-03-10 01:42:42,735 44k INFO Losses: [2.4282281398773193, 2.2438385486602783, 7.792109966278076, 16.740440368652344, 1.418903112411499], step: 35200, lr: 9.909159412887068e-05 2023-03-10 01:43:50,137 44k INFO Train Epoch: 74 [75%] 2023-03-10 01:43:50,137 44k INFO Losses: [2.3510305881500244, 2.086587429046631, 8.392033576965332, 19.2855224609375, 1.3113199472427368], step: 35400, lr: 9.909159412887068e-05 2023-03-10 01:44:31,072 44k INFO ====> Epoch: 74, cost 172.50 s 2023-03-10 01:45:08,139 44k INFO Train Epoch: 75 [17%] 2023-03-10 01:45:08,140 44k INFO Losses: [2.467280149459839, 2.1546030044555664, 10.749366760253906, 20.11714744567871, 1.3803324699401855], step: 35600, lr: 9.907920767960457e-05 2023-03-10 01:46:15,495 44k INFO Train Epoch: 75 [58%] 2023-03-10 01:46:15,496 44k INFO Losses: [2.525101900100708, 2.134913444519043, 9.422788619995117, 19.940322875976562, 0.5853258371353149], step: 35800, lr: 9.907920767960457e-05 2023-03-10 01:47:23,299 44k INFO ====> Epoch: 75, cost 172.23 s 2023-03-10 01:47:32,861 44k INFO Train Epoch: 76 [0%] 2023-03-10 01:47:32,862 44k INFO Losses: [2.5574803352355957, 2.750986337661743, 11.486483573913574, 19.715124130249023, 1.2036763429641724], step: 36000, lr: 9.906682277864462e-05 2023-03-10 01:47:35,699 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\G_36000.pth 2023-03-10 01:47:36,413 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\D_36000.pth 2023-03-10 01:47:37,054 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_33000.pth 2023-03-10 01:47:37,095 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_33000.pth 2023-03-10 01:48:45,080 44k INFO Train Epoch: 76 [42%] 2023-03-10 01:48:45,081 44k INFO Losses: [2.5309414863586426, 2.177687644958496, 9.103509902954102, 21.4112606048584, 1.5377999544143677], step: 36200, lr: 9.906682277864462e-05 2023-03-10 01:49:52,743 44k INFO Train Epoch: 76 [83%] 2023-03-10 01:49:52,743 44k INFO Losses: [2.428403854370117, 2.3139281272888184, 8.578767776489258, 17.937519073486328, 0.9283303022384644], step: 36400, lr: 9.906682277864462e-05 2023-03-10 01:50:20,193 44k INFO ====> Epoch: 76, cost 176.89 s 2023-03-10 01:51:10,799 44k INFO Train Epoch: 77 [25%] 2023-03-10 01:51:10,799 44k INFO Losses: [2.14625883102417, 2.4239108562469482, 8.833675384521484, 22.046072006225586, 1.6874737739562988], step: 36600, lr: 9.905443942579728e-05 2023-03-10 01:52:18,271 44k INFO Train Epoch: 77 [67%] 2023-03-10 01:52:18,271 44k INFO Losses: [2.4782447814941406, 2.2567358016967773, 9.361751556396484, 19.60032844543457, 1.0126616954803467], step: 36800, lr: 9.905443942579728e-05 2023-03-10 01:53:12,930 44k INFO ====> Epoch: 77, cost 172.74 s 2023-03-10 01:53:36,252 44k INFO Train Epoch: 78 [8%] 2023-03-10 01:53:36,253 44k INFO Losses: [2.397923231124878, 2.278186082839966, 9.692902565002441, 20.617475509643555, 1.501961350440979], step: 37000, lr: 9.904205762086905e-05 2023-03-10 01:53:39,120 44k INFO Saving model and optimizer state at iteration 78 to ./logs\44k\G_37000.pth 2023-03-10 01:53:39,783 44k INFO Saving model and optimizer state at iteration 78 to ./logs\44k\D_37000.pth 2023-03-10 01:53:40,426 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_34000.pth 2023-03-10 01:53:40,470 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_34000.pth 2023-03-10 01:54:47,992 44k INFO Train Epoch: 78 [50%] 2023-03-10 01:54:47,993 44k INFO Losses: [2.231703996658325, 2.5518651008605957, 13.325697898864746, 23.593231201171875, 1.0728912353515625], step: 37200, lr: 9.904205762086905e-05 2023-03-10 01:55:55,931 44k INFO Train Epoch: 78 [92%] 2023-03-10 01:55:55,931 44k INFO Losses: [2.4122819900512695, 2.2260191440582275, 10.02711009979248, 19.738201141357422, 1.4223968982696533], step: 37400, lr: 9.904205762086905e-05 2023-03-10 01:56:09,855 44k INFO ====> Epoch: 78, cost 176.92 s 2023-03-10 01:57:14,053 44k INFO Train Epoch: 79 [33%] 2023-03-10 01:57:14,053 44k INFO Losses: [2.320143461227417, 2.3592755794525146, 9.966156005859375, 19.560787200927734, 0.8694154620170593], step: 37600, lr: 9.902967736366644e-05 2023-03-10 01:58:21,521 44k INFO Train Epoch: 79 [75%] 2023-03-10 01:58:21,522 44k INFO Losses: [2.4515087604522705, 2.297215700149536, 10.873857498168945, 19.608396530151367, 0.8475047945976257], step: 37800, lr: 9.902967736366644e-05 2023-03-10 01:59:02,291 44k INFO ====> Epoch: 79, cost 172.44 s 2023-03-10 01:59:39,216 44k INFO Train Epoch: 80 [17%] 2023-03-10 01:59:39,216 44k INFO Losses: [2.621845245361328, 2.218869209289551, 5.727862358093262, 16.931283950805664, 0.8260491490364075], step: 38000, lr: 9.901729865399597e-05 2023-03-10 01:59:42,060 44k INFO Saving model and optimizer state at iteration 80 to ./logs\44k\G_38000.pth 2023-03-10 01:59:42,727 44k INFO Saving model and optimizer state at iteration 80 to ./logs\44k\D_38000.pth 2023-03-10 01:59:43,375 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_35000.pth 2023-03-10 01:59:43,420 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_35000.pth 2023-03-10 02:00:50,683 44k INFO Train Epoch: 80 [58%] 2023-03-10 02:00:50,683 44k INFO Losses: [2.549434185028076, 1.9785349369049072, 11.415558815002441, 20.010009765625, 0.5362642407417297], step: 38200, lr: 9.901729865399597e-05 2023-03-10 02:01:58,910 44k INFO ====> Epoch: 80, cost 176.62 s 2023-03-10 02:02:08,366 44k INFO Train Epoch: 81 [0%] 2023-03-10 02:02:08,367 44k INFO Losses: [2.4345498085021973, 2.3334052562713623, 9.885810852050781, 20.113014221191406, 1.0990657806396484], step: 38400, lr: 9.900492149166423e-05 2023-03-10 02:03:16,743 44k INFO Train Epoch: 81 [42%] 2023-03-10 02:03:16,743 44k INFO Losses: [2.421642541885376, 2.2066826820373535, 9.998400688171387, 20.750362396240234, 1.4486732482910156], step: 38600, lr: 9.900492149166423e-05 2023-03-10 02:04:24,112 44k INFO Train Epoch: 81 [83%] 2023-03-10 02:04:24,112 44k INFO Losses: [2.84961199760437, 1.9679622650146484, 5.906168460845947, 13.561853408813477, 1.0935133695602417], step: 38800, lr: 9.900492149166423e-05 2023-03-10 02:04:51,495 44k INFO ====> Epoch: 81, cost 172.59 s 2023-03-10 02:05:42,240 44k INFO Train Epoch: 82 [25%] 2023-03-10 02:05:42,241 44k INFO Losses: [2.660776138305664, 2.1231043338775635, 5.990094184875488, 16.123315811157227, 1.2103991508483887], step: 39000, lr: 9.899254587647776e-05 2023-03-10 02:05:45,115 44k INFO Saving model and optimizer state at iteration 82 to ./logs\44k\G_39000.pth 2023-03-10 02:05:45,796 44k INFO Saving model and optimizer state at iteration 82 to ./logs\44k\D_39000.pth 2023-03-10 02:05:46,430 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_36000.pth 2023-03-10 02:05:46,472 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_36000.pth 2023-03-10 02:06:53,820 44k INFO Train Epoch: 82 [67%] 2023-03-10 02:06:53,820 44k INFO Losses: [2.469160318374634, 2.2834420204162598, 6.774503707885742, 18.82451057434082, 1.4767872095108032], step: 39200, lr: 9.899254587647776e-05 2023-03-10 02:07:48,450 44k INFO ====> Epoch: 82, cost 176.96 s 2023-03-10 02:08:11,793 44k INFO Train Epoch: 83 [8%] 2023-03-10 02:08:11,793 44k INFO Losses: [2.364262580871582, 2.6441338062286377, 12.407462120056152, 22.07462501525879, 1.457845687866211], step: 39400, lr: 9.89801718082432e-05 2023-03-10 02:09:19,794 44k INFO Train Epoch: 83 [50%] 2023-03-10 02:09:19,795 44k INFO Losses: [2.25032901763916, 2.2811808586120605, 11.213237762451172, 19.5427188873291, 1.0862118005752563], step: 39600, lr: 9.89801718082432e-05 2023-03-10 02:10:28,014 44k INFO Train Epoch: 83 [92%] 2023-03-10 02:10:28,014 44k INFO Losses: [2.5107741355895996, 2.2757210731506348, 8.738912582397461, 20.884809494018555, 1.2771507501602173], step: 39800, lr: 9.89801718082432e-05 2023-03-10 02:10:41,922 44k INFO ====> Epoch: 83, cost 173.47 s 2023-03-10 02:11:46,230 44k INFO Train Epoch: 84 [33%] 2023-03-10 02:11:46,231 44k INFO Losses: [2.48128080368042, 2.08335280418396, 8.903714179992676, 18.561168670654297, 1.1124554872512817], step: 40000, lr: 9.896779928676716e-05 2023-03-10 02:11:49,122 44k INFO Saving model and optimizer state at iteration 84 to ./logs\44k\G_40000.pth 2023-03-10 02:11:49,789 44k INFO Saving model and optimizer state at iteration 84 to ./logs\44k\D_40000.pth 2023-03-10 02:11:50,426 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_37000.pth 2023-03-10 02:11:50,472 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_37000.pth 2023-03-10 02:12:58,092 44k INFO Train Epoch: 84 [75%] 2023-03-10 02:12:58,092 44k INFO Losses: [2.395432472229004, 2.1394405364990234, 10.208571434020996, 18.325084686279297, 1.2203572988510132], step: 40200, lr: 9.896779928676716e-05 2023-03-10 02:13:39,245 44k INFO ====> Epoch: 84, cost 177.32 s 2023-03-10 02:14:16,173 44k INFO Train Epoch: 85 [17%] 2023-03-10 02:14:16,174 44k INFO Losses: [2.69671368598938, 2.345823287963867, 7.80781888961792, 19.50173568725586, 1.3170828819274902], step: 40400, lr: 9.895542831185631e-05 2023-03-10 02:15:24,166 44k INFO Train Epoch: 85 [58%] 2023-03-10 02:15:24,166 44k INFO Losses: [2.6812539100646973, 1.9495866298675537, 9.164358139038086, 19.448680877685547, 1.007983684539795], step: 40600, lr: 9.895542831185631e-05 2023-03-10 02:16:32,383 44k INFO ====> Epoch: 85, cost 173.14 s 2023-03-10 02:16:42,069 44k INFO Train Epoch: 86 [0%] 2023-03-10 02:16:42,070 44k INFO Losses: [2.4388413429260254, 2.4413211345672607, 7.891645431518555, 16.501232147216797, 1.5631059408187866], step: 40800, lr: 9.894305888331732e-05 2023-03-10 02:17:50,311 44k INFO Train Epoch: 86 [42%] 2023-03-10 02:17:50,311 44k INFO Losses: [2.3541574478149414, 2.366830587387085, 7.922110557556152, 17.2901554107666, 1.2641910314559937], step: 41000, lr: 9.894305888331732e-05 2023-03-10 02:17:53,218 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\G_41000.pth 2023-03-10 02:17:53,884 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\D_41000.pth 2023-03-10 02:17:54,524 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_38000.pth 2023-03-10 02:17:54,554 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_38000.pth 2023-03-10 02:19:02,397 44k INFO Train Epoch: 86 [83%] 2023-03-10 02:19:02,397 44k INFO Losses: [2.4955737590789795, 2.472790479660034, 9.767871856689453, 18.626310348510742, 1.079459547996521], step: 41200, lr: 9.894305888331732e-05 2023-03-10 02:19:30,023 44k INFO ====> Epoch: 86, cost 177.64 s 2023-03-10 02:20:20,876 44k INFO Train Epoch: 87 [25%] 2023-03-10 02:20:20,876 44k INFO Losses: [2.48044753074646, 2.5758087635040283, 7.957856178283691, 18.676929473876953, 1.226336121559143], step: 41400, lr: 9.89306910009569e-05 2023-03-10 02:21:28,785 44k INFO Train Epoch: 87 [67%] 2023-03-10 02:21:28,786 44k INFO Losses: [2.4272122383117676, 2.3490002155303955, 11.180319786071777, 19.71648597717285, 1.3572531938552856], step: 41600, lr: 9.89306910009569e-05 2023-03-10 02:22:23,748 44k INFO ====> Epoch: 87, cost 173.72 s 2023-03-10 02:22:47,143 44k INFO Train Epoch: 88 [8%] 2023-03-10 02:22:47,143 44k INFO Losses: [2.5656580924987793, 2.608222246170044, 7.74412202835083, 19.378009796142578, 1.027908444404602], step: 41800, lr: 9.891832466458178e-05 2023-03-10 02:23:55,332 44k INFO Train Epoch: 88 [50%] 2023-03-10 02:23:55,333 44k INFO Losses: [2.2740159034729004, 2.479419708251953, 8.435369491577148, 20.661813735961914, 1.2032458782196045], step: 42000, lr: 9.891832466458178e-05 2023-03-10 02:23:58,155 44k INFO Saving model and optimizer state at iteration 88 to ./logs\44k\G_42000.pth 2023-03-10 02:23:58,815 44k INFO Saving model and optimizer state at iteration 88 to ./logs\44k\D_42000.pth 2023-03-10 02:23:59,465 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_39000.pth 2023-03-10 02:23:59,500 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_39000.pth 2023-03-10 02:25:07,775 44k INFO Train Epoch: 88 [92%] 2023-03-10 02:25:07,775 44k INFO Losses: [2.510584831237793, 2.3520522117614746, 9.499608993530273, 20.018051147460938, 1.5329962968826294], step: 42200, lr: 9.891832466458178e-05 2023-03-10 02:25:21,791 44k INFO ====> Epoch: 88, cost 178.04 s 2023-03-10 02:26:26,238 44k INFO Train Epoch: 89 [33%] 2023-03-10 02:26:26,238 44k INFO Losses: [2.6319351196289062, 2.1740670204162598, 8.002816200256348, 15.51085090637207, 0.9273039698600769], step: 42400, lr: 9.89059598739987e-05 2023-03-10 02:27:34,175 44k INFO Train Epoch: 89 [75%] 2023-03-10 02:27:34,176 44k INFO Losses: [2.442526340484619, 2.130934000015259, 8.880859375, 18.535947799682617, 1.3197972774505615], step: 42600, lr: 9.89059598739987e-05 2023-03-10 02:28:15,448 44k INFO ====> Epoch: 89, cost 173.66 s 2023-03-10 02:28:52,653 44k INFO Train Epoch: 90 [17%] 2023-03-10 02:28:52,653 44k INFO Losses: [2.368036985397339, 2.2541091442108154, 8.655840873718262, 16.256921768188477, 0.9659361243247986], step: 42800, lr: 9.889359662901445e-05 2023-03-10 02:30:00,733 44k INFO Train Epoch: 90 [58%] 2023-03-10 02:30:00,733 44k INFO Losses: [2.5140435695648193, 2.1225194931030273, 10.145703315734863, 19.713003158569336, 1.1034414768218994], step: 43000, lr: 9.889359662901445e-05 2023-03-10 02:30:03,588 44k INFO Saving model and optimizer state at iteration 90 to ./logs\44k\G_43000.pth 2023-03-10 02:30:04,251 44k INFO Saving model and optimizer state at iteration 90 to ./logs\44k\D_43000.pth 2023-03-10 02:30:04,896 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_40000.pth 2023-03-10 02:30:04,940 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_40000.pth 2023-03-10 02:31:13,486 44k INFO ====> Epoch: 90, cost 178.04 s 2023-03-10 02:31:23,157 44k INFO Train Epoch: 91 [0%] 2023-03-10 02:31:23,157 44k INFO Losses: [2.6231162548065186, 2.3267500400543213, 9.666067123413086, 20.483911514282227, 1.411985993385315], step: 43200, lr: 9.888123492943583e-05 2023-03-10 02:32:31,848 44k INFO Train Epoch: 91 [42%] 2023-03-10 02:32:31,848 44k INFO Losses: [2.5606441497802734, 2.2295196056365967, 8.491843223571777, 19.538007736206055, 1.3592729568481445], step: 43400, lr: 9.888123492943583e-05 2023-03-10 02:33:40,236 44k INFO Train Epoch: 91 [83%] 2023-03-10 02:33:40,236 44k INFO Losses: [2.499122142791748, 2.3185603618621826, 10.27350902557373, 19.33860206604004, 1.1391656398773193], step: 43600, lr: 9.888123492943583e-05 2023-03-10 02:34:07,991 44k INFO ====> Epoch: 91, cost 174.51 s 2023-03-10 02:34:58,832 44k INFO Train Epoch: 92 [25%] 2023-03-10 02:34:58,832 44k INFO Losses: [2.6315338611602783, 2.0515692234039307, 6.57252836227417, 16.995559692382812, 1.818975806236267], step: 43800, lr: 9.886887477506964e-05 2023-03-10 02:36:07,030 44k INFO Train Epoch: 92 [67%] 2023-03-10 02:36:07,030 44k INFO Losses: [2.626278877258301, 2.1475539207458496, 11.777432441711426, 20.90768814086914, 1.6315051317214966], step: 44000, lr: 9.886887477506964e-05 2023-03-10 02:36:09,870 44k INFO Saving model and optimizer state at iteration 92 to ./logs\44k\G_44000.pth 2023-03-10 02:36:10,546 44k INFO Saving model and optimizer state at iteration 92 to ./logs\44k\D_44000.pth 2023-03-10 02:36:11,182 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_41000.pth 2023-03-10 02:36:11,211 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_41000.pth 2023-03-10 02:37:06,191 44k INFO ====> Epoch: 92, cost 178.20 s 2023-03-10 02:37:29,435 44k INFO Train Epoch: 93 [8%] 2023-03-10 02:37:29,435 44k INFO Losses: [2.3690264225006104, 2.605854034423828, 11.098400115966797, 18.885906219482422, 1.5265321731567383], step: 44200, lr: 9.885651616572276e-05 2023-03-10 02:38:37,631 44k INFO Train Epoch: 93 [50%] 2023-03-10 02:38:37,631 44k INFO Losses: [2.06398344039917, 2.863292932510376, 12.592281341552734, 20.93813705444336, 1.0604581832885742], step: 44400, lr: 9.885651616572276e-05 2023-03-10 02:39:46,036 44k INFO Train Epoch: 93 [92%] 2023-03-10 02:39:46,036 44k INFO Losses: [2.446136474609375, 2.138134479522705, 9.516739845275879, 20.067073822021484, 1.145005226135254], step: 44600, lr: 9.885651616572276e-05 2023-03-10 02:40:00,039 44k INFO ====> Epoch: 93, cost 173.85 s 2023-03-10 02:41:04,489 44k INFO Train Epoch: 94 [33%] 2023-03-10 02:41:04,490 44k INFO Losses: [2.616746425628662, 2.1049997806549072, 10.677129745483398, 19.70547866821289, 1.3173761367797852], step: 44800, lr: 9.884415910120204e-05 2023-03-10 02:42:12,886 44k INFO Train Epoch: 94 [75%] 2023-03-10 02:42:12,886 44k INFO Losses: [2.3231070041656494, 2.3727831840515137, 10.548434257507324, 20.124788284301758, 1.2973746061325073], step: 45000, lr: 9.884415910120204e-05 2023-03-10 02:42:15,760 44k INFO Saving model and optimizer state at iteration 94 to ./logs\44k\G_45000.pth 2023-03-10 02:42:16,432 44k INFO Saving model and optimizer state at iteration 94 to ./logs\44k\D_45000.pth 2023-03-10 02:42:17,076 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_42000.pth 2023-03-10 02:42:17,120 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_42000.pth 2023-03-10 02:42:58,370 44k INFO ====> Epoch: 94, cost 178.33 s 2023-03-10 02:43:35,738 44k INFO Train Epoch: 95 [17%] 2023-03-10 02:43:35,739 44k INFO Losses: [2.409135103225708, 2.6231465339660645, 8.928082466125488, 18.22141456604004, 1.2995362281799316], step: 45200, lr: 9.883180358131438e-05 2023-03-10 02:44:44,443 44k INFO Train Epoch: 95 [58%] 2023-03-10 02:44:44,444 44k INFO Losses: [2.667802333831787, 2.135019540786743, 7.641706466674805, 19.74456214904785, 1.1708847284317017], step: 45400, lr: 9.883180358131438e-05 2023-03-10 02:45:54,320 44k INFO ====> Epoch: 95, cost 175.95 s 2023-03-10 02:46:04,052 44k INFO Train Epoch: 96 [0%] 2023-03-10 02:46:04,053 44k INFO Losses: [2.4180850982666016, 2.2143120765686035, 9.62486457824707, 19.350177764892578, 1.1389224529266357], step: 45600, lr: 9.881944960586671e-05 2023-03-10 02:47:13,669 44k INFO Train Epoch: 96 [42%] 2023-03-10 02:47:13,669 44k INFO Losses: [2.5621836185455322, 2.185269594192505, 8.74885368347168, 18.95093536376953, 1.366514801979065], step: 45800, lr: 9.881944960586671e-05 2023-03-10 02:48:22,431 44k INFO Train Epoch: 96 [83%] 2023-03-10 02:48:22,431 44k INFO Losses: [2.735117197036743, 2.094013214111328, 5.737760543823242, 16.01725196838379, 1.3262369632720947], step: 46000, lr: 9.881944960586671e-05 2023-03-10 02:48:25,356 44k INFO Saving model and optimizer state at iteration 96 to ./logs\44k\G_46000.pth 2023-03-10 02:48:26,026 44k INFO Saving model and optimizer state at iteration 96 to ./logs\44k\D_46000.pth 2023-03-10 02:48:26,684 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_43000.pth 2023-03-10 02:48:26,723 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_43000.pth 2023-03-10 02:48:54,290 44k INFO ====> Epoch: 96, cost 179.97 s 2023-03-10 02:49:45,401 44k INFO Train Epoch: 97 [25%] 2023-03-10 02:49:45,401 44k INFO Losses: [2.4873476028442383, 2.1946587562561035, 8.176788330078125, 21.95309829711914, 1.4926960468292236], step: 46200, lr: 9.880709717466598e-05 2023-03-10 02:50:53,651 44k INFO Train Epoch: 97 [67%] 2023-03-10 02:50:53,651 44k INFO Losses: [2.4239509105682373, 2.3146116733551025, 8.190256118774414, 20.13637924194336, 1.6566094160079956], step: 46400, lr: 9.880709717466598e-05 2023-03-10 02:51:49,166 44k INFO ====> Epoch: 97, cost 174.88 s 2023-03-10 02:52:12,596 44k INFO Train Epoch: 98 [8%] 2023-03-10 02:52:12,596 44k INFO Losses: [2.55031681060791, 2.106630325317383, 9.410569190979004, 19.512451171875, 0.8256558179855347], step: 46600, lr: 9.879474628751914e-05 2023-03-10 02:53:21,071 44k INFO Train Epoch: 98 [50%] 2023-03-10 02:53:21,071 44k INFO Losses: [2.265395402908325, 2.331608772277832, 12.848119735717773, 23.142833709716797, 1.2709184885025024], step: 46800, lr: 9.879474628751914e-05 2023-03-10 02:54:29,657 44k INFO Train Epoch: 98 [92%] 2023-03-10 02:54:29,657 44k INFO Losses: [2.5988707542419434, 2.24489426612854, 6.459345817565918, 17.870325088500977, 1.1991639137268066], step: 47000, lr: 9.879474628751914e-05 2023-03-10 02:54:32,666 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\G_47000.pth 2023-03-10 02:54:33,353 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\D_47000.pth 2023-03-10 02:54:34,065 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_44000.pth 2023-03-10 02:54:34,104 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_44000.pth 2023-03-10 02:54:48,050 44k INFO ====> Epoch: 98, cost 178.88 s 2023-03-10 02:55:52,660 44k INFO Train Epoch: 99 [33%] 2023-03-10 02:55:52,660 44k INFO Losses: [2.360546350479126, 2.0910091400146484, 11.3775053024292, 19.653621673583984, 0.7903617024421692], step: 47200, lr: 9.87823969442332e-05 2023-03-10 02:57:01,311 44k INFO Train Epoch: 99 [75%] 2023-03-10 02:57:01,312 44k INFO Losses: [2.281738519668579, 2.0748977661132812, 14.552555084228516, 22.295255661010742, 1.0403858423233032], step: 47400, lr: 9.87823969442332e-05 2023-03-10 02:57:42,889 44k INFO ====> Epoch: 99, cost 174.84 s 2023-03-10 02:58:20,233 44k INFO Train Epoch: 100 [17%] 2023-03-10 02:58:20,233 44k INFO Losses: [2.4321982860565186, 2.2581112384796143, 8.384130477905273, 18.8560733795166, 1.0937718152999878], step: 47600, lr: 9.877004914461517e-05 2023-03-10 02:59:28,688 44k INFO Train Epoch: 100 [58%] 2023-03-10 02:59:28,688 44k INFO Losses: [2.4722695350646973, 2.299949884414673, 9.094573974609375, 19.8353214263916, 0.8555085062980652], step: 47800, lr: 9.877004914461517e-05 2023-03-10 03:00:37,940 44k INFO ====> Epoch: 100, cost 175.05 s 2023-03-10 20:35:29,331 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 130, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': 
'1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'aisa': 0}, 'model_dir': './logs\\44k'} 2023-03-10 20:35:29,364 44k WARNING git hash values are different. cea6df30(saved) != cc5b3bbe(current) 2023-03-10 20:35:31,775 44k INFO Loaded checkpoint './logs\44k\G_47000.pth' (iteration 98) 2023-03-10 20:35:32,138 44k INFO Loaded checkpoint './logs\44k\D_47000.pth' (iteration 98) 2023-03-10 20:36:02,858 44k INFO Train Epoch: 98 [8%] 2023-03-10 20:36:02,858 44k INFO Losses: [2.5353682041168213, 2.407303810119629, 9.827933311462402, 21.578031539916992, 1.420819640159607], step: 46600, lr: 9.87823969442332e-05 2023-03-10 20:37:19,179 44k INFO Train Epoch: 98 [50%] 2023-03-10 20:37:19,179 44k INFO Losses: [2.1962313652038574, 2.842005729675293, 15.469377517700195, 24.453657150268555, 1.4245936870574951], step: 46800, lr: 9.87823969442332e-05 2023-03-10 20:38:40,142 44k INFO Train Epoch: 98 [92%] 2023-03-10 20:38:40,143 44k INFO Losses: [2.6426429748535156, 2.1265904903411865, 8.68488883972168, 19.841407775878906, 1.4562296867370605], step: 47000, lr: 9.87823969442332e-05 2023-03-10 20:38:43,936 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\G_47000.pth 2023-03-10 20:38:44,670 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\D_47000.pth 2023-03-10 20:39:03,499 44k INFO ====> Epoch: 98, cost 214.17 s 2023-03-10 20:40:09,325 44k INFO Train Epoch: 99 [33%] 2023-03-10 20:40:09,325 44k INFO Losses: [2.7206225395202637, 2.06782603263855, 9.90842342376709, 16.023006439208984, 0.8731637597084045], step: 47200, lr: 9.877004914461517e-05 2023-03-10 20:41:19,344 44k INFO Train Epoch: 99 [75%] 2023-03-10 20:41:19,344 44k INFO Losses: [2.9099414348602295, 
1.8907878398895264, 9.848502159118652, 19.099763870239258, 0.8655398488044739], step: 47400, lr: 9.877004914461517e-05 2023-03-10 20:42:04,453 44k INFO ====> Epoch: 99, cost 180.95 s 2023-03-10 20:42:45,610 44k INFO Train Epoch: 100 [17%] 2023-03-10 20:42:45,660 44k INFO Losses: [2.334908962249756, 2.1345458030700684, 11.670673370361328, 21.05769920349121, 1.3268905878067017], step: 47600, lr: 9.875770288847208e-05 2023-03-10 20:43:59,490 44k INFO Train Epoch: 100 [58%] 2023-03-10 20:43:59,491 44k INFO Losses: [2.5611777305603027, 2.2350287437438965, 9.660056114196777, 18.508785247802734, 1.0775189399719238], step: 47800, lr: 9.875770288847208e-05 2023-03-10 20:45:08,413 44k INFO ====> Epoch: 100, cost 183.96 s 2023-03-10 20:45:18,485 44k INFO Train Epoch: 101 [0%] 2023-03-10 20:45:18,486 44k INFO Losses: [2.6127429008483887, 2.197486400604248, 8.127498626708984, 19.76656150817871, 0.9327872395515442], step: 48000, lr: 9.874535817561101e-05 2023-03-10 20:45:21,461 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\G_48000.pth 2023-03-10 20:45:22,200 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\D_48000.pth 2023-03-10 20:45:22,889 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_45000.pth 2023-03-10 20:45:22,890 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_45000.pth 2023-03-10 20:46:32,489 44k INFO Train Epoch: 101 [42%] 2023-03-10 20:46:32,490 44k INFO Losses: [2.455129861831665, 2.3377742767333984, 11.811349868774414, 21.679683685302734, 1.3665108680725098], step: 48200, lr: 9.874535817561101e-05 2023-03-10 20:47:41,197 44k INFO Train Epoch: 101 [83%] 2023-03-10 20:47:41,198 44k INFO Losses: [2.470296859741211, 2.341890811920166, 9.399015426635742, 17.43128204345703, 1.2156797647476196], step: 48400, lr: 9.874535817561101e-05 2023-03-10 20:48:09,263 44k INFO ====> Epoch: 101, cost 180.85 s 2023-03-10 20:49:01,158 44k INFO Train Epoch: 102 [25%] 2023-03-10 20:49:01,158 44k INFO Losses: [2.3319873809814453, 2.324233055114746, 9.67007064819336, 24.918399810791016, 0.9685816764831543], step: 48600, lr: 9.873301500583906e-05 2023-03-10 20:50:09,867 44k INFO Train Epoch: 102 [67%] 2023-03-10 20:50:09,867 44k INFO Losses: [2.3446614742279053, 2.4207370281219482, 7.530040264129639, 20.79374122619629, 1.0588432550430298], step: 48800, lr: 9.873301500583906e-05 2023-03-10 20:51:05,474 44k INFO ====> Epoch: 102, cost 176.21 s 2023-03-10 20:51:29,008 44k INFO Train Epoch: 103 [8%] 2023-03-10 20:51:29,008 44k INFO Losses: [2.3435935974121094, 2.6919546127319336, 11.258846282958984, 20.273414611816406, 0.9719326496124268], step: 49000, lr: 9.872067337896332e-05 2023-03-10 20:51:31,939 44k INFO Saving model and optimizer state at iteration 103 to ./logs\44k\G_49000.pth 2023-03-10 20:51:32,669 44k INFO Saving model and optimizer state at iteration 103 to ./logs\44k\D_49000.pth 2023-03-10 20:51:33,304 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_46000.pth 2023-03-10 20:51:33,304 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_46000.pth 2023-03-10 20:52:40,414 44k INFO Train Epoch: 103 [50%] 2023-03-10 20:52:40,414 44k INFO Losses: [2.4688682556152344, 2.48207950592041, 11.951579093933105, 22.14127540588379, 1.2969614267349243], step: 49200, lr: 9.872067337896332e-05 2023-03-10 20:53:47,701 44k INFO Train Epoch: 103 [92%] 2023-03-10 20:53:47,702 44k INFO Losses: [2.2953248023986816, 2.2772092819213867, 10.907705307006836, 19.548583984375, 1.156449556350708], step: 49400, lr: 9.872067337896332e-05 2023-03-10 20:54:01,511 44k INFO ====> Epoch: 103, cost 176.04 s 2023-03-10 20:55:05,400 44k INFO Train Epoch: 104 [33%] 2023-03-10 20:55:05,401 44k INFO Losses: [2.5428290367126465, 2.2322463989257812, 10.12808895111084, 16.4022159576416, 1.1398893594741821], step: 49600, lr: 9.870833329479095e-05 2023-03-10 20:56:12,464 44k INFO Train Epoch: 104 [75%] 2023-03-10 20:56:12,465 44k INFO Losses: [2.383042335510254, 2.2254109382629395, 11.079497337341309, 19.57961082458496, 0.9717331528663635], step: 49800, lr: 9.870833329479095e-05 2023-03-10 20:56:53,131 44k INFO ====> Epoch: 104, cost 171.62 s 2023-03-10 20:57:30,075 44k INFO Train Epoch: 105 [17%] 2023-03-10 20:57:30,076 44k INFO Losses: [2.7522201538085938, 1.7528356313705444, 6.633907794952393, 14.481117248535156, 1.1196534633636475], step: 50000, lr: 9.86959947531291e-05 2023-03-10 20:57:32,976 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\G_50000.pth 2023-03-10 20:57:33,703 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\D_50000.pth 2023-03-10 20:57:34,393 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_47000.pth 2023-03-10 20:57:34,423 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_47000.pth 2023-03-10 20:58:41,243 44k INFO Train Epoch: 105 [58%] 2023-03-10 20:58:41,244 44k INFO Losses: [2.69797682762146, 2.271547317504883, 9.141576766967773, 18.336898803710938, 0.6949782371520996], step: 50200, lr: 9.86959947531291e-05 2023-03-10 20:59:48,641 44k INFO ====> Epoch: 105, cost 175.51 s 2023-03-10 20:59:57,968 44k INFO Train Epoch: 106 [0%] 2023-03-10 20:59:57,968 44k INFO Losses: [2.3631558418273926, 2.461293935775757, 8.532692909240723, 20.91162872314453, 1.1543737649917603], step: 50400, lr: 9.868365775378495e-05 2023-03-10 21:01:05,333 44k INFO Train Epoch: 106 [42%] 2023-03-10 21:01:05,334 44k INFO Losses: [2.436030626296997, 2.436185836791992, 9.54138469696045, 20.4470272064209, 1.0038262605667114], step: 50600, lr: 9.868365775378495e-05 2023-03-10 21:02:11,941 44k INFO Train Epoch: 106 [83%] 2023-03-10 21:02:11,941 44k INFO Losses: [2.7014520168304443, 2.3312535285949707, 7.470837116241455, 18.34264373779297, 0.9698311686515808], step: 50800, lr: 9.868365775378495e-05 2023-03-10 21:02:39,227 44k INFO ====> Epoch: 106, cost 170.59 s 2023-03-10 21:03:29,363 44k INFO Train Epoch: 107 [25%] 2023-03-10 21:03:29,363 44k INFO Losses: [2.588062047958374, 2.1817493438720703, 6.350254058837891, 20.88898277282715, 1.352725863456726], step: 51000, lr: 9.867132229656573e-05 2023-03-10 21:03:32,187 44k INFO Saving model and optimizer state at iteration 107 to ./logs\44k\G_51000.pth 2023-03-10 21:03:32,847 44k INFO Saving model and optimizer state at iteration 107 to ./logs\44k\D_51000.pth 2023-03-10 21:03:33,493 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_48000.pth 2023-03-10 21:03:33,531 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_48000.pth 2023-03-10 21:04:39,935 44k INFO Train Epoch: 107 [67%] 2023-03-10 21:04:39,935 44k INFO Losses: [2.5752532482147217, 2.3378093242645264, 7.246879577636719, 18.540746688842773, 1.5605111122131348], step: 51200, lr: 9.867132229656573e-05 2023-03-10 21:05:33,710 44k INFO ====> Epoch: 107, cost 174.48 s 2023-03-10 21:05:56,633 44k INFO Train Epoch: 108 [8%] 2023-03-10 21:05:56,633 44k INFO Losses: [2.6402182579040527, 2.2177233695983887, 12.219375610351562, 21.974668502807617, 0.8670315146446228], step: 51400, lr: 9.865898838127865e-05 2023-03-10 21:07:03,614 44k INFO Train Epoch: 108 [50%] 2023-03-10 21:07:03,615 44k INFO Losses: [2.1195716857910156, 2.484283924102783, 13.186516761779785, 22.448562622070312, 1.218029499053955], step: 51600, lr: 9.865898838127865e-05 2023-03-10 21:08:10,540 44k INFO Train Epoch: 108 [92%] 2023-03-10 21:08:10,540 44k INFO Losses: [2.5393869876861572, 2.2606606483459473, 10.176072120666504, 19.993452072143555, 1.0593936443328857], step: 51800, lr: 9.865898838127865e-05 2023-03-10 21:08:24,336 44k INFO ====> Epoch: 108, cost 170.63 s 2023-03-10 21:09:27,776 44k INFO Train Epoch: 109 [33%] 2023-03-10 21:09:27,776 44k INFO Losses: [2.381410837173462, 2.297278642654419, 8.875455856323242, 18.65761947631836, 1.0452433824539185], step: 52000, lr: 9.864665600773098e-05 2023-03-10 21:09:30,643 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\G_52000.pth 2023-03-10 21:09:31,305 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\D_52000.pth 2023-03-10 21:09:31,950 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_49000.pth 2023-03-10 21:09:31,986 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_49000.pth 2023-03-10 21:10:38,543 44k INFO Train Epoch: 109 [75%] 2023-03-10 21:10:38,543 44k INFO Losses: [2.5127453804016113, 2.160771369934082, 8.82069206237793, 19.43600845336914, 0.9794597029685974], step: 52200, lr: 9.864665600773098e-05 2023-03-10 21:11:18,882 44k INFO ====> Epoch: 109, cost 174.55 s 2023-03-10 21:11:55,516 44k INFO Train Epoch: 110 [17%] 2023-03-10 21:11:55,517 44k INFO Losses: [2.3839221000671387, 2.1822118759155273, 10.269407272338867, 20.051111221313477, 0.8753963708877563], step: 52400, lr: 9.863432517573002e-05 2023-03-10 21:13:02,784 44k INFO Train Epoch: 110 [58%] 2023-03-10 21:13:02,784 44k INFO Losses: [2.4736809730529785, 2.1552302837371826, 11.130157470703125, 18.118209838867188, 0.7142908573150635], step: 52600, lr: 9.863432517573002e-05 2023-03-10 21:14:09,994 44k INFO ====> Epoch: 110, cost 171.11 s 2023-03-10 21:14:19,459 44k INFO Train Epoch: 111 [0%] 2023-03-10 21:14:19,459 44k INFO Losses: [2.620481491088867, 2.339320659637451, 7.493619441986084, 17.740379333496094, 1.2952884435653687], step: 52800, lr: 9.862199588508305e-05 2023-03-10 21:15:26,927 44k INFO Train Epoch: 111 [42%] 2023-03-10 21:15:26,927 44k INFO Losses: [2.55251145362854, 2.334249258041382, 8.751977920532227, 18.8341121673584, 1.351853609085083], step: 53000, lr: 9.862199588508305e-05 2023-03-10 21:15:29,685 44k INFO Saving model and optimizer state at iteration 111 to ./logs\44k\G_53000.pth 2023-03-10 21:15:30,391 44k INFO Saving model and optimizer state at iteration 111 to ./logs\44k\D_53000.pth 2023-03-10 21:15:31,015 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_50000.pth 2023-03-10 21:15:31,057 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_50000.pth 2023-03-10 21:16:37,669 44k INFO Train Epoch: 111 [83%] 2023-03-10 21:16:37,669 44k INFO Losses: [2.4264345169067383, 2.5841052532196045, 10.159029006958008, 19.77457618713379, 1.0425516366958618], step: 53200, lr: 9.862199588508305e-05 2023-03-10 21:17:04,939 44k INFO ====> Epoch: 111, cost 174.95 s 2023-03-10 21:17:55,868 44k INFO Train Epoch: 112 [25%] 2023-03-10 21:17:55,868 44k INFO Losses: [2.5143566131591797, 2.2728426456451416, 10.925360679626465, 20.717330932617188, 1.4278674125671387], step: 53400, lr: 9.86096681355974e-05 2023-03-10 21:19:05,634 44k INFO Train Epoch: 112 [67%] 2023-03-10 21:19:05,634 44k INFO Losses: [2.384084701538086, 2.6019084453582764, 9.453219413757324, 20.705461502075195, 1.2243319749832153], step: 53600, lr: 9.86096681355974e-05 2023-03-10 21:20:03,117 44k INFO ====> Epoch: 112, cost 178.18 s 2023-03-10 21:20:29,210 44k INFO Train Epoch: 113 [8%] 2023-03-10 21:20:29,211 44k INFO Losses: [2.832183837890625, 2.3098342418670654, 7.901485919952393, 16.831310272216797, 1.2201958894729614], step: 53800, lr: 9.859734192708044e-05 2023-03-10 21:21:39,463 44k INFO Train Epoch: 113 [50%] 2023-03-10 21:21:39,463 44k INFO Losses: [2.3680665493011475, 2.384824752807617, 11.000890731811523, 21.351930618286133, 1.1414543390274048], step: 54000, lr: 9.859734192708044e-05 2023-03-10 21:21:42,591 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\G_54000.pth 2023-03-10 21:21:43,275 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\D_54000.pth 2023-03-10 21:21:44,019 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_51000.pth 2023-03-10 21:21:44,062 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_51000.pth 2023-03-10 21:22:54,525 44k INFO Train Epoch: 113 [92%] 2023-03-10 21:22:54,526 44k INFO Losses: [2.3481788635253906, 2.6041312217712402, 11.768047332763672, 20.798093795776367, 1.249822974205017], step: 54200, lr: 9.859734192708044e-05 2023-03-10 21:23:08,747 44k INFO ====> Epoch: 113, cost 185.63 s 2023-03-10 21:24:18,165 44k INFO Train Epoch: 114 [33%] 2023-03-10 21:24:18,165 44k INFO Losses: [2.7301435470581055, 1.8387211561203003, 8.390239715576172, 17.569154739379883, 1.0150202512741089], step: 54400, lr: 9.858501725933955e-05 2023-03-10 21:25:30,890 44k INFO Train Epoch: 114 [75%] 2023-03-10 21:25:30,890 44k INFO Losses: [2.5253982543945312, 2.2110021114349365, 7.582423210144043, 17.346508026123047, 1.0292681455612183], step: 54600, lr: 9.858501725933955e-05 2023-03-10 21:26:13,009 44k INFO ====> Epoch: 114, cost 184.26 s 2023-03-10 21:26:51,601 44k INFO Train Epoch: 115 [17%] 2023-03-10 21:26:51,602 44k INFO Losses: [2.4145846366882324, 2.1539323329925537, 9.008591651916504, 20.50507164001465, 1.070541501045227], step: 54800, lr: 9.857269413218213e-05 2023-03-10 21:28:02,314 44k INFO Train Epoch: 115 [58%] 2023-03-10 21:28:02,315 44k INFO Losses: [2.488292932510376, 2.1845619678497314, 10.300168991088867, 17.974010467529297, 1.3763055801391602], step: 55000, lr: 9.857269413218213e-05 2023-03-10 21:28:05,374 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\G_55000.pth 2023-03-10 21:28:06,061 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\D_55000.pth 2023-03-10 21:28:06,761 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_52000.pth 2023-03-10 21:28:06,808 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_52000.pth
2023-03-10 21:29:17,878 44k INFO ====> Epoch: 115, cost 184.87 s
2023-03-10 21:29:28,476 44k INFO Train Epoch: 116 [0%]
2023-03-10 21:29:28,479 44k INFO Losses: [2.38480544090271, 2.469709873199463, 11.554682731628418, 20.43757438659668, 1.0705339908599854], step: 55200, lr: 9.85603725454156e-05
2023-03-10 21:31:05,324 44k INFO Train Epoch: 116 [42%]
2023-03-10 21:31:05,325 44k INFO Losses: [2.4011828899383545, 2.1080901622772217, 8.725489616394043, 18.80291748046875, 1.369552731513977], step: 55400, lr: 9.85603725454156e-05
2023-03-10 21:32:17,395 44k INFO Train Epoch: 116 [83%]
2023-03-10 21:32:17,396 44k INFO Losses: [2.541152238845825, 2.3482189178466797, 8.99125862121582, 16.391376495361328, 0.9556304812431335], step: 55600, lr: 9.85603725454156e-05
2023-03-10 21:32:46,549 44k INFO ====> Epoch: 116, cost 208.67 s
2023-03-10 21:33:40,507 44k INFO Train Epoch: 117 [25%]
2023-03-10 21:33:40,508 44k INFO Losses: [2.7141623497009277, 2.009599447250366, 6.5364227294921875, 19.62139892578125, 1.184281349182129], step: 55800, lr: 9.854805249884741e-05
2023-03-10 21:34:51,906 44k INFO Train Epoch: 117 [67%]
2023-03-10 21:34:51,907 44k INFO Losses: [2.4313013553619385, 2.482556104660034, 7.637519359588623, 19.015520095825195, 1.0271650552749634], step: 56000, lr: 9.854805249884741e-05
2023-03-10 21:34:55,229 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\G_56000.pth
2023-03-10 21:34:55,994 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\D_56000.pth
2023-03-10 21:34:56,710 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_53000.pth
2023-03-10 21:34:56,756 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_53000.pth
2023-03-10 21:35:55,101 44k INFO ====> Epoch: 117, cost 188.55 s
2023-03-10 21:36:19,928 44k INFO Train Epoch: 118 [8%]
2023-03-10 21:36:19,928 44k INFO Losses: [2.366985321044922, 2.3517630100250244, 10.775310516357422, 22.367156982421875, 1.5898412466049194], step: 56200, lr: 9.853573399228505e-05
2023-03-10 21:37:32,371 44k INFO Train Epoch: 118 [50%]
2023-03-10 21:37:32,372 44k INFO Losses: [2.1534993648529053, 2.738316774368286, 14.169914245605469, 23.25755500793457, 1.1168138980865479], step: 56400, lr: 9.853573399228505e-05
2023-03-10 21:38:42,706 44k INFO Train Epoch: 118 [92%]
2023-03-10 21:38:42,706 44k INFO Losses: [2.412947654724121, 2.391221046447754, 9.04958438873291, 20.948633193969727, 1.259826898574829], step: 56600, lr: 9.853573399228505e-05
2023-03-10 21:38:57,091 44k INFO ====> Epoch: 118, cost 181.99 s
2023-03-10 21:40:04,195 44k INFO Train Epoch: 119 [33%]
2023-03-10 21:40:04,195 44k INFO Losses: [2.3386669158935547, 2.438791513442993, 11.563238143920898, 19.185998916625977, 1.0667145252227783], step: 56800, lr: 9.8523417025536e-05
2023-03-10 21:41:15,377 44k INFO Train Epoch: 119 [75%]
2023-03-10 21:41:15,378 44k INFO Losses: [2.6045753955841064, 2.2635958194732666, 9.972951889038086, 19.166879653930664, 1.159376621246338], step: 57000, lr: 9.8523417025536e-05
2023-03-10 21:41:18,323 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\G_57000.pth
2023-03-10 21:41:19,051 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\D_57000.pth
2023-03-10 21:41:19,716 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_54000.pth
2023-03-10 21:41:19,744 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_54000.pth
2023-03-10 21:42:03,462 44k INFO ====> Epoch: 119, cost 186.37 s
2023-03-10 21:42:43,503 44k INFO Train Epoch: 120 [17%]
2023-03-10 21:42:43,504 44k INFO Losses: [2.3767828941345215, 2.386854648590088, 10.218685150146484, 17.212636947631836, 1.011182188987732], step: 57200, lr: 9.851110159840781e-05
2023-03-10 21:43:55,018 44k INFO Train Epoch: 120 [58%]
2023-03-10 21:43:55,018 44k INFO Losses: [2.395080327987671, 2.1684978008270264, 11.069388389587402, 18.909616470336914, 1.292800784111023], step: 57400, lr: 9.851110159840781e-05
2023-03-10 21:45:08,967 44k INFO ====> Epoch: 120, cost 185.50 s
2023-03-10 21:45:19,277 44k INFO Train Epoch: 121 [0%]
2023-03-10 21:45:19,278 44k INFO Losses: [2.4142961502075195, 2.5343339443206787, 9.690354347229004, 19.851322174072266, 1.2172437906265259], step: 57600, lr: 9.8498787710708e-05
2023-03-10 21:46:35,034 44k INFO Train Epoch: 121 [42%]
2023-03-10 21:46:35,040 44k INFO Losses: [2.4165563583374023, 2.4216089248657227, 10.888091087341309, 21.200775146484375, 1.0869629383087158], step: 57800, lr: 9.8498787710708e-05
2023-03-10 21:47:56,745 44k INFO Train Epoch: 121 [83%]
2023-03-10 21:47:56,750 44k INFO Losses: [2.282374858856201, 2.471604824066162, 7.937741279602051, 15.686677932739258, 0.9070390462875366], step: 58000, lr: 9.8498787710708e-05
2023-03-10 21:48:00,311 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\G_58000.pth
2023-03-10 21:48:01,094 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\D_58000.pth
2023-03-10 21:48:01,866 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_55000.pth
2023-03-10 21:48:01,919 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_55000.pth
2023-03-10 21:48:34,404 44k INFO ====> Epoch: 121, cost 205.44 s
2023-03-10 21:49:30,578 44k INFO Train Epoch: 122 [25%]
2023-03-10 21:49:30,579 44k INFO Losses: [2.721353530883789, 1.9338716268539429, 4.5783538818359375, 15.087936401367188, 1.4242318868637085], step: 58200, lr: 9.848647536224416e-05
2023-03-10 21:50:42,731 44k INFO Train Epoch: 122 [67%]
2023-03-10 21:50:42,732 44k INFO Losses: [2.5823512077331543, 2.185462236404419, 8.540579795837402, 19.86931037902832, 1.4148286581039429], step: 58400, lr: 9.848647536224416e-05
2023-03-10 21:51:40,371 44k INFO ====> Epoch: 122, cost 185.97 s
2023-03-10 21:52:05,125 44k INFO Train Epoch: 123 [8%]
2023-03-10 21:52:05,125 44k INFO Losses: [2.561491012573242, 2.201587438583374, 7.961848735809326, 19.036191940307617, 1.2058391571044922], step: 58600, lr: 9.847416455282387e-05
2023-03-10 21:53:17,070 44k INFO Train Epoch: 123 [50%]
2023-03-10 21:53:17,071 44k INFO Losses: [2.123305320739746, 2.409733295440674, 12.327372550964355, 21.999929428100586, 1.4899181127548218], step: 58800, lr: 9.847416455282387e-05
2023-03-10 21:54:29,461 44k INFO Train Epoch: 123 [92%]
2023-03-10 21:54:29,461 44k INFO Losses: [2.3109254837036133, 2.4792587757110596, 10.145918846130371, 20.15896224975586, 1.159805417060852], step: 59000, lr: 9.847416455282387e-05
2023-03-10 21:54:32,485 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\G_59000.pth
2023-03-10 21:54:33,217 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\D_59000.pth
2023-03-10 21:54:33,871 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_56000.pth
2023-03-10 21:54:33,917 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_56000.pth
2023-03-10 21:54:48,359 44k INFO ====> Epoch: 123, cost 187.99 s
2023-03-10 21:55:57,038 44k INFO Train Epoch: 124 [33%]
2023-03-10 21:55:57,040 44k INFO Losses: [2.3641140460968018, 2.43084454536438, 9.14197063446045, 15.625608444213867, 1.3044573068618774], step: 59200, lr: 9.846185528225477e-05
2023-03-10 21:57:18,646 44k INFO Train Epoch: 124 [75%]
2023-03-10 21:57:18,646 44k INFO Losses: [2.476146697998047, 2.431386709213257, 10.71250057220459, 19.3042049407959, 1.2434529066085815], step: 59400, lr: 9.846185528225477e-05
2023-03-10 21:58:07,564 44k INFO ====> Epoch: 124, cost 199.20 s
2023-03-10 21:58:47,452 44k INFO Train Epoch: 125 [17%]
2023-03-10 21:58:47,453 44k INFO Losses: [2.5148580074310303, 2.395150899887085, 8.830897331237793, 19.638525009155273, 1.011060118675232], step: 59600, lr: 9.84495475503445e-05
2023-03-10 21:59:59,119 44k INFO Train Epoch: 125 [58%]
2023-03-10 21:59:59,120 44k INFO Losses: [2.6108510494232178, 2.1697988510131836, 8.154233932495117, 18.474275588989258, 0.7748168706893921], step: 59800, lr: 9.84495475503445e-05
2023-03-10 22:01:23,055 44k INFO ====> Epoch: 125, cost 195.49 s
2023-03-10 22:01:33,480 44k INFO Train Epoch: 126 [0%]
2023-03-10 22:01:33,480 44k INFO Losses: [2.602079391479492, 2.0111606121063232, 6.074387073516846, 18.45685577392578, 1.1268730163574219], step: 60000, lr: 9.84372413569007e-05
2023-03-10 22:01:36,481 44k INFO Saving model and optimizer state at iteration 126 to ./logs\44k\G_60000.pth
2023-03-10 22:01:37,157 44k INFO Saving model and optimizer state at iteration 126 to ./logs\44k\D_60000.pth
2023-03-10 22:01:37,801 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_57000.pth
2023-03-10 22:01:37,831 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_57000.pth
2023-03-10 22:02:58,106 44k INFO Train Epoch: 126 [42%]
2023-03-10 22:02:58,107 44k INFO Losses: [2.395695686340332, 2.563560962677002, 11.31453800201416, 21.63389778137207, 1.244583249092102], step: 60200, lr: 9.84372413569007e-05
2023-03-10 22:04:08,064 44k INFO Train Epoch: 126 [83%]
2023-03-10 22:04:08,064 44k INFO Losses: [2.509502410888672, 2.230083703994751, 9.750213623046875, 20.110122680664062, 0.9419601559638977], step: 60400, lr: 9.84372413569007e-05
2023-03-10 22:04:37,008 44k INFO ====> Epoch: 126, cost 193.95 s
2023-03-10 22:05:32,909 44k INFO Train Epoch: 127 [25%]
2023-03-10 22:05:32,910 44k INFO Losses: [2.613250255584717, 2.1424992084503174, 8.195858001708984, 21.610280990600586, 1.7847726345062256], step: 60600, lr: 9.842493670173108e-05
2023-03-10 22:06:43,533 44k INFO Train Epoch: 127 [67%]
2023-03-10 22:06:43,534 44k INFO Losses: [2.90395450592041, 1.9409708976745605, 6.407796382904053, 18.405025482177734, 1.2056750059127808], step: 60800, lr: 9.842493670173108e-05
2023-03-10 22:07:39,964 44k INFO ====> Epoch: 127, cost 182.96 s
2023-03-10 22:08:04,166 44k INFO Train Epoch: 128 [8%]
2023-03-10 22:08:04,167 44k INFO Losses: [2.3968169689178467, 2.3018054962158203, 10.053366661071777, 20.41103744506836, 0.9091386795043945], step: 61000, lr: 9.841263358464336e-05
2023-03-10 22:08:07,147 44k INFO Saving model and optimizer state at iteration 128 to ./logs\44k\G_61000.pth
2023-03-10 22:08:07,832 44k INFO Saving model and optimizer state at iteration 128 to ./logs\44k\D_61000.pth
2023-03-10 22:08:08,484 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_58000.pth
2023-03-10 22:08:08,522 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_58000.pth
2023-03-10 22:09:17,347 44k INFO Train Epoch: 128 [50%]
2023-03-10 22:09:17,348 44k INFO Losses: [2.279478073120117, 2.633390426635742, 11.67409896850586, 23.693246841430664, 0.983919084072113], step: 61200, lr: 9.841263358464336e-05
2023-03-10 22:10:26,651 44k INFO Train Epoch: 128 [92%]
2023-03-10 22:10:26,651 44k INFO Losses: [2.5496785640716553, 2.323397159576416, 9.32941722869873, 18.87200927734375, 1.106742024421692], step: 61400, lr: 9.841263358464336e-05
2023-03-10 22:10:41,022 44k INFO ====> Epoch: 128, cost 181.06 s
2023-03-10 22:11:48,102 44k INFO Train Epoch: 129 [33%]
2023-03-10 22:11:48,103 44k INFO Losses: [2.4743332862854004, 2.4446592330932617, 8.519780158996582, 19.945512771606445, 0.560448408126831], step: 61600, lr: 9.840033200544528e-05
2023-03-10 22:12:58,896 44k INFO Train Epoch: 129 [75%]
2023-03-10 22:12:58,897 44k INFO Losses: [2.324143409729004, 2.315108299255371, 11.200321197509766, 19.443256378173828, 1.3374336957931519], step: 61800, lr: 9.840033200544528e-05
2023-03-10 22:13:40,942 44k INFO ====> Epoch: 129, cost 179.92 s
2023-03-10 22:14:22,860 44k INFO Train Epoch: 130 [17%]
2023-03-10 22:14:22,861 44k INFO Losses: [2.336583375930786, 2.301057815551758, 11.023492813110352, 20.191280364990234, 1.4816441535949707], step: 62000, lr: 9.838803196394459e-05
2023-03-10 22:14:26,606 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\G_62000.pth
2023-03-10 22:14:27,473 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\D_62000.pth
2023-03-10 22:14:28,256 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_59000.pth
2023-03-10 22:14:28,290 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_59000.pth
2023-03-10 22:15:44,749 44k INFO Train Epoch: 130 [58%]
2023-03-10 22:15:44,749 44k INFO Losses: [2.54327392578125, 2.0947272777557373, 9.685648918151855, 18.373594284057617, 1.1531988382339478], step: 62200, lr: 9.838803196394459e-05
2023-03-10 22:16:57,755 44k INFO ====> Epoch: 130, cost 196.81 s
2023-03-11 14:38:57,816 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 60, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kiriha': 0}, 'model_dir': './logs\\44k'}
2023-03-11 14:38:57,849 44k WARNING git hash values are different.
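The lr values in the entries above track the 'lr_decay': 0.999875 setting from the config dump: the learning rate is multiplied by 0.999875 once per epoch, starting from 'learning_rate': 0.0001. A minimal sketch (not this repository's code) reproducing the logged schedule; note the exponent can be off by one depending on how the scheduler's epoch counter is restored when a run is resumed:

```python
def lr_at_epoch(epoch: int, base_lr: float = 1e-4, decay: float = 0.999875) -> float:
    """Per-epoch exponential decay as configured; epoch 1 keeps the base rate."""
    return base_lr * decay ** (epoch - 1)

# Epoch 2 entries log lr: 9.99875e-05; epoch 3 entries log lr: 9.99750015625e-05.
```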
cea6df30(saved) != cc5b3bbe(current)
2023-03-11 14:39:00,242 44k INFO Loaded checkpoint './logs\44k\G_0.pth' (iteration 1)
2023-03-11 14:39:00,795 44k INFO Loaded checkpoint './logs\44k\D_0.pth' (iteration 1)
2023-03-11 14:39:14,966 44k INFO Train Epoch: 1 [0%]
2023-03-11 14:39:14,966 44k INFO Losses: [2.692014694213867, 2.2042808532714844, 10.523763656616211, 26.676721572875977, 2.287257432937622], step: 0, lr: 0.0001
2023-03-11 14:39:18,413 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\G_0.pth
2023-03-11 14:39:19,121 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\D_0.pth
2023-03-11 14:40:39,330 44k INFO Train Epoch: 1 [40%]
2023-03-11 14:40:39,330 44k INFO Losses: [2.5842509269714355, 2.2707109451293945, 9.328883171081543, 22.88845443725586, 1.5948665142059326], step: 200, lr: 0.0001
2023-03-11 14:41:55,791 44k INFO Train Epoch: 1 [80%]
2023-03-11 14:41:55,792 44k INFO Losses: [2.840078592300415, 1.9012465476989746, 5.9664459228515625, 17.856870651245117, 1.5224264860153198], step: 400, lr: 0.0001
2023-03-11 14:42:37,234 44k INFO ====> Epoch: 1, cost 219.42 s
2023-03-11 14:43:22,326 44k INFO Train Epoch: 2 [20%]
2023-03-11 14:43:22,327 44k INFO Losses: [2.4843711853027344, 2.051591157913208, 10.628742218017578, 21.35457420349121, 2.1983642578125], step: 600, lr: 9.99875e-05
2023-03-11 14:44:32,774 44k INFO Train Epoch: 2 [60%]
2023-03-11 14:44:32,775 44k INFO Losses: [2.747645378112793, 1.9935522079467773, 3.6814563274383545, 19.112133026123047, 1.3366352319717407], step: 800, lr: 9.99875e-05
2023-03-11 14:45:41,170 44k INFO ====> Epoch: 2, cost 183.94 s
2023-03-11 14:45:52,027 44k INFO Train Epoch: 3 [0%]
2023-03-11 14:45:52,027 44k INFO Losses: [2.5647811889648438, 2.2300755977630615, 7.884703636169434, 18.696147918701172, 1.4687904119491577], step: 1000, lr: 9.99750015625e-05
2023-03-11 14:45:55,022 44k INFO Saving model and optimizer state at iteration 3 to ./logs\44k\G_1000.pth
2023-03-11 14:45:55,788 44k INFO
Saving model and optimizer state at iteration 3 to ./logs\44k\D_1000.pth
2023-03-11 14:47:05,561 44k INFO Train Epoch: 3 [40%]
2023-03-11 14:47:05,562 44k INFO Losses: [2.47259521484375, 2.232937812805176, 11.074995040893555, 23.25304412841797, 1.9003015756607056], step: 1200, lr: 9.99750015625e-05
2023-03-11 14:48:15,847 44k INFO Train Epoch: 3 [81%]
2023-03-11 14:48:15,847 44k INFO Losses: [2.565763473510742, 1.9296002388000488, 7.391387939453125, 20.192710876464844, 0.944815993309021], step: 1400, lr: 9.99750015625e-05
2023-03-11 14:48:49,288 44k INFO ====> Epoch: 3, cost 188.12 s
2023-03-11 14:49:35,316 44k INFO Train Epoch: 4 [21%]
2023-03-11 14:49:35,317 44k INFO Losses: [2.547124147415161, 2.1744332313537598, 8.457550048828125, 20.16890525817871, 1.5796974897384644], step: 1600, lr: 9.996250468730469e-05
2023-03-11 14:50:44,618 44k INFO Train Epoch: 4 [61%]
2023-03-11 14:50:44,619 44k INFO Losses: [2.593186378479004, 2.0808334350585938, 9.594361305236816, 20.63544273376465, 1.7887381315231323], step: 1800, lr: 9.996250468730469e-05
2023-03-11 14:51:51,986 44k INFO ====> Epoch: 4, cost 182.70 s
2023-03-11 14:52:03,236 44k INFO Train Epoch: 5 [1%]
2023-03-11 14:52:03,236 44k INFO Losses: [2.459156036376953, 2.2056891918182373, 10.7974853515625, 20.2313232421875, 1.6866904497146606], step: 2000, lr: 9.995000937421877e-05
2023-03-11 14:52:06,038 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\G_2000.pth
2023-03-11 14:52:06,771 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\D_2000.pth
2023-03-11 14:53:16,249 44k INFO Train Epoch: 5 [41%]
2023-03-11 14:53:16,249 44k INFO Losses: [2.3860321044921875, 2.4146268367767334, 8.062605857849121, 20.828868865966797, 1.8541488647460938], step: 2200, lr: 9.995000937421877e-05
2023-03-11 14:54:25,011 44k INFO Train Epoch: 5 [81%]
2023-03-11 14:54:25,011 44k INFO Losses: [2.4417428970336914, 2.323552131652832, 7.779649257659912, 21.470956802368164, 1.1967414617538452], step: 2400, lr: 9.995000937421877e-05
2023-03-11 14:54:58,012 44k INFO ====> Epoch: 5, cost 186.03 s
2023-03-11 14:55:43,655 44k INFO Train Epoch: 6 [21%]
2023-03-11 14:55:43,655 44k INFO Losses: [2.5927295684814453, 2.033508062362671, 9.40347957611084, 19.048782348632812, 1.2411624193191528], step: 2600, lr: 9.993751562304699e-05
2023-03-11 14:56:52,143 44k INFO Train Epoch: 6 [61%]
2023-03-11 14:56:52,144 44k INFO Losses: [2.6325714588165283, 2.111382246017456, 12.593308448791504, 25.28567886352539, 1.625187873840332], step: 2800, lr: 9.993751562304699e-05
2023-03-11 14:57:59,067 44k INFO ====> Epoch: 6, cost 181.06 s
2023-03-11 14:58:11,030 44k INFO Train Epoch: 7 [1%]
2023-03-11 14:58:11,030 44k INFO Losses: [2.4880237579345703, 2.0183353424072266, 10.287726402282715, 22.53156089782715, 1.4209307432174683], step: 3000, lr: 9.99250234335941e-05
2023-03-11 14:58:13,909 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\G_3000.pth
2023-03-11 14:58:14,621 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\D_3000.pth
2023-03-11 14:59:23,934 44k INFO Train Epoch: 7 [41%]
2023-03-11 14:59:23,935 44k INFO Losses: [2.620680093765259, 2.1960368156433105, 7.801065921783447, 22.072189331054688, 1.6776741743087769], step: 3200, lr: 9.99250234335941e-05
2023-03-11 15:00:35,283 44k INFO Train Epoch: 7 [81%]
2023-03-11 15:00:35,284 44k INFO Losses: [2.471092700958252, 2.2279698848724365, 11.837902069091797, 22.174488067626953, 1.4028078317642212], step: 3400, lr: 9.99250234335941e-05
2023-03-11 15:01:08,731 44k INFO ====> Epoch: 7, cost 189.66 s
2023-03-11 15:01:56,486 44k INFO Train Epoch: 8 [21%]
2023-03-11 15:01:56,487 44k INFO Losses: [2.5055289268493652, 2.2497341632843018, 8.2509765625, 19.745094299316406, 1.326791524887085], step: 3600, lr: 9.991253280566489e-05
2023-03-11 15:03:04,887 44k INFO Train Epoch: 8 [62%]
2023-03-11 15:03:04,887 44k INFO Losses: [2.5939910411834717, 1.9677252769470215, 7.963136196136475, 21.543333053588867, 1.3148150444030762], step: 3800, lr: 9.991253280566489e-05
2023-03-11 15:04:10,839 44k INFO ====> Epoch: 8, cost 182.11 s
2023-03-11 15:04:23,195 44k INFO Train Epoch: 9 [2%]
2023-03-11 15:04:23,195 44k INFO Losses: [2.7018299102783203, 2.057337760925293, 5.8021159172058105, 18.413925170898438, 0.982990562915802], step: 4000, lr: 9.990004373906418e-05
2023-03-11 15:04:25,970 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\G_4000.pth
2023-03-11 15:04:26,641 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\D_4000.pth
2023-03-11 15:04:27,408 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_1000.pth
2023-03-11 15:04:27,453 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_1000.pth
2023-03-11 15:05:35,821 44k INFO Train Epoch: 9 [42%]
2023-03-11 15:05:35,821 44k INFO Losses: [2.523653984069824, 2.2708821296691895, 8.2249755859375, 19.987890243530273, 1.0679545402526855], step: 4200, lr: 9.990004373906418e-05
2023-03-11 15:06:44,202 44k INFO Train Epoch: 9 [82%]
2023-03-11 15:06:44,202 44k INFO Losses: [2.628239631652832, 1.8555246591567993, 6.734991550445557, 17.5546817779541, 1.4562206268310547], step: 4400, lr: 9.990004373906418e-05
2023-03-11 15:07:16,664 44k INFO ====> Epoch: 9, cost 185.82 s
2023-03-11 15:08:04,934 44k INFO Train Epoch: 10 [22%]
2023-03-11 15:08:04,935 44k INFO Losses: [2.542933225631714, 2.1957504749298096, 8.647757530212402, 22.842330932617188, 1.2793465852737427], step: 4600, lr: 9.98875562335968e-05
2023-03-11 15:09:13,525 44k INFO Train Epoch: 10 [62%]
2023-03-11 15:09:13,525 44k INFO Losses: [2.5811991691589355, 2.155183792114258, 8.990614891052246, 19.839134216308594, 1.3736584186553955], step: 4800, lr: 9.98875562335968e-05
2023-03-11 15:10:18,439 44k INFO ====> Epoch: 10, cost 181.78 s
2023-03-11 15:10:31,617 44k INFO Train Epoch: 11 [2%]
2023-03-11 15:10:31,617 44k INFO Losses: [2.8019704818725586, 1.8928039073944092, 7.6676459312438965, 16.21021270751953, 1.1751631498336792], step: 5000, lr: 9.987507028906759e-05
2023-03-11 15:10:34,430 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\G_5000.pth
2023-03-11 15:10:35,149 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\D_5000.pth
2023-03-11 15:10:35,847 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_2000.pth
2023-03-11 15:10:35,890 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_2000.pth
2023-03-11 15:11:44,282 44k INFO Train Epoch: 11 [42%]
2023-03-11 15:11:44,282 44k INFO Losses: [2.444063901901245, 2.1528403759002686, 8.002668380737305, 18.788665771484375, 1.3481777906417847], step: 5200, lr: 9.987507028906759e-05
2023-03-11 15:12:52,748 44k INFO Train Epoch: 11 [82%]
2023-03-11 15:12:52,748 44k INFO Losses: [2.3251287937164307, 2.447930097579956, 10.230199813842773, 25.002521514892578, 1.4786598682403564], step: 5400, lr: 9.987507028906759e-05
2023-03-11 15:13:23,331 44k INFO ====> Epoch: 11, cost 184.89 s
2023-03-11 15:14:11,131 44k INFO Train Epoch: 12 [22%]
2023-03-11 15:14:11,132 44k INFO Losses: [2.500746011734009, 2.217074394226074, 10.083109855651855, 22.20945167541504, 1.186476469039917], step: 5600, lr: 9.986258590528146e-05
2023-03-11 15:15:19,975 44k INFO Train Epoch: 12 [62%]
2023-03-11 15:15:19,976 44k INFO Losses: [2.4195754528045654, 2.3009634017944336, 10.828755378723145, 21.043527603149414, 1.2956870794296265], step: 5800, lr: 9.986258590528146e-05
2023-03-11 15:16:24,137 44k INFO ====> Epoch: 12, cost 180.81 s
2023-03-11 15:16:38,153 44k INFO Train Epoch: 13 [2%]
2023-03-11 15:16:38,153 44k INFO Losses: [2.5527420043945312, 2.179313898086548, 11.098637580871582, 21.033891677856445, 1.3086527585983276], step: 6000, lr: 9.98501030820433e-05
2023-03-11 15:16:41,042 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\G_6000.pth
2023-03-11 15:16:41,755 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\D_6000.pth
2023-03-11 15:16:42,457 44k INFO ..
Free up space by deleting ckpt ./logs\44k\G_3000.pth
2023-03-11 15:16:42,484 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_3000.pth
2023-03-11 15:17:50,772 44k INFO Train Epoch: 13 [42%]
2023-03-11 15:17:50,773 44k INFO Losses: [2.306198835372925, 2.1843819618225098, 12.7627534866333, 23.179401397705078, 1.2503026723861694], step: 6200, lr: 9.98501030820433e-05
2023-03-11 15:18:59,543 44k INFO Train Epoch: 13 [83%]
2023-03-11 15:18:59,543 44k INFO Losses: [2.476243734359741, 2.2170848846435547, 10.562429428100586, 24.258689880371094, 1.5527262687683105], step: 6400, lr: 9.98501030820433e-05
2023-03-11 15:19:29,356 44k INFO ====> Epoch: 13, cost 185.22 s
2023-03-11 15:20:17,575 44k INFO Train Epoch: 14 [23%]
2023-03-11 15:20:17,576 44k INFO Losses: [2.4450771808624268, 2.249439239501953, 8.47510814666748, 20.619245529174805, 1.752815842628479], step: 6600, lr: 9.983762181915804e-05
2023-03-11 15:21:26,400 44k INFO Train Epoch: 14 [63%]
2023-03-11 15:21:26,400 44k INFO Losses: [2.188438653945923, 2.6154887676239014, 12.23504638671875, 22.48587417602539, 1.298343300819397], step: 6800, lr: 9.983762181915804e-05
2023-03-11 15:22:30,012 44k INFO ====> Epoch: 14, cost 180.66 s
2023-03-11 15:22:52,704 44k INFO Train Epoch: 15 [3%]
2023-03-11 15:22:52,704 44k INFO Losses: [2.6491429805755615, 2.1789064407348633, 10.388026237487793, 21.160812377929688, 1.1098334789276123], step: 7000, lr: 9.982514211643064e-05
2023-03-11 15:22:55,530 44k INFO Saving model and optimizer state at iteration 15 to ./logs\44k\G_7000.pth
2023-03-11 15:22:56,215 44k INFO Saving model and optimizer state at iteration 15 to ./logs\44k\D_7000.pth
2023-03-11 15:22:56,922 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_4000.pth
2023-03-11 15:22:56,950 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_4000.pth
2023-03-11 15:24:05,557 44k INFO Train Epoch: 15 [43%]
2023-03-11 15:24:05,557 44k INFO Losses: [2.338808536529541, 2.1829395294189453, 8.635116577148438, 21.88563346862793, 0.9146393537521362], step: 7200, lr: 9.982514211643064e-05
2023-03-11 15:25:14,165 44k INFO Train Epoch: 15 [83%]
2023-03-11 15:25:14,166 44k INFO Losses: [2.3175389766693115, 2.387066602706909, 10.38149356842041, 19.51270294189453, 1.3940963745117188], step: 7400, lr: 9.982514211643064e-05
2023-03-11 15:25:43,424 44k INFO ====> Epoch: 15, cost 193.41 s
2023-03-11 15:26:33,619 44k INFO Train Epoch: 16 [23%]
2023-03-11 15:26:33,619 44k INFO Losses: [2.578110694885254, 1.9693292379379272, 10.056968688964844, 18.0948543548584, 1.215917944908142], step: 7600, lr: 9.981266397366609e-05
2023-03-11 15:27:43,326 44k INFO Train Epoch: 16 [63%]
2023-03-11 15:27:43,327 44k INFO Losses: [2.5368783473968506, 2.261281728744507, 9.7789945602417, 23.01810073852539, 1.438032627105713], step: 7800, lr: 9.981266397366609e-05
2023-03-11 15:28:48,635 44k INFO ====> Epoch: 16, cost 185.21 s
2023-03-11 15:29:04,125 44k INFO Train Epoch: 17 [3%]
2023-03-11 15:29:04,125 44k INFO Losses: [2.596095323562622, 2.4917972087860107, 11.360024452209473, 21.384506225585938, 1.614487886428833], step: 8000, lr: 9.980018739066937e-05
2023-03-11 15:29:07,007 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\G_8000.pth
2023-03-11 15:29:07,714 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\D_8000.pth
2023-03-11 15:29:08,429 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_5000.pth
2023-03-11 15:29:08,458 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_5000.pth
2023-03-11 15:30:16,864 44k INFO Train Epoch: 17 [43%]
2023-03-11 15:30:16,864 44k INFO Losses: [2.3486127853393555, 2.6766161918640137, 9.551776885986328, 21.577255249023438, 1.3058397769927979], step: 8200, lr: 9.980018739066937e-05
2023-03-11 15:31:25,531 44k INFO Train Epoch: 17 [83%]
2023-03-11 15:31:25,532 44k INFO Losses: [2.575294256210327, 2.41780161857605, 10.423315048217773, 21.15323257446289, 1.1830599308013916], step: 8400, lr: 9.980018739066937e-05
2023-03-11 15:31:54,015 44k INFO ====> Epoch: 17, cost 185.38 s
2023-03-11 15:32:43,879 44k INFO Train Epoch: 18 [23%]
2023-03-11 15:32:43,879 44k INFO Losses: [2.3560214042663574, 2.536303758621216, 10.412434577941895, 20.315086364746094, 1.2954427003860474], step: 8600, lr: 9.978771236724554e-05
2023-03-11 15:33:54,407 44k INFO Train Epoch: 18 [64%]
2023-03-11 15:33:54,408 44k INFO Losses: [2.5323262214660645, 2.2306408882141113, 10.00864315032959, 20.6557674407959, 1.4395190477371216], step: 8800, lr: 9.978771236724554e-05
2023-03-11 15:34:57,046 44k INFO ====> Epoch: 18, cost 183.03 s
2023-03-11 15:35:13,114 44k INFO Train Epoch: 19 [4%]
2023-03-11 15:35:13,115 44k INFO Losses: [2.267770767211914, 2.6263203620910645, 11.321374893188477, 23.627044677734375, 1.4635648727416992], step: 9000, lr: 9.977523890319963e-05
2023-03-11 15:35:15,938 44k INFO Saving model and optimizer state at iteration 19 to ./logs\44k\G_9000.pth
2023-03-11 15:35:16,659 44k INFO Saving model and optimizer state at iteration 19 to ./logs\44k\D_9000.pth
2023-03-11 15:35:17,351 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_6000.pth
2023-03-11 15:35:17,380 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_6000.pth
2023-03-11 15:36:26,485 44k INFO Train Epoch: 19 [44%]
2023-03-11 15:36:26,485 44k INFO Losses: [2.361591100692749, 2.4828968048095703, 7.672822952270508, 19.084186553955078, 1.184876799583435], step: 9200, lr: 9.977523890319963e-05
2023-03-11 15:37:35,609 44k INFO Train Epoch: 19 [84%]
2023-03-11 15:37:35,609 44k INFO Losses: [2.637167453765869, 2.3085429668426514, 8.033655166625977, 20.812997817993164, 1.6387673616409302], step: 9400, lr: 9.977523890319963e-05
2023-03-11 15:38:03,651 44k INFO ====> Epoch: 19, cost 186.60 s
2023-03-11 15:38:54,686 44k INFO Train Epoch: 20 [24%]
2023-03-11 15:38:54,686 44k INFO Losses: [2.6092593669891357, 2.18682861328125, 8.330883979797363, 20.886384963989258, 1.4030362367630005], step: 9600, lr: 9.976276699833672e-05
2023-03-11 15:40:04,284 44k INFO Train Epoch: 20 [64%]
2023-03-11 15:40:04,284 44k INFO Losses: [2.6140048503875732, 2.4397807121276855, 8.55968952178955, 18.41922378540039, 1.6601309776306152], step: 9800, lr: 9.976276699833672e-05
2023-03-11 15:41:06,089 44k INFO ====> Epoch: 20, cost 182.44 s
2023-03-11 15:41:23,827 44k INFO Train Epoch: 21 [4%]
2023-03-11 15:41:23,828 44k INFO Losses: [2.3171749114990234, 2.4037892818450928, 10.515302658081055, 23.69483757019043, 1.6149330139160156], step: 10000, lr: 9.975029665246193e-05
2023-03-11 15:41:27,004 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\G_10000.pth
2023-03-11 15:41:27,684 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\D_10000.pth
2023-03-11 15:41:28,398 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_7000.pth
2023-03-11 15:41:28,428 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_7000.pth
2023-03-11 15:42:36,609 44k INFO Train Epoch: 21 [44%]
2023-03-11 15:42:36,609 44k INFO Losses: [2.5392627716064453, 2.3838067054748535, 8.312740325927734, 18.544279098510742, 1.4240038394927979], step: 10200, lr: 9.975029665246193e-05
2023-03-11 15:43:44,902 44k INFO Train Epoch: 21 [84%]
2023-03-11 15:43:44,903 44k INFO Losses: [2.2344932556152344, 2.477482318878174, 9.714980125427246, 18.273998260498047, 1.0531855821609497], step: 10400, lr: 9.975029665246193e-05
2023-03-11 15:44:11,834 44k INFO ====> Epoch: 21, cost 185.74 s
2023-03-11 15:45:08,546 44k INFO Train Epoch: 22 [24%]
2023-03-11 15:45:08,546 44k INFO Losses: [2.521688461303711, 2.221766948699951, 5.417431831359863, 20.354537963867188, 1.4835706949234009], step: 10600, lr: 9.973782786538036e-05
2023-03-11 15:46:16,533 44k INFO Train Epoch: 22 [64%]
2023-03-11 15:46:16,534 44k INFO Losses: [2.551846504211426, 2.256437301635742, 9.894890785217285, 19.35997772216797, 1.3543282747268677], step: 10800, lr: 9.973782786538036e-05
2023-03-11 15:47:16,760 44k INFO ====> Epoch: 22, cost 184.93 s
2023-03-11 15:47:33,789 44k INFO Train Epoch: 23 [4%]
2023-03-11 15:47:33,789 44k INFO Losses: [2.268132209777832, 2.1404449939727783, 14.050559997558594, 23.146106719970703, 1.333193302154541], step: 11000, lr: 9.972536063689719e-05
2023-03-11 15:47:36,634 44k INFO Saving model and optimizer state at iteration 23 to ./logs\44k\G_11000.pth
2023-03-11 15:47:37,308 44k INFO Saving model and optimizer state at iteration 23 to ./logs\44k\D_11000.pth
2023-03-11 15:47:38,031 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_8000.pth
2023-03-11 15:47:38,067 44k INFO ..
Free up space by deleting ckpt ./logs\44k\D_8000.pth 2023-03-11 15:48:45,821 44k INFO Train Epoch: 23 [44%] 2023-03-11 15:48:45,821 44k INFO Losses: [2.54402494430542, 2.162327289581299, 9.02639389038086, 20.96063804626465, 1.2151039838790894], step: 11200, lr: 9.972536063689719e-05 2023-03-11 15:49:53,670 44k INFO Train Epoch: 23 [85%] 2023-03-11 15:49:53,671 44k INFO Losses: [2.396627902984619, 2.3666141033172607, 8.22119426727295, 19.241539001464844, 1.3158494234085083], step: 11400, lr: 9.972536063689719e-05 2023-03-11 15:50:19,887 44k INFO ====> Epoch: 23, cost 183.13 s 2023-03-11 15:51:10,931 44k INFO Train Epoch: 24 [25%] 2023-03-11 15:51:10,931 44k INFO Losses: [2.531647205352783, 2.247694730758667, 6.8384881019592285, 18.87564468383789, 1.2408108711242676], step: 11600, lr: 9.971289496681757e-05 2023-03-11 15:52:18,837 44k INFO Train Epoch: 24 [65%] 2023-03-11 15:52:18,837 44k INFO Losses: [2.49040150642395, 2.3242979049682617, 11.187715530395508, 21.2567138671875, 1.2845029830932617], step: 11800, lr: 9.971289496681757e-05 2023-03-11 15:53:18,554 44k INFO ====> Epoch: 24, cost 178.67 s 2023-03-11 15:53:36,200 44k INFO Train Epoch: 25 [5%] 2023-03-11 15:53:36,201 44k INFO Losses: [2.8594980239868164, 1.9376575946807861, 4.578432083129883, 16.318082809448242, 1.0923633575439453], step: 12000, lr: 9.970043085494672e-05 2023-03-11 15:53:38,944 44k INFO Saving model and optimizer state at iteration 25 to ./logs\44k\G_12000.pth 2023-03-11 15:53:39,612 44k INFO Saving model and optimizer state at iteration 25 to ./logs\44k\D_12000.pth 2023-03-11 15:53:40,316 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_9000.pth 2023-03-11 15:53:40,345 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_9000.pth 2023-03-11 15:54:48,305 44k INFO Train Epoch: 25 [45%] 2023-03-11 15:54:48,306 44k INFO Losses: [2.3794496059417725, 2.621330976486206, 7.374293327331543, 15.248960494995117, 1.4345470666885376], step: 12200, lr: 9.970043085494672e-05 2023-03-11 15:55:56,021 44k INFO Train Epoch: 25 [85%] 2023-03-11 15:55:56,021 44k INFO Losses: [2.8018879890441895, 2.3064675331115723, 6.092841625213623, 17.962278366088867, 0.8339828252792358], step: 12400, lr: 9.970043085494672e-05 2023-03-11 15:56:21,549 44k INFO ====> Epoch: 25, cost 182.99 s 2023-03-11 15:57:13,334 44k INFO Train Epoch: 26 [25%] 2023-03-11 15:57:13,334 44k INFO Losses: [2.6115291118621826, 1.9229179620742798, 8.922040939331055, 20.844497680664062, 1.23288893699646], step: 12600, lr: 9.968796830108985e-05 2023-03-11 15:58:21,393 44k INFO Train Epoch: 26 [65%] 2023-03-11 15:58:21,393 44k INFO Losses: [2.4812917709350586, 2.373945474624634, 7.699307441711426, 19.854042053222656, 1.3993052244186401], step: 12800, lr: 9.968796830108985e-05 2023-03-11 15:59:20,346 44k INFO ====> Epoch: 26, cost 178.80 s 2023-03-11 15:59:38,681 44k INFO Train Epoch: 27 [5%] 2023-03-11 15:59:38,681 44k INFO Losses: [2.4151499271392822, 2.4096596240997314, 8.39862060546875, 18.028606414794922, 1.1739342212677002], step: 13000, lr: 9.967550730505221e-05 2023-03-11 15:59:41,415 44k INFO Saving model and optimizer state at iteration 27 to ./logs\44k\G_13000.pth 2023-03-11 15:59:42,110 44k INFO Saving model and optimizer state at iteration 27 to ./logs\44k\D_13000.pth 2023-03-11 15:59:42,814 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_10000.pth 2023-03-11 15:59:42,859 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_10000.pth 2023-03-11 16:00:50,707 44k INFO Train Epoch: 27 [45%] 2023-03-11 16:00:50,708 44k INFO Losses: [2.4616153240203857, 2.4259119033813477, 10.093733787536621, 20.094026565551758, 1.1932034492492676], step: 13200, lr: 9.967550730505221e-05 2023-03-11 16:01:58,609 44k INFO Train Epoch: 27 [85%] 2023-03-11 16:01:58,610 44k INFO Losses: [2.3026363849639893, 2.629148006439209, 9.92847728729248, 22.299041748046875, 1.4256742000579834], step: 13400, lr: 9.967550730505221e-05 2023-03-11 16:02:23,479 44k INFO ====> Epoch: 27, cost 183.13 s 2023-03-11 16:03:15,929 44k INFO Train Epoch: 28 [25%] 2023-03-11 16:03:15,929 44k INFO Losses: [2.310499668121338, 2.369058847427368, 8.837119102478027, 19.002578735351562, 1.42247474193573], step: 13600, lr: 9.966304786663908e-05 2023-03-11 16:04:24,091 44k INFO Train Epoch: 28 [66%] 2023-03-11 16:04:24,092 44k INFO Losses: [2.4716298580169678, 2.2599573135375977, 8.4513578414917, 20.024991989135742, 1.4681367874145508], step: 13800, lr: 9.966304786663908e-05 2023-03-11 16:05:22,465 44k INFO ====> Epoch: 28, cost 178.99 s 2023-03-11 16:05:41,431 44k INFO Train Epoch: 29 [6%] 2023-03-11 16:05:41,431 44k INFO Losses: [2.325470447540283, 2.2134058475494385, 8.323976516723633, 16.853160858154297, 1.224076509475708], step: 14000, lr: 9.965058998565574e-05 2023-03-11 16:05:44,238 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\G_14000.pth 2023-03-11 16:05:44,894 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\D_14000.pth 2023-03-11 16:05:45,586 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_11000.pth 2023-03-11 16:05:45,625 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_11000.pth 2023-03-11 16:06:53,389 44k INFO Train Epoch: 29 [46%] 2023-03-11 16:06:53,389 44k INFO Losses: [2.687605857849121, 2.0422115325927734, 5.888756275177002, 19.188798904418945, 1.4618215560913086], step: 14200, lr: 9.965058998565574e-05 2023-03-11 16:08:01,193 44k INFO Train Epoch: 29 [86%] 2023-03-11 16:08:01,194 44k INFO Losses: [2.5680272579193115, 2.2773945331573486, 9.648045539855957, 21.222450256347656, 1.2454417943954468], step: 14400, lr: 9.965058998565574e-05 2023-03-11 16:08:25,431 44k INFO ====> Epoch: 29, cost 182.97 s 2023-03-11 16:09:18,470 44k INFO Train Epoch: 30 [26%] 2023-03-11 16:09:18,470 44k INFO Losses: [2.655322313308716, 2.1192033290863037, 6.735584259033203, 15.48363208770752, 1.568287968635559], step: 14600, lr: 9.963813366190753e-05 2023-03-11 16:10:26,515 44k INFO Train Epoch: 30 [66%] 2023-03-11 16:10:26,515 44k INFO Losses: [2.5184521675109863, 2.1745223999023438, 11.049639701843262, 20.16798210144043, 1.6601899862289429], step: 14800, lr: 9.963813366190753e-05 2023-03-11 16:11:24,237 44k INFO ====> Epoch: 30, cost 178.81 s 2023-03-11 16:11:43,819 44k INFO Train Epoch: 31 [6%] 2023-03-11 16:11:43,819 44k INFO Losses: [2.2327215671539307, 2.5086252689361572, 9.413586616516113, 17.829486846923828, 1.290259838104248], step: 15000, lr: 9.962567889519979e-05 2023-03-11 16:11:46,691 44k INFO Saving model and optimizer state at iteration 31 to ./logs\44k\G_15000.pth 2023-03-11 16:11:47,361 44k INFO Saving model and optimizer state at iteration 31 to ./logs\44k\D_15000.pth 2023-03-11 16:11:48,062 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_12000.pth 2023-03-11 16:11:48,105 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_12000.pth 2023-03-11 16:12:55,958 44k INFO Train Epoch: 31 [46%] 2023-03-11 16:12:55,959 44k INFO Losses: [2.499075174331665, 2.2139151096343994, 8.173677444458008, 20.767545700073242, 1.2070579528808594], step: 15200, lr: 9.962567889519979e-05 2023-03-11 16:14:03,769 44k INFO Train Epoch: 31 [86%] 2023-03-11 16:14:03,769 44k INFO Losses: [2.642566442489624, 1.9031744003295898, 11.62675666809082, 20.67244529724121, 0.6412864327430725], step: 15400, lr: 9.962567889519979e-05 2023-03-11 16:14:27,343 44k INFO ====> Epoch: 31, cost 183.11 s 2023-03-11 16:15:21,257 44k INFO Train Epoch: 32 [26%] 2023-03-11 16:15:21,257 44k INFO Losses: [2.644009590148926, 2.460171699523926, 9.25871467590332, 21.089767456054688, 0.8900614380836487], step: 15600, lr: 9.961322568533789e-05 2023-03-11 16:16:29,330 44k INFO Train Epoch: 32 [66%] 2023-03-11 16:16:29,330 44k INFO Losses: [2.5840609073638916, 2.215465545654297, 8.920888900756836, 21.032459259033203, 1.4248700141906738], step: 15800, lr: 9.961322568533789e-05 2023-03-11 16:17:26,386 44k INFO ====> Epoch: 32, cost 179.04 s 2023-03-11 16:17:46,745 44k INFO Train Epoch: 33 [6%] 2023-03-11 16:17:46,746 44k INFO Losses: [2.524556875228882, 2.350281000137329, 8.435318946838379, 20.257322311401367, 0.5090409517288208], step: 16000, lr: 9.960077403212722e-05 2023-03-11 16:17:49,620 44k INFO Saving model and optimizer state at iteration 33 to ./logs\44k\G_16000.pth 2023-03-11 16:17:50,287 44k INFO Saving model and optimizer state at iteration 33 to ./logs\44k\D_16000.pth 2023-03-11 16:17:51,018 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_13000.pth 2023-03-11 16:17:51,056 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_13000.pth 2023-03-11 16:18:59,238 44k INFO Train Epoch: 33 [46%] 2023-03-11 16:18:59,239 44k INFO Losses: [2.469866991043091, 2.4440743923187256, 9.283712387084961, 19.435638427734375, 1.3612940311431885], step: 16200, lr: 9.960077403212722e-05 2023-03-11 16:20:07,040 44k INFO Train Epoch: 33 [87%] 2023-03-11 16:20:07,040 44k INFO Losses: [2.414421558380127, 2.263139247894287, 8.946000099182129, 20.819942474365234, 1.2298552989959717], step: 16400, lr: 9.960077403212722e-05 2023-03-11 16:20:29,939 44k INFO ====> Epoch: 33, cost 183.55 s 2023-03-11 16:21:24,491 44k INFO Train Epoch: 34 [27%] 2023-03-11 16:21:24,491 44k INFO Losses: [2.290252208709717, 2.33988094329834, 12.770888328552246, 21.065275192260742, 1.9355679750442505], step: 16600, lr: 9.95883239353732e-05 2023-03-11 16:22:32,551 44k INFO Train Epoch: 34 [67%] 2023-03-11 16:22:32,552 44k INFO Losses: [2.5562222003936768, 2.077059507369995, 6.424043655395508, 15.157722473144531, 1.3733930587768555], step: 16800, lr: 9.95883239353732e-05 2023-03-11 16:23:29,077 44k INFO ====> Epoch: 34, cost 179.14 s 2023-03-11 16:23:50,162 44k INFO Train Epoch: 35 [7%] 2023-03-11 16:23:50,163 44k INFO Losses: [2.4464995861053467, 2.1622304916381836, 9.448915481567383, 19.629295349121094, 1.099104881286621], step: 17000, lr: 9.957587539488128e-05 2023-03-11 16:23:52,949 44k INFO Saving model and optimizer state at iteration 35 to ./logs\44k\G_17000.pth 2023-03-11 16:23:53,654 44k INFO Saving model and optimizer state at iteration 35 to ./logs\44k\D_17000.pth 2023-03-11 16:23:54,362 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_14000.pth 2023-03-11 16:23:54,398 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_14000.pth 2023-03-11 16:25:02,460 44k INFO Train Epoch: 35 [47%] 2023-03-11 16:25:02,461 44k INFO Losses: [2.5337371826171875, 2.33553409576416, 8.414680480957031, 19.264318466186523, 1.1393330097198486], step: 17200, lr: 9.957587539488128e-05 2023-03-11 16:26:10,360 44k INFO Train Epoch: 35 [87%] 2023-03-11 16:26:10,361 44k INFO Losses: [2.6141910552978516, 2.19928240776062, 5.9537858963012695, 17.382766723632812, 1.1941275596618652], step: 17400, lr: 9.957587539488128e-05 2023-03-11 16:26:32,610 44k INFO ====> Epoch: 35, cost 183.53 s 2023-03-11 16:27:27,853 44k INFO Train Epoch: 36 [27%] 2023-03-11 16:27:27,854 44k INFO Losses: [2.6472768783569336, 2.579900026321411, 8.806924819946289, 18.302488327026367, 1.3880969285964966], step: 17600, lr: 9.956342841045691e-05 2023-03-11 16:28:35,979 44k INFO Train Epoch: 36 [67%] 2023-03-11 16:28:35,980 44k INFO Losses: [2.6052515506744385, 2.260882616043091, 5.999853134155273, 16.851709365844727, 0.7387580871582031], step: 17800, lr: 9.956342841045691e-05 2023-03-11 16:29:31,815 44k INFO ====> Epoch: 36, cost 179.20 s 2023-03-11 16:29:53,581 44k INFO Train Epoch: 37 [7%] 2023-03-11 16:29:53,582 44k INFO Losses: [2.767744302749634, 2.052274227142334, 8.497169494628906, 21.28492546081543, 1.4624831676483154], step: 18000, lr: 9.95509829819056e-05 2023-03-11 16:29:56,367 44k INFO Saving model and optimizer state at iteration 37 to ./logs\44k\G_18000.pth 2023-03-11 16:29:57,086 44k INFO Saving model and optimizer state at iteration 37 to ./logs\44k\D_18000.pth 2023-03-11 16:29:57,804 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_15000.pth 2023-03-11 16:29:57,844 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_15000.pth 2023-03-11 16:31:05,883 44k INFO Train Epoch: 37 [47%] 2023-03-11 16:31:05,883 44k INFO Losses: [2.5826416015625, 2.750436305999756, 6.196517467498779, 17.618370056152344, 1.1970990896224976], step: 18200, lr: 9.95509829819056e-05 2023-03-11 16:32:13,948 44k INFO Train Epoch: 37 [87%] 2023-03-11 16:32:13,948 44k INFO Losses: [2.277681350708008, 2.8536429405212402, 8.20014762878418, 16.631288528442383, 1.6474748849868774], step: 18400, lr: 9.95509829819056e-05 2023-03-11 16:32:35,523 44k INFO ====> Epoch: 37, cost 183.71 s 2023-03-11 16:33:31,504 44k INFO Train Epoch: 38 [27%] 2023-03-11 16:33:31,505 44k INFO Losses: [2.301442861557007, 2.3155951499938965, 10.550759315490723, 22.16284942626953, 1.2906886339187622], step: 18600, lr: 9.953853910903285e-05 2023-03-11 16:34:39,580 44k INFO Train Epoch: 38 [68%] 2023-03-11 16:34:39,580 44k INFO Losses: [2.7100744247436523, 2.0104453563690186, 9.161460876464844, 18.685989379882812, 0.868019163608551], step: 18800, lr: 9.953853910903285e-05 2023-03-11 16:35:34,722 44k INFO ====> Epoch: 38, cost 179.20 s 2023-03-11 16:35:57,132 44k INFO Train Epoch: 39 [8%] 2023-03-11 16:35:57,133 44k INFO Losses: [2.177253007888794, 2.5475146770477295, 8.49122428894043, 19.978591918945312, 1.8943049907684326], step: 19000, lr: 9.952609679164422e-05 2023-03-11 16:35:59,918 44k INFO Saving model and optimizer state at iteration 39 to ./logs\44k\G_19000.pth 2023-03-11 16:36:00,595 44k INFO Saving model and optimizer state at iteration 39 to ./logs\44k\D_19000.pth 2023-03-11 16:36:01,305 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_16000.pth 2023-03-11 16:36:01,342 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_16000.pth 2023-03-11 16:37:09,436 44k INFO Train Epoch: 39 [48%] 2023-03-11 16:37:09,436 44k INFO Losses: [2.8255484104156494, 2.1870291233062744, 7.689174175262451, 20.043447494506836, 1.4973183870315552], step: 19200, lr: 9.952609679164422e-05 2023-03-11 16:38:17,579 44k INFO Train Epoch: 39 [88%] 2023-03-11 16:38:17,579 44k INFO Losses: [2.68557071685791, 2.174022674560547, 11.963112831115723, 20.74276351928711, 1.276839256286621], step: 19400, lr: 9.952609679164422e-05 2023-03-11 16:38:38,475 44k INFO ====> Epoch: 39, cost 183.75 s 2023-03-11 16:39:35,315 44k INFO Train Epoch: 40 [28%] 2023-03-11 16:39:35,316 44k INFO Losses: [2.218562364578247, 2.6229286193847656, 9.035249710083008, 22.876636505126953, 1.3404216766357422], step: 19600, lr: 9.951365602954526e-05 2023-03-11 16:40:43,485 44k INFO Train Epoch: 40 [68%] 2023-03-11 16:40:43,485 44k INFO Losses: [2.4556326866149902, 2.153568744659424, 9.056777954101562, 21.45380401611328, 1.3466178178787231], step: 19800, lr: 9.951365602954526e-05 2023-03-11 16:41:37,942 44k INFO ====> Epoch: 40, cost 179.47 s 2023-03-11 16:42:01,410 44k INFO Train Epoch: 41 [8%] 2023-03-11 16:42:01,410 44k INFO Losses: [2.358276844024658, 2.5106313228607178, 9.035852432250977, 17.848896026611328, 1.4333394765853882], step: 20000, lr: 9.950121682254156e-05 2023-03-11 16:42:04,332 44k INFO Saving model and optimizer state at iteration 41 to ./logs\44k\G_20000.pth 2023-03-11 16:42:05,060 44k INFO Saving model and optimizer state at iteration 41 to ./logs\44k\D_20000.pth 2023-03-11 16:42:05,765 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_17000.pth 2023-03-11 16:42:05,795 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_17000.pth 2023-03-11 16:43:13,747 44k INFO Train Epoch: 41 [48%] 2023-03-11 16:43:13,747 44k INFO Losses: [2.757983684539795, 2.174842357635498, 6.241888046264648, 15.312981605529785, 0.7716366648674011], step: 20200, lr: 9.950121682254156e-05 2023-03-11 16:44:21,536 44k INFO Train Epoch: 41 [88%] 2023-03-11 16:44:21,537 44k INFO Losses: [2.2964181900024414, 2.576197862625122, 11.501364707946777, 22.86332130432129, 1.5891414880752563], step: 20400, lr: 9.950121682254156e-05 2023-03-11 16:44:41,688 44k INFO ====> Epoch: 41, cost 183.75 s 2023-03-11 16:45:39,059 44k INFO Train Epoch: 42 [28%] 2023-03-11 16:45:39,060 44k INFO Losses: [2.4904518127441406, 2.275092840194702, 7.894067287445068, 19.96741485595703, 1.1788595914840698], step: 20600, lr: 9.948877917043875e-05 2023-03-11 16:46:46,962 44k INFO Train Epoch: 42 [68%] 2023-03-11 16:46:46,963 44k INFO Losses: [2.727558135986328, 2.3687798976898193, 9.183929443359375, 16.081384658813477, 1.4059839248657227], step: 20800, lr: 9.948877917043875e-05 2023-03-11 16:47:40,552 44k INFO ====> Epoch: 42, cost 178.86 s 2023-03-11 16:48:04,617 44k INFO Train Epoch: 43 [8%] 2023-03-11 16:48:04,617 44k INFO Losses: [2.5260770320892334, 2.349147081375122, 9.988131523132324, 20.033578872680664, 1.376304030418396], step: 21000, lr: 9.947634307304244e-05 2023-03-11 16:48:07,408 44k INFO Saving model and optimizer state at iteration 43 to ./logs\44k\G_21000.pth 2023-03-11 16:48:08,144 44k INFO Saving model and optimizer state at iteration 43 to ./logs\44k\D_21000.pth 2023-03-11 16:48:08,850 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_18000.pth 2023-03-11 16:48:08,884 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_18000.pth 2023-03-11 16:49:16,852 44k INFO Train Epoch: 43 [48%] 2023-03-11 16:49:16,852 44k INFO Losses: [2.4984028339385986, 2.4417600631713867, 8.500208854675293, 16.99225616455078, 1.1533056497573853], step: 21200, lr: 9.947634307304244e-05 2023-03-11 16:50:24,724 44k INFO Train Epoch: 43 [89%] 2023-03-11 16:50:24,724 44k INFO Losses: [2.473890781402588, 2.702753782272339, 7.3053812980651855, 15.003735542297363, 1.7307685613632202], step: 21400, lr: 9.947634307304244e-05 2023-03-11 16:50:44,190 44k INFO ====> Epoch: 43, cost 183.64 s 2023-03-11 16:51:42,272 44k INFO Train Epoch: 44 [29%] 2023-03-11 16:51:42,273 44k INFO Losses: [2.5038490295410156, 2.1202573776245117, 9.077492713928223, 17.84222984313965, 1.4935842752456665], step: 21600, lr: 9.94639085301583e-05 2023-03-11 16:52:50,561 44k INFO Train Epoch: 44 [69%] 2023-03-11 16:52:50,561 44k INFO Losses: [2.65716290473938, 2.289363145828247, 7.747422218322754, 17.09707260131836, 1.6776820421218872], step: 21800, lr: 9.94639085301583e-05 2023-03-11 16:53:43,895 44k INFO ====> Epoch: 44, cost 179.71 s 2023-03-11 16:54:08,382 44k INFO Train Epoch: 45 [9%] 2023-03-11 16:54:08,382 44k INFO Losses: [2.6393637657165527, 2.100656747817993, 10.441187858581543, 19.468936920166016, 1.2376543283462524], step: 22000, lr: 9.945147554159202e-05 2023-03-11 16:54:11,101 44k INFO Saving model and optimizer state at iteration 45 to ./logs\44k\G_22000.pth 2023-03-11 16:54:11,769 44k INFO Saving model and optimizer state at iteration 45 to ./logs\44k\D_22000.pth 2023-03-11 16:54:12,476 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_19000.pth 2023-03-11 16:54:12,506 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_19000.pth 2023-03-11 16:55:20,828 44k INFO Train Epoch: 45 [49%] 2023-03-11 16:55:20,829 44k INFO Losses: [2.4671361446380615, 2.0390491485595703, 12.168758392333984, 18.287343978881836, 0.6854475736618042], step: 22200, lr: 9.945147554159202e-05 2023-03-11 16:56:29,056 44k INFO Train Epoch: 45 [89%] 2023-03-11 16:56:29,056 44k INFO Losses: [2.5607428550720215, 2.481017827987671, 7.669865131378174, 21.49156951904297, 1.5193575620651245], step: 22400, lr: 9.945147554159202e-05 2023-03-11 16:56:48,010 44k INFO ====> Epoch: 45, cost 184.12 s 2023-03-11 16:57:46,858 44k INFO Train Epoch: 46 [29%] 2023-03-11 16:57:46,859 44k INFO Losses: [2.495621681213379, 2.284907341003418, 11.992480278015137, 21.132936477661133, 0.8640260100364685], step: 22600, lr: 9.943904410714931e-05 2023-03-11 16:58:55,263 44k INFO Train Epoch: 46 [69%] 2023-03-11 16:58:55,264 44k INFO Losses: [2.585268974304199, 2.0236713886260986, 10.22848129272461, 20.488847732543945, 1.0707169771194458], step: 22800, lr: 9.943904410714931e-05 2023-03-11 16:59:47,800 44k INFO ====> Epoch: 46, cost 179.79 s 2023-03-11 17:00:13,175 44k INFO Train Epoch: 47 [9%] 2023-03-11 17:00:13,175 44k INFO Losses: [2.445901393890381, 2.3792667388916016, 7.496738433837891, 20.465368270874023, 1.4133371114730835], step: 23000, lr: 9.942661422663591e-05 2023-03-11 17:00:16,088 44k INFO Saving model and optimizer state at iteration 47 to ./logs\44k\G_23000.pth 2023-03-11 17:00:16,766 44k INFO Saving model and optimizer state at iteration 47 to ./logs\44k\D_23000.pth 2023-03-11 17:00:17,464 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_20000.pth 2023-03-11 17:00:17,493 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_20000.pth 2023-03-11 17:01:25,771 44k INFO Train Epoch: 47 [49%] 2023-03-11 17:01:25,771 44k INFO Losses: [2.467263698577881, 2.380950927734375, 8.57858657836914, 18.60300064086914, 0.9571957588195801], step: 23200, lr: 9.942661422663591e-05 2023-03-11 17:02:33,971 44k INFO Train Epoch: 47 [89%] 2023-03-11 17:02:33,972 44k INFO Losses: [2.340728759765625, 2.3148326873779297, 9.841753959655762, 19.913787841796875, 1.1770426034927368], step: 23400, lr: 9.942661422663591e-05 2023-03-11 17:02:52,266 44k INFO ====> Epoch: 47, cost 184.47 s 2023-03-11 17:03:51,981 44k INFO Train Epoch: 48 [29%] 2023-03-11 17:03:51,982 44k INFO Losses: [2.4629876613616943, 2.1850712299346924, 7.08063268661499, 17.942771911621094, 1.166876196861267], step: 23600, lr: 9.941418589985758e-05 2023-03-11 17:05:00,351 44k INFO Train Epoch: 48 [70%] 2023-03-11 17:05:00,351 44k INFO Losses: [2.2366554737091064, 2.580770492553711, 10.116865158081055, 19.903745651245117, 1.3801454305648804], step: 23800, lr: 9.941418589985758e-05 2023-03-11 17:05:52,342 44k INFO ====> Epoch: 48, cost 180.08 s 2023-03-11 17:06:19,065 44k INFO Train Epoch: 49 [10%] 2023-03-11 17:06:19,065 44k INFO Losses: [2.477945327758789, 2.2095224857330322, 11.174955368041992, 21.357126235961914, 1.2417038679122925], step: 24000, lr: 9.940175912662009e-05 2023-03-11 17:06:22,006 44k INFO Saving model and optimizer state at iteration 49 to ./logs\44k\G_24000.pth 2023-03-11 17:06:22,676 44k INFO Saving model and optimizer state at iteration 49 to ./logs\44k\D_24000.pth 2023-03-11 17:06:23,374 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_21000.pth 2023-03-11 17:06:23,403 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_21000.pth 2023-03-11 17:07:31,617 44k INFO Train Epoch: 49 [50%] 2023-03-11 17:07:31,617 44k INFO Losses: [2.886035203933716, 1.8785781860351562, 7.4775614738464355, 20.276151657104492, 1.4474354982376099], step: 24200, lr: 9.940175912662009e-05 2023-03-11 17:08:40,000 44k INFO Train Epoch: 49 [90%] 2023-03-11 17:08:40,000 44k INFO Losses: [2.437899112701416, 2.2021918296813965, 6.469944000244141, 16.84340476989746, 1.5232365131378174], step: 24400, lr: 9.940175912662009e-05 2023-03-11 17:08:57,555 44k INFO ====> Epoch: 49, cost 185.21 s 2023-03-11 17:09:57,804 44k INFO Train Epoch: 50 [30%] 2023-03-11 17:09:57,804 44k INFO Losses: [2.488111972808838, 2.159778594970703, 7.508576393127441, 19.945409774780273, 1.291427731513977], step: 24600, lr: 9.938933390672926e-05 2023-03-11 17:11:06,251 44k INFO Train Epoch: 50 [70%] 2023-03-11 17:11:06,251 44k INFO Losses: [2.338547945022583, 2.3386571407318115, 8.272408485412598, 17.085607528686523, 1.423154592514038], step: 24800, lr: 9.938933390672926e-05 2023-03-11 17:11:57,769 44k INFO ====> Epoch: 50, cost 180.21 s 2023-03-11 17:12:24,504 44k INFO Train Epoch: 51 [10%] 2023-03-11 17:12:24,504 44k INFO Losses: [2.441009759902954, 2.2490782737731934, 9.04307746887207, 20.92750358581543, 1.4339230060577393], step: 25000, lr: 9.937691023999092e-05 2023-03-11 17:12:27,319 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\G_25000.pth 2023-03-11 17:12:27,994 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\D_25000.pth 2023-03-11 17:12:28,694 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_22000.pth 2023-03-11 17:12:28,725 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_22000.pth 2023-03-11 17:13:37,006 44k INFO Train Epoch: 51 [50%] 2023-03-11 17:13:37,006 44k INFO Losses: [2.7132444381713867, 1.9351176023483276, 8.24345588684082, 18.08477783203125, 1.3425847291946411], step: 25200, lr: 9.937691023999092e-05 2023-03-11 17:14:45,453 44k INFO Train Epoch: 51 [90%] 2023-03-11 17:14:45,454 44k INFO Losses: [2.6710379123687744, 2.590837001800537, 8.807258605957031, 18.14118003845215, 0.9072006344795227], step: 25400, lr: 9.937691023999092e-05 2023-03-11 17:15:02,294 44k INFO ====> Epoch: 51, cost 184.52 s 2023-03-11 17:16:03,223 44k INFO Train Epoch: 52 [30%] 2023-03-11 17:16:03,223 44k INFO Losses: [2.4516069889068604, 2.0709495544433594, 10.720499038696289, 21.143922805786133, 1.3466352224349976], step: 25600, lr: 9.936448812621091e-05 2023-03-11 17:17:11,695 44k INFO Train Epoch: 52 [70%] 2023-03-11 17:17:11,695 44k INFO Losses: [2.7132415771484375, 2.3023927211761475, 7.095896244049072, 17.257678985595703, 0.7083165049552917], step: 25800, lr: 9.936448812621091e-05 2023-03-11 17:18:02,490 44k INFO ====> Epoch: 52, cost 180.20 s 2023-03-11 17:18:29,911 44k INFO Train Epoch: 53 [10%] 2023-03-11 17:18:29,912 44k INFO Losses: [2.4639596939086914, 2.4552831649780273, 9.676838874816895, 21.291894912719727, 1.2999194860458374], step: 26000, lr: 9.935206756519513e-05 2023-03-11 17:18:32,730 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\G_26000.pth 2023-03-11 17:18:33,482 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\D_26000.pth 2023-03-11 17:18:34,197 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_23000.pth 2023-03-11 17:18:34,229 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_23000.pth 2023-03-11 17:19:42,532 44k INFO Train Epoch: 53 [51%] 2023-03-11 17:19:42,532 44k INFO Losses: [2.4027254581451416, 2.3741626739501953, 9.75043773651123, 19.30217170715332, 1.1789641380310059], step: 26200, lr: 9.935206756519513e-05 2023-03-11 17:20:51,050 44k INFO Train Epoch: 53 [91%] 2023-03-11 17:20:51,051 44k INFO Losses: [2.536745071411133, 2.023958683013916, 9.865911483764648, 19.794979095458984, 1.3731187582015991], step: 26400, lr: 9.935206756519513e-05 2023-03-11 17:21:07,248 44k INFO ====> Epoch: 53, cost 184.76 s 2023-03-11 17:22:08,904 44k INFO Train Epoch: 54 [31%] 2023-03-11 17:22:08,904 44k INFO Losses: [2.377667188644409, 2.2875494956970215, 13.346915245056152, 21.483728408813477, 0.7558721899986267], step: 26600, lr: 9.933964855674948e-05 2023-03-11 17:23:17,457 44k INFO Train Epoch: 54 [71%] 2023-03-11 17:23:17,457 44k INFO Losses: [2.660957098007202, 2.0975182056427, 9.122368812561035, 18.4363956451416, 1.4415180683135986], step: 26800, lr: 9.933964855674948e-05 2023-03-11 17:24:07,606 44k INFO ====> Epoch: 54, cost 180.36 s 2023-03-11 17:24:35,700 44k INFO Train Epoch: 55 [11%] 2023-03-11 17:24:35,700 44k INFO Losses: [2.592202663421631, 2.096247911453247, 7.021150588989258, 18.67670249938965, 1.0539186000823975], step: 27000, lr: 9.932723110067987e-05 2023-03-11 17:24:38,454 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\G_27000.pth 2023-03-11 17:24:39,205 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\D_27000.pth 2023-03-11 17:24:39,911 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_24000.pth 2023-03-11 17:24:39,953 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_24000.pth 2023-03-11 17:25:48,303 44k INFO Train Epoch: 55 [51%] 2023-03-11 17:25:48,303 44k INFO Losses: [2.388704776763916, 2.708172559738159, 5.416806697845459, 15.12116813659668, 1.149857759475708], step: 27200, lr: 9.932723110067987e-05 2023-03-11 17:26:56,874 44k INFO Train Epoch: 55 [91%] 2023-03-11 17:26:56,875 44k INFO Losses: [2.476592540740967, 2.2643284797668457, 7.098380088806152, 14.569293022155762, 0.7441232800483704], step: 27400, lr: 9.932723110067987e-05 2023-03-11 17:27:12,353 44k INFO ====> Epoch: 55, cost 184.75 s 2023-03-11 17:28:14,638 44k INFO Train Epoch: 56 [31%] 2023-03-11 17:28:14,639 44k INFO Losses: [2.2838194370269775, 2.26231050491333, 12.5723876953125, 23.70583724975586, 1.0600672960281372], step: 27600, lr: 9.931481519679228e-05 2023-03-11 17:29:23,249 44k INFO Train Epoch: 56 [71%] 2023-03-11 17:29:23,250 44k INFO Losses: [2.53718638420105, 2.2681565284729004, 8.027278900146484, 17.26943588256836, 1.2271124124526978], step: 27800, lr: 9.931481519679228e-05 2023-03-11 17:30:12,553 44k INFO ====> Epoch: 56, cost 180.20 s 2023-03-11 17:30:41,227 44k INFO Train Epoch: 57 [11%] 2023-03-11 17:30:41,227 44k INFO Losses: [2.4887173175811768, 2.2539429664611816, 7.177180290222168, 17.915536880493164, 1.286185622215271], step: 28000, lr: 9.930240084489267e-05 2023-03-11 17:30:44,058 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\G_28000.pth 2023-03-11 17:30:44,740 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\D_28000.pth 2023-03-11 17:30:45,452 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_25000.pth 2023-03-11 17:30:45,484 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_25000.pth 2023-03-11 17:31:53,882 44k INFO Train Epoch: 57 [51%] 2023-03-11 17:31:53,883 44k INFO Losses: [2.569185733795166, 2.4358510971069336, 7.250521183013916, 17.513259887695312, 1.6676620244979858], step: 28200, lr: 9.930240084489267e-05 2023-03-11 17:33:02,465 44k INFO Train Epoch: 57 [91%] 2023-03-11 17:33:02,466 44k INFO Losses: [2.3527286052703857, 2.2220966815948486, 12.004077911376953, 20.600894927978516, 0.9194080233573914], step: 28400, lr: 9.930240084489267e-05 2023-03-11 17:33:17,280 44k INFO ====> Epoch: 57, cost 184.73 s 2023-03-11 17:34:20,286 44k INFO Train Epoch: 58 [31%] 2023-03-11 17:34:20,287 44k INFO Losses: [2.5757927894592285, 2.037921905517578, 8.678258895874023, 19.348342895507812, 0.9605154991149902], step: 28600, lr: 9.928998804478705e-05 2023-03-11 17:35:28,962 44k INFO Train Epoch: 58 [72%] 2023-03-11 17:35:28,963 44k INFO Losses: [2.4977071285247803, 2.39665150642395, 9.980164527893066, 21.12443733215332, 1.6150543689727783], step: 28800, lr: 9.928998804478705e-05 2023-03-11 17:36:17,573 44k INFO ====> Epoch: 58, cost 180.29 s 2023-03-11 17:36:46,974 44k INFO Train Epoch: 59 [12%] 2023-03-11 17:36:46,974 44k INFO Losses: [2.601181983947754, 2.2783608436584473, 8.300724983215332, 19.530622482299805, 0.8267676830291748], step: 29000, lr: 9.927757679628145e-05 2023-03-11 17:36:49,766 44k INFO Saving model and optimizer state at iteration 59 to ./logs\44k\G_29000.pth 2023-03-11 17:36:50,440 44k INFO Saving model and optimizer state at iteration 59 to ./logs\44k\D_29000.pth 2023-03-11 17:36:51,136 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_26000.pth 2023-03-11 17:36:51,180 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_26000.pth 2023-03-11 17:37:59,672 44k INFO Train Epoch: 59 [52%] 2023-03-11 17:37:59,672 44k INFO Losses: [2.5214028358459473, 2.212765693664551, 7.321767807006836, 19.840919494628906, 0.9729735851287842], step: 29200, lr: 9.927757679628145e-05 2023-03-11 17:39:08,458 44k INFO Train Epoch: 59 [92%] 2023-03-11 17:39:08,459 44k INFO Losses: [2.604008197784424, 2.5697851181030273, 9.790483474731445, 18.502750396728516, 1.2918978929519653], step: 29400, lr: 9.927757679628145e-05 2023-03-11 17:39:22,575 44k INFO ====> Epoch: 59, cost 185.00 s 2023-03-11 17:40:26,189 44k INFO Train Epoch: 60 [32%] 2023-03-11 17:40:26,189 44k INFO Losses: [2.821413278579712, 2.1541152000427246, 7.721822738647461, 18.116790771484375, 1.0358073711395264], step: 29600, lr: 9.926516709918191e-05 2023-03-11 17:41:35,132 44k INFO Train Epoch: 60 [72%] 2023-03-11 17:41:35,132 44k INFO Losses: [2.4807820320129395, 2.2203245162963867, 10.342397689819336, 21.603288650512695, 1.1676857471466064], step: 29800, lr: 9.926516709918191e-05 2023-03-11 17:42:23,168 44k INFO ====> Epoch: 60, cost 180.59 s 2023-03-11 20:44:27,516 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 100, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 
'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kiriha': 0}, 'model_dir': './logs\\44k'} 2023-03-11 20:44:27,544 44k WARNING git hash values are different. cea6df30(saved) != cc5b3bbe(current) 2023-03-11 20:44:29,797 44k INFO Loaded checkpoint './logs\44k\G_29000.pth' (iteration 59) 2023-03-11 20:44:30,316 44k INFO Loaded checkpoint './logs\44k\D_29000.pth' (iteration 59) 2023-03-11 20:45:10,135 44k INFO Train Epoch: 59 [12%] 2023-03-11 20:45:10,136 44k INFO Losses: [2.7395496368408203, 2.169064521789551, 4.9782538414001465, 13.671679496765137, 1.3571621179580688], step: 29000, lr: 9.926516709918191e-05 2023-03-11 20:45:16,342 44k INFO Saving model and optimizer state at iteration 59 to ./logs\44k\G_29000.pth 2023-03-11 20:45:17,060 44k INFO Saving model and optimizer state at iteration 59 to ./logs\44k\D_29000.pth 2023-03-11 20:46:36,805 44k INFO Train Epoch: 59 [52%] 2023-03-11 20:46:36,806 44k INFO Losses: [2.559898853302002, 2.5711474418640137, 10.055688858032227, 20.156484603881836, 0.6663795709609985], step: 29200, lr: 9.926516709918191e-05 2023-03-11 20:47:53,256 44k INFO Train Epoch: 59 [92%] 2023-03-11 20:47:53,257 44k INFO Losses: [2.457517385482788, 2.3962409496307373, 10.19529914855957, 21.16303825378418, 1.4217925071716309], step: 29400, lr: 9.926516709918191e-05 2023-03-11 20:48:09,799 44k INFO ====> Epoch: 59, cost 222.28 s 2023-03-11 20:49:14,646 44k INFO Train Epoch: 60 [32%] 2023-03-11 20:49:14,647 44k INFO Losses: [2.1070749759674072, 2.3226304054260254, 12.787149429321289, 21.052770614624023, 0.926153838634491], step: 29600, lr: 9.92527589532945e-05 2023-03-11 20:50:26,225 44k INFO Train Epoch: 60 [72%] 2023-03-11 20:50:26,226 44k INFO Losses: [2.343548059463501, 2.285792827606201, 10.513692855834961, 
21.15215301513672, 1.0394333600997925], step: 29800, lr: 9.92527589532945e-05 2023-03-11 20:51:16,149 44k INFO ====> Epoch: 60, cost 186.35 s 2023-03-11 20:51:46,523 44k INFO Train Epoch: 61 [12%] 2023-03-11 20:51:46,524 44k INFO Losses: [2.568753957748413, 2.0104408264160156, 8.23434066772461, 18.395591735839844, 1.3121479749679565], step: 30000, lr: 9.924035235842533e-05 2023-03-11 20:51:49,457 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\G_30000.pth 2023-03-11 20:51:50,101 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\D_30000.pth 2023-03-11 20:51:50,660 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_27000.pth 2023-03-11 20:51:50,661 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_27000.pth 2023-03-11 20:52:57,959 44k INFO Train Epoch: 61 [52%] 2023-03-11 20:52:57,959 44k INFO Losses: [2.438237190246582, 2.186472177505493, 10.548276901245117, 22.610023498535156, 1.3734403848648071], step: 30200, lr: 9.924035235842533e-05 2023-03-11 20:54:05,429 44k INFO Train Epoch: 61 [92%] 2023-03-11 20:54:05,429 44k INFO Losses: [2.3463120460510254, 2.237057685852051, 14.490958213806152, 24.88561248779297, 1.2242178916931152], step: 30400, lr: 9.924035235842533e-05 2023-03-11 20:54:18,479 44k INFO ====> Epoch: 61, cost 182.33 s 2023-03-11 20:55:22,268 44k INFO Train Epoch: 62 [32%] 2023-03-11 20:55:22,269 44k INFO Losses: [2.4503591060638428, 2.3902668952941895, 10.274510383605957, 20.74117660522461, 1.450510859489441], step: 30600, lr: 9.922794731438052e-05 2023-03-11 20:56:30,749 44k INFO Train Epoch: 62 [72%] 2023-03-11 20:56:30,750 44k INFO Losses: [2.5514867305755615, 2.5646538734436035, 9.320901870727539, 21.71694564819336, 1.0253084897994995], step: 30800, lr: 9.922794731438052e-05 2023-03-11 20:57:17,374 44k INFO ====> Epoch: 62, cost 178.90 s 2023-03-11 20:57:47,357 44k INFO Train Epoch: 63 [12%] 2023-03-11 20:57:47,357 44k INFO Losses: [2.4540090560913086, 2.257601737976074, 8.902412414550781, 
18.52001953125, 1.301386833190918], step: 31000, lr: 9.921554382096622e-05 2023-03-11 20:57:50,164 44k INFO Saving model and optimizer state at iteration 63 to ./logs\44k\G_31000.pth 2023-03-11 20:57:50,859 44k INFO Saving model and optimizer state at iteration 63 to ./logs\44k\D_31000.pth 2023-03-11 20:57:51,419 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_28000.pth 2023-03-11 20:57:51,420 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_28000.pth 2023-03-11 20:58:57,665 44k INFO Train Epoch: 63 [53%] 2023-03-11 20:58:57,665 44k INFO Losses: [2.45166015625, 2.001648187637329, 9.828557968139648, 19.312227249145508, 0.8186819553375244], step: 31200, lr: 9.921554382096622e-05 2023-03-11 21:00:04,183 44k INFO Train Epoch: 63 [93%] 2023-03-11 21:00:04,183 44k INFO Losses: [2.596999168395996, 2.366590976715088, 6.710174560546875, 17.280502319335938, 1.1916017532348633], step: 31400, lr: 9.921554382096622e-05 2023-03-11 21:00:16,476 44k INFO ====> Epoch: 63, cost 179.10 s 2023-03-11 21:01:19,768 44k INFO Train Epoch: 64 [33%] 2023-03-11 21:01:19,769 44k INFO Losses: [2.468043088912964, 2.4250996112823486, 7.601521015167236, 17.671367645263672, 1.3288215398788452], step: 31600, lr: 9.92031418779886e-05 2023-03-11 21:02:26,120 44k INFO Train Epoch: 64 [73%] 2023-03-11 21:02:26,120 44k INFO Losses: [2.4345903396606445, 2.516038179397583, 6.115878105163574, 19.969146728515625, 1.6527137756347656], step: 31800, lr: 9.92031418779886e-05 2023-03-11 21:03:11,143 44k INFO ====> Epoch: 64, cost 174.67 s 2023-03-11 21:03:41,672 44k INFO Train Epoch: 65 [13%] 2023-03-11 21:03:41,672 44k INFO Losses: [2.53236722946167, 2.240262985229492, 8.863626480102539, 19.99082374572754, 1.3744193315505981], step: 32000, lr: 9.919074148525384e-05 2023-03-11 21:03:44,444 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\G_32000.pth 2023-03-11 21:03:45,117 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\D_32000.pth 2023-03-11 
21:03:45,749 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_29000.pth 2023-03-11 21:03:45,771 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_29000.pth 2023-03-11 21:04:52,000 44k INFO Train Epoch: 65 [53%] 2023-03-11 21:04:52,000 44k INFO Losses: [2.5555801391601562, 2.1247034072875977, 12.441925048828125, 21.132169723510742, 1.0641701221466064], step: 32200, lr: 9.919074148525384e-05 2023-03-11 21:05:58,376 44k INFO Train Epoch: 65 [93%] 2023-03-11 21:05:58,376 44k INFO Losses: [2.4824161529541016, 2.513852119445801, 7.779417514801025, 16.929662704467773, 0.9302098155021667], step: 32400, lr: 9.919074148525384e-05 2023-03-11 21:06:10,031 44k INFO ====> Epoch: 65, cost 178.89 s 2023-03-11 21:07:13,991 44k INFO Train Epoch: 66 [33%] 2023-03-11 21:07:13,992 44k INFO Losses: [2.4808437824249268, 2.0658130645751953, 6.641482353210449, 17.039703369140625, 1.0084729194641113], step: 32600, lr: 9.917834264256819e-05 2023-03-11 21:08:20,350 44k INFO Train Epoch: 66 [73%] 2023-03-11 21:08:20,350 44k INFO Losses: [2.747159242630005, 2.1277828216552734, 6.927733898162842, 18.79554557800293, 1.24385404586792], step: 32800, lr: 9.917834264256819e-05 2023-03-11 21:09:04,626 44k INFO ====> Epoch: 66, cost 174.59 s 2023-03-11 21:09:35,892 44k INFO Train Epoch: 67 [13%] 2023-03-11 21:09:35,892 44k INFO Losses: [2.785015106201172, 2.1028895378112793, 7.862151145935059, 20.33611488342285, 1.0755938291549683], step: 33000, lr: 9.916594534973787e-05 2023-03-11 21:09:38,660 44k INFO Saving model and optimizer state at iteration 67 to ./logs\44k\G_33000.pth 2023-03-11 21:09:39,352 44k INFO Saving model and optimizer state at iteration 67 to ./logs\44k\D_33000.pth 2023-03-11 21:09:39,953 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_30000.pth 2023-03-11 21:09:39,975 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_30000.pth 2023-03-11 21:10:46,240 44k INFO Train Epoch: 67 [53%] 2023-03-11 21:10:46,240 44k INFO Losses: [2.4973456859588623, 2.275160312652588, 10.420083999633789, 22.233200073242188, 0.8378921747207642], step: 33200, lr: 9.916594534973787e-05 2023-03-11 21:11:52,543 44k INFO Train Epoch: 67 [93%] 2023-03-11 21:11:52,543 44k INFO Losses: [2.489945650100708, 2.327136754989624, 6.627249717712402, 16.594816207885742, 1.1041429042816162], step: 33400, lr: 9.916594534973787e-05 2023-03-11 21:12:03,409 44k INFO ====> Epoch: 67, cost 178.78 s 2023-03-11 21:13:08,045 44k INFO Train Epoch: 68 [33%] 2023-03-11 21:13:08,045 44k INFO Losses: [2.511503219604492, 2.1270127296447754, 9.539984703063965, 19.65321922302246, 0.9752045273780823], step: 33600, lr: 9.915354960656915e-05 2023-03-11 21:14:14,437 44k INFO Train Epoch: 68 [74%] 2023-03-11 21:14:14,437 44k INFO Losses: [2.456529140472412, 2.1782665252685547, 10.622865676879883, 20.437519073486328, 1.2677642107009888], step: 33800, lr: 9.915354960656915e-05 2023-03-11 21:14:58,143 44k INFO ====> Epoch: 68, cost 174.73 s 2023-03-11 21:15:30,041 44k INFO Train Epoch: 69 [14%] 2023-03-11 21:15:30,042 44k INFO Losses: [2.4073398113250732, 2.7643909454345703, 8.386627197265625, 19.879173278808594, 1.1028239727020264], step: 34000, lr: 9.914115541286833e-05 2023-03-11 21:15:32,809 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\G_34000.pth 2023-03-11 21:15:33,496 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\D_34000.pth 2023-03-11 21:15:34,092 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_31000.pth 2023-03-11 21:15:34,114 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_31000.pth 2023-03-11 21:16:40,243 44k INFO Train Epoch: 69 [54%] 2023-03-11 21:16:40,244 44k INFO Losses: [2.3475260734558105, 2.527264356613159, 10.010459899902344, 19.846729278564453, 1.419461965560913], step: 34200, lr: 9.914115541286833e-05 2023-03-11 21:17:46,511 44k INFO Train Epoch: 69 [94%] 2023-03-11 21:17:46,511 44k INFO Losses: [2.450281858444214, 2.297757625579834, 8.116521835327148, 17.811548233032227, 1.007429599761963], step: 34400, lr: 9.914115541286833e-05 2023-03-11 21:17:57,385 44k INFO ====> Epoch: 69, cost 179.24 s 2023-03-11 21:19:04,975 44k INFO Train Epoch: 70 [34%] 2023-03-11 21:19:04,976 44k INFO Losses: [2.4433541297912598, 2.4937517642974854, 10.57686996459961, 20.45323371887207, 1.2970625162124634], step: 34600, lr: 9.912876276844171e-05 2023-03-11 21:20:14,839 44k INFO Train Epoch: 70 [74%] 2023-03-11 21:20:14,840 44k INFO Losses: [2.4562602043151855, 2.1351535320281982, 12.702497482299805, 21.96573257446289, 0.9791819453239441], step: 34800, lr: 9.912876276844171e-05 2023-03-11 21:21:00,357 44k INFO ====> Epoch: 70, cost 182.97 s 2023-03-11 21:21:34,438 44k INFO Train Epoch: 71 [14%] 2023-03-11 21:21:34,439 44k INFO Losses: [2.732184648513794, 1.9525395631790161, 8.09852123260498, 18.62911605834961, 1.888636827468872], step: 35000, lr: 9.911637167309565e-05 2023-03-11 21:21:37,300 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\G_35000.pth 2023-03-11 21:21:37,988 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\D_35000.pth 2023-03-11 21:21:38,595 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_32000.pth 2023-03-11 21:21:38,616 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_32000.pth 2023-03-11 21:22:48,077 44k INFO Train Epoch: 71 [54%] 2023-03-11 21:22:48,077 44k INFO Losses: [2.7912986278533936, 2.0502424240112305, 6.484429359436035, 18.19309425354004, 1.471872091293335], step: 35200, lr: 9.911637167309565e-05 2023-03-11 21:23:56,749 44k INFO Train Epoch: 71 [94%] 2023-03-11 21:23:56,749 44k INFO Losses: [2.471869707107544, 2.219325542449951, 11.439119338989258, 21.162534713745117, 1.31123685836792], step: 35400, lr: 9.911637167309565e-05 2023-03-11 21:24:06,473 44k INFO ====> Epoch: 71, cost 186.12 s 2023-03-11 21:25:13,678 44k INFO Train Epoch: 72 [34%] 2023-03-11 21:25:13,678 44k INFO Losses: [2.4776854515075684, 2.486640691757202, 8.286742210388184, 23.074905395507812, 1.5206538438796997], step: 35600, lr: 9.910398212663652e-05 2023-03-11 21:26:21,419 44k INFO Train Epoch: 72 [74%] 2023-03-11 21:26:21,420 44k INFO Losses: [2.3822808265686035, 2.276285171508789, 10.441347122192383, 23.423969268798828, 1.1330195665359497], step: 35800, lr: 9.910398212663652e-05 2023-03-11 21:27:04,661 44k INFO ====> Epoch: 72, cost 178.19 s 2023-03-11 21:27:38,635 44k INFO Train Epoch: 73 [14%] 2023-03-11 21:27:38,635 44k INFO Losses: [2.5506722927093506, 2.2461109161376953, 5.821172714233398, 19.562896728515625, 0.5983901023864746], step: 36000, lr: 9.909159412887068e-05 2023-03-11 21:27:41,393 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\G_36000.pth 2023-03-11 21:27:42,096 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\D_36000.pth 2023-03-11 21:27:42,888 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_33000.pth 2023-03-11 21:27:42,919 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_33000.pth 2023-03-11 21:28:50,353 44k INFO Train Epoch: 73 [55%] 2023-03-11 21:28:50,353 44k INFO Losses: [2.3358652591705322, 2.4468560218811035, 8.918264389038086, 17.358997344970703, 1.2184189558029175], step: 36200, lr: 9.909159412887068e-05 2023-03-11 21:29:56,700 44k INFO Train Epoch: 73 [95%] 2023-03-11 21:29:56,701 44k INFO Losses: [2.372572898864746, 2.7162506580352783, 7.295660018920898, 15.29194164276123, 0.6826701760292053], step: 36400, lr: 9.909159412887068e-05 2023-03-11 21:30:05,752 44k INFO ====> Epoch: 73, cost 181.09 s 2023-03-11 21:31:12,482 44k INFO Train Epoch: 74 [35%] 2023-03-11 21:31:12,482 44k INFO Losses: [2.478341817855835, 1.9461508989334106, 7.713535785675049, 18.296417236328125, 0.9300777316093445], step: 36600, lr: 9.907920767960457e-05 2023-03-11 21:32:19,028 44k INFO Train Epoch: 74 [75%] 2023-03-11 21:32:19,028 44k INFO Losses: [2.348533868789673, 2.372560739517212, 12.947299003601074, 18.33976936340332, 1.0703716278076172], step: 36800, lr: 9.907920767960457e-05 2023-03-11 21:33:00,798 44k INFO ====> Epoch: 74, cost 175.05 s 2023-03-11 21:33:34,858 44k INFO Train Epoch: 75 [15%] 2023-03-11 21:33:34,859 44k INFO Losses: [2.4666364192962646, 2.1556556224823, 11.616887092590332, 21.01991081237793, 1.2900779247283936], step: 37000, lr: 9.906682277864462e-05 2023-03-11 21:33:37,634 44k INFO Saving model and optimizer state at iteration 75 to ./logs\44k\G_37000.pth 2023-03-11 21:33:38,307 44k INFO Saving model and optimizer state at iteration 75 to ./logs\44k\D_37000.pth 2023-03-11 21:33:38,908 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_34000.pth 2023-03-11 21:33:38,930 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_34000.pth 2023-03-11 21:34:45,158 44k INFO Train Epoch: 75 [55%] 2023-03-11 21:34:45,158 44k INFO Losses: [2.45882511138916, 2.0661258697509766, 12.629100799560547, 20.689353942871094, 1.026715636253357], step: 37200, lr: 9.906682277864462e-05 2023-03-11 21:35:51,538 44k INFO Train Epoch: 75 [95%] 2023-03-11 21:35:51,539 44k INFO Losses: [2.419351577758789, 2.1658878326416016, 7.6675286293029785, 19.32959747314453, 1.3555376529693604], step: 37400, lr: 9.906682277864462e-05 2023-03-11 21:35:59,883 44k INFO ====> Epoch: 75, cost 179.09 s 2023-03-11 21:37:07,354 44k INFO Train Epoch: 76 [35%] 2023-03-11 21:37:07,355 44k INFO Losses: [2.8016536235809326, 2.131122589111328, 6.640013694763184, 15.5036039352417, 0.7720618844032288], step: 37600, lr: 9.905443942579728e-05 2023-03-11 21:38:13,842 44k INFO Train Epoch: 76 [75%] 2023-03-11 21:38:13,842 44k INFO Losses: [2.607391834259033, 2.309739589691162, 9.419706344604492, 21.45128631591797, 1.2916693687438965], step: 37800, lr: 9.905443942579728e-05 2023-03-11 21:38:54,959 44k INFO ====> Epoch: 76, cost 175.08 s 2023-03-11 21:39:29,523 44k INFO Train Epoch: 77 [15%] 2023-03-11 21:39:29,523 44k INFO Losses: [2.2328224182128906, 2.466634750366211, 11.889399528503418, 19.511920928955078, 1.2295594215393066], step: 38000, lr: 9.904205762086905e-05 2023-03-11 21:39:32,307 44k INFO Saving model and optimizer state at iteration 77 to ./logs\44k\G_38000.pth 2023-03-11 21:39:32,940 44k INFO Saving model and optimizer state at iteration 77 to ./logs\44k\D_38000.pth 2023-03-11 21:39:33,533 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_35000.pth 2023-03-11 21:39:33,556 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_35000.pth 2023-03-11 21:40:39,821 44k INFO Train Epoch: 77 [55%] 2023-03-11 21:40:39,821 44k INFO Losses: [2.5970332622528076, 2.2562038898468018, 7.702361583709717, 18.11957550048828, 1.1460437774658203], step: 38200, lr: 9.904205762086905e-05 2023-03-11 21:41:46,134 44k INFO Train Epoch: 77 [95%] 2023-03-11 21:41:46,134 44k INFO Losses: [2.2434403896331787, 2.5650393962860107, 8.58699893951416, 18.918643951416016, 1.0178459882736206], step: 38400, lr: 9.904205762086905e-05 2023-03-11 21:41:53,842 44k INFO ====> Epoch: 77, cost 178.88 s 2023-03-11 21:43:01,892 44k INFO Train Epoch: 78 [35%] 2023-03-11 21:43:01,892 44k INFO Losses: [2.437692165374756, 2.2345163822174072, 9.880803108215332, 20.183107376098633, 0.9121027588844299], step: 38600, lr: 9.902967736366644e-05 2023-03-11 21:44:08,385 44k INFO Train Epoch: 78 [76%] 2023-03-11 21:44:08,385 44k INFO Losses: [2.4501900672912598, 2.0773799419403076, 8.28588581085205, 19.002071380615234, 1.3972313404083252], step: 38800, lr: 9.902967736366644e-05 2023-03-11 21:44:48,920 44k INFO ====> Epoch: 78, cost 175.08 s 2023-03-11 21:45:24,258 44k INFO Train Epoch: 79 [16%] 2023-03-11 21:45:24,258 44k INFO Losses: [2.5614304542541504, 2.239374876022339, 10.203324317932129, 21.64052963256836, 1.200857400894165], step: 39000, lr: 9.901729865399597e-05 2023-03-11 21:45:27,001 44k INFO Saving model and optimizer state at iteration 79 to ./logs\44k\G_39000.pth 2023-03-11 21:45:27,650 44k INFO Saving model and optimizer state at iteration 79 to ./logs\44k\D_39000.pth 2023-03-11 21:45:28,256 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_36000.pth 2023-03-11 21:45:28,278 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_36000.pth 2023-03-11 21:46:34,512 44k INFO Train Epoch: 79 [56%] 2023-03-11 21:46:34,512 44k INFO Losses: [2.41797137260437, 2.6184513568878174, 9.80181884765625, 17.448963165283203, 1.0925219058990479], step: 39200, lr: 9.901729865399597e-05 2023-03-11 21:47:40,934 44k INFO Train Epoch: 79 [96%] 2023-03-11 21:47:40,934 44k INFO Losses: [2.584890604019165, 2.1763055324554443, 6.908024787902832, 16.624656677246094, 0.981696605682373], step: 39400, lr: 9.901729865399597e-05 2023-03-11 21:47:48,008 44k INFO ====> Epoch: 79, cost 179.09 s 2023-03-11 21:48:56,611 44k INFO Train Epoch: 80 [36%] 2023-03-11 21:48:56,611 44k INFO Losses: [2.4813809394836426, 2.1435842514038086, 9.205743789672852, 18.759424209594727, 1.0493196249008179], step: 39600, lr: 9.900492149166423e-05 2023-03-11 21:50:03,212 44k INFO Train Epoch: 80 [76%] 2023-03-11 21:50:03,212 44k INFO Losses: [2.5186643600463867, 2.2714664936065674, 11.138525009155273, 20.69942855834961, 1.152079701423645], step: 39800, lr: 9.900492149166423e-05 2023-03-11 21:50:42,976 44k INFO ====> Epoch: 80, cost 174.97 s 2023-03-11 21:51:18,890 44k INFO Train Epoch: 81 [16%] 2023-03-11 21:51:18,890 44k INFO Losses: [2.691744804382324, 1.9014817476272583, 6.990499973297119, 17.330385208129883, 1.2194474935531616], step: 40000, lr: 9.899254587647776e-05 2023-03-11 21:51:21,653 44k INFO Saving model and optimizer state at iteration 81 to ./logs\44k\G_40000.pth 2023-03-11 21:51:22,301 44k INFO Saving model and optimizer state at iteration 81 to ./logs\44k\D_40000.pth 2023-03-11 21:51:22,923 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_37000.pth 2023-03-11 21:51:22,946 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_37000.pth 2023-03-11 21:52:29,140 44k INFO Train Epoch: 81 [56%] 2023-03-11 21:52:29,141 44k INFO Losses: [2.626695394515991, 1.9422394037246704, 7.4837727546691895, 15.21742057800293, 1.0842126607894897], step: 40200, lr: 9.899254587647776e-05 2023-03-11 21:53:35,630 44k INFO Train Epoch: 81 [96%] 2023-03-11 21:53:35,631 44k INFO Losses: [2.5258569717407227, 2.3807830810546875, 8.786744117736816, 21.276140213012695, 1.3565562963485718], step: 40400, lr: 9.899254587647776e-05 2023-03-11 21:53:42,078 44k INFO ====> Epoch: 81, cost 179.10 s 2023-03-11 21:54:51,409 44k INFO Train Epoch: 82 [36%] 2023-03-11 21:54:51,409 44k INFO Losses: [2.6862614154815674, 1.922399878501892, 8.96406364440918, 17.150571823120117, 1.0867763757705688], step: 40600, lr: 9.89801718082432e-05 2023-03-11 21:55:58,032 44k INFO Train Epoch: 82 [76%] 2023-03-11 21:55:58,033 44k INFO Losses: [2.5406336784362793, 2.1794137954711914, 7.765326976776123, 16.156015396118164, 0.9869415163993835], step: 40800, lr: 9.89801718082432e-05 2023-03-11 21:56:37,141 44k INFO ====> Epoch: 82, cost 175.06 s 2023-03-11 21:57:13,801 44k INFO Train Epoch: 83 [16%] 2023-03-11 21:57:13,801 44k INFO Losses: [2.523186683654785, 2.332239866256714, 11.890762329101562, 19.439868927001953, 1.292678713798523], step: 41000, lr: 9.896779928676716e-05 2023-03-11 21:57:16,549 44k INFO Saving model and optimizer state at iteration 83 to ./logs\44k\G_41000.pth 2023-03-11 21:57:17,197 44k INFO Saving model and optimizer state at iteration 83 to ./logs\44k\D_41000.pth 2023-03-11 21:57:17,809 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_38000.pth 2023-03-11 21:57:17,832 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_38000.pth 2023-03-11 21:58:24,848 44k INFO Train Epoch: 83 [57%] 2023-03-11 21:58:24,848 44k INFO Losses: [2.3804595470428467, 2.101896286010742, 12.553365707397461, 20.93764877319336, 1.1010874509811401], step: 41200, lr: 9.896779928676716e-05 2023-03-11 21:59:34,851 44k INFO Train Epoch: 83 [97%] 2023-03-11 21:59:34,851 44k INFO Losses: [2.5616683959960938, 2.5136609077453613, 10.941658973693848, 19.97418975830078, 1.253754734992981], step: 41400, lr: 9.896779928676716e-05 2023-03-11 21:59:41,170 44k INFO ====> Epoch: 83, cost 184.03 s 2023-03-11 22:00:57,656 44k INFO Train Epoch: 84 [37%] 2023-03-11 22:00:57,657 44k INFO Losses: [2.418753147125244, 2.366833448410034, 8.635580062866211, 17.391084671020508, 1.283249020576477], step: 41600, lr: 9.895542831185631e-05 2023-03-11 22:02:06,945 44k INFO Train Epoch: 84 [77%] 2023-03-11 22:02:06,945 44k INFO Losses: [2.587646484375, 1.9666434526443481, 5.5245890617370605, 16.742767333984375, 1.080711841583252], step: 41800, lr: 9.895542831185631e-05 2023-03-11 22:02:46,501 44k INFO ====> Epoch: 84, cost 185.33 s 2023-03-11 22:03:24,762 44k INFO Train Epoch: 85 [17%] 2023-03-11 22:03:24,762 44k INFO Losses: [2.2816052436828613, 2.4035916328430176, 8.744461059570312, 19.551483154296875, 1.152329921722412], step: 42000, lr: 9.894305888331732e-05 2023-03-11 22:03:27,644 44k INFO Saving model and optimizer state at iteration 85 to ./logs\44k\G_42000.pth 2023-03-11 22:03:28,306 44k INFO Saving model and optimizer state at iteration 85 to ./logs\44k\D_42000.pth 2023-03-11 22:03:28,917 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_39000.pth 2023-03-11 22:03:28,939 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_39000.pth 2023-03-11 22:04:36,805 44k INFO Train Epoch: 85 [57%] 2023-03-11 22:04:36,806 44k INFO Losses: [2.419567584991455, 2.4112958908081055, 9.484169960021973, 19.15237808227539, 0.9123910665512085], step: 42200, lr: 9.894305888331732e-05 2023-03-11 22:05:45,573 44k INFO Train Epoch: 85 [97%] 2023-03-11 22:05:45,573 44k INFO Losses: [2.4248180389404297, 2.3802261352539062, 9.265824317932129, 19.45848274230957, 0.788001298904419], step: 42400, lr: 9.894305888331732e-05 2023-03-11 22:05:50,950 44k INFO ====> Epoch: 85, cost 184.45 s 2023-03-11 22:07:04,006 44k INFO Train Epoch: 86 [37%] 2023-03-11 22:07:04,006 44k INFO Losses: [2.4603285789489746, 2.292140483856201, 11.016436576843262, 19.337663650512695, 1.579387903213501], step: 42600, lr: 9.89306910009569e-05 2023-03-11 22:08:12,519 44k INFO Train Epoch: 86 [77%] 2023-03-11 22:08:12,519 44k INFO Losses: [2.392319679260254, 2.2934374809265137, 11.452933311462402, 21.38929557800293, 1.3865315914154053], step: 42800, lr: 9.89306910009569e-05 2023-03-11 22:08:51,514 44k INFO ====> Epoch: 86, cost 180.56 s 2023-03-11 22:09:30,924 44k INFO Train Epoch: 87 [17%] 2023-03-11 22:09:30,925 44k INFO Losses: [2.732043743133545, 2.1198678016662598, 8.340449333190918, 20.31920051574707, 1.4429545402526855], step: 43000, lr: 9.891832466458178e-05 2023-03-11 22:09:33,974 44k INFO Saving model and optimizer state at iteration 87 to ./logs\44k\G_43000.pth 2023-03-11 22:09:34,646 44k INFO Saving model and optimizer state at iteration 87 to ./logs\44k\D_43000.pth 2023-03-11 22:09:35,261 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_40000.pth 2023-03-11 22:09:35,284 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_40000.pth 2023-03-11 22:10:45,540 44k INFO Train Epoch: 87 [57%] 2023-03-11 22:10:45,540 44k INFO Losses: [2.365154266357422, 2.33060622215271, 10.172057151794434, 21.161727905273438, 1.267507791519165], step: 43200, lr: 9.891832466458178e-05 2023-03-11 22:11:54,974 44k INFO Train Epoch: 87 [97%] 2023-03-11 22:11:54,974 44k INFO Losses: [2.6180241107940674, 2.3787221908569336, 8.765889167785645, 18.89069366455078, 0.9791492223739624], step: 43400, lr: 9.891832466458178e-05 2023-03-11 22:11:59,605 44k INFO ====> Epoch: 87, cost 188.09 s 2023-03-11 22:13:14,477 44k INFO Train Epoch: 88 [37%] 2023-03-11 22:13:14,478 44k INFO Losses: [2.387298107147217, 2.4845688343048096, 11.188590049743652, 20.705116271972656, 0.7239991426467896], step: 43600, lr: 9.89059598739987e-05 2023-03-11 22:14:23,897 44k INFO Train Epoch: 88 [78%] 2023-03-11 22:14:23,898 44k INFO Losses: [2.2740392684936523, 2.3051531314849854, 11.893975257873535, 21.56846809387207, 0.7222363352775574], step: 43800, lr: 9.89059598739987e-05 2023-03-11 22:15:01,323 44k INFO ====> Epoch: 88, cost 181.72 s 2023-03-11 22:15:40,400 44k INFO Train Epoch: 89 [18%] 2023-03-11 22:15:40,400 44k INFO Losses: [2.47216534614563, 1.9992939233779907, 14.239577293395996, 21.231008529663086, 1.1755247116088867], step: 44000, lr: 9.889359662901445e-05 2023-03-11 22:15:43,186 44k INFO Saving model and optimizer state at iteration 89 to ./logs\44k\G_44000.pth 2023-03-11 22:15:43,893 44k INFO Saving model and optimizer state at iteration 89 to ./logs\44k\D_44000.pth 2023-03-11 22:15:44,505 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_41000.pth 2023-03-11 22:15:44,527 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_41000.pth 2023-03-11 22:16:51,636 44k INFO Train Epoch: 89 [58%] 2023-03-11 22:16:51,637 44k INFO Losses: [2.478201150894165, 2.1740596294403076, 6.498950004577637, 15.832880973815918, 1.445658802986145], step: 44200, lr: 9.889359662901445e-05 2023-03-11 22:17:59,711 44k INFO Train Epoch: 89 [98%] 2023-03-11 22:17:59,711 44k INFO Losses: [2.748748302459717, 1.965254306793213, 11.544220924377441, 20.075958251953125, 1.2349680662155151], step: 44400, lr: 9.889359662901445e-05 2023-03-11 22:18:03,573 44k INFO ====> Epoch: 89, cost 182.25 s 2023-03-11 22:19:17,136 44k INFO Train Epoch: 90 [38%] 2023-03-11 22:19:17,137 44k INFO Losses: [2.5647454261779785, 2.157625436782837, 8.906147956848145, 15.268668174743652, 0.8914574384689331], step: 44600, lr: 9.888123492943583e-05 2023-03-11 22:20:27,796 44k INFO Train Epoch: 90 [78%] 2023-03-11 22:20:27,796 44k INFO Losses: [2.726130723953247, 1.8456358909606934, 11.570667266845703, 21.345163345336914, 1.3804794549942017], step: 44800, lr: 9.888123492943583e-05 2023-03-11 22:21:06,257 44k INFO ====> Epoch: 90, cost 182.68 s 2023-03-11 22:21:46,398 44k INFO Train Epoch: 91 [18%] 2023-03-11 22:21:46,399 44k INFO Losses: [2.423407793045044, 2.3692820072174072, 10.535714149475098, 19.463729858398438, 1.0351097583770752], step: 45000, lr: 9.886887477506964e-05 2023-03-11 22:21:49,233 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\G_45000.pth 2023-03-11 22:21:49,921 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\D_45000.pth 2023-03-11 22:21:50,534 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_42000.pth 2023-03-11 22:21:50,557 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_42000.pth 2023-03-11 22:22:58,423 44k INFO Train Epoch: 91 [58%] 2023-03-11 22:22:58,424 44k INFO Losses: [2.5915207862854004, 2.2134482860565186, 8.730073928833008, 18.73012924194336, 0.9986600875854492], step: 45200, lr: 9.886887477506964e-05 2023-03-11 22:24:08,183 44k INFO Train Epoch: 91 [98%] 2023-03-11 22:24:08,183 44k INFO Losses: [2.568037986755371, 2.3460121154785156, 10.41969108581543, 19.430538177490234, 1.2697590589523315], step: 45400, lr: 9.886887477506964e-05 2023-03-11 22:24:11,598 44k INFO ====> Epoch: 91, cost 185.34 s 2023-03-11 22:25:27,283 44k INFO Train Epoch: 92 [38%] 2023-03-11 22:25:27,284 44k INFO Losses: [2.477128028869629, 2.061171531677246, 7.7918829917907715, 19.32783317565918, 1.0805673599243164], step: 45600, lr: 9.885651616572276e-05 2023-03-11 22:26:36,833 44k INFO Train Epoch: 92 [78%] 2023-03-11 22:26:36,833 44k INFO Losses: [2.47843861579895, 2.5626821517944336, 11.112349510192871, 21.587745666503906, 0.8607420921325684], step: 45800, lr: 9.885651616572276e-05 2023-03-11 22:27:14,943 44k INFO ====> Epoch: 92, cost 183.34 s 2023-03-11 22:27:57,580 44k INFO Train Epoch: 93 [18%] 2023-03-11 22:27:57,581 44k INFO Losses: [2.824490547180176, 2.0257766246795654, 5.592790603637695, 16.26495933532715, 1.0882848501205444], step: 46000, lr: 9.884415910120204e-05 2023-03-11 22:28:00,803 44k INFO Saving model and optimizer state at iteration 93 to ./logs\44k\G_46000.pth 2023-03-11 22:28:01,509 44k INFO Saving model and optimizer state at iteration 93 to ./logs\44k\D_46000.pth 2023-03-11 22:28:02,142 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_43000.pth 2023-03-11 22:28:02,165 44k INFO .. 
Free up space by deleting ckpt ./logs\44k\D_43000.pth
2023-03-11 22:29:19,393 44k INFO Train Epoch: 93 [59%]
2023-03-11 22:29:19,394 44k INFO Losses: [2.3761439323425293, 2.212233066558838, 12.4486083984375, 21.144329071044922, 0.9989954829216003], step: 46200, lr: 9.884415910120204e-05
2023-03-11 22:30:32,976 44k INFO Train Epoch: 93 [99%]
2023-03-11 22:30:32,977 44k INFO Losses: [2.4678139686584473, 2.30543851852417, 8.805989265441895, 18.53038215637207, 1.7523647546768188], step: 46400, lr: 9.884415910120204e-05
2023-03-11 22:30:35,809 44k INFO ====> Epoch: 93, cost 200.87 s
2023-03-11 22:31:56,006 44k INFO Train Epoch: 94 [39%]
2023-03-11 22:31:56,007 44k INFO Losses: [2.3772690296173096, 2.421682357788086, 10.378249168395996, 19.750694274902344, 1.1443594694137573], step: 46600, lr: 9.883180358131438e-05
2023-03-11 22:33:07,547 44k INFO Train Epoch: 94 [79%]
2023-03-11 22:33:07,547 44k INFO Losses: [2.454301595687866, 2.3288373947143555, 8.383586883544922, 18.977031707763672, 1.289185881614685], step: 46800, lr: 9.883180358131438e-05
2023-03-11 22:33:46,367 44k INFO ====> Epoch: 94, cost 190.56 s
2023-03-11 22:34:30,487 44k INFO Train Epoch: 95 [19%]
2023-03-11 22:34:30,487 44k INFO Losses: [2.494868040084839, 2.3798177242279053, 7.953715801239014, 20.254358291625977, 1.167056679725647], step: 47000, lr: 9.881944960586671e-05
2023-03-11 22:34:33,641 44k INFO Saving model and optimizer state at iteration 95 to ./logs\44k\G_47000.pth
2023-03-11 22:34:34,329 44k INFO Saving model and optimizer state at iteration 95 to ./logs\44k\D_47000.pth
2023-03-11 22:34:34,967 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_44000.pth
2023-03-11 22:34:34,994 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_44000.pth
2023-03-11 22:35:48,309 44k INFO Train Epoch: 95 [59%]
2023-03-11 22:35:48,309 44k INFO Losses: [2.8042659759521484, 2.2159767150878906, 7.752758502960205, 18.40287971496582, 0.9502911567687988], step: 47200, lr: 9.881944960586671e-05
2023-03-11 22:37:01,281 44k INFO Train Epoch: 95 [99%]
2023-03-11 22:37:01,281 44k INFO Losses: [2.3747811317443848, 2.561370611190796, 9.513826370239258, 18.529491424560547, 1.4449734687805176], step: 47400, lr: 9.881944960586671e-05
2023-03-11 22:37:03,252 44k INFO ====> Epoch: 95, cost 196.89 s
2023-03-11 22:38:24,535 44k INFO Train Epoch: 96 [39%]
2023-03-11 22:38:24,535 44k INFO Losses: [2.5603530406951904, 2.0814433097839355, 7.2231059074401855, 16.562877655029297, 1.274261474609375], step: 47600, lr: 9.880709717466598e-05
2023-03-11 22:39:39,190 44k INFO Train Epoch: 96 [79%]
2023-03-11 22:39:39,190 44k INFO Losses: [2.6810719966888428, 1.9134403467178345, 6.9756574630737305, 17.5222110748291, 1.301213026046753], step: 47800, lr: 9.880709717466598e-05
2023-03-11 22:40:16,849 44k INFO ====> Epoch: 96, cost 193.60 s
2023-03-11 22:41:01,190 44k INFO Train Epoch: 97 [19%]
2023-03-11 22:41:01,191 44k INFO Losses: [2.464320421218872, 2.051880121231079, 9.57421588897705, 18.565387725830078, 0.92982417345047], step: 48000, lr: 9.879474628751914e-05
2023-03-11 22:41:04,110 44k INFO Saving model and optimizer state at iteration 97 to ./logs\44k\G_48000.pth
2023-03-11 22:41:04,791 44k INFO Saving model and optimizer state at iteration 97 to ./logs\44k\D_48000.pth
2023-03-11 22:41:05,402 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_45000.pth
2023-03-11 22:41:05,428 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_45000.pth
2023-03-11 22:42:18,484 44k INFO Train Epoch: 97 [59%]
2023-03-11 22:42:18,484 44k INFO Losses: [2.432565689086914, 2.359919309616089, 9.110194206237793, 19.540603637695312, 1.3451770544052124], step: 48200, lr: 9.879474628751914e-05
2023-03-11 22:43:31,869 44k INFO Train Epoch: 97 [99%]
2023-03-11 22:43:31,870 44k INFO Losses: [2.3995919227600098, 2.312145948410034, 9.798723220825195, 18.431882858276367, 1.018409013748169], step: 48400, lr: 9.879474628751914e-05
2023-03-11 22:43:33,091 44k INFO ====> Epoch: 97, cost 196.24 s
2023-03-11 22:44:55,865 44k INFO Train Epoch: 98 [39%]
2023-03-11 22:44:55,865 44k INFO Losses: [2.458836793899536, 2.2317638397216797, 7.631978988647461, 16.06192398071289, 1.0266379117965698], step: 48600, lr: 9.87823969442332e-05
2023-03-11 22:46:09,231 44k INFO Train Epoch: 98 [80%]
2023-03-11 22:46:09,231 44k INFO Losses: [2.7123751640319824, 2.206768274307251, 4.940889835357666, 18.959604263305664, 1.491148591041565], step: 48800, lr: 9.87823969442332e-05
2023-03-11 22:46:46,729 44k INFO ====> Epoch: 98, cost 193.64 s
2023-03-11 22:47:32,602 44k INFO Train Epoch: 99 [20%]
2023-03-11 22:47:32,602 44k INFO Losses: [2.540851593017578, 2.261536121368408, 8.845969200134277, 16.96381950378418, 1.2689411640167236], step: 49000, lr: 9.877004914461517e-05
2023-03-11 22:47:35,606 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\G_49000.pth
2023-03-11 22:47:36,265 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\D_49000.pth
2023-03-11 22:47:36,888 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_46000.pth
2023-03-11 22:47:36,912 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_46000.pth
2023-03-11 22:48:51,387 44k INFO Train Epoch: 99 [60%]
2023-03-11 22:48:51,388 44k INFO Losses: [2.4466590881347656, 2.389101266860962, 11.624275207519531, 22.2747859954834, 1.8415075540542603], step: 49200, lr: 9.877004914461517e-05
2023-03-11 22:50:06,394 44k INFO Train Epoch: 99 [100%]
2023-03-11 22:50:06,395 44k INFO Losses: [2.733264207839966, 1.7459819316864014, 2.9319233894348145, 15.534839630126953, 0.7922065258026123], step: 49400, lr: 9.877004914461517e-05
2023-03-11 22:50:07,069 44k INFO ====> Epoch: 99, cost 200.34 s
2023-03-11 22:51:30,596 44k INFO Train Epoch: 100 [40%]
2023-03-11 22:51:30,597 44k INFO Losses: [2.300870895385742, 2.3982651233673096, 10.844290733337402, 20.44171142578125, 1.1646369695663452], step: 49600, lr: 9.875770288847208e-05
2023-03-11 22:52:43,743 44k INFO Train Epoch: 100 [80%]
2023-03-11 22:52:43,743 44k INFO Losses: [2.468312978744507, 2.179637908935547, 8.105508804321289, 19.717819213867188, 0.9637770652770996], step: 49800, lr: 9.875770288847208e-05
2023-03-11 22:53:20,029 44k INFO ====> Epoch: 100, cost 192.96 s
2023-03-12 13:52:05,616 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 130, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kiriha': 0}, 'model_dir': './logs\\44k'}
2023-03-12 13:52:05,644 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current)
2023-03-12 13:52:07,597 44k INFO Loaded checkpoint './logs\44k\G_49000.pth' (iteration 99)
2023-03-12 13:52:07,904 44k INFO Loaded checkpoint './logs\44k\D_49000.pth' (iteration 99)
2023-03-12 13:52:59,039 44k INFO Train Epoch: 99 [20%]
2023-03-12 13:52:59,040 44k INFO Losses: [2.530569553375244, 2.0833938121795654, 13.01250171661377, 20.13148307800293, 0.669873833656311], step: 49000, lr: 9.875770288847208e-05
2023-03-12 13:53:03,541 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\G_49000.pth
2023-03-12 13:53:04,200 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\D_49000.pth
2023-03-12 13:54:18,056 44k INFO Train Epoch: 99 [60%]
2023-03-12 13:54:18,056 44k INFO Losses: [2.436291217803955, 2.2720274925231934, 10.322487831115723, 21.95627784729004, 1.293831467628479], step: 49200, lr: 9.875770288847208e-05
2023-03-12 13:55:31,279 44k INFO Train Epoch: 99 [100%]
2023-03-12 13:55:31,280 44k INFO Losses: [1.7202725410461426, 2.9337515830993652, 13.944252014160156, 23.666324615478516, 1.0533711910247803], step: 49400, lr: 9.875770288847208e-05
2023-03-12 13:55:31,907 44k INFO ====> Epoch: 99, cost 206.29 s
2023-03-12 13:56:48,700 44k INFO Train Epoch: 100 [40%]
2023-03-12 13:56:48,700 44k INFO Losses: [2.2631478309631348, 2.5622010231018066, 10.296052932739258, 19.4139461517334, 0.7779372930526733], step: 49600, lr: 9.874535817561101e-05
2023-03-12 13:57:55,763 44k INFO Train Epoch: 100 [80%]
2023-03-12 13:57:55,763 44k INFO Losses: [2.4001083374023438, 2.2513020038604736, 9.756937980651855, 20.888771057128906, 0.9590399861335754], step: 49800, lr: 9.874535817561101e-05
2023-03-12 13:58:28,931 44k INFO ====> Epoch: 100, cost 177.02 s
2023-03-12 13:59:16,077 44k INFO Train Epoch: 101 [20%]
2023-03-12 13:59:16,078 44k INFO Losses: [2.569282293319702, 2.2517330646514893, 9.476912498474121, 18.263660430908203, 1.1066398620605469], step: 50000, lr: 9.873301500583906e-05
2023-03-12 13:59:18,924 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\G_50000.pth
2023-03-12 13:59:19,567 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\D_50000.pth
2023-03-12 13:59:20,188 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_47000.pth
2023-03-12 13:59:20,225 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_47000.pth
2023-03-12 14:00:26,576 44k INFO Train Epoch: 101 [60%]
2023-03-12 14:00:26,576 44k INFO Losses: [2.3101367950439453, 2.6184325218200684, 9.63654613494873, 19.588706970214844, 1.0586316585540771], step: 50200, lr: 9.873301500583906e-05
2023-03-12 14:01:32,235 44k INFO ====> Epoch: 101, cost 183.30 s
2023-03-12 14:01:41,879 44k INFO Train Epoch: 102 [0%]
2023-03-12 14:01:41,879 44k INFO Losses: [2.5946855545043945, 2.2156081199645996, 10.473191261291504, 20.46152114868164, 1.0441690683364868], step: 50400, lr: 9.872067337896332e-05
2023-03-12 14:02:48,056 44k INFO Train Epoch: 102 [40%]
2023-03-12 14:02:48,056 44k INFO Losses: [2.44809889793396, 2.35406494140625, 10.160893440246582, 18.201629638671875, 1.1300632953643799], step: 50600, lr: 9.872067337896332e-05
2023-03-12 14:03:54,244 44k INFO Train Epoch: 102 [80%]
2023-03-12 14:03:54,245 44k INFO Losses: [2.576388359069824, 2.6059000492095947, 6.143028736114502, 12.846582412719727, 1.01841402053833], step: 50800, lr: 9.872067337896332e-05
2023-03-12 14:04:26,797 44k INFO ====> Epoch: 102, cost 174.56 s
2023-03-12 14:05:09,834 44k INFO Train Epoch: 103 [20%]
2023-03-12 14:05:09,834 44k INFO Losses: [2.369149684906006, 2.792853355407715, 7.800324440002441, 16.652008056640625, 1.044224739074707], step: 51000, lr: 9.870833329479095e-05
2023-03-12 14:05:12,632 44k INFO Saving model and optimizer state at iteration 103 to ./logs\44k\G_51000.pth
2023-03-12 14:05:13,268 44k INFO Saving model and optimizer state at iteration 103 to ./logs\44k\D_51000.pth
2023-03-12 14:05:13,873 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_48000.pth
2023-03-12 14:05:13,901 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_48000.pth
2023-03-12 14:06:20,106 44k INFO Train Epoch: 103 [61%]
2023-03-12 14:06:20,106 44k INFO Losses: [2.403494358062744, 2.1617133617401123, 10.792238235473633, 21.960865020751953, 1.0754145383834839], step: 51200, lr: 9.870833329479095e-05
2023-03-12 14:07:25,191 44k INFO ====> Epoch: 103, cost 178.39 s
2023-03-12 14:07:35,451 44k INFO Train Epoch: 104 [1%]
2023-03-12 14:07:35,451 44k INFO Losses: [2.563819408416748, 2.119814872741699, 7.5755228996276855, 16.072433471679688, 0.9295576214790344], step: 51400, lr: 9.86959947531291e-05
2023-03-12 14:08:41,590 44k INFO Train Epoch: 104 [41%]
2023-03-12 14:08:41,590 44k INFO Losses: [2.5848045349121094, 2.2232654094696045, 6.657059669494629, 15.218085289001465, 0.566681444644928], step: 51600, lr: 9.86959947531291e-05
2023-03-12 14:09:47,818 44k INFO Train Epoch: 104 [81%]
2023-03-12 14:09:47,818 44k INFO Losses: [2.596031904220581, 2.1262824535369873, 8.39098072052002, 17.561927795410156, 0.9262499213218689], step: 51800, lr: 9.86959947531291e-05
2023-03-12 14:10:19,691 44k INFO ====> Epoch: 104, cost 174.50 s
2023-03-12 14:11:03,379 44k INFO Train Epoch: 105 [21%]
2023-03-12 14:11:03,379 44k INFO Losses: [2.4478440284729004, 2.453578472137451, 9.974337577819824, 19.7454891204834, 0.9409962296485901], step: 52000, lr: 9.868365775378495e-05
2023-03-12 14:11:06,156 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\G_52000.pth
2023-03-12 14:11:06,807 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\D_52000.pth
2023-03-12 14:11:07,409 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_49000.pth
2023-03-12 14:11:07,443 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_49000.pth
2023-03-12 14:12:13,466 44k INFO Train Epoch: 105 [61%]
2023-03-12 14:12:13,466 44k INFO Losses: [2.643007755279541, 2.381148338317871, 8.95638370513916, 21.44439125061035, 1.0273890495300293], step: 52200, lr: 9.868365775378495e-05
2023-03-12 14:13:17,899 44k INFO ====> Epoch: 105, cost 178.21 s
2023-03-12 14:13:28,968 44k INFO Train Epoch: 106 [1%]
2023-03-12 14:13:28,968 44k INFO Losses: [2.376864194869995, 2.3797502517700195, 9.995058059692383, 21.28952980041504, 1.0900520086288452], step: 52400, lr: 9.867132229656573e-05
2023-03-12 14:14:35,236 44k INFO Train Epoch: 106 [41%]
2023-03-12 14:14:35,236 44k INFO Losses: [2.578838348388672, 2.1822407245635986, 10.960586547851562, 19.175676345825195, 0.8294572830200195], step: 52600, lr: 9.867132229656573e-05
2023-03-12 14:15:41,538 44k INFO Train Epoch: 106 [81%]
2023-03-12 14:15:41,538 44k INFO Losses: [2.4425864219665527, 2.2242844104766846, 11.327040672302246, 20.50795555114746, 0.9655824303627014], step: 52800, lr: 9.867132229656573e-05
2023-03-12 14:16:12,727 44k INFO ====> Epoch: 106, cost 174.83 s
2023-03-12 14:16:57,086 44k INFO Train Epoch: 107 [21%]
2023-03-12 14:16:57,086 44k INFO Losses: [2.466730833053589, 2.074704170227051, 13.429642677307129, 23.142955780029297, 1.1753590106964111], step: 53000, lr: 9.865898838127865e-05
2023-03-12 14:16:59,809 44k INFO Saving model and optimizer state at iteration 107 to ./logs\44k\G_53000.pth
2023-03-12 14:17:00,466 44k INFO Saving model and optimizer state at iteration 107 to ./logs\44k\D_53000.pth
2023-03-12 14:17:01,069 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_50000.pth
2023-03-12 14:17:01,107 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_50000.pth
2023-03-12 14:18:07,133 44k INFO Train Epoch: 107 [61%]
2023-03-12 14:18:07,133 44k INFO Losses: [2.6066017150878906, 2.0134224891662598, 8.249110221862793, 16.217613220214844, 0.858394980430603], step: 53200, lr: 9.865898838127865e-05
2023-03-12 14:19:10,987 44k INFO ====> Epoch: 107, cost 178.26 s
2023-03-12 14:19:22,757 44k INFO Train Epoch: 108 [1%]
2023-03-12 14:19:22,757 44k INFO Losses: [2.539820432662964, 2.1354196071624756, 10.3507719039917, 19.979116439819336, 1.278574824333191], step: 53400, lr: 9.864665600773098e-05
2023-03-12 14:20:29,192 44k INFO Train Epoch: 108 [41%]
2023-03-12 14:20:29,193 44k INFO Losses: [2.4852371215820312, 2.478105068206787, 10.57544231414795, 20.714502334594727, 1.380039930343628], step: 53600, lr: 9.864665600773098e-05
2023-03-12 14:21:35,636 44k INFO Train Epoch: 108 [82%]
2023-03-12 14:21:35,636 44k INFO Losses: [2.3605451583862305, 2.227114677429199, 13.681086540222168, 24.1086368560791, 1.3899996280670166], step: 53800, lr: 9.864665600773098e-05
2023-03-12 14:22:06,292 44k INFO ====> Epoch: 108, cost 175.30 s
2023-03-12 14:22:51,606 44k INFO Train Epoch: 109 [22%]
2023-03-12 14:22:51,607 44k INFO Losses: [2.5398969650268555, 2.18524432182312, 5.349452972412109, 16.251239776611328, 1.2095293998718262], step: 54000, lr: 9.863432517573002e-05
2023-03-12 14:22:54,425 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\G_54000.pth
2023-03-12 14:22:55,088 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\D_54000.pth
2023-03-12 14:22:55,708 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_51000.pth
2023-03-12 14:22:55,741 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_51000.pth
2023-03-12 14:24:01,960 44k INFO Train Epoch: 109 [62%]
2023-03-12 14:24:01,961 44k INFO Losses: [2.3958356380462646, 2.2142605781555176, 11.136125564575195, 21.745189666748047, 1.0929265022277832], step: 54200, lr: 9.863432517573002e-05
2023-03-12 14:25:05,407 44k INFO ====> Epoch: 109, cost 179.12 s
2023-03-12 14:25:17,763 44k INFO Train Epoch: 110 [2%]
2023-03-12 14:25:17,764 44k INFO Losses: [2.753291606903076, 1.8332041501998901, 5.956881999969482, 17.71404266357422, 1.0322374105453491], step: 54400, lr: 9.862199588508305e-05
2023-03-12 14:26:24,085 44k INFO Train Epoch: 110 [42%]
2023-03-12 14:26:24,086 44k INFO Losses: [2.5468432903289795, 2.18591046333313, 8.624340057373047, 18.00257110595703, 1.1558327674865723], step: 54600, lr: 9.862199588508305e-05
2023-03-12 14:27:30,525 44k INFO Train Epoch: 110 [82%]
2023-03-12 14:27:30,525 44k INFO Losses: [2.801177740097046, 1.7067780494689941, 6.545862197875977, 14.167060852050781, 0.9589190483093262], step: 54800, lr: 9.862199588508305e-05
2023-03-12 14:28:00,532 44k INFO ====> Epoch: 110, cost 175.13 s
2023-03-12 14:28:46,446 44k INFO Train Epoch: 111 [22%]
2023-03-12 14:28:46,446 44k INFO Losses: [2.495169162750244, 2.476816415786743, 7.833529949188232, 18.81466293334961, 0.716306746006012], step: 55000, lr: 9.86096681355974e-05
2023-03-12 14:28:49,185 44k INFO Saving model and optimizer state at iteration 111 to ./logs\44k\G_55000.pth
2023-03-12 14:28:49,833 44k INFO Saving model and optimizer state at iteration 111 to ./logs\44k\D_55000.pth
2023-03-12 14:28:50,440 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_52000.pth
2023-03-12 14:28:50,476 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_52000.pth
2023-03-12 14:29:56,750 44k INFO Train Epoch: 111 [62%]
2023-03-12 14:29:56,750 44k INFO Losses: [2.4858956336975098, 2.1637279987335205, 6.538644790649414, 17.67612648010254, 1.2324265241622925], step: 55200, lr: 9.86096681355974e-05
2023-03-12 14:30:59,527 44k INFO ====> Epoch: 111, cost 179.00 s
2023-03-12 14:31:12,728 44k INFO Train Epoch: 112 [2%]
2023-03-12 14:31:12,729 44k INFO Losses: [2.4887993335723877, 2.034064292907715, 8.165989875793457, 18.961471557617188, 1.074874758720398], step: 55400, lr: 9.859734192708044e-05
2023-03-12 14:32:19,063 44k INFO Train Epoch: 112 [42%]
2023-03-12 14:32:19,063 44k INFO Losses: [2.438242197036743, 2.188232898712158, 9.122795104980469, 20.575374603271484, 1.2448499202728271], step: 55600, lr: 9.859734192708044e-05
2023-03-12 14:33:25,544 44k INFO Train Epoch: 112 [82%]
2023-03-12 14:33:25,545 44k INFO Losses: [2.6073851585388184, 2.3081068992614746, 7.792697429656982, 19.252273559570312, 1.0521364212036133], step: 55800, lr: 9.859734192708044e-05
2023-03-12 14:33:54,921 44k INFO ====> Epoch: 112, cost 175.39 s
2023-03-12 14:34:41,509 44k INFO Train Epoch: 113 [22%]
2023-03-12 14:34:41,510 44k INFO Losses: [2.535154104232788, 1.9380693435668945, 7.487481594085693, 18.76061248779297, 1.2359235286712646], step: 56000, lr: 9.858501725933955e-05
2023-03-12 14:34:44,246 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\G_56000.pth
2023-03-12 14:34:44,893 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\D_56000.pth
2023-03-12 14:34:45,508 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_53000.pth
2023-03-12 14:34:45,540 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_53000.pth
2023-03-12 14:35:51,863 44k INFO Train Epoch: 113 [63%]
2023-03-12 14:35:51,864 44k INFO Losses: [2.641195774078369, 2.30829119682312, 7.049264430999756, 15.466882705688477, 0.6948811411857605], step: 56200, lr: 9.858501725933955e-05
2023-03-12 14:36:54,051 44k INFO ====> Epoch: 113, cost 179.13 s
2023-03-12 14:37:08,003 44k INFO Train Epoch: 114 [3%]
2023-03-12 14:37:08,004 44k INFO Losses: [2.753098487854004, 2.0237607955932617, 8.051836967468262, 15.966660499572754, 0.8943933844566345], step: 56400, lr: 9.857269413218213e-05
2023-03-12 14:38:14,553 44k INFO Train Epoch: 114 [43%]
2023-03-12 14:38:14,554 44k INFO Losses: [2.526326894760132, 2.224480152130127, 8.037116050720215, 16.14765167236328, 1.1912078857421875], step: 56600, lr: 9.857269413218213e-05
2023-03-12 14:39:21,111 44k INFO Train Epoch: 114 [83%]
2023-03-12 14:39:21,111 44k INFO Losses: [2.4801909923553467, 2.089843273162842, 9.68632698059082, 18.982738494873047, 1.10832679271698], step: 56800, lr: 9.857269413218213e-05
2023-03-12 14:39:49,826 44k INFO ====> Epoch: 114, cost 175.78 s
2023-03-12 14:40:36,958 44k INFO Train Epoch: 115 [23%]
2023-03-12 14:40:36,958 44k INFO Losses: [2.546854257583618, 2.2685444355010986, 7.71811056137085, 19.523561477661133, 1.2662781476974487], step: 57000, lr: 9.85603725454156e-05
2023-03-12 14:40:39,750 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\G_57000.pth
2023-03-12 14:40:40,396 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\D_57000.pth
2023-03-12 14:40:41,014 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_54000.pth
2023-03-12 14:40:41,049 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_54000.pth
2023-03-12 14:41:47,364 44k INFO Train Epoch: 115 [63%]
2023-03-12 14:41:47,364 44k INFO Losses: [2.547548770904541, 2.1659560203552246, 8.272828102111816, 15.195450782775879, 0.6036759614944458], step: 57200, lr: 9.85603725454156e-05
2023-03-12 14:42:48,868 44k INFO ====> Epoch: 115, cost 179.04 s
2023-03-12 14:43:03,384 44k INFO Train Epoch: 116 [3%]
2023-03-12 14:43:03,384 44k INFO Losses: [2.797173261642456, 2.15388560295105, 9.429797172546387, 15.906258583068848, 0.9594957828521729], step: 57400, lr: 9.854805249884741e-05
2023-03-12 14:44:09,863 44k INFO Train Epoch: 116 [43%]
2023-03-12 14:44:09,864 44k INFO Losses: [2.305539131164551, 2.709357500076294, 9.138225555419922, 19.52425765991211, 0.9534539580345154], step: 57600, lr: 9.854805249884741e-05
2023-03-12 14:45:16,443 44k INFO Train Epoch: 116 [83%]
2023-03-12 14:45:16,444 44k INFO Losses: [2.7679221630096436, 2.121371030807495, 8.50774097442627, 19.459548950195312, 1.2488939762115479], step: 57800, lr: 9.854805249884741e-05
2023-03-12 14:45:44,602 44k INFO ====> Epoch: 116, cost 175.73 s
2023-03-12 14:46:32,479 44k INFO Train Epoch: 117 [23%]
2023-03-12 14:46:32,479 44k INFO Losses: [2.34775972366333, 2.195242166519165, 8.913904190063477, 21.135540008544922, 1.3499116897583008], step: 58000, lr: 9.853573399228505e-05
2023-03-12 14:46:35,352 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\G_58000.pth
2023-03-12 14:46:35,999 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\D_58000.pth
2023-03-12 14:46:36,639 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_55000.pth
2023-03-12 14:46:36,675 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_55000.pth
2023-03-12 14:47:43,249 44k INFO Train Epoch: 117 [63%]
2023-03-12 14:47:43,250 44k INFO Losses: [2.3396055698394775, 2.369509220123291, 10.00290584564209, 20.154760360717773, 0.4263248145580292], step: 58200, lr: 9.853573399228505e-05
2023-03-12 14:48:44,111 44k INFO ====> Epoch: 117, cost 179.51 s
2023-03-12 14:48:59,334 44k INFO Train Epoch: 118 [3%]
2023-03-12 14:48:59,335 44k INFO Losses: [2.4184534549713135, 2.2449090480804443, 6.167513370513916, 17.603717803955078, 1.045264720916748], step: 58400, lr: 9.8523417025536e-05
2023-03-12 14:50:05,872 44k INFO Train Epoch: 118 [43%]
2023-03-12 14:50:05,873 44k INFO Losses: [2.5406241416931152, 2.1599276065826416, 10.035455703735352, 18.540908813476562, 0.9100748300552368], step: 58600, lr: 9.8523417025536e-05
2023-03-12 14:51:12,438 44k INFO Train Epoch: 118 [84%]
2023-03-12 14:51:12,439 44k INFO Losses: [2.4142167568206787, 2.5588533878326416, 9.41864013671875, 20.910202026367188, 0.8729894161224365], step: 58800, lr: 9.8523417025536e-05
2023-03-12 14:51:39,863 44k INFO ====> Epoch: 118, cost 175.75 s
2023-03-12 14:52:28,599 44k INFO Train Epoch: 119 [24%]
2023-03-12 14:52:28,599 44k INFO Losses: [2.655186176300049, 2.422694683074951, 8.626376152038574, 19.542461395263672, 0.8591475486755371], step: 59000, lr: 9.851110159840781e-05
2023-03-12 14:52:31,430 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\G_59000.pth
2023-03-12 14:52:32,080 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\D_59000.pth
2023-03-12 14:52:32,715 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_56000.pth
2023-03-12 14:52:32,752 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_56000.pth
2023-03-12 14:53:39,053 44k INFO Train Epoch: 119 [64%]
2023-03-12 14:53:39,053 44k INFO Losses: [2.4799001216888428, 2.3237011432647705, 9.983625411987305, 19.37213706970215, 1.1400971412658691], step: 59200, lr: 9.851110159840781e-05
2023-03-12 14:54:39,295 44k INFO ====> Epoch: 119, cost 179.43 s
2023-03-12 14:54:55,137 44k INFO Train Epoch: 120 [4%]
2023-03-12 14:54:55,138 44k INFO Losses: [2.637849807739258, 1.9863094091415405, 9.282312393188477, 19.842092514038086, 1.2838584184646606], step: 59400, lr: 9.8498787710708e-05
2023-03-12 14:56:01,674 44k INFO Train Epoch: 120 [44%]
2023-03-12 14:56:01,674 44k INFO Losses: [2.6601662635803223, 2.0398268699645996, 9.083541870117188, 19.529178619384766, 1.3680793046951294], step: 59600, lr: 9.8498787710708e-05
2023-03-12 14:57:08,112 44k INFO Train Epoch: 120 [84%]
2023-03-12 14:57:08,113 44k INFO Losses: [2.4298253059387207, 2.492029905319214, 9.524579048156738, 19.913347244262695, 1.1235295534133911], step: 59800, lr: 9.8498787710708e-05
2023-03-12 14:57:34,732 44k INFO ====> Epoch: 120, cost 175.44 s
2023-03-12 14:58:23,867 44k INFO Train Epoch: 121 [24%]
2023-03-12 14:58:23,868 44k INFO Losses: [2.6557974815368652, 2.1976969242095947, 8.92565631866455, 19.02800750732422, 0.7957410216331482], step: 60000, lr: 9.848647536224416e-05
2023-03-12 14:58:26,708 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\G_60000.pth
2023-03-12 14:58:27,363 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\D_60000.pth
2023-03-12 14:58:27,967 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_57000.pth
2023-03-12 14:58:28,001 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_57000.pth
2023-03-12 14:59:34,192 44k INFO Train Epoch: 121 [64%]
2023-03-12 14:59:34,193 44k INFO Losses: [2.667863130569458, 1.8016116619110107, 5.744339942932129, 15.110112190246582, 0.7991032600402832], step: 60200, lr: 9.848647536224416e-05
2023-03-12 15:00:33,567 44k INFO ====> Epoch: 121, cost 178.84 s
2023-03-12 15:00:50,202 44k INFO Train Epoch: 122 [4%]
2023-03-12 15:00:50,203 44k INFO Losses: [2.5602433681488037, 2.0573768615722656, 10.4984712600708, 19.151681900024414, 0.46081700921058655], step: 60400, lr: 9.847416455282387e-05
2023-03-12 15:01:56,429 44k INFO Train Epoch: 122 [44%]
2023-03-12 15:01:56,429 44k INFO Losses: [2.4646615982055664, 2.145195722579956, 11.33735179901123, 18.245716094970703, 1.29243803024292], step: 60600, lr: 9.847416455282387e-05
2023-03-12 15:03:02,845 44k INFO Train Epoch: 122 [84%]
2023-03-12 15:03:02,846 44k INFO Losses: [2.5410327911376953, 2.1605331897735596, 8.100025177001953, 15.76478385925293, 0.9408947825431824], step: 60800, lr: 9.847416455282387e-05
2023-03-12 15:03:28,817 44k INFO ====> Epoch: 122, cost 175.25 s
2023-03-12 15:04:18,549 44k INFO Train Epoch: 123 [24%]
2023-03-12 15:04:18,549 44k INFO Losses: [2.6049482822418213, 2.348824977874756, 6.360328197479248, 16.428133010864258, 1.1749716997146606], step: 61000, lr: 9.846185528225477e-05
2023-03-12 15:04:21,416 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\G_61000.pth
2023-03-12 15:04:22,064 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\D_61000.pth
2023-03-12 15:04:22,688 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_58000.pth
2023-03-12 15:04:22,725 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_58000.pth
2023-03-12 15:05:28,923 44k INFO Train Epoch: 123 [65%]
2023-03-12 15:05:28,923 44k INFO Losses: [2.426384925842285, 2.1798641681671143, 7.430263996124268, 19.395584106445312, 0.945557177066803], step: 61200, lr: 9.846185528225477e-05
2023-03-12 15:06:27,791 44k INFO ====> Epoch: 123, cost 178.97 s
2023-03-12 15:06:45,120 44k INFO Train Epoch: 124 [5%]
2023-03-12 15:06:45,120 44k INFO Losses: [2.682405948638916, 2.133742094039917, 9.528481483459473, 21.89979362487793, 1.0059382915496826], step: 61400, lr: 9.84495475503445e-05
2023-03-12 15:07:51,547 44k INFO Train Epoch: 124 [45%]
2023-03-12 15:07:51,548 44k INFO Losses: [2.043673038482666, 2.9120264053344727, 17.44105339050293, 29.509580612182617, 1.66597580909729], step: 61600, lr: 9.84495475503445e-05
2023-03-12 15:08:58,060 44k INFO Train Epoch: 124 [85%]
2023-03-12 15:08:58,060 44k INFO Losses: [2.772627592086792, 1.942929983139038, 7.169674873352051, 18.296823501586914, 0.8164033889770508], step: 61800, lr: 9.84495475503445e-05
2023-03-12 15:09:23,428 44k INFO ====> Epoch: 124, cost 175.64 s
2023-03-12 15:10:13,925 44k INFO Train Epoch: 125 [25%]
2023-03-12 15:10:13,925 44k INFO Losses: [2.43510365486145, 2.2499849796295166, 11.82221794128418, 18.986167907714844, 0.9535871148109436], step: 62000, lr: 9.84372413569007e-05
2023-03-12 15:10:16,731 44k INFO Saving model and optimizer state at iteration 125 to ./logs\44k\G_62000.pth
2023-03-12 15:10:17,378 44k INFO Saving model and optimizer state at iteration 125 to ./logs\44k\D_62000.pth
2023-03-12 15:10:18,002 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_59000.pth
2023-03-12 15:10:18,040 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_59000.pth
2023-03-12 15:11:24,195 44k INFO Train Epoch: 125 [65%]
2023-03-12 15:11:24,195 44k INFO Losses: [2.224346876144409, 2.556025981903076, 11.752298355102539, 21.95399284362793, 0.8409875631332397], step: 62200, lr: 9.84372413569007e-05
2023-03-12 15:12:22,469 44k INFO ====> Epoch: 125, cost 179.04 s
2023-03-12 15:12:40,260 44k INFO Train Epoch: 126 [5%]
2023-03-12 15:12:40,261 44k INFO Losses: [2.652935266494751, 2.099679708480835, 8.035571098327637, 18.075931549072266, 0.9544440507888794], step: 62400, lr: 9.842493670173108e-05
2023-03-12 15:13:46,745 44k INFO Train Epoch: 126 [45%]
2023-03-12 15:13:46,746 44k INFO Losses: [2.646048069000244, 2.1015326976776123, 9.824590682983398, 19.902164459228516, 0.9396716356277466], step: 62600, lr: 9.842493670173108e-05
2023-03-12 15:14:53,029 44k INFO Train Epoch: 126 [85%]
2023-03-12 15:14:53,029 44k INFO Losses: [2.4422974586486816, 2.7184743881225586, 8.526591300964355, 17.8053035736084, 0.5865074992179871], step: 62800, lr: 9.842493670173108e-05
2023-03-12 15:15:17,768 44k INFO ====> Epoch: 126, cost 175.30 s
2023-03-12 15:16:08,916 44k INFO Train Epoch: 127 [25%]
2023-03-12 15:16:08,916 44k INFO Losses: [2.406055212020874, 2.0538723468780518, 10.580684661865234, 19.086185455322266, 0.6580051779747009], step: 63000, lr: 9.841263358464336e-05
2023-03-12 15:16:11,786 44k INFO Saving model and optimizer state at iteration 127 to ./logs\44k\G_63000.pth
2023-03-12 15:16:12,433 44k INFO Saving model and optimizer state at iteration 127 to ./logs\44k\D_63000.pth
2023-03-12 15:16:13,064 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_60000.pth
2023-03-12 15:16:13,098 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_60000.pth
2023-03-12 15:17:19,379 44k INFO Train Epoch: 127 [65%]
2023-03-12 15:17:19,380 44k INFO Losses: [2.5768938064575195, 2.029104709625244, 9.729928970336914, 18.006486892700195, 1.2117549180984497], step: 63200, lr: 9.841263358464336e-05
2023-03-12 15:18:16,877 44k INFO ====> Epoch: 127, cost 179.11 s
2023-03-12 15:18:35,407 44k INFO Train Epoch: 128 [5%]
2023-03-12 15:18:35,408 44k INFO Losses: [2.479503631591797, 2.05698823928833, 11.73935604095459, 18.689525604248047, 1.2549959421157837], step: 63400, lr: 9.840033200544528e-05
2023-03-12 15:19:41,957 44k INFO Train Epoch: 128 [45%]
2023-03-12 15:19:41,957 44k INFO Losses: [2.4276726245880127, 2.2981646060943604, 7.84566068649292, 18.534637451171875, 1.5560003519058228], step: 63600, lr: 9.840033200544528e-05
2023-03-12 15:20:48,403 44k INFO Train Epoch: 128 [86%]
2023-03-12 15:20:48,403 44k INFO Losses: [2.7886271476745605, 2.238522529602051, 6.702478408813477, 17.43375587463379, 1.0344722270965576], step: 63800, lr: 9.840033200544528e-05
2023-03-12 15:21:12,458 44k INFO ====> Epoch: 128, cost 175.58 s
2023-03-12 15:22:04,174 44k INFO Train Epoch: 129 [26%]
2023-03-12 15:22:04,175 44k INFO Losses: [2.583775758743286, 2.6667749881744385, 10.194723129272461, 21.70988655090332, 0.8028169274330139], step: 64000, lr: 9.838803196394459e-05
2023-03-12 15:22:06,999 44k INFO Saving model and optimizer state at iteration 129 to ./logs\44k\G_64000.pth
2023-03-12 15:22:07,693 44k INFO Saving model and optimizer state at iteration 129 to ./logs\44k\D_64000.pth
2023-03-12 15:22:08,299 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_61000.pth
2023-03-12 15:22:08,334 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_61000.pth
2023-03-12 15:23:14,704 44k INFO Train Epoch: 129 [66%]
2023-03-12 15:23:14,704 44k INFO Losses: [2.4136040210723877, 2.4286930561065674, 9.456846237182617, 22.13960838317871, 1.4705806970596313], step: 64200, lr: 9.838803196394459e-05
2023-03-12 15:24:11,676 44k INFO ====> Epoch: 129, cost 179.22 s
2023-03-12 15:24:30,913 44k INFO Train Epoch: 130 [6%]
2023-03-12 15:24:30,914 44k INFO Losses: [2.4987571239471436, 2.181457996368408, 11.313584327697754, 19.64276695251465, 1.1965885162353516], step: 64400, lr: 9.837573345994909e-05
2023-03-12 15:25:37,536 44k INFO Train Epoch: 130 [46%]
2023-03-12 15:25:37,537 44k INFO Losses: [2.809941530227661, 2.0686402320861816, 6.816439151763916, 15.208210945129395, 1.1769635677337646], step: 64600, lr: 9.837573345994909e-05
2023-03-12 15:26:43,923 44k INFO Train Epoch: 130 [86%]
2023-03-12 15:26:43,923 44k INFO Losses: [2.430873155593872, 2.1850247383117676, 8.337047576904297, 18.948827743530273, 0.855047345161438], step: 64800, lr: 9.837573345994909e-05
2023-03-12 15:27:07,294 44k INFO ====> Epoch: 130, cost 175.62 s
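The `lr` values logged per epoch follow the exponential decay in the config (`learning_rate: 0.0001`, `lr_decay: 0.999875`, one multiplicative step per epoch). A minimal sketch of that schedule, assuming a plain per-epoch decay; the mapping from logged epoch number to decay-step count shifts when training resumes, so this illustrates the curve rather than reproducing exact log entries:

```python
# Sketch of the per-epoch exponential LR decay implied by the config:
# lr = learning_rate * lr_decay ** n after n decay steps.
def lr_after(n_decay_steps: int, initial_lr: float = 1e-4,
             decay: float = 0.999875) -> float:
    """Learning rate after applying `n_decay_steps` multiplicative decays."""
    return initial_lr * decay ** n_decay_steps

# One step matches the lr logged for epoch 2 (9.99875e-05),
# two steps the lr logged for epoch 3 (9.99750015625e-05).
print(lr_after(1), lr_after(2))
```

At this rate the schedule loses only about 1.2% of the initial learning rate over the ~100 epochs shown, consistent with the slowly shrinking `lr` values in the log.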
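The paired "Saving model ... G_N.pth" / "Free up space by deleting ckpt ... G_(N-3000).pth" entries reflect `keep_ckpts: 3` with `eval_interval: 1000`: each save keeps only the three newest generator and discriminator checkpoints. A hypothetical sketch of that rotation (the helper below is illustrative, not the project's actual cleanup code):

```python
import os
import re

def clean_checkpoints(model_dir: str, keep: int = 3) -> list[str]:
    """Delete all but the newest `keep` G_*.pth / D_*.pth files,
    ordered by the step number in the file name; return deleted names."""
    deleted = []
    for prefix in ("G_", "D_"):
        ckpts = sorted(
            (f for f in os.listdir(model_dir)
             if re.fullmatch(rf"{prefix}\d+\.pth", f)),
            key=lambda f: int(f[2:-4]),  # numeric sort on the step
        )
        for old in ckpts[:-keep]:
            os.remove(os.path.join(model_dir, old))
            deleted.append(old)
    return deleted
```

With the log's cadence (a save every 1000 iterations), saving `G_50000.pth` under this policy removes `G_47000.pth`, matching the deletions recorded above.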
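To plot a run like this, the `Losses: [...]` entries can be parsed mechanically. The five values are assumed to be `[loss_disc, loss_gen, loss_fm, loss_mel, loss_kl]` in the usual so-vits-svc ordering; the log itself does not label them, so treat that mapping as an assumption. A small hypothetical parser:

```python
import re

# Matches the losses list, step, and lr in one training-log entry.
LOSS_RE = re.compile(r"Losses: \[([^\]]+)\], step: (\d+), lr: ([\d.e-]+)")

def parse_loss_entry(entry: str):
    """Return {'losses': [...], 'step': int, 'lr': float} or None if no match."""
    m = LOSS_RE.search(entry)
    if m is None:
        return None
    return {
        "losses": [float(x) for x in m.group(1).split(",")],
        "step": int(m.group(2)),
        "lr": float(m.group(3)),
    }

entry = ("2023-03-12 15:26:43,923 44k INFO Losses: [2.430873155593872, "
         "2.1850247383117676, 8.337047576904297, 18.948827743530273, "
         "0.855047345161438], step: 64800, lr: 9.837573345994909e-05")
print(parse_loss_entry(entry)["step"])  # 64800
```

Applied over the whole file, this yields per-step series for each of the five losses, which is usually more informative than eyeballing the raw entries.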