2023-03-09 22:07:15,606 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 100, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'aisa': 0}, 'model_dir': './logs\\44k'}
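The line above is the hyperparameter dump that so-vits-svc writes at startup; the same values come from the JSON config used for this run. A minimal sketch of inspecting such a config (the path configs/config.json is an assumption, not taken from this log):

# Hedged sketch: pretty-print the config whose contents are dumped in the log line above.
# "configs/config.json" is an assumed path; adjust to wherever this run's config lives.
import json

with open("configs/config.json", encoding="utf-8") as f:
    cfg = json.load(f)

print(json.dumps(cfg, indent=2))          # readable view of the train/data/model/spk sections
print(cfg["train"]["learning_rate"])      # 0.0001
print(cfg["train"]["keep_ckpts"])         # 3 -> only the 3 newest numbered G_/D_ checkpoints are kept
print(cfg["data"]["sampling_rate"])       # 44100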
2023-03-09 22:07:17,956 44k INFO Loaded checkpoint './logs\44k\G_0.pth' (iteration 1)
2023-03-09 22:07:18,492 44k INFO Loaded checkpoint './logs\44k\D_0.pth' (iteration 1)
2023-03-09 22:07:33,232 44k INFO Train Epoch: 1 [0%]
2023-03-09 22:07:33,233 44k INFO Losses: [2.7532143592834473, 2.1225199699401855, 6.865627765655518, 23.084224700927734, 2.0994279384613037], step: 0, lr: 0.0001
2023-03-09 22:07:36,723 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\G_0.pth
2023-03-09 22:07:37,415 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\D_0.pth
2023-03-09 22:08:52,957 44k INFO Train Epoch: 1 [42%]
2023-03-09 22:08:52,958 44k INFO Losses: [2.4421563148498535, 2.2961127758026123, 10.422914505004883, 24.77933120727539, 2.0568974018096924], step: 200, lr: 0.0001
2023-03-09 22:10:08,890 44k INFO Train Epoch: 1 [83%]
2023-03-09 22:10:08,891 44k INFO Losses: [2.648120880126953, 2.1064438819885254, 7.289734363555908, 19.598392486572266, 1.5763448476791382], step: 400, lr: 0.0001
2023-03-09 22:10:40,667 44k INFO ====> Epoch: 1, cost 205.06 s
2023-03-09 22:11:30,699 44k INFO Train Epoch: 2 [25%]
2023-03-09 22:11:30,700 44k INFO Losses: [2.48688006401062, 2.3656461238861084, 8.08664321899414, 23.009075164794922, 1.5792394876480103], step: 600, lr: 9.99875e-05
2023-03-09 22:12:36,551 44k INFO Train Epoch: 2 [67%]
2023-03-09 22:12:36,551 44k INFO Losses: [2.819711208343506, 2.475625514984131, 7.877129554748535, 22.234840393066406, 1.6142663955688477], step: 800, lr: 9.99875e-05
2023-03-09 22:13:29,967 44k INFO ====> Epoch: 2, cost 169.30 s
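The lr values above decay once per epoch by the configured lr_decay of 0.999875 (0.0001 at epoch 1, 9.99875e-05 at epoch 2, and so on). A quick arithmetic check, assuming lr(epoch) = learning_rate * lr_decay ** (epoch - 1):

# Hedged sketch: reproduce the per-epoch learning rates shown in this log,
# assuming a plain exponential decay applied once per epoch.
base_lr, lr_decay = 1e-4, 0.999875

for epoch in range(1, 5):
    print(epoch, base_lr * lr_decay ** (epoch - 1))
# 1 -> 0.0001
# 2 -> 9.99875e-05
# 3 -> ~9.99750015625e-05
# 4 -> ~9.996250468730469e-05  (agrees with the logged values up to float rounding)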
2023-03-09 22:13:53,281 44k INFO Train Epoch: 3 [8%]
2023-03-09 22:13:53,282 44k INFO Losses: [2.6841185092926025, 2.074568748474121, 7.123725891113281, 18.286457061767578, 1.198595643043518], step: 1000, lr: 9.99750015625e-05
2023-03-09 22:13:56,355 44k INFO Saving model and optimizer state at iteration 3 to ./logs\44k\G_1000.pth
2023-03-09 22:13:57,037 44k INFO Saving model and optimizer state at iteration 3 to ./logs\44k\D_1000.pth
2023-03-09 22:15:06,006 44k INFO Train Epoch: 3 [50%]
2023-03-09 22:15:06,006 44k INFO Losses: [2.2005116939544678, 2.316885471343994, 9.727132797241211, 25.34775733947754, 1.7601525783538818], step: 1200, lr: 9.99750015625e-05
2023-03-09 22:16:13,275 44k INFO Train Epoch: 3 [92%]
2023-03-09 22:16:13,276 44k INFO Losses: [2.70918869972229, 2.1484742164611816, 9.208160400390625, 21.771608352661133, 1.8784080743789673], step: 1400, lr: 9.99750015625e-05
2023-03-09 22:16:27,476 44k INFO ====> Epoch: 3, cost 177.51 s
2023-03-09 22:17:34,023 44k INFO Train Epoch: 4 [33%]
2023-03-09 22:17:34,023 44k INFO Losses: [2.629467725753784, 2.145282745361328, 5.309225559234619, 16.422060012817383, 1.6363394260406494], step: 1600, lr: 9.996250468730469e-05
2023-03-09 22:18:44,320 44k INFO Train Epoch: 4 [75%]
2023-03-09 22:18:44,320 44k INFO Losses: [2.359875440597534, 2.6663260459899902, 10.675108909606934, 22.943822860717773, 1.4112910032272339], step: 1800, lr: 9.996250468730469e-05
2023-03-09 22:19:26,339 44k INFO ====> Epoch: 4, cost 178.86 s
2023-03-09 22:20:05,291 44k INFO Train Epoch: 5 [17%]
2023-03-09 22:20:05,292 44k INFO Losses: [2.4184532165527344, 2.202078342437744, 10.578102111816406, 20.634305953979492, 1.78025221824646], step: 2000, lr: 9.995000937421877e-05
2023-03-09 22:20:08,393 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\G_2000.pth
2023-03-09 22:20:09,225 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\D_2000.pth
2023-03-09 22:21:20,064 44k INFO Train Epoch: 5 [58%]
2023-03-09 22:21:20,065 44k INFO Losses: [2.784832239151001, 1.8311623334884644, 9.575661659240723, 20.739303588867188, 1.4636569023132324], step: 2200, lr: 9.995000937421877e-05
2023-03-09 22:22:27,870 44k INFO ====> Epoch: 5, cost 181.53 s
2023-03-09 22:22:37,704 44k INFO Train Epoch: 6 [0%]
2023-03-09 22:22:37,705 44k INFO Losses: [2.751085042953491, 2.104429244995117, 10.574271202087402, 18.58207893371582, 1.2397831678390503], step: 2400, lr: 9.993751562304699e-05
2023-03-09 22:23:48,015 44k INFO Train Epoch: 6 [42%]
2023-03-09 22:23:48,015 44k INFO Losses: [2.56874680519104, 2.4305896759033203, 10.210469245910645, 24.242820739746094, 1.555556058883667], step: 2600, lr: 9.993751562304699e-05
2023-03-09 22:24:56,321 44k INFO Train Epoch: 6 [83%]
2023-03-09 22:24:56,322 44k INFO Losses: [2.872218370437622, 2.2893431186676025, 8.205846786499023, 19.27981948852539, 1.2091784477233887], step: 2800, lr: 9.993751562304699e-05
2023-03-09 22:25:24,331 44k INFO ====> Epoch: 6, cost 176.46 s
2023-03-09 22:26:15,877 44k INFO Train Epoch: 7 [25%]
2023-03-09 22:26:15,878 44k INFO Losses: [2.398118495941162, 2.2882518768310547, 8.92576789855957, 23.146297454833984, 1.445586919784546], step: 3000, lr: 9.99250234335941e-05
2023-03-09 22:26:18,873 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\G_3000.pth
2023-03-09 22:26:19,601 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\D_3000.pth
2023-03-09 22:27:28,314 44k INFO Train Epoch: 7 [67%]
2023-03-09 22:27:28,315 44k INFO Losses: [2.6192994117736816, 2.012535572052002, 10.190000534057617, 20.864042282104492, 1.5078903436660767], step: 3200, lr: 9.99250234335941e-05
2023-03-09 22:28:23,287 44k INFO ====> Epoch: 7, cost 178.96 s
2023-03-09 22:28:47,111 44k INFO Train Epoch: 8 [8%]
2023-03-09 22:28:47,111 44k INFO Losses: [2.3008713722229004, 2.3889431953430176, 11.322406768798828, 22.719818115234375, 1.4016550779342651], step: 3400, lr: 9.991253280566489e-05
2023-03-09 22:29:55,858 44k INFO Train Epoch: 8 [50%]
2023-03-09 22:29:55,859 44k INFO Losses: [2.2141306400299072, 2.5543158054351807, 13.307168960571289, 26.148569107055664, 1.8661701679229736], step: 3600, lr: 9.991253280566489e-05
2023-03-09 22:31:04,323 44k INFO Train Epoch: 8 [92%]
2023-03-09 22:31:04,323 44k INFO Losses: [2.6786041259765625, 2.179410696029663, 10.544212341308594, 22.434480667114258, 1.278396725654602], step: 3800, lr: 9.991253280566489e-05
2023-03-09 22:31:18,492 44k INFO ====> Epoch: 8, cost 175.21 s
2023-03-09 22:32:23,836 44k INFO Train Epoch: 9 [33%]
2023-03-09 22:32:23,837 44k INFO Losses: [2.529768228530884, 2.1434335708618164, 7.9929022789001465, 20.58642578125, 0.9003176689147949], step: 4000, lr: 9.990004373906418e-05
2023-03-09 22:32:26,758 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\G_4000.pth
2023-03-09 22:32:27,457 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\D_4000.pth
2023-03-09 22:32:28,084 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_1000.pth
2023-03-09 22:32:28,124 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_1000.pth
2023-03-09 22:33:36,481 44k INFO Train Epoch: 9 [75%]
2023-03-09 22:33:36,482 44k INFO Losses: [2.443915367126465, 2.1057639122009277, 9.841815948486328, 21.705745697021484, 1.4547427892684937], step: 4200, lr: 9.990004373906418e-05
2023-03-09 22:34:17,858 44k INFO ====> Epoch: 9, cost 179.37 s
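The checkpoint housekeeping above (G_4000.pth/D_4000.pth saved, then G_1000.pth/D_1000.pth deleted) reflects the keep_ckpts: 3 setting: only the three newest numbered checkpoints per prefix survive. A minimal sketch of that retention policy, assumed behavior rather than the project's actual code (note this run also leaves G_0.pth/D_0.pth untouched, which the sketch does not special-case):

# Hedged sketch of a keep-newest-N checkpoint cleanup (keep_ckpts = 3 in this run).
# Illustrative only; not so-vits-svc's own implementation.
import glob
import os
import re

def clean_checkpoints(log_dir="./logs/44k", prefix="G_", keep=3):
    ckpts = [p for p in glob.glob(os.path.join(log_dir, f"{prefix}*.pth"))
             if re.search(r"_(\d+)\.pth$", p)]
    # sort by the step number embedded in the file name, e.g. G_4000.pth -> 4000
    ckpts.sort(key=lambda p: int(re.search(r"_(\d+)\.pth$", p).group(1)))
    for old in ckpts[:-keep]:
        print(f".. Free up space by deleting ckpt {old}")
        os.remove(old)

clean_checkpoints(prefix="G_")
clean_checkpoints(prefix="D_")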
2023-03-09 22:34:55,538 44k INFO Train Epoch: 10 [17%]
2023-03-09 22:34:55,539 44k INFO Losses: [2.3070597648620605, 2.4193506240844727, 12.322043418884277, 22.36187744140625, 1.4455803632736206], step: 4400, lr: 9.98875562335968e-05
2023-03-09 22:36:04,004 44k INFO Train Epoch: 10 [58%]
2023-03-09 22:36:04,004 44k INFO Losses: [2.621814489364624, 1.9164066314697266, 12.128844261169434, 18.02870750427246, 1.2619836330413818], step: 4600, lr: 9.98875562335968e-05
2023-03-09 22:37:12,968 44k INFO ====> Epoch: 10, cost 175.11 s
2023-03-09 22:37:22,837 44k INFO Train Epoch: 11 [0%]
2023-03-09 22:37:22,838 44k INFO Losses: [2.3263368606567383, 2.5736262798309326, 5.8480119705200195, 17.11333656311035, 1.1771115064620972], step: 4800, lr: 9.987507028906759e-05
2023-03-09 22:38:32,659 44k INFO Train Epoch: 11 [42%]
2023-03-09 22:38:32,659 44k INFO Losses: [2.5110223293304443, 2.317819833755493, 7.410079479217529, 23.0472354888916, 1.5325629711151123], step: 5000, lr: 9.987507028906759e-05
2023-03-09 22:38:35,624 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\G_5000.pth
2023-03-09 22:38:36,357 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\D_5000.pth
2023-03-09 22:38:37,047 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_2000.pth
2023-03-09 22:38:37,084 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_2000.pth
2023-03-09 22:39:45,442 44k INFO Train Epoch: 11 [83%]
2023-03-09 22:39:45,443 44k INFO Losses: [2.7240819931030273, 2.549437999725342, 6.221593856811523, 15.562834739685059, 1.3884763717651367], step: 5200, lr: 9.987507028906759e-05
2023-03-09 22:40:13,542 44k INFO ====> Epoch: 11, cost 180.57 s
2023-03-09 22:41:05,050 44k INFO Train Epoch: 12 [25%]
2023-03-09 22:41:05,050 44k INFO Losses: [2.5932059288024902, 2.160379648208618, 7.225218772888184, 23.120893478393555, 1.476784586906433], step: 5400, lr: 9.986258590528146e-05
2023-03-09 22:42:13,322 44k INFO Train Epoch: 12 [67%]
2023-03-09 22:42:13,323 44k INFO Losses: [2.898857593536377, 2.2347097396850586, 10.12026596069336, 21.01511573791504, 1.414766550064087], step: 5600, lr: 9.986258590528146e-05
2023-03-09 22:43:11,625 44k INFO ====> Epoch: 12, cost 178.08 s
2023-03-09 22:43:35,490 44k INFO Train Epoch: 13 [8%]
2023-03-09 22:43:35,490 44k INFO Losses: [2.5458338260650635, 2.402634382247925, 8.277725219726562, 21.97381019592285, 1.335066556930542], step: 5800, lr: 9.98501030820433e-05
2023-03-09 22:44:43,468 44k INFO Train Epoch: 13 [50%]
2023-03-09 22:44:43,468 44k INFO Losses: [2.6143736839294434, 1.9827158451080322, 8.365862846374512, 20.510074615478516, 1.3840632438659668], step: 6000, lr: 9.98501030820433e-05
2023-03-09 22:44:46,493 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\G_6000.pth
2023-03-09 22:44:47,222 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\D_6000.pth
2023-03-09 22:44:47,898 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_3000.pth
2023-03-09 22:44:47,944 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_3000.pth
2023-03-09 22:45:54,264 44k INFO Train Epoch: 13 [92%]
2023-03-09 22:45:54,264 44k INFO Losses: [2.5723958015441895, 2.407287120819092, 10.035711288452148, 18.892135620117188, 1.5453276634216309], step: 6200, lr: 9.98501030820433e-05
2023-03-09 22:46:07,916 44k INFO ====> Epoch: 13, cost 176.29 s
2023-03-09 22:47:11,509 44k INFO Train Epoch: 14 [33%]
2023-03-09 22:47:11,510 44k INFO Losses: [2.36806058883667, 2.2312498092651367, 8.099906921386719, 19.888795852661133, 1.1237491369247437], step: 6400, lr: 9.983762181915804e-05
2023-03-09 22:48:17,659 44k INFO Train Epoch: 14 [75%]
2023-03-09 22:48:17,659 44k INFO Losses: [2.6920676231384277, 2.126502513885498, 8.870818138122559, 19.55061149597168, 1.7066129446029663], step: 6600, lr: 9.983762181915804e-05
2023-03-09 22:48:57,688 44k INFO ====> Epoch: 14, cost 169.77 s
2023-03-09 22:49:34,205 44k INFO Train Epoch: 15 [17%]
2023-03-09 22:49:34,205 44k INFO Losses: [2.484997034072876, 2.1649365425109863, 9.29818058013916, 21.262903213500977, 1.5659104585647583], step: 6800, lr: 9.982514211643064e-05
2023-03-09 22:50:40,378 44k INFO Train Epoch: 15 [58%]
2023-03-09 22:50:40,379 44k INFO Losses: [2.6519980430603027, 2.251462459564209, 10.612017631530762, 20.631542205810547, 1.7596162557601929], step: 7000, lr: 9.982514211643064e-05
2023-03-09 22:50:43,320 44k INFO Saving model and optimizer state at iteration 15 to ./logs\44k\G_7000.pth
2023-03-09 22:50:43,989 44k INFO Saving model and optimizer state at iteration 15 to ./logs\44k\D_7000.pth
2023-03-09 22:50:44,665 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_4000.pth
2023-03-09 22:50:44,693 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_4000.pth
2023-03-09 22:51:51,602 44k INFO ====> Epoch: 15, cost 173.91 s
2023-03-09 22:52:01,297 44k INFO Train Epoch: 16 [0%]
2023-03-09 22:52:01,297 44k INFO Losses: [2.584434986114502, 2.8690125942230225, 8.623632431030273, 19.25146484375, 1.6848952770233154], step: 7200, lr: 9.981266397366609e-05
2023-03-09 22:53:08,805 44k INFO Train Epoch: 16 [42%]
2023-03-09 22:53:08,806 44k INFO Losses: [2.4831955432891846, 2.2092037200927734, 8.89979076385498, 21.747295379638672, 1.8002870082855225], step: 7400, lr: 9.981266397366609e-05
2023-03-09 22:54:15,179 44k INFO Train Epoch: 16 [83%]
2023-03-09 22:54:15,179 44k INFO Losses: [2.550377368927002, 2.139691114425659, 8.239158630371094, 18.619892120361328, 1.1555097103118896], step: 7600, lr: 9.981266397366609e-05
2023-03-09 22:54:42,179 44k INFO ====> Epoch: 16, cost 170.58 s
2023-03-09 22:55:32,201 44k INFO Train Epoch: 17 [25%]
2023-03-09 22:55:32,201 44k INFO Losses: [2.6224045753479004, 2.003127098083496, 5.673051357269287, 18.703475952148438, 1.4593850374221802], step: 7800, lr: 9.980018739066937e-05
2023-03-09 22:56:38,453 44k INFO Train Epoch: 17 [67%]
2023-03-09 22:56:38,453 44k INFO Losses: [2.5512290000915527, 2.491487979888916, 8.692081451416016, 21.967350006103516, 1.230060338973999], step: 8000, lr: 9.980018739066937e-05
2023-03-09 22:56:41,418 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\G_8000.pth
2023-03-09 22:56:42,069 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\D_8000.pth
2023-03-09 22:56:42,713 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_5000.pth
2023-03-09 22:56:42,749 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_5000.pth
2023-03-09 22:57:36,369 44k INFO ====> Epoch: 17, cost 174.19 s
2023-03-09 22:57:59,625 44k INFO Train Epoch: 18 [8%]
2023-03-09 22:57:59,626 44k INFO Losses: [2.681464195251465, 2.0676474571228027, 8.194007873535156, 18.121309280395508, 0.9629524946212769], step: 8200, lr: 9.978771236724554e-05
2023-03-09 22:59:06,766 44k INFO Train Epoch: 18 [50%]
2023-03-09 22:59:06,767 44k INFO Losses: [2.4612860679626465, 1.9128518104553223, 10.773208618164062, 21.291955947875977, 1.9208170175552368], step: 8400, lr: 9.978771236724554e-05
2023-03-09 23:00:13,278 44k INFO Train Epoch: 18 [92%]
2023-03-09 23:00:13,279 44k INFO Losses: [2.3332037925720215, 2.527757167816162, 10.766252517700195, 20.27721405029297, 1.6251143217086792], step: 8600, lr: 9.978771236724554e-05
2023-03-09 23:00:27,015 44k INFO ====> Epoch: 18, cost 170.65 s
2023-03-09 23:01:30,387 44k INFO Train Epoch: 19 [33%]
2023-03-09 23:01:30,387 44k INFO Losses: [2.6890785694122314, 2.048175573348999, 7.796837329864502, 17.393211364746094, 1.0918943881988525], step: 8800, lr: 9.977523890319963e-05
2023-03-09 23:02:36,951 44k INFO Train Epoch: 19 [75%]
2023-03-09 23:02:36,951 44k INFO Losses: [2.446784257888794, 2.24282169342041, 7.400733470916748, 19.949542999267578, 1.4323956966400146], step: 9000, lr: 9.977523890319963e-05
2023-03-09 23:02:39,875 44k INFO Saving model and optimizer state at iteration 19 to ./logs\44k\G_9000.pth
2023-03-09 23:02:40,549 44k INFO Saving model and optimizer state at iteration 19 to ./logs\44k\D_9000.pth
2023-03-09 23:02:41,199 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_6000.pth
2023-03-09 23:02:41,230 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_6000.pth
2023-03-09 23:03:21,475 44k INFO ====> Epoch: 19, cost 174.46 s
2023-03-09 23:03:58,103 44k INFO Train Epoch: 20 [17%]
2023-03-09 23:03:58,103 44k INFO Losses: [2.499037981033325, 2.2154788970947266, 9.236119270324707, 19.01803970336914, 1.029329776763916], step: 9200, lr: 9.976276699833672e-05
2023-03-09 23:05:04,654 44k INFO Train Epoch: 20 [58%]
2023-03-09 23:05:04,654 44k INFO Losses: [2.707878589630127, 2.379702091217041, 9.363155364990234, 21.514728546142578, 1.0883721113204956], step: 9400, lr: 9.976276699833672e-05
2023-03-09 23:06:11,584 44k INFO ====> Epoch: 20, cost 170.11 s
2023-03-09 23:06:21,395 44k INFO Train Epoch: 21 [0%]
2023-03-09 23:06:21,395 44k INFO Losses: [2.6456973552703857, 2.2951180934906006, 7.92765998840332, 16.195425033569336, 1.277409315109253], step: 9600, lr: 9.975029665246193e-05
2023-03-09 23:07:28,401 44k INFO Train Epoch: 21 [42%]
2023-03-09 23:07:28,402 44k INFO Losses: [2.240675926208496, 2.32568097114563, 10.639389038085938, 23.330589294433594, 1.4125117063522339], step: 9800, lr: 9.975029665246193e-05
2023-03-09 23:08:34,540 44k INFO Train Epoch: 21 [83%]
2023-03-09 23:08:34,540 44k INFO Losses: [2.6852498054504395, 2.272972345352173, 9.825559616088867, 21.62499237060547, 1.1525027751922607], step: 10000, lr: 9.975029665246193e-05
2023-03-09 23:08:37,490 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\G_10000.pth
2023-03-09 23:08:38,177 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\D_10000.pth
2023-03-09 23:08:38,826 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_7000.pth
2023-03-09 23:08:38,855 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_7000.pth
2023-03-09 23:09:05,706 44k INFO ====> Epoch: 21, cost 174.12 s
2023-03-09 23:09:56,858 44k INFO Train Epoch: 22 [25%]
2023-03-09 23:09:56,859 44k INFO Losses: [2.238344192504883, 2.6883773803710938, 9.67750358581543, 22.289474487304688, 1.5644885301589966], step: 10200, lr: 9.973782786538036e-05
2023-03-09 23:11:03,799 44k INFO Train Epoch: 22 [67%]
2023-03-09 23:11:03,799 44k INFO Losses: [2.553804397583008, 2.063002586364746, 8.806561470031738, 20.5889949798584, 1.483044981956482], step: 10400, lr: 9.973782786538036e-05
2023-03-09 23:11:58,673 44k INFO ====> Epoch: 22, cost 172.97 s
2023-03-09 23:12:22,758 44k INFO Train Epoch: 23 [8%]
2023-03-09 23:12:22,759 44k INFO Losses: [2.6479604244232178, 2.37148380279541, 5.944035053253174, 19.03931427001953, 0.8333165645599365], step: 10600, lr: 9.972536063689719e-05
2023-03-09 23:13:29,695 44k INFO Train Epoch: 23 [50%]
2023-03-09 23:13:29,695 44k INFO Losses: [2.4879696369171143, 2.3861594200134277, 9.920605659484863, 21.222816467285156, 1.5764923095703125], step: 10800, lr: 9.972536063689719e-05
2023-03-09 23:14:36,814 44k INFO Train Epoch: 23 [92%]
2023-03-09 23:14:36,815 44k INFO Losses: [2.3822216987609863, 2.1951522827148438, 9.054584503173828, 21.033418655395508, 1.300223708152771], step: 11000, lr: 9.972536063689719e-05
2023-03-09 23:14:39,782 44k INFO Saving model and optimizer state at iteration 23 to ./logs\44k\G_11000.pth
2023-03-09 23:14:40,521 44k INFO Saving model and optimizer state at iteration 23 to ./logs\44k\D_11000.pth
2023-03-09 23:14:41,229 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_8000.pth
2023-03-09 23:14:41,275 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_8000.pth
2023-03-09 23:14:55,049 44k INFO ====> Epoch: 23, cost 176.38 s
2023-03-09 23:16:03,865 44k INFO Train Epoch: 24 [33%]
2023-03-09 23:16:03,866 44k INFO Losses: [2.527130365371704, 2.1844637393951416, 9.962337493896484, 19.235326766967773, 1.5654696226119995], step: 11200, lr: 9.971289496681757e-05
2023-03-09 23:17:12,570 44k INFO Train Epoch: 24 [75%]
2023-03-09 23:17:12,571 44k INFO Losses: [2.464890480041504, 1.9783828258514404, 9.894397735595703, 19.97930335998535, 1.1896278858184814], step: 11400, lr: 9.971289496681757e-05
2023-03-09 23:17:53,847 44k INFO ====> Epoch: 24, cost 178.80 s
2023-03-09 23:18:32,893 44k INFO Train Epoch: 25 [17%]
2023-03-09 23:18:32,893 44k INFO Losses: [2.3611230850219727, 2.045161485671997, 11.447516441345215, 20.11899757385254, 0.9721967577934265], step: 11600, lr: 9.970043085494672e-05
2023-03-09 23:19:41,104 44k INFO Train Epoch: 25 [58%]
2023-03-09 23:19:41,105 44k INFO Losses: [2.6343815326690674, 2.0267837047576904, 8.24674129486084, 18.057497024536133, 0.9985759854316711], step: 11800, lr: 9.970043085494672e-05
2023-03-09 23:20:50,566 44k INFO ====> Epoch: 25, cost 176.72 s
2023-03-09 23:21:00,706 44k INFO Train Epoch: 26 [0%]
2023-03-09 23:21:00,706 44k INFO Losses: [2.108116388320923, 2.732480049133301, 6.870335578918457, 16.327394485473633, 1.0612876415252686], step: 12000, lr: 9.968796830108985e-05
2023-03-09 23:21:03,750 44k INFO Saving model and optimizer state at iteration 26 to ./logs\44k\G_12000.pth
2023-03-09 23:21:04,453 44k INFO Saving model and optimizer state at iteration 26 to ./logs\44k\D_12000.pth
2023-03-09 23:21:05,077 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_9000.pth
2023-03-09 23:21:05,120 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_9000.pth
2023-03-09 23:22:14,609 44k INFO Train Epoch: 26 [42%]
2023-03-09 23:22:14,610 44k INFO Losses: [2.523098945617676, 2.2026989459991455, 9.008049011230469, 21.576215744018555, 1.4500067234039307], step: 12200, lr: 9.968796830108985e-05
2023-03-09 23:23:21,857 44k INFO Train Epoch: 26 [83%]
2023-03-09 23:23:21,857 44k INFO Losses: [2.6288836002349854, 2.0201313495635986, 7.990760803222656, 14.86129379272461, 1.378536343574524], step: 12400, lr: 9.968796830108985e-05
2023-03-09 23:23:49,151 44k INFO ====> Epoch: 26, cost 178.59 s
2023-03-09 23:24:40,579 44k INFO Train Epoch: 27 [25%]
2023-03-09 23:24:40,580 44k INFO Losses: [2.413933277130127, 2.261289358139038, 6.761435031890869, 19.1599178314209, 1.4506725072860718], step: 12600, lr: 9.967550730505221e-05
2023-03-09 23:25:48,998 44k INFO Train Epoch: 27 [67%]
2023-03-09 23:25:48,998 44k INFO Losses: [2.7390246391296387, 2.257850170135498, 8.31356430053711, 21.033832550048828, 1.130155086517334], step: 12800, lr: 9.967550730505221e-05
2023-03-09 23:26:44,379 44k INFO ====> Epoch: 27, cost 175.23 s
2023-03-09 23:27:08,014 44k INFO Train Epoch: 28 [8%]
2023-03-09 23:27:08,014 44k INFO Losses: [2.575988292694092, 2.568122625350952, 9.701324462890625, 21.5109806060791, 1.560456395149231], step: 13000, lr: 9.966304786663908e-05
2023-03-09 23:27:11,017 44k INFO Saving model and optimizer state at iteration 28 to ./logs\44k\G_13000.pth
2023-03-09 23:27:11,687 44k INFO Saving model and optimizer state at iteration 28 to ./logs\44k\D_13000.pth
2023-03-09 23:27:12,306 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_10000.pth
2023-03-09 23:27:12,351 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_10000.pth
2023-03-09 23:28:18,669 44k INFO Train Epoch: 28 [50%]
2023-03-09 23:28:18,670 44k INFO Losses: [2.396379232406616, 2.5859997272491455, 8.974349975585938, 21.1258487701416, 1.407135009765625], step: 13200, lr: 9.966304786663908e-05
2023-03-09 23:29:29,448 44k INFO Train Epoch: 28 [92%]
2023-03-09 23:29:29,448 44k INFO Losses: [2.6594862937927246, 1.9680594205856323, 8.101217269897461, 15.953340530395508, 0.9844779968261719], step: 13400, lr: 9.966304786663908e-05
2023-03-09 23:29:43,841 44k INFO ====> Epoch: 28, cost 179.46 s
2023-03-09 23:30:54,018 44k INFO Train Epoch: 29 [33%]
2023-03-09 23:30:54,019 44k INFO Losses: [2.4747812747955322, 2.02663254737854, 10.244929313659668, 21.2783145904541, 1.326253890991211], step: 13600, lr: 9.965058998565574e-05
2023-03-09 23:32:04,615 44k INFO Train Epoch: 29 [75%]
2023-03-09 23:32:04,616 44k INFO Losses: [2.3540537357330322, 2.3179657459259033, 10.381710052490234, 20.783424377441406, 1.2607688903808594], step: 13800, lr: 9.965058998565574e-05
2023-03-09 23:32:47,467 44k INFO ====> Epoch: 29, cost 183.63 s
2023-03-09 23:33:26,410 44k INFO Train Epoch: 30 [17%]
2023-03-09 23:33:26,411 44k INFO Losses: [2.474283218383789, 2.4970040321350098, 8.310688972473145, 21.354806900024414, 1.3590052127838135], step: 14000, lr: 9.963813366190753e-05
2023-03-09 23:33:29,452 44k INFO Saving model and optimizer state at iteration 30 to ./logs\44k\G_14000.pth
2023-03-09 23:33:30,221 44k INFO Saving model and optimizer state at iteration 30 to ./logs\44k\D_14000.pth
2023-03-09 23:33:30,839 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_11000.pth
2023-03-09 23:33:30,869 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_11000.pth
2023-03-09 23:34:39,157 44k INFO Train Epoch: 30 [58%]
2023-03-09 23:34:39,158 44k INFO Losses: [2.6338324546813965, 2.0900826454162598, 8.30550479888916, 19.974449157714844, 1.6999881267547607], step: 14200, lr: 9.963813366190753e-05
2023-03-09 23:35:45,919 44k INFO ====> Epoch: 30, cost 178.45 s
2023-03-09 23:35:55,590 44k INFO Train Epoch: 31 [0%]
2023-03-09 23:35:55,591 44k INFO Losses: [2.589956521987915, 2.11285662651062, 6.709275245666504, 14.461118698120117, 1.5959913730621338], step: 14400, lr: 9.962567889519979e-05
2023-03-09 23:37:02,721 44k INFO Train Epoch: 31 [42%]
2023-03-09 23:37:02,722 44k INFO Losses: [2.5060269832611084, 2.0969247817993164, 8.611684799194336, 20.40202522277832, 1.4926910400390625], step: 14600, lr: 9.962567889519979e-05
2023-03-09 23:38:09,303 44k INFO Train Epoch: 31 [83%]
2023-03-09 23:38:09,304 44k INFO Losses: [2.44972562789917, 2.209604263305664, 8.374757766723633, 15.323009490966797, 1.1994714736938477], step: 14800, lr: 9.962567889519979e-05
2023-03-09 23:38:37,051 44k INFO ====> Epoch: 31, cost 171.13 s
2023-03-09 23:39:27,683 44k INFO Train Epoch: 32 [25%]
2023-03-09 23:39:27,684 44k INFO Losses: [3.007453680038452, 2.0412755012512207, 8.696616172790527, 22.206615447998047, 1.6789324283599854], step: 15000, lr: 9.961322568533789e-05
2023-03-09 23:39:30,555 44k INFO Saving model and optimizer state at iteration 32 to ./logs\44k\G_15000.pth
2023-03-09 23:39:31,269 44k INFO Saving model and optimizer state at iteration 32 to ./logs\44k\D_15000.pth
2023-03-09 23:39:31,905 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_12000.pth
2023-03-09 23:39:31,946 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_12000.pth
2023-03-09 23:40:38,063 44k INFO Train Epoch: 32 [67%]
2023-03-09 23:40:38,063 44k INFO Losses: [2.580566883087158, 2.168311834335327, 10.336328506469727, 21.996471405029297, 1.1694210767745972], step: 15200, lr: 9.961322568533789e-05
2023-03-09 23:41:31,924 44k INFO ====> Epoch: 32, cost 174.87 s
2023-03-09 23:41:55,139 44k INFO Train Epoch: 33 [8%]
2023-03-09 23:41:55,139 44k INFO Losses: [2.5315604209899902, 2.3219544887542725, 7.4105305671691895, 18.05023956298828, 1.3385566473007202], step: 15400, lr: 9.960077403212722e-05
2023-03-09 23:43:01,737 44k INFO Train Epoch: 33 [50%]
2023-03-09 23:43:01,737 44k INFO Losses: [2.178466558456421, 2.4093775749206543, 8.204710006713867, 19.970046997070312, 1.2561421394348145], step: 15600, lr: 9.960077403212722e-05
2023-03-09 23:44:09,178 44k INFO Train Epoch: 33 [92%]
2023-03-09 23:44:09,179 44k INFO Losses: [2.4576892852783203, 2.340421199798584, 9.03681755065918, 19.720226287841797, 1.4429585933685303], step: 15800, lr: 9.960077403212722e-05
2023-03-09 23:44:23,625 44k INFO ====> Epoch: 33, cost 171.70 s
2023-03-09 23:45:41,952 44k INFO Train Epoch: 34 [33%]
2023-03-09 23:45:41,952 44k INFO Losses: [2.541362762451172, 2.1348211765289307, 8.14700984954834, 17.03746223449707, 1.528643250465393], step: 16000, lr: 9.95883239353732e-05
2023-03-09 23:45:45,488 44k INFO Saving model and optimizer state at iteration 34 to ./logs\44k\G_16000.pth
2023-03-09 23:45:46,269 44k INFO Saving model and optimizer state at iteration 34 to ./logs\44k\D_16000.pth
2023-03-09 23:45:46,971 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_13000.pth
2023-03-09 23:45:47,003 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_13000.pth
2023-03-09 23:46:55,293 44k INFO Train Epoch: 34 [75%]
2023-03-09 23:46:55,294 44k INFO Losses: [2.2557361125946045, 2.1361310482025146, 11.321039199829102, 22.42060661315918, 1.6674526929855347], step: 16200, lr: 9.95883239353732e-05
2023-03-09 23:47:36,379 44k INFO ====> Epoch: 34, cost 192.75 s
2023-03-09 23:48:13,981 44k INFO Train Epoch: 35 [17%]
2023-03-09 23:48:13,982 44k INFO Losses: [2.5682592391967773, 2.5494468212127686, 10.363664627075195, 20.090839385986328, 1.3562754392623901], step: 16400, lr: 9.957587539488128e-05
2023-03-09 23:49:21,678 44k INFO Train Epoch: 35 [58%]
2023-03-09 23:49:21,678 44k INFO Losses: [2.6154041290283203, 1.9948923587799072, 8.994477272033691, 17.46767234802246, 1.2940129041671753], step: 16600, lr: 9.957587539488128e-05
2023-03-09 23:50:30,138 44k INFO ====> Epoch: 35, cost 173.76 s
2023-03-09 23:50:40,102 44k INFO Train Epoch: 36 [0%]
2023-03-09 23:50:40,102 44k INFO Losses: [2.4660754203796387, 1.9663159847259521, 5.942327499389648, 14.918792724609375, 1.3304672241210938], step: 16800, lr: 9.956342841045691e-05
2023-03-09 23:51:49,804 44k INFO Train Epoch: 36 [42%]
2023-03-09 23:51:49,804 44k INFO Losses: [2.4021859169006348, 2.3796520233154297, 10.489449501037598, 21.498687744140625, 1.2586421966552734], step: 17000, lr: 9.956342841045691e-05
2023-03-09 23:51:52,692 44k INFO Saving model and optimizer state at iteration 36 to ./logs\44k\G_17000.pth
2023-03-09 23:51:53,378 44k INFO Saving model and optimizer state at iteration 36 to ./logs\44k\D_17000.pth
2023-03-09 23:51:54,026 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_14000.pth
2023-03-09 23:51:54,064 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_14000.pth
2023-03-09 23:53:02,828 44k INFO Train Epoch: 36 [83%]
2023-03-09 23:53:02,828 44k INFO Losses: [2.530822277069092, 2.359158754348755, 5.985983371734619, 17.314420700073242, 1.2479362487792969], step: 17200, lr: 9.956342841045691e-05
2023-03-09 23:53:30,740 44k INFO ====> Epoch: 36, cost 180.60 s
2023-03-09 23:54:22,487 44k INFO Train Epoch: 37 [25%]
2023-03-09 23:54:22,487 44k INFO Losses: [2.468174457550049, 2.5823373794555664, 7.509805202484131, 20.679941177368164, 1.1722922325134277], step: 17400, lr: 9.95509829819056e-05
2023-03-09 23:55:30,989 44k INFO Train Epoch: 37 [67%]
2023-03-09 23:55:30,989 44k INFO Losses: [2.440070152282715, 2.3759515285491943, 7.461362838745117, 20.658775329589844, 1.294507384300232], step: 17600, lr: 9.95509829819056e-05
2023-03-09 23:56:27,303 44k INFO ====> Epoch: 37, cost 176.56 s
2023-03-09 23:56:51,642 44k INFO Train Epoch: 38 [8%]
2023-03-09 23:56:51,643 44k INFO Losses: [2.51751708984375, 2.3274128437042236, 8.263319969177246, 20.75530433654785, 1.1685583591461182], step: 17800, lr: 9.953853910903285e-05
2023-03-09 23:58:01,096 44k INFO Train Epoch: 38 [50%]
2023-03-09 23:58:01,097 44k INFO Losses: [2.5445449352264404, 2.6543474197387695, 11.771062850952148, 23.546571731567383, 1.391394019126892], step: 18000, lr: 9.953853910903285e-05
2023-03-09 23:58:04,148 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\G_18000.pth
2023-03-09 23:58:04,840 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\D_18000.pth
2023-03-09 23:58:05,487 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_15000.pth
2023-03-09 23:58:05,532 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_15000.pth
2023-03-09 23:59:14,115 44k INFO Train Epoch: 38 [92%]
2023-03-09 23:59:14,115 44k INFO Losses: [2.6619672775268555, 2.097445249557495, 6.994243621826172, 20.368392944335938, 1.6910580396652222], step: 18200, lr: 9.953853910903285e-05
2023-03-09 23:59:28,319 44k INFO ====> Epoch: 38, cost 181.02 s
2023-03-10 00:00:34,196 44k INFO Train Epoch: 39 [33%]
2023-03-10 00:00:34,196 44k INFO Losses: [2.468571662902832, 2.37003231048584, 10.23530387878418, 19.475954055786133, 1.09866464138031], step: 18400, lr: 9.952609679164422e-05
2023-03-10 00:01:42,770 44k INFO Train Epoch: 39 [75%]
2023-03-10 00:01:42,771 44k INFO Losses: [2.2688961029052734, 2.219367742538452, 10.984037399291992, 19.067935943603516, 1.0729482173919678], step: 18600, lr: 9.952609679164422e-05
2023-03-10 00:02:24,217 44k INFO ====> Epoch: 39, cost 175.90 s
2023-03-10 00:03:02,512 44k INFO Train Epoch: 40 [17%]
2023-03-10 00:03:02,512 44k INFO Losses: [2.4043712615966797, 2.078387498855591, 7.829358100891113, 17.97997283935547, 1.2149159908294678], step: 18800, lr: 9.951365602954526e-05
2023-03-10 00:04:10,942 44k INFO Train Epoch: 40 [58%]
2023-03-10 00:04:10,943 44k INFO Losses: [2.7210350036621094, 1.999556541442871, 8.090421676635742, 19.198190689086914, 1.1153842210769653], step: 19000, lr: 9.951365602954526e-05
2023-03-10 00:04:14,095 44k INFO Saving model and optimizer state at iteration 40 to ./logs\44k\G_19000.pth
2023-03-10 00:04:14,775 44k INFO Saving model and optimizer state at iteration 40 to ./logs\44k\D_19000.pth
2023-03-10 00:04:15,443 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_16000.pth
2023-03-10 00:04:15,472 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_16000.pth
2023-03-10 00:05:24,294 44k INFO ====> Epoch: 40, cost 180.08 s
2023-03-10 00:05:34,389 44k INFO Train Epoch: 41 [0%]
2023-03-10 00:05:34,390 44k INFO Losses: [2.2023415565490723, 2.5993332862854004, 10.31195068359375, 21.043968200683594, 1.18059504032135], step: 19200, lr: 9.950121682254156e-05
2023-03-10 00:06:43,986 44k INFO Train Epoch: 41 [42%]
2023-03-10 00:06:43,987 44k INFO Losses: [2.6001029014587402, 1.9843547344207764, 7.296051502227783, 20.282991409301758, 1.4453257322311401], step: 19400, lr: 9.950121682254156e-05
2023-03-10 00:07:52,373 44k INFO Train Epoch: 41 [83%]
2023-03-10 00:07:52,373 44k INFO Losses: [2.8434841632843018, 2.0593206882476807, 7.50341272354126, 15.828989028930664, 1.1855241060256958], step: 19600, lr: 9.950121682254156e-05
2023-03-10 00:08:20,504 44k INFO ====> Epoch: 41, cost 176.21 s
2023-03-10 00:09:11,847 44k INFO Train Epoch: 42 [25%]
2023-03-10 00:09:11,847 44k INFO Losses: [2.3398730754852295, 2.205005645751953, 8.405223846435547, 21.237323760986328, 1.3119484186172485], step: 19800, lr: 9.948877917043875e-05
2023-03-10 00:10:21,325 44k INFO Train Epoch: 42 [67%]
2023-03-10 00:10:21,326 44k INFO Losses: [2.564417600631714, 2.2666187286376953, 7.732751369476318, 20.512413024902344, 1.5454299449920654], step: 20000, lr: 9.948877917043875e-05
2023-03-10 00:10:24,293 44k INFO Saving model and optimizer state at iteration 42 to ./logs\44k\G_20000.pth
2023-03-10 00:10:25,030 44k INFO Saving model and optimizer state at iteration 42 to ./logs\44k\D_20000.pth
2023-03-10 00:10:25,678 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_17000.pth
2023-03-10 00:10:25,715 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_17000.pth
2023-03-10 00:11:21,584 44k INFO ====> Epoch: 42, cost 181.08 s
2023-03-10 00:11:45,561 44k INFO Train Epoch: 43 [8%]
2023-03-10 00:11:45,562 44k INFO Losses: [2.582317352294922, 2.2756662368774414, 7.690517425537109, 19.34727668762207, 1.3748066425323486], step: 20200, lr: 9.947634307304244e-05
2023-03-10 00:12:54,895 44k INFO Train Epoch: 43 [50%]
2023-03-10 00:12:54,896 44k INFO Losses: [2.4092278480529785, 2.3263907432556152, 10.733685493469238, 21.365232467651367, 1.4302860498428345], step: 20400, lr: 9.947634307304244e-05
2023-03-10 00:14:04,154 44k INFO Train Epoch: 43 [92%]
2023-03-10 00:14:04,154 44k INFO Losses: [2.6259164810180664, 2.1064963340759277, 7.4166154861450195, 17.447696685791016, 1.1309740543365479], step: 20600, lr: 9.947634307304244e-05
2023-03-10 00:14:18,308 44k INFO ====> Epoch: 43, cost 176.72 s
2023-03-10 00:15:24,905 44k INFO Train Epoch: 44 [33%]
2023-03-10 00:15:24,905 44k INFO Losses: [2.539393186569214, 2.2365403175354004, 9.663187980651855, 19.52463150024414, 1.3997358083724976], step: 20800, lr: 9.94639085301583e-05
2023-03-10 00:16:34,403 44k INFO Train Epoch: 44 [75%]
2023-03-10 00:16:34,403 44k INFO Losses: [2.2573349475860596, 2.207044839859009, 9.318681716918945, 19.270915985107422, 1.3963450193405151], step: 21000, lr: 9.94639085301583e-05
2023-03-10 00:16:37,591 44k INFO Saving model and optimizer state at iteration 44 to ./logs\44k\G_21000.pth
2023-03-10 00:16:38,371 44k INFO Saving model and optimizer state at iteration 44 to ./logs\44k\D_21000.pth
2023-03-10 00:16:39,037 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_18000.pth
2023-03-10 00:16:39,069 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_18000.pth
2023-03-10 00:17:21,565 44k INFO ====> Epoch: 44, cost 183.26 s
2023-03-10 00:17:59,913 44k INFO Train Epoch: 45 [17%]
2023-03-10 00:17:59,913 44k INFO Losses: [2.533785820007324, 2.254905939102173, 11.04792308807373, 21.98154640197754, 1.696187973022461], step: 21200, lr: 9.945147554159202e-05
2023-03-10 00:19:08,333 44k INFO Train Epoch: 45 [58%]
2023-03-10 00:19:08,333 44k INFO Losses: [2.7740299701690674, 1.8208223581314087, 8.827348709106445, 18.517635345458984, 1.6870830059051514], step: 21400, lr: 9.945147554159202e-05
2023-03-10 00:20:16,091 44k INFO ====> Epoch: 45, cost 174.53 s
2023-03-10 00:20:25,665 44k INFO Train Epoch: 46 [0%]
2023-03-10 00:20:25,666 44k INFO Losses: [2.31622052192688, 2.598419427871704, 9.055929183959961, 21.344236373901367, 1.2862190008163452], step: 21600, lr: 9.943904410714931e-05
2023-03-10 00:21:33,562 44k INFO Train Epoch: 46 [42%]
2023-03-10 00:21:33,562 44k INFO Losses: [2.6425986289978027, 2.0487513542175293, 7.534739971160889, 21.130722045898438, 1.5636035203933716], step: 21800, lr: 9.943904410714931e-05
2023-03-10 00:22:44,470 44k INFO Train Epoch: 46 [83%]
2023-03-10 00:22:44,471 44k INFO Losses: [2.6624033451080322, 2.5943055152893066, 7.559395790100098, 16.688627243041992, 1.3365025520324707], step: 22000, lr: 9.943904410714931e-05
2023-03-10 00:22:47,426 44k INFO Saving model and optimizer state at iteration 46 to ./logs\44k\G_22000.pth
2023-03-10 00:22:48,192 44k INFO Saving model and optimizer state at iteration 46 to ./logs\44k\D_22000.pth
2023-03-10 00:22:48,838 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_19000.pth
2023-03-10 00:22:48,868 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_19000.pth
2023-03-10 00:23:16,351 44k INFO ====> Epoch: 46, cost 180.26 s
2023-03-10 00:24:07,162 44k INFO Train Epoch: 47 [25%]
2023-03-10 00:24:07,162 44k INFO Losses: [2.2071139812469482, 2.408928871154785, 10.053417205810547, 23.653078079223633, 1.3212175369262695], step: 22200, lr: 9.942661422663591e-05
2023-03-10 00:25:14,806 44k INFO Train Epoch: 47 [67%]
2023-03-10 00:25:14,807 44k INFO Losses: [2.555342197418213, 2.053321361541748, 7.891706466674805, 20.104381561279297, 1.2838119268417358], step: 22400, lr: 9.942661422663591e-05
2023-03-10 00:26:09,103 44k INFO ====> Epoch: 47, cost 172.75 s
2023-03-10 00:26:32,469 44k INFO Train Epoch: 48 [8%]
2023-03-10 00:26:32,469 44k INFO Losses: [2.4753148555755615, 2.5164923667907715, 8.472489356994629, 19.89665412902832, 1.5288535356521606], step: 22600, lr: 9.941418589985758e-05
2023-03-10 00:27:40,382 44k INFO Train Epoch: 48 [50%]
2023-03-10 00:27:40,382 44k INFO Losses: [2.1079154014587402, 2.5065455436706543, 12.964305877685547, 22.81366539001465, 1.487200379371643], step: 22800, lr: 9.941418589985758e-05
2023-03-10 00:28:48,163 44k INFO Train Epoch: 48 [92%]
2023-03-10 00:28:48,163 44k INFO Losses: [2.1043496131896973, 2.3964056968688965, 12.112120628356934, 22.33637809753418, 1.2500946521759033], step: 23000, lr: 9.941418589985758e-05
2023-03-10 00:28:51,075 44k INFO Saving model and optimizer state at iteration 48 to ./logs\44k\G_23000.pth
2023-03-10 00:28:51,741 44k INFO Saving model and optimizer state at iteration 48 to ./logs\44k\D_23000.pth
2023-03-10 00:28:52,375 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_20000.pth
2023-03-10 00:28:52,405 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_20000.pth
2023-03-10 00:29:06,093 44k INFO ====> Epoch: 48, cost 176.99 s
2023-03-10 00:30:10,750 44k INFO Train Epoch: 49 [33%]
2023-03-10 00:30:10,751 44k INFO Losses: [2.5487921237945557, 2.2012343406677246, 10.218581199645996, 17.530025482177734, 1.2369277477264404], step: 23200, lr: 9.940175912662009e-05
2023-03-10 00:31:18,383 44k INFO Train Epoch: 49 [75%]
2023-03-10 00:31:18,384 44k INFO Losses: [2.455705165863037, 2.133345603942871, 9.95699405670166, 19.278167724609375, 1.496594786643982], step: 23400, lr: 9.940175912662009e-05
2023-03-10 00:31:59,189 44k INFO ====> Epoch: 49, cost 173.10 s
2023-03-10 00:32:36,403 44k INFO Train Epoch: 50 [17%]
2023-03-10 00:32:36,403 44k INFO Losses: [2.5535242557525635, 2.162964105606079, 10.07349681854248, 20.256912231445312, 0.8696603178977966], step: 23600, lr: 9.938933390672926e-05
2023-03-10 00:33:44,030 44k INFO Train Epoch: 50 [58%]
2023-03-10 00:33:44,030 44k INFO Losses: [2.6740314960479736, 2.082170248031616, 6.957486629486084, 17.2308292388916, 1.3569444417953491], step: 23800, lr: 9.938933390672926e-05
2023-03-10 00:34:52,123 44k INFO ====> Epoch: 50, cost 172.93 s
2023-03-10 00:35:01,761 44k INFO Train Epoch: 51 [0%]
2023-03-10 00:35:01,761 44k INFO Losses: [2.55359148979187, 2.621748924255371, 8.933265686035156, 20.598188400268555, 1.4574673175811768], step: 24000, lr: 9.937691023999092e-05
2023-03-10 00:35:04,630 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\G_24000.pth
2023-03-10 00:35:05,301 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\D_24000.pth
2023-03-10 00:35:05,939 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_21000.pth
2023-03-10 00:35:05,970 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_21000.pth
2023-03-10 00:36:14,180 44k INFO Train Epoch: 51 [42%]
2023-03-10 00:36:14,180 44k INFO Losses: [2.3972549438476562, 2.119649648666382, 10.199462890625, 21.869224548339844, 1.401321291923523], step: 24200, lr: 9.937691023999092e-05
2023-03-10 00:37:21,752 44k INFO Train Epoch: 51 [83%]
2023-03-10 00:37:21,752 44k INFO Losses: [2.5760087966918945, 2.5573372840881348, 8.357288360595703, 17.33403968811035, 1.2820732593536377], step: 24400, lr: 9.937691023999092e-05
2023-03-10 00:37:49,272 44k INFO ====> Epoch: 51, cost 177.15 s
2023-03-10 00:38:39,998 44k INFO Train Epoch: 52 [25%]
2023-03-10 00:38:39,999 44k INFO Losses: [2.3621816635131836, 2.3909835815429688, 9.685585975646973, 20.309288024902344, 1.4920895099639893], step: 24600, lr: 9.936448812621091e-05
2023-03-10 00:39:47,559 44k INFO Train Epoch: 52 [67%]
2023-03-10 00:39:47,559 44k INFO Losses: [2.430614709854126, 2.1781272888183594, 11.714396476745605, 20.615680694580078, 1.3542485237121582], step: 24800, lr: 9.936448812621091e-05
2023-03-10 00:40:42,014 44k INFO ====> Epoch: 52, cost 172.74 s
2023-03-10 00:41:05,433 44k INFO Train Epoch: 53 [8%]
2023-03-10 00:41:05,434 44k INFO Losses: [2.4666404724121094, 2.3001811504364014, 9.228809356689453, 19.622970581054688, 0.9920151829719543], step: 25000, lr: 9.935206756519513e-05
2023-03-10 00:41:08,286 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\G_25000.pth
2023-03-10 00:41:08,945 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\D_25000.pth
2023-03-10 00:41:09,594 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_22000.pth
2023-03-10 00:41:09,624 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_22000.pth
2023-03-10 00:42:17,451 44k INFO Train Epoch: 53 [50%]
2023-03-10 00:42:17,452 44k INFO Losses: [2.3194563388824463, 2.355863571166992, 9.475533485412598, 20.43379783630371, 1.24263334274292], step: 25200, lr: 9.935206756519513e-05
2023-03-10 00:43:25,297 44k INFO Train Epoch: 53 [92%]
2023-03-10 00:43:25,297 44k INFO Losses: [2.374337911605835, 2.590486764907837, 8.226838111877441, 20.308446884155273, 1.3287748098373413], step: 25400, lr: 9.935206756519513e-05
2023-03-10 00:43:39,314 44k INFO ====> Epoch: 53, cost 177.30 s
2023-03-10 00:44:43,712 44k INFO Train Epoch: 54 [33%]
2023-03-10 00:44:43,713 44k INFO Losses: [2.559992551803589, 2.1120166778564453, 5.345562934875488, 15.033105850219727, 1.0177208185195923], step: 25600, lr: 9.933964855674948e-05
2023-03-10 00:45:51,998 44k INFO Train Epoch: 54 [75%]
2023-03-10 00:45:51,998 44k INFO Losses: [2.465874671936035, 2.1739163398742676, 8.172489166259766, 19.456560134887695, 1.1147669553756714], step: 25800, lr: 9.933964855674948e-05
2023-03-10 00:46:32,512 44k INFO ====> Epoch: 54, cost 173.20 s
2023-03-10 00:47:09,387 44k INFO Train Epoch: 55 [17%]
2023-03-10 00:47:09,387 44k INFO Losses: [2.1682655811309814, 2.5019171237945557, 12.015508651733398, 22.504173278808594, 1.5386526584625244], step: 26000, lr: 9.932723110067987e-05
2023-03-10 00:47:12,320 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\G_26000.pth
2023-03-10 00:47:12,978 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\D_26000.pth
2023-03-10 00:47:13,630 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_23000.pth
2023-03-10 00:47:13,667 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_23000.pth
2023-03-10 00:48:20,450 44k INFO Train Epoch: 55 [58%]
2023-03-10 00:48:20,451 44k INFO Losses: [2.4462766647338867, 2.1311614513397217, 9.153489112854004, 19.178327560424805, 1.0876898765563965], step: 26200, lr: 9.932723110067987e-05
2023-03-10 00:49:28,045 44k INFO ====> Epoch: 55, cost 175.53 s
2023-03-10 00:49:37,659 44k INFO Train Epoch: 56 [0%]
2023-03-10 00:49:37,659 44k INFO Losses: [2.460718870162964, 2.5246050357818604, 9.879639625549316, 21.140241622924805, 1.204176664352417], step: 26400, lr: 9.931481519679228e-05
2023-03-10 00:50:45,534 44k INFO Train Epoch: 56 [42%]
2023-03-10 00:50:45,535 44k INFO Losses: [2.464543104171753, 2.255500078201294, 10.844827651977539, 22.04778480529785, 1.099898099899292], step: 26600, lr: 9.931481519679228e-05
2023-03-10 00:51:52,596 44k INFO Train Epoch: 56 [83%]
2023-03-10 00:51:52,596 44k INFO Losses: [2.6117959022521973, 2.0736007690429688, 7.773540019989014, 17.324932098388672, 1.5685585737228394], step: 26800, lr: 9.931481519679228e-05
2023-03-10 00:52:19,851 44k INFO ====> Epoch: 56, cost 171.81 s
2023-03-10 00:53:10,189 44k INFO Train Epoch: 57 [25%]
2023-03-10 00:53:10,190 44k INFO Losses: [2.593026876449585, 2.2518861293792725, 9.6329345703125, 22.174802780151367, 1.5273586511611938], step: 27000, lr: 9.930240084489267e-05
2023-03-10 00:53:13,069 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\G_27000.pth
2023-03-10 00:53:13,725 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\D_27000.pth
2023-03-10 00:53:14,366 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_24000.pth
2023-03-10 00:53:14,407 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_24000.pth
2023-03-10 00:54:21,355 44k INFO Train Epoch: 57 [67%]
2023-03-10 00:54:21,355 44k INFO Losses: [2.7753677368164062, 1.9239017963409424, 7.772047519683838, 18.454837799072266, 1.2699477672576904], step: 27200, lr: 9.930240084489267e-05
2023-03-10 00:55:15,371 44k INFO ====> Epoch: 57, cost 175.52 s
2023-03-10 00:55:38,511 44k INFO Train Epoch: 58 [8%]
2023-03-10 00:55:38,511 44k INFO Losses: [2.4450769424438477, 2.4737496376037598, 7.607985973358154, 19.5660343170166, 1.430598497390747], step: 27400, lr: 9.928998804478705e-05
2023-03-10 00:56:45,665 44k INFO Train Epoch: 58 [50%]
2023-03-10 00:56:45,665 44k INFO Losses: [2.4399118423461914, 2.476273775100708, 11.084647178649902, 22.028345108032227, 1.5207157135009766], step: 27600, lr: 9.928998804478705e-05
2023-03-10 00:57:53,004 44k INFO Train Epoch: 58 [92%]
2023-03-10 00:57:53,004 44k INFO Losses: [2.402250289916992, 2.2848896980285645, 8.98740005493164, 19.900625228881836, 1.143722653388977], step: 27800, lr: 9.928998804478705e-05
2023-03-10 00:58:06,832 44k INFO ====> Epoch: 58, cost 171.46 s
2023-03-10 00:59:10,565 44k INFO Train Epoch: 59 [33%]
2023-03-10 00:59:10,565 44k INFO Losses: [2.5950214862823486, 1.9849491119384766, 8.57032299041748, 16.760623931884766, 1.0661135911941528], step: 28000, lr: 9.927757679628145e-05
2023-03-10 00:59:13,467 44k INFO Saving model and optimizer state at iteration 59 to ./logs\44k\G_28000.pth
2023-03-10 00:59:14,123 44k INFO Saving model and optimizer state at iteration 59 to ./logs\44k\D_28000.pth
2023-03-10 00:59:14,774 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_25000.pth
2023-03-10 00:59:14,820 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_25000.pth
2023-03-10 01:00:21,733 44k INFO Train Epoch: 59 [75%]
2023-03-10 01:00:21,733 44k INFO Losses: [2.396491050720215, 2.4650373458862305, 11.211841583251953, 20.439992904663086, 0.9188471436500549], step: 28200, lr: 9.927757679628145e-05
2023-03-10 01:01:02,256 44k INFO ====> Epoch: 59, cost 175.42 s
2023-03-10 01:01:39,054 44k INFO Train Epoch: 60 [17%]
2023-03-10 01:01:39,054 44k INFO Losses: [2.65049409866333, 2.0966310501098633, 6.8534016609191895, 18.760677337646484, 1.1398675441741943], step: 28400, lr: 9.926516709918191e-05
2023-03-10 01:02:46,150 44k INFO Train Epoch: 60 [58%]
2023-03-10 01:02:46,150 44k INFO Losses: [2.438035726547241, 2.3348889350891113, 7.82972526550293, 18.5557804107666, 1.4146393537521362], step: 28600, lr: 9.926516709918191e-05
2023-03-10 01:03:53,789 44k INFO ====> Epoch: 60, cost 171.53 s
2023-03-10 01:04:03,375 44k INFO Train Epoch: 61 [0%]
2023-03-10 01:04:03,375 44k INFO Losses: [2.6219024658203125, 2.8293468952178955, 9.905922889709473, 18.843236923217773, 1.6379427909851074], step: 28800, lr: 9.92527589532945e-05
2023-03-10 01:05:11,111 44k INFO Train Epoch: 61 [42%]
2023-03-10 01:05:11,111 44k INFO Losses: [2.584650754928589, 2.2321972846984863, 6.915933609008789, 20.58075714111328, 0.8695663213729858], step: 29000, lr: 9.92527589532945e-05
2023-03-10 01:05:13,998 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\G_29000.pth
2023-03-10 01:05:14,657 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\D_29000.pth
2023-03-10 01:05:15,300 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_26000.pth
2023-03-10 01:05:15,347 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_26000.pth
2023-03-10 01:06:22,282 44k INFO Train Epoch: 61 [83%]
2023-03-10 01:06:22,282 44k INFO Losses: [2.747096538543701, 2.007692575454712, 6.0306878089904785, 15.881680488586426, 0.5645329356193542], step: 29200, lr: 9.92527589532945e-05
2023-03-10 01:06:50,105 44k INFO ====> Epoch: 61, cost 176.32 s
2023-03-10 01:07:40,532 44k INFO Train Epoch: 62 [25%]
2023-03-10 01:07:40,533 44k INFO Losses: [2.195636034011841, 2.7493903636932373, 8.355330467224121, 20.621910095214844, 1.1680060625076294], step: 29400, lr: 9.924035235842533e-05
2023-03-10 01:08:47,371 44k INFO Train Epoch: 62 [67%]
2023-03-10 01:08:47,371 44k INFO Losses: [2.3570761680603027, 2.258380651473999, 9.165984153747559, 18.321725845336914, 1.105709195137024], step: 29600, lr: 9.924035235842533e-05
2023-03-10 01:09:41,281 44k INFO ====> Epoch: 62, cost 171.18 s
2023-03-10 01:10:04,590 44k INFO Train Epoch: 63 [8%]
2023-03-10 01:10:04,590 44k INFO Losses: [2.534261465072632, 1.9493284225463867, 6.108685493469238, 17.727191925048828, 1.1462334394454956], step: 29800, lr: 9.922794731438052e-05
2023-03-10 01:11:11,867 44k INFO Train Epoch: 63 [50%]
2023-03-10 01:11:11,868 44k INFO Losses: [2.1466357707977295, 2.441751003265381, 11.623617172241211, 21.063779830932617, 1.1211217641830444], step: 30000, lr: 9.922794731438052e-05
2023-03-10 01:11:14,667 44k INFO Saving model and optimizer state at iteration 63 to ./logs\44k\G_30000.pth
2023-03-10 01:11:15,383 44k INFO Saving model and optimizer state at iteration 63 to ./logs\44k\D_30000.pth
2023-03-10 01:11:16,027 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_27000.pth
2023-03-10 01:11:16,069 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_27000.pth
2023-03-10 01:12:23,327 44k INFO Train Epoch: 63 [92%]
2023-03-10 01:12:23,327 44k INFO Losses: [2.320936679840088, 2.342817544937134, 11.871529579162598, 20.014780044555664, 1.2792378664016724], step: 30200, lr: 9.922794731438052e-05
2023-03-10 01:12:37,139 44k INFO ====> Epoch: 63, cost 175.86 s
2023-03-10 01:13:41,024 44k INFO Train Epoch: 64 [33%]
2023-03-10 01:13:41,024 44k INFO Losses: [2.411961078643799, 2.2892696857452393, 9.06312084197998, 19.208341598510742, 1.4149917364120483], step: 30400, lr: 9.921554382096622e-05
2023-03-10 01:14:48,092 44k INFO Train Epoch: 64 [75%]
2023-03-10 01:14:48,093 44k INFO Losses: [2.5061354637145996, 2.2133126258850098, 10.020742416381836, 19.887685775756836, 1.369322657585144], step: 30600, lr: 9.921554382096622e-05
2023-03-10 01:15:28,820 44k INFO ====> Epoch: 64, cost 171.68 s
2023-03-10 01:16:05,628 44k INFO Train Epoch: 65 [17%]
2023-03-10 01:16:05,628 44k INFO Losses: [2.4507529735565186, 2.1126151084899902, 13.933971405029297, 21.99840545654297, 1.1083381175994873], step: 30800, lr: 9.92031418779886e-05
2023-03-10 01:17:13,068 44k INFO Train Epoch: 65 [58%]
2023-03-10 01:17:13,069 44k INFO Losses: [2.437870502471924, 2.2035629749298096, 8.825096130371094, 18.92424774169922, 0.8800116181373596], step: 31000, lr: 9.92031418779886e-05
2023-03-10 01:17:15,928 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\G_31000.pth
2023-03-10 01:17:16,586 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\D_31000.pth
2023-03-10 01:17:17,229 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_28000.pth
2023-03-10 01:17:17,274 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_28000.pth
2023-03-10 01:18:24,893 44k INFO ====> Epoch: 65, cost 176.07 s
2023-03-10 01:18:34,460 44k INFO Train Epoch: 66 [0%]
2023-03-10 01:18:34,460 44k INFO Losses: [2.5885767936706543, 2.5255954265594482, 10.28254508972168, 19.816980361938477, 0.7707668542861938], step: 31200, lr: 9.919074148525384e-05
2023-03-10 01:19:42,281 44k INFO Train Epoch: 66 [42%]
2023-03-10 01:19:42,281 44k INFO Losses: [2.5087437629699707, 2.5391972064971924, 9.325486183166504, 20.830184936523438, 1.2771245241165161], step: 31400, lr: 9.919074148525384e-05
2023-03-10 01:20:49,507 44k INFO Train Epoch: 66 [83%]
2023-03-10 01:20:49,508 44k INFO Losses: [2.615724563598633, 2.324063301086426, 7.402339458465576, 15.822165489196777, 0.9446194171905518], step: 31600, lr: 9.919074148525384e-05
2023-03-10 01:21:16,869 44k INFO ====> Epoch: 66, cost 171.98 s
2023-03-10 01:22:07,246 44k INFO Train Epoch: 67 [25%]
2023-03-10 01:22:07,247 44k INFO Losses: [2.5019991397857666, 2.2032084465026855, 6.534299850463867, 20.32775115966797, 1.291076421737671], step: 31800, lr: 9.917834264256819e-05
2023-03-10 01:23:14,443 44k INFO Train Epoch: 67 [67%]
2023-03-10 01:23:14,443 44k INFO Losses: [2.4641759395599365, 2.3515498638153076, 7.272819995880127, 21.31987190246582, 1.2571789026260376], step: 32000, lr: 9.917834264256819e-05
2023-03-10 01:23:17,313 44k INFO Saving model and optimizer state at iteration 67 to ./logs\44k\G_32000.pth
2023-03-10 01:23:17,975 44k INFO Saving model and optimizer state at iteration 67 to ./logs\44k\D_32000.pth
2023-03-10 01:23:18,616 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_29000.pth
2023-03-10 01:23:18,658 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_29000.pth
2023-03-10 01:24:12,693 44k INFO ====> Epoch: 67, cost 175.82 s
2023-03-10 01:24:35,789 44k INFO Train Epoch: 68 [8%]
2023-03-10 01:24:35,789 44k INFO Losses: [2.3548192977905273, 2.2308290004730225, 9.776409149169922, 21.54472541809082, 1.1720070838928223], step: 32200, lr: 9.916594534973787e-05
2023-03-10 01:25:43,308 44k INFO Train Epoch: 68 [50%]
2023-03-10 01:25:43,308 44k INFO Losses: [2.7263193130493164, 2.142730951309204, 11.529346466064453, 23.22295379638672, 1.3292169570922852], step: 32400, lr: 9.916594534973787e-05
2023-03-10 01:26:50,995 44k INFO Train Epoch: 68 [92%]
2023-03-10 01:26:50,996 44k INFO Losses: [2.4169955253601074, 2.222931385040283, 8.180932998657227, 20.370588302612305, 1.267864465713501], step: 32600, lr: 9.916594534973787e-05
2023-03-10 01:27:04,867 44k INFO ====> Epoch: 68, cost 172.17 s
2023-03-10 01:28:08,938 44k INFO Train Epoch: 69 [33%]
2023-03-10 01:28:08,939 44k INFO Losses: [2.2154951095581055, 2.570366382598877, 11.885047912597656, 20.242231369018555, 1.0402462482452393], step: 32800, lr: 9.915354960656915e-05
2023-03-10 01:29:16,159 44k INFO Train Epoch: 69 [75%]
2023-03-10 01:29:16,159 44k INFO Losses: [2.5609591007232666, 2.1993024349212646, 7.085938930511475, 18.688953399658203, 1.3814340829849243], step: 33000, lr: 9.915354960656915e-05
2023-03-10 01:29:19,088 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\G_33000.pth
2023-03-10 01:29:19,750 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\D_33000.pth
2023-03-10 01:29:20,381 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_30000.pth
2023-03-10 01:29:20,421 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_30000.pth
2023-03-10 01:30:00,961 44k INFO ====> Epoch: 69, cost 176.09 s
2023-03-10 01:30:37,709 44k INFO Train Epoch: 70 [17%]
2023-03-10 01:30:37,709 44k INFO Losses: [2.489190101623535, 2.3778793811798096, 8.592529296875, 17.722820281982422, 1.3032668828964233], step: 33200, lr: 9.914115541286833e-05
2023-03-10 01:31:44,980 44k INFO Train Epoch: 70 [58%]
2023-03-10 01:31:44,981 44k INFO Losses: [2.6922950744628906, 2.2586448192596436, 7.873056888580322, 19.389326095581055, 1.0352110862731934], step: 33400, lr: 9.914115541286833e-05
2023-03-10 01:32:52,735 44k INFO ====> Epoch: 70, cost 171.77 s
2023-03-10 01:33:02,561 44k INFO Train Epoch: 71 [0%]
2023-03-10 01:33:02,562 44k INFO Losses: [2.6629834175109863, 2.2483139038085938, 6.274883270263672, 21.076974868774414, 1.1676338911056519], step: 33600, lr: 9.912876276844171e-05
2023-03-10 01:34:10,718 44k INFO Train Epoch: 71 [42%]
2023-03-10 01:34:10,719 44k INFO Losses: [2.40325665473938, 2.2859597206115723, 8.061351776123047, 19.120229721069336, 1.45626699924469], step: 33800, lr: 9.912876276844171e-05
2023-03-10 01:35:18,233 44k INFO Train Epoch: 71 [83%]
2023-03-10 01:35:18,233 44k INFO Losses: [2.4639079570770264, 2.7155954837799072, 6.236012935638428, 13.29322338104248, 1.030239462852478], step: 34000, lr: 9.912876276844171e-05
2023-03-10 01:35:21,108 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\G_34000.pth
2023-03-10 01:35:21,817 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\D_34000.pth
2023-03-10 01:35:22,457 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_31000.pth
2023-03-10 01:35:22,502 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_31000.pth
2023-03-10 01:35:49,743 44k INFO ====> Epoch: 71, cost 177.01 s
2023-03-10 01:36:40,245 44k INFO Train Epoch: 72 [25%]
2023-03-10 01:36:40,245 44k INFO Losses: [2.442927837371826, 2.3261184692382812, 7.148874282836914, 20.592294692993164, 1.2950329780578613], step: 34200, lr: 9.911637167309565e-05
2023-03-10 01:37:47,534 44k INFO Train Epoch: 72 [67%]
2023-03-10 01:37:47,535 44k INFO Losses: [2.361867666244507, 2.5377039909362793, 9.16838264465332, 20.23963737487793, 1.267628788948059], step: 34400, lr: 9.911637167309565e-05
2023-03-10 01:38:41,790 44k INFO ====> Epoch: 72, cost 172.05 s
2023-03-10 01:39:05,113 44k INFO Train Epoch: 73 [8%]
2023-03-10 01:39:05,113 44k INFO Losses: [2.377958297729492, 2.407437324523926, 7.49679708480835, 19.603193283081055, 1.3932220935821533], step: 34600, lr: 9.910398212663652e-05
2023-03-10 01:40:12,752 44k INFO Train Epoch: 73 [50%]
2023-03-10 01:40:12,752 44k INFO Losses: [2.8247716426849365, 1.983457326889038, 7.667083740234375, 18.327037811279297, 1.186850666999817], step: 34800, lr: 9.910398212663652e-05
2023-03-10 01:41:20,634 44k INFO Train Epoch: 73 [92%]
2023-03-10 01:41:20,634 44k INFO Losses: [2.676158905029297, 2.190196990966797, 8.072098731994629, 20.96558952331543, 1.487762451171875], step: 35000, lr: 9.910398212663652e-05
2023-03-10 01:41:23,452 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\G_35000.pth
2023-03-10 01:41:24,107 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\D_35000.pth
2023-03-10 01:41:24,753 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_32000.pth
2023-03-10 01:41:24,796 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_32000.pth
2023-03-10 01:41:38,571 44k INFO ====> Epoch: 73, cost 176.78 s
2023-03-10 01:42:42,735 44k INFO Train Epoch: 74 [33%]
2023-03-10 01:42:42,735 44k INFO Losses: [2.4282281398773193, 2.2438385486602783, 7.792109966278076, 16.740440368652344, 1.418903112411499], step: 35200, lr: 9.909159412887068e-05
2023-03-10 01:43:50,137 44k INFO Train Epoch: 74 [75%]
2023-03-10 01:43:50,137 44k INFO Losses: [2.3510305881500244, 2.086587429046631, 8.392033576965332, 19.2855224609375, 1.3113199472427368], step: 35400, lr: 9.909159412887068e-05
2023-03-10 01:44:31,072 44k INFO ====> Epoch: 74, cost 172.50 s
2023-03-10 01:45:08,139 44k INFO Train Epoch: 75 [17%]
2023-03-10 01:45:08,140 44k INFO Losses: [2.467280149459839, 2.1546030044555664, 10.749366760253906, 20.11714744567871, 1.3803324699401855], step: 35600, lr: 9.907920767960457e-05
2023-03-10 01:46:15,495 44k INFO Train Epoch: 75 [58%]
2023-03-10 01:46:15,496 44k INFO Losses: [2.525101900100708, 2.134913444519043, 9.422788619995117, 19.940322875976562, 0.5853258371353149], step: 35800, lr: 9.907920767960457e-05
2023-03-10 01:47:23,299 44k INFO ====> Epoch: 75, cost 172.23 s
2023-03-10 01:47:32,861 44k INFO Train Epoch: 76 [0%]
2023-03-10 01:47:32,862 44k INFO Losses: [2.5574803352355957, 2.750986337661743, 11.486483573913574, 19.715124130249023, 1.2036763429641724], step: 36000, lr: 9.906682277864462e-05
2023-03-10 01:47:35,699 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\G_36000.pth
2023-03-10 01:47:36,413 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\D_36000.pth
2023-03-10 01:47:37,054 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_33000.pth
2023-03-10 01:47:37,095 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_33000.pth
2023-03-10 01:48:45,080 44k INFO Train Epoch: 76 [42%]
2023-03-10 01:48:45,081 44k INFO Losses: [2.5309414863586426, 2.177687644958496, 9.103509902954102, 21.4112606048584, 1.5377999544143677], step: 36200, lr: 9.906682277864462e-05
2023-03-10 01:49:52,743 44k INFO Train Epoch: 76 [83%]
2023-03-10 01:49:52,743 44k INFO Losses: [2.428403854370117, 2.3139281272888184, 8.578767776489258, 17.937519073486328, 0.9283303022384644], step: 36400, lr: 9.906682277864462e-05
2023-03-10 01:50:20,193 44k INFO ====> Epoch: 76, cost 176.89 s
2023-03-10 01:51:10,799 44k INFO Train Epoch: 77 [25%]
2023-03-10 01:51:10,799 44k INFO Losses: [2.14625883102417, 2.4239108562469482, 8.833675384521484, 22.046072006225586, 1.6874737739562988], step: 36600, lr: 9.905443942579728e-05
2023-03-10 01:52:18,271 44k INFO Train Epoch: 77 [67%]
2023-03-10 01:52:18,271 44k INFO Losses: [2.4782447814941406, 2.2567358016967773, 9.361751556396484, 19.60032844543457, 1.0126616954803467], step: 36800, lr: 9.905443942579728e-05
2023-03-10 01:53:12,930 44k INFO ====> Epoch: 77, cost 172.74 s
2023-03-10 01:53:36,252 44k INFO Train Epoch: 78 [8%]
2023-03-10 01:53:36,253 44k INFO Losses: [2.397923231124878, 2.278186082839966, 9.692902565002441, 20.617475509643555, 1.501961350440979], step: 37000, lr: 9.904205762086905e-05
2023-03-10 01:53:39,120 44k INFO Saving model and optimizer state at iteration 78 to ./logs\44k\G_37000.pth
2023-03-10 01:53:39,783 44k INFO Saving model and optimizer state at iteration 78 to ./logs\44k\D_37000.pth
2023-03-10 01:53:40,426 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_34000.pth
2023-03-10 01:53:40,470 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_34000.pth
2023-03-10 01:54:47,992 44k INFO Train Epoch: 78 [50%]
2023-03-10 01:54:47,993 44k INFO Losses: [2.231703996658325, 2.5518651008605957, 13.325697898864746, 23.593231201171875, 1.0728912353515625], step: 37200, lr: 9.904205762086905e-05
2023-03-10 01:55:55,931 44k INFO Train Epoch: 78 [92%]
2023-03-10 01:55:55,931 44k INFO Losses: [2.4122819900512695, 2.2260191440582275, 10.02711009979248, 19.738201141357422, 1.4223968982696533], step: 37400, lr: 9.904205762086905e-05
2023-03-10 01:56:09,855 44k INFO ====> Epoch: 78, cost 176.92 s
2023-03-10 01:57:14,053 44k INFO Train Epoch: 79 [33%]
2023-03-10 01:57:14,053 44k INFO Losses: [2.320143461227417, 2.3592755794525146, 9.966156005859375, 19.560787200927734, 0.8694154620170593], step: 37600, lr: 9.902967736366644e-05
2023-03-10 01:58:21,521 44k INFO Train Epoch: 79 [75%]
2023-03-10 01:58:21,522 44k INFO Losses: [2.4515087604522705, 2.297215700149536, 10.873857498168945, 19.608396530151367, 0.8475047945976257], step: 37800, lr: 9.902967736366644e-05
2023-03-10 01:59:02,291 44k INFO ====> Epoch: 79, cost 172.44 s
2023-03-10 01:59:39,216 44k INFO Train Epoch: 80 [17%]
2023-03-10 01:59:39,216 44k INFO Losses: [2.621845245361328, 2.218869209289551, 5.727862358093262, 16.931283950805664, 0.8260491490364075], step: 38000, lr: 9.901729865399597e-05
2023-03-10 01:59:42,060 44k INFO Saving model and optimizer state at iteration 80 to ./logs\44k\G_38000.pth
2023-03-10 01:59:42,727 44k INFO Saving model and optimizer state at iteration 80 to ./logs\44k\D_38000.pth
2023-03-10 01:59:43,375 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_35000.pth
2023-03-10 01:59:43,420 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_35000.pth
2023-03-10 02:00:50,683 44k INFO Train Epoch: 80 [58%]
2023-03-10 02:00:50,683 44k INFO Losses: [2.549434185028076, 1.9785349369049072, 11.415558815002441, 20.010009765625, 0.5362642407417297], step: 38200, lr: 9.901729865399597e-05
2023-03-10 02:01:58,910 44k INFO ====> Epoch: 80, cost 176.62 s
2023-03-10 02:02:08,366 44k INFO Train Epoch: 81 [0%]
2023-03-10 02:02:08,367 44k INFO Losses: [2.4345498085021973, 2.3334052562713623, 9.885810852050781, 20.113014221191406, 1.0990657806396484], step: 38400, lr: 9.900492149166423e-05
2023-03-10 02:03:16,743 44k INFO Train Epoch: 81 [42%]
2023-03-10 02:03:16,743 44k INFO Losses: [2.421642541885376, 2.2066826820373535, 9.998400688171387, 20.750362396240234, 1.4486732482910156], step: 38600, lr: 9.900492149166423e-05
2023-03-10 02:04:24,112 44k INFO Train Epoch: 81 [83%]
2023-03-10 02:04:24,112 44k INFO Losses: [2.84961199760437, 1.9679622650146484, 5.906168460845947, 13.561853408813477, 1.0935133695602417], step: 38800, lr: 9.900492149166423e-05
2023-03-10 02:04:51,495 44k INFO ====> Epoch: 81, cost 172.59 s
2023-03-10 02:05:42,240 44k INFO Train Epoch: 82 [25%]
2023-03-10 02:05:42,241 44k INFO Losses: [2.660776138305664, 2.1231043338775635, 5.990094184875488, 16.123315811157227, 1.2103991508483887], step: 39000, lr: 9.899254587647776e-05
2023-03-10 02:05:45,115 44k INFO Saving model and optimizer state at iteration 82 to ./logs\44k\G_39000.pth
2023-03-10 02:05:45,796 44k INFO Saving model and optimizer state at iteration 82 to ./logs\44k\D_39000.pth
2023-03-10 02:05:46,430 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_36000.pth
2023-03-10 02:05:46,472 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_36000.pth
2023-03-10 02:06:53,820 44k INFO Train Epoch: 82 [67%]
2023-03-10 02:06:53,820 44k INFO Losses: [2.469160318374634, 2.2834420204162598, 6.774503707885742, 18.82451057434082, 1.4767872095108032], step: 39200, lr: 9.899254587647776e-05
2023-03-10 02:07:48,450 44k INFO ====> Epoch: 82, cost 176.96 s
2023-03-10 02:08:11,793 44k INFO Train Epoch: 83 [8%]
2023-03-10 02:08:11,793 44k INFO Losses: [2.364262580871582, 2.6441338062286377, 12.407462120056152, 22.07462501525879, 1.457845687866211], step: 39400, lr: 9.89801718082432e-05
2023-03-10 02:09:19,794 44k INFO Train Epoch: 83 [50%]
2023-03-10 02:09:19,795 44k INFO Losses: [2.25032901763916, 2.2811808586120605, 11.213237762451172, 19.5427188873291, 1.0862118005752563], step: 39600, lr: 9.89801718082432e-05
2023-03-10 02:10:28,014 44k INFO Train Epoch: 83 [92%]
2023-03-10 02:10:28,014 44k INFO Losses: [2.5107741355895996, 2.2757210731506348, 8.738912582397461, 20.884809494018555, 1.2771507501602173], step: 39800, lr: 9.89801718082432e-05
2023-03-10 02:10:41,922 44k INFO ====> Epoch: 83, cost 173.47 s
2023-03-10 02:11:46,230 44k INFO Train Epoch: 84 [33%]
2023-03-10 02:11:46,231 44k INFO Losses: [2.48128080368042, 2.08335280418396, 8.903714179992676, 18.561168670654297, 1.1124554872512817], step: 40000, lr: 9.896779928676716e-05
2023-03-10 02:11:49,122 44k INFO Saving model and optimizer state at iteration 84 to ./logs\44k\G_40000.pth
2023-03-10 02:11:49,789 44k INFO Saving model and optimizer state at iteration 84 to ./logs\44k\D_40000.pth
2023-03-10 02:11:50,426 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_37000.pth
2023-03-10 02:11:50,472 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_37000.pth
2023-03-10 02:12:58,092 44k INFO Train Epoch: 84 [75%]
2023-03-10 02:12:58,092 44k INFO Losses: [2.395432472229004, 2.1394405364990234, 10.208571434020996, 18.325084686279297, 1.2203572988510132], step: 40200, lr: 9.896779928676716e-05
2023-03-10 02:13:39,245 44k INFO ====> Epoch: 84, cost 177.32 s
2023-03-10 02:14:16,173 44k INFO Train Epoch: 85 [17%]
2023-03-10 02:14:16,174 44k INFO Losses: [2.69671368598938, 2.345823287963867, 7.80781888961792, 19.50173568725586, 1.3170828819274902], step: 40400, lr: 9.895542831185631e-05
2023-03-10 02:15:24,166 44k INFO Train Epoch: 85 [58%]
2023-03-10 02:15:24,166 44k INFO Losses: [2.6812539100646973, 1.9495866298675537, 9.164358139038086, 19.448680877685547, 1.007983684539795], step: 40600, lr: 9.895542831185631e-05
2023-03-10 02:16:32,383 44k INFO ====> Epoch: 85, cost 173.14 s
2023-03-10 02:16:42,069 44k INFO Train Epoch: 86 [0%]
2023-03-10 02:16:42,070 44k INFO Losses: [2.4388413429260254, 2.4413211345672607, 7.891645431518555, 16.501232147216797, 1.5631059408187866], step: 40800, lr: 9.894305888331732e-05
2023-03-10 02:17:50,311 44k INFO Train Epoch: 86 [42%]
2023-03-10 02:17:50,311 44k INFO Losses: [2.3541574478149414, 2.366830587387085, 7.922110557556152, 17.2901554107666, 1.2641910314559937], step: 41000, lr: 9.894305888331732e-05
2023-03-10 02:17:53,218 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\G_41000.pth
2023-03-10 02:17:53,884 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\D_41000.pth
2023-03-10 02:17:54,524 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_38000.pth
2023-03-10 02:17:54,554 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_38000.pth
2023-03-10 02:19:02,397 44k INFO Train Epoch: 86 [83%]
2023-03-10 02:19:02,397 44k INFO Losses: [2.4955737590789795, 2.472790479660034, 9.767871856689453, 18.626310348510742, 1.079459547996521], step: 41200, lr: 9.894305888331732e-05
2023-03-10 02:19:30,023 44k INFO ====> Epoch: 86, cost 177.64 s
2023-03-10 02:20:20,876 44k INFO Train Epoch: 87 [25%]
2023-03-10 02:20:20,876 44k INFO Losses: [2.48044753074646, 2.5758087635040283, 7.957856178283691, 18.676929473876953, 1.226336121559143], step: 41400, lr: 9.89306910009569e-05
2023-03-10 02:21:28,785 44k INFO Train Epoch: 87 [67%]
2023-03-10 02:21:28,786 44k INFO Losses: [2.4272122383117676, 2.3490002155303955, 11.180319786071777, 19.71648597717285, 1.3572531938552856], step: 41600, lr: 9.89306910009569e-05
2023-03-10 02:22:23,748 44k INFO ====> Epoch: 87, cost 173.72 s
2023-03-10 02:22:47,143 44k INFO Train Epoch: 88 [8%]
2023-03-10 02:22:47,143 44k INFO Losses: [2.5656580924987793, 2.608222246170044, 7.74412202835083, 19.378009796142578, 1.027908444404602], step: 41800, lr: 9.891832466458178e-05
2023-03-10 02:23:55,332 44k INFO Train Epoch: 88 [50%]
2023-03-10 02:23:55,333 44k INFO Losses: [2.2740159034729004, 2.479419708251953, 8.435369491577148, 20.661813735961914, 1.2032458782196045], step: 42000, lr: 9.891832466458178e-05
2023-03-10 02:23:58,155 44k INFO Saving model and optimizer state at iteration 88 to ./logs\44k\G_42000.pth
2023-03-10 02:23:58,815 44k INFO Saving model and optimizer state at iteration 88 to ./logs\44k\D_42000.pth
2023-03-10 02:23:59,465 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_39000.pth
2023-03-10 02:23:59,500 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_39000.pth
2023-03-10 02:25:07,775 44k INFO Train Epoch: 88 [92%]
2023-03-10 02:25:07,775 44k INFO Losses: [2.510584831237793, 2.3520522117614746, 9.499608993530273, 20.018051147460938, 1.5329962968826294], step: 42200, lr: 9.891832466458178e-05
2023-03-10 02:25:21,791 44k INFO ====> Epoch: 88, cost 178.04 s
2023-03-10 02:26:26,238 44k INFO Train Epoch: 89 [33%]
2023-03-10 02:26:26,238 44k INFO Losses: [2.6319351196289062, 2.1740670204162598, 8.002816200256348, 15.51085090637207, 0.9273039698600769], step: 42400, lr: 9.89059598739987e-05
2023-03-10 02:27:34,175 44k INFO Train Epoch: 89 [75%]
2023-03-10 02:27:34,176 44k INFO Losses: [2.442526340484619, 2.130934000015259, 8.880859375, 18.535947799682617, 1.3197972774505615], step: 42600, lr: 9.89059598739987e-05
2023-03-10 02:28:15,448 44k INFO ====> Epoch: 89, cost 173.66 s
2023-03-10 02:28:52,653 44k INFO Train Epoch: 90 [17%]
2023-03-10 02:28:52,653 44k INFO Losses: [2.368036985397339, 2.2541091442108154, 8.655840873718262, 16.256921768188477, 0.9659361243247986], step: 42800, lr: 9.889359662901445e-05
2023-03-10 02:30:00,733 44k INFO Train Epoch: 90 [58%]
2023-03-10 02:30:00,733 44k INFO Losses: [2.5140435695648193, 2.1225194931030273, 10.145703315734863, 19.713003158569336, 1.1034414768218994], step: 43000, lr: 9.889359662901445e-05
2023-03-10 02:30:03,588 44k INFO Saving model and optimizer state at iteration 90 to ./logs\44k\G_43000.pth
2023-03-10 02:30:04,251 44k INFO Saving model and optimizer state at iteration 90 to ./logs\44k\D_43000.pth
2023-03-10 02:30:04,896 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_40000.pth
2023-03-10 02:30:04,940 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_40000.pth
2023-03-10 02:31:13,486 44k INFO ====> Epoch: 90, cost 178.04 s
2023-03-10 02:31:23,157 44k INFO Train Epoch: 91 [0%]
2023-03-10 02:31:23,157 44k INFO Losses: [2.6231162548065186, 2.3267500400543213, 9.666067123413086, 20.483911514282227, 1.411985993385315], step: 43200, lr: 9.888123492943583e-05
2023-03-10 02:32:31,848 44k INFO Train Epoch: 91 [42%]
2023-03-10 02:32:31,848 44k INFO Losses: [2.5606441497802734, 2.2295196056365967, 8.491843223571777, 19.538007736206055, 1.3592729568481445], step: 43400, lr: 9.888123492943583e-05
2023-03-10 02:33:40,236 44k INFO Train Epoch: 91 [83%]
2023-03-10 02:33:40,236 44k INFO Losses: [2.499122142791748, 2.3185603618621826, 10.27350902557373, 19.33860206604004, 1.1391656398773193], step: 43600, lr: 9.888123492943583e-05
2023-03-10 02:34:07,991 44k INFO ====> Epoch: 91, cost 174.51 s
2023-03-10 02:34:58,832 44k INFO Train Epoch: 92 [25%]
2023-03-10 02:34:58,832 44k INFO Losses: [2.6315338611602783, 2.0515692234039307, 6.57252836227417, 16.995559692382812, 1.818975806236267], step: 43800, lr: 9.886887477506964e-05
2023-03-10 02:36:07,030 44k INFO Train Epoch: 92 [67%]
2023-03-10 02:36:07,030 44k INFO Losses: [2.626278877258301, 2.1475539207458496, 11.777432441711426, 20.90768814086914, 1.6315051317214966], step: 44000, lr: 9.886887477506964e-05
2023-03-10 02:36:09,870 44k INFO Saving model and optimizer state at iteration 92 to ./logs\44k\G_44000.pth
2023-03-10 02:36:10,546 44k INFO Saving model and optimizer state at iteration 92 to ./logs\44k\D_44000.pth
2023-03-10 02:36:11,182 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_41000.pth
2023-03-10 02:36:11,211 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_41000.pth
2023-03-10 02:37:06,191 44k INFO ====> Epoch: 92, cost 178.20 s
2023-03-10 02:37:29,435 44k INFO Train Epoch: 93 [8%]
2023-03-10 02:37:29,435 44k INFO Losses: [2.3690264225006104, 2.605854034423828, 11.098400115966797, 18.885906219482422, 1.5265321731567383], step: 44200, lr: 9.885651616572276e-05
2023-03-10 02:38:37,631 44k INFO Train Epoch: 93 [50%]
2023-03-10 02:38:37,631 44k INFO Losses: [2.06398344039917, 2.863292932510376, 12.592281341552734, 20.93813705444336, 1.0604581832885742], step: 44400, lr: 9.885651616572276e-05
2023-03-10 02:39:46,036 44k INFO Train Epoch: 93 [92%]
2023-03-10 02:39:46,036 44k INFO Losses: [2.446136474609375, 2.138134479522705, 9.516739845275879, 20.067073822021484, 1.145005226135254], step: 44600, lr: 9.885651616572276e-05
2023-03-10 02:40:00,039 44k INFO ====> Epoch: 93, cost 173.85 s
2023-03-10 02:41:04,489 44k INFO Train Epoch: 94 [33%]
2023-03-10 02:41:04,490 44k INFO Losses: [2.616746425628662, 2.1049997806549072, 10.677129745483398, 19.70547866821289, 1.3173761367797852], step: 44800, lr: 9.884415910120204e-05
2023-03-10 02:42:12,886 44k INFO Train Epoch: 94 [75%]
2023-03-10 02:42:12,886 44k INFO Losses: [2.3231070041656494, 2.3727831840515137, 10.548434257507324, 20.124788284301758, 1.2973746061325073], step: 45000, lr: 9.884415910120204e-05
2023-03-10 02:42:15,760 44k INFO Saving model and optimizer state at iteration 94 to ./logs\44k\G_45000.pth
2023-03-10 02:42:16,432 44k INFO Saving model and optimizer state at iteration 94 to ./logs\44k\D_45000.pth
2023-03-10 02:42:17,076 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_42000.pth
2023-03-10 02:42:17,120 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_42000.pth
2023-03-10 02:42:58,370 44k INFO ====> Epoch: 94, cost 178.33 s
2023-03-10 02:43:35,738 44k INFO Train Epoch: 95 [17%]
2023-03-10 02:43:35,739 44k INFO Losses: [2.409135103225708, 2.6231465339660645, 8.928082466125488, 18.22141456604004, 1.2995362281799316], step: 45200, lr: 9.883180358131438e-05
2023-03-10 02:44:44,443 44k INFO Train Epoch: 95 [58%]
2023-03-10 02:44:44,444 44k INFO Losses: [2.667802333831787, 2.135019540786743, 7.641706466674805, 19.74456214904785, 1.1708847284317017], step: 45400, lr: 9.883180358131438e-05
2023-03-10 02:45:54,320 44k INFO ====> Epoch: 95, cost 175.95 s
2023-03-10 02:46:04,052 44k INFO Train Epoch: 96 [0%]
2023-03-10 02:46:04,053 44k INFO Losses: [2.4180850982666016, 2.2143120765686035, 9.62486457824707, 19.350177764892578, 1.1389224529266357], step: 45600, lr: 9.881944960586671e-05
2023-03-10 02:47:13,669 44k INFO Train Epoch: 96 [42%]
2023-03-10 02:47:13,669 44k INFO Losses: [2.5621836185455322, 2.185269594192505, 8.74885368347168, 18.95093536376953, 1.366514801979065], step: 45800, lr: 9.881944960586671e-05
2023-03-10 02:48:22,431 44k INFO Train Epoch: 96 [83%]
2023-03-10 02:48:22,431 44k INFO Losses: [2.735117197036743, 2.094013214111328, 5.737760543823242, 16.01725196838379, 1.3262369632720947], step: 46000, lr: 9.881944960586671e-05
2023-03-10 02:48:25,356 44k INFO Saving model and optimizer state at iteration 96 to ./logs\44k\G_46000.pth
2023-03-10 02:48:26,026 44k INFO Saving model and optimizer state at iteration 96 to ./logs\44k\D_46000.pth
2023-03-10 02:48:26,684 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_43000.pth
2023-03-10 02:48:26,723 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_43000.pth
2023-03-10 02:48:54,290 44k INFO ====> Epoch: 96, cost 179.97 s
2023-03-10 02:49:45,401 44k INFO Train Epoch: 97 [25%]
2023-03-10 02:49:45,401 44k INFO Losses: [2.4873476028442383, 2.1946587562561035, 8.176788330078125, 21.95309829711914, 1.4926960468292236], step: 46200, lr: 9.880709717466598e-05
2023-03-10 02:50:53,651 44k INFO Train Epoch: 97 [67%]
2023-03-10 02:50:53,651 44k INFO Losses: [2.4239509105682373, 2.3146116733551025, 8.190256118774414, 20.13637924194336, 1.6566094160079956], step: 46400, lr: 9.880709717466598e-05
2023-03-10 02:51:49,166 44k INFO ====> Epoch: 97, cost 174.88 s
2023-03-10 02:52:12,596 44k INFO Train Epoch: 98 [8%]
2023-03-10 02:52:12,596 44k INFO Losses: [2.55031681060791, 2.106630325317383, 9.410569190979004, 19.512451171875, 0.8256558179855347], step: 46600, lr: 9.879474628751914e-05
2023-03-10 02:53:21,071 44k INFO Train Epoch: 98 [50%]
2023-03-10 02:53:21,071 44k INFO Losses: [2.265395402908325, 2.331608772277832, 12.848119735717773, 23.142833709716797, 1.2709184885025024], step: 46800, lr: 9.879474628751914e-05
2023-03-10 02:54:29,657 44k INFO Train Epoch: 98 [92%]
2023-03-10 02:54:29,657 44k INFO Losses: [2.5988707542419434, 2.24489426612854, 6.459345817565918, 17.870325088500977, 1.1991639137268066], step: 47000, lr: 9.879474628751914e-05
2023-03-10 02:54:32,666 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\G_47000.pth
2023-03-10 02:54:33,353 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\D_47000.pth
2023-03-10 02:54:34,065 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_44000.pth
2023-03-10 02:54:34,104 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_44000.pth
2023-03-10 02:54:48,050 44k INFO ====> Epoch: 98, cost 178.88 s
2023-03-10 02:55:52,660 44k INFO Train Epoch: 99 [33%]
2023-03-10 02:55:52,660 44k INFO Losses: [2.360546350479126, 2.0910091400146484, 11.3775053024292, 19.653621673583984, 0.7903617024421692], step: 47200, lr: 9.87823969442332e-05
2023-03-10 02:57:01,311 44k INFO Train Epoch: 99 [75%]
2023-03-10 02:57:01,312 44k INFO Losses: [2.281738519668579, 2.0748977661132812, 14.552555084228516, 22.295255661010742, 1.0403858423233032], step: 47400, lr: 9.87823969442332e-05
2023-03-10 02:57:42,889 44k INFO ====> Epoch: 99, cost 174.84 s
2023-03-10 02:58:20,233 44k INFO Train Epoch: 100 [17%]
2023-03-10 02:58:20,233 44k INFO Losses: [2.4321982860565186, 2.2581112384796143, 8.384130477905273, 18.8560733795166, 1.0937718152999878], step: 47600, lr: 9.877004914461517e-05
2023-03-10 02:59:28,688 44k INFO Train Epoch: 100 [58%]
2023-03-10 02:59:28,688 44k INFO Losses: [2.4722695350646973, 2.299949884414673, 9.094573974609375, 19.8353214263916, 0.8555085062980652], step: 47800, lr: 9.877004914461517e-05
2023-03-10 03:00:37,940 44k INFO ====> Epoch: 100, cost 175.05 s
2023-03-10 20:35:29,331 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 130, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'aisa': 0}, 'model_dir': './logs\\44k'}
2023-03-10 20:35:29,364 44k WARNING git hash values are different. cea6df30(saved) != cc5b3bbe(current)
2023-03-10 20:35:31,775 44k INFO Loaded checkpoint './logs\44k\G_47000.pth' (iteration 98)
2023-03-10 20:35:32,138 44k INFO Loaded checkpoint './logs\44k\D_47000.pth' (iteration 98)
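The learning rates recorded throughout this log are consistent with the configured schedule: learning_rate = 0.0001 decayed by lr_decay = 0.999875 once per epoch, i.e. roughly 1e-4 * 0.999875**(N-1) at epoch N (about 9.9203e-05 at epoch 65, step 31000 above). After this restart the schedule sits one decay step ahead of the first run (the resumed epoch 99 logs the lr the first run logged for epoch 100), presumably because the scheduler advanced an extra step on resume. A minimal sketch reproducing those values, assuming a per-epoch ExponentialLR-style decay (names below are illustrative, not the training script's own):

BASE_LR = 1e-4       # 'learning_rate' from the config dump above
LR_DECAY = 0.999875  # 'lr_decay' from the config dump above

def lr_at_epoch(epoch, extra_steps=0):
    # lr after (epoch - 1 + extra_steps) per-epoch decay steps
    return BASE_LR * LR_DECAY ** (epoch - 1 + extra_steps)

print(lr_at_epoch(65))                 # ~9.92031418779886e-05 (epoch 65, step 31000)
print(lr_at_epoch(98))                 # ~9.879474628751914e-05 (epoch 98, first run)
print(lr_at_epoch(98, extra_steps=1))  # ~9.87823969442332e-05  (epoch 98, resumed run below)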
2023-03-10 20:36:02,858 44k INFO Train Epoch: 98 [8%]
2023-03-10 20:36:02,858 44k INFO Losses: [2.5353682041168213, 2.407303810119629, 9.827933311462402, 21.578031539916992, 1.420819640159607], step: 46600, lr: 9.87823969442332e-05
2023-03-10 20:37:19,179 44k INFO Train Epoch: 98 [50%]
2023-03-10 20:37:19,179 44k INFO Losses: [2.1962313652038574, 2.842005729675293, 15.469377517700195, 24.453657150268555, 1.4245936870574951], step: 46800, lr: 9.87823969442332e-05
2023-03-10 20:38:40,142 44k INFO Train Epoch: 98 [92%]
2023-03-10 20:38:40,143 44k INFO Losses: [2.6426429748535156, 2.1265904903411865, 8.68488883972168, 19.841407775878906, 1.4562296867370605], step: 47000, lr: 9.87823969442332e-05
2023-03-10 20:38:43,936 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\G_47000.pth
2023-03-10 20:38:44,670 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\D_47000.pth
2023-03-10 20:39:03,499 44k INFO ====> Epoch: 98, cost 214.17 s
2023-03-10 20:40:09,325 44k INFO Train Epoch: 99 [33%]
2023-03-10 20:40:09,325 44k INFO Losses: [2.7206225395202637, 2.06782603263855, 9.90842342376709, 16.023006439208984, 0.8731637597084045], step: 47200, lr: 9.877004914461517e-05
2023-03-10 20:41:19,344 44k INFO Train Epoch: 99 [75%]
2023-03-10 20:41:19,344 44k INFO Losses: [2.9099414348602295, 1.8907878398895264, 9.848502159118652, 19.099763870239258, 0.8655398488044739], step: 47400, lr: 9.877004914461517e-05
2023-03-10 20:42:04,453 44k INFO ====> Epoch: 99, cost 180.95 s
2023-03-10 20:42:45,610 44k INFO Train Epoch: 100 [17%]
2023-03-10 20:42:45,660 44k INFO Losses: [2.334908962249756, 2.1345458030700684, 11.670673370361328, 21.05769920349121, 1.3268905878067017], step: 47600, lr: 9.875770288847208e-05
2023-03-10 20:43:59,490 44k INFO Train Epoch: 100 [58%]
2023-03-10 20:43:59,491 44k INFO Losses: [2.5611777305603027, 2.2350287437438965, 9.660056114196777, 18.508785247802734, 1.0775189399719238], step: 47800, lr: 9.875770288847208e-05
2023-03-10 20:45:08,413 44k INFO ====> Epoch: 100, cost 183.96 s
2023-03-10 20:45:18,485 44k INFO Train Epoch: 101 [0%]
2023-03-10 20:45:18,486 44k INFO Losses: [2.6127429008483887, 2.197486400604248, 8.127498626708984, 19.76656150817871, 0.9327872395515442], step: 48000, lr: 9.874535817561101e-05
2023-03-10 20:45:21,461 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\G_48000.pth
2023-03-10 20:45:22,200 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\D_48000.pth
2023-03-10 20:45:22,889 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_45000.pth
2023-03-10 20:45:22,890 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_45000.pth
2023-03-10 20:46:32,489 44k INFO Train Epoch: 101 [42%]
2023-03-10 20:46:32,490 44k INFO Losses: [2.455129861831665, 2.3377742767333984, 11.811349868774414, 21.679683685302734, 1.3665108680725098], step: 48200, lr: 9.874535817561101e-05
2023-03-10 20:47:41,197 44k INFO Train Epoch: 101 [83%]
2023-03-10 20:47:41,198 44k INFO Losses: [2.470296859741211, 2.341890811920166, 9.399015426635742, 17.43128204345703, 1.2156797647476196], step: 48400, lr: 9.874535817561101e-05
2023-03-10 20:48:09,263 44k INFO ====> Epoch: 101, cost 180.85 s
2023-03-10 20:49:01,158 44k INFO Train Epoch: 102 [25%]
2023-03-10 20:49:01,158 44k INFO Losses: [2.3319873809814453, 2.324233055114746, 9.67007064819336, 24.918399810791016, 0.9685816764831543], step: 48600, lr: 9.873301500583906e-05
2023-03-10 20:50:09,867 44k INFO Train Epoch: 102 [67%]
2023-03-10 20:50:09,867 44k INFO Losses: [2.3446614742279053, 2.4207370281219482, 7.530040264129639, 20.79374122619629, 1.0588432550430298], step: 48800, lr: 9.873301500583906e-05
2023-03-10 20:51:05,474 44k INFO ====> Epoch: 102, cost 176.21 s
2023-03-10 20:51:29,008 44k INFO Train Epoch: 103 [8%]
2023-03-10 20:51:29,008 44k INFO Losses: [2.3435935974121094, 2.6919546127319336, 11.258846282958984, 20.273414611816406, 0.9719326496124268], step: 49000, lr: 9.872067337896332e-05
2023-03-10 20:51:31,939 44k INFO Saving model and optimizer state at iteration 103 to ./logs\44k\G_49000.pth
2023-03-10 20:51:32,669 44k INFO Saving model and optimizer state at iteration 103 to ./logs\44k\D_49000.pth
2023-03-10 20:51:33,304 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_46000.pth
2023-03-10 20:51:33,304 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_46000.pth
2023-03-10 20:52:40,414 44k INFO Train Epoch: 103 [50%]
2023-03-10 20:52:40,414 44k INFO Losses: [2.4688682556152344, 2.48207950592041, 11.951579093933105, 22.14127540588379, 1.2969614267349243], step: 49200, lr: 9.872067337896332e-05
2023-03-10 20:53:47,701 44k INFO Train Epoch: 103 [92%]
2023-03-10 20:53:47,702 44k INFO Losses: [2.2953248023986816, 2.2772092819213867, 10.907705307006836, 19.548583984375, 1.156449556350708], step: 49400, lr: 9.872067337896332e-05
2023-03-10 20:54:01,511 44k INFO ====> Epoch: 103, cost 176.04 s
2023-03-10 20:55:05,400 44k INFO Train Epoch: 104 [33%]
2023-03-10 20:55:05,401 44k INFO Losses: [2.5428290367126465, 2.2322463989257812, 10.12808895111084, 16.4022159576416, 1.1398893594741821], step: 49600, lr: 9.870833329479095e-05
2023-03-10 20:56:12,464 44k INFO Train Epoch: 104 [75%]
2023-03-10 20:56:12,465 44k INFO Losses: [2.383042335510254, 2.2254109382629395, 11.079497337341309, 19.57961082458496, 0.9717331528663635], step: 49800, lr: 9.870833329479095e-05
2023-03-10 20:56:53,131 44k INFO ====> Epoch: 104, cost 171.62 s
2023-03-10 20:57:30,075 44k INFO Train Epoch: 105 [17%]
2023-03-10 20:57:30,076 44k INFO Losses: [2.7522201538085938, 1.7528356313705444, 6.633907794952393, 14.481117248535156, 1.1196534633636475], step: 50000, lr: 9.86959947531291e-05
2023-03-10 20:57:32,976 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\G_50000.pth
2023-03-10 20:57:33,703 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\D_50000.pth
2023-03-10 20:57:34,393 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_47000.pth
2023-03-10 20:57:34,423 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_47000.pth
2023-03-10 20:58:41,243 44k INFO Train Epoch: 105 [58%]
2023-03-10 20:58:41,244 44k INFO Losses: [2.69797682762146, 2.271547317504883, 9.141576766967773, 18.336898803710938, 0.6949782371520996], step: 50200, lr: 9.86959947531291e-05
2023-03-10 20:59:48,641 44k INFO ====> Epoch: 105, cost 175.51 s
2023-03-10 20:59:57,968 44k INFO Train Epoch: 106 [0%]
2023-03-10 20:59:57,968 44k INFO Losses: [2.3631558418273926, 2.461293935775757, 8.532692909240723, 20.91162872314453, 1.1543737649917603], step: 50400, lr: 9.868365775378495e-05
2023-03-10 21:01:05,333 44k INFO Train Epoch: 106 [42%]
2023-03-10 21:01:05,334 44k INFO Losses: [2.436030626296997, 2.436185836791992, 9.54138469696045, 20.4470272064209, 1.0038262605667114], step: 50600, lr: 9.868365775378495e-05
2023-03-10 21:02:11,941 44k INFO Train Epoch: 106 [83%]
2023-03-10 21:02:11,941 44k INFO Losses: [2.7014520168304443, 2.3312535285949707, 7.470837116241455, 18.34264373779297, 0.9698311686515808], step: 50800, lr: 9.868365775378495e-05
2023-03-10 21:02:39,227 44k INFO ====> Epoch: 106, cost 170.59 s
2023-03-10 21:03:29,363 44k INFO Train Epoch: 107 [25%]
2023-03-10 21:03:29,363 44k INFO Losses: [2.588062047958374, 2.1817493438720703, 6.350254058837891, 20.88898277282715, 1.352725863456726], step: 51000, lr: 9.867132229656573e-05
2023-03-10 21:03:32,187 44k INFO Saving model and optimizer state at iteration 107 to ./logs\44k\G_51000.pth
2023-03-10 21:03:32,847 44k INFO Saving model and optimizer state at iteration 107 to ./logs\44k\D_51000.pth
2023-03-10 21:03:33,493 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_48000.pth
2023-03-10 21:03:33,531 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_48000.pth
2023-03-10 21:04:39,935 44k INFO Train Epoch: 107 [67%]
2023-03-10 21:04:39,935 44k INFO Losses: [2.5752532482147217, 2.3378093242645264, 7.246879577636719, 18.540746688842773, 1.5605111122131348], step: 51200, lr: 9.867132229656573e-05
2023-03-10 21:05:33,710 44k INFO ====> Epoch: 107, cost 174.48 s
2023-03-10 21:05:56,633 44k INFO Train Epoch: 108 [8%]
2023-03-10 21:05:56,633 44k INFO Losses: [2.6402182579040527, 2.2177233695983887, 12.219375610351562, 21.974668502807617, 0.8670315146446228], step: 51400, lr: 9.865898838127865e-05
2023-03-10 21:07:03,614 44k INFO Train Epoch: 108 [50%]
2023-03-10 21:07:03,615 44k INFO Losses: [2.1195716857910156, 2.484283924102783, 13.186516761779785, 22.448562622070312, 1.218029499053955], step: 51600, lr: 9.865898838127865e-05
2023-03-10 21:08:10,540 44k INFO Train Epoch: 108 [92%]
2023-03-10 21:08:10,540 44k INFO Losses: [2.5393869876861572, 2.2606606483459473, 10.176072120666504, 19.993452072143555, 1.0593936443328857], step: 51800, lr: 9.865898838127865e-05
2023-03-10 21:08:24,336 44k INFO ====> Epoch: 108, cost 170.63 s
2023-03-10 21:09:27,776 44k INFO Train Epoch: 109 [33%]
2023-03-10 21:09:27,776 44k INFO Losses: [2.381410837173462, 2.297278642654419, 8.875455856323242, 18.65761947631836, 1.0452433824539185], step: 52000, lr: 9.864665600773098e-05
2023-03-10 21:09:30,643 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\G_52000.pth
2023-03-10 21:09:31,305 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\D_52000.pth
2023-03-10 21:09:31,950 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_49000.pth
2023-03-10 21:09:31,986 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_49000.pth
2023-03-10 21:10:38,543 44k INFO Train Epoch: 109 [75%]
2023-03-10 21:10:38,543 44k INFO Losses: [2.5127453804016113, 2.160771369934082, 8.82069206237793, 19.43600845336914, 0.9794597029685974], step: 52200, lr: 9.864665600773098e-05
2023-03-10 21:11:18,882 44k INFO ====> Epoch: 109, cost 174.55 s
2023-03-10 21:11:55,516 44k INFO Train Epoch: 110 [17%]
2023-03-10 21:11:55,517 44k INFO Losses: [2.3839221000671387, 2.1822118759155273, 10.269407272338867, 20.051111221313477, 0.8753963708877563], step: 52400, lr: 9.863432517573002e-05
2023-03-10 21:13:02,784 44k INFO Train Epoch: 110 [58%]
2023-03-10 21:13:02,784 44k INFO Losses: [2.4736809730529785, 2.1552302837371826, 11.130157470703125, 18.118209838867188, 0.7142908573150635], step: 52600, lr: 9.863432517573002e-05
2023-03-10 21:14:09,994 44k INFO ====> Epoch: 110, cost 171.11 s
2023-03-10 21:14:19,459 44k INFO Train Epoch: 111 [0%]
2023-03-10 21:14:19,459 44k INFO Losses: [2.620481491088867, 2.339320659637451, 7.493619441986084, 17.740379333496094, 1.2952884435653687], step: 52800, lr: 9.862199588508305e-05
2023-03-10 21:15:26,927 44k INFO Train Epoch: 111 [42%]
2023-03-10 21:15:26,927 44k INFO Losses: [2.55251145362854, 2.334249258041382, 8.751977920532227, 18.8341121673584, 1.351853609085083], step: 53000, lr: 9.862199588508305e-05
2023-03-10 21:15:29,685 44k INFO Saving model and optimizer state at iteration 111 to ./logs\44k\G_53000.pth
2023-03-10 21:15:30,391 44k INFO Saving model and optimizer state at iteration 111 to ./logs\44k\D_53000.pth
2023-03-10 21:15:31,015 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_50000.pth
2023-03-10 21:15:31,057 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_50000.pth
2023-03-10 21:16:37,669 44k INFO Train Epoch: 111 [83%]
2023-03-10 21:16:37,669 44k INFO Losses: [2.4264345169067383, 2.5841052532196045, 10.159029006958008, 19.77457618713379, 1.0425516366958618], step: 53200, lr: 9.862199588508305e-05
2023-03-10 21:17:04,939 44k INFO ====> Epoch: 111, cost 174.95 s
2023-03-10 21:17:55,868 44k INFO Train Epoch: 112 [25%]
2023-03-10 21:17:55,868 44k INFO Losses: [2.5143566131591797, 2.2728426456451416, 10.925360679626465, 20.717330932617188, 1.4278674125671387], step: 53400, lr: 9.86096681355974e-05
2023-03-10 21:19:05,634 44k INFO Train Epoch: 112 [67%]
2023-03-10 21:19:05,634 44k INFO Losses: [2.384084701538086, 2.6019084453582764, 9.453219413757324, 20.705461502075195, 1.2243319749832153], step: 53600, lr: 9.86096681355974e-05
2023-03-10 21:20:03,117 44k INFO ====> Epoch: 112, cost 178.18 s
2023-03-10 21:20:29,210 44k INFO Train Epoch: 113 [8%]
2023-03-10 21:20:29,211 44k INFO Losses: [2.832183837890625, 2.3098342418670654, 7.901485919952393, 16.831310272216797, 1.2201958894729614], step: 53800, lr: 9.859734192708044e-05
2023-03-10 21:21:39,463 44k INFO Train Epoch: 113 [50%]
2023-03-10 21:21:39,463 44k INFO Losses: [2.3680665493011475, 2.384824752807617, 11.000890731811523, 21.351930618286133, 1.1414543390274048], step: 54000, lr: 9.859734192708044e-05
2023-03-10 21:21:42,591 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\G_54000.pth
2023-03-10 21:21:43,275 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\D_54000.pth
2023-03-10 21:21:44,019 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_51000.pth
2023-03-10 21:21:44,062 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_51000.pth
2023-03-10 21:22:54,525 44k INFO Train Epoch: 113 [92%]
2023-03-10 21:22:54,526 44k INFO Losses: [2.3481788635253906, 2.6041312217712402, 11.768047332763672, 20.798093795776367, 1.249822974205017], step: 54200, lr: 9.859734192708044e-05
2023-03-10 21:23:08,747 44k INFO ====> Epoch: 113, cost 185.63 s
2023-03-10 21:24:18,165 44k INFO Train Epoch: 114 [33%]
2023-03-10 21:24:18,165 44k INFO Losses: [2.7301435470581055, 1.8387211561203003, 8.390239715576172, 17.569154739379883, 1.0150202512741089], step: 54400, lr: 9.858501725933955e-05
2023-03-10 21:25:30,890 44k INFO Train Epoch: 114 [75%]
2023-03-10 21:25:30,890 44k INFO Losses: [2.5253982543945312, 2.2110021114349365, 7.582423210144043, 17.346508026123047, 1.0292681455612183], step: 54600, lr: 9.858501725933955e-05
2023-03-10 21:26:13,009 44k INFO ====> Epoch: 114, cost 184.26 s
2023-03-10 21:26:51,601 44k INFO Train Epoch: 115 [17%]
2023-03-10 21:26:51,602 44k INFO Losses: [2.4145846366882324, 2.1539323329925537, 9.008591651916504, 20.50507164001465, 1.070541501045227], step: 54800, lr: 9.857269413218213e-05
2023-03-10 21:28:02,314 44k INFO Train Epoch: 115 [58%]
2023-03-10 21:28:02,315 44k INFO Losses: [2.488292932510376, 2.1845619678497314, 10.300168991088867, 17.974010467529297, 1.3763055801391602], step: 55000, lr: 9.857269413218213e-05
2023-03-10 21:28:05,374 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\G_55000.pth
2023-03-10 21:28:06,061 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\D_55000.pth
2023-03-10 21:28:06,761 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_52000.pth
2023-03-10 21:28:06,808 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_52000.pth
2023-03-10 21:29:17,878 44k INFO ====> Epoch: 115, cost 184.87 s
2023-03-10 21:29:28,476 44k INFO Train Epoch: 116 [0%]
2023-03-10 21:29:28,479 44k INFO Losses: [2.38480544090271, 2.469709873199463, 11.554682731628418, 20.43757438659668, 1.0705339908599854], step: 55200, lr: 9.85603725454156e-05
2023-03-10 21:31:05,324 44k INFO Train Epoch: 116 [42%]
2023-03-10 21:31:05,325 44k INFO Losses: [2.4011828899383545, 2.1080901622772217, 8.725489616394043, 18.80291748046875, 1.369552731513977], step: 55400, lr: 9.85603725454156e-05
2023-03-10 21:32:17,395 44k INFO Train Epoch: 116 [83%]
2023-03-10 21:32:17,396 44k INFO Losses: [2.541152238845825, 2.3482189178466797, 8.99125862121582, 16.391376495361328, 0.9556304812431335], step: 55600, lr: 9.85603725454156e-05
2023-03-10 21:32:46,549 44k INFO ====> Epoch: 116, cost 208.67 s
2023-03-10 21:33:40,507 44k INFO Train Epoch: 117 [25%]
2023-03-10 21:33:40,508 44k INFO Losses: [2.7141623497009277, 2.009599447250366, 6.5364227294921875, 19.62139892578125, 1.184281349182129], step: 55800, lr: 9.854805249884741e-05
2023-03-10 21:34:51,906 44k INFO Train Epoch: 117 [67%]
2023-03-10 21:34:51,907 44k INFO Losses: [2.4313013553619385, 2.482556104660034, 7.637519359588623, 19.015520095825195, 1.0271650552749634], step: 56000, lr: 9.854805249884741e-05
2023-03-10 21:34:55,229 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\G_56000.pth
2023-03-10 21:34:55,994 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\D_56000.pth
2023-03-10 21:34:56,710 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_53000.pth
2023-03-10 21:34:56,756 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_53000.pth
2023-03-10 21:35:55,101 44k INFO ====> Epoch: 117, cost 188.55 s
2023-03-10 21:36:19,928 44k INFO Train Epoch: 118 [8%]
2023-03-10 21:36:19,928 44k INFO Losses: [2.366985321044922, 2.3517630100250244, 10.775310516357422, 22.367156982421875, 1.5898412466049194], step: 56200, lr: 9.853573399228505e-05
2023-03-10 21:37:32,371 44k INFO Train Epoch: 118 [50%]
2023-03-10 21:37:32,372 44k INFO Losses: [2.1534993648529053, 2.738316774368286, 14.169914245605469, 23.25755500793457, 1.1168138980865479], step: 56400, lr: 9.853573399228505e-05
2023-03-10 21:38:42,706 44k INFO Train Epoch: 118 [92%]
2023-03-10 21:38:42,706 44k INFO Losses: [2.412947654724121, 2.391221046447754, 9.04958438873291, 20.948633193969727, 1.259826898574829], step: 56600, lr: 9.853573399228505e-05
2023-03-10 21:38:57,091 44k INFO ====> Epoch: 118, cost 181.99 s
2023-03-10 21:40:04,195 44k INFO Train Epoch: 119 [33%]
2023-03-10 21:40:04,195 44k INFO Losses: [2.3386669158935547, 2.438791513442993, 11.563238143920898, 19.185998916625977, 1.0667145252227783], step: 56800, lr: 9.8523417025536e-05
2023-03-10 21:41:15,377 44k INFO Train Epoch: 119 [75%]
2023-03-10 21:41:15,378 44k INFO Losses: [2.6045753955841064, 2.2635958194732666, 9.972951889038086, 19.166879653930664, 1.159376621246338], step: 57000, lr: 9.8523417025536e-05
2023-03-10 21:41:18,323 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\G_57000.pth
2023-03-10 21:41:19,051 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\D_57000.pth
2023-03-10 21:41:19,716 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_54000.pth
2023-03-10 21:41:19,744 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_54000.pth
2023-03-10 21:42:03,462 44k INFO ====> Epoch: 119, cost 186.37 s
2023-03-10 21:42:43,503 44k INFO Train Epoch: 120 [17%]
2023-03-10 21:42:43,504 44k INFO Losses: [2.3767828941345215, 2.386854648590088, 10.218685150146484, 17.212636947631836, 1.011182188987732], step: 57200, lr: 9.851110159840781e-05
2023-03-10 21:43:55,018 44k INFO Train Epoch: 120 [58%]
2023-03-10 21:43:55,018 44k INFO Losses: [2.395080327987671, 2.1684978008270264, 11.069388389587402, 18.909616470336914, 1.292800784111023], step: 57400, lr: 9.851110159840781e-05
2023-03-10 21:45:08,967 44k INFO ====> Epoch: 120, cost 185.50 s
2023-03-10 21:45:19,277 44k INFO Train Epoch: 121 [0%]
2023-03-10 21:45:19,278 44k INFO Losses: [2.4142961502075195, 2.5343339443206787, 9.690354347229004, 19.851322174072266, 1.2172437906265259], step: 57600, lr: 9.8498787710708e-05
2023-03-10 21:46:35,034 44k INFO Train Epoch: 121 [42%]
2023-03-10 21:46:35,040 44k INFO Losses: [2.4165563583374023, 2.4216089248657227, 10.888091087341309, 21.200775146484375, 1.0869629383087158], step: 57800, lr: 9.8498787710708e-05
2023-03-10 21:47:56,745 44k INFO Train Epoch: 121 [83%]
2023-03-10 21:47:56,750 44k INFO Losses: [2.282374858856201, 2.471604824066162, 7.937741279602051, 15.686677932739258, 0.9070390462875366], step: 58000, lr: 9.8498787710708e-05
2023-03-10 21:48:00,311 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\G_58000.pth
2023-03-10 21:48:01,094 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\D_58000.pth
2023-03-10 21:48:01,866 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_55000.pth
2023-03-10 21:48:01,919 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_55000.pth
2023-03-10 21:48:34,404 44k INFO ====> Epoch: 121, cost 205.44 s
2023-03-10 21:49:30,578 44k INFO Train Epoch: 122 [25%]
2023-03-10 21:49:30,579 44k INFO Losses: [2.721353530883789, 1.9338716268539429, 4.5783538818359375, 15.087936401367188, 1.4242318868637085], step: 58200, lr: 9.848647536224416e-05
2023-03-10 21:50:42,731 44k INFO Train Epoch: 122 [67%]
2023-03-10 21:50:42,732 44k INFO Losses: [2.5823512077331543, 2.185462236404419, 8.540579795837402, 19.86931037902832, 1.4148286581039429], step: 58400, lr: 9.848647536224416e-05
2023-03-10 21:51:40,371 44k INFO ====> Epoch: 122, cost 185.97 s
2023-03-10 21:52:05,125 44k INFO Train Epoch: 123 [8%]
2023-03-10 21:52:05,125 44k INFO Losses: [2.561491012573242, 2.201587438583374, 7.961848735809326, 19.036191940307617, 1.2058391571044922], step: 58600, lr: 9.847416455282387e-05
2023-03-10 21:53:17,070 44k INFO Train Epoch: 123 [50%]
2023-03-10 21:53:17,071 44k INFO Losses: [2.123305320739746, 2.409733295440674, 12.327372550964355, 21.999929428100586, 1.4899181127548218], step: 58800, lr: 9.847416455282387e-05
2023-03-10 21:54:29,461 44k INFO Train Epoch: 123 [92%]
2023-03-10 21:54:29,461 44k INFO Losses: [2.3109254837036133, 2.4792587757110596, 10.145918846130371, 20.15896224975586, 1.159805417060852], step: 59000, lr: 9.847416455282387e-05
2023-03-10 21:54:32,485 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\G_59000.pth
2023-03-10 21:54:33,217 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\D_59000.pth
2023-03-10 21:54:33,871 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_56000.pth
2023-03-10 21:54:33,917 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_56000.pth
2023-03-10 21:54:48,359 44k INFO ====> Epoch: 123, cost 187.99 s
2023-03-10 21:55:57,038 44k INFO Train Epoch: 124 [33%]
2023-03-10 21:55:57,040 44k INFO Losses: [2.3641140460968018, 2.43084454536438, 9.14197063446045, 15.625608444213867, 1.3044573068618774], step: 59200, lr: 9.846185528225477e-05
2023-03-10 21:57:18,646 44k INFO Train Epoch: 124 [75%]
2023-03-10 21:57:18,646 44k INFO Losses: [2.476146697998047, 2.431386709213257, 10.71250057220459, 19.3042049407959, 1.2434529066085815], step: 59400, lr: 9.846185528225477e-05
2023-03-10 21:58:07,564 44k INFO ====> Epoch: 124, cost 199.20 s
2023-03-10 21:58:47,452 44k INFO Train Epoch: 125 [17%]
2023-03-10 21:58:47,453 44k INFO Losses: [2.5148580074310303, 2.395150899887085, 8.830897331237793, 19.638525009155273, 1.011060118675232], step: 59600, lr: 9.84495475503445e-05
2023-03-10 21:59:59,119 44k INFO Train Epoch: 125 [58%]
2023-03-10 21:59:59,120 44k INFO Losses: [2.6108510494232178, 2.1697988510131836, 8.154233932495117, 18.474275588989258, 0.7748168706893921], step: 59800, lr: 9.84495475503445e-05
2023-03-10 22:01:23,055 44k INFO ====> Epoch: 125, cost 195.49 s
2023-03-10 22:01:33,480 44k INFO Train Epoch: 126 [0%]
2023-03-10 22:01:33,480 44k INFO Losses: [2.602079391479492, 2.0111606121063232, 6.074387073516846, 18.45685577392578, 1.1268730163574219], step: 60000, lr: 9.84372413569007e-05
2023-03-10 22:01:36,481 44k INFO Saving model and optimizer state at iteration 126 to ./logs\44k\G_60000.pth
2023-03-10 22:01:37,157 44k INFO Saving model and optimizer state at iteration 126 to ./logs\44k\D_60000.pth
2023-03-10 22:01:37,801 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_57000.pth
2023-03-10 22:01:37,831 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_57000.pth
2023-03-10 22:02:58,106 44k INFO Train Epoch: 126 [42%]
2023-03-10 22:02:58,107 44k INFO Losses: [2.395695686340332, 2.563560962677002, 11.31453800201416, 21.63389778137207, 1.244583249092102], step: 60200, lr: 9.84372413569007e-05
2023-03-10 22:04:08,064 44k INFO Train Epoch: 126 [83%]
2023-03-10 22:04:08,064 44k INFO Losses: [2.509502410888672, 2.230083703994751, 9.750213623046875, 20.110122680664062, 0.9419601559638977], step: 60400, lr: 9.84372413569007e-05
2023-03-10 22:04:37,008 44k INFO ====> Epoch: 126, cost 193.95 s
2023-03-10 22:05:32,909 44k INFO Train Epoch: 127 [25%]
2023-03-10 22:05:32,910 44k INFO Losses: [2.613250255584717, 2.1424992084503174, 8.195858001708984, 21.610280990600586, 1.7847726345062256], step: 60600, lr: 9.842493670173108e-05
2023-03-10 22:06:43,533 44k INFO Train Epoch: 127 [67%]
2023-03-10 22:06:43,534 44k INFO Losses: [2.90395450592041, 1.9409708976745605, 6.407796382904053, 18.405025482177734, 1.2056750059127808], step: 60800, lr: 9.842493670173108e-05
2023-03-10 22:07:39,964 44k INFO ====> Epoch: 127, cost 182.96 s
2023-03-10 22:08:04,166 44k INFO Train Epoch: 128 [8%]
2023-03-10 22:08:04,167 44k INFO Losses: [2.3968169689178467, 2.3018054962158203, 10.053366661071777, 20.41103744506836, 0.9091386795043945], step: 61000, lr: 9.841263358464336e-05
2023-03-10 22:08:07,147 44k INFO Saving model and optimizer state at iteration 128 to ./logs\44k\G_61000.pth
2023-03-10 22:08:07,832 44k INFO Saving model and optimizer state at iteration 128 to ./logs\44k\D_61000.pth
2023-03-10 22:08:08,484 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_58000.pth
2023-03-10 22:08:08,522 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_58000.pth
2023-03-10 22:09:17,347 44k INFO Train Epoch: 128 [50%]
2023-03-10 22:09:17,348 44k INFO Losses: [2.279478073120117, 2.633390426635742, 11.67409896850586, 23.693246841430664, 0.983919084072113], step: 61200, lr: 9.841263358464336e-05
2023-03-10 22:10:26,651 44k INFO Train Epoch: 128 [92%]
2023-03-10 22:10:26,651 44k INFO Losses: [2.5496785640716553, 2.323397159576416, 9.32941722869873, 18.87200927734375, 1.106742024421692], step: 61400, lr: 9.841263358464336e-05
2023-03-10 22:10:41,022 44k INFO ====> Epoch: 128, cost 181.06 s
2023-03-10 22:11:48,102 44k INFO Train Epoch: 129 [33%]
2023-03-10 22:11:48,103 44k INFO Losses: [2.4743332862854004, 2.4446592330932617, 8.519780158996582, 19.945512771606445, 0.560448408126831], step: 61600, lr: 9.840033200544528e-05
2023-03-10 22:12:58,896 44k INFO Train Epoch: 129 [75%]
2023-03-10 22:12:58,897 44k INFO Losses: [2.324143409729004, 2.315108299255371, 11.200321197509766, 19.443256378173828, 1.3374336957931519], step: 61800, lr: 9.840033200544528e-05
2023-03-10 22:13:40,942 44k INFO ====> Epoch: 129, cost 179.92 s
2023-03-10 22:14:22,860 44k INFO Train Epoch: 130 [17%]
2023-03-10 22:14:22,861 44k INFO Losses: [2.336583375930786, 2.301057815551758, 11.023492813110352, 20.191280364990234, 1.4816441535949707], step: 62000, lr: 9.838803196394459e-05
2023-03-10 22:14:26,606 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\G_62000.pth
2023-03-10 22:14:27,473 44k INFO Saving model and optimizer state at iteration 130 to ./logs\44k\D_62000.pth
2023-03-10 22:14:28,256 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_59000.pth
2023-03-10 22:14:28,290 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_59000.pth
2023-03-10 22:15:44,749 44k INFO Train Epoch: 130 [58%]
2023-03-10 22:15:44,749 44k INFO Losses: [2.54327392578125, 2.0947272777557373, 9.685648918151855, 18.373594284057617, 1.1531988382339478], step: 62200, lr: 9.838803196394459e-05
2023-03-10 22:16:57,755 44k INFO ====> Epoch: 130, cost 196.81 s