TachibanaKimika: upload kageaki model (commit a5c7ab1)
2023-03-26 17:39:38,032 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 10, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kageaki': 0}, 'model_dir': './logs\\44k'}
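The per-epoch learning rates printed below follow directly from the `learning_rate` and `lr_decay` entries in this config: each epoch's lr matches `learning_rate * lr_decay ** (epoch - 1)`. This closed form is inferred from the logged values (presumably one decay step per epoch, as with PyTorch's `ExponentialLR`), not taken from the project's code:

```python
# Exponential per-epoch learning-rate decay, as configured above.
# Assumption: one decay step per epoch (ExponentialLR-style schedule),
# inferred from the lr values printed in this log.
LEARNING_RATE = 0.0001
LR_DECAY = 0.999875

def lr_at_epoch(epoch: int) -> float:
    """Learning rate for a 1-indexed training epoch."""
    return LEARNING_RATE * LR_DECAY ** (epoch - 1)

print(lr_at_epoch(2))   # ~9.99875e-05, the epoch-2 value in the log
print(lr_at_epoch(10))  # ~9.98875562335968e-05, the epoch-10 value
```

This reproduces, for example, the epoch-3 value `9.99750015625e-05` logged further down.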
2023-03-26 17:39:38,058 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current)
2023-03-26 17:39:40,379 44k INFO Loaded checkpoint './logs\44k\G_0.pth' (iteration 1)
2023-03-26 17:39:40,933 44k INFO Loaded checkpoint './logs\44k\D_0.pth' (iteration 1)
2023-03-26 17:39:56,094 44k INFO Train Epoch: 1 [0%]
2023-03-26 17:39:56,094 44k INFO Losses: [2.5377755165100098, 2.903019666671753, 10.91149616241455, 27.32355308532715, 4.235077857971191], step: 0, lr: 0.0001
2023-03-26 17:40:00,069 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\G_0.pth
2023-03-26 17:40:00,833 44k INFO Saving model and optimizer state at iteration 1 to ./logs\44k\D_0.pth
2023-03-26 17:41:29,168 44k INFO Train Epoch: 1 [27%]
2023-03-26 17:41:29,169 44k INFO Losses: [2.4305684566497803, 2.2155966758728027, 10.047785758972168, 18.85099983215332, 1.3608992099761963], step: 200, lr: 0.0001
2023-03-26 17:42:53,530 44k INFO Train Epoch: 1 [55%]
2023-03-26 17:42:53,530 44k INFO Losses: [2.4205574989318848, 2.6263515949249268, 8.473373413085938, 15.497400283813477, 1.5911765098571777], step: 400, lr: 0.0001
2023-03-26 17:44:18,316 44k INFO Train Epoch: 1 [82%]
2023-03-26 17:44:18,317 44k INFO Losses: [2.2375619411468506, 2.511993646621704, 14.825691223144531, 23.141586303710938, 0.9749691486358643], step: 600, lr: 0.0001
2023-03-26 17:45:13,228 44k INFO ====> Epoch: 1, cost 335.20 s
2023-03-26 17:45:47,332 44k INFO Train Epoch: 2 [10%]
2023-03-26 17:45:47,333 44k INFO Losses: [2.397797107696533, 2.373901844024658, 8.271860122680664, 15.803826332092285, 0.9859277009963989], step: 800, lr: 9.99875e-05
2023-03-26 17:46:56,083 44k INFO Train Epoch: 2 [37%]
2023-03-26 17:46:56,084 44k INFO Losses: [2.59622859954834, 2.0695106983184814, 9.532870292663574, 15.16479778289795, 1.2511309385299683], step: 1000, lr: 9.99875e-05
2023-03-26 17:46:59,140 44k INFO Saving model and optimizer state at iteration 2 to ./logs\44k\G_1000.pth
2023-03-26 17:46:59,838 44k INFO Saving model and optimizer state at iteration 2 to ./logs\44k\D_1000.pth
2023-03-26 17:48:08,202 44k INFO Train Epoch: 2 [65%]
2023-03-26 17:48:08,202 44k INFO Losses: [2.5527453422546387, 1.9406085014343262, 11.667440414428711, 15.09459400177002, 1.3829652070999146], step: 1200, lr: 9.99875e-05
2023-03-26 17:49:16,170 44k INFO Train Epoch: 2 [92%]
2023-03-26 17:49:16,171 44k INFO Losses: [2.694300651550293, 2.3796257972717285, 9.117003440856934, 14.53477954864502, 1.4188555479049683], step: 1400, lr: 9.99875e-05
2023-03-26 17:49:34,893 44k INFO ====> Epoch: 2, cost 261.67 s
2023-03-26 17:50:33,180 44k INFO Train Epoch: 3 [20%]
2023-03-26 17:50:33,180 44k INFO Losses: [2.5938282012939453, 2.1179003715515137, 8.693191528320312, 14.279634475708008, 1.063474416732788], step: 1600, lr: 9.99750015625e-05
2023-03-26 17:51:40,718 44k INFO Train Epoch: 3 [47%]
2023-03-26 17:51:40,719 44k INFO Losses: [2.639939308166504, 2.2141642570495605, 14.255056381225586, 21.819019317626953, 1.4886901378631592], step: 1800, lr: 9.99750015625e-05
2023-03-26 17:52:48,902 44k INFO Train Epoch: 3 [75%]
2023-03-26 17:52:48,903 44k INFO Losses: [2.212458610534668, 2.7131903171539307, 12.33214282989502, 18.393356323242188, 1.5152887105941772], step: 2000, lr: 9.99750015625e-05
2023-03-26 17:52:51,956 44k INFO Saving model and optimizer state at iteration 3 to ./logs\44k\G_2000.pth
2023-03-26 17:52:52,656 44k INFO Saving model and optimizer state at iteration 3 to ./logs\44k\D_2000.pth
2023-03-26 17:53:55,294 44k INFO ====> Epoch: 3, cost 260.40 s
2023-03-26 17:54:10,066 44k INFO Train Epoch: 4 [2%]
2023-03-26 17:54:10,066 44k INFO Losses: [2.425558090209961, 2.2590694427490234, 11.511456489562988, 21.43431854248047, 1.5592743158340454], step: 2200, lr: 9.996250468730469e-05
2023-03-26 17:55:18,218 44k INFO Train Epoch: 4 [30%]
2023-03-26 17:55:18,219 44k INFO Losses: [2.645364999771118, 2.0146186351776123, 9.450831413269043, 17.087810516357422, 1.1853785514831543], step: 2400, lr: 9.996250468730469e-05
2023-03-26 17:56:27,341 44k INFO Train Epoch: 4 [57%]
2023-03-26 17:56:27,341 44k INFO Losses: [2.5699472427368164, 2.794123649597168, 6.703786849975586, 14.36314868927002, 1.4213221073150635], step: 2600, lr: 9.996250468730469e-05
2023-03-26 17:57:35,779 44k INFO Train Epoch: 4 [85%]
2023-03-26 17:57:35,780 44k INFO Losses: [2.6018178462982178, 2.334325075149536, 12.623315811157227, 20.649280548095703, 1.24234139919281], step: 2800, lr: 9.996250468730469e-05
2023-03-26 17:58:13,732 44k INFO ====> Epoch: 4, cost 258.44 s
2023-03-26 17:58:54,260 44k INFO Train Epoch: 5 [12%]
2023-03-26 17:58:54,261 44k INFO Losses: [2.7838244438171387, 2.1547186374664307, 6.306748390197754, 14.336325645446777, 1.6287577152252197], step: 3000, lr: 9.995000937421877e-05
2023-03-26 17:58:57,535 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\G_3000.pth
2023-03-26 17:58:58,351 44k INFO Saving model and optimizer state at iteration 5 to ./logs\44k\D_3000.pth
2023-03-26 18:00:08,606 44k INFO Train Epoch: 5 [40%]
2023-03-26 18:00:08,606 44k INFO Losses: [2.3616766929626465, 2.534543037414551, 10.59256649017334, 16.077133178710938, 1.266877293586731], step: 3200, lr: 9.995000937421877e-05
2023-03-26 18:01:16,910 44k INFO Train Epoch: 5 [67%]
2023-03-26 18:01:16,911 44k INFO Losses: [2.5994486808776855, 2.2965142726898193, 8.335776329040527, 14.85177230834961, 1.1047139167785645], step: 3400, lr: 9.995000937421877e-05
2023-03-26 18:02:25,202 44k INFO Train Epoch: 5 [95%]
2023-03-26 18:02:25,202 44k INFO Losses: [2.473801374435425, 2.1478612422943115, 11.251014709472656, 18.507349014282227, 1.3262391090393066], step: 3600, lr: 9.995000937421877e-05
2023-03-26 18:02:38,679 44k INFO ====> Epoch: 5, cost 264.95 s
2023-03-26 18:03:43,268 44k INFO Train Epoch: 6 [22%]
2023-03-26 18:03:43,268 44k INFO Losses: [2.3330955505371094, 2.1014323234558105, 8.225790977478027, 13.843910217285156, 1.0384153127670288], step: 3800, lr: 9.993751562304699e-05
2023-03-26 18:04:51,317 44k INFO Train Epoch: 6 [49%]
2023-03-26 18:04:51,317 44k INFO Losses: [2.625917911529541, 2.358022689819336, 14.715231895446777, 20.801605224609375, 1.2775871753692627], step: 4000, lr: 9.993751562304699e-05
2023-03-26 18:04:54,566 44k INFO Saving model and optimizer state at iteration 6 to ./logs\44k\G_4000.pth
2023-03-26 18:04:55,271 44k INFO Saving model and optimizer state at iteration 6 to ./logs\44k\D_4000.pth
2023-03-26 18:04:55,968 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_1000.pth
2023-03-26 18:04:56,007 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_1000.pth
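The two deletions above reflect `keep_ckpts: 3` in the config: whenever a new G_/D_ checkpoint pair is saved, the oldest surviving numbered pair is removed so that only the three most recent remain (the initial G_0/D_0 pair is left alone throughout this log). A rough sketch of that rotation, using a hypothetical `clean_checkpoints` helper rather than the project's actual cleanup code:

```python
import os
import re

def clean_checkpoints(model_dir: str, keep_ckpts: int = 3) -> list[str]:
    """Delete all but the newest `keep_ckpts` numbered G_/D_ checkpoints.

    Sketch only: the step-0 checkpoints are skipped, matching the log,
    and the real so-vits-svc cleanup may differ in details.
    Returns the paths that were removed.
    """
    removed = []
    for prefix in ("G_", "D_"):
        # Numbered checkpoints for this prefix, oldest step first;
        # [1-9]\d* deliberately excludes G_0.pth / D_0.pth.
        ckpts = sorted(
            (f for f in os.listdir(model_dir)
             if re.fullmatch(rf"{prefix}[1-9]\d*\.pth", f)),
            key=lambda f: int(f[len(prefix):-4]),
        )
        for f in ckpts[:-keep_ckpts]:
            path = os.path.join(model_dir, f)
            os.remove(path)
            removed.append(path)
    return removed
```

With G_0 through G_4000 on disk, as at this point in the run, this removes exactly the G_1000/D_1000 pair.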
2023-03-26 18:06:04,309 44k INFO Train Epoch: 6 [77%]
2023-03-26 18:06:04,309 44k INFO Losses: [2.6342339515686035, 2.3278164863586426, 8.934106826782227, 17.502992630004883, 1.3025579452514648], step: 4200, lr: 9.993751562304699e-05
2023-03-26 18:07:01,147 44k INFO ====> Epoch: 6, cost 262.47 s
2023-03-26 18:07:21,228 44k INFO Train Epoch: 7 [4%]
2023-03-26 18:07:21,228 44k INFO Losses: [2.71872878074646, 2.0840580463409424, 9.441400527954102, 13.771171569824219, 2.0024354457855225], step: 4400, lr: 9.99250234335941e-05
2023-03-26 18:08:29,729 44k INFO Train Epoch: 7 [32%]
2023-03-26 18:08:29,729 44k INFO Losses: [2.683831214904785, 2.223313093185425, 9.49968147277832, 16.27731704711914, 1.6022688150405884], step: 4600, lr: 9.99250234335941e-05
2023-03-26 18:09:37,548 44k INFO Train Epoch: 7 [59%]
2023-03-26 18:09:37,549 44k INFO Losses: [2.5431575775146484, 2.251652717590332, 11.220416069030762, 14.8209228515625, 1.8512675762176514], step: 4800, lr: 9.99250234335941e-05
2023-03-26 18:10:45,529 44k INFO Train Epoch: 7 [87%]
2023-03-26 18:10:45,530 44k INFO Losses: [2.621312141418457, 2.5917396545410156, 10.091723442077637, 20.77967071533203, 1.9012998342514038], step: 5000, lr: 9.99250234335941e-05
2023-03-26 18:10:48,617 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\G_5000.pth
2023-03-26 18:10:49,344 44k INFO Saving model and optimizer state at iteration 7 to ./logs\44k\D_5000.pth
2023-03-26 18:10:50,044 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_2000.pth
2023-03-26 18:10:50,088 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_2000.pth
2023-03-26 18:11:22,369 44k INFO ====> Epoch: 7, cost 261.22 s
2023-03-26 18:12:07,018 44k INFO Train Epoch: 8 [14%]
2023-03-26 18:12:07,018 44k INFO Losses: [2.364821434020996, 2.5530383586883545, 13.18740463256836, 20.40546417236328, 1.319463849067688], step: 5200, lr: 9.991253280566489e-05
2023-03-26 18:13:15,123 44k INFO Train Epoch: 8 [42%]
2023-03-26 18:13:15,124 44k INFO Losses: [2.7404699325561523, 1.8259596824645996, 6.623441696166992, 11.952010154724121, 1.8055057525634766], step: 5400, lr: 9.991253280566489e-05
2023-03-26 18:14:23,488 44k INFO Train Epoch: 8 [69%]
2023-03-26 18:14:23,488 44k INFO Losses: [2.615143299102783, 2.2321488857269287, 14.727906227111816, 21.101221084594727, 1.0454636812210083], step: 5600, lr: 9.991253280566489e-05
2023-03-26 18:15:31,397 44k INFO Train Epoch: 8 [97%]
2023-03-26 18:15:31,397 44k INFO Losses: [2.510842800140381, 2.3249058723449707, 12.244482040405273, 19.545791625976562, 1.3539787530899048], step: 5800, lr: 9.991253280566489e-05
2023-03-26 18:15:39,508 44k INFO ====> Epoch: 8, cost 257.14 s
2023-03-26 18:16:48,980 44k INFO Train Epoch: 9 [24%]
2023-03-26 18:16:48,980 44k INFO Losses: [2.5039987564086914, 2.5685911178588867, 7.713610649108887, 16.99083137512207, 1.3174961805343628], step: 6000, lr: 9.990004373906418e-05
2023-03-26 18:16:52,077 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\G_6000.pth
2023-03-26 18:16:52,794 44k INFO Saving model and optimizer state at iteration 9 to ./logs\44k\D_6000.pth
2023-03-26 18:16:53,496 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_3000.pth
2023-03-26 18:16:53,528 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_3000.pth
2023-03-26 18:18:01,015 44k INFO Train Epoch: 9 [52%]
2023-03-26 18:18:01,015 44k INFO Losses: [2.7452139854431152, 2.009155511856079, 7.5243353843688965, 13.27002239227295, 1.6816840171813965], step: 6200, lr: 9.990004373906418e-05
2023-03-26 18:19:09,591 44k INFO Train Epoch: 9 [79%]
2023-03-26 18:19:09,591 44k INFO Losses: [2.6896116733551025, 2.317214012145996, 13.586854934692383, 21.118850708007812, 1.6939321756362915], step: 6400, lr: 9.990004373906418e-05
2023-03-26 18:20:00,969 44k INFO ====> Epoch: 9, cost 261.46 s
2023-03-26 18:20:26,612 44k INFO Train Epoch: 10 [7%]
2023-03-26 18:20:26,613 44k INFO Losses: [2.38116717338562, 2.4761710166931152, 10.26457405090332, 13.916464805603027, 1.423701524734497], step: 6600, lr: 9.98875562335968e-05
2023-03-26 18:21:36,408 44k INFO Train Epoch: 10 [34%]
2023-03-26 18:21:36,409 44k INFO Losses: [2.6033682823181152, 2.052231788635254, 7.61380672454834, 16.19424819946289, 1.3552354574203491], step: 6800, lr: 9.98875562335968e-05
2023-03-26 18:22:46,088 44k INFO Train Epoch: 10 [62%]
2023-03-26 18:22:46,089 44k INFO Losses: [2.8540406227111816, 2.013615131378174, 9.860207557678223, 16.183984756469727, 1.046617865562439], step: 7000, lr: 9.98875562335968e-05
2023-03-26 18:22:49,269 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\G_7000.pth
2023-03-26 18:22:50,010 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\D_7000.pth
2023-03-26 18:22:50,776 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_4000.pth
2023-03-26 18:22:50,819 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_4000.pth
2023-03-26 18:23:59,488 44k INFO Train Epoch: 10 [89%]
2023-03-26 18:23:59,488 44k INFO Losses: [2.7886390686035156, 2.085160493850708, 11.16639518737793, 20.36288070678711, 1.4053449630737305], step: 7200, lr: 9.98875562335968e-05
2023-03-26 18:24:26,900 44k INFO ====> Epoch: 10, cost 265.93 s
2023-03-26 18:34:47,550 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 30, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kageaki': 0}, 'model_dir': './logs\\44k'}
2023-03-26 18:34:47,579 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current)
2023-03-26 18:34:49,612 44k INFO Loaded checkpoint './logs\44k\G_7000.pth' (iteration 10)
2023-03-26 18:34:49,971 44k INFO Loaded checkpoint './logs\44k\D_7000.pth' (iteration 10)
2023-03-26 18:35:26,850 44k INFO Train Epoch: 10 [7%]
2023-03-26 18:35:26,851 44k INFO Losses: [2.5533604621887207, 2.188552141189575, 10.346734046936035, 17.699909210205078, 1.293749213218689], step: 6600, lr: 9.987507028906759e-05
2023-03-26 18:36:56,524 44k INFO Train Epoch: 10 [34%]
2023-03-26 18:36:56,524 44k INFO Losses: [2.3003833293914795, 2.4629688262939453, 7.944945812225342, 11.544402122497559, 1.2250933647155762], step: 6800, lr: 9.987507028906759e-05
2023-03-26 18:38:23,483 44k INFO Train Epoch: 10 [62%]
2023-03-26 18:38:23,483 44k INFO Losses: [2.40164852142334, 2.2444326877593994, 9.651368141174316, 16.686256408691406, 0.8747689723968506], step: 7000, lr: 9.987507028906759e-05
2023-03-26 18:38:27,446 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\G_7000.pth
2023-03-26 18:38:28,239 44k INFO Saving model and optimizer state at iteration 10 to ./logs\44k\D_7000.pth
2023-03-26 18:40:00,614 44k INFO Train Epoch: 10 [89%]
2023-03-26 18:40:00,614 44k INFO Losses: [2.4287614822387695, 2.3594701290130615, 12.557588577270508, 20.547157287597656, 1.0031839609146118], step: 7200, lr: 9.987507028906759e-05
2023-03-26 18:41:11,212 44k INFO ====> Epoch: 10, cost 383.66 s
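Note that the restarted run (resumed from G_7000/D_7000 with `epochs` raised from 10 to 30) replays epoch 10 at a slightly lower learning rate than the first pass: 9.987507028906759e-05 versus 9.98875562335968e-05. The two logged values differ by exactly one factor of `lr_decay`, which suggests the decay schedule advanced one extra step on resume; the check below only verifies that arithmetic relationship, not the cause:

```python
# Epoch-10 learning rates copied verbatim from this log.
lr_first_pass = 9.98875562335968e-05    # epoch 10, original run
lr_resumed = 9.987507028906759e-05      # epoch 10, after restart
LR_DECAY = 0.999875

# The resumed value is one extra decay step past the first-pass value.
assert abs(lr_first_pass * LR_DECAY - lr_resumed) < 1e-15
```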
2023-03-26 18:42:02,702 44k INFO Train Epoch: 11 [16%]
2023-03-26 18:42:02,703 44k INFO Losses: [2.7678236961364746, 2.112701177597046, 8.429422378540039, 18.075698852539062, 1.3546189069747925], step: 7400, lr: 9.986258590528146e-05
2023-03-26 18:43:12,353 44k INFO Train Epoch: 11 [44%]
2023-03-26 18:43:12,353 44k INFO Losses: [2.4518330097198486, 2.41371488571167, 9.840558052062988, 13.752519607543945, 1.434598445892334], step: 7600, lr: 9.986258590528146e-05
2023-03-26 18:44:22,464 44k INFO Train Epoch: 11 [71%]
2023-03-26 18:44:22,465 44k INFO Losses: [2.507891893386841, 2.341181993484497, 10.208455085754395, 14.264325141906738, 1.0694255828857422], step: 7800, lr: 9.986258590528146e-05
2023-03-26 18:45:32,333 44k INFO Train Epoch: 11 [99%]
2023-03-26 18:45:32,333 44k INFO Losses: [2.123678207397461, 3.1937904357910156, 9.68159008026123, 16.58867645263672, 1.3500096797943115], step: 8000, lr: 9.986258590528146e-05
2023-03-26 18:45:35,497 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\G_8000.pth
2023-03-26 18:45:36,281 44k INFO Saving model and optimizer state at iteration 11 to ./logs\44k\D_8000.pth
2023-03-26 18:45:36,963 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_5000.pth
2023-03-26 18:45:36,996 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_5000.pth
2023-03-26 18:45:39,810 44k INFO ====> Epoch: 11, cost 268.60 s
2023-03-26 18:46:55,791 44k INFO Train Epoch: 12 [26%]
2023-03-26 18:46:55,792 44k INFO Losses: [2.426657199859619, 2.042884111404419, 11.742992401123047, 18.878435134887695, 1.4948300123214722], step: 8200, lr: 9.98501030820433e-05
2023-03-26 18:48:03,829 44k INFO Train Epoch: 12 [54%]
2023-03-26 18:48:03,829 44k INFO Losses: [2.3794922828674316, 2.20752215385437, 11.286712646484375, 17.970600128173828, 1.1025506258010864], step: 8400, lr: 9.98501030820433e-05
2023-03-26 18:49:13,845 44k INFO Train Epoch: 12 [81%]
2023-03-26 18:49:13,845 44k INFO Losses: [2.699594020843506, 2.111863851547241, 7.350751876831055, 21.120838165283203, 1.6042152643203735], step: 8600, lr: 9.98501030820433e-05
2023-03-26 18:50:01,973 44k INFO ====> Epoch: 12, cost 262.16 s
2023-03-26 18:50:34,023 44k INFO Train Epoch: 13 [9%]
2023-03-26 18:50:34,024 44k INFO Losses: [2.8004846572875977, 2.1405537128448486, 5.862995624542236, 10.480680465698242, 1.271390438079834], step: 8800, lr: 9.983762181915804e-05
2023-03-26 18:51:44,536 44k INFO Train Epoch: 13 [36%]
2023-03-26 18:51:44,537 44k INFO Losses: [2.434276580810547, 2.4731249809265137, 11.914716720581055, 18.728864669799805, 0.9573069214820862], step: 9000, lr: 9.983762181915804e-05
2023-03-26 18:51:47,636 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\G_9000.pth
2023-03-26 18:51:48,360 44k INFO Saving model and optimizer state at iteration 13 to ./logs\44k\D_9000.pth
2023-03-26 18:51:49,031 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_6000.pth
2023-03-26 18:51:49,072 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_6000.pth
2023-03-26 18:52:58,432 44k INFO Train Epoch: 13 [64%]
2023-03-26 18:52:58,433 44k INFO Losses: [2.540266990661621, 2.3790817260742188, 11.579080581665039, 16.329910278320312, 1.2830095291137695], step: 9200, lr: 9.983762181915804e-05
2023-03-26 18:54:08,227 44k INFO Train Epoch: 13 [91%]
2023-03-26 18:54:08,227 44k INFO Losses: [2.422616958618164, 2.354771375656128, 10.294193267822266, 16.54452133178711, 1.2835444211959839], step: 9400, lr: 9.983762181915804e-05
2023-03-26 18:54:30,315 44k INFO ====> Epoch: 13, cost 268.34 s
2023-03-26 18:55:27,748 44k INFO Train Epoch: 14 [19%]
2023-03-26 18:55:27,748 44k INFO Losses: [2.7023301124572754, 2.224945068359375, 9.906207084655762, 17.76188087463379, 0.881771981716156], step: 9600, lr: 9.982514211643064e-05
2023-03-26 18:56:36,426 44k INFO Train Epoch: 14 [46%]
2023-03-26 18:56:36,426 44k INFO Losses: [2.621765613555908, 2.3448433876037598, 11.42595386505127, 17.85344886779785, 0.9860710501670837], step: 9800, lr: 9.982514211643064e-05
2023-03-26 18:57:45,119 44k INFO Train Epoch: 14 [74%]
2023-03-26 18:57:45,120 44k INFO Losses: [2.376032829284668, 2.520024538040161, 11.349810600280762, 20.452016830444336, 1.183514952659607], step: 10000, lr: 9.982514211643064e-05
2023-03-26 18:57:48,255 44k INFO Saving model and optimizer state at iteration 14 to ./logs\44k\G_10000.pth
2023-03-26 18:57:48,965 44k INFO Saving model and optimizer state at iteration 14 to ./logs\44k\D_10000.pth
2023-03-26 18:57:49,630 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_7000.pth
2023-03-26 18:57:49,661 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_7000.pth
2023-03-26 18:58:54,794 44k INFO ====> Epoch: 14, cost 264.48 s
2023-03-26 18:59:07,069 44k INFO Train Epoch: 15 [1%]
2023-03-26 18:59:07,069 44k INFO Losses: [2.5483999252319336, 2.1715004444122314, 9.351938247680664, 13.837029457092285, 1.2241127490997314], step: 10200, lr: 9.981266397366609e-05
2023-03-26 19:00:15,943 44k INFO Train Epoch: 15 [29%]
2023-03-26 19:00:15,943 44k INFO Losses: [2.504495859146118, 2.5773801803588867, 11.583601951599121, 19.028501510620117, 1.3856337070465088], step: 10400, lr: 9.981266397366609e-05
2023-03-26 19:01:24,153 44k INFO Train Epoch: 15 [56%]
2023-03-26 19:01:24,153 44k INFO Losses: [2.5489957332611084, 2.3842098712921143, 7.237650394439697, 15.037101745605469, 1.4804646968841553], step: 10600, lr: 9.981266397366609e-05
2023-03-26 19:02:32,750 44k INFO Train Epoch: 15 [84%]
2023-03-26 19:02:32,750 44k INFO Losses: [2.5680196285247803, 2.1249020099639893, 5.117791652679443, 11.572392463684082, 0.9273977875709534], step: 10800, lr: 9.981266397366609e-05
2023-03-26 19:03:13,519 44k INFO ====> Epoch: 15, cost 258.72 s
2023-03-26 19:03:50,154 44k INFO Train Epoch: 16 [11%]
2023-03-26 19:03:50,154 44k INFO Losses: [2.508527994155884, 2.306581735610962, 10.516632080078125, 19.146289825439453, 1.234512448310852], step: 11000, lr: 9.980018739066937e-05
2023-03-26 19:03:53,273 44k INFO Saving model and optimizer state at iteration 16 to ./logs\44k\G_11000.pth
2023-03-26 19:03:53,998 44k INFO Saving model and optimizer state at iteration 16 to ./logs\44k\D_11000.pth
2023-03-26 19:03:54,672 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_8000.pth
2023-03-26 19:03:54,703 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_8000.pth
2023-03-26 19:05:03,574 44k INFO Train Epoch: 16 [38%]
2023-03-26 19:05:03,574 44k INFO Losses: [2.681306838989258, 1.9447368383407593, 9.382843971252441, 19.791671752929688, 1.1135826110839844], step: 11200, lr: 9.980018739066937e-05
2023-03-26 19:06:12,120 44k INFO Train Epoch: 16 [66%]
2023-03-26 19:06:12,120 44k INFO Losses: [2.779426336288452, 1.8779959678649902, 5.856019020080566, 12.40982437133789, 1.3707494735717773], step: 11400, lr: 9.980018739066937e-05
2023-03-26 19:07:20,636 44k INFO Train Epoch: 16 [93%]
2023-03-26 19:07:20,637 44k INFO Losses: [2.4304559230804443, 2.3551981449127197, 10.659273147583008, 16.688438415527344, 1.0421156883239746], step: 11600, lr: 9.980018739066937e-05
2023-03-26 19:07:36,827 44k INFO ====> Epoch: 16, cost 263.31 s
2023-03-26 19:08:38,579 44k INFO Train Epoch: 17 [21%]
2023-03-26 19:08:38,579 44k INFO Losses: [2.632368326187134, 2.3899827003479004, 8.575041770935059, 15.419698715209961, 1.3164335489273071], step: 11800, lr: 9.978771236724554e-05
2023-03-26 19:09:46,608 44k INFO Train Epoch: 17 [48%]
2023-03-26 19:09:46,608 44k INFO Losses: [2.5371720790863037, 2.409085512161255, 9.42126178741455, 13.208368301391602, 0.8270394206047058], step: 12000, lr: 9.978771236724554e-05
2023-03-26 19:09:49,748 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\G_12000.pth
2023-03-26 19:09:50,467 44k INFO Saving model and optimizer state at iteration 17 to ./logs\44k\D_12000.pth
2023-03-26 19:09:51,130 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_9000.pth
2023-03-26 19:09:51,161 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_9000.pth
2023-03-26 19:11:00,090 44k INFO Train Epoch: 17 [76%]
2023-03-26 19:11:00,091 44k INFO Losses: [2.5986952781677246, 2.261148691177368, 9.256317138671875, 16.88166046142578, 0.9449052810668945], step: 12200, lr: 9.978771236724554e-05
2023-03-26 19:11:59,911 44k INFO ====> Epoch: 17, cost 263.08 s
2023-03-26 19:12:17,538 44k INFO Train Epoch: 18 [3%]
2023-03-26 19:12:17,539 44k INFO Losses: [2.5862667560577393, 2.404651165008545, 7.419188022613525, 15.658306121826172, 1.6325950622558594], step: 12400, lr: 9.977523890319963e-05
2023-03-26 19:13:26,408 44k INFO Train Epoch: 18 [31%]
2023-03-26 19:13:26,409 44k INFO Losses: [2.5046818256378174, 2.566683292388916, 11.40576457977295, 20.323307037353516, 0.7503674030303955], step: 12600, lr: 9.977523890319963e-05
2023-03-26 19:14:35,924 44k INFO Train Epoch: 18 [58%]
2023-03-26 19:14:35,924 44k INFO Losses: [2.342552423477173, 2.4409661293029785, 11.510695457458496, 14.227130889892578, 1.170577049255371], step: 12800, lr: 9.977523890319963e-05
2023-03-26 19:15:46,107 44k INFO Train Epoch: 18 [86%]
2023-03-26 19:15:46,107 44k INFO Losses: [2.318747043609619, 2.397803544998169, 14.357951164245605, 21.396282196044922, 1.337156891822815], step: 13000, lr: 9.977523890319963e-05
2023-03-26 19:15:49,305 44k INFO Saving model and optimizer state at iteration 18 to ./logs\44k\G_13000.pth
2023-03-26 19:15:50,034 44k INFO Saving model and optimizer state at iteration 18 to ./logs\44k\D_13000.pth
2023-03-26 19:15:50,709 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_10000.pth
2023-03-26 19:15:50,749 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_10000.pth
2023-03-26 19:16:26,771 44k INFO ====> Epoch: 18, cost 266.86 s
2023-03-26 19:17:09,924 44k INFO Train Epoch: 19 [13%]
2023-03-26 19:17:09,924 44k INFO Losses: [2.453674554824829, 2.147922992706299, 12.158828735351562, 19.503765106201172, 1.561596393585205], step: 13200, lr: 9.976276699833672e-05
2023-03-26 19:18:20,302 44k INFO Train Epoch: 19 [41%]
2023-03-26 19:18:20,302 44k INFO Losses: [2.651553153991699, 2.187086820602417, 6.514610767364502, 12.54338264465332, 1.6053649187088013], step: 13400, lr: 9.976276699833672e-05
2023-03-26 19:19:30,114 44k INFO Train Epoch: 19 [68%]
2023-03-26 19:19:30,114 44k INFO Losses: [2.2522342205047607, 2.427008867263794, 12.093758583068848, 19.923839569091797, 0.429791659116745], step: 13600, lr: 9.976276699833672e-05
2023-03-26 19:20:38,688 44k INFO Train Epoch: 19 [96%]
2023-03-26 19:20:38,688 44k INFO Losses: [2.721558094024658, 1.9361917972564697, 5.935189247131348, 14.535965919494629, 1.0994250774383545], step: 13800, lr: 9.976276699833672e-05
2023-03-26 19:20:49,538 44k INFO ====> Epoch: 19, cost 262.77 s
2023-03-26 19:21:56,853 44k INFO Train Epoch: 20 [23%]
2023-03-26 19:21:56,853 44k INFO Losses: [2.2145438194274902, 2.534202814102173, 11.711210250854492, 19.500167846679688, 1.9434586763381958], step: 14000, lr: 9.975029665246193e-05
2023-03-26 19:21:59,933 44k INFO Saving model and optimizer state at iteration 20 to ./logs\44k\G_14000.pth
2023-03-26 19:22:00,656 44k INFO Saving model and optimizer state at iteration 20 to ./logs\44k\D_14000.pth
2023-03-26 19:22:01,329 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_11000.pth
2023-03-26 19:22:01,371 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_11000.pth
2023-03-26 19:23:09,187 44k INFO Train Epoch: 20 [51%]
2023-03-26 19:23:09,188 44k INFO Losses: [2.522836685180664, 2.355647563934326, 10.368391036987305, 19.6075439453125, 1.434714913368225], step: 14200, lr: 9.975029665246193e-05
2023-03-26 19:24:18,175 44k INFO Train Epoch: 20 [78%]
2023-03-26 19:24:18,175 44k INFO Losses: [2.4127960205078125, 2.2381203174591064, 11.525047302246094, 20.181774139404297, 1.293044924736023], step: 14400, lr: 9.975029665246193e-05
2023-03-26 19:25:12,673 44k INFO ====> Epoch: 20, cost 263.13 s
2023-03-26 19:25:35,634 44k INFO Train Epoch: 21 [5%]
2023-03-26 19:25:35,635 44k INFO Losses: [2.256744384765625, 2.5320334434509277, 11.82845687866211, 20.79422950744629, 1.6302387714385986], step: 14600, lr: 9.973782786538036e-05
2023-03-26 19:26:45,939 44k INFO Train Epoch: 21 [33%]
2023-03-26 19:26:45,940 44k INFO Losses: [2.2314088344573975, 2.476482391357422, 13.351426124572754, 19.801959991455078, 1.2564705610275269], step: 14800, lr: 9.973782786538036e-05
2023-03-26 19:27:57,048 44k INFO Train Epoch: 21 [60%]
2023-03-26 19:27:57,049 44k INFO Losses: [2.4064671993255615, 2.1637048721313477, 10.250255584716797, 16.544618606567383, 1.50368070602417], step: 15000, lr: 9.973782786538036e-05
2023-03-26 19:28:00,452 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\G_15000.pth
2023-03-26 19:28:01,207 44k INFO Saving model and optimizer state at iteration 21 to ./logs\44k\D_15000.pth
2023-03-26 19:28:01,887 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_12000.pth
2023-03-26 19:28:01,924 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_12000.pth
2023-03-26 19:29:13,747 44k INFO Train Epoch: 21 [88%]
2023-03-26 19:29:13,747 44k INFO Losses: [2.5564846992492676, 2.322659730911255, 12.607988357543945, 17.174489974975586, 1.3598898649215698], step: 15200, lr: 9.973782786538036e-05
2023-03-26 19:29:44,680 44k INFO ====> Epoch: 21, cost 272.01 s
2023-03-26 19:30:34,591 44k INFO Train Epoch: 22 [15%]
2023-03-26 19:30:34,592 44k INFO Losses: [2.803964138031006, 1.8290067911148071, 11.378033638000488, 18.429790496826172, 0.9149801731109619], step: 15400, lr: 9.972536063689719e-05
2023-03-26 19:31:45,027 44k INFO Train Epoch: 22 [43%]
2023-03-26 19:31:45,028 44k INFO Losses: [2.193533182144165, 2.4682841300964355, 13.08427906036377, 19.601545333862305, 1.1315336227416992], step: 15600, lr: 9.972536063689719e-05
2023-03-26 19:32:55,570 44k INFO Train Epoch: 22 [70%]
2023-03-26 19:32:55,570 44k INFO Losses: [2.635566473007202, 2.428760528564453, 9.543636322021484, 20.080408096313477, 1.233144760131836], step: 15800, lr: 9.972536063689719e-05
2023-03-26 19:34:05,920 44k INFO Train Epoch: 22 [98%]
2023-03-26 19:34:05,921 44k INFO Losses: [2.676663398742676, 2.063350200653076, 5.968891620635986, 12.396279335021973, 1.0327547788619995], step: 16000, lr: 9.972536063689719e-05
2023-03-26 19:34:09,104 44k INFO Saving model and optimizer state at iteration 22 to ./logs\44k\G_16000.pth
2023-03-26 19:34:09,836 44k INFO Saving model and optimizer state at iteration 22 to ./logs\44k\D_16000.pth
2023-03-26 19:34:10,534 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_13000.pth
2023-03-26 19:34:10,575 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_13000.pth
2023-03-26 19:34:16,117 44k INFO ====> Epoch: 22, cost 271.44 s
2023-03-26 19:35:31,344 44k INFO Train Epoch: 23 [25%]
2023-03-26 19:35:31,344 44k INFO Losses: [2.559652328491211, 2.3552708625793457, 6.873598098754883, 15.366195678710938, 1.1844950914382935], step: 16200, lr: 9.971289496681757e-05
2023-03-26 19:36:42,517 44k INFO Train Epoch: 23 [53%]
2023-03-26 19:36:42,518 44k INFO Losses: [2.428539991378784, 2.4367523193359375, 9.574040412902832, 19.215234756469727, 1.4362854957580566], step: 16400, lr: 9.971289496681757e-05
2023-03-26 19:37:53,037 44k INFO Train Epoch: 23 [80%]
2023-03-26 19:37:53,037 44k INFO Losses: [2.409686326980591, 2.3020808696746826, 12.331310272216797, 20.234670639038086, 1.1331175565719604], step: 16600, lr: 9.971289496681757e-05
2023-03-26 19:38:43,155 44k INFO ====> Epoch: 23, cost 267.04 s
2023-03-26 19:39:12,473 44k INFO Train Epoch: 24 [8%]
2023-03-26 19:39:12,474 44k INFO Losses: [2.4319372177124023, 2.559391975402832, 7.149979591369629, 14.216936111450195, 1.3410944938659668], step: 16800, lr: 9.970043085494672e-05
2023-03-26 19:40:24,696 44k INFO Train Epoch: 24 [35%]
2023-03-26 19:40:24,697 44k INFO Losses: [2.5724716186523438, 2.2109527587890625, 13.943380355834961, 19.470436096191406, 1.263765811920166], step: 17000, lr: 9.970043085494672e-05
2023-03-26 19:40:27,837 44k INFO Saving model and optimizer state at iteration 24 to ./logs\44k\G_17000.pth
2023-03-26 19:40:28,573 44k INFO Saving model and optimizer state at iteration 24 to ./logs\44k\D_17000.pth
2023-03-26 19:40:29,280 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_14000.pth
2023-03-26 19:40:29,320 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_14000.pth
2023-03-26 19:41:39,648 44k INFO Train Epoch: 24 [63%]
2023-03-26 19:41:39,648 44k INFO Losses: [2.3632164001464844, 2.2174670696258545, 14.03620719909668, 20.134252548217773, 1.1238229274749756], step: 17200, lr: 9.970043085494672e-05
2023-03-26 19:42:50,124 44k INFO Train Epoch: 24 [90%]
2023-03-26 19:42:50,124 44k INFO Losses: [2.4203712940216064, 2.272968292236328, 8.129199981689453, 18.001089096069336, 1.2764698266983032], step: 17400, lr: 9.970043085494672e-05
2023-03-26 19:43:15,354 44k INFO ====> Epoch: 24, cost 272.20 s
2023-03-26 19:44:33,092 44k INFO Train Epoch: 25 [18%]
2023-03-26 19:44:33,092 44k INFO Losses: [2.627995491027832, 1.990657091140747, 8.367237091064453, 13.38406753540039, 1.816210150718689], step: 17600, lr: 9.968796830108985e-05
2023-03-26 19:46:20,442 44k INFO Train Epoch: 25 [45%]
2023-03-26 19:46:20,442 44k INFO Losses: [2.4526002407073975, 2.442440986633301, 12.983800888061523, 19.65188217163086, 1.2659069299697876], step: 17800, lr: 9.968796830108985e-05
2023-03-26 19:48:05,774 44k INFO Train Epoch: 25 [73%]
2023-03-26 19:48:05,774 44k INFO Losses: [2.3064093589782715, 2.238471508026123, 11.532581329345703, 18.76743507385254, 1.8713524341583252], step: 18000, lr: 9.968796830108985e-05
2023-03-26 19:48:09,439 44k INFO Saving model and optimizer state at iteration 25 to ./logs\44k\G_18000.pth
2023-03-26 19:48:10,574 44k INFO Saving model and optimizer state at iteration 25 to ./logs\44k\D_18000.pth
2023-03-26 19:48:11,672 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_15000.pth
2023-03-26 19:48:11,718 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_15000.pth
2023-03-26 19:49:55,679 44k INFO ====> Epoch: 25, cost 400.33 s
2023-03-26 19:50:05,622 44k INFO Train Epoch: 26 [0%]
2023-03-26 19:50:05,623 44k INFO Losses: [2.5450186729431152, 2.6062145233154297, 11.213815689086914, 19.335420608520508, 1.6555109024047852], step: 18200, lr: 9.967550730505221e-05
2023-03-26 19:51:26,360 44k INFO Train Epoch: 26 [27%]
2023-03-26 19:51:26,361 44k INFO Losses: [2.311784505844116, 2.263667106628418, 8.29580307006836, 14.4016695022583, 1.2868053913116455], step: 18400, lr: 9.967550730505221e-05
2023-03-26 19:52:37,041 44k INFO Train Epoch: 26 [55%]
2023-03-26 19:52:37,041 44k INFO Losses: [2.5044937133789062, 2.6533823013305664, 10.89778995513916, 15.753846168518066, 1.1692155599594116], step: 18600, lr: 9.967550730505221e-05
2023-03-26 19:53:48,719 44k INFO Train Epoch: 26 [82%]
2023-03-26 19:53:48,720 44k INFO Losses: [2.5673725605010986, 2.0504236221313477, 8.545933723449707, 12.596656799316406, 0.950874924659729], step: 18800, lr: 9.967550730505221e-05
2023-03-26 19:54:33,689 44k INFO ====> Epoch: 26, cost 278.01 s
2023-03-26 19:55:08,576 44k INFO Train Epoch: 27 [10%]
2023-03-26 19:55:08,577 44k INFO Losses: [2.646456241607666, 2.1287734508514404, 12.250157356262207, 13.04813003540039, 1.4722940921783447], step: 19000, lr: 9.966304786663908e-05
2023-03-26 19:55:11,721 44k INFO Saving model and optimizer state at iteration 27 to ./logs\44k\G_19000.pth
2023-03-26 19:55:12,442 44k INFO Saving model and optimizer state at iteration 27 to ./logs\44k\D_19000.pth
2023-03-26 19:55:13,143 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_16000.pth
2023-03-26 19:55:13,173 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_16000.pth
2023-03-26 19:56:23,822 44k INFO Train Epoch: 27 [37%]
2023-03-26 19:56:23,822 44k INFO Losses: [2.7408814430236816, 2.0392229557037354, 13.874476432800293, 19.85894012451172, 1.4021087884902954], step: 19200, lr: 9.966304786663908e-05
2023-03-26 19:57:34,496 44k INFO Train Epoch: 27 [65%]
2023-03-26 19:57:34,497 44k INFO Losses: [2.3849587440490723, 2.4459309577941895, 12.022232055664062, 16.191383361816406, 1.4396075010299683], step: 19400, lr: 9.966304786663908e-05
2023-03-26 19:58:46,522 44k INFO Train Epoch: 27 [92%]
2023-03-26 19:58:46,522 44k INFO Losses: [2.5721755027770996, 2.3741061687469482, 10.125975608825684, 15.761423110961914, 1.5095386505126953], step: 19600, lr: 9.966304786663908e-05
2023-03-26 19:59:06,327 44k INFO ====> Epoch: 27, cost 272.64 s
2023-03-26 20:00:08,270 44k INFO Train Epoch: 28 [20%]
2023-03-26 20:00:08,271 44k INFO Losses: [2.6713407039642334, 2.08626127243042, 12.329195022583008, 16.386035919189453, 1.2455395460128784], step: 19800, lr: 9.965058998565574e-05
2023-03-26 20:01:20,601 44k INFO Train Epoch: 28 [47%]
2023-03-26 20:01:20,601 44k INFO Losses: [2.3901543617248535, 2.3985085487365723, 11.719197273254395, 18.378982543945312, 1.1770191192626953], step: 20000, lr: 9.965058998565574e-05
2023-03-26 20:01:23,899 44k INFO Saving model and optimizer state at iteration 28 to ./logs\44k\G_20000.pth
2023-03-26 20:01:24,637 44k INFO Saving model and optimizer state at iteration 28 to ./logs\44k\D_20000.pth
2023-03-26 20:01:25,340 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_17000.pth
2023-03-26 20:01:25,371 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_17000.pth
2023-03-26 20:02:36,668 44k INFO Train Epoch: 28 [75%]
2023-03-26 20:02:36,669 44k INFO Losses: [2.530269145965576, 2.4520785808563232, 13.67650032043457, 20.522214889526367, 1.2241243124008179], step: 20200, lr: 9.965058998565574e-05
2023-03-26 20:03:41,530 44k INFO ====> Epoch: 28, cost 275.20 s
2023-03-26 20:03:57,080 44k INFO Train Epoch: 29 [2%]
2023-03-26 20:03:57,081 44k INFO Losses: [2.7208266258239746, 1.8831546306610107, 8.64593505859375, 13.37922191619873, 1.6189135313034058], step: 20400, lr: 9.963813366190753e-05
2023-03-26 20:05:08,706 44k INFO Train Epoch: 29 [30%]
2023-03-26 20:05:08,707 44k INFO Losses: [2.4101641178131104, 2.4209752082824707, 10.895308494567871, 16.480730056762695, 0.8267279863357544], step: 20600, lr: 9.963813366190753e-05
2023-03-26 20:06:19,103 44k INFO Train Epoch: 29 [57%]
2023-03-26 20:06:19,103 44k INFO Losses: [2.302244186401367, 2.505117416381836, 11.002657890319824, 20.075153350830078, 1.1351284980773926], step: 20800, lr: 9.963813366190753e-05
2023-03-26 20:07:30,254 44k INFO Train Epoch: 29 [85%]
2023-03-26 20:07:30,254 44k INFO Losses: [2.515221118927002, 2.2566659450531006, 7.678509712219238, 13.22794246673584, 0.995908796787262], step: 21000, lr: 9.963813366190753e-05
2023-03-26 20:07:33,374 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\G_21000.pth
2023-03-26 20:07:34,209 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\D_21000.pth
2023-03-26 20:07:34,929 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_18000.pth
2023-03-26 20:07:34,961 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_18000.pth
2023-03-26 20:08:18,645 44k INFO ====> Epoch: 29, cost 277.12 s
2023-03-26 20:09:14,242 44k INFO Train Epoch: 30 [12%]
2023-03-26 20:09:14,243 44k INFO Losses: [2.5049850940704346, 2.070011615753174, 8.637166023254395, 16.170169830322266, 0.9112614393234253], step: 21200, lr: 9.962567889519979e-05
2023-03-26 20:11:00,343 44k INFO Train Epoch: 30 [40%]
2023-03-26 20:11:00,344 44k INFO Losses: [2.2979304790496826, 2.345335006713867, 12.233209609985352, 18.85540199279785, 1.0354423522949219], step: 21400, lr: 9.962567889519979e-05
2023-03-26 20:12:46,227 44k INFO Train Epoch: 30 [67%]
2023-03-26 20:12:46,228 44k INFO Losses: [2.483952283859253, 2.190819025039673, 6.3163042068481445, 12.24166202545166, 0.9871227741241455], step: 21600, lr: 9.962567889519979e-05
2023-03-26 20:14:31,271 44k INFO Train Epoch: 30 [95%]
2023-03-26 20:14:31,272 44k INFO Losses: [2.5674543380737305, 2.299410581588745, 11.40297794342041, 19.191362380981445, 1.645627498626709], step: 21800, lr: 9.962567889519979e-05
2023-03-26 20:14:52,071 44k INFO ====> Epoch: 30, cost 393.43 s
2023-03-26 20:21:58,724 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 50, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kageaki': 0}, 'model_dir': './logs\\44k'}
2023-03-26 20:21:58,750 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current)
2023-03-26 20:22:00,767 44k INFO Loaded checkpoint './logs\44k\G_21000.pth' (iteration 29)
2023-03-26 20:22:01,116 44k INFO Loaded checkpoint './logs\44k\D_21000.pth' (iteration 29)
2023-03-26 20:22:23,128 44k INFO Train Epoch: 29 [2%]
2023-03-26 20:22:23,129 44k INFO Losses: [2.1818861961364746, 2.5212302207946777, 11.428882598876953, 16.844091415405273, 1.090970754623413], step: 20400, lr: 9.962567889519979e-05
2023-03-26 20:23:52,726 44k INFO Train Epoch: 29 [30%]
2023-03-26 20:23:52,726 44k INFO Losses: [2.506913185119629, 2.2178587913513184, 10.033429145812988, 14.903549194335938, 0.8776416182518005], step: 20600, lr: 9.962567889519979e-05
2023-03-26 20:25:17,515 44k INFO Train Epoch: 29 [57%]
2023-03-26 20:25:17,515 44k INFO Losses: [2.3614919185638428, 2.357292413711548, 10.665281295776367, 19.459697723388672, 1.2023992538452148], step: 20800, lr: 9.962567889519979e-05
2023-03-26 20:26:44,436 44k INFO Train Epoch: 29 [85%]
2023-03-26 20:26:44,437 44k INFO Losses: [2.713144540786743, 2.6324923038482666, 6.779805660247803, 11.422992706298828, 1.450287938117981], step: 21000, lr: 9.962567889519979e-05
2023-03-26 20:26:48,221 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\G_21000.pth
2023-03-26 20:26:48,988 44k INFO Saving model and optimizer state at iteration 29 to ./logs\44k\D_21000.pth
2023-03-26 20:27:38,151 44k INFO ====> Epoch: 29, cost 339.43 s
2023-03-26 20:28:18,032 44k INFO Train Epoch: 30 [12%]
2023-03-26 20:28:18,033 44k INFO Losses: [2.406369924545288, 2.177821636199951, 10.211670875549316, 17.45622444152832, 0.887344241142273], step: 21200, lr: 9.961322568533789e-05
2023-03-26 20:29:27,639 44k INFO Train Epoch: 30 [40%]
2023-03-26 20:29:27,640 44k INFO Losses: [2.3278651237487793, 2.299908399581909, 11.354140281677246, 18.174701690673828, 1.1613898277282715], step: 21400, lr: 9.961322568533789e-05
2023-03-26 20:30:37,527 44k INFO Train Epoch: 30 [67%]
2023-03-26 20:30:37,527 44k INFO Losses: [2.475722312927246, 2.324679136276245, 7.398623466491699, 11.126361846923828, 0.7156153321266174], step: 21600, lr: 9.961322568533789e-05
2023-03-26 20:31:47,856 44k INFO Train Epoch: 30 [95%]
2023-03-26 20:31:47,856 44k INFO Losses: [2.4085958003997803, 2.380079984664917, 11.55059814453125, 17.69883155822754, 1.3650686740875244], step: 21800, lr: 9.961322568533789e-05
2023-03-26 20:32:01,545 44k INFO ====> Epoch: 30, cost 263.39 s
2023-03-26 20:33:07,044 44k INFO Train Epoch: 31 [22%]
2023-03-26 20:33:07,044 44k INFO Losses: [2.746565341949463, 2.101046562194824, 11.939288139343262, 18.45298194885254, 1.276940107345581], step: 22000, lr: 9.960077403212722e-05
2023-03-26 20:33:10,119 44k INFO Saving model and optimizer state at iteration 31 to ./logs\44k\G_22000.pth
2023-03-26 20:33:10,900 44k INFO Saving model and optimizer state at iteration 31 to ./logs\44k\D_22000.pth
2023-03-26 20:33:11,606 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_19000.pth
2023-03-26 20:33:11,637 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_19000.pth
2023-03-26 20:34:20,249 44k INFO Train Epoch: 31 [49%]
2023-03-26 20:34:20,250 44k INFO Losses: [2.622921943664551, 2.0744359493255615, 13.793534278869629, 17.437915802001953, 1.4368258714675903], step: 22200, lr: 9.960077403212722e-05
2023-03-26 20:35:29,850 44k INFO Train Epoch: 31 [77%]
2023-03-26 20:35:29,850 44k INFO Losses: [2.7193901538848877, 2.395512104034424, 8.611534118652344, 17.026023864746094, 1.804220199584961], step: 22400, lr: 9.960077403212722e-05
2023-03-26 20:36:27,662 44k INFO ====> Epoch: 31, cost 266.12 s
2023-03-26 20:36:48,354 44k INFO Train Epoch: 32 [4%]
2023-03-26 20:36:48,355 44k INFO Losses: [2.5295565128326416, 2.292466640472412, 13.563910484313965, 19.43305015563965, 1.5943859815597534], step: 22600, lr: 9.95883239353732e-05
2023-03-26 20:37:58,105 44k INFO Train Epoch: 32 [32%]
2023-03-26 20:37:58,106 44k INFO Losses: [2.6768863201141357, 2.205745220184326, 9.992913246154785, 17.748117446899414, 1.507771372795105], step: 22800, lr: 9.95883239353732e-05
2023-03-26 20:39:05,937 44k INFO Train Epoch: 32 [59%]
2023-03-26 20:39:05,938 44k INFO Losses: [2.531132698059082, 2.1875574588775635, 10.714558601379395, 15.263238906860352, 0.9883330464363098], step: 23000, lr: 9.95883239353732e-05
2023-03-26 20:39:08,984 44k INFO Saving model and optimizer state at iteration 32 to ./logs\44k\G_23000.pth
2023-03-26 20:39:09,751 44k INFO Saving model and optimizer state at iteration 32 to ./logs\44k\D_23000.pth
2023-03-26 20:39:10,439 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_20000.pth
2023-03-26 20:39:10,470 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_20000.pth
2023-03-26 20:40:18,245 44k INFO Train Epoch: 32 [87%]
2023-03-26 20:40:18,246 44k INFO Losses: [2.4366674423217773, 2.4468047618865967, 12.758763313293457, 20.2938175201416, 1.3057420253753662], step: 23200, lr: 9.95883239353732e-05
2023-03-26 20:40:50,525 44k INFO ====> Epoch: 32, cost 262.86 s
2023-03-26 20:41:35,186 44k INFO Train Epoch: 33 [14%]
2023-03-26 20:41:35,187 44k INFO Losses: [2.4389867782592773, 2.3163771629333496, 10.620076179504395, 18.002946853637695, 1.3941969871520996], step: 23400, lr: 9.957587539488128e-05
2023-03-26 20:42:43,145 44k INFO Train Epoch: 33 [42%]
2023-03-26 20:42:43,145 44k INFO Losses: [2.4462387561798096, 2.3331246376037598, 12.116209030151367, 18.596160888671875, 1.4406756162643433], step: 23600, lr: 9.957587539488128e-05
2023-03-26 20:43:51,420 44k INFO Train Epoch: 33 [69%]
2023-03-26 20:43:51,420 44k INFO Losses: [2.7295923233032227, 2.0312931537628174, 8.300894737243652, 11.675796508789062, 1.3705692291259766], step: 23800, lr: 9.957587539488128e-05
2023-03-26 20:44:59,297 44k INFO Train Epoch: 33 [97%]
2023-03-26 20:44:59,297 44k INFO Losses: [2.4058234691619873, 2.4040396213531494, 10.637872695922852, 16.23623275756836, 0.804890513420105], step: 24000, lr: 9.957587539488128e-05
2023-03-26 20:45:02,445 44k INFO Saving model and optimizer state at iteration 33 to ./logs\44k\G_24000.pth
2023-03-26 20:45:03,161 44k INFO Saving model and optimizer state at iteration 33 to ./logs\44k\D_24000.pth
2023-03-26 20:45:03,829 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_21000.pth
2023-03-26 20:45:03,859 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_21000.pth
2023-03-26 20:45:11,843 44k INFO ====> Epoch: 33, cost 261.32 s
2023-03-26 20:46:21,395 44k INFO Train Epoch: 34 [24%]
2023-03-26 20:46:21,395 44k INFO Losses: [2.4041364192962646, 2.6986656188964844, 12.151071548461914, 16.378358840942383, 1.0397751331329346], step: 24200, lr: 9.956342841045691e-05
2023-03-26 20:47:29,029 44k INFO Train Epoch: 34 [52%]
2023-03-26 20:47:29,029 44k INFO Losses: [2.4810895919799805, 2.3740761280059814, 8.478053092956543, 15.712090492248535, 0.899149477481842], step: 24400, lr: 9.956342841045691e-05
2023-03-26 20:48:45,633 44k INFO Train Epoch: 34 [79%]
2023-03-26 20:48:45,633 44k INFO Losses: [2.3709893226623535, 2.2401344776153564, 12.932862281799316, 18.402969360351562, 1.4470969438552856], step: 24600, lr: 9.956342841045691e-05
2023-03-26 20:50:03,643 44k INFO ====> Epoch: 34, cost 291.80 s
2023-03-26 20:50:37,796 44k INFO Train Epoch: 35 [7%]
2023-03-26 20:50:37,797 44k INFO Losses: [2.5201334953308105, 2.2755749225616455, 11.022621154785156, 16.03299331665039, 0.8324728608131409], step: 24800, lr: 9.95509829819056e-05
2023-03-26 20:52:20,340 44k INFO Train Epoch: 35 [34%]
2023-03-26 20:52:20,341 44k INFO Losses: [2.6387505531311035, 2.3263511657714844, 9.097269058227539, 15.535866737365723, 1.2891377210617065], step: 25000, lr: 9.95509829819056e-05
2023-03-26 20:52:23,933 44k INFO Saving model and optimizer state at iteration 35 to ./logs\44k\G_25000.pth
2023-03-26 20:52:25,157 44k INFO Saving model and optimizer state at iteration 35 to ./logs\44k\D_25000.pth
2023-03-26 20:52:26,064 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_22000.pth
2023-03-26 20:52:26,118 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_22000.pth
2023-03-26 20:54:08,890 44k INFO Train Epoch: 35 [62%]
2023-03-26 20:54:08,890 44k INFO Losses: [2.4386889934539795, 2.373467445373535, 12.214559555053711, 19.51681137084961, 1.087910771369934], step: 25200, lr: 9.95509829819056e-05
2023-03-26 20:55:52,327 44k INFO Train Epoch: 35 [89%]
2023-03-26 20:55:52,327 44k INFO Losses: [2.387831449508667, 2.203815460205078, 8.673959732055664, 15.515698432922363, 1.1257497072219849], step: 25400, lr: 9.95509829819056e-05
2023-03-26 20:56:33,687 44k INFO ====> Epoch: 35, cost 390.04 s
2023-03-26 20:57:46,815 44k INFO Train Epoch: 36 [16%]
2023-03-26 20:57:46,815 44k INFO Losses: [2.5127129554748535, 2.1894476413726807, 9.067672729492188, 17.541988372802734, 1.7558355331420898], step: 25600, lr: 9.953853910903285e-05
2023-03-26 20:59:30,554 44k INFO Train Epoch: 36 [44%]
2023-03-26 20:59:30,555 44k INFO Losses: [2.5485236644744873, 2.4422123432159424, 9.322126388549805, 14.556734085083008, 1.2394096851348877], step: 25800, lr: 9.953853910903285e-05
2023-03-26 21:00:49,541 44k INFO Train Epoch: 36 [71%]
2023-03-26 21:00:49,541 44k INFO Losses: [2.577131986618042, 2.3183388710021973, 12.595996856689453, 15.74543571472168, 1.211929440498352], step: 26000, lr: 9.953853910903285e-05
2023-03-26 21:00:52,715 44k INFO Saving model and optimizer state at iteration 36 to ./logs\44k\G_26000.pth
2023-03-26 21:00:53,493 44k INFO Saving model and optimizer state at iteration 36 to ./logs\44k\D_26000.pth
2023-03-26 21:00:54,168 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_23000.pth
2023-03-26 21:00:54,198 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_23000.pth
2023-03-26 21:02:03,353 44k INFO Train Epoch: 36 [99%]
2023-03-26 21:02:03,354 44k INFO Losses: [2.140305519104004, 2.6605606079101562, 10.437066078186035, 16.032678604125977, 1.0190494060516357], step: 26200, lr: 9.953853910903285e-05
2023-03-26 21:02:06,222 44k INFO ====> Epoch: 36, cost 332.53 s
2023-03-26 21:03:22,623 44k INFO Train Epoch: 37 [26%]
2023-03-26 21:03:22,624 44k INFO Losses: [2.320822238922119, 2.3764572143554688, 10.913688659667969, 15.41199016571045, 1.2600834369659424], step: 26400, lr: 9.952609679164422e-05
2023-03-26 21:04:31,720 44k INFO Train Epoch: 37 [54%]
2023-03-26 21:04:31,720 44k INFO Losses: [2.369985818862915, 2.22127628326416, 10.650030136108398, 16.435476303100586, 1.1520702838897705], step: 26600, lr: 9.952609679164422e-05
2023-03-26 21:05:41,408 44k INFO Train Epoch: 37 [81%]
2023-03-26 21:05:41,409 44k INFO Losses: [2.4230546951293945, 2.409635543823242, 8.992048263549805, 18.469785690307617, 1.1126004457473755], step: 26800, lr: 9.952609679164422e-05
2023-03-26 21:06:28,293 44k INFO ====> Epoch: 37, cost 262.07 s
2023-03-26 21:07:00,080 44k INFO Train Epoch: 38 [9%]
2023-03-26 21:07:00,081 44k INFO Losses: [2.3237335681915283, 2.4228711128234863, 14.604413032531738, 19.324264526367188, 1.3721626996994019], step: 27000, lr: 9.951365602954526e-05
2023-03-26 21:07:03,187 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\G_27000.pth
2023-03-26 21:07:03,970 44k INFO Saving model and optimizer state at iteration 38 to ./logs\44k\D_27000.pth
2023-03-26 21:07:04,661 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_24000.pth
2023-03-26 21:07:04,695 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_24000.pth
2023-03-26 21:08:14,471 44k INFO Train Epoch: 38 [36%]
2023-03-26 21:08:14,472 44k INFO Losses: [2.3177566528320312, 2.325977325439453, 13.018202781677246, 16.41051483154297, 1.1823643445968628], step: 27200, lr: 9.951365602954526e-05
2023-03-26 21:09:23,465 44k INFO Train Epoch: 38 [64%]
2023-03-26 21:09:23,466 44k INFO Losses: [2.4561734199523926, 2.500232458114624, 12.181352615356445, 16.837472915649414, 1.3606674671173096], step: 27400, lr: 9.951365602954526e-05
2023-03-26 21:10:32,957 44k INFO Train Epoch: 38 [91%]
2023-03-26 21:10:32,957 44k INFO Losses: [2.460768461227417, 2.3304383754730225, 11.129576683044434, 16.081220626831055, 1.195226788520813], step: 27600, lr: 9.951365602954526e-05
2023-03-26 21:10:55,092 44k INFO ====> Epoch: 38, cost 266.80 s
2023-03-26 21:11:51,630 44k INFO Train Epoch: 39 [19%]
2023-03-26 21:11:51,631 44k INFO Losses: [2.5549423694610596, 2.332932710647583, 10.694438934326172, 17.75734519958496, 1.2634567022323608], step: 27800, lr: 9.950121682254156e-05
2023-03-26 21:13:00,348 44k INFO Train Epoch: 39 [46%]
2023-03-26 21:13:00,349 44k INFO Losses: [2.560208559036255, 2.319077491760254, 7.480192184448242, 17.929821014404297, 0.8479750156402588], step: 28000, lr: 9.950121682254156e-05
2023-03-26 21:13:03,496 44k INFO Saving model and optimizer state at iteration 39 to ./logs\44k\G_28000.pth
2023-03-26 21:13:04,233 44k INFO Saving model and optimizer state at iteration 39 to ./logs\44k\D_28000.pth
2023-03-26 21:13:04,921 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_25000.pth
2023-03-26 21:13:04,953 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_25000.pth
2023-03-26 21:14:15,399 44k INFO Train Epoch: 39 [74%]
2023-03-26 21:14:15,400 44k INFO Losses: [2.528662919998169, 2.359780788421631, 7.910620212554932, 15.978086471557617, 1.5573182106018066], step: 28200, lr: 9.950121682254156e-05
2023-03-26 21:15:20,264 44k INFO ====> Epoch: 39, cost 265.17 s
2023-03-26 21:15:32,493 44k INFO Train Epoch: 40 [1%]
2023-03-26 21:15:32,493 44k INFO Losses: [2.5307111740112305, 2.0676941871643066, 13.683015823364258, 18.8820743560791, 1.6224257946014404], step: 28400, lr: 9.948877917043875e-05
2023-03-26 21:16:41,916 44k INFO Train Epoch: 40 [29%]
2023-03-26 21:16:41,916 44k INFO Losses: [2.549201488494873, 2.1349565982818604, 8.524430274963379, 13.472315788269043, 1.2798707485198975], step: 28600, lr: 9.948877917043875e-05
2023-03-26 21:17:49,879 44k INFO Train Epoch: 40 [56%]
2023-03-26 21:17:49,879 44k INFO Losses: [2.635751485824585, 2.4183154106140137, 7.132954120635986, 12.586833953857422, 1.2585774660110474], step: 28800, lr: 9.948877917043875e-05
2023-03-26 21:18:58,017 44k INFO Train Epoch: 40 [84%]
2023-03-26 21:18:58,017 44k INFO Losses: [2.346043825149536, 2.3376169204711914, 11.767813682556152, 15.985593795776367, 1.3478666543960571], step: 29000, lr: 9.948877917043875e-05
2023-03-26 21:19:01,152 44k INFO Saving model and optimizer state at iteration 40 to ./logs\44k\G_29000.pth
2023-03-26 21:19:01,873 44k INFO Saving model and optimizer state at iteration 40 to ./logs\44k\D_29000.pth
2023-03-26 21:19:02,533 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_26000.pth
2023-03-26 21:19:02,574 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_26000.pth
2023-03-26 21:19:42,981 44k INFO ====> Epoch: 40, cost 262.72 s
2023-03-26 21:20:19,694 44k INFO Train Epoch: 41 [11%]
2023-03-26 21:20:19,695 44k INFO Losses: [2.565873622894287, 2.290444850921631, 8.034516334533691, 16.443622589111328, 1.4831417798995972], step: 29200, lr: 9.947634307304244e-05
2023-03-26 21:21:28,736 44k INFO Train Epoch: 41 [38%]
2023-03-26 21:21:28,736 44k INFO Losses: [2.4355294704437256, 2.1923205852508545, 9.081827163696289, 16.441974639892578, 1.3548475503921509], step: 29400, lr: 9.947634307304244e-05
2023-03-26 21:22:36,793 44k INFO Train Epoch: 41 [66%]
2023-03-26 21:22:36,794 44k INFO Losses: [2.6268556118011475, 2.113205909729004, 10.946870803833008, 16.89814567565918, 1.0009223222732544], step: 29600, lr: 9.947634307304244e-05
2023-03-26 21:23:44,952 44k INFO Train Epoch: 41 [93%]
2023-03-26 21:23:44,952 44k INFO Losses: [2.552823066711426, 2.2609336376190186, 7.76698637008667, 10.481614112854004, 1.6063477993011475], step: 29800, lr: 9.947634307304244e-05
2023-03-26 21:24:01,053 44k INFO ====> Epoch: 41, cost 258.07 s
2023-03-26 21:25:08,404 44k INFO Train Epoch: 42 [21%]
2023-03-26 21:25:08,404 44k INFO Losses: [2.613840103149414, 2.3464958667755127, 8.959070205688477, 14.424112319946289, 0.8349194526672363], step: 30000, lr: 9.94639085301583e-05
2023-03-26 21:25:11,429 44k INFO Saving model and optimizer state at iteration 42 to ./logs\44k\G_30000.pth
2023-03-26 21:25:12,145 44k INFO Saving model and optimizer state at iteration 42 to ./logs\44k\D_30000.pth
2023-03-26 21:25:12,814 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_27000.pth
2023-03-26 21:25:12,844 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_27000.pth
2023-03-26 21:26:20,242 44k INFO Train Epoch: 42 [48%]
2023-03-26 21:26:20,242 44k INFO Losses: [2.5178046226501465, 1.973719835281372, 9.889934539794922, 16.38356590270996, 1.1818913221359253], step: 30200, lr: 9.94639085301583e-05
2023-03-26 21:27:28,491 44k INFO Train Epoch: 42 [76%]
2023-03-26 21:27:28,492 44k INFO Losses: [2.6279070377349854, 2.250808000564575, 13.214252471923828, 19.34341812133789, 0.671466052532196], step: 30400, lr: 9.94639085301583e-05
2023-03-26 21:28:28,921 44k INFO ====> Epoch: 42, cost 267.87 s
2023-03-26 21:28:46,708 44k INFO Train Epoch: 43 [3%]
2023-03-26 21:28:46,708 44k INFO Losses: [2.517134189605713, 2.1201608180999756, 10.239909172058105, 18.890607833862305, 1.3374667167663574], step: 30600, lr: 9.945147554159202e-05
2023-03-26 21:29:55,054 44k INFO Train Epoch: 43 [31%]
2023-03-26 21:29:55,054 44k INFO Losses: [2.5170860290527344, 2.214848518371582, 13.340782165527344, 19.41912269592285, 1.0389739274978638], step: 30800, lr: 9.945147554159202e-05
2023-03-26 21:31:03,197 44k INFO Train Epoch: 43 [58%]
2023-03-26 21:31:03,198 44k INFO Losses: [2.4435911178588867, 2.495150566101074, 10.39663314819336, 19.238109588623047, 0.9299176335334778], step: 31000, lr: 9.945147554159202e-05
2023-03-26 21:31:06,281 44k INFO Saving model and optimizer state at iteration 43 to ./logs\44k\G_31000.pth
2023-03-26 21:31:07,060 44k INFO Saving model and optimizer state at iteration 43 to ./logs\44k\D_31000.pth
2023-03-26 21:31:07,738 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_28000.pth
2023-03-26 21:31:07,769 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_28000.pth
2023-03-26 21:32:15,961 44k INFO Train Epoch: 43 [86%]
2023-03-26 21:32:15,962 44k INFO Losses: [2.4328112602233887, 2.2417960166931152, 17.485950469970703, 21.571151733398438, 1.7340683937072754], step: 31200, lr: 9.945147554159202e-05
2023-03-26 21:32:51,186 44k INFO ====> Epoch: 43, cost 262.27 s
2023-03-26 21:33:33,308 44k INFO Train Epoch: 44 [13%]
2023-03-26 21:33:33,309 44k INFO Losses: [2.2247700691223145, 2.4395790100097656, 12.175507545471191, 17.37096405029297, 1.353575348854065], step: 31400, lr: 9.943904410714931e-05
2023-03-26 21:34:41,366 44k INFO Train Epoch: 44 [41%]
2023-03-26 21:34:41,366 44k INFO Losses: [2.3662893772125244, 2.7794904708862305, 9.40709400177002, 14.246566772460938, 1.2741729021072388], step: 31600, lr: 9.943904410714931e-05
2023-03-26 21:35:50,774 44k INFO Train Epoch: 44 [68%]
2023-03-26 21:35:50,775 44k INFO Losses: [2.3003039360046387, 2.4221243858337402, 11.224088668823242, 18.253585815429688, 1.1864084005355835], step: 31800, lr: 9.943904410714931e-05
2023-03-26 21:37:00,352 44k INFO Train Epoch: 44 [96%]
2023-03-26 21:37:00,353 44k INFO Losses: [2.63925838470459, 2.0777955055236816, 6.857548713684082, 15.075932502746582, 1.106040358543396], step: 32000, lr: 9.943904410714931e-05
2023-03-26 21:37:03,556 44k INFO Saving model and optimizer state at iteration 44 to ./logs\44k\G_32000.pth
2023-03-26 21:37:04,270 44k INFO Saving model and optimizer state at iteration 44 to ./logs\44k\D_32000.pth
2023-03-26 21:37:04,937 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_29000.pth
2023-03-26 21:37:04,971 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_29000.pth
2023-03-26 21:37:15,788 44k INFO ====> Epoch: 44, cost 264.60 s
2023-03-26 21:38:24,204 44k INFO Train Epoch: 45 [23%]
2023-03-26 21:38:24,204 44k INFO Losses: [2.1979780197143555, 2.6060190200805664, 12.857758522033691, 16.70841407775879, 0.9493151903152466], step: 32200, lr: 9.942661422663591e-05
2023-03-26 21:39:33,149 44k INFO Train Epoch: 45 [51%]
2023-03-26 21:39:33,149 44k INFO Losses: [2.4014205932617188, 2.219925880432129, 10.582074165344238, 15.304710388183594, 1.1473904848098755], step: 32400, lr: 9.942661422663591e-05
2023-03-26 21:40:43,439 44k INFO Train Epoch: 45 [78%]
2023-03-26 21:40:43,440 44k INFO Losses: [2.5148918628692627, 2.2440202236175537, 7.362368583679199, 13.719354629516602, 1.2587121725082397], step: 32600, lr: 9.942661422663591e-05
2023-03-26 21:41:53,654 44k INFO ====> Epoch: 45, cost 277.87 s
2023-03-26 21:42:24,856 44k INFO Train Epoch: 46 [5%]
2023-03-26 21:42:24,857 44k INFO Losses: [2.33835768699646, 2.281447649002075, 12.60938835144043, 18.733139038085938, 1.3569731712341309], step: 32800, lr: 9.941418589985758e-05
2023-03-26 21:44:08,297 44k INFO Train Epoch: 46 [33%]
2023-03-26 21:44:08,298 44k INFO Losses: [2.325303792953491, 2.715770721435547, 12.801007270812988, 16.551340103149414, 0.9130549430847168], step: 33000, lr: 9.941418589985758e-05
2023-03-26 21:44:11,862 44k INFO Saving model and optimizer state at iteration 46 to ./logs\44k\G_33000.pth
2023-03-26 21:44:13,019 44k INFO Saving model and optimizer state at iteration 46 to ./logs\44k\D_33000.pth
2023-03-26 21:44:13,930 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_30000.pth
2023-03-26 21:44:13,975 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_30000.pth
2023-03-26 21:45:36,118 44k INFO Train Epoch: 46 [60%]
2023-03-26 21:45:36,119 44k INFO Losses: [2.4854018688201904, 2.205888032913208, 10.59522819519043, 19.090288162231445, 1.1106901168823242], step: 33200, lr: 9.941418589985758e-05
2023-03-26 21:46:46,249 44k INFO Train Epoch: 46 [88%]
2023-03-26 21:46:46,249 44k INFO Losses: [2.722208261489868, 2.1175336837768555, 10.812979698181152, 16.05909538269043, 0.9372826814651489], step: 33400, lr: 9.941418589985758e-05
2023-03-26 21:47:17,411 44k INFO ====> Epoch: 46, cost 323.76 s
2023-03-26 21:48:08,513 44k INFO Train Epoch: 47 [15%]
2023-03-26 21:48:08,513 44k INFO Losses: [2.556161403656006, 1.921614170074463, 8.697649002075195, 13.587606430053711, 0.8473272919654846], step: 33600, lr: 9.940175912662009e-05
2023-03-26 21:49:20,351 44k INFO Train Epoch: 47 [43%]
2023-03-26 21:49:20,351 44k INFO Losses: [2.4933862686157227, 2.3994638919830322, 15.795209884643555, 19.68379020690918, 1.2790398597717285], step: 33800, lr: 9.940175912662009e-05
2023-03-26 21:50:31,356 44k INFO Train Epoch: 47 [70%]
2023-03-26 21:50:31,357 44k INFO Losses: [2.412247657775879, 2.1590089797973633, 9.830469131469727, 20.107654571533203, 1.542922019958496], step: 34000, lr: 9.940175912662009e-05
2023-03-26 21:50:34,590 44k INFO Saving model and optimizer state at iteration 47 to ./logs\44k\G_34000.pth
2023-03-26 21:50:35,354 44k INFO Saving model and optimizer state at iteration 47 to ./logs\44k\D_34000.pth
2023-03-26 21:50:36,137 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_31000.pth
2023-03-26 21:50:36,169 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_31000.pth
2023-03-26 21:51:47,189 44k INFO Train Epoch: 47 [98%]
2023-03-26 21:51:47,189 44k INFO Losses: [2.346374034881592, 2.297542095184326, 9.829960823059082, 18.47844696044922, 1.2716809511184692], step: 34200, lr: 9.940175912662009e-05
2023-03-26 21:51:52,859 44k INFO ====> Epoch: 47, cost 275.45 s
2023-03-26 21:53:09,004 44k INFO Train Epoch: 48 [25%]
2023-03-26 21:53:09,004 44k INFO Losses: [2.5941741466522217, 2.4173154830932617, 7.852332592010498, 17.6015625, 1.3165394067764282], step: 34400, lr: 9.938933390672926e-05
2023-03-26 21:54:19,624 44k INFO Train Epoch: 48 [53%]
2023-03-26 21:54:19,624 44k INFO Losses: [2.6824872493743896, 2.277327060699463, 10.386296272277832, 18.58452796936035, 1.4477074146270752], step: 34600, lr: 9.938933390672926e-05
2023-03-26 21:55:31,217 44k INFO Train Epoch: 48 [80%]
2023-03-26 21:55:31,218 44k INFO Losses: [2.539008140563965, 2.2044379711151123, 12.39027214050293, 19.257644653320312, 1.1891627311706543], step: 34800, lr: 9.938933390672926e-05
2023-03-26 21:56:22,892 44k INFO ====> Epoch: 48, cost 270.03 s
2023-03-26 21:56:52,841 44k INFO Train Epoch: 49 [8%]
2023-03-26 21:56:52,842 44k INFO Losses: [2.572256088256836, 2.224982261657715, 6.679467678070068, 16.84368896484375, 0.9692319631576538], step: 35000, lr: 9.937691023999092e-05
2023-03-26 21:56:56,206 44k INFO Saving model and optimizer state at iteration 49 to ./logs\44k\G_35000.pth
2023-03-26 21:56:56,962 44k INFO Saving model and optimizer state at iteration 49 to ./logs\44k\D_35000.pth
2023-03-26 21:56:57,684 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_32000.pth
2023-03-26 21:56:57,718 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_32000.pth
2023-03-26 21:58:11,575 44k INFO Train Epoch: 49 [35%]
2023-03-26 21:58:11,576 44k INFO Losses: [2.6269850730895996, 2.288778066635132, 12.382123947143555, 18.322433471679688, 1.1609718799591064], step: 35200, lr: 9.937691023999092e-05
2023-03-26 21:59:24,382 44k INFO Train Epoch: 49 [63%]
2023-03-26 21:59:24,382 44k INFO Losses: [2.4340531826019287, 2.453927993774414, 9.466423034667969, 19.50778579711914, 1.1316777467727661], step: 35400, lr: 9.937691023999092e-05
2023-03-26 22:00:36,339 44k INFO Train Epoch: 49 [90%]
2023-03-26 22:00:36,339 44k INFO Losses: [2.65840744972229, 2.1295597553253174, 10.344992637634277, 19.8116455078125, 1.8000327348709106], step: 35600, lr: 9.937691023999092e-05
2023-03-26 22:01:02,503 44k INFO ====> Epoch: 49, cost 279.61 s
2023-03-26 22:01:58,101 44k INFO Train Epoch: 50 [18%]
2023-03-26 22:01:58,102 44k INFO Losses: [2.1592166423797607, 3.1045475006103516, 8.961448669433594, 10.38120174407959, 1.0541951656341553], step: 35800, lr: 9.936448812621091e-05
2023-03-26 22:03:10,277 44k INFO Train Epoch: 50 [45%]
2023-03-26 22:03:10,277 44k INFO Losses: [2.4462239742279053, 2.322165012359619, 9.349515914916992, 13.102229118347168, 1.3284991979599], step: 36000, lr: 9.936448812621091e-05
2023-03-26 22:03:13,472 44k INFO Saving model and optimizer state at iteration 50 to ./logs\44k\G_36000.pth
2023-03-26 22:03:14,203 44k INFO Saving model and optimizer state at iteration 50 to ./logs\44k\D_36000.pth
2023-03-26 22:03:14,983 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_33000.pth
2023-03-26 22:03:15,013 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_33000.pth
2023-03-26 22:04:26,985 44k INFO Train Epoch: 50 [73%]
2023-03-26 22:04:26,985 44k INFO Losses: [2.4799132347106934, 2.2395243644714355, 11.022801399230957, 19.06214714050293, 1.249161720275879], step: 36200, lr: 9.936448812621091e-05
2023-03-26 22:05:38,022 44k INFO ====> Epoch: 50, cost 275.52 s
2023-03-27 07:15:15,710 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 100, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kageaki': 0}, 'model_dir': './logs\\44k'}
2023-03-27 07:15:15,742 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current)
2023-03-27 07:15:17,744 44k INFO Loaded checkpoint './logs\44k\G_36000.pth' (iteration 50)
2023-03-27 07:15:18,107 44k INFO Loaded checkpoint './logs\44k\D_36000.pth' (iteration 50)
2023-03-27 07:16:27,356 44k INFO Train Epoch: 50 [18%]
2023-03-27 07:16:27,356 44k INFO Losses: [2.439272880554199, 2.551793098449707, 9.877571105957031, 16.454505920410156, 0.7157087326049805], step: 35800, lr: 9.935206756519513e-05
2023-03-27 07:17:53,481 44k INFO Train Epoch: 50 [45%]
2023-03-27 07:17:53,482 44k INFO Losses: [2.5554986000061035, 2.16329026222229, 6.9857892990112305, 13.731989860534668, 1.0998468399047852], step: 36000, lr: 9.935206756519513e-05
2023-03-27 07:17:57,154 44k INFO Saving model and optimizer state at iteration 50 to ./logs\44k\G_36000.pth
2023-03-27 07:17:57,927 44k INFO Saving model and optimizer state at iteration 50 to ./logs\44k\D_36000.pth
2023-03-27 07:19:22,376 44k INFO Train Epoch: 50 [73%]
2023-03-27 07:19:22,376 44k INFO Losses: [2.8222479820251465, 1.8222508430480957, 11.116238594055176, 16.301206588745117, 1.2432457208633423], step: 36200, lr: 9.935206756519513e-05
2023-03-27 07:20:44,564 44k INFO ====> Epoch: 50, cost 328.85 s
2023-03-27 07:20:53,739 44k INFO Train Epoch: 51 [0%]
2023-03-27 07:20:53,739 44k INFO Losses: [2.408649206161499, 2.4921319484710693, 12.450469017028809, 18.094934463500977, 0.6916185617446899], step: 36400, lr: 9.933964855674948e-05
2023-03-27 07:22:01,051 44k INFO Train Epoch: 51 [27%]
2023-03-27 07:22:01,051 44k INFO Losses: [2.5425407886505127, 2.1861369609832764, 8.337100982666016, 10.447877883911133, 0.9519892930984497], step: 36600, lr: 9.933964855674948e-05
2023-03-27 07:23:07,673 44k INFO Train Epoch: 51 [55%]
2023-03-27 07:23:07,673 44k INFO Losses: [2.363527536392212, 2.3665409088134766, 7.563984394073486, 13.141250610351562, 1.0272268056869507], step: 36800, lr: 9.933964855674948e-05
2023-03-27 07:24:14,837 44k INFO Train Epoch: 51 [82%]
2023-03-27 07:24:14,837 44k INFO Losses: [2.4265291690826416, 2.72171688079834, 12.115377426147461, 16.756244659423828, 1.1188489198684692], step: 37000, lr: 9.933964855674948e-05
2023-03-27 07:24:17,819 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\G_37000.pth
2023-03-27 07:24:18,628 44k INFO Saving model and optimizer state at iteration 51 to ./logs\44k\D_37000.pth
2023-03-27 07:24:19,433 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_34000.pth
2023-03-27 07:24:19,464 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_34000.pth
2023-03-27 07:25:01,766 44k INFO ====> Epoch: 51, cost 257.20 s
2023-03-27 07:25:35,034 44k INFO Train Epoch: 52 [10%]
2023-03-27 07:25:35,035 44k INFO Losses: [2.3869516849517822, 2.4119739532470703, 12.691248893737793, 18.778392791748047, 1.300383448600769], step: 37200, lr: 9.932723110067987e-05
2023-03-27 07:26:42,173 44k INFO Train Epoch: 52 [37%]
2023-03-27 07:26:42,174 44k INFO Losses: [2.6959099769592285, 1.9351104497909546, 9.62242603302002, 16.26683235168457, 1.2408384084701538], step: 37400, lr: 9.932723110067987e-05
2023-03-27 07:27:48,994 44k INFO Train Epoch: 52 [65%]
2023-03-27 07:27:48,994 44k INFO Losses: [2.379852056503296, 2.2753829956054688, 12.291888236999512, 18.11481475830078, 1.2890477180480957], step: 37600, lr: 9.932723110067987e-05
2023-03-27 07:28:56,054 44k INFO Train Epoch: 52 [92%]
2023-03-27 07:28:56,054 44k INFO Losses: [2.475571393966675, 2.110029935836792, 12.294746398925781, 18.15684700012207, 1.2353252172470093], step: 37800, lr: 9.932723110067987e-05
2023-03-27 07:29:14,493 44k INFO ====> Epoch: 52, cost 252.73 s
2023-03-27 07:30:12,034 44k INFO Train Epoch: 53 [20%]
2023-03-27 07:30:12,034 44k INFO Losses: [2.356746196746826, 2.5480940341949463, 9.407490730285645, 14.142950057983398, 1.0317974090576172], step: 38000, lr: 9.931481519679228e-05
2023-03-27 07:30:15,070 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\G_38000.pth
2023-03-27 07:30:15,825 44k INFO Saving model and optimizer state at iteration 53 to ./logs\44k\D_38000.pth
2023-03-27 07:30:16,506 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_35000.pth
2023-03-27 07:30:16,536 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_35000.pth
2023-03-27 07:31:22,834 44k INFO Train Epoch: 53 [47%]
2023-03-27 07:31:22,836 44k INFO Losses: [2.445138931274414, 2.1916863918304443, 10.534404754638672, 17.064529418945312, 1.0119489431381226], step: 38200, lr: 9.931481519679228e-05
2023-03-27 07:32:29,909 44k INFO Train Epoch: 53 [75%]
2023-03-27 07:32:29,909 44k INFO Losses: [2.4968106746673584, 2.2845804691314697, 11.782247543334961, 16.791181564331055, 0.9844977259635925], step: 38400, lr: 9.931481519679228e-05
2023-03-27 07:33:32,557 44k INFO ====> Epoch: 53, cost 258.06 s
2023-03-27 07:33:47,054 44k INFO Train Epoch: 54 [2%]
2023-03-27 07:33:47,054 44k INFO Losses: [2.4071342945098877, 2.0646770000457764, 7.031035900115967, 14.61463451385498, 1.2002465724945068], step: 38600, lr: 9.930240084489267e-05
2023-03-27 07:34:55,041 44k INFO Train Epoch: 54 [30%]
2023-03-27 07:34:55,041 44k INFO Losses: [2.423353672027588, 2.3260793685913086, 12.772974967956543, 19.127399444580078, 1.0438041687011719], step: 38800, lr: 9.930240084489267e-05
2023-03-27 07:36:02,242 44k INFO Train Epoch: 54 [57%]
2023-03-27 07:36:02,242 44k INFO Losses: [2.70426082611084, 2.2825329303741455, 7.793752193450928, 15.704229354858398, 1.069628357887268], step: 39000, lr: 9.930240084489267e-05
2023-03-27 07:36:05,304 44k INFO Saving model and optimizer state at iteration 54 to ./logs\44k\G_39000.pth
2023-03-27 07:36:06,069 44k INFO Saving model and optimizer state at iteration 54 to ./logs\44k\D_39000.pth
2023-03-27 07:36:06,765 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_36000.pth
2023-03-27 07:36:06,800 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_36000.pth
2023-03-27 07:37:13,642 44k INFO Train Epoch: 54 [85%]
2023-03-27 07:37:13,642 44k INFO Losses: [2.408324718475342, 2.4243626594543457, 6.805215835571289, 12.297639846801758, 0.9571473002433777], step: 39200, lr: 9.930240084489267e-05
2023-03-27 07:37:50,765 44k INFO ====> Epoch: 54, cost 258.21 s
2023-03-27 07:38:29,331 44k INFO Train Epoch: 55 [12%]
2023-03-27 07:38:29,331 44k INFO Losses: [2.608395576477051, 2.0660414695739746, 8.355164527893066, 15.556915283203125, 1.1915698051452637], step: 39400, lr: 9.928998804478705e-05
2023-03-27 07:39:36,478 44k INFO Train Epoch: 55 [40%]
2023-03-27 07:39:36,479 44k INFO Losses: [2.370467185974121, 2.3523178100585938, 12.437934875488281, 18.79640769958496, 0.8513891100883484], step: 39600, lr: 9.928998804478705e-05
2023-03-27 07:40:43,368 44k INFO Train Epoch: 55 [67%]
2023-03-27 07:40:43,369 44k INFO Losses: [2.408238649368286, 2.464332103729248, 7.424396514892578, 11.00966739654541, 1.0542269945144653], step: 39800, lr: 9.928998804478705e-05
2023-03-27 07:41:50,294 44k INFO Train Epoch: 55 [95%]
2023-03-27 07:41:50,294 44k INFO Losses: [2.2966928482055664, 2.361558437347412, 11.673095703125, 17.255613327026367, 1.6099590063095093], step: 40000, lr: 9.928998804478705e-05
2023-03-27 07:41:53,249 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\G_40000.pth
2023-03-27 07:41:54,010 44k INFO Saving model and optimizer state at iteration 55 to ./logs\44k\D_40000.pth
2023-03-27 07:41:54,689 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_37000.pth
2023-03-27 07:41:54,733 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_37000.pth
2023-03-27 07:42:07,857 44k INFO ====> Epoch: 55, cost 257.09 s
2023-03-27 07:43:10,913 44k INFO Train Epoch: 56 [22%]
2023-03-27 07:43:10,914 44k INFO Losses: [2.548214912414551, 2.0930380821228027, 6.3152313232421875, 12.273805618286133, 0.9773663282394409], step: 40200, lr: 9.927757679628145e-05
2023-03-27 07:44:17,323 44k INFO Train Epoch: 56 [49%]
2023-03-27 07:44:17,324 44k INFO Losses: [2.6898324489593506, 2.370868682861328, 11.837089538574219, 16.003795623779297, 1.0271047353744507], step: 40400, lr: 9.927757679628145e-05
2023-03-27 07:45:24,690 44k INFO Train Epoch: 56 [77%]
2023-03-27 07:45:24,690 44k INFO Losses: [2.4483160972595215, 2.4367058277130127, 7.7856268882751465, 14.39894962310791, 1.3585786819458008], step: 40600, lr: 9.927757679628145e-05
2023-03-27 07:46:20,549 44k INFO ====> Epoch: 56, cost 252.69 s
2023-03-27 07:46:40,319 44k INFO Train Epoch: 57 [4%]
2023-03-27 07:46:40,319 44k INFO Losses: [2.2415366172790527, 2.3912246227264404, 9.676067352294922, 13.743635177612305, 1.2081562280654907], step: 40800, lr: 9.926516709918191e-05
2023-03-27 07:47:47,575 44k INFO Train Epoch: 57 [32%]
2023-03-27 07:47:47,576 44k INFO Losses: [2.8123528957366943, 2.225703477859497, 11.15468692779541, 17.746809005737305, 1.2952674627304077], step: 41000, lr: 9.926516709918191e-05
2023-03-27 07:47:50,541 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\G_41000.pth
2023-03-27 07:47:51,302 44k INFO Saving model and optimizer state at iteration 57 to ./logs\44k\D_41000.pth
2023-03-27 07:47:51,985 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_38000.pth
2023-03-27 07:47:52,027 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_38000.pth
2023-03-27 07:48:58,728 44k INFO Train Epoch: 57 [59%]
2023-03-27 07:48:58,728 44k INFO Losses: [2.4293718338012695, 2.3511390686035156, 13.145758628845215, 17.73885726928711, 1.6850676536560059], step: 41200, lr: 9.926516709918191e-05
2023-03-27 07:50:05,578 44k INFO Train Epoch: 57 [87%]
2023-03-27 07:50:05,579 44k INFO Losses: [2.5185396671295166, 2.439281940460205, 10.314142227172852, 19.53677749633789, 1.2114888429641724], step: 41400, lr: 9.926516709918191e-05
2023-03-27 07:50:37,433 44k INFO ====> Epoch: 57, cost 256.88 s
2023-03-27 07:51:21,234 44k INFO Train Epoch: 58 [14%]
2023-03-27 07:51:21,234 44k INFO Losses: [2.5466856956481934, 2.365269899368286, 12.877598762512207, 19.400514602661133, 1.1210414171218872], step: 41600, lr: 9.92527589532945e-05
2023-03-27 07:52:28,164 44k INFO Train Epoch: 58 [42%]
2023-03-27 07:52:28,165 44k INFO Losses: [2.625436305999756, 2.200080633163452, 11.715675354003906, 18.738943099975586, 1.1321464776992798], step: 41800, lr: 9.92527589532945e-05
2023-03-27 07:53:35,336 44k INFO Train Epoch: 58 [69%]
2023-03-27 07:53:35,336 44k INFO Losses: [2.7836830615997314, 2.2318859100341797, 10.502516746520996, 14.056482315063477, 1.2004711627960205], step: 42000, lr: 9.92527589532945e-05
2023-03-27 07:53:38,377 44k INFO Saving model and optimizer state at iteration 58 to ./logs\44k\G_42000.pth
2023-03-27 07:53:39,085 44k INFO Saving model and optimizer state at iteration 58 to ./logs\44k\D_42000.pth
2023-03-27 07:53:39,766 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_39000.pth
2023-03-27 07:53:39,800 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_39000.pth
2023-03-27 07:54:46,446 44k INFO Train Epoch: 58 [97%]
2023-03-27 07:54:46,447 44k INFO Losses: [2.511121988296509, 2.1858129501342773, 10.640735626220703, 16.488994598388672, 0.9038709402084351], step: 42200, lr: 9.92527589532945e-05
2023-03-27 07:54:54,418 44k INFO ====> Epoch: 58, cost 256.98 s
2023-03-27 07:56:02,856 44k INFO Train Epoch: 59 [24%]
2023-03-27 07:56:02,857 44k INFO Losses: [2.3125972747802734, 2.6691434383392334, 9.921504974365234, 14.77564811706543, 1.3596560955047607], step: 42400, lr: 9.924035235842533e-05
2023-03-27 07:57:09,259 44k INFO Train Epoch: 59 [52%]
2023-03-27 07:57:09,259 44k INFO Losses: [2.2020654678344727, 2.584967613220215, 15.30741024017334, 18.90169906616211, 1.8213540315628052], step: 42600, lr: 9.924035235842533e-05
2023-03-27 07:58:16,599 44k INFO Train Epoch: 59 [79%]
2023-03-27 07:58:16,600 44k INFO Losses: [2.3684847354888916, 2.5879104137420654, 12.604324340820312, 19.400144577026367, 1.7623335123062134], step: 42800, lr: 9.924035235842533e-05
2023-03-27 07:59:07,211 44k INFO ====> Epoch: 59, cost 252.79 s
2023-03-27 07:59:32,450 44k INFO Train Epoch: 60 [7%]
2023-03-27 07:59:32,451 44k INFO Losses: [2.244823455810547, 2.333590507507324, 14.408945083618164, 15.799434661865234, 1.1035412549972534], step: 43000, lr: 9.922794731438052e-05
2023-03-27 07:59:35,459 44k INFO Saving model and optimizer state at iteration 60 to ./logs\44k\G_43000.pth
2023-03-27 07:59:36,225 44k INFO Saving model and optimizer state at iteration 60 to ./logs\44k\D_43000.pth
2023-03-27 07:59:36,926 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_40000.pth
2023-03-27 07:59:36,970 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_40000.pth
2023-03-27 08:00:44,303 44k INFO Train Epoch: 60 [34%]
2023-03-27 08:00:44,304 44k INFO Losses: [2.4016001224517822, 2.434709072113037, 9.852156639099121, 18.884023666381836, 1.2980703115463257], step: 43200, lr: 9.922794731438052e-05
2023-03-27 08:01:51,128 44k INFO Train Epoch: 60 [62%]
2023-03-27 08:01:51,129 44k INFO Losses: [2.3090014457702637, 2.4671285152435303, 11.094226837158203, 15.301861763000488, 1.113794207572937], step: 43400, lr: 9.922794731438052e-05
2023-03-27 08:02:57,964 44k INFO Train Epoch: 60 [89%]
2023-03-27 08:02:57,964 44k INFO Losses: [2.369256019592285, 2.246119737625122, 12.488296508789062, 16.282028198242188, 0.9009151458740234], step: 43600, lr: 9.922794731438052e-05
2023-03-27 08:03:24,635 44k INFO ====> Epoch: 60, cost 257.42 s
2023-03-27 08:04:14,104 44k INFO Train Epoch: 61 [16%]
2023-03-27 08:04:14,105 44k INFO Losses: [2.478708028793335, 2.108610153198242, 9.883505821228027, 18.352388381958008, 1.2135659456253052], step: 43800, lr: 9.921554382096622e-05
2023-03-27 08:05:20,970 44k INFO Train Epoch: 61 [44%]
2023-03-27 08:05:20,970 44k INFO Losses: [2.45816707611084, 2.4192912578582764, 9.402213096618652, 12.988810539245605, 1.2950477600097656], step: 44000, lr: 9.921554382096622e-05
2023-03-27 08:05:23,966 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\G_44000.pth
2023-03-27 08:05:24,664 44k INFO Saving model and optimizer state at iteration 61 to ./logs\44k\D_44000.pth
2023-03-27 08:05:25,341 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_41000.pth
2023-03-27 08:05:25,384 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_41000.pth
2023-03-27 08:06:32,515 44k INFO Train Epoch: 61 [71%]
2023-03-27 08:06:32,515 44k INFO Losses: [2.6138203144073486, 2.613528251647949, 11.933440208435059, 16.9341983795166, 0.9944063425064087], step: 44200, lr: 9.921554382096622e-05
2023-03-27 08:07:39,304 44k INFO Train Epoch: 61 [99%]
2023-03-27 08:07:39,305 44k INFO Losses: [2.2650885581970215, 2.6486053466796875, 10.271162033081055, 14.67381477355957, 0.8736445307731628], step: 44400, lr: 9.921554382096622e-05
2023-03-27 08:07:42,108 44k INFO ====> Epoch: 61, cost 257.47 s
2023-03-27 08:08:55,958 44k INFO Train Epoch: 62 [26%]
2023-03-27 08:08:55,958 44k INFO Losses: [2.49867844581604, 2.6054153442382812, 11.679630279541016, 16.433103561401367, 1.1053733825683594], step: 44600, lr: 9.92031418779886e-05
2023-03-27 08:10:02,677 44k INFO Train Epoch: 62 [54%]
2023-03-27 08:10:02,677 44k INFO Losses: [2.633824348449707, 2.028231143951416, 10.854639053344727, 19.667694091796875, 0.9255536198616028], step: 44800, lr: 9.92031418779886e-05
2023-03-27 08:11:09,920 44k INFO Train Epoch: 62 [81%]
2023-03-27 08:11:09,921 44k INFO Losses: [2.7703325748443604, 2.029719829559326, 6.438015937805176, 13.480875968933105, 0.780036211013794], step: 45000, lr: 9.92031418779886e-05
2023-03-27 08:11:12,938 44k INFO Saving model and optimizer state at iteration 62 to ./logs\44k\G_45000.pth
2023-03-27 08:11:13,641 44k INFO Saving model and optimizer state at iteration 62 to ./logs\44k\D_45000.pth
2023-03-27 08:11:14,336 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_42000.pth
2023-03-27 08:11:14,375 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_42000.pth
2023-03-27 08:11:59,596 44k INFO ====> Epoch: 62, cost 257.49 s
2023-03-27 08:12:30,347 44k INFO Train Epoch: 63 [9%]
2023-03-27 08:12:30,347 44k INFO Losses: [2.577272415161133, 2.04129695892334, 6.650623798370361, 13.921662330627441, 0.8253363370895386], step: 45200, lr: 9.919074148525384e-05
2023-03-27 08:13:37,846 44k INFO Train Epoch: 63 [36%]
2023-03-27 08:13:37,846 44k INFO Losses: [2.4639358520507812, 2.2167930603027344, 10.774556159973145, 16.839433670043945, 1.022517204284668], step: 45400, lr: 9.919074148525384e-05
2023-03-27 08:14:44,593 44k INFO Train Epoch: 63 [64%]
2023-03-27 08:14:44,593 44k INFO Losses: [2.2040042877197266, 2.5396769046783447, 11.931198120117188, 17.912832260131836, 1.0549203157424927], step: 45600, lr: 9.919074148525384e-05
2023-03-27 08:15:51,758 44k INFO Train Epoch: 63 [91%]
2023-03-27 08:15:51,758 44k INFO Losses: [2.804457426071167, 1.9974802732467651, 10.67963981628418, 14.012529373168945, 1.4511919021606445], step: 45800, lr: 9.919074148525384e-05
2023-03-27 08:16:13,083 44k INFO ====> Epoch: 63, cost 253.49 s
2023-03-27 08:17:07,998 44k INFO Train Epoch: 64 [19%]
2023-03-27 08:17:07,998 44k INFO Losses: [2.355602979660034, 2.22060489654541, 11.917518615722656, 16.96002769470215, 0.8676804900169373], step: 46000, lr: 9.917834264256819e-05
2023-03-27 08:17:10,921 44k INFO Saving model and optimizer state at iteration 64 to ./logs\44k\G_46000.pth
2023-03-27 08:17:11,626 44k INFO Saving model and optimizer state at iteration 64 to ./logs\44k\D_46000.pth
2023-03-27 08:17:12,341 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_43000.pth
2023-03-27 08:17:12,379 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_43000.pth
2023-03-27 08:18:19,138 44k INFO Train Epoch: 64 [46%]
2023-03-27 08:18:19,138 44k INFO Losses: [2.590341091156006, 2.1801178455352783, 12.155670166015625, 17.905271530151367, 1.255314588546753], step: 46200, lr: 9.917834264256819e-05
2023-03-27 08:19:26,392 44k INFO Train Epoch: 64 [74%]
2023-03-27 08:19:26,392 44k INFO Losses: [2.574697494506836, 2.337441921234131, 9.293381690979004, 15.470038414001465, 1.1327518224716187], step: 46400, lr: 9.917834264256819e-05
2023-03-27 08:20:30,555 44k INFO ====> Epoch: 64, cost 257.47 s
2023-03-27 08:20:42,442 44k INFO Train Epoch: 65 [1%]
2023-03-27 08:20:42,442 44k INFO Losses: [2.417219638824463, 2.2131693363189697, 12.746596336364746, 19.03409767150879, 0.949775755405426], step: 46600, lr: 9.916594534973787e-05
2023-03-27 08:21:49,793 44k INFO Train Epoch: 65 [29%]
2023-03-27 08:21:49,794 44k INFO Losses: [2.751617431640625, 2.228302240371704, 9.095353126525879, 18.294748306274414, 1.3548369407653809], step: 46800, lr: 9.916594534973787e-05
2023-03-27 08:22:56,747 44k INFO Train Epoch: 65 [56%]
2023-03-27 08:22:56,748 44k INFO Losses: [2.7602882385253906, 1.9956519603729248, 5.618566513061523, 12.588948249816895, 1.1053184270858765], step: 47000, lr: 9.916594534973787e-05
2023-03-27 08:22:59,799 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\G_47000.pth
2023-03-27 08:23:00,501 44k INFO Saving model and optimizer state at iteration 65 to ./logs\44k\D_47000.pth
2023-03-27 08:23:01,194 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_44000.pth
2023-03-27 08:23:01,238 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_44000.pth
2023-03-27 08:24:08,270 44k INFO Train Epoch: 65 [84%]
2023-03-27 08:24:08,270 44k INFO Losses: [2.9330339431762695, 1.6999547481536865, 2.4728968143463135, 7.893375873565674, 1.22569739818573], step: 47200, lr: 9.916594534973787e-05
2023-03-27 08:24:48,316 44k INFO ====> Epoch: 65, cost 257.76 s
2023-03-27 08:25:24,278 44k INFO Train Epoch: 66 [11%]
2023-03-27 08:25:24,278 44k INFO Losses: [2.2653191089630127, 2.2766952514648438, 13.437915802001953, 19.091768264770508, 1.3249320983886719], step: 47400, lr: 9.915354960656915e-05
2023-03-27 08:26:31,644 44k INFO Train Epoch: 66 [38%]
2023-03-27 08:26:31,644 44k INFO Losses: [2.3334031105041504, 2.3181309700012207, 10.529729843139648, 18.511070251464844, 0.6074761748313904], step: 47600, lr: 9.915354960656915e-05
2023-03-27 08:27:38,759 44k INFO Train Epoch: 66 [66%]
2023-03-27 08:27:38,759 44k INFO Losses: [2.6001267433166504, 2.0757830142974854, 6.5456109046936035, 14.123077392578125, 0.8816099166870117], step: 47800, lr: 9.915354960656915e-05
2023-03-27 08:28:45,841 44k INFO Train Epoch: 66 [93%]
2023-03-27 08:28:45,841 44k INFO Losses: [2.35400390625, 2.3757007122039795, 12.253948211669922, 17.04297637939453, 0.9455040097236633], step: 48000, lr: 9.915354960656915e-05
2023-03-27 08:28:48,853 44k INFO Saving model and optimizer state at iteration 66 to ./logs\44k\G_48000.pth
2023-03-27 08:28:49,554 44k INFO Saving model and optimizer state at iteration 66 to ./logs\44k\D_48000.pth
2023-03-27 08:28:50,230 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_45000.pth
2023-03-27 08:28:50,270 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_45000.pth
2023-03-27 08:29:06,060 44k INFO ====> Epoch: 66, cost 257.74 s
2023-03-27 08:30:06,554 44k INFO Train Epoch: 67 [21%]
2023-03-27 08:30:06,554 44k INFO Losses: [2.4292476177215576, 2.2898545265197754, 9.13322639465332, 19.369871139526367, 1.3948007822036743], step: 48200, lr: 9.914115541286833e-05
2023-03-27 08:31:13,137 44k INFO Train Epoch: 67 [48%]
2023-03-27 08:31:13,137 44k INFO Losses: [2.439253568649292, 2.4068827629089355, 10.119111061096191, 17.691139221191406, 1.0451233386993408], step: 48400, lr: 9.914115541286833e-05
2023-03-27 08:32:20,659 44k INFO Train Epoch: 67 [76%]
2023-03-27 08:32:20,659 44k INFO Losses: [2.3717217445373535, 2.5542309284210205, 11.383543014526367, 19.37965965270996, 0.9877816438674927], step: 48600, lr: 9.914115541286833e-05
2023-03-27 08:33:19,387 44k INFO ====> Epoch: 67, cost 253.33 s
2023-03-27 08:33:36,777 44k INFO Train Epoch: 68 [3%]
2023-03-27 08:33:36,778 44k INFO Losses: [2.4363417625427246, 1.970969319343567, 10.106698036193848, 16.813566207885742, 1.3568806648254395], step: 48800, lr: 9.912876276844171e-05
2023-03-27 08:34:44,192 44k INFO Train Epoch: 68 [31%]
2023-03-27 08:34:44,193 44k INFO Losses: [2.514279365539551, 2.2566275596618652, 14.26534366607666, 18.72598648071289, 1.4553413391113281], step: 49000, lr: 9.912876276844171e-05
2023-03-27 08:34:47,161 44k INFO Saving model and optimizer state at iteration 68 to ./logs\44k\G_49000.pth
2023-03-27 08:34:47,874 44k INFO Saving model and optimizer state at iteration 68 to ./logs\44k\D_49000.pth
2023-03-27 08:34:48,558 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_46000.pth
2023-03-27 08:34:48,601 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_46000.pth
2023-03-27 08:35:55,536 44k INFO Train Epoch: 68 [58%]
2023-03-27 08:35:55,536 44k INFO Losses: [2.324829339981079, 2.2598965167999268, 13.1254301071167, 19.971885681152344, 1.4181081056594849], step: 49200, lr: 9.912876276844171e-05
2023-03-27 08:37:02,530 44k INFO Train Epoch: 68 [86%]
2023-03-27 08:37:02,530 44k INFO Losses: [2.258756637573242, 2.5916836261749268, 16.431076049804688, 20.990314483642578, 1.4004255533218384], step: 49400, lr: 9.912876276844171e-05
2023-03-27 08:37:37,090 44k INFO ====> Epoch: 68, cost 257.70 s
2023-03-27 08:38:18,461 44k INFO Train Epoch: 69 [13%]
2023-03-27 08:38:18,461 44k INFO Losses: [2.5158984661102295, 2.223151683807373, 12.039804458618164, 17.176530838012695, 1.3132669925689697], step: 49600, lr: 9.911637167309565e-05
2023-03-27 08:39:25,534 44k INFO Train Epoch: 69 [41%]
2023-03-27 08:39:25,534 44k INFO Losses: [2.5750861167907715, 2.4297473430633545, 10.819536209106445, 17.961881637573242, 1.4946011304855347], step: 49800, lr: 9.911637167309565e-05
2023-03-27 08:40:32,509 44k INFO Train Epoch: 69 [68%]
2023-03-27 08:40:32,509 44k INFO Losses: [2.4331626892089844, 2.306232452392578, 12.642422676086426, 18.909610748291016, 1.103945016860962], step: 50000, lr: 9.911637167309565e-05
2023-03-27 08:40:35,568 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\G_50000.pth
2023-03-27 08:40:36,323 44k INFO Saving model and optimizer state at iteration 69 to ./logs\44k\D_50000.pth
2023-03-27 08:40:37,000 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_47000.pth
2023-03-27 08:40:37,036 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_47000.pth
2023-03-27 08:41:43,882 44k INFO Train Epoch: 69 [96%]
2023-03-27 08:41:43,882 44k INFO Losses: [2.514343738555908, 2.2019259929656982, 4.496665954589844, 8.232398986816406, 0.6628571152687073], step: 50200, lr: 9.911637167309565e-05
2023-03-27 08:41:54,465 44k INFO ====> Epoch: 69, cost 257.38 s
2023-03-27 08:43:00,368 44k INFO Train Epoch: 70 [23%]
2023-03-27 08:43:00,369 44k INFO Losses: [2.3283638954162598, 2.360450029373169, 11.544236183166504, 17.859086990356445, 0.4431965947151184], step: 50400, lr: 9.910398212663652e-05
2023-03-27 08:44:06,663 44k INFO Train Epoch: 70 [51%]
2023-03-27 08:44:06,664 44k INFO Losses: [2.1192874908447266, 2.6293697357177734, 10.26603889465332, 15.43351936340332, 0.9887738823890686], step: 50600, lr: 9.910398212663652e-05
2023-03-27 08:45:13,890 44k INFO Train Epoch: 70 [78%]
2023-03-27 08:45:13,890 44k INFO Losses: [2.2911312580108643, 2.364351987838745, 10.513965606689453, 16.594308853149414, 1.1489366292953491], step: 50800, lr: 9.910398212663652e-05
2023-03-27 08:46:07,172 44k INFO ====> Epoch: 70, cost 252.71 s
2023-03-27 08:46:29,790 44k INFO Train Epoch: 71 [5%]
2023-03-27 08:46:29,790 44k INFO Losses: [2.380854606628418, 2.227107048034668, 12.712158203125, 16.282180786132812, 1.47393000125885], step: 51000, lr: 9.909159412887068e-05
2023-03-27 08:46:32,884 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\G_51000.pth
2023-03-27 08:46:33,600 44k INFO Saving model and optimizer state at iteration 71 to ./logs\44k\D_51000.pth
2023-03-27 08:46:34,327 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_48000.pth
2023-03-27 08:46:34,371 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_48000.pth
2023-03-27 08:47:41,781 44k INFO Train Epoch: 71 [33%]
2023-03-27 08:47:41,782 44k INFO Losses: [2.245232105255127, 2.650056838989258, 10.456927299499512, 15.763895988464355, 1.509503960609436], step: 51200, lr: 9.909159412887068e-05
2023-03-27 08:48:48,760 44k INFO Train Epoch: 71 [60%]
2023-03-27 08:48:48,761 44k INFO Losses: [2.4527876377105713, 2.2343475818634033, 11.567914962768555, 18.552228927612305, 0.588832676410675], step: 51400, lr: 9.909159412887068e-05
2023-03-27 08:49:55,695 44k INFO Train Epoch: 71 [88%]
2023-03-27 08:49:55,695 44k INFO Losses: [2.4599967002868652, 2.8101401329040527, 13.028704643249512, 18.768552780151367, 1.428633451461792], step: 51600, lr: 9.909159412887068e-05
2023-03-27 08:50:25,149 44k INFO ====> Epoch: 71, cost 257.98 s
2023-03-27 08:51:12,071 44k INFO Train Epoch: 72 [15%]
2023-03-27 08:51:12,071 44k INFO Losses: [2.612532138824463, 1.9285584688186646, 7.47923469543457, 13.984415054321289, 0.7528865337371826], step: 51800, lr: 9.907920767960457e-05
2023-03-27 08:52:19,101 44k INFO Train Epoch: 72 [43%]
2023-03-27 08:52:19,101 44k INFO Losses: [2.1794114112854004, 2.6065738201141357, 15.174079895019531, 19.188138961791992, 1.353792428970337], step: 52000, lr: 9.907920767960457e-05
2023-03-27 08:52:22,161 44k INFO Saving model and optimizer state at iteration 72 to ./logs\44k\G_52000.pth
2023-03-27 08:52:22,874 44k INFO Saving model and optimizer state at iteration 72 to ./logs\44k\D_52000.pth
2023-03-27 08:52:23,614 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_49000.pth
2023-03-27 08:52:23,643 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_49000.pth
2023-03-27 08:53:30,801 44k INFO Train Epoch: 72 [70%]
2023-03-27 08:53:30,801 44k INFO Losses: [2.406280279159546, 2.214538097381592, 9.801506996154785, 19.0402774810791, 1.0820358991622925], step: 52200, lr: 9.907920767960457e-05
2023-03-27 08:54:37,657 44k INFO Train Epoch: 72 [98%]
2023-03-27 08:54:37,658 44k INFO Losses: [2.6462512016296387, 2.0470852851867676, 6.258289813995361, 12.651925086975098, 1.5252031087875366], step: 52400, lr: 9.907920767960457e-05
2023-03-27 08:54:43,100 44k INFO ====> Epoch: 72, cost 257.95 s
2023-03-27 08:55:54,357 44k INFO Train Epoch: 73 [25%]
2023-03-27 08:55:54,357 44k INFO Losses: [2.1774866580963135, 2.7009177207946777, 10.36209774017334, 14.559038162231445, 1.0799500942230225], step: 52600, lr: 9.906682277864462e-05
2023-03-27 08:57:00,952 44k INFO Train Epoch: 73 [53%]
2023-03-27 08:57:00,952 44k INFO Losses: [2.44010853767395, 2.3556549549102783, 10.437493324279785, 15.935498237609863, 1.107322096824646], step: 52800, lr: 9.906682277864462e-05
2023-03-27 08:58:08,333 44k INFO Train Epoch: 73 [80%]
2023-03-27 08:58:08,334 44k INFO Losses: [2.380119800567627, 2.2932958602905273, 8.417401313781738, 18.55266571044922, 1.1677244901657104], step: 53000, lr: 9.906682277864462e-05
2023-03-27 08:58:11,398 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\G_53000.pth
2023-03-27 08:58:12,120 44k INFO Saving model and optimizer state at iteration 73 to ./logs\44k\D_53000.pth
2023-03-27 08:58:12,824 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_50000.pth
2023-03-27 08:58:12,862 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_50000.pth
2023-03-27 08:59:00,859 44k INFO ====> Epoch: 73, cost 257.76 s
2023-03-27 08:59:28,806 44k INFO Train Epoch: 74 [8%]
2023-03-27 08:59:28,806 44k INFO Losses: [2.4849491119384766, 2.3189971446990967, 10.498376846313477, 18.538747787475586, 1.192650318145752], step: 53200, lr: 9.905443942579728e-05
2023-03-27 09:00:36,271 44k INFO Train Epoch: 74 [35%]
2023-03-27 09:00:36,272 44k INFO Losses: [2.5191385746002197, 2.2135612964630127, 11.77253532409668, 16.3004207611084, 0.9401774406433105], step: 53400, lr: 9.905443942579728e-05
2023-03-27 09:01:43,200 44k INFO Train Epoch: 74 [63%]
2023-03-27 09:01:43,200 44k INFO Losses: [2.576526641845703, 2.1391115188598633, 10.905852317810059, 16.06879425048828, 0.9409028887748718], step: 53600, lr: 9.905443942579728e-05
2023-03-27 09:02:50,326 44k INFO Train Epoch: 74 [90%]
2023-03-27 09:02:50,326 44k INFO Losses: [2.6117124557495117, 2.2026305198669434, 9.521074295043945, 19.9237060546875, 1.4228111505508423], step: 53800, lr: 9.905443942579728e-05
2023-03-27 09:03:14,449 44k INFO ====> Epoch: 74, cost 253.59 s
2023-03-27 09:04:06,928 44k INFO Train Epoch: 75 [18%]
2023-03-27 09:04:06,929 44k INFO Losses: [2.6445083618164062, 2.309598207473755, 11.815102577209473, 16.004886627197266, 1.1979665756225586], step: 54000, lr: 9.904205762086905e-05
2023-03-27 09:04:10,100 44k INFO Saving model and optimizer state at iteration 75 to ./logs\44k\G_54000.pth
2023-03-27 09:04:10,833 44k INFO Saving model and optimizer state at iteration 75 to ./logs\44k\D_54000.pth
2023-03-27 09:04:11,516 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_51000.pth
2023-03-27 09:04:11,547 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_51000.pth
2023-03-27 09:05:18,318 44k INFO Train Epoch: 75 [45%]
2023-03-27 09:05:18,319 44k INFO Losses: [2.4360475540161133, 2.3844707012176514, 7.995604038238525, 13.082183837890625, 1.0375971794128418], step: 54200, lr: 9.904205762086905e-05
2023-03-27 09:06:25,696 44k INFO Train Epoch: 75 [73%]
2023-03-27 09:06:25,697 44k INFO Losses: [2.2689263820648193, 2.360481023788452, 14.13078498840332, 18.41478157043457, 0.592846155166626], step: 54400, lr: 9.904205762086905e-05
2023-03-27 09:07:32,745 44k INFO ====> Epoch: 75, cost 258.30 s
2023-03-27 09:07:42,142 44k INFO Train Epoch: 76 [0%]
2023-03-27 09:07:42,143 44k INFO Losses: [2.5342416763305664, 2.6253576278686523, 8.630722999572754, 14.879083633422852, 1.5597903728485107], step: 54600, lr: 9.902967736366644e-05
2023-03-27 09:08:49,771 44k INFO Train Epoch: 76 [27%]
2023-03-27 09:08:49,771 44k INFO Losses: [2.5071299076080322, 2.3492166996002197, 9.460559844970703, 16.63587760925293, 0.7479650378227234], step: 54800, lr: 9.902967736366644e-05
2023-03-27 09:09:56,674 44k INFO Train Epoch: 76 [55%]
2023-03-27 09:09:56,675 44k INFO Losses: [2.448258638381958, 2.424220085144043, 11.648221969604492, 18.77943992614746, 0.9916595816612244], step: 55000, lr: 9.902967736366644e-05
2023-03-27 09:09:59,640 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\G_55000.pth
2023-03-27 09:10:00,414 44k INFO Saving model and optimizer state at iteration 76 to ./logs\44k\D_55000.pth
2023-03-27 09:10:01,098 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_52000.pth
2023-03-27 09:10:01,127 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_52000.pth
2023-03-27 09:11:08,398 44k INFO Train Epoch: 76 [82%]
2023-03-27 09:11:08,399 44k INFO Losses: [2.4801511764526367, 2.0579822063446045, 10.920303344726562, 14.480094909667969, 0.7020146250724792], step: 55200, lr: 9.902967736366644e-05
2023-03-27 09:11:51,209 44k INFO ====> Epoch: 76, cost 258.46 s
2023-03-27 09:12:24,740 44k INFO Train Epoch: 77 [10%]
2023-03-27 09:12:24,740 44k INFO Losses: [2.4954776763916016, 2.161871910095215, 13.287503242492676, 19.668434143066406, 1.184342622756958], step: 55400, lr: 9.901729865399597e-05
2023-03-27 09:13:32,346 44k INFO Train Epoch: 77 [37%]
2023-03-27 09:13:32,347 44k INFO Losses: [2.6736936569213867, 2.3389627933502197, 8.611727714538574, 13.857053756713867, 1.114434003829956], step: 55600, lr: 9.901729865399597e-05
2023-03-27 09:14:39,481 44k INFO Train Epoch: 77 [65%]
2023-03-27 09:14:39,481 44k INFO Losses: [2.6568355560302734, 2.1494619846343994, 9.386954307556152, 16.363704681396484, 1.428241491317749], step: 55800, lr: 9.901729865399597e-05
2023-03-27 09:15:46,951 44k INFO Train Epoch: 77 [92%]
2023-03-27 09:15:46,951 44k INFO Losses: [2.571241617202759, 2.1371710300445557, 10.406064987182617, 16.92513656616211, 1.3157871961593628], step: 56000, lr: 9.901729865399597e-05
2023-03-27 09:15:49,929 44k INFO Saving model and optimizer state at iteration 77 to ./logs\44k\G_56000.pth
2023-03-27 09:15:50,642 44k INFO Saving model and optimizer state at iteration 77 to ./logs\44k\D_56000.pth
2023-03-27 09:15:51,326 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_53000.pth
2023-03-27 09:15:51,356 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_53000.pth
2023-03-27 09:16:09,915 44k INFO ====> Epoch: 77, cost 258.71 s
2023-03-27 09:17:08,155 44k INFO Train Epoch: 78 [20%]
2023-03-27 09:17:08,156 44k INFO Losses: [2.4894423484802246, 2.0982768535614014, 8.572176933288574, 11.395413398742676, 1.2614377737045288], step: 56200, lr: 9.900492149166423e-05
2023-03-27 09:18:15,346 44k INFO Train Epoch: 78 [47%]
2023-03-27 09:18:15,346 44k INFO Losses: [2.4894745349884033, 2.033205032348633, 12.513121604919434, 17.252788543701172, 1.1480735540390015], step: 56400, lr: 9.900492149166423e-05
2023-03-27 09:19:23,147 44k INFO Train Epoch: 78 [75%]
2023-03-27 09:19:23,147 44k INFO Losses: [2.7561724185943604, 2.347557783126831, 8.912809371948242, 13.371015548706055, 1.0286234617233276], step: 56600, lr: 9.900492149166423e-05
2023-03-27 09:20:25,107 44k INFO ====> Epoch: 78, cost 255.19 s
2023-03-27 09:20:39,749 44k INFO Train Epoch: 79 [2%]
2023-03-27 09:20:39,749 44k INFO Losses: [2.480497360229492, 2.0194153785705566, 10.260211944580078, 15.519022941589355, 0.5704618692398071], step: 56800, lr: 9.899254587647776e-05
2023-03-27 09:21:47,593 44k INFO Train Epoch: 79 [30%]
2023-03-27 09:21:47,594 44k INFO Losses: [2.4960885047912598, 2.5035061836242676, 11.370020866394043, 15.287582397460938, 0.8488313555717468], step: 57000, lr: 9.899254587647776e-05
2023-03-27 09:21:50,612 44k INFO Saving model and optimizer state at iteration 79 to ./logs\44k\G_57000.pth
2023-03-27 09:21:51,326 44k INFO Saving model and optimizer state at iteration 79 to ./logs\44k\D_57000.pth
2023-03-27 09:21:52,003 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_54000.pth
2023-03-27 09:21:52,033 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_54000.pth
2023-03-27 09:22:59,560 44k INFO Train Epoch: 79 [57%]
2023-03-27 09:22:59,560 44k INFO Losses: [2.369487762451172, 2.401801109313965, 12.197699546813965, 19.666080474853516, 0.9156815409660339], step: 57200, lr: 9.899254587647776e-05
2023-03-27 09:24:07,231 44k INFO Train Epoch: 79 [85%]
2023-03-27 09:24:07,232 44k INFO Losses: [2.41656494140625, 2.434971570968628, 9.103594779968262, 15.229981422424316, 1.0728240013122559], step: 57400, lr: 9.899254587647776e-05
2023-03-27 09:24:44,905 44k INFO ====> Epoch: 79, cost 259.80 s
2023-03-27 09:25:23,825 44k INFO Train Epoch: 80 [12%]
2023-03-27 09:25:23,826 44k INFO Losses: [2.6013574600219727, 2.033143997192383, 9.87981128692627, 16.772701263427734, 0.5126125812530518], step: 57600, lr: 9.89801718082432e-05
2023-03-27 09:26:31,611 44k INFO Train Epoch: 80 [40%]
2023-03-27 09:26:31,611 44k INFO Losses: [2.437298536300659, 2.210724115371704, 11.168230056762695, 15.828474044799805, 1.754092812538147], step: 57800, lr: 9.89801718082432e-05
2023-03-27 09:27:39,197 44k INFO Train Epoch: 80 [67%]
2023-03-27 09:27:39,197 44k INFO Losses: [2.7317264080047607, 1.988978624343872, 8.552817344665527, 11.852904319763184, 1.2488106489181519], step: 58000, lr: 9.89801718082432e-05
2023-03-27 09:27:42,258 44k INFO Saving model and optimizer state at iteration 80 to ./logs\44k\G_58000.pth
2023-03-27 09:27:42,961 44k INFO Saving model and optimizer state at iteration 80 to ./logs\44k\D_58000.pth
2023-03-27 09:27:43,629 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_55000.pth
2023-03-27 09:27:43,658 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_55000.pth
2023-03-27 09:28:51,457 44k INFO Train Epoch: 80 [95%]
2023-03-27 09:28:51,457 44k INFO Losses: [2.6654438972473145, 1.8952109813690186, 11.002762794494629, 12.596260070800781, 1.3561766147613525], step: 58200, lr: 9.89801718082432e-05
2023-03-27 09:29:04,813 44k INFO ====> Epoch: 80, cost 259.91 s
2023-03-27 09:30:08,577 44k INFO Train Epoch: 81 [22%]
2023-03-27 09:30:08,577 44k INFO Losses: [2.757061004638672, 2.138958215713501, 8.559090614318848, 15.467161178588867, 0.7661004662513733], step: 58400, lr: 9.896779928676716e-05
2023-03-27 09:31:15,827 44k INFO Train Epoch: 81 [49%]
2023-03-27 09:31:15,828 44k INFO Losses: [2.3698627948760986, 2.256129741668701, 11.400449752807617, 13.041911125183105, 1.1506130695343018], step: 58600, lr: 9.896779928676716e-05
2023-03-27 09:32:23,762 44k INFO Train Epoch: 81 [77%]
2023-03-27 09:32:23,762 44k INFO Losses: [2.2278361320495605, 2.9400899410247803, 7.398653984069824, 11.099531173706055, 1.3681344985961914], step: 58800, lr: 9.896779928676716e-05
2023-03-27 09:33:20,289 44k INFO ====> Epoch: 81, cost 255.48 s
2023-03-27 09:33:40,299 44k INFO Train Epoch: 82 [4%]
2023-03-27 09:33:40,300 44k INFO Losses: [2.5044875144958496, 2.217015504837036, 10.401374816894531, 16.01725959777832, 0.881864070892334], step: 59000, lr: 9.895542831185631e-05
2023-03-27 09:33:43,350 44k INFO Saving model and optimizer state at iteration 82 to ./logs\44k\G_59000.pth
2023-03-27 09:33:44,064 44k INFO Saving model and optimizer state at iteration 82 to ./logs\44k\D_59000.pth
2023-03-27 09:33:44,752 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_56000.pth
2023-03-27 09:33:44,782 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_56000.pth
2023-03-27 09:34:52,751 44k INFO Train Epoch: 82 [32%]
2023-03-27 09:34:52,751 44k INFO Losses: [2.5263190269470215, 2.3537802696228027, 12.890419006347656, 19.255563735961914, 1.4706729650497437], step: 59200, lr: 9.895542831185631e-05
2023-03-27 09:36:00,435 44k INFO Train Epoch: 82 [59%]
2023-03-27 09:36:00,436 44k INFO Losses: [2.501584053039551, 2.37062406539917, 13.14730453491211, 18.04471206665039, 1.194952130317688], step: 59400, lr: 9.895542831185631e-05
2023-03-27 09:37:08,144 44k INFO Train Epoch: 82 [87%]
2023-03-27 09:37:08,144 44k INFO Losses: [2.5431067943573, 2.2430522441864014, 10.63089370727539, 17.376996994018555, 1.565305233001709], step: 59600, lr: 9.895542831185631e-05
2023-03-27 09:37:40,482 44k INFO ====> Epoch: 82, cost 260.19 s
2023-03-27 09:38:25,025 44k INFO Train Epoch: 83 [14%]
2023-03-27 09:38:25,025 44k INFO Losses: [2.6206650733947754, 2.159938097000122, 10.948968887329102, 15.664529800415039, 1.1302636861801147], step: 59800, lr: 9.894305888331732e-05
2023-03-27 09:39:32,738 44k INFO Train Epoch: 83 [42%]
2023-03-27 09:39:32,738 44k INFO Losses: [2.3239502906799316, 2.2825515270233154, 12.042638778686523, 18.646574020385742, 1.213418960571289], step: 60000, lr: 9.894305888331732e-05
2023-03-27 09:39:35,831 44k INFO Saving model and optimizer state at iteration 83 to ./logs\44k\G_60000.pth
2023-03-27 09:39:36,540 44k INFO Saving model and optimizer state at iteration 83 to ./logs\44k\D_60000.pth
2023-03-27 09:39:37,211 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_57000.pth
2023-03-27 09:39:37,257 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_57000.pth
2023-03-27 09:40:45,174 44k INFO Train Epoch: 83 [69%]
2023-03-27 09:40:45,174 44k INFO Losses: [2.3723936080932617, 2.2629446983337402, 15.236505508422852, 16.495887756347656, 1.253631353378296], step: 60200, lr: 9.894305888331732e-05
2023-03-27 09:41:52,769 44k INFO Train Epoch: 83 [97%]
2023-03-27 09:41:52,769 44k INFO Losses: [2.420388698577881, 2.220708131790161, 9.649846076965332, 15.504828453063965, 0.7926158308982849], step: 60400, lr: 9.894305888331732e-05
2023-03-27 09:42:00,900 44k INFO ====> Epoch: 83, cost 260.42 s
2023-03-27 09:43:10,259 44k INFO Train Epoch: 84 [24%]
2023-03-27 09:43:10,259 44k INFO Losses: [2.2579102516174316, 3.0494768619537354, 8.847410202026367, 14.881556510925293, 1.2498337030410767], step: 60600, lr: 9.89306910009569e-05
2023-03-27 09:44:17,534 44k INFO Train Epoch: 84 [52%]
2023-03-27 09:44:17,535 44k INFO Losses: [2.229739189147949, 2.603872537612915, 12.864727020263672, 16.156009674072266, 1.298993468284607], step: 60800, lr: 9.89306910009569e-05
2023-03-27 09:45:25,628 44k INFO Train Epoch: 84 [79%]
2023-03-27 09:45:25,629 44k INFO Losses: [2.0978479385375977, 2.632643699645996, 15.802327156066895, 20.593666076660156, 0.8586452007293701], step: 61000, lr: 9.89306910009569e-05
2023-03-27 09:45:28,656 44k INFO Saving model and optimizer state at iteration 84 to ./logs\44k\G_61000.pth
2023-03-27 09:45:29,359 44k INFO Saving model and optimizer state at iteration 84 to ./logs\44k\D_61000.pth
2023-03-27 09:45:29,991 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_58000.pth
2023-03-27 09:45:30,035 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_58000.pth
2023-03-27 09:46:21,246 44k INFO ====> Epoch: 84, cost 260.35 s
2023-03-27 09:46:46,721 44k INFO Train Epoch: 85 [7%]
2023-03-27 09:46:46,721 44k INFO Losses: [2.446881055831909, 2.3691234588623047, 13.274109840393066, 18.739173889160156, 0.9626705050468445], step: 61200, lr: 9.891832466458178e-05
2023-03-27 09:47:54,829 44k INFO Train Epoch: 85 [34%]
2023-03-27 09:47:54,829 44k INFO Losses: [2.6148881912231445, 2.008669853210449, 10.052166938781738, 15.988804817199707, 0.7912470698356628], step: 61400, lr: 9.891832466458178e-05
2023-03-27 09:49:02,547 44k INFO Train Epoch: 85 [62%]
2023-03-27 09:49:02,548 44k INFO Losses: [2.10653018951416, 2.7517142295837402, 10.341085433959961, 16.993736267089844, 1.4500058889389038], step: 61600, lr: 9.891832466458178e-05
2023-03-27 09:50:10,284 44k INFO Train Epoch: 85 [89%]
2023-03-27 09:50:10,284 44k INFO Losses: [2.5794146060943604, 2.1420674324035645, 9.77205753326416, 15.793490409851074, 1.2589272260665894], step: 61800, lr: 9.891832466458178e-05
2023-03-27 09:50:37,257 44k INFO ====> Epoch: 85, cost 256.01 s
2023-03-27 09:51:27,205 44k INFO Train Epoch: 86 [16%]
2023-03-27 09:51:27,205 44k INFO Losses: [2.645372152328491, 2.064566135406494, 6.887193202972412, 14.77082347869873, 1.1582878828048706], step: 62000, lr: 9.89059598739987e-05
2023-03-27 09:51:30,205 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\G_62000.pth
2023-03-27 09:51:30,916 44k INFO Saving model and optimizer state at iteration 86 to ./logs\44k\D_62000.pth
2023-03-27 09:51:31,607 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_59000.pth
2023-03-27 09:51:31,650 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_59000.pth
2023-03-27 09:52:39,360 44k INFO Train Epoch: 86 [44%]
2023-03-27 09:52:39,360 44k INFO Losses: [2.9559242725372314, 1.8947367668151855, 8.411797523498535, 12.7291259765625, 1.0017719268798828], step: 62200, lr: 9.89059598739987e-05
2023-03-27 09:53:47,362 44k INFO Train Epoch: 86 [71%]
2023-03-27 09:53:47,363 44k INFO Losses: [2.5458288192749023, 2.639307975769043, 11.652161598205566, 12.950305938720703, 1.098060965538025], step: 62400, lr: 9.89059598739987e-05
2023-03-27 09:54:55,207 44k INFO Train Epoch: 86 [99%]
2023-03-27 09:54:55,207 44k INFO Losses: [2.4177370071411133, 2.3100101947784424, 7.988073348999023, 13.904374122619629, 1.2510274648666382], step: 62600, lr: 9.89059598739987e-05
2023-03-27 09:54:58,041 44k INFO ====> Epoch: 86, cost 260.78 s
2023-03-27 09:56:12,578 44k INFO Train Epoch: 87 [26%]
2023-03-27 09:56:12,578 44k INFO Losses: [2.698122978210449, 1.8715286254882812, 6.978188514709473, 15.623985290527344, 1.0246827602386475], step: 62800, lr: 9.889359662901445e-05
2023-03-27 09:57:20,060 44k INFO Train Epoch: 87 [54%]
2023-03-27 09:57:20,060 44k INFO Losses: [2.3434505462646484, 2.4575462341308594, 14.63865852355957, 19.44290542602539, 0.9221001267433167], step: 63000, lr: 9.889359662901445e-05
2023-03-27 09:57:23,076 44k INFO Saving model and optimizer state at iteration 87 to ./logs\44k\G_63000.pth
2023-03-27 09:57:23,776 44k INFO Saving model and optimizer state at iteration 87 to ./logs\44k\D_63000.pth
2023-03-27 09:57:24,458 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_60000.pth
2023-03-27 09:57:24,500 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_60000.pth
2023-03-27 09:58:32,425 44k INFO Train Epoch: 87 [81%]
2023-03-27 09:58:32,425 44k INFO Losses: [2.363724708557129, 2.596806049346924, 7.8439812660217285, 14.056066513061523, 1.124273419380188], step: 63200, lr: 9.889359662901445e-05
2023-03-27 09:59:18,404 44k INFO ====> Epoch: 87, cost 260.36 s
2023-03-27 09:59:49,244 44k INFO Train Epoch: 88 [9%]
2023-03-27 09:59:49,244 44k INFO Losses: [2.5618557929992676, 2.1410717964172363, 11.616302490234375, 18.56657600402832, 1.073987603187561], step: 63400, lr: 9.888123492943583e-05
2023-03-27 10:00:57,509 44k INFO Train Epoch: 88 [36%]
2023-03-27 10:00:57,509 44k INFO Losses: [2.5383176803588867, 2.1512532234191895, 10.708680152893066, 14.692358016967773, 0.8654391169548035], step: 63600, lr: 9.888123492943583e-05
2023-03-27 10:02:05,123 44k INFO Train Epoch: 88 [64%]
2023-03-27 10:02:05,123 44k INFO Losses: [2.676496982574463, 2.2927393913269043, 11.521885871887207, 18.887176513671875, 0.8159588575363159], step: 63800, lr: 9.888123492943583e-05
2023-03-27 10:03:13,007 44k INFO Train Epoch: 88 [91%]
2023-03-27 10:03:13,007 44k INFO Losses: [2.2897891998291016, 2.447648525238037, 6.326834201812744, 11.280462265014648, 1.217918038368225], step: 64000, lr: 9.888123492943583e-05
2023-03-27 10:03:16,107 44k INFO Saving model and optimizer state at iteration 88 to ./logs\44k\G_64000.pth
2023-03-27 10:03:16,820 44k INFO Saving model and optimizer state at iteration 88 to ./logs\44k\D_64000.pth
2023-03-27 10:03:17,510 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_61000.pth
2023-03-27 10:03:17,553 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_61000.pth
2023-03-27 10:03:39,009 44k INFO ====> Epoch: 88, cost 260.61 s
2023-03-27 10:04:34,669 44k INFO Train Epoch: 89 [19%]
2023-03-27 10:04:34,670 44k INFO Losses: [2.652923107147217, 2.200139045715332, 11.767621040344238, 15.182207107543945, 1.0573797225952148], step: 64200, lr: 9.886887477506964e-05
2023-03-27 10:05:42,202 44k INFO Train Epoch: 89 [46%]
2023-03-27 10:05:42,202 44k INFO Losses: [2.611407995223999, 2.226300001144409, 10.234874725341797, 18.68907928466797, 0.6211757063865662], step: 64400, lr: 9.886887477506964e-05
2023-03-27 10:06:50,268 44k INFO Train Epoch: 89 [74%]
2023-03-27 10:06:50,269 44k INFO Losses: [2.533033847808838, 2.7254953384399414, 9.827921867370605, 16.067947387695312, 1.2690056562423706], step: 64600, lr: 9.886887477506964e-05
2023-03-27 10:07:55,302 44k INFO ====> Epoch: 89, cost 256.29 s
2023-03-27 10:08:07,295 44k INFO Train Epoch: 90 [1%]
2023-03-27 10:08:07,295 44k INFO Losses: [2.5049080848693848, 2.2607452869415283, 12.30127239227295, 18.018543243408203, 1.2478681802749634], step: 64800, lr: 9.885651616572276e-05
2023-03-27 10:09:15,418 44k INFO Train Epoch: 90 [29%]
2023-03-27 10:09:15,419 44k INFO Losses: [2.4821276664733887, 2.1922714710235596, 9.178498268127441, 16.43462562561035, 0.6456231474876404], step: 65000, lr: 9.885651616572276e-05
2023-03-27 10:09:18,437 44k INFO Saving model and optimizer state at iteration 90 to ./logs\44k\G_65000.pth
2023-03-27 10:09:19,149 44k INFO Saving model and optimizer state at iteration 90 to ./logs\44k\D_65000.pth
2023-03-27 10:09:19,822 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_62000.pth
2023-03-27 10:09:19,864 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_62000.pth
2023-03-27 10:10:27,743 44k INFO Train Epoch: 90 [56%]
2023-03-27 10:10:27,743 44k INFO Losses: [2.428854465484619, 2.237983226776123, 11.192479133605957, 12.6817045211792, 0.9100015759468079], step: 65200, lr: 9.885651616572276e-05
2023-03-27 10:11:35,609 44k INFO Train Epoch: 90 [84%]
2023-03-27 10:11:35,610 44k INFO Losses: [2.4400575160980225, 2.6023688316345215, 9.13033390045166, 13.045367240905762, 1.1036146879196167], step: 65400, lr: 9.885651616572276e-05
2023-03-27 10:12:16,274 44k INFO ====> Epoch: 90, cost 260.97 s
2023-03-27 10:12:52,660 44k INFO Train Epoch: 91 [11%]
2023-03-27 10:12:52,660 44k INFO Losses: [2.458656072616577, 2.2818825244903564, 9.076275825500488, 17.8560791015625, 1.1643422842025757], step: 65600, lr: 9.884415910120204e-05
2023-03-27 10:14:00,867 44k INFO Train Epoch: 91 [38%]
2023-03-27 10:14:00,867 44k INFO Losses: [2.3943705558776855, 2.008321762084961, 9.396245002746582, 16.650938034057617, 1.015990138053894], step: 65800, lr: 9.884415910120204e-05
2023-03-27 10:15:08,729 44k INFO Train Epoch: 91 [66%]
2023-03-27 10:15:08,729 44k INFO Losses: [2.3966665267944336, 2.3784823417663574, 8.899850845336914, 18.146732330322266, 0.980642557144165], step: 66000, lr: 9.884415910120204e-05
2023-03-27 10:15:11,854 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\G_66000.pth
2023-03-27 10:15:12,565 44k INFO Saving model and optimizer state at iteration 91 to ./logs\44k\D_66000.pth
2023-03-27 10:15:13,203 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_63000.pth
2023-03-27 10:15:13,247 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_63000.pth
2023-03-27 10:16:21,051 44k INFO Train Epoch: 91 [93%]
2023-03-27 10:16:21,051 44k INFO Losses: [2.401566505432129, 2.4048519134521484, 12.53559398651123, 17.551095962524414, 0.7241575717926025], step: 66200, lr: 9.884415910120204e-05
2023-03-27 10:16:37,092 44k INFO ====> Epoch: 91, cost 260.82 s
2023-03-27 10:17:38,379 44k INFO Train Epoch: 92 [21%]
2023-03-27 10:17:38,380 44k INFO Losses: [2.57940673828125, 2.2354397773742676, 10.238306999206543, 16.382198333740234, 1.1249817609786987], step: 66400, lr: 9.883180358131438e-05
2023-03-27 10:18:45,601 44k INFO Train Epoch: 92 [48%]
2023-03-27 10:18:45,601 44k INFO Losses: [2.5629360675811768, 2.3697333335876465, 7.760651588439941, 17.12743377685547, 1.1867032051086426], step: 66600, lr: 9.883180358131438e-05
2023-03-27 10:19:53,691 44k INFO Train Epoch: 92 [76%]
2023-03-27 10:19:53,692 44k INFO Losses: [2.30710768699646, 2.605785846710205, 11.880829811096191, 20.20801544189453, 0.9133543968200684], step: 66800, lr: 9.883180358131438e-05
2023-03-27 10:20:52,926 44k INFO ====> Epoch: 92, cost 255.83 s
2023-03-27 10:21:10,690 44k INFO Train Epoch: 93 [3%]
2023-03-27 10:21:10,691 44k INFO Losses: [2.368762969970703, 2.275001287460327, 8.692131042480469, 14.863801956176758, 1.3694063425064087], step: 67000, lr: 9.881944960586671e-05
2023-03-27 10:21:13,781 44k INFO Saving model and optimizer state at iteration 93 to ./logs\44k\G_67000.pth
2023-03-27 10:21:14,500 44k INFO Saving model and optimizer state at iteration 93 to ./logs\44k\D_67000.pth
2023-03-27 10:21:15,182 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_64000.pth
2023-03-27 10:21:15,211 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_64000.pth
2023-03-27 10:22:23,132 44k INFO Train Epoch: 93 [31%]
2023-03-27 10:22:23,132 44k INFO Losses: [2.4633541107177734, 2.5360605716705322, 12.693946838378906, 18.599523544311523, 1.53462553024292], step: 67200, lr: 9.881944960586671e-05
2023-03-27 10:23:30,942 44k INFO Train Epoch: 93 [58%]
2023-03-27 10:23:30,943 44k INFO Losses: [2.324860095977783, 2.4223039150238037, 13.684011459350586, 19.509849548339844, 1.481391429901123], step: 67400, lr: 9.881944960586671e-05
2023-03-27 10:24:38,805 44k INFO Train Epoch: 93 [86%]
2023-03-27 10:24:38,805 44k INFO Losses: [2.5111193656921387, 2.4296417236328125, 14.481210708618164, 19.52738380432129, 1.2184337377548218], step: 67600, lr: 9.881944960586671e-05
2023-03-27 10:25:13,738 44k INFO ====> Epoch: 93, cost 260.81 s
2023-03-27 10:25:55,738 44k INFO Train Epoch: 94 [13%]
2023-03-27 10:25:55,739 44k INFO Losses: [2.6084084510803223, 2.2933523654937744, 11.146780014038086, 17.421825408935547, 1.5685172080993652], step: 67800, lr: 9.880709717466598e-05
2023-03-27 10:27:03,662 44k INFO Train Epoch: 94 [41%]
2023-03-27 10:27:03,663 44k INFO Losses: [2.8471169471740723, 2.1127469539642334, 9.294934272766113, 16.581998825073242, 1.8182706832885742], step: 68000, lr: 9.880709717466598e-05
2023-03-27 10:27:06,727 44k INFO Saving model and optimizer state at iteration 94 to ./logs\44k\G_68000.pth
2023-03-27 10:27:07,437 44k INFO Saving model and optimizer state at iteration 94 to ./logs\44k\D_68000.pth
2023-03-27 10:27:08,126 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_65000.pth
2023-03-27 10:27:08,157 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_65000.pth
2023-03-27 10:28:16,048 44k INFO Train Epoch: 94 [68%]
2023-03-27 10:28:16,048 44k INFO Losses: [2.6941723823547363, 2.103513240814209, 8.850523948669434, 17.6321964263916, 0.6080607771873474], step: 68200, lr: 9.880709717466598e-05
2023-03-27 10:29:23,884 44k INFO Train Epoch: 94 [96%]
2023-03-27 10:29:23,884 44k INFO Losses: [2.4888651371002197, 2.2022500038146973, 8.31200885772705, 13.02242660522461, 1.215746283531189], step: 68400, lr: 9.880709717466598e-05
2023-03-27 10:29:34,580 44k INFO ====> Epoch: 94, cost 260.84 s
2023-03-27 10:30:41,275 44k INFO Train Epoch: 95 [23%]
2023-03-27 10:30:41,276 44k INFO Losses: [2.1030433177948, 2.4543075561523438, 12.44829273223877, 18.340776443481445, 1.222771406173706], step: 68600, lr: 9.879474628751914e-05
2023-03-27 10:31:48,485 44k INFO Train Epoch: 95 [51%]
2023-03-27 10:31:48,486 44k INFO Losses: [2.3531479835510254, 2.309704065322876, 11.59144401550293, 17.722187042236328, 1.3691909313201904], step: 68800, lr: 9.879474628751914e-05
2023-03-27 10:32:56,689 44k INFO Train Epoch: 95 [78%]
2023-03-27 10:32:56,690 44k INFO Losses: [2.389883518218994, 2.3950564861297607, 11.576066017150879, 19.307968139648438, 0.9323539137840271], step: 69000, lr: 9.879474628751914e-05
2023-03-27 10:32:59,936 44k INFO Saving model and optimizer state at iteration 95 to ./logs\44k\G_69000.pth
2023-03-27 10:33:00,655 44k INFO Saving model and optimizer state at iteration 95 to ./logs\44k\D_69000.pth
2023-03-27 10:33:01,339 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_66000.pth
2023-03-27 10:33:01,379 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_66000.pth
2023-03-27 10:33:55,441 44k INFO ====> Epoch: 95, cost 260.86 s
2023-03-27 10:34:18,465 44k INFO Train Epoch: 96 [5%]
2023-03-27 10:34:18,465 44k INFO Losses: [2.443260669708252, 2.2726147174835205, 12.488019943237305, 19.576217651367188, 1.2582616806030273], step: 69200, lr: 9.87823969442332e-05
2023-03-27 10:35:26,724 44k INFO Train Epoch: 96 [33%]
2023-03-27 10:35:26,725 44k INFO Losses: [2.1468734741210938, 2.5233166217803955, 12.892313957214355, 16.5212345123291, 1.1461820602416992], step: 69400, lr: 9.87823969442332e-05
2023-03-27 10:36:34,493 44k INFO Train Epoch: 96 [60%]
2023-03-27 10:36:34,493 44k INFO Losses: [2.3785758018493652, 2.255988836288452, 12.241409301757812, 18.599990844726562, 0.7781578898429871], step: 69600, lr: 9.87823969442332e-05
2023-03-27 10:37:42,318 44k INFO Train Epoch: 96 [88%]
2023-03-27 10:37:42,318 44k INFO Losses: [2.696678876876831, 2.2925474643707275, 11.508566856384277, 17.313579559326172, 1.0286533832550049], step: 69800, lr: 9.87823969442332e-05
2023-03-27 10:38:12,108 44k INFO ====> Epoch: 96, cost 256.67 s
2023-03-27 10:38:59,610 44k INFO Train Epoch: 97 [15%]
2023-03-27 10:38:59,610 44k INFO Losses: [2.643277406692505, 2.0512044429779053, 7.638540267944336, 12.838200569152832, 0.7692441940307617], step: 70000, lr: 9.877004914461517e-05
2023-03-27 10:39:02,672 44k INFO Saving model and optimizer state at iteration 97 to ./logs\44k\G_70000.pth
2023-03-27 10:39:03,377 44k INFO Saving model and optimizer state at iteration 97 to ./logs\44k\D_70000.pth
2023-03-27 10:39:04,053 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_67000.pth
2023-03-27 10:39:04,083 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_67000.pth
2023-03-27 10:40:11,914 44k INFO Train Epoch: 97 [43%]
2023-03-27 10:40:11,914 44k INFO Losses: [2.168562889099121, 2.708719491958618, 14.94810676574707, 17.953975677490234, 0.8051972985267639], step: 70200, lr: 9.877004914461517e-05
2023-03-27 10:41:20,069 44k INFO Train Epoch: 97 [70%]
2023-03-27 10:41:20,070 44k INFO Losses: [2.499677896499634, 2.1549808979034424, 7.863663673400879, 18.47486686706543, 1.098233699798584], step: 70400, lr: 9.877004914461517e-05
2023-03-27 10:42:27,958 44k INFO Train Epoch: 97 [98%]
2023-03-27 10:42:27,958 44k INFO Losses: [2.8005690574645996, 2.240535259246826, 6.180616855621338, 9.663869857788086, 0.7978716492652893], step: 70600, lr: 9.877004914461517e-05
2023-03-27 10:42:33,424 44k INFO ====> Epoch: 97, cost 261.32 s
2023-03-27 10:43:45,478 44k INFO Train Epoch: 98 [25%]
2023-03-27 10:43:45,478 44k INFO Losses: [2.19970703125, 2.8031203746795654, 6.03167724609375, 11.370527267456055, 1.09547758102417], step: 70800, lr: 9.875770288847208e-05
2023-03-27 10:44:52,899 44k INFO Train Epoch: 98 [53%]
2023-03-27 10:44:52,900 44k INFO Losses: [2.294466972351074, 2.4905385971069336, 12.073990821838379, 17.916563034057617, 1.0834671258926392], step: 71000, lr: 9.875770288847208e-05
2023-03-27 10:44:55,933 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\G_71000.pth
2023-03-27 10:44:56,637 44k INFO Saving model and optimizer state at iteration 98 to ./logs\44k\D_71000.pth
2023-03-27 10:44:57,305 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_68000.pth
2023-03-27 10:44:57,334 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_68000.pth
2023-03-27 10:46:05,264 44k INFO Train Epoch: 98 [80%]
2023-03-27 10:46:05,264 44k INFO Losses: [2.4300363063812256, 2.497838020324707, 9.2059965133667, 18.86980628967285, 1.0573924779891968], step: 71200, lr: 9.875770288847208e-05
2023-03-27 10:46:53,949 44k INFO ====> Epoch: 98, cost 260.52 s
2023-03-27 10:47:22,115 44k INFO Train Epoch: 99 [8%]
2023-03-27 10:47:22,116 44k INFO Losses: [2.602912425994873, 2.3608086109161377, 10.209246635437012, 16.99795913696289, 1.1690219640731812], step: 71400, lr: 9.874535817561101e-05
2023-03-27 10:48:30,223 44k INFO Train Epoch: 99 [35%]
2023-03-27 10:48:30,224 44k INFO Losses: [2.4729363918304443, 2.2970941066741943, 10.366796493530273, 16.536209106445312, 1.2900009155273438], step: 71600, lr: 9.874535817561101e-05
2023-03-27 10:49:37,956 44k INFO Train Epoch: 99 [63%]
2023-03-27 10:49:37,956 44k INFO Losses: [2.5005741119384766, 2.462015151977539, 11.020744323730469, 18.92160415649414, 0.6700186729431152], step: 71800, lr: 9.874535817561101e-05
2023-03-27 10:50:45,847 44k INFO Train Epoch: 99 [90%]
2023-03-27 10:50:45,848 44k INFO Losses: [2.3498570919036865, 2.5182924270629883, 15.790327072143555, 23.053510665893555, 1.3967245817184448], step: 72000, lr: 9.874535817561101e-05
2023-03-27 10:50:48,890 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\G_72000.pth
2023-03-27 10:50:49,640 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\D_72000.pth
2023-03-27 10:50:50,298 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_69000.pth
2023-03-27 10:50:50,330 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_69000.pth
2023-03-27 10:51:14,551 44k INFO ====> Epoch: 99, cost 260.60 s
2023-03-27 10:52:07,442 44k INFO Train Epoch: 100 [18%]
2023-03-27 10:52:07,442 44k INFO Losses: [2.833078384399414, 1.8384217023849487, 9.7644624710083, 13.030872344970703, 0.9471670389175415], step: 72200, lr: 9.873301500583906e-05
2023-03-27 10:53:15,109 44k INFO Train Epoch: 100 [45%]
2023-03-27 10:53:15,109 44k INFO Losses: [2.7899234294891357, 1.903559684753418, 8.450926780700684, 15.736007690429688, 0.8415020108222961], step: 72400, lr: 9.873301500583906e-05
2023-03-27 10:54:23,035 44k INFO Train Epoch: 100 [73%]
2023-03-27 10:54:23,035 44k INFO Losses: [2.52764892578125, 1.9666798114776611, 13.632672309875488, 18.309648513793945, 1.66559898853302], step: 72600, lr: 9.873301500583906e-05
2023-03-27 10:55:30,673 44k INFO ====> Epoch: 100, cost 256.12 s