TachibanaKimika committed
Commit 05228e5
1 Parent(s): ad7114a

upload kiriha130

kiriha/{G_kiriha_60.pth → G_kiriha_130.pth} RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:97a8171c0c803f4c6775ed8ccc151c1e2b95a68b8d64f10f9446f252947975d5
+ oid sha256:3bd641f14d12d22e24be8cbb6d6d6f40df5d8d33b38326203c4702f7f608e42b
  size 542789405
kiriha/train.log CHANGED
@@ -1836,3 +1836,261 @@
  2023-03-11 22:52:43,743 44k INFO Train Epoch: 100 [80%]
  2023-03-11 22:52:43,743 44k INFO Losses: [2.468312978744507, 2.179637908935547, 8.105508804321289, 19.717819213867188, 0.9637770652770996], step: 49800, lr: 9.875770288847208e-05
  2023-03-11 22:53:20,029 44k INFO ====> Epoch: 100, cost 192.96 s
+ 2023-03-12 13:52:05,616 44k INFO {'train': {'log_interval': 200, 'eval_interval': 1000, 'seed': 1234, 'epochs': 130, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/train.txt', 'validation_files': 'filelists/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'kiriha': 0}, 'model_dir': './logs\\44k'}
+ 2023-03-12 13:52:05,644 44k WARNING git hash values are different. cea6df30(saved) != fd4d47fd(current)
+ 2023-03-12 13:52:07,597 44k INFO Loaded checkpoint './logs\44k\G_49000.pth' (iteration 99)
+ 2023-03-12 13:52:07,904 44k INFO Loaded checkpoint './logs\44k\D_49000.pth' (iteration 99)
+ 2023-03-12 13:52:59,039 44k INFO Train Epoch: 99 [20%]
+ 2023-03-12 13:52:59,040 44k INFO Losses: [2.530569553375244, 2.0833938121795654, 13.01250171661377, 20.13148307800293, 0.669873833656311], step: 49000, lr: 9.875770288847208e-05
+ 2023-03-12 13:53:03,541 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\G_49000.pth
+ 2023-03-12 13:53:04,200 44k INFO Saving model and optimizer state at iteration 99 to ./logs\44k\D_49000.pth
+ 2023-03-12 13:54:18,056 44k INFO Train Epoch: 99 [60%]
+ 2023-03-12 13:54:18,056 44k INFO Losses: [2.436291217803955, 2.2720274925231934, 10.322487831115723, 21.95627784729004, 1.293831467628479], step: 49200, lr: 9.875770288847208e-05
+ 2023-03-12 13:55:31,279 44k INFO Train Epoch: 99 [100%]
+ 2023-03-12 13:55:31,280 44k INFO Losses: [1.7202725410461426, 2.9337515830993652, 13.944252014160156, 23.666324615478516, 1.0533711910247803], step: 49400, lr: 9.875770288847208e-05
+ 2023-03-12 13:55:31,907 44k INFO ====> Epoch: 99, cost 206.29 s
+ 2023-03-12 13:56:48,700 44k INFO Train Epoch: 100 [40%]
+ 2023-03-12 13:56:48,700 44k INFO Losses: [2.2631478309631348, 2.5622010231018066, 10.296052932739258, 19.4139461517334, 0.7779372930526733], step: 49600, lr: 9.874535817561101e-05
+ 2023-03-12 13:57:55,763 44k INFO Train Epoch: 100 [80%]
+ 2023-03-12 13:57:55,763 44k INFO Losses: [2.4001083374023438, 2.2513020038604736, 9.756937980651855, 20.888771057128906, 0.9590399861335754], step: 49800, lr: 9.874535817561101e-05
+ 2023-03-12 13:58:28,931 44k INFO ====> Epoch: 100, cost 177.02 s
+ 2023-03-12 13:59:16,077 44k INFO Train Epoch: 101 [20%]
+ 2023-03-12 13:59:16,078 44k INFO Losses: [2.569282293319702, 2.2517330646514893, 9.476912498474121, 18.263660430908203, 1.1066398620605469], step: 50000, lr: 9.873301500583906e-05
+ 2023-03-12 13:59:18,924 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\G_50000.pth
+ 2023-03-12 13:59:19,567 44k INFO Saving model and optimizer state at iteration 101 to ./logs\44k\D_50000.pth
+ 2023-03-12 13:59:20,188 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_47000.pth
+ 2023-03-12 13:59:20,225 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_47000.pth
+ 2023-03-12 14:00:26,576 44k INFO Train Epoch: 101 [60%]
+ 2023-03-12 14:00:26,576 44k INFO Losses: [2.3101367950439453, 2.6184325218200684, 9.63654613494873, 19.588706970214844, 1.0586316585540771], step: 50200, lr: 9.873301500583906e-05
+ 2023-03-12 14:01:32,235 44k INFO ====> Epoch: 101, cost 183.30 s
+ 2023-03-12 14:01:41,879 44k INFO Train Epoch: 102 [0%]
+ 2023-03-12 14:01:41,879 44k INFO Losses: [2.5946855545043945, 2.2156081199645996, 10.473191261291504, 20.46152114868164, 1.0441690683364868], step: 50400, lr: 9.872067337896332e-05
+ 2023-03-12 14:02:48,056 44k INFO Train Epoch: 102 [40%]
+ 2023-03-12 14:02:48,056 44k INFO Losses: [2.44809889793396, 2.35406494140625, 10.160893440246582, 18.201629638671875, 1.1300632953643799], step: 50600, lr: 9.872067337896332e-05
+ 2023-03-12 14:03:54,244 44k INFO Train Epoch: 102 [80%]
+ 2023-03-12 14:03:54,245 44k INFO Losses: [2.576388359069824, 2.6059000492095947, 6.143028736114502, 12.846582412719727, 1.01841402053833], step: 50800, lr: 9.872067337896332e-05
+ 2023-03-12 14:04:26,797 44k INFO ====> Epoch: 102, cost 174.56 s
+ 2023-03-12 14:05:09,834 44k INFO Train Epoch: 103 [20%]
+ 2023-03-12 14:05:09,834 44k INFO Losses: [2.369149684906006, 2.792853355407715, 7.800324440002441, 16.652008056640625, 1.044224739074707], step: 51000, lr: 9.870833329479095e-05
+ 2023-03-12 14:05:12,632 44k INFO Saving model and optimizer state at iteration 103 to ./logs\44k\G_51000.pth
+ 2023-03-12 14:05:13,268 44k INFO Saving model and optimizer state at iteration 103 to ./logs\44k\D_51000.pth
+ 2023-03-12 14:05:13,873 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_48000.pth
+ 2023-03-12 14:05:13,901 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_48000.pth
+ 2023-03-12 14:06:20,106 44k INFO Train Epoch: 103 [61%]
+ 2023-03-12 14:06:20,106 44k INFO Losses: [2.403494358062744, 2.1617133617401123, 10.792238235473633, 21.960865020751953, 1.0754145383834839], step: 51200, lr: 9.870833329479095e-05
+ 2023-03-12 14:07:25,191 44k INFO ====> Epoch: 103, cost 178.39 s
+ 2023-03-12 14:07:35,451 44k INFO Train Epoch: 104 [1%]
+ 2023-03-12 14:07:35,451 44k INFO Losses: [2.563819408416748, 2.119814872741699, 7.5755228996276855, 16.072433471679688, 0.9295576214790344], step: 51400, lr: 9.86959947531291e-05
+ 2023-03-12 14:08:41,590 44k INFO Train Epoch: 104 [41%]
+ 2023-03-12 14:08:41,590 44k INFO Losses: [2.5848045349121094, 2.2232654094696045, 6.657059669494629, 15.218085289001465, 0.566681444644928], step: 51600, lr: 9.86959947531291e-05
+ 2023-03-12 14:09:47,818 44k INFO Train Epoch: 104 [81%]
+ 2023-03-12 14:09:47,818 44k INFO Losses: [2.596031904220581, 2.1262824535369873, 8.39098072052002, 17.561927795410156, 0.9262499213218689], step: 51800, lr: 9.86959947531291e-05
+ 2023-03-12 14:10:19,691 44k INFO ====> Epoch: 104, cost 174.50 s
+ 2023-03-12 14:11:03,379 44k INFO Train Epoch: 105 [21%]
+ 2023-03-12 14:11:03,379 44k INFO Losses: [2.4478440284729004, 2.453578472137451, 9.974337577819824, 19.7454891204834, 0.9409962296485901], step: 52000, lr: 9.868365775378495e-05
+ 2023-03-12 14:11:06,156 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\G_52000.pth
+ 2023-03-12 14:11:06,807 44k INFO Saving model and optimizer state at iteration 105 to ./logs\44k\D_52000.pth
+ 2023-03-12 14:11:07,409 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_49000.pth
+ 2023-03-12 14:11:07,443 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_49000.pth
+ 2023-03-12 14:12:13,466 44k INFO Train Epoch: 105 [61%]
+ 2023-03-12 14:12:13,466 44k INFO Losses: [2.643007755279541, 2.381148338317871, 8.95638370513916, 21.44439125061035, 1.0273890495300293], step: 52200, lr: 9.868365775378495e-05
+ 2023-03-12 14:13:17,899 44k INFO ====> Epoch: 105, cost 178.21 s
+ 2023-03-12 14:13:28,968 44k INFO Train Epoch: 106 [1%]
+ 2023-03-12 14:13:28,968 44k INFO Losses: [2.376864194869995, 2.3797502517700195, 9.995058059692383, 21.28952980041504, 1.0900520086288452], step: 52400, lr: 9.867132229656573e-05
+ 2023-03-12 14:14:35,236 44k INFO Train Epoch: 106 [41%]
+ 2023-03-12 14:14:35,236 44k INFO Losses: [2.578838348388672, 2.1822407245635986, 10.960586547851562, 19.175676345825195, 0.8294572830200195], step: 52600, lr: 9.867132229656573e-05
+ 2023-03-12 14:15:41,538 44k INFO Train Epoch: 106 [81%]
+ 2023-03-12 14:15:41,538 44k INFO Losses: [2.4425864219665527, 2.2242844104766846, 11.327040672302246, 20.50795555114746, 0.9655824303627014], step: 52800, lr: 9.867132229656573e-05
+ 2023-03-12 14:16:12,727 44k INFO ====> Epoch: 106, cost 174.83 s
+ 2023-03-12 14:16:57,086 44k INFO Train Epoch: 107 [21%]
+ 2023-03-12 14:16:57,086 44k INFO Losses: [2.466730833053589, 2.074704170227051, 13.429642677307129, 23.142955780029297, 1.1753590106964111], step: 53000, lr: 9.865898838127865e-05
+ 2023-03-12 14:16:59,809 44k INFO Saving model and optimizer state at iteration 107 to ./logs\44k\G_53000.pth
+ 2023-03-12 14:17:00,466 44k INFO Saving model and optimizer state at iteration 107 to ./logs\44k\D_53000.pth
+ 2023-03-12 14:17:01,069 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_50000.pth
+ 2023-03-12 14:17:01,107 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_50000.pth
+ 2023-03-12 14:18:07,133 44k INFO Train Epoch: 107 [61%]
+ 2023-03-12 14:18:07,133 44k INFO Losses: [2.6066017150878906, 2.0134224891662598, 8.249110221862793, 16.217613220214844, 0.858394980430603], step: 53200, lr: 9.865898838127865e-05
+ 2023-03-12 14:19:10,987 44k INFO ====> Epoch: 107, cost 178.26 s
+ 2023-03-12 14:19:22,757 44k INFO Train Epoch: 108 [1%]
+ 2023-03-12 14:19:22,757 44k INFO Losses: [2.539820432662964, 2.1354196071624756, 10.3507719039917, 19.979116439819336, 1.278574824333191], step: 53400, lr: 9.864665600773098e-05
+ 2023-03-12 14:20:29,192 44k INFO Train Epoch: 108 [41%]
+ 2023-03-12 14:20:29,193 44k INFO Losses: [2.4852371215820312, 2.478105068206787, 10.57544231414795, 20.714502334594727, 1.380039930343628], step: 53600, lr: 9.864665600773098e-05
+ 2023-03-12 14:21:35,636 44k INFO Train Epoch: 108 [82%]
+ 2023-03-12 14:21:35,636 44k INFO Losses: [2.3605451583862305, 2.227114677429199, 13.681086540222168, 24.1086368560791, 1.3899996280670166], step: 53800, lr: 9.864665600773098e-05
+ 2023-03-12 14:22:06,292 44k INFO ====> Epoch: 108, cost 175.30 s
+ 2023-03-12 14:22:51,606 44k INFO Train Epoch: 109 [22%]
+ 2023-03-12 14:22:51,607 44k INFO Losses: [2.5398969650268555, 2.18524432182312, 5.349452972412109, 16.251239776611328, 1.2095293998718262], step: 54000, lr: 9.863432517573002e-05
+ 2023-03-12 14:22:54,425 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\G_54000.pth
+ 2023-03-12 14:22:55,088 44k INFO Saving model and optimizer state at iteration 109 to ./logs\44k\D_54000.pth
+ 2023-03-12 14:22:55,708 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_51000.pth
+ 2023-03-12 14:22:55,741 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_51000.pth
+ 2023-03-12 14:24:01,960 44k INFO Train Epoch: 109 [62%]
+ 2023-03-12 14:24:01,961 44k INFO Losses: [2.3958356380462646, 2.2142605781555176, 11.136125564575195, 21.745189666748047, 1.0929265022277832], step: 54200, lr: 9.863432517573002e-05
+ 2023-03-12 14:25:05,407 44k INFO ====> Epoch: 109, cost 179.12 s
+ 2023-03-12 14:25:17,763 44k INFO Train Epoch: 110 [2%]
+ 2023-03-12 14:25:17,764 44k INFO Losses: [2.753291606903076, 1.8332041501998901, 5.956881999969482, 17.71404266357422, 1.0322374105453491], step: 54400, lr: 9.862199588508305e-05
+ 2023-03-12 14:26:24,085 44k INFO Train Epoch: 110 [42%]
+ 2023-03-12 14:26:24,086 44k INFO Losses: [2.5468432903289795, 2.18591046333313, 8.624340057373047, 18.00257110595703, 1.1558327674865723], step: 54600, lr: 9.862199588508305e-05
+ 2023-03-12 14:27:30,525 44k INFO Train Epoch: 110 [82%]
+ 2023-03-12 14:27:30,525 44k INFO Losses: [2.801177740097046, 1.7067780494689941, 6.545862197875977, 14.167060852050781, 0.9589190483093262], step: 54800, lr: 9.862199588508305e-05
+ 2023-03-12 14:28:00,532 44k INFO ====> Epoch: 110, cost 175.13 s
+ 2023-03-12 14:28:46,446 44k INFO Train Epoch: 111 [22%]
+ 2023-03-12 14:28:46,446 44k INFO Losses: [2.495169162750244, 2.476816415786743, 7.833529949188232, 18.81466293334961, 0.716306746006012], step: 55000, lr: 9.86096681355974e-05
+ 2023-03-12 14:28:49,185 44k INFO Saving model and optimizer state at iteration 111 to ./logs\44k\G_55000.pth
+ 2023-03-12 14:28:49,833 44k INFO Saving model and optimizer state at iteration 111 to ./logs\44k\D_55000.pth
+ 2023-03-12 14:28:50,440 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_52000.pth
+ 2023-03-12 14:28:50,476 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_52000.pth
+ 2023-03-12 14:29:56,750 44k INFO Train Epoch: 111 [62%]
+ 2023-03-12 14:29:56,750 44k INFO Losses: [2.4858956336975098, 2.1637279987335205, 6.538644790649414, 17.67612648010254, 1.2324265241622925], step: 55200, lr: 9.86096681355974e-05
+ 2023-03-12 14:30:59,527 44k INFO ====> Epoch: 111, cost 179.00 s
+ 2023-03-12 14:31:12,728 44k INFO Train Epoch: 112 [2%]
+ 2023-03-12 14:31:12,729 44k INFO Losses: [2.4887993335723877, 2.034064292907715, 8.165989875793457, 18.961471557617188, 1.074874758720398], step: 55400, lr: 9.859734192708044e-05
+ 2023-03-12 14:32:19,063 44k INFO Train Epoch: 112 [42%]
+ 2023-03-12 14:32:19,063 44k INFO Losses: [2.438242197036743, 2.188232898712158, 9.122795104980469, 20.575374603271484, 1.2448499202728271], step: 55600, lr: 9.859734192708044e-05
+ 2023-03-12 14:33:25,544 44k INFO Train Epoch: 112 [82%]
+ 2023-03-12 14:33:25,545 44k INFO Losses: [2.6073851585388184, 2.3081068992614746, 7.792697429656982, 19.252273559570312, 1.0521364212036133], step: 55800, lr: 9.859734192708044e-05
+ 2023-03-12 14:33:54,921 44k INFO ====> Epoch: 112, cost 175.39 s
+ 2023-03-12 14:34:41,509 44k INFO Train Epoch: 113 [22%]
+ 2023-03-12 14:34:41,510 44k INFO Losses: [2.535154104232788, 1.9380693435668945, 7.487481594085693, 18.76061248779297, 1.2359235286712646], step: 56000, lr: 9.858501725933955e-05
+ 2023-03-12 14:34:44,246 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\G_56000.pth
+ 2023-03-12 14:34:44,893 44k INFO Saving model and optimizer state at iteration 113 to ./logs\44k\D_56000.pth
+ 2023-03-12 14:34:45,508 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_53000.pth
+ 2023-03-12 14:34:45,540 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_53000.pth
+ 2023-03-12 14:35:51,863 44k INFO Train Epoch: 113 [63%]
+ 2023-03-12 14:35:51,864 44k INFO Losses: [2.641195774078369, 2.30829119682312, 7.049264430999756, 15.466882705688477, 0.6948811411857605], step: 56200, lr: 9.858501725933955e-05
+ 2023-03-12 14:36:54,051 44k INFO ====> Epoch: 113, cost 179.13 s
+ 2023-03-12 14:37:08,003 44k INFO Train Epoch: 114 [3%]
+ 2023-03-12 14:37:08,004 44k INFO Losses: [2.753098487854004, 2.0237607955932617, 8.051836967468262, 15.966660499572754, 0.8943933844566345], step: 56400, lr: 9.857269413218213e-05
+ 2023-03-12 14:38:14,553 44k INFO Train Epoch: 114 [43%]
+ 2023-03-12 14:38:14,554 44k INFO Losses: [2.526326894760132, 2.224480152130127, 8.037116050720215, 16.14765167236328, 1.1912078857421875], step: 56600, lr: 9.857269413218213e-05
+ 2023-03-12 14:39:21,111 44k INFO Train Epoch: 114 [83%]
+ 2023-03-12 14:39:21,111 44k INFO Losses: [2.4801909923553467, 2.089843273162842, 9.68632698059082, 18.982738494873047, 1.10832679271698], step: 56800, lr: 9.857269413218213e-05
+ 2023-03-12 14:39:49,826 44k INFO ====> Epoch: 114, cost 175.78 s
+ 2023-03-12 14:40:36,958 44k INFO Train Epoch: 115 [23%]
+ 2023-03-12 14:40:36,958 44k INFO Losses: [2.546854257583618, 2.2685444355010986, 7.71811056137085, 19.523561477661133, 1.2662781476974487], step: 57000, lr: 9.85603725454156e-05
+ 2023-03-12 14:40:39,750 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\G_57000.pth
+ 2023-03-12 14:40:40,396 44k INFO Saving model and optimizer state at iteration 115 to ./logs\44k\D_57000.pth
+ 2023-03-12 14:40:41,014 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_54000.pth
+ 2023-03-12 14:40:41,049 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_54000.pth
+ 2023-03-12 14:41:47,364 44k INFO Train Epoch: 115 [63%]
+ 2023-03-12 14:41:47,364 44k INFO Losses: [2.547548770904541, 2.1659560203552246, 8.272828102111816, 15.195450782775879, 0.6036759614944458], step: 57200, lr: 9.85603725454156e-05
+ 2023-03-12 14:42:48,868 44k INFO ====> Epoch: 115, cost 179.04 s
+ 2023-03-12 14:43:03,384 44k INFO Train Epoch: 116 [3%]
+ 2023-03-12 14:43:03,384 44k INFO Losses: [2.797173261642456, 2.15388560295105, 9.429797172546387, 15.906258583068848, 0.9594957828521729], step: 57400, lr: 9.854805249884741e-05
+ 2023-03-12 14:44:09,863 44k INFO Train Epoch: 116 [43%]
+ 2023-03-12 14:44:09,864 44k INFO Losses: [2.305539131164551, 2.709357500076294, 9.138225555419922, 19.52425765991211, 0.9534539580345154], step: 57600, lr: 9.854805249884741e-05
+ 2023-03-12 14:45:16,443 44k INFO Train Epoch: 116 [83%]
+ 2023-03-12 14:45:16,444 44k INFO Losses: [2.7679221630096436, 2.121371030807495, 8.50774097442627, 19.459548950195312, 1.2488939762115479], step: 57800, lr: 9.854805249884741e-05
+ 2023-03-12 14:45:44,602 44k INFO ====> Epoch: 116, cost 175.73 s
+ 2023-03-12 14:46:32,479 44k INFO Train Epoch: 117 [23%]
+ 2023-03-12 14:46:32,479 44k INFO Losses: [2.34775972366333, 2.195242166519165, 8.913904190063477, 21.135540008544922, 1.3499116897583008], step: 58000, lr: 9.853573399228505e-05
+ 2023-03-12 14:46:35,352 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\G_58000.pth
+ 2023-03-12 14:46:35,999 44k INFO Saving model and optimizer state at iteration 117 to ./logs\44k\D_58000.pth
+ 2023-03-12 14:46:36,639 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_55000.pth
+ 2023-03-12 14:46:36,675 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_55000.pth
+ 2023-03-12 14:47:43,249 44k INFO Train Epoch: 117 [63%]
+ 2023-03-12 14:47:43,250 44k INFO Losses: [2.3396055698394775, 2.369509220123291, 10.00290584564209, 20.154760360717773, 0.4263248145580292], step: 58200, lr: 9.853573399228505e-05
+ 2023-03-12 14:48:44,111 44k INFO ====> Epoch: 117, cost 179.51 s
+ 2023-03-12 14:48:59,334 44k INFO Train Epoch: 118 [3%]
+ 2023-03-12 14:48:59,335 44k INFO Losses: [2.4184534549713135, 2.2449090480804443, 6.167513370513916, 17.603717803955078, 1.045264720916748], step: 58400, lr: 9.8523417025536e-05
+ 2023-03-12 14:50:05,872 44k INFO Train Epoch: 118 [43%]
+ 2023-03-12 14:50:05,873 44k INFO Losses: [2.5406241416931152, 2.1599276065826416, 10.035455703735352, 18.540908813476562, 0.9100748300552368], step: 58600, lr: 9.8523417025536e-05
+ 2023-03-12 14:51:12,438 44k INFO Train Epoch: 118 [84%]
+ 2023-03-12 14:51:12,439 44k INFO Losses: [2.4142167568206787, 2.5588533878326416, 9.41864013671875, 20.910202026367188, 0.8729894161224365], step: 58800, lr: 9.8523417025536e-05
+ 2023-03-12 14:51:39,863 44k INFO ====> Epoch: 118, cost 175.75 s
+ 2023-03-12 14:52:28,599 44k INFO Train Epoch: 119 [24%]
+ 2023-03-12 14:52:28,599 44k INFO Losses: [2.655186176300049, 2.422694683074951, 8.626376152038574, 19.542461395263672, 0.8591475486755371], step: 59000, lr: 9.851110159840781e-05
+ 2023-03-12 14:52:31,430 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\G_59000.pth
+ 2023-03-12 14:52:32,080 44k INFO Saving model and optimizer state at iteration 119 to ./logs\44k\D_59000.pth
+ 2023-03-12 14:52:32,715 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_56000.pth
+ 2023-03-12 14:52:32,752 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_56000.pth
+ 2023-03-12 14:53:39,053 44k INFO Train Epoch: 119 [64%]
+ 2023-03-12 14:53:39,053 44k INFO Losses: [2.4799001216888428, 2.3237011432647705, 9.983625411987305, 19.37213706970215, 1.1400971412658691], step: 59200, lr: 9.851110159840781e-05
+ 2023-03-12 14:54:39,295 44k INFO ====> Epoch: 119, cost 179.43 s
+ 2023-03-12 14:54:55,137 44k INFO Train Epoch: 120 [4%]
+ 2023-03-12 14:54:55,138 44k INFO Losses: [2.637849807739258, 1.9863094091415405, 9.282312393188477, 19.842092514038086, 1.2838584184646606], step: 59400, lr: 9.8498787710708e-05
+ 2023-03-12 14:56:01,674 44k INFO Train Epoch: 120 [44%]
+ 2023-03-12 14:56:01,674 44k INFO Losses: [2.6601662635803223, 2.0398268699645996, 9.083541870117188, 19.529178619384766, 1.3680793046951294], step: 59600, lr: 9.8498787710708e-05
+ 2023-03-12 14:57:08,112 44k INFO Train Epoch: 120 [84%]
+ 2023-03-12 14:57:08,113 44k INFO Losses: [2.4298253059387207, 2.492029905319214, 9.524579048156738, 19.913347244262695, 1.1235295534133911], step: 59800, lr: 9.8498787710708e-05
+ 2023-03-12 14:57:34,732 44k INFO ====> Epoch: 120, cost 175.44 s
+ 2023-03-12 14:58:23,867 44k INFO Train Epoch: 121 [24%]
+ 2023-03-12 14:58:23,868 44k INFO Losses: [2.6557974815368652, 2.1976969242095947, 8.92565631866455, 19.02800750732422, 0.7957410216331482], step: 60000, lr: 9.848647536224416e-05
+ 2023-03-12 14:58:26,708 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\G_60000.pth
+ 2023-03-12 14:58:27,363 44k INFO Saving model and optimizer state at iteration 121 to ./logs\44k\D_60000.pth
+ 2023-03-12 14:58:27,967 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_57000.pth
+ 2023-03-12 14:58:28,001 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_57000.pth
+ 2023-03-12 14:59:34,192 44k INFO Train Epoch: 121 [64%]
+ 2023-03-12 14:59:34,193 44k INFO Losses: [2.667863130569458, 1.8016116619110107, 5.744339942932129, 15.110112190246582, 0.7991032600402832], step: 60200, lr: 9.848647536224416e-05
+ 2023-03-12 15:00:33,567 44k INFO ====> Epoch: 121, cost 178.84 s
+ 2023-03-12 15:00:50,202 44k INFO Train Epoch: 122 [4%]
+ 2023-03-12 15:00:50,203 44k INFO Losses: [2.5602433681488037, 2.0573768615722656, 10.4984712600708, 19.151681900024414, 0.46081700921058655], step: 60400, lr: 9.847416455282387e-05
+ 2023-03-12 15:01:56,429 44k INFO Train Epoch: 122 [44%]
+ 2023-03-12 15:01:56,429 44k INFO Losses: [2.4646615982055664, 2.145195722579956, 11.33735179901123, 18.245716094970703, 1.29243803024292], step: 60600, lr: 9.847416455282387e-05
+ 2023-03-12 15:03:02,845 44k INFO Train Epoch: 122 [84%]
+ 2023-03-12 15:03:02,846 44k INFO Losses: [2.5410327911376953, 2.1605331897735596, 8.100025177001953, 15.76478385925293, 0.9408947825431824], step: 60800, lr: 9.847416455282387e-05
+ 2023-03-12 15:03:28,817 44k INFO ====> Epoch: 122, cost 175.25 s
+ 2023-03-12 15:04:18,549 44k INFO Train Epoch: 123 [24%]
+ 2023-03-12 15:04:18,549 44k INFO Losses: [2.6049482822418213, 2.348824977874756, 6.360328197479248, 16.428133010864258, 1.1749716997146606], step: 61000, lr: 9.846185528225477e-05
+ 2023-03-12 15:04:21,416 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\G_61000.pth
+ 2023-03-12 15:04:22,064 44k INFO Saving model and optimizer state at iteration 123 to ./logs\44k\D_61000.pth
+ 2023-03-12 15:04:22,688 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_58000.pth
+ 2023-03-12 15:04:22,725 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_58000.pth
+ 2023-03-12 15:05:28,923 44k INFO Train Epoch: 123 [65%]
+ 2023-03-12 15:05:28,923 44k INFO Losses: [2.426384925842285, 2.1798641681671143, 7.430263996124268, 19.395584106445312, 0.945557177066803], step: 61200, lr: 9.846185528225477e-05
+ 2023-03-12 15:06:27,791 44k INFO ====> Epoch: 123, cost 178.97 s
+ 2023-03-12 15:06:45,120 44k INFO Train Epoch: 124 [5%]
+ 2023-03-12 15:06:45,120 44k INFO Losses: [2.682405948638916, 2.133742094039917, 9.528481483459473, 21.89979362487793, 1.0059382915496826], step: 61400, lr: 9.84495475503445e-05
+ 2023-03-12 15:07:51,547 44k INFO Train Epoch: 124 [45%]
+ 2023-03-12 15:07:51,548 44k INFO Losses: [2.043673038482666, 2.9120264053344727, 17.44105339050293, 29.509580612182617, 1.66597580909729], step: 61600, lr: 9.84495475503445e-05
+ 2023-03-12 15:08:58,060 44k INFO Train Epoch: 124 [85%]
+ 2023-03-12 15:08:58,060 44k INFO Losses: [2.772627592086792, 1.942929983139038, 7.169674873352051, 18.296823501586914, 0.8164033889770508], step: 61800, lr: 9.84495475503445e-05
+ 2023-03-12 15:09:23,428 44k INFO ====> Epoch: 124, cost 175.64 s
+ 2023-03-12 15:10:13,925 44k INFO Train Epoch: 125 [25%]
+ 2023-03-12 15:10:13,925 44k INFO Losses: [2.43510365486145, 2.2499849796295166, 11.82221794128418, 18.986167907714844, 0.9535871148109436], step: 62000, lr: 9.84372413569007e-05
+ 2023-03-12 15:10:16,731 44k INFO Saving model and optimizer state at iteration 125 to ./logs\44k\G_62000.pth
+ 2023-03-12 15:10:17,378 44k INFO Saving model and optimizer state at iteration 125 to ./logs\44k\D_62000.pth
+ 2023-03-12 15:10:18,002 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_59000.pth
+ 2023-03-12 15:10:18,040 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_59000.pth
+ 2023-03-12 15:11:24,195 44k INFO Train Epoch: 125 [65%]
+ 2023-03-12 15:11:24,195 44k INFO Losses: [2.224346876144409, 2.556025981903076, 11.752298355102539, 21.95399284362793, 0.8409875631332397], step: 62200, lr: 9.84372413569007e-05
+ 2023-03-12 15:12:22,469 44k INFO ====> Epoch: 125, cost 179.04 s
+ 2023-03-12 15:12:40,260 44k INFO Train Epoch: 126 [5%]
+ 2023-03-12 15:12:40,261 44k INFO Losses: [2.652935266494751, 2.099679708480835, 8.035571098327637, 18.075931549072266, 0.9544440507888794], step: 62400, lr: 9.842493670173108e-05
+ 2023-03-12 15:13:46,745 44k INFO Train Epoch: 126 [45%]
+ 2023-03-12 15:13:46,746 44k INFO Losses: [2.646048069000244, 2.1015326976776123, 9.824590682983398, 19.902164459228516, 0.9396716356277466], step: 62600, lr: 9.842493670173108e-05
+ 2023-03-12 15:14:53,029 44k INFO Train Epoch: 126 [85%]
+ 2023-03-12 15:14:53,029 44k INFO Losses: [2.4422974586486816, 2.7184743881225586, 8.526591300964355, 17.8053035736084, 0.5865074992179871], step: 62800, lr: 9.842493670173108e-05
+ 2023-03-12 15:15:17,768 44k INFO ====> Epoch: 126, cost 175.30 s
+ 2023-03-12 15:16:08,916 44k INFO Train Epoch: 127 [25%]
+ 2023-03-12 15:16:08,916 44k INFO Losses: [2.406055212020874, 2.0538723468780518, 10.580684661865234, 19.086185455322266, 0.6580051779747009], step: 63000, lr: 9.841263358464336e-05
+ 2023-03-12 15:16:11,786 44k INFO Saving model and optimizer state at iteration 127 to ./logs\44k\G_63000.pth
+ 2023-03-12 15:16:12,433 44k INFO Saving model and optimizer state at iteration 127 to ./logs\44k\D_63000.pth
+ 2023-03-12 15:16:13,064 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_60000.pth
+ 2023-03-12 15:16:13,098 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_60000.pth
+ 2023-03-12 15:17:19,379 44k INFO Train Epoch: 127 [65%]
+ 2023-03-12 15:17:19,380 44k INFO Losses: [2.5768938064575195, 2.029104709625244, 9.729928970336914, 18.006486892700195, 1.2117549180984497], step: 63200, lr: 9.841263358464336e-05
+ 2023-03-12 15:18:16,877 44k INFO ====> Epoch: 127, cost 179.11 s
+ 2023-03-12 15:18:35,407 44k INFO Train Epoch: 128 [5%]
+ 2023-03-12 15:18:35,408 44k INFO Losses: [2.479503631591797, 2.05698823928833, 11.73935604095459, 18.689525604248047, 1.2549959421157837], step: 63400, lr: 9.840033200544528e-05
+ 2023-03-12 15:19:41,957 44k INFO Train Epoch: 128 [45%]
+ 2023-03-12 15:19:41,957 44k INFO Losses: [2.4276726245880127, 2.2981646060943604, 7.84566068649292, 18.534637451171875, 1.5560003519058228], step: 63600, lr: 9.840033200544528e-05
+ 2023-03-12 15:20:48,403 44k INFO Train Epoch: 128 [86%]
+ 2023-03-12 15:20:48,403 44k INFO Losses: [2.7886271476745605, 2.238522529602051, 6.702478408813477, 17.43375587463379, 1.0344722270965576], step: 63800, lr: 9.840033200544528e-05
+ 2023-03-12 15:21:12,458 44k INFO ====> Epoch: 128, cost 175.58 s
+ 2023-03-12 15:22:04,174 44k INFO Train Epoch: 129 [26%]
+ 2023-03-12 15:22:04,175 44k INFO Losses: [2.583775758743286, 2.6667749881744385, 10.194723129272461, 21.70988655090332, 0.8028169274330139], step: 64000, lr: 9.838803196394459e-05
+ 2023-03-12 15:22:06,999 44k INFO Saving model and optimizer state at iteration 129 to ./logs\44k\G_64000.pth
+ 2023-03-12 15:22:07,693 44k INFO Saving model and optimizer state at iteration 129 to ./logs\44k\D_64000.pth
+ 2023-03-12 15:22:08,299 44k INFO .. Free up space by deleting ckpt ./logs\44k\G_61000.pth
+ 2023-03-12 15:22:08,334 44k INFO .. Free up space by deleting ckpt ./logs\44k\D_61000.pth
+ 2023-03-12 15:23:14,704 44k INFO Train Epoch: 129 [66%]
+ 2023-03-12 15:23:14,704 44k INFO Losses: [2.4136040210723877, 2.4286930561065674, 9.456846237182617, 22.13960838317871, 1.4705806970596313], step: 64200, lr: 9.838803196394459e-05
+ 2023-03-12 15:24:11,676 44k INFO ====> Epoch: 129, cost 179.22 s
+ 2023-03-12 15:24:30,913 44k INFO Train Epoch: 130 [6%]
+ 2023-03-12 15:24:30,914 44k INFO Losses: [2.4987571239471436, 2.181457996368408, 11.313584327697754, 19.64276695251465, 1.1965885162353516], step: 64400, lr: 9.837573345994909e-05
+ 2023-03-12 15:25:37,536 44k INFO Train Epoch: 130 [46%]
+ 2023-03-12 15:25:37,537 44k INFO Losses: [2.809941530227661, 2.0686402320861816, 6.816439151763916, 15.208210945129395, 1.1769635677337646], step: 64600, lr: 9.837573345994909e-05
+ 2023-03-12 15:26:43,923 44k INFO Train Epoch: 130 [86%]
+ 2023-03-12 15:26:43,923 44k INFO Losses: [2.430873155593872, 2.1850247383117676, 8.337047576904297, 18.948827743530273, 0.855047345161438], step: 64800, lr: 9.837573345994909e-05
+ 2023-03-12 15:27:07,294 44k INFO ====> Epoch: 130, cost 175.62 s