icefall_asr_aishell4_pruned_transducer_stateless5/log/fast_beam_search/log-decode-epoch-30-avg-25-beam-4-max-contexts-4-max-states-8-2022-06-13-20-59-55
2022-06-13 20:59:55,743 INFO [decode.py:497] Decoding started
2022-06-13 20:59:55,743 INFO [decode.py:503] Device: cuda:0
2022-06-13 20:59:55,793 INFO [lexicon.py:176] Loading pre-compiled data/lang_char/Linv.pt
2022-06-13 20:59:55,801 INFO [decode.py:509] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 100, 'valid_interval': 200, 'feature_dim': 80, 'subsampling_factor': 4, 'model_warm_step': 400, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.3.0.dev+git.e740036.dirty', 'torch-version': '1.11.0', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'icefall-pruned-rnnt5-aishell4', 'icefall-git-sha1': 'c8cb425-dirty', 'icefall-git-date': 'Wed Jun 8 15:35:53 2022', 'icefall-path': '/ceph-meixu/luomingshuang/icefall', 'k2-path': '/ceph-ms/luomingshuang/k2_latest/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-meixu/luomingshuang/anaconda3/envs/k2-python/lib/python3.8/site-packages/lhotse-1.3.0.dev0+git.e740036.dirty-py3.8.egg/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-3-0602204318-5799c545db-hhjfr', 'IP address': '10.177.24.137'}, 'epoch': 30, 'iter': 0, 'avg': 25, 'use_averaged_model': False, 'exp_dir': PosixPath('pruned_transducer_stateless5/exp_full'), 'lang_dir': 'data/lang_char', 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 4, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_encoder_layers': 24, 'dim_feedforward': 1536, 'nhead': 8, 'encoder_dim': 384, 'decoder_dim': 512, 'joiner_dim': 512, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 1500, 'bucketing_sampler': True, 'num_buckets': 300, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': 
PosixPath('pruned_transducer_stateless5/exp_full/fast_beam_search'), 'suffix': 'epoch-30-avg-25-beam-4-max-contexts-4-max-states-8', 'blank_id': 0, 'vocab_size': 3284}
2022-06-13 20:59:55,801 INFO [decode.py:511] About to create model
2022-06-13 20:59:56,367 INFO [decode.py:540] averaging ['pruned_transducer_stateless5/exp_full/epoch-6.pt', 'pruned_transducer_stateless5/exp_full/epoch-7.pt', 'pruned_transducer_stateless5/exp_full/epoch-8.pt', 'pruned_transducer_stateless5/exp_full/epoch-9.pt', 'pruned_transducer_stateless5/exp_full/epoch-10.pt', 'pruned_transducer_stateless5/exp_full/epoch-11.pt', 'pruned_transducer_stateless5/exp_full/epoch-12.pt', 'pruned_transducer_stateless5/exp_full/epoch-13.pt', 'pruned_transducer_stateless5/exp_full/epoch-14.pt', 'pruned_transducer_stateless5/exp_full/epoch-15.pt', 'pruned_transducer_stateless5/exp_full/epoch-16.pt', 'pruned_transducer_stateless5/exp_full/epoch-17.pt', 'pruned_transducer_stateless5/exp_full/epoch-18.pt', 'pruned_transducer_stateless5/exp_full/epoch-19.pt', 'pruned_transducer_stateless5/exp_full/epoch-20.pt', 'pruned_transducer_stateless5/exp_full/epoch-21.pt', 'pruned_transducer_stateless5/exp_full/epoch-22.pt', 'pruned_transducer_stateless5/exp_full/epoch-23.pt', 'pruned_transducer_stateless5/exp_full/epoch-24.pt', 'pruned_transducer_stateless5/exp_full/epoch-25.pt', 'pruned_transducer_stateless5/exp_full/epoch-26.pt', 'pruned_transducer_stateless5/exp_full/epoch-27.pt', 'pruned_transducer_stateless5/exp_full/epoch-28.pt', 'pruned_transducer_stateless5/exp_full/epoch-29.pt', 'pruned_transducer_stateless5/exp_full/epoch-30.pt']
2022-06-13 21:01:26,065 INFO [decode.py:604] Number of model parameters: 94337552
2022-06-13 21:01:26,066 INFO [asr_datamodule.py:451] About to get test cuts
2022-06-13 21:01:26,635 INFO [asr_datamodule.py:411] About to create test dataloader
2022-06-13 21:01:32,431 INFO [decode.py:408] batch 0/?, cuts processed until now is 182
2022-06-13 21:02:18,709 INFO [decode.py:408] batch 20/?, cuts processed until now is 7428
2022-06-13 21:02:38,375 INFO [decode.py:425] The transcripts are stored in pruned_transducer_stateless5/exp_full/fast_beam_search/recogs-test-beam_4_max_contexts_4_max_states_8-epoch-30-avg-25-beam-4-max-contexts-4-max-states-8.txt
2022-06-13 21:02:38,607 INFO [utils.py:408] [test-beam_4_max_contexts_4_max_states_8] %WER 29.20% [52751 / 180665, 5712 ins, 13931 del, 33108 sub ]
2022-06-13 21:02:39,246 INFO [decode.py:438] Wrote detailed error stats to pruned_transducer_stateless5/exp_full/fast_beam_search/errs-test-beam_4_max_contexts_4_max_states_8-epoch-30-avg-25-beam-4-max-contexts-4-max-states-8.txt
2022-06-13 21:02:39,246 INFO [decode.py:455]
For test, WER of different settings are:
beam_4_max_contexts_4_max_states_8	29.2	best for test
2022-06-13 21:02:39,246 INFO [decode.py:635] Done!