icefall_asr_aishell4_pruned_transducer_stateless5/log/fast_beam_search/log-decode-iter-36000-avg-8-beam-4-max-contexts-4-max-states-8-use-averaged-model-2022-06-13-20-57-30
2022-06-13 20:57:30,225 INFO [decode.py:497] Decoding started
2022-06-13 20:57:30,225 INFO [decode.py:503] Device: cuda:0
2022-06-13 20:57:30,276 INFO [lexicon.py:176] Loading pre-compiled data/lang_char/Linv.pt
2022-06-13 20:57:30,284 INFO [decode.py:509] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 100, 'valid_interval': 200, 'feature_dim': 80, 'subsampling_factor': 4, 'model_warm_step': 400, 'env_info': {'k2-version': '1.15.1', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'f8d2dba06c000ffee36aab5b66f24e7c9809f116', 'k2-git-date': 'Thu Apr 21 12:20:34 2022', 'lhotse-version': '1.3.0.dev+git.e740036.dirty', 'torch-version': '1.11.0', 'torch-cuda-available': True, 'torch-cuda-version': '10.2', 'python-version': '3.8', 'icefall-git-branch': 'icefall-pruned-rnnt5-aishell4', 'icefall-git-sha1': 'c8cb425-dirty', 'icefall-git-date': 'Wed Jun 8 15:35:53 2022', 'icefall-path': '/ceph-meixu/luomingshuang/icefall', 'k2-path': '/ceph-ms/luomingshuang/k2_latest/k2/python/k2/__init__.py', 'lhotse-path': '/ceph-meixu/luomingshuang/anaconda3/envs/k2-python/lib/python3.8/site-packages/lhotse-1.3.0.dev0+git.e740036.dirty-py3.8.egg/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-3-0602204318-5799c545db-hhjfr', 'IP address': '10.177.24.137'}, 'epoch': 30, 'iter': 36000, 'avg': 8, 'use_averaged_model': True, 'exp_dir': PosixPath('pruned_transducer_stateless5/exp_full'), 'lang_dir': 'data/lang_char', 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 4, 'max_contexts': 4, 'max_states': 8, 'context_size': 2, 'max_sym_per_frame': 1, 'num_encoder_layers': 24, 'dim_feedforward': 1536, 'nhead': 8, 'encoder_dim': 384, 'decoder_dim': 512, 'joiner_dim': 512, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 1500, 'bucketing_sampler': True, 'num_buckets': 300, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'res_dir': PosixPath('pruned_transducer_stateless5/exp_full/fast_beam_search'), 'suffix': 'iter-36000-avg-8-beam-4-max-contexts-4-max-states-8-use-averaged-model', 'blank_id': 0, 'vocab_size': 3284}
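The parameters above describe a fast_beam_search decode of the model averaged up to iteration 36000. A plausible reconstruction of the command that produced this log, assuming the flag names mirror the config keys (the actual invocation is not recorded in the log):

    ./pruned_transducer_stateless5/decode.py \
        --iter 36000 \
        --avg 8 \
        --use-averaged-model True \
        --exp-dir pruned_transducer_stateless5/exp_full \
        --lang-dir data/lang_char \
        --max-duration 1500 \
        --decoding-method fast_beam_search \
        --beam 4 \
        --max-contexts 4 \
        --max-states 8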
2022-06-13 20:57:30,284 INFO [decode.py:511] About to create model
2022-06-13 20:57:30,849 INFO [decode.py:560] Calculating the averaged model over iteration checkpoints from pruned_transducer_stateless5/exp_full/checkpoint-4000.pt (excluded) to pruned_transducer_stateless5/exp_full/checkpoint-36000.pt
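With use_averaged_model=True, the line above combines the cumulative parameter averages stored in the two boundary checkpoints to recover the average over iterations 4001 to 36000. A minimal sketch of that arithmetic, assuming each checkpoint-N.pt stores a running average under 'model_avg' together with its batch count 'batch_idx_train' (key names are assumptions, not shown in this log):

    import torch

    def average_between(ckpt_start: str, ckpt_end: str) -> dict:
        # Cumulative averages: 'model_avg' in checkpoint-N.pt is assumed to be
        # the mean of the model weights over the first N training batches.
        start = torch.load(ckpt_start, map_location="cpu")
        end = torch.load(ckpt_end, map_location="cpu")
        n_start = start["batch_idx_train"]  # e.g. 4000 (excluded endpoint)
        n_end = end["batch_idx_train"]      # e.g. 36000
        avg = {}
        for name, p_end in end["model_avg"].items():
            p_start = start["model_avg"][name]
            # Difference of scaled cumulative means = mean over (n_start, n_end].
            avg[name] = (p_end * n_end - p_start * n_start) / (n_end - n_start)
        return avg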
2022-06-13 20:57:48,458 INFO [decode.py:604] Number of model parameters: 94337552
2022-06-13 20:57:48,458 INFO [asr_datamodule.py:451] About to get test cuts
2022-06-13 20:57:49,089 INFO [asr_datamodule.py:411] About to create test dataloader
2022-06-13 20:57:54,874 INFO [decode.py:408] batch 0/?, cuts processed until now is 182
2022-06-13 20:59:02,232 INFO [decode.py:408] batch 20/?, cuts processed until now is 7428
2022-06-13 20:59:21,934 INFO [decode.py:425] The transcripts are stored in pruned_transducer_stateless5/exp_full/fast_beam_search/recogs-test-beam_4_max_contexts_4_max_states_8-iter-36000-avg-8-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-06-13 20:59:22,202 INFO [utils.py:408] [test-beam_4_max_contexts_4_max_states_8] %WER 29.08% [52534 / 180665, 5499 ins, 14167 del, 32868 sub ]
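The reported WER is the total of insertions, deletions, and substitutions divided by the number of reference tokens; a trivial Python check using the figures from the line above:

    ins, dels, subs, ref_tokens = 5499, 14167, 32868, 180665
    wer = 100.0 * (ins + dels + subs) / ref_tokens
    # 5499 + 14167 + 32868 = 52534 errors; 100 * 52534 / 180665 ≈ 29.08%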
2022-06-13 20:59:22,933 INFO [decode.py:438] Wrote detailed error stats to pruned_transducer_stateless5/exp_full/fast_beam_search/errs-test-beam_4_max_contexts_4_max_states_8-iter-36000-avg-8-beam-4-max-contexts-4-max-states-8-use-averaged-model.txt
2022-06-13 20:59:22,934 INFO [decode.py:455]
For test, WER of different settings are:
beam_4_max_contexts_4_max_states_8 29.08 best for test
2022-06-13 20:59:22,934 INFO [decode.py:635] Done!