zr_jin committed
Commit
7985317
1 Parent(s): 94c4a98

init commit

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. decoding_result/fast_beam_search/errs-test-clean-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +0 -0
  2. decoding_result/fast_beam_search/errs-test-clean-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +0 -0
  3. decoding_result/fast_beam_search/errs-test-clean-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +0 -0
  4. decoding_result/fast_beam_search/errs-test-other-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +0 -0
  5. decoding_result/fast_beam_search/errs-test-other-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +0 -0
  6. decoding_result/fast_beam_search/errs-test-other-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +0 -0
  7. decoding_result/fast_beam_search/log-decode-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model-2024-10-18-14-13-04 +51 -0
  8. decoding_result/fast_beam_search/log-decode-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model-2024-10-18-19-05-31 +46 -0
  9. decoding_result/fast_beam_search/log-decode-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model-2024-10-18-19-20-24 +46 -0
  10. decoding_result/fast_beam_search/recogs-test-clean-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +0 -0
  11. decoding_result/fast_beam_search/recogs-test-clean-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +0 -0
  12. decoding_result/fast_beam_search/recogs-test-clean-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +0 -0
  13. decoding_result/fast_beam_search/recogs-test-other-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +0 -0
  14. decoding_result/fast_beam_search/recogs-test-other-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +0 -0
  15. decoding_result/fast_beam_search/recogs-test-other-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +0 -0
  16. decoding_result/fast_beam_search/wer-summary-test-clean-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +2 -0
  17. decoding_result/fast_beam_search/wer-summary-test-clean-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +2 -0
  18. decoding_result/fast_beam_search/wer-summary-test-clean-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +2 -0
  19. decoding_result/fast_beam_search/wer-summary-test-other-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +2 -0
  20. decoding_result/fast_beam_search/wer-summary-test-other-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +2 -0
  21. decoding_result/fast_beam_search/wer-summary-test-other-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt +2 -0
  22. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-10_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  23. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-11_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  24. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-12_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  25. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-13_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  26. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-14_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  27. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-15_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  28. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-16_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  29. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-17_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  30. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-18_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  31. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-19_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  32. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-20_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  33. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-21_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  34. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-22_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  35. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-23_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  36. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-24_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  37. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-25_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  38. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-26_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  39. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-27_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  40. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-28_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  41. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-29_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  42. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-5_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  43. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-6_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  44. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-7_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  45. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-8_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  46. decoding_result/greedy_search/errs-test-clean-epoch-30_avg-9_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  47. decoding_result/greedy_search/errs-test-clean-epoch-40_avg-10_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  48. decoding_result/greedy_search/errs-test-clean-epoch-40_avg-11_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  49. decoding_result/greedy_search/errs-test-clean-epoch-40_avg-12_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
  50. decoding_result/greedy_search/errs-test-clean-epoch-40_avg-13_context-2_max-sym-per-frame-1_use-averaged-model.txt +0 -0
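Each result file name above encodes its decoding configuration: the checkpoint epoch, the number of averaged checkpoints, and, for fast_beam_search, the beam settings. A minimal sketch of recovering those fields from a file name; the regex and helper are illustrative, not part of this repo:

```python
import re

# Hypothetical helper: pull epoch/avg/beam back out of a result file name.
# The beam group is optional because greedy_search names omit it.
PATTERN = re.compile(
    r"epoch-(?P<epoch>\d+)_avg-(?P<avg>\d+)(?:_beam-(?P<beam>[0-9.]+))?"
)

def parse_result_name(name: str) -> dict:
    """Return the hyperparameters encoded in a decoding-result file name."""
    m = PATTERN.search(name)
    return {k: v for k, v in m.groupdict().items() if v is not None}

name = ("errs-test-clean-epoch-30_avg-5_beam-20.0"
        "_max-contexts-8_max-states-64_use-averaged-model.txt")
print(parse_result_name(name))  # → {'epoch': '30', 'avg': '5', 'beam': '20.0'}
```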
decoding_result/fast_beam_search/errs-test-clean-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
The diff for this file is too large to render.
decoding_result/fast_beam_search/errs-test-clean-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
The diff for this file is too large to render.
decoding_result/fast_beam_search/errs-test-clean-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
The diff for this file is too large to render.
decoding_result/fast_beam_search/errs-test-other-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
The diff for this file is too large to render.
decoding_result/fast_beam_search/errs-test-other-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
The diff for this file is too large to render.
decoding_result/fast_beam_search/errs-test-other-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
The diff for this file is too large to render.
decoding_result/fast_beam_search/log-decode-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model-2024-10-18-14-13-04 ADDED
@@ -0,0 +1,51 @@
+ 2024-10-18 14:13:04,886 INFO [decode.py:861] Decoding started
+ 2024-10-18 14:13:04,887 INFO [decode.py:867] Device: cuda:0
+ 2024-10-18 14:13:04,889 INFO [decode.py:877] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'ignore_id': -1, 'label_smoothing': 0.1, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '8905c6b481d50c1b040a466aa1602df0474818ec', 'k2-git-date': 'Thu Apr 25 12:26:15 2024', 'lhotse-version': '1.23.0.dev+git.ed5797c1.clean', 'torch-version': '2.1.0+cu121', 'torch-cuda-available': True, 'torch-cuda-version': '12.1', 'python-version': '3.1', 'icefall-git-branch': 'dev/asr/libritts', 'icefall-git-sha1': '7eee6b9e-clean', 'icefall-git-date': 'Sun Oct 13 02:40:04 2024', 'icefall-path': '/mnt/nvme_share/jinzr/miniconda3/envs/osa/lib/python3.10/site-packages/icefall-1.0-py3.10.egg', 'k2-path': '/mnt/nvme_share/jinzr/miniconda3/envs/osa/lib/python3.10/site-packages/k2-1.24.4.dev20240521+cuda12.1.torch2.1.0-py3.10-linux-x86_64.egg/k2/__init__.py', 'lhotse-path': '/mnt/nvme_share/jinzr/miniconda3/envs/osa/lib/python3.10/site-packages/lhotse-1.23.0.dev0+git.ed5797c1.clean-py3.10.egg/lhotse/__init__.py', 'hostname': 'serverx34', 'IP address': '10.0.0.234'}, 'epoch': 30, 'iter': 0, 'avg': 5, 'use_averaged_model': True, 'exp_dir': PosixPath('zipformer/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'use_shallow_fusion': False, 'lm_type': 'rnn', 'lm_scale': 0.3, 'tokens_ngram': 2, 'backoff_id': 500, 'context_score': 2, 'context_file': '', 'skip_scoring': False, 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'attention_decoder_dim': 512, 'attention_decoder_num_layers': 6, 'attention_decoder_attention_dim': 512, 'attention_decoder_num_heads': 8, 'attention_decoder_feedforward_dim': 2048, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'use_transducer': True, 'use_ctc': False, 'use_attention_decoder': False, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 200.0, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'lm_vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('zipformer/exp/fast_beam_search'), 'has_contexts': False, 'suffix': 'epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2024-10-18 14:13:04,889 INFO [decode.py:879] About to create model
+ 2024-10-18 14:13:05,311 INFO [decode.py:946] Calculating the averaged model over epoch range from 25 (excluded) to 30
+ 2024-10-18 14:13:23,182 INFO [decode.py:1040] Number of model parameters: 65549011
+ 2024-10-18 14:13:23,182 INFO [asr_datamodule.py:449] About to get test-clean cuts
+ 2024-10-18 14:13:23,186 INFO [asr_datamodule.py:456] About to get test-other cuts
+ 2024-10-18 14:13:28,409 INFO [decode.py:719] batch 0/?, cuts processed until now is 15
+ 2024-10-18 14:13:44,644 INFO [decode.py:719] batch 20/?, cuts processed until now is 413
+ 2024-10-18 14:13:58,560 INFO [decode.py:719] batch 40/?, cuts processed until now is 847
+ 2024-10-18 14:14:00,516 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([3.7987, 3.9275, 4.1227, 4.2174, 4.3116, 3.6226, 3.8133, 4.1902],
+ device='cuda:0')
+ 2024-10-18 14:14:04,827 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([4.4295, 4.3035, 4.1474, 3.8636], device='cuda:0')
+ 2024-10-18 14:14:11,509 INFO [decode.py:719] batch 60/?, cuts processed until now is 1387
+ 2024-10-18 14:14:22,195 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([6.9426, 6.6575, 6.5043, 6.7586], device='cuda:0')
+ 2024-10-18 14:14:28,367 INFO [decode.py:719] batch 80/?, cuts processed until now is 1743
+ 2024-10-18 14:14:35,056 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([4.4128, 2.5403, 2.8458, 3.3284], device='cuda:0')
+ 2024-10-18 14:14:40,318 INFO [decode.py:719] batch 100/?, cuts processed until now is 2292
+ 2024-10-18 14:14:48,454 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([3.0448, 3.7601, 4.4352, 4.0101], device='cuda:0')
+ 2024-10-18 14:14:55,602 INFO [decode.py:719] batch 120/?, cuts processed until now is 2700
+ 2024-10-18 14:15:06,470 INFO [decode.py:719] batch 140/?, cuts processed until now is 3306
+ 2024-10-18 14:15:18,639 INFO [decode.py:719] batch 160/?, cuts processed until now is 3912
+ 2024-10-18 14:15:27,474 INFO [decode.py:719] batch 180/?, cuts processed until now is 4765
+ 2024-10-18 14:15:33,362 INFO [decode.py:738] The transcripts are stored in zipformer/exp/fast_beam_search/recogs-test-clean-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt
+ 2024-10-18 14:15:33,470 INFO [utils.py:665] [test-clean-fast_beam_search_beam-20.0_max-contexts-8_max-states-64] %WER 2.87% [2605 / 90784, 261 ins, 377 del, 1967 sub ]
+ 2024-10-18 14:15:33,715 INFO [decode.py:760] Wrote detailed error stats to zipformer/exp/fast_beam_search/errs-test-clean-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt
+ 2024-10-18 14:15:33,717 INFO [decode.py:776]
+ For test-clean, WER of different settings are:
+ fast_beam_search_beam-20.0_max-contexts-8_max-states-64 2.87 best for test-clean
+
+ 2024-10-18 14:15:34,848 INFO [decode.py:719] batch 0/?, cuts processed until now is 20
+ 2024-10-18 14:15:47,583 INFO [decode.py:719] batch 20/?, cuts processed until now is 561
+ 2024-10-18 14:15:59,214 INFO [decode.py:719] batch 40/?, cuts processed until now is 1157
+ 2024-10-18 14:16:03,524 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([2.6512, 2.8289, 3.7769, 3.0094], device='cuda:0')
+ 2024-10-18 14:16:09,566 INFO [decode.py:719] batch 60/?, cuts processed until now is 1891
+ 2024-10-18 14:16:24,917 INFO [decode.py:719] batch 80/?, cuts processed until now is 2372
+ 2024-10-18 14:16:30,011 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([2.9642, 3.4401, 3.0575, 3.1019, 3.7088, 3.5234, 3.5131, 2.5965],
+ device='cuda:0')
+ 2024-10-18 14:16:35,943 INFO [decode.py:719] batch 100/?, cuts processed until now is 3102
+ 2024-10-18 14:16:46,969 INFO [decode.py:719] batch 120/?, cuts processed until now is 3821
+ 2024-10-18 14:16:49,000 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([2.6746, 2.0004, 2.2574, 2.2024], device='cuda:0')
+ 2024-10-18 14:16:54,031 INFO [decode.py:719] batch 140/?, cuts processed until now is 4927
+ 2024-10-18 14:17:04,746 INFO [decode.py:738] The transcripts are stored in zipformer/exp/fast_beam_search/recogs-test-other-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt
+ 2024-10-18 14:17:04,842 INFO [utils.py:665] [test-other-fast_beam_search_beam-20.0_max-contexts-8_max-states-64] %WER 5.86% [4420 / 75460, 416 ins, 564 del, 3440 sub ]
+ 2024-10-18 14:17:05,064 INFO [decode.py:760] Wrote detailed error stats to zipformer/exp/fast_beam_search/errs-test-other-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt
+ 2024-10-18 14:17:05,066 INFO [decode.py:776]
+ For test-other, WER of different settings are:
+ fast_beam_search_beam-20.0_max-contexts-8_max-states-64 5.86 best for test-other
+
+ 2024-10-18 14:17:05,066 INFO [decode.py:1082] Done!
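The `[utils.py:665]` lines in the log above report word error rate as (insertions + deletions + substitutions) over the number of reference words. Re-deriving the test-clean figure as a sanity check; the function is a sketch for illustration, not the repo's scoring code:

```python
def wer_percent(ins: int, dels: int, subs: int, ref_words: int) -> float:
    """WER as a percentage: total edit errors over reference word count."""
    return 100.0 * (ins + dels + subs) / ref_words

# Figures from the log line: %WER 2.87% [2605 / 90784, 261 ins, 377 del, 1967 sub]
print(round(wer_percent(261, 377, 1967, 90784), 2))  # → 2.87
```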
decoding_result/fast_beam_search/log-decode-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model-2024-10-18-19-05-31 ADDED
@@ -0,0 +1,46 @@
+ 2024-10-18 19:05:31,465 INFO [decode.py:861] Decoding started
+ 2024-10-18 19:05:31,465 INFO [decode.py:867] Device: cuda:0
+ 2024-10-18 19:05:31,467 INFO [decode.py:877] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'ignore_id': -1, 'label_smoothing': 0.1, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '8905c6b481d50c1b040a466aa1602df0474818ec', 'k2-git-date': 'Thu Apr 25 12:26:15 2024', 'lhotse-version': '1.23.0.dev+git.ed5797c1.clean', 'torch-version': '2.1.0+cu121', 'torch-cuda-available': True, 'torch-cuda-version': '12.1', 'python-version': '3.1', 'icefall-git-branch': 'dev/asr/libritts', 'icefall-git-sha1': '7eee6b9e-clean', 'icefall-git-date': 'Sun Oct 13 02:40:04 2024', 'icefall-path': '/mnt/nvme_share/jinzr/miniconda3/envs/osa/lib/python3.10/site-packages/icefall-1.0-py3.10.egg', 'k2-path': '/mnt/nvme_share/jinzr/miniconda3/envs/osa/lib/python3.10/site-packages/k2-1.24.4.dev20240521+cuda12.1.torch2.1.0-py3.10-linux-x86_64.egg/k2/__init__.py', 'lhotse-path': '/mnt/nvme_share/jinzr/miniconda3/envs/osa/lib/python3.10/site-packages/lhotse-1.23.0.dev0+git.ed5797c1.clean-py3.10.egg/lhotse/__init__.py', 'hostname': 'serverx34', 'IP address': '10.0.0.234'}, 'epoch': 40, 'iter': 0, 'avg': 16, 'use_averaged_model': True, 'exp_dir': PosixPath('zipformer/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'use_shallow_fusion': False, 'lm_type': 'rnn', 'lm_scale': 0.3, 'tokens_ngram': 2, 'backoff_id': 500, 'context_score': 2, 'context_file': '', 'skip_scoring': False, 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'attention_decoder_dim': 512, 'attention_decoder_num_layers': 6, 'attention_decoder_attention_dim': 512, 'attention_decoder_num_heads': 8, 'attention_decoder_feedforward_dim': 2048, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'use_transducer': True, 'use_ctc': False, 'use_attention_decoder': False, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 200.0, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'lm_vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('zipformer/exp/fast_beam_search'), 'has_contexts': False, 'suffix': 'epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2024-10-18 19:05:31,467 INFO [decode.py:879] About to create model
+ 2024-10-18 19:05:31,877 INFO [decode.py:946] Calculating the averaged model over epoch range from 24 (excluded) to 40
+ 2024-10-18 19:05:52,238 INFO [decode.py:1040] Number of model parameters: 65549011
+ 2024-10-18 19:05:52,238 INFO [asr_datamodule.py:449] About to get test-clean cuts
+ 2024-10-18 19:05:52,240 INFO [asr_datamodule.py:456] About to get test-other cuts
+ 2024-10-18 19:05:56,355 INFO [decode.py:719] batch 0/?, cuts processed until now is 15
+ 2024-10-18 19:06:12,260 INFO [decode.py:719] batch 20/?, cuts processed until now is 413
+ 2024-10-18 19:06:17,606 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([6.1625, 5.4429, 5.9729, 5.4772], device='cuda:0')
+ 2024-10-18 19:06:26,333 INFO [decode.py:719] batch 40/?, cuts processed until now is 847
+ 2024-10-18 19:06:39,257 INFO [decode.py:719] batch 60/?, cuts processed until now is 1387
+ 2024-10-18 19:06:56,266 INFO [decode.py:719] batch 80/?, cuts processed until now is 1743
+ 2024-10-18 19:07:08,427 INFO [decode.py:719] batch 100/?, cuts processed until now is 2292
+ 2024-10-18 19:07:24,027 INFO [decode.py:719] batch 120/?, cuts processed until now is 2700
+ 2024-10-18 19:07:35,195 INFO [decode.py:719] batch 140/?, cuts processed until now is 3306
+ 2024-10-18 19:07:47,658 INFO [decode.py:719] batch 160/?, cuts processed until now is 3912
+ 2024-10-18 19:07:56,629 INFO [decode.py:719] batch 180/?, cuts processed until now is 4765
+ 2024-10-18 19:08:01,216 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([3.2081, 4.0158, 3.8881, 4.0675, 3.0954, 3.3535, 3.3487, 3.6114],
+ device='cuda:0')
+ 2024-10-18 19:08:02,547 INFO [decode.py:738] The transcripts are stored in zipformer/exp/fast_beam_search/recogs-test-clean-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt
+ 2024-10-18 19:08:02,672 INFO [utils.py:665] [test-clean-fast_beam_search_beam-20.0_max-contexts-8_max-states-64] %WER 2.75% [2496 / 90784, 246 ins, 329 del, 1921 sub ]
+ 2024-10-18 19:08:02,921 INFO [decode.py:760] Wrote detailed error stats to zipformer/exp/fast_beam_search/errs-test-clean-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt
+ 2024-10-18 19:08:02,922 INFO [decode.py:776]
+ For test-clean, WER of different settings are:
+ fast_beam_search_beam-20.0_max-contexts-8_max-states-64 2.75 best for test-clean
+
+ 2024-10-18 19:08:04,047 INFO [decode.py:719] batch 0/?, cuts processed until now is 20
+ 2024-10-18 19:08:16,770 INFO [decode.py:719] batch 20/?, cuts processed until now is 561
+ 2024-10-18 19:08:22,099 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([5.8847, 5.7889, 5.6350, 5.3005], device='cuda:0')
+ 2024-10-18 19:08:28,310 INFO [decode.py:719] batch 40/?, cuts processed until now is 1157
+ 2024-10-18 19:08:38,510 INFO [decode.py:719] batch 60/?, cuts processed until now is 1891
+ 2024-10-18 19:08:52,265 INFO [decode.py:719] batch 80/?, cuts processed until now is 2372
+ 2024-10-18 19:09:03,121 INFO [decode.py:719] batch 100/?, cuts processed until now is 3102
+ 2024-10-18 19:09:11,147 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([2.2261, 2.3643, 2.6462, 2.2886], device='cuda:0')
+ 2024-10-18 19:09:13,975 INFO [decode.py:719] batch 120/?, cuts processed until now is 3821
+ 2024-10-18 19:09:20,980 INFO [decode.py:719] batch 140/?, cuts processed until now is 4927
+ 2024-10-18 19:09:26,045 INFO [decode.py:738] The transcripts are stored in zipformer/exp/fast_beam_search/recogs-test-other-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt
+ 2024-10-18 19:09:26,139 INFO [utils.py:665] [test-other-fast_beam_search_beam-20.0_max-contexts-8_max-states-64] %WER 5.67% [4282 / 75460, 398 ins, 543 del, 3341 sub ]
+ 2024-10-18 19:09:26,383 INFO [decode.py:760] Wrote detailed error stats to zipformer/exp/fast_beam_search/errs-test-other-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt
+ 2024-10-18 19:09:26,384 INFO [decode.py:776]
+ For test-other, WER of different settings are:
+ fast_beam_search_beam-20.0_max-contexts-8_max-states-64 5.67 best for test-other
+
+ 2024-10-18 19:09:26,384 INFO [decode.py:1082] Done!
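The "Calculating the averaged model over epoch range from 24 (excluded) to 40" line above corresponds to `--epoch 40 --avg 16`: the checkpoints for epochs 25 through 40 are averaged parameter-wise before decoding. A simplified uniform-average sketch of that idea (icefall's `--use-averaged-model` actually derives the result from stored running-average checkpoints rather than loading every epoch, so this is an illustration, not its implementation):

```python
def average_checkpoints(state_dicts):
    """Parameter-wise uniform average over a list of checkpoint dicts
    (simplified illustration; not icefall's actual implementation)."""
    n = len(state_dicts)
    return {key: sum(sd[key] for sd in state_dicts) / n
            for key in state_dicts[0]}

# Toy example with scalar "parameters" standing in for tensors:
ckpts = [{"weight": 1.0}, {"weight": 2.0}, {"weight": 3.0}]
print(average_checkpoints(ckpts))  # → {'weight': 2.0}
```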
decoding_result/fast_beam_search/log-decode-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model-2024-10-18-19-20-24 ADDED
@@ -0,0 +1,46 @@
+ 2024-10-18 19:20:24,331 INFO [decode.py:861] Decoding started
+ 2024-10-18 19:20:24,331 INFO [decode.py:867] Device: cuda:0
+ 2024-10-18 19:20:24,336 INFO [decode.py:877] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'ignore_id': -1, 'label_smoothing': 0.1, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '8905c6b481d50c1b040a466aa1602df0474818ec', 'k2-git-date': 'Thu Apr 25 12:26:15 2024', 'lhotse-version': '1.23.0.dev+git.ed5797c1.clean', 'torch-version': '2.1.0+cu121', 'torch-cuda-available': True, 'torch-cuda-version': '12.1', 'python-version': '3.1', 'icefall-git-branch': 'dev/asr/libritts', 'icefall-git-sha1': '7eee6b9e-clean', 'icefall-git-date': 'Sun Oct 13 02:40:04 2024', 'icefall-path': '/mnt/nvme_share/jinzr/miniconda3/envs/osa/lib/python3.10/site-packages/icefall-1.0-py3.10.egg', 'k2-path': '/mnt/nvme_share/jinzr/miniconda3/envs/osa/lib/python3.10/site-packages/k2-1.24.4.dev20240521+cuda12.1.torch2.1.0-py3.10-linux-x86_64.egg/k2/__init__.py', 'lhotse-path': '/mnt/nvme_share/jinzr/miniconda3/envs/osa/lib/python3.10/site-packages/lhotse-1.23.0.dev0+git.ed5797c1.clean-py3.10.egg/lhotse/__init__.py', 'hostname': 'serverx34', 'IP address': '10.0.0.234'}, 'epoch': 50, 'iter': 0, 'avg': 30, 'use_averaged_model': True, 'exp_dir': PosixPath('zipformer/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'fast_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'use_shallow_fusion': False, 'lm_type': 'rnn', 'lm_scale': 0.3, 'tokens_ngram': 2, 'backoff_id': 500, 'context_score': 2, 'context_file': '', 'skip_scoring': False, 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'attention_decoder_dim': 512, 'attention_decoder_num_layers': 6, 'attention_decoder_attention_dim': 512, 'attention_decoder_num_heads': 8, 'attention_decoder_feedforward_dim': 2048, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'use_transducer': True, 'use_ctc': False, 'use_attention_decoder': False, 'full_libri': True, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 200.0, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'lm_vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('zipformer/exp/fast_beam_search'), 'has_contexts': False, 'suffix': 'epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
+ 2024-10-18 19:20:24,337 INFO [decode.py:879] About to create model
+ 2024-10-18 19:20:24,776 INFO [decode.py:946] Calculating the averaged model over epoch range from 20 (excluded) to 50
+ 2024-10-18 19:20:43,131 INFO [decode.py:1040] Number of model parameters: 65549011
+ 2024-10-18 19:20:43,132 INFO [asr_datamodule.py:449] About to get test-clean cuts
+ 2024-10-18 19:20:43,134 INFO [asr_datamodule.py:456] About to get test-other cuts
+ 2024-10-18 19:20:47,712 INFO [decode.py:719] batch 0/?, cuts processed until now is 15
+ 2024-10-18 19:21:03,848 INFO [decode.py:719] batch 20/?, cuts processed until now is 413
+ 2024-10-18 19:21:17,663 INFO [decode.py:719] batch 40/?, cuts processed until now is 847
+ 2024-10-18 19:21:29,269 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([2.3988, 2.5183, 3.2614, 2.4440], device='cuda:0')
+ 2024-10-18 19:21:30,495 INFO [decode.py:719] batch 60/?, cuts processed until now is 1387
+ 2024-10-18 19:21:47,119 INFO [decode.py:719] batch 80/?, cuts processed until now is 1743
+ 2024-10-18 19:21:59,070 INFO [decode.py:719] batch 100/?, cuts processed until now is 2292
+ 2024-10-18 19:22:14,471 INFO [decode.py:719] batch 120/?, cuts processed until now is 2700
+ 2024-10-18 19:22:25,319 INFO [decode.py:719] batch 140/?, cuts processed until now is 3306
+ 2024-10-18 19:22:37,507 INFO [decode.py:719] batch 160/?, cuts processed until now is 3912
+ 2024-10-18 19:22:42,907 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([3.5705, 4.8239, 4.4023, 4.6620], device='cuda:0')
+ 2024-10-18 19:22:46,172 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([3.5376, 2.4516, 2.6161, 2.5660], device='cuda:0')
+ 2024-10-18 19:22:46,369 INFO [decode.py:719] batch 180/?, cuts processed until now is 4765
+ 2024-10-18 19:22:52,215 INFO [decode.py:738] The transcripts are stored in zipformer/exp/fast_beam_search/recogs-test-clean-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt
+ 2024-10-18 19:22:52,321 INFO [utils.py:665] [test-clean-fast_beam_search_beam-20.0_max-contexts-8_max-states-64] %WER 2.78% [2522 / 90784, 249 ins, 357 del, 1916 sub ]
+ 2024-10-18 19:22:52,620 INFO [decode.py:760] Wrote detailed error stats to zipformer/exp/fast_beam_search/errs-test-clean-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt
+ 2024-10-18 19:22:52,621 INFO [decode.py:776]
+ For test-clean, WER of different settings are:
+ fast_beam_search_beam-20.0_max-contexts-8_max-states-64 2.78 best for test-clean
+
+ 2024-10-18 19:22:53,714 INFO [decode.py:719] batch 0/?, cuts processed until now is 20
+ 2024-10-18 19:22:56,527 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([2.8357, 3.5929, 4.2807, 3.8037], device='cuda:0')
+ 2024-10-18 19:23:06,283 INFO [decode.py:719] batch 20/?, cuts processed until now is 561
+ 2024-10-18 19:23:17,678 INFO [decode.py:719] batch 40/?, cuts processed until now is 1157
+ 2024-10-18 19:23:23,071 INFO [zipformer.py:1883] name=None, attn_weights_entropy = tensor([6.5835, 6.3024, 6.1502, 6.3662], device='cuda:0')
+ 2024-10-18 19:23:27,850 INFO [decode.py:719] batch 60/?, cuts processed until now is 1891
+ 2024-10-18 19:23:41,583 INFO [decode.py:719] batch 80/?, cuts processed until now is 2372
+ 2024-10-18 19:23:52,463 INFO [decode.py:719] batch 100/?, cuts processed until now is 3102
+ 2024-10-18 19:24:03,267 INFO [decode.py:719] batch 120/?, cuts processed until now is 3821
+ 2024-10-18 19:24:10,247 INFO [decode.py:719] batch 140/?, cuts processed until now is 4927
+ 2024-10-18 19:24:15,306 INFO [decode.py:738] The transcripts are stored in zipformer/exp/fast_beam_search/recogs-test-other-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt
+ 2024-10-18 19:24:15,399 INFO [utils.py:665] [test-other-fast_beam_search_beam-20.0_max-contexts-8_max-states-64] %WER 5.61% [4236 / 75460, 387 ins, 534 del, 3315 sub ]
+ 2024-10-18 19:24:15,636 INFO [decode.py:760] Wrote detailed error stats to zipformer/exp/fast_beam_search/errs-test-other-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt
+ 2024-10-18 19:24:15,637 INFO [decode.py:776]
+ For test-other, WER of different settings are:
+ fast_beam_search_beam-20.0_max-contexts-8_max-states-64 5.61 best for test-other
45
+
46
+ 2024-10-18 19:24:15,637 INFO [decode.py:1082] Done!
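As a sanity check of the WER lines in the log above: the reported percentage is (insertions + deletions + substitutions) divided by the reference word count. A minimal sketch (the helper name is illustrative, not from the icefall code) reproducing the two final figures:

```python
def wer_percent(ins: int, dels: int, subs: int, ref_words: int) -> float:
    """Word error rate as a percentage, rounded to two decimals."""
    return round(100.0 * (ins + dels + subs) / ref_words, 2)

# test-clean: 249 ins + 357 del + 1916 sub = 2522 errors over 90784 words
print(wer_percent(249, 357, 1916, 90784))  # -> 2.78
# test-other: 387 ins + 534 del + 3315 sub = 4236 errors over 75460 words
print(wer_percent(387, 534, 3315, 75460))  # -> 5.61
```

Both values match the `%WER` figures emitted by `utils.py:665`.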
decoding_result/fast_beam_search/recogs-test-clean-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/fast_beam_search/recogs-test-clean-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/fast_beam_search/recogs-test-clean-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/fast_beam_search/recogs-test-other-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/fast_beam_search/recogs-test-other-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/fast_beam_search/recogs-test-other-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/fast_beam_search/wer-summary-test-clean-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER
+ fast_beam_search_beam-20.0_max-contexts-8_max-states-64 2.87
decoding_result/fast_beam_search/wer-summary-test-clean-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER
+ fast_beam_search_beam-20.0_max-contexts-8_max-states-64 2.75
decoding_result/fast_beam_search/wer-summary-test-clean-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER
+ fast_beam_search_beam-20.0_max-contexts-8_max-states-64 2.78
decoding_result/fast_beam_search/wer-summary-test-other-epoch-30_avg-5_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER
+ fast_beam_search_beam-20.0_max-contexts-8_max-states-64 5.86
decoding_result/fast_beam_search/wer-summary-test-other-epoch-40_avg-16_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER
+ fast_beam_search_beam-20.0_max-contexts-8_max-states-64 5.67
decoding_result/fast_beam_search/wer-summary-test-other-epoch-50_avg-30_beam-20.0_max-contexts-8_max-states-64_use-averaged-model.txt ADDED
@@ -0,0 +1,2 @@
+ settings WER
+ fast_beam_search_beam-20.0_max-contexts-8_max-states-64 5.61
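The `wer-summary-*` files added above share a simple two-line layout: a `settings WER` header followed by one row per decoding setting. A hypothetical helper for collecting these results (the function name is illustrative; this sketch splits on arbitrary whitespace and treats the last field as the WER, which holds for the rows shown here):

```python
def parse_wer_summary(text: str) -> dict[str, float]:
    """Parse a wer-summary file body into {setting: WER}."""
    results = {}
    for line in text.strip().splitlines()[1:]:  # skip the "settings WER" header
        *setting_parts, wer = line.split()
        results[" ".join(setting_parts)] = float(wer)
    return results

example = """settings WER
fast_beam_search_beam-20.0_max-contexts-8_max-states-64 2.78"""
print(parse_wer_summary(example))
# -> {'fast_beam_search_beam-20.0_max-contexts-8_max-states-64': 2.78}
```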
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-10_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-11_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-12_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-13_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-14_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-15_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-16_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-17_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-18_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-19_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-20_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-21_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-22_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-23_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-24_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-25_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-26_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-27_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-28_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-29_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-5_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-6_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-7_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-8_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-30_avg-9_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-40_avg-10_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-40_avg-11_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-40_avg-12_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff
 
decoding_result/greedy_search/errs-test-clean-epoch-40_avg-13_context-2_max-sym-per-frame-1_use-averaged-model.txt ADDED
The diff for this file is too large to render. See raw diff