# ESPnet2 ASR model

## espnet/guangzhisun_librispeech100_asr_train_conformer_transducer_tcpgen500_deep_sche30_GCN6L_rep_suffix

This model was trained by guangzhisun using the librispeech_100 recipe in [espnet](https://github.com/espnet/espnet/).
## Demo: How to use in ESPnet2

Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done so already.

```bash
cd espnet
pip install -e .
cd egs2/librispeech_100/asr1_biasing
./run.sh --skip_data_prep false --skip_train true --download_model espnet/guangzhisun_librispeech100_asr_train_conformer_transducer_tcpgen500_deep_sche30_GCN6L_rep_suffix
```
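Outside the recipe, the checkpoint can also be tried directly from Python. The following is a minimal sketch, assuming the standard ESPnet2 `Speech2Text` inference interface handles this transducer checkpoint and that `sample.wav` is a hypothetical 16 kHz mono recording; it does not supply a biasing list, so TCPGen contextual biasing is not exercised here (the recipe's decoding stage above is the reference path for that).

```python
# Minimal sketch: download the checkpoint and transcribe one utterance.
# Requires espnet, espnet_model_zoo and soundfile; "sample.wav" is a
# placeholder path, not a file shipped with the model.
import soundfile

from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/guangzhisun_librispeech100_asr_train_conformer_transducer_tcpgen500_deep_sche30_GCN6L_rep_suffix",
    beam_size=20,
)

speech, rate = soundfile.read("sample.wav")
assert rate == 16000, "the model was trained on 16 kHz audio"
text, tokens, token_ids, hyp = speech2text(speech)[0]  # best hypothesis
print(text)
```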
## TCPGen in RNN-T
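This checkpoint pairs a Conformer encoder and an RNN-T (transducer) decoder with a tree-constrained pointer generator (TCPGen) for contextual biasing: words from a rare-word list are compiled into a prefix tree, here encoded with a 6-layer GCN (`biasingGNN: gcn6` in the config below), with up to 500 biasing words drawn per utterance (`bmaxlen: 500`) and the biasing branch enabled on a schedule (`biasingsche: 30`, presumably from epoch 30, matching "sche30" in the model name). In brief, and paraphrasing the cited ASRU 2021 paper rather than quoting this repository's code, TCPGen interpolates the model's output distribution with a pointer distribution constrained to valid continuations on the prefix tree, weighted by a learned generation probability:

$$
P(y_i \mid y_{1:i-1}, \mathbf{x}) = P^{\text{mdl}}(y_i)\,\bigl(1 - \hat{P}^{\text{gen}}_i\bigr) + P^{\text{ptr}}(y_i)\,\hat{P}^{\text{gen}}_i
$$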
# RESULTS

## Environments
- date: `Wed Jul 5 02:01:19 BST 2023`
- python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]`
- espnet version: `espnet 202304`
- pytorch version: `pytorch 2.0.1+cu117`
- Git hash: `6f33b9d9a999d4cd7e9bc0dcfc0ba342bdff7c17`
  - Commit date: `Thu Jun 29 02:16:09 2023 +0100`
## exp/asr_train_conformer_transducer_tcpgen500_deep_sche30_GCN6L_rep_suffix
### WER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.loss.ave/dev_clean|2703|54402|95.7|3.9|0.4|0.6|4.9|48.0|
|decode_asr_asr_model_valid.loss.ave/dev_other|2864|50948|85.8|12.6|1.6|1.9|16.1|77.0|
|decode_asr_asr_model_valid.loss.ave/test_clean|2620|52576|95.4|4.1|0.5|0.7|5.2|49.9|
|decode_asr_asr_model_valid.loss.ave/test_other|2939|52343|86.0|12.2|1.7|1.8|15.8|78.4|
|decode_b20_nolm_avebest/test_clean|2620|52576|0.0|0.0|100.0|0.0|100.0|100.0|
### CER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.loss.ave/dev_clean|2703|288456|98.4|1.0|0.7|0.6|2.3|48.0|
|decode_asr_asr_model_valid.loss.ave/dev_other|2864|265951|93.3|4.2|2.5|2.1|8.8|77.0|
|decode_asr_asr_model_valid.loss.ave/test_clean|2620|281530|98.3|1.0|0.7|0.6|2.3|49.9|
|decode_asr_asr_model_valid.loss.ave/test_other|2939|272758|93.6|3.8|2.6|1.9|8.3|78.4|
### TER

|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.loss.ave/dev_clean|2703|103998|95.3|3.5|1.2|0.6|5.3|48.0|
|decode_asr_asr_model_valid.loss.ave/dev_other|2864|95172|85.2|11.8|3.0|2.5|17.3|77.0|
|decode_asr_asr_model_valid.loss.ave/test_clean|2620|102045|95.3|3.4|1.3|0.6|5.4|49.9|
|decode_asr_asr_model_valid.loss.ave/test_other|2939|98108|85.5|11.0|3.5|2.2|16.7|78.4|
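For reference, these are sclite-style scoring tables: Err is the sum of the substitution, deletion, and insertion rates, each normalized by the reference count in the Wrd column (words for WER, characters for CER, BPE tokens for TER), and S.Err is the percentage of sentences with at least one error:

$$
\text{Err} = \frac{\#\text{Sub} + \#\text{Del} + \#\text{Ins}}{\#\text{Wrd}} \times 100
$$

For example, WER on dev_clean is 3.9 + 0.4 + 0.6 = 4.9.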
## ASR config

<details><summary>expand</summary>

```yaml
config: conf/train_rnnt.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_conformer_transducer_tcpgen500_deep_sche30_GCN6L_rep_suffix
ngpu: 1
seed: 2022
num_workers: 8
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
  - loss
  - min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 8
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 6000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe600_spsuffix/train/speech_shape
- exp/asr_stats_raw_en_bpe600_spsuffix/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe600_spsuffix/valid/speech_shape
- exp/asr_stats_raw_en_bpe600_spsuffix/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes: []
train_data_path_and_name_and_type:
- - dump/raw/train_clean_100_sp/wav.scp
  - speech
  - kaldi_ark
- - dump/raw/train_clean_100_sp/text
  - text
  - text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
  - speech
  - kaldi_ark
- - dump/raw/dev/text
  - text
  - text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
biasing: false
deepbiasing: false
biasinglist: ''
battndim: 256
biasingsche: 0
bmaxlen: 100
bdrop: 0.0
biasingGNN: ''
optim: adam
optim_conf:
    lr: 0.002
scheduler: warmuplr
scheduler_conf:
    warmup_steps: 15000
token_list:
- <blank>
- <unk>
- THE▁
- C
- AND▁
- S
- OF▁
- S▁
- TO▁
- T
- A▁
- G
- I
- ED▁
- E
- RE
- D
- IN▁
- P
- R
- N
- F
- O
- IN
- B
- T▁
- L
- ING▁
- ▁
- W
- I▁
- HE▁
- WAS▁
- A
- THAT▁
- E▁
- IT▁
- AR
- U
- H
- ES▁
- M
- RI
- ''''
- HIS▁
- AN
- D▁
- Y▁
- LY▁
- ON▁
- AS▁
- HAD▁
- WITH▁
- ST
- Y
- EN
- HER▁
- YOU▁
- K
- DE
- AT▁
- FOR▁
- V
- UN
- TH
- SE
- RO
- LI
- LO
- NOT▁
- TI
- AL
- BUT▁
- IS▁
- ER▁
- SI
- OR
- CH
- ONE▁
- SHE▁
- OR▁
- ME▁
- BE▁
- K▁
- LA
- LE
- ALL▁
- HIM▁
- BE
- CON
- HO
- PO
- AT
- THEY▁
- MY▁
- ME
- 'ON'
- BY▁
- AN▁
- VE▁
- DI
- RA
- AC
- MA
- HAVE▁
- SO▁
- WERE▁
- WHICH▁
- TED▁
- AL▁
- THIS▁
- FROM▁
- AD
- SU
- FI
- AS
- SAID▁
- ER
- TH▁
- SE▁
- RY▁
- MO
- EN▁
- FOR
- HE
- EX
- NE
- M▁
- VI
- TS▁
- SH
- BO
- COM
- PRO
- EL
- ARE▁
- FE
- WE▁
- N▁
- NO▁
- ERS▁
- QU
- THERE▁
- THEIR▁
- LE▁
- WHEN▁
- TE
- TA
- TY▁
- PER
- THEM▁
- TER
- WOULD▁
- OLD▁
- PA
- CO
- IR
- IF▁
- WHO▁
- WHAT▁
- TER▁
- MAN▁
- ATION▁
- ST▁
- BEEN▁
- OUR▁
- CA
- UP▁
- OUT▁
- PRE
- AP
- TION▁
- IT
- FA
- US
- AM
- VE
- TUR
- DO
- PAR
- PE
- 'NO'
- LU
- THEN▁
- WI
- SO
- HI
- P▁
- TO
- COULD▁
- RE▁
- Z
- WILL▁
- KING▁
- EAR▁
- DIS
- EST▁
- LL▁
- SP
- HA
- ENCE▁
- TING▁
- IS
- WE
- DU
- AND
- MORE▁
- SOME▁
- US▁
- PI
- ABLE▁
- NOW▁
- VERY▁
- GU
- EM
- ITY▁
- WA
- H▁
- ATE▁
- LL
- DO▁
- NA
- DER
- ANT▁
- LEA
- PLA
- BU
- SA
- CU
- INTO▁
- OWN▁
- ET▁
- KE
- PU
- LITTLE▁
- MENT▁
- VER
- TE▁
- DID▁
- LIKE▁
- IM
- ABOUT▁
- OUR
- TRA
- TIME▁
- THAN▁
- YOUR▁
- RED▁
- MI
- OTHER▁
- HU
- ION▁
- ANCE▁
- STR
- WELL▁
- W▁
- L▁
- ES
- ANY▁
- ITS▁
- MIS
- AB
- AGE▁
- MAR
- UPON▁
- OVER▁
- TU
- DAY▁
- TEN
- CH▁
- ALLY▁
- GRA
- CAME▁
- MEN▁
- STO
- LED▁
- AM▁
- GA
- ONLY▁
- COME▁
- TWO▁
- UG
- HOW▁
- VEN
- INE▁
- NESS▁
- EL▁
- HAS▁
- BA
- LONG▁
- AFTER▁
- IC▁
- WAY▁
- CAR
- SC
- HAR
- MADE▁
- MIN
- STE
- BEFORE▁
- MOST▁
- ILL
- FO
- GE
- DOWN▁
- DER▁
- BL
- IONS▁
- SUCH▁
- THESE▁
- DE▁
- MEN
- KED▁
- TRU
- WHERE▁
- FUL▁
- BI
- CAN▁
- SEE▁
- KNOW▁
- GO▁
- JE
- GREAT▁
- LOW▁
- MUCH▁
- NEVER▁
- MISTER▁
- GOOD▁
- SHOULD▁
- EVEN▁
- ICE▁
- STA
- LESS▁
- JO
- BLE▁
- MUST▁
- AV
- DA
- ISH▁
- MON
- TRI
- KE▁
- BACK▁
- YING▁
- AIR▁
- AU
- IOUS▁
- AGAIN▁
- MU
- FIRST▁
- F▁
- GO
- EVER▁
- VA
- COR
- OUS▁
- ATED▁
- COUNT
- ROUND▁
- OVER
- LING▁
- HERE▁
- HIMSELF▁
- SHED▁
- MIL
- G▁
- THOUGH▁
- SIDE▁
- CL
- MAY▁
- JUST▁
- WENT▁
- SAY▁
- NG▁
- PASS
- HER
- NED▁
- MIGHT▁
- FR
- MAN
- HOUSE▁
- JU
- SON▁
- PEN
- THROUGH▁
- EYES▁
- MAKE▁
- TOO▁
- THOUGHT▁
- WITHOUT▁
- THINK▁
- GEN
- THOSE▁
- MANY▁
- SPEC
- INTER
- WHILE▁
- AWAY▁
- LIFE▁
- HEAD▁
- SUR
- NTLY▁
- RIGHT▁
- DON
- TAKE▁
- PORT
- EVERY▁
- NIGHT▁
- WARD▁
- WAR
- IMP
- ALL
- GET▁
- STILL▁
- BEING▁
- FOUND▁
- NOTHING▁
- LES▁
- LAST▁
- TURNED▁
- ILL▁
- YOUNG▁
- SURE▁
- INGS▁
- PEOPLE▁
- YET▁
- THREE▁
- FACE▁
- CUR
- OFF▁
- ROOM▁
- OUT
- ASKED▁
- SAW▁
- END▁
- FER
- MISSUS▁
- EACH▁
- SAME▁
- SHA
- SENT▁
- OUL
- LET▁
- SOL
- YOU
- PLACE▁
- UNDER▁
- TOOK▁
- LIGHT▁
- LEFT▁
- PER▁
- PRESS
- USE▁
- ANOTHER▁
- ONCE▁
- TELL▁
- SHALL▁
- 'OFF'
- SEEMED▁
- ALWAYS▁
- NEW▁
- ATIONS▁
- J
- CESS
- USED▁
- WHY▁
- HEARD▁
- LOOKED▁
- GIVE▁
- PUT▁
- JA
- BECAUSE▁
- THINGS▁
- BODY▁
- FATHER▁
- SOMETHING▁
- OWING▁
- LOOK▁
- ROW▁
- GOING▁
- MOTHER▁
- MIND▁
- WORK▁
- GOT▁
- CENT
- HAVING▁
- SOON▁
- KNEW▁
- HEART▁
- FAR▁
- AGAINST▁
- WORLD▁
- FEW▁
- ICAL▁
- STOOD▁
- BEGAN▁
- SIR▁
- BETTER▁
- DOOR▁
- CALLED▁
- YEARS▁
- MOMENT▁
- ENOUGH▁
- WOMAN▁
- TOGETHER▁
- LIGHT
- OWED▁
- READ▁
- WHOLE▁
- COURSE▁
- BETWEEN▁
- FELT▁
- LONG
- HALF▁
- FULLY▁
- MORNING▁
- DENT
- WOOD
- HERSELF▁
- OLD
- DAYS▁
- HOWEVER▁
- WATER▁
- WHITE▁
- PERHAPS▁
- REPLIED▁
- GIRL▁
- QUITE▁
- HUNDRED▁
- WORDS▁
- MYSELF▁
- VOICE▁
- EARLY▁
- OUGHT▁
- AIL▁
- WORD▁
- WHOM▁
- EITHER▁
- AMONG▁
- ENDED▁
- TAKEN▁
- UNTIL▁
- ANYTHING▁
- NEXT▁
- POSSIBLE▁
- KIND▁
- BROUGHT▁
- EAST▁
- LOOKING▁
- ROAD▁
- SMALL▁
- RATHER▁
- BELIEVE▁
- SINCE▁
- MONEY▁
- OPEN▁
- INDEED▁
- DOUBT
- CERTAIN▁
- TWENTY▁
- MATTER▁
- HELD▁
- EXPECT
- DIRECT
- ANSWERED▁
- THERE
- WHOSE▁
- SHIP▁
- HIGH▁
- THEMSELVES▁
- APPEARED▁
- BLACK▁
- NATURE▁
- BEHIND▁
- POWER▁
- IZED▁
- CHILD▁
- UNCLE▁
- DEATH▁
- KNOWN▁
- OFTEN▁
- LADY▁
- POSITION▁
- KEEP▁
- CHILDREN▁
- WIFE▁
- JOHN▁
- LARGE▁
- GIVEN▁
- EIGHT▁
- SHORT▁
- SAYS▁
- EVERYTHING▁
- GENERAL▁
- DOCTOR▁
- ABOVE▁
- HAPPY▁
- Q
- X
- <sos/eos>
init: null
input_size: null
ctc_conf:
    dropout_rate: 0.0
    ctc_type: builtin
    reduce: true
    ignore_nan_grad: null
    zero_infinity: true
joint_net_conf:
    joint_space_size: 320
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram600suffix/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
aux_ctc_tasks: []
frontend: default
frontend_conf:
    n_fft: 512
    win_length: 400
    hop_length: 160
    fs: 16k
specaug: specaug
specaug_conf:
    apply_time_warp: true
    time_warp_window: 5
    time_warp_mode: bicubic
    apply_freq_mask: true
    freq_mask_width_range:
    - 0
    - 27
    num_freq_mask: 2
    apply_time_mask: true
    time_mask_width_ratio_range:
    - 0.0
    - 0.05
    num_time_mask: 5
normalize: global_mvn
normalize_conf:
    stats_file: exp/asr_stats_raw_en_bpe600_spsuffix/train/feats_stats.npz
model: espnet
model_conf:
    ctc_weight: 0.0
    report_cer: false
    report_wer: false
    biasinglist: data/Blist/rareword_f15.txt
    bmaxlen: 500
    bdrop: 0.0
    battndim: 256
    biasing: true
    biasingsche: 30
    deepbiasing: true
    biasingGNN: gcn6
    bpemodel: data/en_token_list/bpe_unigram600suffix/bpe.model
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
    output_size: 256
    attention_heads: 4
    linear_units: 1024
    num_blocks: 15
    dropout_rate: 0.1
    positional_dropout_rate: 0.1
    attention_dropout_rate: 0.1
    input_layer: conv2d
    normalize_before: true
    macaron_style: true
    rel_pos_type: latest
    pos_enc_layer_type: rel_pos
    selfattention_layer_type: rel_selfattn
    activation_type: swish
    use_cnn_module: true
    cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transducer
decoder_conf:
    rnn_type: lstm
    num_layers: 1
    hidden_size: 256
    dropout: 0.1
    dropout_embed: 0.2
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202304'
distributed: false
```

</details>
## Citing TCPGen

```BibTex
@INPROCEEDINGS{9687915,
  author={Sun, Guangzhi and Zhang, Chao and Woodland, Philip C.},
  booktitle={2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)},
  title={Tree-Constrained Pointer Generator for End-to-End Contextual Speech Recognition},
  year={2021},
  pages={780-787},
  doi={10.1109/ASRU51503.2021.9687915}
}

@inproceedings{Sun2022TreeconstrainedPG,
  title={Tree-constrained Pointer Generator with Graph Neural Network Encodings for Contextual Speech Recognition},
  author={Guangzhi Sun and Chao Zhang and Philip C. Woodland},
  booktitle={Interspeech},
  year={2022}
}
```
## Citing ESPnet

```BibTex
@inproceedings{watanabe2018espnet,
  author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
  title={{ESPnet}: End-to-End Speech Processing Toolkit},
  year={2018},
  booktitle={Proceedings of Interspeech},
  pages={2207--2211},
  doi={10.21437/Interspeech.2018-1456},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```