wyz committed
Commit 6a86da8
1 Parent(s): 3d0fc49

Update README.md

Files changed (1):
  1. README.md +54 -8
README.md CHANGED
@@ -5,7 +5,7 @@ tags:
 - audio-to-audio
 language: en
 datasets:
-- universal_se
+- VCTK_DEMAND
 license: cc-by-4.0
 ---
 
@@ -13,21 +13,67 @@ license: cc-by-4.0
 
 ### `wyz/vctk_bsrnn_small_causal`
 
-This model was trained by Emrys365 using universal_se recipe in [espnet](https://github.com/espnet/espnet/).
+This model was trained by wyz based on the universal_se_v1 recipe in [espnet](https://github.com/espnet/espnet/).
 
 ### Demo: How to use in ESPnet2
 
 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
 if you haven't done that already.
 
-```bash
-cd espnet
-git checkout 443028662106472c60fe8bd892cb277e5b488651
-pip install -e .
-cd egs2/universal_se/enh1
-./run.sh --skip_data_prep false --skip_train true --download_model wyz/vctk_bsrnn_small_causal
+To use the model in the Python interface, you could use the following code:
+
+```python
+import soundfile as sf
+from espnet2.bin.enh_inference import SeparateSpeech
+
+# For model downloading + loading
+model = SeparateSpeech.from_pretrained(
+    model_tag="wyz/vctk_bsrnn_small_causal",
+    normalize_output_wav=True,
+    device="cuda",
+)
+# For loading a downloaded model
+# model = SeparateSpeech(
+#     train_config="exp_vctk/enh_train_enh_bsrnn_small_raw/config.yaml",
+#     model_file="exp_vctk/enh_train_enh_bsrnn_small_raw/xxxx.pth",
+#     normalize_output_wav=True,
+#     device="cuda",
+# )
+
+audio, fs = sf.read("/path/to/noisy/utt1.flac")
+enhanced = model(audio[None, :], fs=fs)[0]
 ```
 
+<!-- Generated by ./scripts/utils/show_enh_score.sh -->
+# RESULTS
+## Environments
+- date: `Wed Feb 28 16:59:40 EST 2024`
+- python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]`
+- espnet version: `espnet 202304`
+- pytorch version: `pytorch 2.0.1+cu118`
+- Git hash: `443028662106472c60fe8bd892cb277e5b488651`
+- Commit date: `Thu May 11 03:32:59 2023 +0000`
+
+
+## enhanced_test_16k
+
+
+|dataset|PESQ_WB|STOI|SAR|SDR|SIR|SI_SNR|OVRL|SIG|BAK|P808_MOS|
+|---|---|---|---|---|---|---|---|---|---|---|
+|chime4_et05_real_isolated_6ch_track|1.10|44.96|-3.99|-3.99|0.00|-31.45|2.32|2.66|3.43|3.12|
+|chime4_et05_simu_isolated_6ch_track|1.14|61.04|3.75|3.75|0.00|-2.40|2.12|2.40|3.45|2.76|
+|dns20_tt_synthetic_no_reverb|1.84|88.70|10.81|10.81|0.00|9.79|2.97|3.38|3.69|3.69|
+|reverb_et_real_8ch_multich|1.08|51.66|2.32|2.32|0.00|-1.53|2.05|2.34|3.42|3.10|
+|reverb_et_simu_8ch_multich|1.49|78.03|8.14|8.14|0.00|-11.17|2.68|3.05|3.64|3.52|
+|whamr_tt_mix_single_reverb_max_16k|1.22|73.23|5.17|5.17|0.00|0.77|2.31|2.61|3.60|3.22|
+
+
+## enhanced_test_48k
+
+
+|dataset|STOI|SAR|SDR|SIR|SI_SNR|OVRL|SIG|BAK|P808_MOS|
+|---|---|---|---|---|---|---|---|---|---|
+|vctk_noisy_tt_2spk|94.53|19.67|19.67|0.00|18.63|3.09|3.42|3.92|3.47|
 
 
 ## ENH config
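The result tables added in this commit report SI_SNR among their metrics. As a reference for how that number is defined, here is a minimal numpy sketch of scale-invariant SNR: the estimate is projected onto the target, and the energy of the projected component is compared to the energy of the residual. This illustrates the standard formula only; it is not ESPnet's own implementation.

```python
import numpy as np

def si_snr(estimate: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Scale-invariant signal-to-noise ratio in dB (higher is better)."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target to isolate the "clean" component.
    s_target = (np.dot(estimate, target) / (np.dot(target, target) + eps)) * target
    e_noise = estimate - s_target
    return float(10 * np.log10((np.dot(s_target, s_target) + eps) /
                               (np.dot(e_noise, e_noise) + eps)))

# Synthetic check: a noisier estimate scores lower.
t = np.sin(np.linspace(0, 8 * np.pi, 1000))   # "target" signal
n = np.cos(np.linspace(0, 50 * np.pi, 1000))  # "noise" signal
print(si_snr(t + 0.1 * n, t))  # less noise, higher SI-SNR
print(si_snr(t + 0.5 * n, t))  # more noise, lower SI-SNR
```

Because the metric normalizes by the projection onto the target, rescaling the estimate leaves the score unchanged, which is why it is called scale-invariant.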