patrickvonplaten committed
Commit 0381377
1 Parent(s): 3e7e40a

Update README.md

Files changed (1)
  1. README.md +16 -8
README.md CHANGED
@@ -8,26 +8,26 @@ tags:
  license: apache-2.0
  ---

- # UniSpeech-Large
+ # UniSpeech-SAT-Large

  [Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)

- The large model pretrained on 16kHz sampled speech audio and phonetic labels. When using the model make sure that your speech input is also sampled at 16kHz and your text in converted into a sequence of phonemes. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more an in-detail explanation of how to fine-tune the model.
+ The large model pretrained on 16kHz sampled speech audio with utterance and speaker contrastive loss. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.

- [Paper: UniSpeech: Unified Speech Representation Learning
- with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597)
+ [Paper: UniSpeech-SAT: Universal Speech Representation Learning with Speaker
+ Aware Pre-Training](https://arxiv.org/abs/2110.05752)

- Authors: Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang
+ Authors: Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu

  **Abstract**
- *In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture information more correlated with phonetic structures and improve the generalization across languages and domains. We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task, i.e., a relative word error rate reduction of 6% against the previous approach.*
+ *Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are introduced for enhancing the unsupervised speaker information extraction. First, we apply multi-task learning to the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function. Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where additional overlapped utterances are created in an unsupervised manner and incorporated during training. We integrate the proposed methods into the HuBERT framework. Experiment results on the SUPERB benchmark show that the proposed system achieves state-of-the-art performance in universal representation learning, especially for speaker identification oriented tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up the training dataset to 94 thousand hours of public audio data and achieve further performance improvement in all SUPERB tasks.*

- The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech.
+ The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT.

  # Usage

  This is an English pre-trained speech model that has to be fine-tuned on a downstream task like speech recognition or audio classification before it can be
- used in inference. The model was pre-trained in English and should therefore perform well only in English.
+ used in inference. The model was pre-trained in English and should therefore perform well only in English. The model has been shown to work well on tasks such as speaker verification, speaker identification, and speaker diarization.

  **Note**: The model was pre-trained on phonemes rather than characters. This means that one should make sure that the input text is converted to a sequence
  of phonemes before fine-tuning.
@@ -40,6 +40,14 @@ To fine-tune the model for speech recognition, see [the official speech recognit

  To fine-tune the model for speech classification, see [the official audio classification example](https://github.com/huggingface/transformers/tree/master/examples/pytorch/audio-classification).

+ ## Speaker Verification
+
+ TODO
+
+ ## Speaker Diarization
+
+ TODO
+
  # License

  The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
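
For readers of the updated card, a minimal sketch of feeding 16 kHz audio to the pre-trained encoder is shown below. It is not part of this commit: the checkpoint id, the `UniSpeechSatModel`/`Wav2Vec2FeatureExtractor` classes, and the example file name are assumptions made for illustration, not taken from the README.

```python
# Illustrative sketch only (not part of this commit): extract hidden states from
# the pre-trained encoder with input resampled to 16 kHz, as the card requires.
# The checkpoint id and model classes below are assumptions for illustration.
import torch
import librosa
from transformers import UniSpeechSatModel, Wav2Vec2FeatureExtractor

checkpoint = "microsoft/unispeech-sat-large"  # assumed checkpoint id
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(checkpoint)
model = UniSpeechSatModel.from_pretrained(checkpoint)

# librosa resamples the file to 16 kHz so it matches the pre-training sample rate
speech, sampling_rate = librosa.load("example.wav", sr=16_000)

inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
```

For actual fine-tuning, the speech-recognition and audio-classification examples linked in the card add a task-specific head on top of these representations.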
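The card's note about phoneme inputs could be satisfied, for example, by phonemizing the transcriptions before fine-tuning. The sketch below assumes the third-party `phonemizer` package with an espeak backend; the README does not prescribe a specific tool.

```python
# Illustrative sketch only: convert transcriptions to phoneme sequences before
# fine-tuning, per the card's note. Assumes the third-party `phonemizer` package
# with an installed espeak backend (not prescribed by the README).
from phonemizer import phonemize

transcriptions = ["the quick brown fox", "jumps over the lazy dog"]
phoneme_targets = phonemize(
    transcriptions,
    language="en-us",
    backend="espeak",
    strip=True,          # drop trailing separators
)
print(phoneme_targets)   # list of space-separated phoneme strings
```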