|
---
license: mit
language:
- af
- am
- ar
- as
- az
- be
- bn
- bs
- bg
- ca
- cs
- zh
- cy
- da
- de
- el
- en
- et
- fi
- fr
- or
- om
- ga
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- ig
- id
- is
- it
- jv
- ja
- kn
- ka
- kk
- mn
- km
- ky
- ko
- lo
- ln
- lt
- lb
- lg
- lv
- ml
- mr
- mk
- mt
- mi
- my
- nl
- nb
- ne
- ny
- oc
- pa
- ps
- fa
- pl
- pt
- ro
- ru
- sk
- sl
- sn
- sd
- so
- es
- sr
- sv
- sw
- ta
- te
- tg
- tl
- th
- tr
- uk
- ur
- uz
- vi
- wo
- xh
- yo
- ms
- zu
- ary
- arz
- yue
- kea
inference: false
---
|
# W2v-BERT 2.0 speech encoder |
|
|
|
We are open-sourcing our Conformer-based [W2v-BERT 2.0 speech encoder](#w2v-bert-20-speech-encoder) as described in Section 3.2.1 of the [paper](https://arxiv.org/pdf/2312.05187.pdf), which is at the core of our Seamless models. |
|
|
|
| Model Name   | #params | checkpoint                                                                                   |
| ------------ | ------- | -------------------------------------------------------------------------------------------- |
| W2v-BERT 2.0 | 600M    | [checkpoint](https://huggingface.co/reach-vb/conformer-shaw/resolve/main/conformer_shaw.pt)   |
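
If you want to fetch the raw checkpoint file directly, here is a minimal sketch using `huggingface_hub` (an extra dependency, not required by the example further below); the repo id and filename are taken from the checkpoint link in the table above:

```python
from huggingface_hub import hf_hub_download

# Download conformer_shaw.pt from the repo linked above and
# return the local cache path of the downloaded file.
checkpoint_path = hf_hub_download(
    repo_id="reach-vb/conformer-shaw",
    filename="conformer_shaw.pt",
)
print(checkpoint_path)
```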
|
|
|
Scaling the amount of data used for self-supervised pre-training has been shown empirically to be a relatively cheap yet effective way to improve speech representation quality (Zhang et al., 2023a). Following this direction, we continued to add more unlabeled speech data, increasing our pre-training data from 1M hours (Seamless Communication et al., 2023) to approximately 4.5M hours.
|
Besides leveraging more pre-training data, we removed the random-projection quantizer (RPQ) (Chiu et al., 2022) and its associated loss that were previously used in SeamlessM4T v1 (Seamless Communication et al., 2023). As in v1, the v2 w2v-BERT 2.0 comprises 24 Conformer layers (Gulati et al., 2020) with approximately 600M parameters and uses the same pre-training hyperparameters.
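
As a quick sanity check of the size quoted above, you can load the model with the same `seamless_communication` loader used in the example below and count its parameters; the printed total should land close to the rounded 600M figure.

```python
import torch

from seamless_communication.models.conformer_shaw import load_conformer_shaw_model

# Load the encoder on CPU in full precision and count its parameters.
model = load_conformer_shaw_model(
    "conformer_shaw", device=torch.device("cpu"), dtype=torch.float32
)
num_params = sum(p.numel() for p in model.parameters())
print(f"w2v-BERT 2.0 parameters: {num_params / 1e6:.0f}M")  # roughly 600M
```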
|
|
|
|
|
Here's how to run a forward pass through the speech encoder:
|
|
|
```python
from pathlib import Path

import torch

from fairseq2.data import Collater
from fairseq2.data.audio import AudioDecoder, WaveformToFbankConverter
from fairseq2.memory import MemoryBlock
from fairseq2.nn.padding import get_seqs_and_padding_mask
from seamless_communication.models.conformer_shaw import load_conformer_shaw_model


audio_wav_path, device, dtype = ...

# Decode the raw waveform and convert it to 80-dim log-mel filterbank features.
audio_decoder = AudioDecoder(dtype=torch.float32, device=device)
fbank_converter = WaveformToFbankConverter(
    num_mel_bins=80,
    waveform_scale=2**15,
    channel_last=True,
    standardize=True,
    device=device,
    dtype=dtype,
)
collater = Collater(pad_value=1)

# Load the pre-trained w2v-BERT 2.0 encoder and switch it to eval mode.
model = load_conformer_shaw_model("conformer_shaw", device=device, dtype=dtype)
model.eval()

# Read the audio file into memory and decode it.
with Path(audio_wav_path).open("rb") as fb:
    block = MemoryBlock(fb.read())

decoded_audio = audio_decoder(block)
src = collater(fbank_converter(decoded_audio))["fbank"]
seqs, padding_mask = get_seqs_and_padding_mask(src)

# Forward pass: frontend (feature projection + positional encoding), then the Conformer encoder.
with torch.inference_mode():
    seqs, padding_mask = model.encoder_frontend(seqs, padding_mask)
    seqs, padding_mask = model.encoder(seqs, padding_mask)
```
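
The final `seqs` tensor holds frame-level encoder representations of shape `(batch, frames, model_dim)`. As a usage sketch, assuming a single unpadded utterance as in the example above, you can mean-pool over time to get one fixed-size embedding per input (with padded batches you would mask out padded frames before averaging):

```python
# Continuing from the forward pass above.
# seqs: (batch, frames, model_dim) frame-level representations.
utterance_embedding = seqs.mean(dim=1)  # (batch, model_dim)
print(utterance_embedding.shape)
```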
|
|