#29 · Convert finetuned facebook/w2v-bert-2.0 to ONNX (opened 4 months ago by Phil-AB)
#28 · Upload vocab.json (opened 8 months ago by souregi)
#27 · [AUTOMATED] Model Memory Requirements (opened 9 months ago by model-sizer-bot)
#26 · ValueError: negative dimensions are not allowed (2 replies, opened 10 months ago by StephennFernandes)
#25 · How to fine-tune w2v-bert-2.0 with multiple GPUs? (opened 10 months ago by kssmmm)
#24 · How to use an n-gram LM with this model? (2 replies, opened 12 months ago by zkarapet00)
#23 · Link to the "SeamlessM4T v1" paper, where w2v-BERT 2.0 was first presented (opened 12 months ago by zuazo)
#22 · How to use an LM such as an n-gram LM with w2v-bert-2.0? (7 replies, opened 12 months ago by lukarape)
#21 · Clarification: w2v-BERT 2.0 was first presented in SeamlessM4T v1 (not v2) (3 replies, opened about 1 year ago by zuazo)
#20 · Fine-tuning not working (1 reply, opened about 1 year ago by Imran1)
#18 · Any quantization possible? (3 replies, opened about 1 year ago by supercharge19)
#17 · Is it helpful for TTS? Is it possible to achieve the performance of OpenAI TTS? (1 reply, opened about 1 year ago by shawnsh)
#16 · Replace AutoProcessor with AutoFeatureExtractor (3 replies, opened about 1 year ago by talipturkmen; see the usage sketch after this list)
#15 · Tokenizer issues (2 replies, opened about 1 year ago by Imran1)
#5 · Update README.md (opened about 1 year ago by longnv)
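
Thread #16 above asks for the usage example to load this checkpoint with AutoFeatureExtractor rather than AutoProcessor. A minimal sketch of what that could look like, assuming the transformers Wav2Vec2BertModel class alongside AutoFeatureExtractor and a 16 kHz mono waveform; the silent placeholder array below is purely illustrative:

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2BertModel

# Load the feature extractor and the base encoder from the Hub.
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/w2v-bert-2.0")
model = Wav2Vec2BertModel.from_pretrained("facebook/w2v-bert-2.0")

# Placeholder input: one second of silence at 16 kHz (replace with real audio).
audio = np.zeros(16000, dtype=np.float32)

# Convert the waveform to input features and run a forward pass.
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (batch, frames, hidden_size)
```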