#29 · Convert finetuned facebook/w2v-bert-2.0 to ONNX · opened about 2 months ago by Phil-AB
#28 · Upload vocab.json · opened 6 months ago by souregi
#27 · [AUTOMATED] Model Memory Requirements · opened 7 months ago by model-sizer-bot
#26 · ValueError: negative dimensions are not allowed (2 replies) · opened 7 months ago by StephennFernandes
#25 · How to fine-tune w2v-BERT 2.0 with multiple GPUs? · opened 8 months ago by kssmmm
#24 · How to use an n-gram LM with this model? (2 replies) · opened 9 months ago by zkarapet00
#23 · Link to the "SeamlessM4T v1" paper, where w2v-BERT 2.0 was first presented · opened 9 months ago by zuazo
#22 · How to use an LM, such as an n-gram LM, with w2v-bert-2.0? (7 replies) · opened 9 months ago by lukarape
#21 · Clarification: w2v-BERT 2.0 was first presented in SeamlessM4T v1 (not v2) (3 replies) · opened 10 months ago by zuazo
#20 · Fine-tuning not working (1 reply) · opened 10 months ago by Imran1
#18 · Any quantization possible? (3 replies) · opened 11 months ago by supercharge19
#17 · Is it helpful for TTS? Is it possible to achieve the performance of OpenAI TTS? (1 reply) · opened 11 months ago by shawnsh
#16 · Replace AutoProcessor with AutoFeatureExtractor (2 replies) · opened 11 months ago by talipturkmen
#15 · Tokenizer issues (2 replies) · opened 11 months ago by Imran1
#5 · Update README.md · opened 12 months ago by longnv