Commit 4e3960f ("up") by Vaibhav Srivastav, parent 88d8973. File changed: README.md.

license: cc-by-nc-4.0
---

# SeamlessM4T - On-Device

SeamlessM4T is designed to provide high-quality translation, allowing people from different linguistic communities to communicate effortlessly through speech and text.

SeamlessM4T covers:
- 📥 101 languages for speech input
- ⌨️ 96 languages for text input/output
- 🗣️ 35 languages for speech output

Apart from the [SeamlessM4T-LARGE (2.3B)](https://huggingface.co/facebook/seamless-m4t-large) and [SeamlessM4T-MEDIUM (1.2B)](https://huggingface.co/facebook/seamless-m4t-medium) models, we are also developing a small model (281M) targeting on-device inference.

This folder contains an example of running an exported small model covering most tasks (ASR/S2TT/S2ST). The model can be executed on popular mobile devices with [PyTorch Mobile](https://pytorch.org/mobile/home/).
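As a rough sketch of what "exported" means here: the model ships as a TorchScript artifact that can be saved once and then reloaded for inference without the Python class that defined it. The `Doubler` module and file name below are hypothetical stand-ins, not the actual SeamlessM4T checkpoint:

```python
import torch

# Hypothetical stand-in for the exported small model; the real
# checkpoint's inputs/outputs are audio tensors, not shown here.
class Doubler(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 2

torch.jit.script(Doubler()).save("small_model.pt")  # export once, from Python

loaded = torch.jit.load("small_model.pt")           # reload; the Doubler class is no longer needed
print(loaded(torch.ones(3)))                        # tensor([2., 2., 2.])
```

Because the saved file carries its own serialized graph, the same artifact can later be loaded by non-Python runtimes such as PyTorch Mobile.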

## Overview

```python
# ...
print(f"{lang}:{text}")
torchaudio.save(f"{OUTPUT_FOLDER}/{lang}.wav", waveform.unsqueeze(0), sample_rate=16000)  # Save output waveform to local file
```

Also, running the exported model doesn't need a Python runtime. For example, you can load the model in C++ following [this tutorial](https://pytorch.org/tutorials/advanced/cpp_export.html), or build your own on-device application similar to [this example](https://github.com/pytorch/ios-demo-app/tree/master/SpeechRecognition).
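Artifacts like this are typically produced with the standard PyTorch Mobile export path before any C++ or on-device loading happens. A minimal sketch of that path, using a tiny stand-in module and a made-up file name rather than the actual SeamlessM4T export script:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Tiny stand-in module; the real exported SeamlessM4T small model is not shown here.
class Gain(torch.nn.Module):
    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        return waveform * 0.5

scripted = torch.jit.script(Gain())                  # convert the eager module to TorchScript
mobile_model = optimize_for_mobile(scripted)         # apply mobile-oriented graph optimizations
mobile_model._save_for_lite_interpreter("gain.ptl")  # .ptl files load in the on-device lite interpreter
```

The resulting `.ptl` file is what a PyTorch Mobile app bundles and loads at runtime.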

# Citation

If you use SeamlessM4T in your work or any models/datasets/artifacts published in SeamlessM4T, please cite:

```bibtex
@article{seamlessm4t2023,
  title={SeamlessM4T—Massively Multilingual \& Multimodal Machine Translation},
  author={{Seamless Communication}, Lo\"{i}c Barrault, Yu-An Chung, Mariano Cora Meglioli, David Dale, Ning Dong, Paul-Ambroise Duquenne, Hady Elsahar, Hongyu Gong, Kevin Heffernan, John Hoffman, Christopher Klaiber, Pengwei Li, Daniel Licht, Jean Maillard, Alice Rakotoarison, Kaushik Ram Sadagopan, Guillaume Wenzek, Ethan Ye, Bapi Akula, Peng-Jen Chen, Naji El Hachem, Brian Ellis, Gabriel Mejia Gonzalez, Justin Haaheim, Prangthip Hansanti, Russ Howes, Bernie Huang, Min-Jae Hwang, Hirofumi Inaguma, Somya Jain, Elahe Kalbassi, Amanda Kallet, Ilia Kulikov, Janice Lam, Daniel Li, Xutai Ma, Ruslan Mavlyutov, Benjamin Peloquin, Mohamed Ramadan, Abinesh Ramakrishnan, Anna Sun, Kevin Tran, Tuan Tran, Igor Tufanov, Vish Vogeti, Carleigh Wood, Yilin Yang, Bokai Yu, Pierre Andrews, Can Balioglu, Marta R. Costa-juss\`{a}, Onur \c{C}elebi, Maha Elbayad, Cynthia Gao, Francisco Guzm\'an, Justine Kao, Ann Lee, Alexandre Mourachko, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Paden Tomasello, Changhan Wang, Jeff Wang, Skyler Wang},
  journal={ArXiv},
  year={2023}
}
```

# License

seamless_communication is CC-BY-NC 4.0 licensed.