Commit ca3a1fd
Parent(s): e43ec49

Update weights and README.md

Files changed:
- .gitattributes (+1 -0)
- README.md (+9 -12)
- unity_on_device_s2t.ptl (+2 -2)
.gitattributes CHANGED

@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.ptl filter=lfs diff=lfs merge=lfs -text
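The line added above tells git-lfs to store matching files as small pointer blobs instead of regular git objects. As a rough stdlib-only sketch (not part of the commit), the effect of these patterns can be checked like this; real gitattributes matching has extra rules (path components, last-match-wins), so `fnmatch` is only an approximation for these simple suffix patterns:

```python
# Rough sketch (not from the commit): which paths the LFS-related
# .gitattributes patterns above would match. fnmatch approximates
# gitattributes glob semantics well enough for these suffix patterns.
from fnmatch import fnmatch

LFS_PATTERNS = ["*.zip", "*.zst", "*tfevents*", "*.ptl"]  # "*.ptl" added by this commit

def is_lfs_tracked(path: str) -> bool:
    name = path.rsplit("/", 1)[-1]  # these patterns match the file name only
    return any(fnmatch(name, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("unity_on_device_s2t.ptl"))  # True: covered by the new *.ptl rule
print(is_lfs_tracked("README.md"))                # False: stored as a normal git blob
```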
README.md CHANGED

@@ -14,9 +14,9 @@ SeamlessM4T covers:
 - ⌨️ 96 Languages for text input/output
 - 🗣️ 35 languages for speech output.
 
-Apart from [SeamlessM4T-LARGE (2.3B)](https://huggingface.co/facebook/seamless-m4t-large) and [SeamlessM4T-MEDIUM (1.2B)](https://huggingface.co/facebook/seamless-m4t-medium) models, we are also developing a small model (281M) targeting for on-device inference.
+Apart from [SeamlessM4T-LARGE (2.3B)](https://huggingface.co/facebook/seamless-m4t-large) and [SeamlessM4T-MEDIUM (1.2B)](https://huggingface.co/facebook/seamless-m4t-medium) models, we are also developing a small model (281M) targeting for on-device inference.
 
-
+This README contains an example to run an exported small model covering most tasks (ASR/S2TT/S2ST). The model could be executed on popular mobile devices with Pytorch Mobile (https://pytorch.org/mobile/home/).
 
 ## Overview
 | Model | Checkpoint | Num Params | Disk Size | Supported Tasks | Supported Languages|
@@ -26,30 +26,27 @@ Refer to [UnitY-Small](https://huggingface.co/facebook/seamless-m4t-unity-small)
 
 UnitY-Small-S2T is a pruned version of UnitY-Small without 2nd pass unit decoding.
 
-Note: If using pytorch runtime in python, only **pytorch<=1.11.0** is supported for **UnitY-Small(281M)**. We tested UnitY-Small-S2T(235M), it works with later versions.
-
 ## Inference
 To use exported model, users don't need seamless_communication or fairseq2 dependency.
 
 ```python
 import torchaudio
 import torch
+
 audio_input, _ = torchaudio.load(TEST_AUDIO_PATH) # Load waveform using torchaudio
 
 s2t_model = torch.jit.load("unity_on_device_s2t.ptl") # Load exported S2T model
-text = s2t_model(audio_input, tgt_lang=TGT_LANG) # Forward call with tgt_lang specified for ASR or S2TT
-print(f"{lang}:{text}")
 
-
-text
-
-
+with torch.no_grad():
+    text = s2t_model(audio_input, tgt_lang=TGT_LANG) # Forward call with tgt_lang specified for ASR or S2TT
+
+print(text) # Show text output
 ```
 
 Also running the exported model doesn't need python runtime. For example, you could load this model in C++ following [this tutorial](https://pytorch.org/tutorials/advanced/cpp_export.html), or building your own on-device applications similar to [this example](https://github.com/pytorch/ios-demo-app/tree/master/SpeechRecognition)
 
 # Citation
-If you use SeamlessM4T in your work or any models/datasets/artifacts published in SeamlessM4T, please cite
+If you use SeamlessM4T in your work or any models/datasets/artifacts published in SeamlessM4T, please cite:
 
 ```bibtex
 @article{seamlessm4t2023,
@@ -61,4 +58,4 @@ If you use SeamlessM4T in your work or any models/datasets/artifacts published i
 ```
 # License
 
-seamless_communication is CC-BY-NC 4.0 licensed
+seamless_communication is CC-BY-NC 4.0 licensed
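The README change above wraps the forward call in `torch.no_grad()`. A minimal runnable sketch of that load-and-infer pattern, using a tiny scripted stand-in module instead of the real exported `unity_on_device_s2t.ptl` (the stand-in, its output, and the 16 kHz input shape are illustrative assumptions, not the real model's API):

```python
# Minimal sketch of the TorchScript pattern from the README diff above.
# ToyModel is a hypothetical stand-in; the real unity_on_device_s2t.ptl
# takes a tgt_lang argument and returns text.
import io
import torch

class ToyModel(torch.nn.Module):
    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        # Stand-in "inference": one value per batch element.
        return audio.mean(dim=-1)

# Script and round-trip through a buffer, mirroring torch.jit.load on a .ptl file.
buffer = io.BytesIO()
torch.jit.save(torch.jit.script(ToyModel()), buffer)
buffer.seek(0)
model = torch.jit.load(buffer)

audio_input = torch.zeros(1, 16000)  # 1 s of silence at an assumed 16 kHz rate
with torch.no_grad():  # skip autograd bookkeeping during inference
    output = model(audio_input)

print(output.shape)  # torch.Size([1])
```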
unity_on_device_s2t.ptl CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:834591dbf6df69dd08f93ab303b7d506df8830552b8472818cc16c7564b2b9df
+size 504153032
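The file body above is a git-lfs pointer, not the model weights themselves: git-lfs downloads the real blob and checks it against the recorded sha256 and size. As a stdlib-only sketch (not part of the commit), the three-field pointer format can be parsed and a blob verified against it:

```python
# Sketch: parse the git-lfs pointer updated by this commit and verify a blob
# against it. The sample blob below is fake, so verification correctly fails.
import hashlib

POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:834591dbf6df69dd08f93ab303b7d506df8830552b8472818cc16c7564b2b9df
size 504153032
"""

def parse_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.splitlines() if line)
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

def blob_matches(blob: bytes, pointer: dict) -> bool:
    return (len(blob) == pointer["size"]
            and hashlib.new(pointer["algo"], blob).hexdigest() == pointer["digest"])

ptr = parse_pointer(POINTER)
print(ptr["size"] / 1e6)                    # pointer records a ~504 MB blob
print(blob_matches(b"not the model", ptr))  # False: wrong size and digest
```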