---
title: Seamless Streaming
emoji: 📞
colorFrom: blue
colorTo: yellow
sdk: docker
pinned: false
suggested_hardware: t4-small
models:
 - facebook/seamless-streaming
---

# Seamless Streaming demo
## Running on HF Spaces
You can simply duplicate the Space to run it.

## Running locally
### Install backend seamless_server dependencies

> [!NOTE]
> Please note: we *do not* recommend running the model on CPU. CPU inference will be slow and introduce noticeable delays in simultaneous translation.

> [!NOTE]
> The example below is for PyTorch stable (2.1.1) and variant cu118. 
> Check [here](https://pytorch.org/get-started/locally/) to find the torch/torchaudio command for your variant. 
> Check [here](https://github.com/facebookresearch/fairseq2#variants) to find the fairseq2 command for your variant.
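
If you are unsure which CUDA variant to pick, a quick way to check (assuming the NVIDIA driver is installed; `nvcc` is only present if the CUDA toolkit is installed) is:

```
nvidia-smi      # the "CUDA Version" field is the highest CUDA version the driver supports
nvcc --version  # reports the installed CUDA toolkit version, if present
```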

If running for the first time, create a conda environment and install the desired torch version. Then install the rest of the requirements:
```
cd seamless_server
conda create --yes --name smlss_server python=3.8 libsndfile==1.0.31
conda activate smlss_server
conda install --yes pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
pip install fairseq2==0.2.0.dev20231123+cu118 --pre --extra-index-url https://fair.pkg.atmeta.com/fairseq2/whl/nightly/pt2.1.1/cu118
pip install -r requirements.txt
```
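
As an optional sanity check, you can verify that the CUDA build of torch was installed and a GPU is visible, and that fairseq2 imports cleanly:

```
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -c "import fairseq2"  # should exit without errors
```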

### Install frontend streaming-react-app dependencies
```
conda install -c conda-forge nodejs
cd streaming-react-app
npm install --global yarn
yarn
yarn build  # this will create the dist/ folder
```


### Running the server

The server can be run locally with uvicorn, as shown below.
Run the server in dev mode:

```
cd seamless_server
uvicorn app_pubsub:app --reload --host localhost
```

Run the server in prod mode:

```
cd seamless_server
uvicorn app_pubsub:app --host 0.0.0.0
```

To enable additional logging from uvicorn pass `--log-level debug` or `--log-level trace`.
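
For example, to run the prod server with debug logging (this simply adds the flag to the uvicorn command above):

```
uvicorn app_pubsub:app --host 0.0.0.0 --log-level debug
```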


### Debugging

Enabling the "Server Debug Flag" when starting streaming from the client turns on extensive debug logging and saves audio files to the `/debug` folder.