Update README
README.md CHANGED
@@ -32,12 +32,34 @@ python cli.py \
     [--vad_max_merge_size VAD_MAX_MERGE_SIZE] \
     [--vad_padding VAD_PADDING] \
     [--vad_prompt_window VAD_PROMPT_WINDOW]
+    [--vad_parallel_devices COMMA_DELIMITED_DEVICES]
 ```
 In addition to file paths, you may also use URLs as input.
 ```
 python cli.py --model large --vad silero-vad --language Japanese "https://www.youtube.com/watch?v=4cICErqqRSM"
 ```
 
+## Parallel Execution
+
+You can also run both the Web-UI and the CLI on multiple GPUs in parallel, using the `vad_parallel_devices` option. This takes a comma-delimited list of
+device IDs (0, 1, etc.) that Whisper should be distributed to and run on concurrently:
+```
+python cli.py --model large --vad silero-vad --language Japanese --vad_parallel_devices 0,1 "https://www.youtube.com/watch?v=4cICErqqRSM"
+```
+
+Note that this requires a VAD to function properly; otherwise only the first GPU will be used. You can use `period-vad` to avoid the overhead
+of running Silero VAD, at a slight cost to accuracy.
+
+This is achieved by creating N child processes (where N is the number of selected devices), in which Whisper runs concurrently. In `app.py`, you can also
+set the `vad_process_timeout` option, which configures the number of seconds until a process is killed due to inactivity, freeing RAM and video memory.
+The default value is 30 minutes.
+
+```
+python app.py --input_audio_max_duration -1 --vad_parallel_devices 0,1 --vad_process_timeout 3600
+```
+
+You may also use `vad_process_timeout` with a single device (`--vad_parallel_devices 0`), if you prefer to free video memory after a period of time.
+
 # Docker
 
 To run it in Docker, first install Docker and optionally the NVIDIA Container Toolkit in order to use the GPU. Then
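For readers skimming the diff, the parallel mode described above amounts to sharding work across one child process per selected CUDA device. The following is a minimal sketch of that pattern, not the repository's actual implementation: it assumes the standard `openai-whisper` package, and the helper names `transcribe_on_device` and `transcribe_parallel` are illustrative.

```python
# Illustrative sketch only: shard input files across the devices named in
# --vad_parallel_devices (e.g. "0,1"), one child process per device.
import multiprocessing

import whisper  # openai-whisper


def transcribe_on_device(device_id: str, paths: list, model_name: str, language: str):
    """Worker: load the model on a single GPU and transcribe its share of the inputs."""
    model = whisper.load_model(model_name, device=f"cuda:{device_id}")
    return [model.transcribe(p, language=language) for p in paths]


def transcribe_parallel(paths: list, devices: str = "0,1",
                        model_name: str = "large", language: str = "Japanese"):
    device_ids = [d.strip() for d in devices.split(",") if d.strip()]
    # Round-robin the inputs over the selected devices.
    shards = [paths[i::len(device_ids)] for i in range(len(device_ids))]
    # "spawn" avoids forking an already-initialized CUDA context into the children.
    ctx = multiprocessing.get_context("spawn")
    with ctx.Pool(processes=len(device_ids)) as pool:
        per_device = pool.starmap(
            transcribe_on_device,
            [(dev, shard, model_name, language) for dev, shard in zip(device_ids, shards)],
        )
    # Flatten the per-device result lists back into one list.
    return [result for results in per_device for result in results]
```

The `vad_process_timeout` option mentioned above would, in this kind of design, simply terminate an idle worker process after the configured number of seconds so its model (and GPU memory) is released until the next request.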
cli.py CHANGED
@@ -32,7 +32,7 @@ def cli():
     parser.add_argument("--vad_max_merge_size", type=optional_float, default=30, help="The maximum size (in seconds) of a voice segment")
     parser.add_argument("--vad_padding", type=optional_float, default=1, help="The padding (in seconds) to add to each voice segment")
     parser.add_argument("--vad_prompt_window", type=optional_float, default=3, help="The window size of the prompt to pass to Whisper")
-    parser.add_argument("--vad_parallel_devices", type=str, default="", help="A commma delimited list of CUDA devices to use for
+    parser.add_argument("--vad_parallel_devices", type=str, default="", help="A comma delimited list of CUDA devices to use for parallel processing. If None, disable parallel processing.")
 
     parser.add_argument("--temperature", type=float, default=0, help="temperature to use for sampling")
     parser.add_argument("--best_of", type=optional_int, default=5, help="number of candidates when sampling with non-zero temperature")
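As a usage note on the new flag, the empty-string default is what effectively disables parallelism. The snippet below is a self-contained, hedged sketch of how such a comma-delimited value is typically consumed after parsing; the variable names are illustrative and not taken from `cli.py`.

```python
# Illustrative sketch: turning the --vad_parallel_devices string into a device list.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--vad_parallel_devices", type=str, default="",
                    help="A comma delimited list of CUDA devices to use for parallel processing. If None, disable parallel processing.")
args = parser.parse_args(["--vad_parallel_devices", "0,1"])

device_ids = [d.strip() for d in args.vad_parallel_devices.split(",") if d.strip()]
parallel_enabled = len(device_ids) > 1
print(device_ids, parallel_enabled)  # ['0', '1'] True
```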