Upload README.md with huggingface_hub
README.md
ADDED
---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-pipeline
- audio
- voice
- speech
- speaker
- speaker-diarization
- speaker-change-detection
- voice-activity-detection
- overlapped-speech-detection
- automatic-speech-recognition
datasets:
- ami
- dihard
- voxconverse
- aishell
- repere
- voxceleb
license: mit
---

# 🎹 Speaker diarization

Relies on pyannote.audio 2.0: see [installation instructions](https://github.com/pyannote/pyannote-audio/tree/develop#installation).

## TL;DR

```python
# load the pipeline from Hugging Face Hub
from pyannote.audio import Pipeline
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization@2022.07")

# apply the pipeline to an audio file
diarization = pipeline("audio.wav")

# dump the diarization output to disk using RTTM format
with open("audio.rttm", "w") as rttm:
    diarization.write_rttm(rttm)
```
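
Should you prefer to consume the result directly in Python rather than dump it to disk, the returned object can be iterated over speech turn by speech turn. A minimal sketch, assuming the standard pyannote.core `Annotation` API (variable names are illustrative):

```python
# iterate over speech turns: (segment, track, speaker label)
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{speaker} speaks from {turn.start:.1f}s to {turn.end:.1f}s")
```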

## Advanced usage

When the number of speakers is known in advance, one can use the `num_speakers` option:

```python
diarization = pipeline("audio.wav", num_speakers=2)
```

One can also provide lower and/or upper bounds on the number of speakers using the `min_speakers` and `max_speakers` options:

```python
diarization = pipeline("audio.wav", min_speakers=2, max_speakers=5)
```

If you feel adventurous, you can try playing with the various pipeline hyper-parameters.
For instance, one can use more aggressive voice activity detection by increasing the value of the `segmentation_onset` threshold:

```python
hparams = pipeline.parameters(instantiated=True)
hparams["segmentation_onset"] += 0.1
pipeline.instantiate(hparams)
```
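
Not sure which hyper-parameters the pipeline exposes? The same `parameters(instantiated=True)` call used above returns a plain dictionary, so one can list it before tweaking anything; a small sketch:

```python
# inspect the currently instantiated hyper-parameters before changing them
for name, value in pipeline.parameters(instantiated=True).items():
    print(f"{name}: {value}")
```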

## Benchmark

### Real-time factor

Real-time factor is around 5% using one Nvidia Tesla V100 SXM2 GPU (for the neural inference part) and one Intel Cascade Lake 6248 CPU (for the clustering part).

In other words, it takes approximately 3 minutes to process a one-hour conversation.
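
As a back-of-the-envelope check of that figure (purely illustrative):

```python
# real-time factor = processing time / audio duration
audio_duration = 60 * 60                      # one hour of audio, in seconds
processing_time = 0.05 * audio_duration       # ~5% real-time factor
print(f"{processing_time / 60:.0f} minutes")  # ~3 minutes
```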

### Accuracy

This pipeline is benchmarked on a growing collection of datasets.

Processing is fully automatic:

* no manual voice activity detection (as is sometimes the case in the literature)
* no manual number of speakers (though it is possible to provide it to the pipeline)
* no fine-tuning of the internal models nor tuning of the pipeline hyper-parameters to each dataset

... with the least forgiving diarization error rate (DER) setup (named *"Full"* in [this paper](https://doi.org/10.1016/j.csl.2021.101254)):

* no forgiveness collar
* evaluation of overlapped speech

| Benchmark | [DER%](. "Diarization error rate") | [FA%](. "False alarm rate") | [Miss%](. "Missed detection rate") | [Conf%](. "Speaker confusion rate") | Expected output | File-level evaluation |
| --- | --- | --- | --- | --- | --- | --- |
| [AISHELL-4](http://www.openslr.org/111/) | 14.61 | 3.31 | 4.35 | 6.95 | [RTTM](reproducible_research/AISHELL.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/AISHELL.SpeakerDiarization.Full.test.eval) |
| [AMI *Mix-Headset*](https://groups.inf.ed.ac.uk/ami/corpus/) [*only_words*](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 18.21 | 3.28 | 11.07 | 3.87 | [RTTM](reproducible_research/2022.07/AMI.SpeakerDiarization.only_words.test.rttm) | [eval](reproducible_research/2022.07/AMI.SpeakerDiarization.only_words.test.eval) |
| [AMI *Array1-01*](https://groups.inf.ed.ac.uk/ami/corpus/) [*only_words*](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 29.00 | 2.71 | 21.61 | 4.68 | [RTTM](reproducible_research/2022.07/AMI-SDM.SpeakerDiarization.only_words.test.rttm) | [eval](reproducible_research/2022.07/AMI-SDM.SpeakerDiarization.only_words.test.eval) |
| [CALLHOME](https://catalog.ldc.upenn.edu/LDC2001S97) [*Part2*](https://github.com/BUTSpeechFIT/CALLHOME_sublists/issues/1) | 30.24 | 3.71 | 16.86 | 9.66 | [RTTM](reproducible_research/2022.07/CALLHOME.SpeakerDiarization.CALLHOME.test.rttm) | [eval](reproducible_research/2022.07/CALLHOME.SpeakerDiarization.CALLHOME.test.eval) |
| [DIHARD 3 *Full*](https://arxiv.org/abs/2012.01477) | 20.99 | 4.25 | 10.74 | 6.00 | [RTTM](reproducible_research/2022.07/DIHARD.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/2022.07/DIHARD.SpeakerDiarization.Full.test.eval) |
| [REPERE *Phase 2*](https://islrn.org/resources/360-758-359-485-0/) | 12.62 | 1.55 | 3.30 | 7.76 | [RTTM](reproducible_research/2022.07/REPERE.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/2022.07/REPERE.SpeakerDiarization.Full.test.eval) |
| [VoxConverse *v0.0.2*](https://github.com/joonson/voxconverse) | 12.76 | 3.45 | 3.85 | 5.46 | [RTTM](reproducible_research/2022.07/VoxConverse.SpeakerDiarization.VoxConverse.test.rttm) | [eval](reproducible_research/2022.07/VoxConverse.SpeakerDiarization.VoxConverse.test.eval) |
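
For reference, a file-level evaluation in this *"Full"* setup could be reproduced along the following lines. This is a minimal sketch, not the exact evaluation script: the RTTM file names are placeholders, and it assumes `pyannote.metrics` and `pyannote.database` are available.

```python
# minimal sketch: score a hypothesis RTTM against a reference RTTM
# in the "Full" setup (no forgiveness collar, overlapped speech evaluated)
from pyannote.database.util import load_rttm
from pyannote.metrics.diarization import DiarizationErrorRate

references = load_rttm("reference.rttm")  # placeholder: ground-truth annotations
hypotheses = load_rttm("audio.rttm")      # placeholder: pipeline output from above

metric = DiarizationErrorRate(collar=0.0, skip_overlap=False)
for uri, reference in references.items():
    metric(reference, hypotheses[uri])

# aggregate DER over all evaluated files
print(f"DER = {100 * abs(metric):.2f}%")
```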

## Support

For commercial enquiries and scientific consulting, please contact [me](mailto:herve@niderb.fr).
For [technical questions](https://github.com/pyannote/pyannote-audio/discussions) and [bug reports](https://github.com/pyannote/pyannote-audio/issues), please check the [pyannote.audio](https://github.com/pyannote/pyannote-audio) GitHub repository.

## Citations

```bibtex
@inproceedings{Bredin2021,
  Title = {{End-to-end speaker segmentation for overlap-aware resegmentation}},
  Author = {{Bredin}, Herv{\'e} and {Laurent}, Antoine},
  Booktitle = {Proc. Interspeech 2021},
  Address = {Brno, Czech Republic},
  Month = {August},
  Year = {2021},
}
```

```bibtex
@inproceedings{Bredin2020,
  Title = {{pyannote.audio: neural building blocks for speaker diarization}},
  Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
  Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
  Address = {Barcelona, Spain},
  Month = {May},
  Year = {2020},
}
```