evie-8 committed 3082a86 (parent: 405a30b): Update README.md
It achieves the following results on the evaluation set:
- Confusion: 0.0520

## Model description

This segmentation model has been trained on English data (backup_uganda) using [diarizers](https://github.com/huggingface/diarizers/tree/main).
It can be loaded with two lines of code:
 
```python
from diarizers import SegmentationModel

segmentation_model = SegmentationModel().from_pretrained('evie-8/speaker-segmentation-fine-tuned-backup-uganda-eng')
```

To use it within a pyannote speaker diarization pipeline, load the [pyannote/speaker-diarization-3.1](https://huggingface.co/pyannote/speaker-diarization-3.1) pipeline and convert the model to a pyannote-compatible format:

```python
from pyannote.audio import Pipeline
import torch

device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")

# load the pre-trained pyannote pipeline
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")
pipeline.to(device)

# replace the segmentation model with your fine-tuned one
model = segmentation_model.to_pyannote_model()
pipeline._segmentation.model = model.to(device)
```

You can then use the pipeline to diarize an example from the dataset:

```python
from datasets import load_dataset

# load a dataset example
dataset = load_dataset("evie-8/backup_uganda", "eng", split="data")
sample = dataset[0]["audio"]

# pre-process inputs
sample["waveform"] = torch.from_numpy(sample.pop("array")[None, :]).to(device, dtype=model.dtype)
sample["sample_rate"] = sample.pop("sampling_rate")

# perform inference
diarization = pipeline(sample)

# dump the diarization output to disk using RTTM format
with open("audio.rttm", "w") as rttm:
    diarization.write_rttm(rttm)
```
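
The RTTM file written above is plain text, one speaker turn per line, following the standard `SPEAKER <file> <channel> <start> <duration> <NA> <NA> <label> <NA> <NA>` record layout. A minimal, dependency-free sketch of reading such turns back; the sample record below is illustrative, not actual model output:

```python
def parse_rttm_line(line: str):
    """Parse one RTTM SPEAKER record into (start, end, speaker)."""
    fields = line.split()
    start = float(fields[3])     # turn onset in seconds
    duration = float(fields[4])  # turn duration in seconds
    speaker = fields[7]          # speaker label
    return start, start + duration, speaker

# illustrative record, not actual pipeline output
record = "SPEAKER audio 1 0.50 1.75 <NA> <NA> SPEAKER_00 <NA> <NA>"
print(parse_rttm_line(record))  # (0.5, 2.25, 'SPEAKER_00')
```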

## Intended uses & limitations
77