---
license: mit
datasets:
- ami
language:
- en
library_name: pyannote-audio
pipeline_tag: voice-activity-detection
tags:
- chime7_task1
---
# Pyannote Segmentation model fine-tuned on CHiME-7 DASR data
This repo contains the Pyannote Segmentation model fine-tuned on data from the CHiME-7 DASR Challenge. Only CHiME-6 (train set) data was used for training, while Mixer 6 (dev set) was used for validation in order to avoid overfitting to the CHiME-6 scenario. Mixer 6 is arguably the most different of the three CHiME-7 DASR scenarios, so I used it for validation here, since the final challenge score is a macro-average across all scenarios.
It is used to perform diarization in the CHiME-7 DASR diarization baseline (a pipeline sketch is given under Usage below).
For more information, see the CHiME-7 DASR baseline recipe in ESPnet2.
## Usage
This model relies on pyannote.audio 2.1.1; see the pyannote.audio installation instructions.
```python
from pyannote.audio import Model

model = Model.from_pretrained("popcornell/pyannote-segmentation-chime6-mixer6")
```
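To get raw per-frame, per-speaker activations out of the loaded model, here is a minimal sketch using pyannote.audio's `Inference` helper (the file path `audio.wav` is a placeholder):

```python
from pyannote.audio import Inference

# sliding-window inference over a local recording ("audio.wav" is a placeholder path)
inference = Inference(model)
activations = inference("audio.wav")  # SlidingWindowFeature with per-frame, per-speaker scores
```

The same model can also be plugged into pyannote.audio's `VoiceActivityDetection` or `OverlappedSpeechDetection` pipelines to turn these scores into speech regions.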
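For the diarization use case mentioned above, a rough sketch of plugging this segmentation model into pyannote.audio 2.1.1's `SpeakerDiarization` pipeline is shown below; the embedding model and hyperparameter values are illustrative assumptions, not the tuned CHiME-7 baseline configuration:

```python
from pyannote.audio import Model
from pyannote.audio.pipelines import SpeakerDiarization

segmentation = Model.from_pretrained("popcornell/pyannote-segmentation-chime6-mixer6")

# embedding model and hyperparameters are illustrative, not the baseline's tuned values
pipeline = SpeakerDiarization(
    segmentation=segmentation,
    embedding="speechbrain/spkrec-ecapa-voxceleb",
    clustering="AgglomerativeClustering",
)
pipeline.instantiate({
    "segmentation": {"threshold": 0.5, "min_duration_off": 0.0},
    "clustering": {"method": "centroid", "min_cluster_size": 15, "threshold": 0.72},
})

diarization = pipeline("audio.wav")  # pyannote.core.Annotation with speaker turns
```

Refer to the CHiME-7 DASR baseline recipe in ESPnet2 for the actual configuration used in the challenge baseline.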