# WhisperKit Evals Dataset

## Overview
The WhisperKit Evals Dataset is a comprehensive collection of our speech recognition evaluation results, designed to benchmark the performance of WhisperKit models across various devices and operating systems. It provides detailed insight into performance, quality metrics, and model behavior under different conditions.
## Dataset Structure
The dataset is organized into JSON files, each representing a single evaluation run. The file naming convention encodes crucial metadata:
`{Date}_{CommitHash}/{DeviceIdentifier}_{ModelVersion}_{Timestamp}_{Dataset}_{UUID}.json`
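For illustration, a result path can be split back into these metadata fields. The sketch below is not part of the dataset tooling; the example path is hypothetical, and it assumes that only the model-version segment may itself contain underscores:

```python
from pathlib import Path

def parse_result_path(path: str) -> dict:
    """Split a result path into its metadata fields.

    Assumes only the model-version segment may contain underscores;
    the other segments are underscore-free.
    """
    p = Path(path)
    run_dir = p.parent.name                # "{Date}_{CommitHash}"
    date, commit_hash = run_dir.split("_", 1)

    parts = p.stem.split("_")              # filename without ".json"
    device = parts[0]
    timestamp, dataset, uuid = parts[-3:]
    model = "_".join(parts[1:-3])          # may contain underscores

    return {
        "date": date,
        "commit": commit_hash,
        "device": device,
        "model": model,
        "timestamp": timestamp,
        "dataset": dataset,
        "uuid": uuid,
    }

# Hypothetical example path following the convention above:
print(parse_result_path(
    "2024-06-01_abc1234/iPhone15,2_openai_whisper-base_123456_librispeech_0000-1111.json"
))
```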
## File Content
Each JSON file contains an array of objects with a `testInfo` key, which includes:

- `diff`: An array of character-level differences between the reference and predicted transcriptions.
- `prediction`: The full predicted transcription.
- `reference`: The full reference transcription.
- `wer`: Word Error Rate for the specific transcription.
- `model`: The model used for the test.
- `device`: The device on which the test was run.
- `timings`: Various timing metrics for the transcription process.
- `datasetRepo`: The Hugging Face repository used as test data for the benchmark.
- `datasetDir`: The subfolder in `datasetRepo` containing the specific audio files used.
- `audioFile`: The name of the audio file used.
- `date`: The date the benchmark was performed.
It also includes various system measurements taken during the benchmarking process, such as system diagnostics, memory usage, latency, and configuration.
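To inspect a single run, a file can be loaded as a plain JSON array. The following is a minimal sketch; the file path is illustrative, and it assumes `wer` is stored as a number:

```python
import json

# Load a single evaluation run (path is illustrative).
with open("evals/2024-06-01_abc1234/result.json") as f:
    run = json.load(f)

# Each element wraps one transcription test under a "testInfo" key.
for entry in run:
    info = entry["testInfo"]
    print(f'{info["model"]} on {info["device"]}: '
          f'WER={info["wer"]:.3f} file={info["audioFile"]}')
```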
## Key Features
- Comprehensive Model Evaluation: Results from various WhisperKit models, including different sizes and architectures.
- Cross-Device Performance: Tests run on a range of devices, from mobile to desktop, allowing for performance comparisons.
- Detailed Metrics: Includes Word Error Rate (WER), processing speed, and detailed transcription comparisons.
- Rich Metadata: Each file contains extensive metadata about the test conditions and setup.
## Use Cases
This dataset is invaluable for:
- Benchmarking speech recognition models
- Analyzing performance across different hardware (see the sketch after this list)
- Identifying specific strengths and weaknesses in transcription tasks
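For example, average WER per model and device can be computed by walking the result files. This is a minimal sketch, assuming the files are mirrored locally under an `evals/` directory that follows the naming convention above:

```python
import json
from collections import defaultdict
from pathlib import Path
from statistics import mean

# Collect per-(model, device) WER across all result files
# (root directory "evals" is illustrative).
wers = defaultdict(list)
for path in Path("evals").rglob("*.json"):
    for entry in json.loads(path.read_text()):
        info = entry["testInfo"]
        wers[(info["model"], info["device"])].append(info["wer"])

for (model, device), values in sorted(wers.items()):
    print(f"{model:40s} {device:20s} avg WER = {mean(values):.3f}")
```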
## Contributing
We welcome contributions to expand and improve this dataset. Please refer to `BENCHMARKS.md` in the source repository.