# Jam-ALT: A Readability-Aware Lyrics Transcription Benchmark
Jam-ALT is a revision of the JamendoLyrics dataset (79 songs in 4 languages), intended for use as an automatic lyrics transcription (ALT) benchmark.
It has been published in the ISMIR 2024 paper (full citation below):
📄 Lyrics Transcription for Humans: A Readability-Aware Benchmark
👥 O. Cífka, H. Schreiber, L. Miner, F.-R. Stöter
🏢 AudioShake
The lyrics have been revised according to the newly compiled annotation guidelines, which include rules about spelling and formatting, as well as punctuation and capitalization (PnC). The audio is identical to the JamendoLyrics dataset.
Note: The dataset is not time-aligned, since the revised lyrics do not easily map to the timestamps from JamendoLyrics. To evaluate automatic lyrics alignment (ALA), please use JamendoLyrics directly.
See the project website for details and the JamendoLyrics community for related datasets.
## Loading the data
```python
from datasets import load_dataset

dataset = load_dataset("audioshake/jam-alt", split="test")
```
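Each example combines the audio with its reference lyrics. A quick way to inspect one record (the field names below are the columns mentioned elsewhere in this card):

```python
song = dataset[0]

# Metadata columns listed in this card.
print(song["name"], song["language"], song["license_type"])

# Reference lyrics and decoded audio (a dict with "array" and "sampling_rate").
print(song["text"][:200])
print(song["audio"]["sampling_rate"], len(song["audio"]["array"]))
```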
A subset is defined for each language (`en`, `fr`, `de`, `es`); for example, use `load_dataset("audioshake/jam-alt", "es")` to load only the Spanish songs.
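To work with all languages separately, a simple sketch (assuming each subset exposes the same `test` split) is to loop over the subset names:

```python
from datasets import load_dataset

# Load each language subset separately; the subset names are the
# language codes listed above.
subsets = {
    lang: load_dataset("audioshake/jam-alt", lang, split="test")
    for lang in ["en", "fr", "de", "es"]
}
print({lang: len(ds) for lang, ds in subsets.items()})
```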
To control how the audio is decoded, cast the `audio` column using `dataset.cast_column("audio", datasets.Audio(...))`. Useful arguments to `datasets.Audio()` are:
- `sampling_rate` and `mono=True` to control the sampling rate and number of channels.
- `decode=False` to skip decoding the audio and just get the MP3 file paths and contents.
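For instance, a minimal sketch of both options (the 16 kHz target rate is an illustrative choice, not a requirement of the dataset):

```python
import datasets

# Re-decode the audio as mono 16 kHz (e.g. for models expecting that rate).
dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000, mono=True))

# Or skip decoding entirely and work with the raw MP3 files.
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))
```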
The `load_dataset` function also accepts a `columns` parameter, which is useful, for example, if you want to skip downloading the audio (see the example below).
## Running the benchmark
The evaluation is implemented in our `alt-eval` package:
```python
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.2.0", split="test")
# transcriptions: list[str]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```
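To inspect the result, a hedged sketch assuming `compute_metrics` returns a mapping from metric names to values (check the `alt-eval` documentation for the exact format):

```python
results = compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])

# Assumes a flat dict of metric names to numbers; adjust to the actual
# return format of alt_eval.compute_metrics.
for name, value in results.items():
    print(f"{name}: {value}")
```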
For example, the following code can be used to evaluate Whisper:
```python
import datasets
import whisper
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("audioshake/jam-alt", revision="v1.2.0", split="test")
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))  # Get the raw audio files, let Whisper decode them

model = whisper.load_model("tiny")
transcriptions = [
    "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"])
    for a in dataset["audio"]
]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```
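To break the results down by language, one option (plain list filtering, not an `alt-eval` feature) is:

```python
# Score each language subset separately.
for lang in sorted(set(dataset["language"])):
    idx = [i for i, l in enumerate(dataset["language"]) if l == lang]
    refs = [dataset["text"][i] for i in idx]
    hyps = [transcriptions[i] for i in idx]
    print(lang, compute_metrics(refs, hyps, languages=[lang] * len(idx)))
```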
Alternatively, if you already have transcriptions, you might prefer to skip loading the `audio` column:

```python
dataset = load_dataset(
    "audioshake/jam-alt",
    revision="v1.2.0",
    split="test",
    columns=["name", "text", "language", "license_type"],
)
```
## Citation
When using the benchmark, please cite our paper as well as the original JamendoLyrics paper:
```bibtex
@inproceedings{cifka-2024-jam-alt,
  author    = {Ondrej C{\'{\i}}fka and
               Hendrik Schreiber and
               Luke Miner and
               Fabian{-}Robert St{\"{o}}ter},
  title     = {Lyrics Transcription for Humans: {A} Readability-Aware Benchmark},
  booktitle = {Proceedings of the 25th International Society for
               Music Information Retrieval Conference},
  pages     = {737--744},
  year      = {2024},
  publisher = {ISMIR},
  doi       = {10.5281/ZENODO.14877443},
  url       = {https://doi.org/10.5281/zenodo.14877443}
}

@inproceedings{durand-2023-contrastive,
  author    = {Durand, Simon and Stoller, Daniel and Ewert, Sebastian},
  title     = {Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages},
  booktitle = {2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year      = {2023},
  pages     = {1--5},
  address   = {Rhodes Island, Greece},
  doi       = {10.1109/ICASSP49357.2023.10096725}
}
```