---
task_categories:
- automatic-speech-recognition
multilinguality:
- multilingual
language:
- en
- fr
- de
- es
tags:
- music
- lyrics
- evaluation
- benchmark
- transcription
- pnc
pretty_name: 'Jam-ALT: A Readability-Aware Lyrics Transcription Benchmark'
paperswithcode_id: jam-alt
configs:
- config_name: all
  data_files:
  - split: test
    path:
    - metadata.jsonl
    - subsets/*/audio/*.mp3
  default: true
- config_name: de
  data_files:
  - split: test
    path:
    - subsets/de/metadata.jsonl
    - subsets/de/audio/*.mp3
- config_name: en
  data_files:
  - split: test
    path:
    - subsets/en/metadata.jsonl
    - subsets/en/audio/*.mp3
- config_name: es
  data_files:
  - split: test
    path:
    - subsets/es/metadata.jsonl
    - subsets/es/audio/*.mp3
- config_name: fr
  data_files:
  - split: test
    path:
    - subsets/fr/metadata.jsonl
    - subsets/fr/audio/*.mp3
---

# Jam-ALT: A Readability-Aware Lyrics Transcription Benchmark

## Dataset description

* **Project page:** https://audioshake.github.io/jam-alt/
* **Source code:** https://github.com/audioshake/alt-eval
* **Paper (ISMIR 2024):** https://doi.org/10.5281/zenodo.14877443
* **Paper (arXiv):** https://arxiv.org/abs/2408.06370
* **Extended abstract (ISMIR 2023 LBD):** https://arxiv.org/abs/2311.13987
* **Related datasets:** https://huggingface.co/jamendolyrics

Jam-ALT is a revision of the [**JamendoLyrics**](https://huggingface.co/datasets/jamendolyrics/jamendolyrics) dataset (79 songs in 4 languages), intended for use as an **automatic lyrics transcription** (**ALT**) benchmark.

It was published in the ISMIR 2024 paper (full citation [below](#citation)): \
📄 [**Lyrics Transcription for Humans: A Readability-Aware Benchmark**](https://doi.org/10.5281/zenodo.14877443) \
👥 O. Cífka, H. Schreiber, L. Miner, F.-R. Stöter \
🏢 [AudioShake](https://www.audioshake.ai/)

The lyrics have been revised according to the newly compiled [annotation guidelines](GUIDELINES.md), which include rules about spelling and formatting, as well as punctuation and capitalization (PnC). The audio is identical to that of the JamendoLyrics dataset.

> [!note]
> **Note:** The dataset is not time-aligned, since the revised lyrics do not easily map to the timestamps from JamendoLyrics. To evaluate **automatic lyrics alignment** (**ALA**), please use [JamendoLyrics](https://huggingface.co/datasets/jamendolyrics/jamendolyrics) directly.

> [!tip]
> See the [project website](https://audioshake.github.io/jam-alt/) for details and the [JamendoLyrics community](https://huggingface.co/jamendolyrics) for related datasets.

## Loading the data

```python
from datasets import load_dataset

dataset = load_dataset("jamendolyrics/jam-alt", split="test")
```

A subset is defined for each language (`en`, `fr`, `de`, `es`); for example, use `load_dataset("jamendolyrics/jam-alt", "es")` to load only the Spanish songs.

To control how the audio is decoded, cast the `audio` column using `dataset.cast_column("audio", datasets.Audio(...))` (see the sketch at the end of this section). Useful arguments to `datasets.Audio()` are:
- `sampling_rate` and `mono=True` to control the sampling rate and number of channels.
- `decode=False` to skip decoding the audio and just get the MP3 file paths and contents.

The `load_dataset` function also accepts a `columns` parameter, which is useful, for example, if you want to skip downloading the audio (see the example below).
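As a minimal sketch of the cast mentioned above: the following decodes each song as 16 kHz mono. The target format is an illustrative choice (16 kHz single-channel audio is what many speech models expect), not something the dataset prescribes:

```python
import datasets
from datasets import load_dataset

dataset = load_dataset("jamendolyrics/jam-alt", split="test")

# Re-decode the MP3s as 16 kHz mono whenever a sample is accessed.
dataset = dataset.cast_column("audio", datasets.Audio(sampling_rate=16_000, mono=True))
```

With `decode=False` instead, you get the raw file paths and bytes, as in the Whisper example in the next section.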
## Running the benchmark

The evaluation is implemented in our [`alt-eval` package](https://github.com/audioshake/alt-eval):

```python
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("jamendolyrics/jam-alt", revision="v1.2.0", split="test")
# transcriptions: list[str]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```

For example, the following code can be used to evaluate Whisper:

```python
import datasets
import whisper
from datasets import load_dataset
from alt_eval import compute_metrics

dataset = load_dataset("jamendolyrics/jam-alt", revision="v1.2.0", split="test")
# Get the raw audio files and let Whisper decode them itself.
dataset = dataset.cast_column("audio", datasets.Audio(decode=False))

model = whisper.load_model("tiny")
transcriptions = [
    "\n".join(s["text"].strip() for s in model.transcribe(a["path"])["segments"])
    for a in dataset["audio"]
]
compute_metrics(dataset["text"], transcriptions, languages=dataset["language"])
```

Alternatively, if you already have transcriptions, you might prefer to skip loading the `audio` column:

```python
dataset = load_dataset(
    "jamendolyrics/jam-alt",
    revision="v1.2.0",
    split="test",
    columns=["name", "text", "language", "license_type"],
)
```

## Citation

When using the benchmark, please cite [our paper](https://doi.org/10.5281/zenodo.14877443) as well as the original [JamendoLyrics paper](https://arxiv.org/abs/2306.07744):

```bibtex
@inproceedings{cifka-2024-jam-alt,
  author    = {Ondrej C{\'{\i}}fka and Hendrik Schreiber and Luke Miner and Fabian{-}Robert St{\"{o}}ter},
  title     = {Lyrics Transcription for Humans: {A} Readability-Aware Benchmark},
  booktitle = {Proceedings of the 25th International Society for Music Information Retrieval Conference},
  pages     = {737--744},
  year      = 2024,
  publisher = {ISMIR},
  doi       = {10.5281/ZENODO.14877443},
  url       = {https://doi.org/10.5281/zenodo.14877443}
}

@inproceedings{durand-2023-contrastive,
  author    = {Durand, Simon and Stoller, Daniel and Ewert, Sebastian},
  title     = {Contrastive Learning-Based Audio to Lyrics Alignment for Multiple Languages},
  booktitle = {2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year      = {2023},
  pages     = {1--5},
  address   = {Rhodes Island, Greece},
  doi       = {10.1109/ICASSP49357.2023.10096725}
}
```