---
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: title
      dtype: string
    - name: url
      dtype: string
    - name: artist
      dtype: string
    - name: album_title
      dtype: string
    - name: license
      dtype:
        class_label:
          names:
            '0': CC-BY 1.0
            '1': CC-BY 2.0
            '2': CC-BY 2.5
            '3': CC-BY 3.0
            '4': CC-BY 4.0
            '5': CC-Sampling+ 1.0
            '6': CC0 1.0
            '7': FMA Sound Recording Common Law
            '8': Free Art License
            '9': Public Domain Mark 1.0
    - name: copyright
      dtype: string
  splits:
    - name: train
      num_bytes: 6492778912.662
      num_examples: 8802
  download_size: 10506892695
  dataset_size: 6492778912.662
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-4.0
task_categories:
  - audio-to-audio
  - audio-classification
tags:
  - freemusicarchive
  - freemusicarchive.org
  - fma
pretty_name: Free Music Archive Commercial 16 KHz - Full
---

# FMA: A Dataset for Music Analysis

Michaël Defferrard, Kirell Benzi, Pierre Vandergheynst, Xavier Bresson.

International Society for Music Information Retrieval Conference (ISMIR), 2017.

We introduce the Free Music Archive (FMA), an open and easily accessible dataset suitable for evaluating several tasks in MIR, a field concerned with browsing, searching, and organizing large music collections. The community's growing interest in feature and end-to-end learning is however restrained by the limited availability of large audio datasets. The FMA aims to overcome this hurdle by providing 917 GiB and 343 days of Creative Commons-licensed audio from 106,574 tracks from 16,341 artists and 14,854 albums, arranged in a hierarchical taxonomy of 161 genres. It provides full-length and high-quality audio, pre-computed features, together with track- and user-level metadata, tags, and free-form text such as biographies. We here describe the dataset and how it was created, propose a train/validation/test split and three subsets, discuss some suitable MIR tasks, and evaluate some baselines for genre recognition. Code, data, and usage examples are available at https://github.com/mdeff/fma.

Paper: [arXiv:1612.01840](https://arxiv.org/abs/1612.01840) (LaTeX and reviews)

Slides: [doi:10.5281/zenodo.1066119](https://doi.org/10.5281/zenodo.1066119)

Poster: [doi:10.5281/zenodo.1035847](https://doi.org/10.5281/zenodo.1035847)

## This Pack

This is the full dataset, limited to the commercially licensed samples: 8,802 untrimmed audio clips totaling 531 hours of audio in 10.5 GB of disk space.
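As a quick orientation, here is a minimal sketch of loading the train split with the Hugging Face `datasets` library and inspecting one clip. The repository id below is an assumption based on this card's name; substitute the actual path of this dataset.

```python
from datasets import load_dataset

# NOTE: the repository id is a placeholder guessed from this card's name;
# replace it with the actual path of this dataset.
ds = load_dataset("benjamin-paine/free-music-archive-commercial-16khz-full", split="train")

sample = ds[0]
audio = sample["audio"]        # decoded lazily on access
print(audio["sampling_rate"])  # expected: 16000 for this 16 kHz pack
print(len(audio["array"]))     # untrimmed, full-length waveform
print(sample["title"], "-", sample["artist"], "-", sample["license"])  # license is a ClassLabel index
```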

## License

- The FMA codebase is released under the MIT License.
- The FMA metadata is released under CC-BY 4.0.
- The individual files are released under various Creative Commons family licenses, plus a small number of additional licenses. Each file has its license attached, with the important details of that license enumerated. To make the dataset easy to use for developers and trainers, a configuration is available that limits it to commercially usable data; further narrowing by license is sketched below.

Please refer to the license information attached to each file for additional details.
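For illustration, a sketch of narrowing the data to specific licenses using the `ClassLabel` names from the metadata above; the repository id is again a placeholder.

```python
from datasets import load_dataset

ds = load_dataset("benjamin-paine/free-music-archive-commercial-16khz-full", split="train")  # placeholder id
license_feature = ds.features["license"]  # ClassLabel with the names listed in the metadata

# Keep, for example, only CC-BY 4.0 and CC0 1.0 tracks.
wanted = {license_feature.str2int("CC-BY 4.0"), license_feature.str2int("CC0 1.0")}
subset = ds.filter(lambda example: example["license"] in wanted)
print(f"{len(subset)} of {len(ds)} clips match")
```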

## Total Duration by License

| License                        | Total Duration (Percentage) |
| ------------------------------ | --------------------------- |
| CC-BY 4.0                      | 377.0 hours (4.65%)         |
| CC-BY 3.0                      | 106.9 hours (1.32%)         |
| FMA Sound Recording Common Law | 19.9 hours (0.25%)          |
| CC0 1.0                        | 10.5 hours (0.13%)          |
| CC-BY 1.0                      | 10.4 hours (0.13%)          |
| Free Art License               | 2.7 hours (0.03%)           |
| CC-BY 2.0                      | 2.5 hours (0.03%)           |
| CC-Sampling+ 1.0               | 53.9 minutes (0.01%)        |
| CC-BY 2.5                      | 11.2 minutes (0.00%)        |
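The per-license totals above can be reproduced approximately by decoding each clip and summing its duration, as in the sketch below (placeholder repository id again). Note that this decodes every clip, so it is slow for a 531-hour dataset.

```python
from collections import defaultdict
from datasets import load_dataset

ds = load_dataset("benjamin-paine/free-music-archive-commercial-16khz-full", split="train")  # placeholder id
names = ds.features["license"].names

totals = defaultdict(float)
for example in ds:  # decodes every clip; slow for ~531 hours of audio
    audio = example["audio"]
    totals[names[example["license"]]] += len(audio["array"]) / audio["sampling_rate"]

for license_name, seconds in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{license_name}: {seconds / 3600:.1f} hours")
```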

## Citations

```bibtex
@inproceedings{fma_dataset,
  title = {{FMA}: A Dataset for Music Analysis},
  author = {Defferrard, Micha\"el and Benzi, Kirell and Vandergheynst, Pierre and Bresson, Xavier},
  booktitle = {18th International Society for Music Information Retrieval Conference (ISMIR)},
  year = {2017},
  archiveprefix = {arXiv},
  eprint = {1612.01840},
  url = {https://arxiv.org/abs/1612.01840},
}
@inproceedings{fma_challenge,
  title = {Learning to Recognize Musical Genre from Audio},
  subtitle = {Challenge Overview},
  author = {Defferrard, Micha\"el and Mohanty, Sharada P. and Carroll, Sean F. and Salath\'e, Marcel},
  booktitle = {The 2018 Web Conference Companion},
  year = {2018},
  publisher = {ACM Press},
  isbn = {9781450356404},
  doi = {10.1145/3184558.3192310},
  archiveprefix = {arXiv},
  eprint = {1803.05337},
  url = {https://arxiv.org/abs/1803.05337},
}
```