
BIRDeep Audio Annotations

The BIRDeep Audio Annotations dataset is a collection of bird vocalizations from Doñana National Park, Spain. It was created as part of the BIRDeep project, which aims to optimize the detection and classification of bird species in audio recordings using deep learning techniques. The dataset is intended for use in training and evaluating models for bird vocalization detection and identification.

The research code and further information are available in the GitHub repository.

Dataset Details

Dataset Description

  • Curated by: Estación Biológica de Doñana (CSIC) and Universidad de Córdoba
  • Funded by: BIRDeep project (TED2021-129871A-I00), which is funded by MICIU/AEI/10.13039/501100011033 and the 'European Union NextGenerationEU/PRTR', as well as grant PID2020-115129RJ-I00 from MCIN/AEI/10.13039/501100011033.
  • Shared by: BIRDeep Project
  • Language(s): English
  • License: MIT

Dataset Sources

  • Code Repository: BIRDeep Neural Networks
  • Paper: Decoding the Sounds of Doñana: Advancements in Bird Detection and Identification Through Deep Learning

Uses

Direct Use

The dataset is intended for use in training and evaluating models for bird vocalization detection and identification. It can be used to automate the annotation of these recordings, facilitating relevant ecological studies.

Dataset Structure

The dataset includes audio data categorized into 38 different classes, representing a variety of bird species found in the park. The data was collected from three main habitats across nine different locations within Doñana National Park, providing a diverse range of bird vocalizations.

The distribution of the 38 classes across the three splits (train, validation, and test) is as follows:

[Figure: distribution of the 38 classes across the train, validation, and test splits]

Data Files Description

Three .CSV files contain the metadata for each split of the dataset (train, validation, and test). Each row represents one annotation (an annotated bird song), so there may be more than one row per audio file. Each .CSV file includes the following columns:

  • path: Relative path from the Audio folder to the corresponding audio file. For images, change the file extension to .PNG and use the images folder instead of the Audio folder.
  • annotator: Expert ornithologist who annotated the detection.
  • recorder: Code of the recorder; see below for the mapping of recorder, location, and coordinates.
  • date: Date of the recording.
  • time: Time of the recording.
  • audio_duration: Duration of the audio file (all recordings are 1 minute long).
  • start_time: Start time of the annotated bird song relative to the full duration of the audio.
  • end_time: End time of the annotated bird song relative to the full duration of the audio.
  • low_frequency: Lower frequency of the annotated bird song.
  • high_frequency: Higher frequency of the annotated bird song.
  • specie: Species to which the annotation belongs.
  • bbox: Bounding box coordinates in the image (YOLOv8 format).
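As a sketch, a split CSV with these columns can be loaded and summarized with pandas (the rows below are illustrative, not real data; the species name is hypothetical):

```python
import io
import pandas as pd

# A tiny in-memory stand-in for one of the split CSVs (train/validation/test).
csv_text = """path,annotator,recorder,date,time,audio_duration,start_time,end_time,low_frequency,high_frequency,specie,bbox
AM1/rec_001.wav,annotator_1,AM1,2023-03-15,06:30:00,60.0,12.4,15.1,1800,5200,Hypothetical species,0 0.21 0.55 0.04 0.30
AM1/rec_001.wav,annotator_1,AM1,2023-03-15,06:30:00,60.0,40.0,42.5,2000,6000,Bird,0 0.68 0.50 0.04 0.35
"""
df = pd.read_csv(io.StringIO(csv_text))

# Each row is one annotation; several rows may share the same audio file.
annotations_per_audio = df.groupby("path").size()

# Duration of each annotated song, in seconds.
df["song_duration"] = df["end_time"] - df["start_time"]

print(annotations_per_audio.to_dict())        # {'AM1/rec_001.wav': 2}
print(df["song_duration"].round(1).tolist())  # [2.7, 2.5]
```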

Each annotation has also been converted to the format required by YOLOv8: a labels folder mirrors the folder structure of the image folder (which matches the Audio folder) and contains one .TXT file per image, with one row per annotation giving the class and bounding box.
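A YOLOv8 label line stores the class id followed by the normalized box center and size. A minimal parser sketch (the image dimensions and label line are illustrative):

```python
# Parse one line of a YOLOv8-format label file: class_id, then normalized
# x_center, y_center, width, height (all relative to the image size).
def parse_yolo_line(line, img_w, img_h):
    parts = line.split()
    cls = int(parts[0])
    xc, yc, w, h = (float(v) for v in parts[1:])
    # Convert to absolute pixel corner coordinates (x1, y1, x2, y2).
    x1 = (xc - w / 2) * img_w
    y1 = (yc - h / 2) * img_h
    x2 = (xc + w / 2) * img_w
    y2 = (yc + h / 2) * img_h
    return cls, (x1, y1, x2, y2)

cls, box = parse_yolo_line("3 0.5 0.5 0.25 0.5", img_w=640, img_h=256)
print(cls, box)  # 3 (240.0, 64.0, 400.0, 192.0)
```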

Dataset Creation

Curation Rationale

The dataset was created to improve the accuracy and efficiency of bird species identification using deep learning models for our study case (Doñana National Park). It addresses the challenge of managing large datasets of acoustic recordings for identifying species of interest in ecoacoustics studies.

Source Data

Data Collection and Processing

Audio recordings were collected from three main habitats across nine different locations within Doñana National Park using automatic audio recorders (AudioMoths). See map below.

[Figure: map of the nine recording locations in Doñana National Park]

The names of the places correspond to the following recorders and coordinates:

| Number | Habitat        | Place Name      | Recorder | Lat        | Lon          | Installation Date |
|--------|----------------|-----------------|----------|------------|--------------|-------------------|
| Site 1 | low shrubland  | Monteblanco     | AM1      | 37.074     | -6.624       | 03/02/2023        |
| Site 2 | high shrubland | Sabinar         | AM2      | 37.1869444 | -6.720555556 | 03/02/2023        |
| Site 3 | high shrubland | Ojillo          | AM3      | 37.2008333 | -6.613888889 | 03/02/2023        |
| Site 4 | low shrubland  | Pozo Sta Olalla | AM4      | 37.2202778 | -6.729444444 | 03/02/2023        |
| Site 5 | ecotone        | Torre Palacio   | AM8      | 37.1052778 | -6.5875      | 03/02/2023        |
| Site 6 | ecotone        | Pajarera        | AM10     | 37.1055556 | -6.586944444 | 03/02/2023        |
| Site 7 | ecotone        | Caño Martinazo  | AM11     | 37.2086111 | -6.512222222 | 03/02/2023        |
| Site 8 | marshland      | Cancela Millán  | AM15     | 37.0563889 | -6.6025      | 03/02/2023        |
| Site 9 | marshland      | Juncabalejo     | AM16     | 36.9361111 | -6.378333333 | 03/02/2023        |

All recording dates and times are in UTC.
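For analysis, the date and time columns can be combined into timezone-aware UTC timestamps. A sketch, assuming day-first dates as used in the table above:

```python
from datetime import datetime, timezone

def to_utc(date_str, time_str):
    # Dates in the table use day/month/year; recording times are already UTC.
    dt = datetime.strptime(f"{date_str} {time_str}", "%d/%m/%Y %H:%M:%S")
    return dt.replace(tzinfo=timezone.utc)

ts = to_utc("03/02/2023", "06:30:00")
print(ts.isoformat())  # 2023-02-03T06:30:00+00:00
```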

Data producers

The data was produced by researchers from Estación Biológica de Doñana and Universidad de Córdoba, a research center and a university in southern Spain, close to the study area, Doñana National Park.

Annotations

Approximately 500 minutes of audio data were annotated, prioritizing times when birds are most active to capture as many songs as possible, specifically from a few hours before dawn until midday.

Annotation process

Annotations were made manually by experts, resulting in 3749 annotations across 38 different classes. In addition to the species-specific classes, some general classes were distinguished: genus-level labels (used when the species was unknown but the genus could be identified), a general "Bird" class, and a "No Audio" class for recordings that contain only soundscape without bird songs.

Because the Bird Song Detector has only two classes, labels were reclassified as "Bird" or "No bird": recordings containing only soundscape background, without biotic sound or with only non-avian biotic sounds, were labeled "No bird".
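The two-class remapping can be sketched as follows (the set of non-avian classes shown is an assumption; extend it to match the actual label set):

```python
def to_detector_label(annotation_class):
    # Any avian class (species, genus, or the generic "Bird") maps to "Bird";
    # everything else (pure soundscape, non-avian biotic sound) maps to "No bird".
    non_avian = {"No Audio"}  # illustrative; extend with other non-avian classes
    return "No bird" if annotation_class in non_avian else "Bird"

labels = [to_detector_label(c) for c in ["Hypothetical species", "Bird", "No Audio"]]
print(labels)  # ['Bird', 'Bird', 'No bird']
```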

Who are the annotators?

  • Eduardo Santamaría García, Estación Biológica de Doñana, Dept. of Ecology and Evolution, Sevilla, Spain
  • Giulia Bastianelli, Estación Biológica de Doñana, ICTS-Doñana (Infraestructura Científico-Técnica Singular de Doñana), Sevilla, Spain

Bias, Risks, and Limitations

The dataset may have biases due to the specific ecological context of Doñana National Park and the focus on bird vocalizations. It also exhibits class imbalance, with varying frequencies of annotations across different bird species classes. Additionally, the dataset contains inherent challenges related to environmental noise.

Recommendations

Users should be aware of the ecological context and potential biases when using the dataset. They should also consider the class imbalance and the challenges related to environmental noise.

More Information

This dataset incorporates synthetic background audio, created by introducing noise and modifying the intensities of the original audio. This process, known as Data Augmentation, enhances the robustness of the dataset. Additionally, a subset of the ESC-50 dataset, a widely recognized benchmark for environmental sound classification, has been included to enrich the diversity of the dataset. These additional datasets can be excluded, as they live in separate folders (Data Augmentation and ESC50) within the root folders for audios, images, and labels. If they are not used, their annotations should also be removed from the CSV files.
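Excluding this material amounts to dropping the CSV rows whose path points into those folders; a sketch with illustrative rows:

```python
import io
import pandas as pd

# Illustrative rows: one real recording plus one row from each extra folder.
csv_text = """path,specie
AM1/rec_001.wav,Bird
Data Augmentation/aug_01.wav,No Audio
ESC50/rain_01.wav,No Audio
"""
df = pd.read_csv(io.StringIO(csv_text))

# Keep only rows whose top-level folder is not one of the extra datasets.
top_folder = df["path"].str.split("/").str[0]
df_clean = df[~top_folder.isin(["Data Augmentation", "ESC50"])]
print(df_clean["path"].tolist())  # ['AM1/rec_001.wav']
```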

The synthetic audio was created using a Python script that took the original background recordings, modified their intensities, and applied time shifts. This introduced noise and variation into the audio, simulating different recording conditions and enhancing the dataset's robustness.
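A minimal numpy sketch of this kind of augmentation (gain change, time shift, added noise; the parameter values and sample rate are illustrative, not those used in the actual script):

```python
import numpy as np

def augment(audio, gain=0.8, shift=400, noise_std=0.01, seed=0):
    rng = np.random.default_rng(seed)
    out = np.roll(audio * gain, shift)             # scale intensity, then time-shift
    out += rng.normal(0.0, noise_std, out.shape)   # add background noise
    return np.clip(out, -1.0, 1.0)                 # keep samples in [-1, 1]

x = np.zeros(16000, dtype=np.float64)  # 1 s of silence at 16 kHz (illustrative)
y = augment(x)
print(y.shape)  # (16000,)
```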

Dataset Card Authors and Affiliations

  • Alba Márquez-Rodríguez, Estación Biológica de Doñana, Dept. of Ecology and Evolution & Universidad de Córdoba, Dept. of Informatics and Numeric Analysis
  • Miguel Ángel Muñoz-Mohedano, Estación Biológica de Doñana, Dept. of Ecology and Evolution
  • Manuel Jesús Marín-Jiménez, Universidad de Córdoba, Dept. of Informatics and Numeric Analysis
  • Eduardo Santamaría-García, Estación Biológica de Doñana, Dept. of Ecology and Evolution
  • Giulia Bastianelli, Estación Biológica de Doñana, ICTS-Doñana (Infraestructura Científico-Técnica Singular de Doñana)
  • Irene Mendoza, Estación Biológica de Doñana, Dept. of Ecology and Evolution

Citation

@misc{birdeep_audioannotations_2024,
    author = {M{\'a}rquez-Rodr{\'i}guez, Alba and Muñoz-Mohedano, Miguel {\'A}ngel and Mar{\'i}n-Jim{\'e}nez, Manuel Jes{\'u}s and Santamar{\'i}a-Garc{\'i}a, Eduardo and Bastianelli, Giulia and Mendoza, Irene},
    title = {BIRDeepAudioAnnotations (Revision 4cf0456)},
    url = {https://huggingface.co/datasets/GrunCrow/BIRDeep_AudioAnnotations},
    year = {2024},
    doi = {10.57967/hf/2801},
    publisher = {Hugging Face}
}

Dataset Card Contact

Alba Márquez-Rodríguez - ai.gruncrow@gmail.com
