Dataset Preview

| audio (duration 0–27.7 s) | label (string, 1–80 chars) | speaker (string, 12–15 chars) | subset (10 classes) |
|---|---|---|---|
| (audio clip) | syll_1 | BS1-SPEAKER-1 | BS1 |
| (audio clip) | syll_3 | BS1-SPEAKER-1 | BS1 |
| (audio clip) | syll_2 | BS1-SPEAKER-1 | BS1 |
| (audio clip) | syll_5 | BS1-SPEAKER-1 | BS1 |
| (audio clip) | syll_0 | BS1-SPEAKER-1 | BS1 |
| (audio clip) | syll_4 | BS1-SPEAKER-1 | BS1 |
| … | … | … | … |
Dataset Description
This multi-species dataset was curated to benchmark k-NN retrieval and cluster-separation techniques on human and songbird vocalizations.
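As an illustration of the intended benchmark, below is a minimal k-NN retrieval sketch. The embeddings and labels are random placeholders standing in for features extracted from the audio clips, and precision@k is one common retrieval metric, not necessarily the exact protocol used with this dataset:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Placeholder data: in practice, `embeddings` would come from a feature
# extractor applied to the audio, and `labels` from the `label` field.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(100, 64))
labels = rng.integers(0, 6, size=100)

# k-NN retrieval: for each clip, retrieve its k nearest neighbours
# (excluding the clip itself) and measure how many share its label.
k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
_, idx = nn.kneighbors(embeddings)  # idx[:, 0] is the query itself
precision_at_k = (labels[idx[:, 1:]] == labels[:, None]).mean()
print(f"precision@{k}: {precision_at_k:.3f}")
```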
Download Dataset
```python
from huggingface_hub import snapshot_download

snapshot_download("anonymous-submission000/vocsim", local_dir="data/vocsim", repo_type="dataset")
```
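Alternatively, the dataset can be loaded directly with the `datasets` library. A minimal sketch; the split name `"train"` is an assumption and may differ:

```python
from datasets import load_dataset

# Load directly from the Hub; "train" is assumed to be the split name.
ds = load_dataset("anonymous-submission000/vocsim", split="train")

# Access the fields documented under "Data Fields" below.
sample = ds[0]
print(sample["label"], sample["speaker"], sample["subset"])
print(sample["audio"]["sampling_rate"], len(sample["audio"]["array"]))
```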
For more usage details, please refer to the accompanying code repository: https://anonymous.4open.science/anonymize/neural_embeddings-6EE5
Data Fields
- Audio: The audio clip for the sample (durations range from 0 to 27.7 s).
- Label: The class of the audio clip, i.e. the type of vocalization or sound (e.g. syll_1).
- Speaker: The individual that produced the vocalization: a human speaker for the human datasets, or an individual bird for the songbird datasets.
- Subset: The source corpus the sample belongs to (10 values, covering the human and songbird datasets listed below); it can be used to restrict evaluation to a single corpus, as in the sketch after this list.
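As referenced in the Subset field above, here is a sketch of restricting evaluation to a single corpus (again assuming a `"train"` split; `"BS1"` is a subset value visible in the preview):

```python
from collections import Counter
from datasets import load_dataset

# Reload (or reuse `ds` from the download example above).
ds = load_dataset("anonymous-submission000/vocsim", split="train")

# Count samples per corpus, then restrict to one subset.
print(Counter(ds["subset"]))
bs1 = ds.filter(lambda ex: ex["subset"] == "BS1")
print(len(bs1), "samples in subset BS1")
```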
Human Datasets
- AMI: The AMI Meeting Corpus comprises 100 hours of multi-modal meeting recordings, including audio data for utterances, words, and vocal sounds, alongside detailed speaker metadata.
- TIMIT: The TIMIT dataset contains manual phonetic transcriptions of utterances read by 630 speakers of American English across a range of dialects.
- VocImSet: The Vocal Imitation Set contains recordings of 236 unique sound sources being imitated by 248 speakers.
- VocalSketch: The Vocal Sketch Dataset contains two sets of 10,705 and 5,700 vocal imitations, respectively, of 240 reference sounds.
Songbird Datasets
- DAS: The Deep Audio Segmenter Dataset features songs from a single male Bengalese finch, comprising 473 vocalizations across 6 vocalization types.
- Tomka: The Gold-Standard Zebrafinch dataset contains 48,059 vocalizations of 36 vocalization types from 4 zebra finches.
- Nicholson: The Bengalese finch song repository includes songs of four Bengalese finches recorded in the Sober lab at Emory University and manually clustered by two authors.
- Elie: Vocal repertoires from zebra finches, collected between 2011 and 2014 at the University of California, Berkeley by Julie E. Elie. This dataset contains 3,500 vocalizations of 65 vocalization types from 50 individuals.
Contact