admin committed on
Commit 7ca6a63
1 Parent(s): f2a907d
Files changed (3):
  1. .gitignore +1 -0
  2. README.md +148 -1
  3. acapella.py +122 -0
.gitignore ADDED
@@ -0,0 +1 @@
+ rename.sh
README.md CHANGED
@@ -1,3 +1,150 @@
  ---
- license: mit
+ license: cc-by-nc-nd-4.0
+ task_categories:
+ - audio-classification
+ - table-question-answering
+ - summarization
+ language:
+ - zh
+ - en
+ tags:
+ - music
+ - art
+ pretty_name: Acapella Evaluation Dataset
+ size_categories:
+ - n<1K
+ viewer: false
  ---
+
+ # Dataset Card for Acapella Evaluation
+ The raw dataset, sourced from the [Acapella Evaluation Dataset](https://ccmusic-database.github.io/en/database/ccm.html#shou2), comprises six Mandarin pop song segments performed by 22 singers, for a total of 132 audio clips. Each segment includes both a verse and a chorus. Four judges from the China Conservatory of Music assess the singing across nine dimensions: pitch, rhythm, vocal range, timbre, pronunciation, vibrato, dynamics, breath control, and overall performance, each on a 10-point scale. The evaluations are recorded in an Excel spreadsheet in .xls format.
+
+ Because the raw dataset keeps the audio recordings and evaluation sheets in separate files, which hinders efficient data retrieval, we combined the original vocal recordings with their corresponding evaluation sheets to construct the `default subset` of the current integrated version of the dataset. The data structure can be viewed in the [viewer](https://www.modelscope.cn/datasets/ccmusic-database/acapella/dataPeview). Since the dataset has already been used in published articles, there is no need to construct an `eval subset`.
+
+ ## Viewer
+ <https://www.modelscope.cn/datasets/ccmusic-database/acapella/dataPeview>
+
+ ## Dataset Structure
+ <style>
+ .datastructure td {
+     vertical-align: middle !important;
+     text-align: center;
+ }
+ .datastructure th {
+     text-align: center;
+ }
+ </style>
+ <table class="datastructure">
+     <tr>
+         <th>audio</th>
+         <th>mel</th>
+         <th>singer_id</th>
+         <th>pitch / rhythm / ... / overall_performance (9 columns)</th>
+     </tr>
+     <tr>
+         <td>.wav, 48000Hz</td>
+         <td>.jpg, 48000Hz</td>
+         <td>int</td>
+         <td>float(0-10)</td>
+     </tr>
+     <tr>
+         <td>...</td>
+         <td>...</td>
+         <td>...</td>
+         <td>...</td>
+     </tr>
+ </table>
+
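+ The same structure can be inspected programmatically. A minimal sketch, assuming the dataset loads as shown in the Usage section below (the field names follow the loader script shipped with this commit):
+
+ ```python
+ from datasets import load_dataset
+
+ # Print the declared schema of one split.
+ ds = load_dataset("ccmusic-database/acapella", name="default")
+ print(ds["song1"].features)  # audio, mel, singer_id, pitch, ..., overall_performance
+ ```
+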
+ ### Data Instances
+ .zip(.wav), .csv
+
+ ### Data Fields
+ song, singer id, pitch, rhythm, vocal range, timbre, pronunciation, vibrato, dynamic, breath control, and overall performance
+
+ ### Data Splits
+ song1-song6
+
+ ## Dataset Description
+ - **Homepage:** <https://ccmusic-database.github.io>
+ - **Repository:** <https://huggingface.co/datasets/ccmusic-database/acapella_evaluation>
+ - **Paper:** <https://doi.org/10.5281/zenodo.5676893>
+ - **Leaderboard:** <https://www.modelscope.cn/datasets/ccmusic-database/acapella>
+ - **Point of Contact:** <https://www.mdpi.com/2076-3417/12/19/9931>
+
+ ### Dataset Summary
+ Because the original dataset keeps the audio recordings and evaluation sheets in separate files, which hinders efficient data retrieval, we have consolidated the raw vocal recordings with their corresponding assessments. The dataset is divided into six splits, one per song. Each split contains 22 entries, and each entry comprises the vocal recording of an individual singer sampled at 48,000 Hz, its Mel spectrogram, the singer's ID, and evaluations across the nine dimensions previously mentioned, i.e. 12 columns of data in total. This dataset is well suited for tasks such as vocal analysis and regression-based singing voice rating: as stated above, the final column of each entry holds the overall performance score, so the audio can serve as the data and this score as the label for regression.
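+
+ As a concrete illustration of the regression use case, here is a minimal sketch (the split and column names follow the structure described above):
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("ccmusic-database/acapella", name="default", split="song1")
+ for item in ds:
+     waveform = item["audio"]["array"]    # decoded vocal recording, 48 kHz
+     label = item["overall_performance"]  # score on a 10-point scale
+     # ... feed (waveform, label) pairs into a regression model
+     break  # remove to iterate over all 22 singers
+ ```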
+
+ ### Supported Tasks and Leaderboards
+ Acapella evaluation/scoring
+
+ ### Languages
+ Chinese, English
+
+ ## Maintenance
+ ```bash
+ GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/ccmusic-database/acapella
+ cd acapella
+ ```
+
+ ## Usage
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("ccmusic-database/acapella", name="default")
+ for i in range(1, 7):
+     for item in dataset[f"song{i}"]:
+         print(item)
+ ```
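+
+ For example, to summarize the judges' scores per song (a small sketch building on the loop above):
+
+ ```python
+ for i in range(1, 7):
+     scores = dataset[f"song{i}"]["overall_performance"]
+     print(f"song{i}: mean overall_performance = {sum(scores) / len(scores):.2f}")
+ ```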
+
+ ## Dataset Creation
+ ### Curation Rationale
+ Lack of a training dataset for the acapella scoring system
+
+ ### Source Data
+ #### Initial Data Collection and Normalization
+ Zhaorui Liu, Monan Zhou
+
+ #### Who are the source language producers?
+ Students and judges from CCMUSIC
+
+ ### Annotations
+ #### Annotation process
+ 6 Mandarin song segments were sung by 22 singers, totaling 132 audio clips. Each segment consists of a verse and a chorus. Four judges evaluated the singing on a 10-point scale across nine aspects: pitch, rhythm, vocal range, timbre, pronunciation, vibrato, dynamic, breath control, and overall performance. The scores were recorded on a sheet.
+
+ #### Who are the annotators?
+ Judges from CCMUSIC
+
+ ### Personal and Sensitive Information
+ Singers' and judges' names are hidden
+
+ ## Considerations for Using the Data
+ ### Social Impact of Dataset
+ Providing a training dataset for the acapella scoring system may advance the development of related apps
+
+ ### Discussion of Biases
+ Only Mandarin songs are covered
+
+ ### Other Known Limitations
+ No starting point has been marked for the vocals
+
+ ## Additional Information
+ ### Dataset Curators
+ Zijin Li
+
+ ### Evaluation
+ [Li, R.; Zhang, M. Singing-Voice Timbre Evaluations Based on Transfer Learning. Appl. Sci. 2022, 12, 9931. https://doi.org/10.3390/app12199931](https://www.mdpi.com/2076-3417/12/19/9931)
+
+ ### Citation Information
+ ```bibtex
+ @dataset{zhaorui_liu_2021_5676893,
+   author    = {Monan Zhou and Shenyang Xu and Zhaorui Liu and Zhaowen Wang and Feng Yu and Wei Li and Baoqiang Han},
+   title     = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
+   month     = {mar},
+   year      = {2024},
+   publisher = {HuggingFace},
+   version   = {1.2},
+   url       = {https://huggingface.co/ccmusic-database}
+ }
+ ```
+
+ ### Contributions
+ Provide a training dataset for the acapella scoring system
acapella.py ADDED
@@ -0,0 +1,122 @@
+ import os
+ import datasets
+ import pandas as pd
+ from datasets.tasks import AudioClassification
+
+
+ _NAMES = {
+     "songs": [f"song{i}" for i in range(1, 7)],
+     "singers": [f"singer{i}" for i in range(1, 23)],
+ }
+
+ # The dataset name is derived from this file's name ("acapella").
+ _DBNAME = os.path.basename(__file__).split(".")[0]
+
+ _DOMAIN = f"https://www.modelscope.cn/api/v1/datasets/ccmusic-database/{_DBNAME}/repo?Revision=master&FilePath=data"
+
+ _HOMEPAGE = f"https://www.modelscope.cn/datasets/ccmusic-database/{_DBNAME}"
+
+
+ _URLS = {
+     "audio": f"{_DOMAIN}/audio.zip",
+     "mel": f"{_DOMAIN}/mel.zip",
+ }
+
+
+ class acapella(datasets.GeneratorBasedBuilder):
+     def _info(self):
+         return datasets.DatasetInfo(
+             features=datasets.Features(
+                 {
+                     "audio": datasets.Audio(sampling_rate=48000),
+                     "mel": datasets.Image(),
+                     "singer_id": datasets.features.ClassLabel(names=_NAMES["singers"]),
+                     "pitch": datasets.Value("float32"),
+                     "rhythm": datasets.Value("float32"),
+                     "vocal_range": datasets.Value("float32"),
+                     "timbre": datasets.Value("float32"),
+                     "pronunciation": datasets.Value("float32"),
+                     "vibrato": datasets.Value("float32"),
+                     "dynamic": datasets.Value("float32"),
+                     "breath_control": datasets.Value("float32"),
+                     "overall_performance": datasets.Value("float32"),
+                 }
+             ),
+             supervised_keys=("audio", "singer_id"),
+             homepage=_HOMEPAGE,
+             license="CC-BY-NC-ND",
+             version="1.2.0",
+             task_templates=[
+                 AudioClassification(
+                     task="audio-classification",
+                     audio_column="audio",
+                     label_column="singer_id",
+                 )
+             ],
+         )
+
+     def _split_generators(self, dl_manager):
+         # Read one score sheet (CSV) per song: 22 rows, one per singer.
+         songs = {}
+         for index in _NAMES["songs"]:
+             csv_files = dl_manager.download(f"{_DOMAIN}/{index}.csv")
+             song_eval = pd.read_csv(csv_files, index_col="singer_id")
+             scores = []
+             for i in range(22):
+                 scores.append(
+                     {
+                         "pitch": song_eval.iloc[i]["pitch"],
+                         "rhythm": song_eval.iloc[i]["rhythm"],
+                         "vocal_range": song_eval.iloc[i]["vocal_range"],
+                         "timbre": song_eval.iloc[i]["timbre"],
+                         "pronunciation": song_eval.iloc[i]["pronunciation"],
+                         "vibrato": song_eval.iloc[i]["vibrato"],
+                         "dynamic": song_eval.iloc[i]["dynamic"],
+                         "breath_control": song_eval.iloc[i]["breath_control"],
+                         "overall_performance": song_eval.iloc[i]["overall_performance"],
+                     }
+                 )
+
+             songs[index] = scores
+
+         # Attach each .wav to its entry; the parent directory names the song
+         # and the singer number appears in parentheses in the filename.
+         audio_files = dl_manager.download_and_extract(_URLS["audio"])
+         for fpath in dl_manager.iter_files([audio_files]):
+             fname: str = os.path.basename(fpath)
+             if fname.endswith(".wav"):
+                 song_id = os.path.basename(os.path.dirname(fpath))
+                 singer_id = int(fname.split("(")[1].split(")")[0]) - 1
+                 songs[song_id][singer_id]["audio"] = fpath
+
+         # Same convention for the Mel-spectrogram images.
+         mel_files = dl_manager.download_and_extract(_URLS["mel"])
+         for fpath in dl_manager.iter_files([mel_files]):
+             fname = os.path.basename(fpath)
+             if fname.endswith(".jpg"):
+                 song_id = os.path.basename(os.path.dirname(fpath))
+                 singer_id = int(fname.split("(")[1].split(")")[0]) - 1
+                 songs[song_id][singer_id]["mel"] = fpath
+
+         # One split per song.
+         split_generator = []
+         for key in songs.keys():
+             split_generator.append(
+                 datasets.SplitGenerator(
+                     name=key,
+                     gen_kwargs={"files": songs[key]},
+                 )
+             )
+
+         return split_generator
+
+     def _generate_examples(self, files):
+         # Entries are ordered by singer, so the enumeration index is the singer id.
+         for i, item in enumerate(files):
+             yield i, {
+                 "audio": item["audio"],
+                 "mel": item["mel"],
+                 "singer_id": i,
+                 "pitch": item["pitch"],
+                 "rhythm": item["rhythm"],
+                 "vocal_range": item["vocal_range"],
+                 "timbre": item["timbre"],
+                 "pronunciation": item["pronunciation"],
+                 "vibrato": item["vibrato"],
+                 "dynamic": item["dynamic"],
+                 "breath_control": item["breath_control"],
+                 "overall_performance": item["overall_performance"],
+             }