[Dataset viewer preview: one example instance — mix "McG-A" of the song "Lead Me" by The DoneFors (genre: COUNTRY) — showing its list of listening evaluations (fields: track-num-eval, track-semantic-eval) and its list of tracks (fields: track-name, track-type, track-audio-path, channel-mode, and parameters such as gain, pan, eq, reverb, and compression).]

Motivation for Dataset Creation

  • Why was the dataset created? (e.g., were there specific tasks in mind, or a specific gap that needed to be filled?) This dataset was created to help advance the field of intelligent music production, specifically targeting music mixing in a digital audio workstation (DAW).

  • What (other) tasks could the dataset be used for? Are there obvious tasks for which it should not be used? This dataset could be used to predict parameter values from the semantic labels provided by the mix listening evaluations.

  • Has the dataset been used for any tasks already? If so, where are the results so others can compare (e.g., links to published papers)? Currently, this dataset is still being curated and has yet to be used for any task. This will be updated once that has changed.

  • Who funded the creation of the dataset? If there is an associated grant, provide the grant number. The National Science Foundation Graduate Research Fellowship Program (Award Abstract #1650114) helped to financially support the creation of this dataset by supporting the creator through their graduate program.

  • Any other comments?

Dataset Composition

  • What are the instances? (that is, examples; e.g., documents, images, people, countries) Are there multiple types of instances? (e.g., movies, users, ratings; people, interactions between them; nodes, edges) The instances are annotations of individual mixes from Logic Pro, Pro Tools, or Reaper projects, depending on the artist who mixed them.

  • Are relationships between instances made explicit in the data (e.g., social network links, user/movie ratings, etc.)? Each mix is independent of the others; there is no explicit relationship between instances.

  • How many instances of each type are there? There will be 114 mixes once this dataset is finalized.

  • What data does each instance consist of? "Raw" data (e.g., unprocessed text or images)? Features/attributes? Is there a label/target associated with instances? If the instances are related to people, are subpopulations identified (e.g., by age, gender, etc.) and what is their distribution? Each instance of a mix contains the following: mix name, song name, artist name, genre, a list of listening evaluations, and a list of tracks. Each track includes a track name, track type, track audio path, channel mode, and a set of parameters (gain, pan, EQ, reverb, compression, etc.).
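
The fields above nest into a per-mix structure. A minimal sketch of one instance, with field names following the dataset's hyphenated keys and values taken from the example row shown in the dataset viewer (truncated to one evaluation and one track for brevity):

```python
# Sketch of one instance's nested structure. Field names mirror the
# dataset's hyphenated keys; the values come from the "McG-A" example row.
instance = {
    "mix-name": "McG-A",
    "song-name": "Lead Me",
    "artist-name": "The DoneFors",
    "genre": "COUNTRY",
    "mix evaluation": [
        {"track-num-eval": 0.35,
         "track-semantic-eval": "less snaps; more space on the vocal"},
    ],
    "tracks": [
        {"track-name": "KOut",
         "track-type": "AUDIO",
         "track-audio-path": "/audio/Kout.wav",
         "channel-mode": "MONO",
         "parameters": {
             "gain": 0.7,
             "pan": [0],
             "eq": [{"type": "HP",
                     "value": {"freq": 31.3, "q": 18, "gain": None}}],
             "reverb": None,        # effects may be absent (null)
             "compression": None,
         }},
    ],
}

# Per-track settings live under "parameters", e.g. a name -> gain map:
gains = {t["track-name"]: t["parameters"]["gain"] for t in instance["tracks"]}
```

Note that optional effects (reverb, compression) are null when unused, so downstream code should expect missing values.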

  • Is everything included or does the data rely on external resources? (e.g., websites, tweets, datasets) If external resources, a) are there guarantees that they will exist, and remain constant, over time; b) is there an official archival version. Are there licenses, fees or rights associated with any of the data? The audio associated with each mix is an external resource; those audio files remain at their original sources: The Mixing Secrets, Weathervane, or The Open Multitrack Testbed.

  • Are there recommended data splits or evaluation measures? (e.g., training, development, testing; accuracy/AUC) There are no recommended data splits. However, if no listening evaluation is available for a given mix, we recommend leaving that mix out if you plan to use the evaluation comments as a semantic representation of the mix. None of the mixes annotated from Mike Senior's Mixing Secrets projects for Sound On Sound contain a listening evaluation.
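
The recommendation above amounts to a simple filter over the instances. A hedged sketch (the `mixes` list below is illustrative, not real data; the key name follows the dataset's schema):

```python
# Keep only mixes that include listening evaluations, as recommended
# when using the semantic comments. Example data is made up.
def has_evaluation(mix: dict) -> bool:
    # Treat both null and empty evaluation lists as "no evaluation".
    return bool(mix.get("mix evaluation"))

mixes = [
    {"mix-name": "McG-A", "mix evaluation": [{"track-num-eval": 0.35}]},
    {"mix-name": "MS-01", "mix evaluation": None},  # e.g. a Mixing Secrets project
]
usable = [m for m in mixes if has_evaluation(m)]
```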

  • What experiments were initially run on this dataset? Have a summary of those results and, if available, provide the link to a paper with more information here. No experiments have been run on this dataset as of yet.

  • Any other comments?

Data Collection Process

  • How was the data collected? (e.g., hardware apparatus/sensor, manual human curation, software program, software interface/API; how were these constructs/measures/methods validated?) The data was collected manually by annotating parameter values for each track in the mix. The mix projects were provided as Logic Pro, Pro Tools, or Reaper files. Each project was opened in its respective software, and the author went through each track and annotated these parameters manually. A tool was created to help assemble this dataset for parameter values that plugin manufacturers obscured; this tool estimated the value of each parameter based on the plugin's visual representation.

  • Who was involved in the data collection process? (e.g., students, crowdworkers) How were they compensated? (e.g., how much were crowdworkers paid?) The author of this dataset collected the data and is a graduate student at the University of Utah.

  • Over what time-frame was the data collected? Does the collection time-frame match the creation time-frame? The data was collected from September through November of 2023. The creation time frame overlaps the collection time frame: the main structure of the dataset was created first, and mixes were added iteratively.

  • How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part of speech tags; model-based guesses for age or language)? If the latter two, were they validated/verified and if so how? The data were directly observable: the parameter values are visually represented in each mix's session file.

  • Does the dataset contain all possible instances? Or is it, for instance, a sample (not necessarily random) from a larger set of instances? The dataset contains all instances provided by The Mix Evaluation Dataset, excluding the copyrighted songs that were used in the listening evaluation.

  • If the dataset is a sample, then what is the population? What was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Is the sample representative of the larger set (e.g., geographic coverage)? If not, why not (e.g., to cover a more diverse range of instances)? How does this affect possible uses? This dataset does not represent a sample of a larger population, so a sampling strategy is not applicable in this case.

  • Is there information missing from the dataset and why? (this does not include intentionally dropped instances; it might include, e.g., redacted text, withheld documents) Is this data missing because it was unavailable? Not all parameter values for every plugin used were documented. Occasionally a mix would include a saturator or a multiband compressor; due to the low occurrence of these plugins, they were omitted from the annotation process.

  • Are there any known errors, sources of noise, or redundancies in the data? To the author's knowledge, there are no errors or sources of noise within this dataset.

  • Any other comments?

Data Preprocessing

  • What preprocessing/cleaning was done? (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values, etc.) The data preprocessing happened during the data collection stage for this dataset. Some parameter values were not directly readable from the plugins used in a DAW session file; to help estimate the values of those parameters, the author created and used a tool. If there was no value for a parameter, it was omitted from the data collection.

  • Was the "raw" data saved in addition to the preprocessed/cleaned data? (e.g., to support unanticipated future uses) The raw data is still saved in the project files but was not annotated and, therefore, is not contained in this dataset. For the raw files of each mix, the reader should consult The Mix Evaluation Dataset.

  • Is the preprocessing software available? The tool that was used to help the author annotate some of the parameter values is available for download here

  • Does this dataset collection/processing procedure achieve the motivation for creating the dataset stated in the first section of this datasheet? The author of this dataset intended to create an ethically sourced repository for AI music researchers to use for music mixing. We believe that by using The Mix Evaluation Dataset along with publicly available music mixing projects, we have achieved our goal. Although this dataset is considerably smaller than what is required for most model architectures used in generative AI applications, we hope it is a positive addition to the field.

  • Any other comments?

Dataset Distribution

  • How is the dataset distributed? (e.g., website, API, etc.; does the data have a DOI; is it archived redundantly?) This dataset is distributed via HuggingFace and will continue to be hosted there for the foreseeable future. There are no current plans to create an API, although a website for the dataset has been mentioned. The data is currently being archived redundantly through the University of Utah's Box account. Should HuggingFace go down or remove the dataset, the data themselves will remain at the University of Utah and will be uploaded to a separate website.

  • When will the dataset be released/first distributed? (Is there a canonical paper/reference for this dataset?) The dataset, in its entirety, will be released on December 5th, 2023.

  • What license (if any) is it distributed under? Are there any copyrights on the data? The dataset is distributed under the MIT license. There are no copyrights on this data.

  • Are there any fees or access/export restrictions? There are no fees or access/export restrictions for this dataset.

  • Any other comments?

Dataset Maintenance

  • Who is supporting/hosting/maintaining the dataset? How does one contact the owner/curator/manager of the dataset (e.g. email address, or other contact info)? HuggingFace is currently hosting the dataset and Michael Clemens (email: michael.clemens at utah.edu) is maintaining the dataset.

  • Will the dataset be updated? How often and by whom? How will updates/revisions be documented and communicated (e.g., mailing list, GitHub)? Is there an erratum? The release of this dataset is set for December 5th, 2023. Updates and revisions will be documented through the HuggingFace repository. There is currently no erratum, but should errata arise, they will be documented here.

  • If the dataset becomes obsolete how will this be communicated? Should the dataset no longer be valid, this will be communicated through the ReadMe right here on HF.

  • Is there a repository to link to any/all papers/systems that use this dataset? There is no repo or link to any paper/systems that use the dataset. Should this dataset be used in the future for papers or system design, there will be a link to these works on this ReadMe, or a website will be created and linked here for the collection of works.

  • If others want to extend/augment/build on this dataset, is there a mechanism for them to do so? If so, is there a process for tracking/assessing the quality of those contributions. What is the process for communicating/distributing these contributions to users? This dataset is an extension of The Mix Evaluation Dataset by Brecht De Man et al., and users are free to extend/augment/build on this dataset. There is currently no mechanism for tracking or assessing the quality of these contributions.

  • Any other comments?

Legal & Ethical Considerations

  • If the dataset relates to people (e.g., their attributes) or was generated by people, were they informed about the data collection? (e.g., datasets that collect writing, photos, interactions, transactions, etc.) As this was a derivative of another work that performed the main data collection, the original music producers who mixed these tracks were not informed of the creation of this dataset.

  • If it relates to other ethically protected subjects, have appropriate obligations been met? (e.g., medical data might include information collected from animals) N/A

  • If it relates to people, were there any ethical review applications/reviews/approvals? (e.g. Institutional Review Board applications) As this is an extension of the main dataset by Brecht De Man et al. and the data collection had already been conducted, an IRB application was not included in the creation of this dataset. The data themselves are not related to the music producers but instead remain as an artifact of their work. Due to the nature of these data, an IRB was not needed.

  • If it relates to people, were they told what the dataset would be used for and did they consent? What community norms exist for data collected from human communications? If consent was obtained, how? Were the people provided with any mechanism to revoke their consent in the future or for certain uses? N/A

  • If it relates to people, could this dataset expose people to harm or legal action? (e.g., financial social or otherwise) What was done to mitigate or reduce the potential for harm? The main initiative of this work was to create an ethically sourced dataset for parameter recommendations in the music-mixing process. All of the data found here has been gathered from publicly available data from artists; therefore, no copyright or fair use infringement exists.

  • If it relates to people, does it unfairly advantage or disadvantage a particular social group? In what ways? How was this mitigated? If it relates to people, were they provided with privacy guarantees? If so, what guarantees and how are these ensured? N/A

  • Does the dataset comply with the EU General Data Protection Regulation (GDPR)? Does it comply with any other standards, such as the US Equal Employment Opportunity Act? Does the dataset contain information that might be considered sensitive or confidential? (e.g., personally identifying information) To the authors' knowledge, this dataset complies with the laws mentioned above.

  • Does the dataset contain information that might be considered inappropriate or offensive? No, this dataset does not contain any information like this.

  • Any other comments?
