Model Summary

audiobox-aesthetics is a model for unified automatic quality assessment of speech, music, and sound.

Model Details

Audiobox-Aesthetics is introduced in the paper "Meta Audiobox Aesthetics: Unified Automatic Quality Assessment for Speech, Music, and Sound".

Model Developer: FAIR @ Meta AI

Model Architecture:

Audiobox-Aesthetics is based on a simple Transformer-based architecture. Specifically, the audio encoder follows a WavLM-like structure, consisting of several CNN layers and 12 Transformer (Vaswani et al., 2017) layers with 768 hidden dimensions. To predict the output, we project the audio embedding through multiple multi-layer perceptron (MLP) blocks, where each MLP block consists of 5 non-linear layers, with one block per axis (PQ, PC, CE, CU). The model is trained with standard regression losses (Mean Absolute Error and Mean Squared Error).
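As a rough illustration of that objective, the per-axis regression loss can be sketched in plain Python (toy values; this is not the actual training code):

```python
def mae(preds, targets):
    # Mean Absolute Error: average of |prediction - target|
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

def mse(preds, targets):
    # Mean Squared Error: average of (prediction - target)^2
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

# Toy predicted vs. annotated scores for one axis (e.g. PQ)
preds = [7.1, 5.4, 8.0]
targets = [7.0, 5.0, 8.5]

# Combined regression loss, MAE + MSE, as described above
loss = mae(preds, targets) + mse(preds, targets)
```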

How to install

We provide two ways to install the model:

  1. Install via pip:

pip install audiobox_aesthetics

  2. Install directly from source.

This repository requires Python 3.9 and PyTorch 2.2 or greater. To install from source, clone this repo and run:

pip install -e .

How to run prediction:

  1. Create a JSONL file with the following format:
{"path":"/path/to/a.wav"}
{"path":"/path/to/b.wav"}
...
{"path":"/path/to/z.wav"}

or, if you only want to predict aesthetic scores for a specific time segment, include start and end timestamps (in seconds):

{"path":"/path/to/a.wav", "start_time":0, "end_time": 5}
{"path":"/path/to/b.wav", "start_time":3, "end_time": 10}

and save it as input.jsonl
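For example, input.jsonl could be generated programmatically (a minimal sketch; the file paths below are placeholders):

```python
import json

# Hypothetical list of audio files to score
files = ["clips/a.wav", "clips/b.wav"]

with open("input.jsonl", "w") as f:
    for path in files:
        # One JSON object per line; optional "start_time"/"end_time"
        # keys can be added to score only a segment of the file.
        f.write(json.dumps({"path": path}) + "\n")
```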

  2. Run the following command:
audio-aes input.jsonl --batch-size 100 > output.jsonl

If you haven't downloaded the checkpoint, the script will try to download it automatically. Otherwise, you can provide the path via --ckpt /path/to/checkpoint.pt

If you have SLURM, run the following command:

audio-aes input.jsonl --batch-size 100 --remote --array 5 --job-dir $HOME/slurm_logs/ --chunk 1000 > output.jsonl

Please adjust CPU & GPU settings using --slurm-gpu and --slurm-cpu, depending on your nodes.

  3. The output file will contain the same number of rows as input.jsonl. Each row contains predictions for 4 axes as a JSON-formatted dictionary. Check the following table for more info:

     Axis name   Full name
     CE          Content Enjoyment
     CU          Content Usefulness
     PC          Production Complexity
     PQ          Production Quality

Output line example:

{"CE": 5.146, "CU": 5.779, "PC": 2.148, "PQ": 7.220}
  4. (Extra) If you want to extract only one axis (e.g. CE), post-process the output file with the following command using the jq utility:

    jq '.CE' output.jsonl > output-aes_ce.txt
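Equivalently, the same extraction can be done in Python without jq (a minimal sketch; the lines below are example output values, the second one made up for illustration — in practice read them from output.jsonl):

```python
import json

# Example output lines as produced by audio-aes
lines = [
    '{"CE": 5.146, "CU": 5.779, "PC": 2.148, "PQ": 7.220}',
    '{"CE": 4.901, "CU": 5.512, "PC": 3.004, "PQ": 6.870}',
]

# Extract a single axis (CE), mirroring: jq '.CE' output.jsonl
ce_scores = [json.loads(line)["CE"] for line in lines]
```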

Citation

If you find this repository useful, please cite the following BibTeX entry:

@article{tjandra2025aes,
    title={Meta Audiobox Aesthetics: Unified Automatic Quality Assessment for Speech, Music, and Sound},
    author={Andros Tjandra and Yi-Chiao Wu and Baishan Guo and John Hoffman and Brian Ellis and Apoorv Vyas and Bowen Shi and Sanyuan Chen and Matt Le and Nick Zacharov and Carleigh Wood and Ann Lee and Wei-Ning Hsu},
    year={2025},
    url={https://arxiv.org/abs/2502.05139}
}

License

The majority of audiobox-aesthetics is licensed under CC-BY 4.0, as found in the LICENSE file. However, portions of the project are available under separate license terms: https://github.com/microsoft/unilm is licensed under the MIT license.
