# LP-MusicCaps-HF
This is the LP-MusicCaps model, repackaged so that it can be loaded directly with the Hugging Face library.
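Since the checkpoint is packaged for the Hugging Face ecosystem, loading it should follow the standard `from_pretrained` pattern. The sketch below is only an illustration under assumptions: the repository id is a placeholder, and the `AutoModel`/`AutoProcessor` classes and the `trust_remote_code` flag may need to be adjusted to whatever this repo's config actually specifies.

```python
# Minimal loading sketch. The repo id is a placeholder and the choice of
# AutoModel/AutoProcessor (and whether trust_remote_code is required) is an
# assumption; check this repository's files for the configured architecture.
from transformers import AutoModel, AutoProcessor

repo_id = "<org>/lp-music-caps-hf"  # hypothetical Hub id for this checkpoint

processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)

# Captioning an audio clip would then follow the usual flow: preprocess the
# waveform with the processor and decode a caption with model.generate().
```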
## Original Model Card
- Repository: LP-MusicCaps repository
- Paper: arXiv
### :sound: LP-MusicCaps: LLM-Based Pseudo Music Captioning
This is an implementation of LP-MusicCaps: LLM-Based Pseudo Music Captioning. This project aims to generate captions for music. 1) Tag-to-Caption: using existing tags, we leverage the power of OpenAI's GPT-3.5 Turbo API to generate high-quality and contextually relevant captions based on music tags. 2) Audio-to-Caption: using music-audio and pseudo-caption pairs, we train a cross-modal encoder-decoder model for end-to-end music captioning.
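As a rough illustration of the tag-to-caption step, the sketch below sends a list of music tags to the GPT-3.5 Turbo chat API and asks for a caption. The prompt wording and tag list are assumptions for illustration only, not the exact instruction template used in the paper.

```python
# Sketch of tag-to-caption generation with the OpenAI chat API.
# The prompt text is an assumption, not the paper's exact instruction template.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tags = ["acoustic guitar", "calm", "folk", "female vocals"]  # example tags
prompt = (
    "Write a single-sentence description of a piece of music "
    f"that has the following tags: {', '.join(tags)}."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```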
LP-MusicCaps: LLM-Based Pseudo Music Captioning
SeungHeon Doh, Keunwoo Choi, Jongpil Lee, Juhan Nam
To appear at ISMIR 2023
### TL;DR
- 1. Tag-to-Caption (LLM Captioning): generate a caption from a given tag input.
- 2. Pretrain Music Captioning Model: generate a pseudo caption from given audio.
- 3. Transfer Music Captioning Model: generate a human-level caption from given audio.
### Open Source Material
The open-source materials are available online for future research; an example of the dataset is provided in a notebook.