Commit 47e682c
Parent(s): fc81d8b

Update README.md with HF implementation (#5)

- Update README.md with HF implementation (a05d7f56e82115620dbc41fcec5a0a2d8c8715dc)
- retain audiocraft usage (7235210de72083cf126290e8d20d18a146ffa95e)
- fix typos (dc41f5125ba7117f660d6dc438b680a2e488c934)

README.md CHANGED
````diff
@@ -6,9 +6,9 @@ license: cc-by-nc-4.0
 
 # MusicGen - Small - 300M
 
-
-
-Unlike existing methods like MusicLM, MusicGen doesn't
 By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
 
 MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.
@@ -23,15 +23,75 @@ Four checkpoints are released:
 
 Try out MusicGen yourself!
 
-
 <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
 </a>
 
-
-
 </a>
 
-
 
 1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft)
 ```
@@ -46,17 +106,15 @@ apt get install ffmpeg
 3. Run the following Python code:
 
 ```py
-import torchaudio
-
 from audiocraft.models import MusicGen
 from audiocraft.data.audio import audio_write
 
-model = MusicGen.get_pretrained(
 model.set_generation_params(duration=8) # generate 8 seconds.
 
-descriptions = [
 
-wav = model.generate(descriptions) # generates
 
 for idx, one_wav in enumerate(wav):
 # Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
````

# MusicGen - Small - 300M

MusicGen is a text-to-music model capable of generating high-quality music samples conditioned on text descriptions or audio prompts.
It is a single-stage auto-regressive Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz.
Unlike existing methods such as MusicLM, MusicGen doesn't require a self-supervised semantic representation, and it generates all 4 codebooks in one pass.
By introducing a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive steps per second of audio.
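To put the step count in concrete terms: at 50 Hz with 4 codebooks, flattening the codebooks into a single token stream would cost four times as many auto-regressive steps as the delay pattern. A back-of-the-envelope sketch, using only the numbers quoted above:

```py
# Numbers from the model description above: 4 codebooks sampled at 50 Hz.
frame_rate = 50      # audio frames (tokens per codebook) per second
num_codebooks = 4    # codebooks predicted per frame
duration_s = 8       # example clip length in seconds

# Flattening the codebooks into one stream: one step per token.
flattened_steps = frame_rate * num_codebooks * duration_s  # 1600 steps

# Delay pattern: the 4 codebooks of a frame are predicted in parallel,
# so one auto-regressive step per frame suffices.
delayed_steps = frame_rate * duration_s  # 400 steps

print(flattened_steps, delayed_steps)  # 1600 400
```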

MusicGen was published in [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) by *Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez*.

Try out MusicGen yourself!

* Audiocraft Colab:

  <a target="_blank" href="https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
  </a>

* Hugging Face Colab:

  <a target="_blank" href="https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
  </a>

* Hugging Face Demo:

  <a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen">
    <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
  </a>

## 🤗 Transformers Usage

You can run MusicGen locally with the 🤗 Transformers library from version 4.31.0 onwards.

1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main:

```
pip install git+https://github.com/huggingface/transformers.git
```
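Since support starts at version 4.31.0, you can verify the installed version before proceeding. A small check, assuming the `packaging` helper that transformers already depends on:

```py
# Verify the installed transformers version supports MusicGen (>= 4.31.0).
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.31.0"), \
    "MusicGen requires transformers >= 4.31.0"
```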

2. Run the following Python code to generate text-conditional audio samples:

```py
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

inputs = processor(
    text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"],
    padding=True,
    return_tensors="pt",
)

audio_values = model.generate(**inputs, do_sample=True, guidance_scale=3, max_new_tokens=256)
```
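Here the clip length is controlled by `max_new_tokens` rather than a duration argument. A sketch of the conversion, assuming the 50 Hz auto-regressive token rate stated in the model description above:

```py
# Convert between max_new_tokens and seconds of audio, assuming the
# 50 Hz auto-regressive rate stated in the model description.
frame_rate = 50  # auto-regressive steps (tokens) per second of audio

max_new_tokens = 256
print(max_new_tokens / frame_rate)  # ~5.1 seconds for the snippet above

target_seconds = 8
print(target_seconds * frame_rate)  # pass max_new_tokens=400 for ~8 seconds
```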

3. Listen to the audio samples either in an ipynb notebook:

```py
from IPython.display import Audio

sampling_rate = model.config.audio_encoder.sampling_rate
Audio(audio_values[0].numpy(), rate=sampling_rate)
```
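The generated length can also be checked directly: divide the sample count by the sampling rate. A small sketch reusing the names from the snippets above:

```py
# audio_values has shape (batch, channels, samples); duration = samples / rate.
sampling_rate = model.config.audio_encoder.sampling_rate  # 32000 for this checkpoint
print(audio_values.shape[-1] / sampling_rate, "seconds per clip")
```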

Or save them as a `.wav` file using a third-party library, e.g. `scipy`:

```py
import scipy

sampling_rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy())
```
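The snippet above writes only the first clip in the batch; a sketch that saves one file per prompt (output names are illustrative):

```py
# Save every generated clip; audio_values has shape (batch, channels, samples).
import scipy.io.wavfile

for i, clip in enumerate(audio_values):
    # clip[0] is the single (mono) channel of the i-th sample.
    scipy.io.wavfile.write(f"musicgen_out_{i}.wav", rate=sampling_rate, data=clip[0].numpy())
```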

For more details on running inference with the MusicGen model in the 🤗 Transformers library, refer to the [MusicGen docs](https://huggingface.co/docs/transformers/model_doc/musicgen).
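Those docs also cover unconditional generation; a minimal sketch, assuming the `get_unconditional_inputs` helper described there:

```py
# Unconditional generation; get_unconditional_inputs is described in the
# MusicGen docs linked above.
from transformers import MusicgenForConditionalGeneration

model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")
unconditional_inputs = model.get_unconditional_inputs(num_samples=1)

audio_values = model.generate(**unconditional_inputs, do_sample=True, max_new_tokens=256)
```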

## Audiocraft Usage

You can also run MusicGen locally through the original [Audiocraft library](https://github.com/facebookresearch/audiocraft):

1. First install the [`audiocraft` library](https://github.com/facebookresearch/audiocraft):
```
pip install git+https://github.com/facebookresearch/audiocraft.git
```

2. Make sure to have `ffmpeg` installed:

```
apt-get install ffmpeg
```

3. Run the following Python code:

```py
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("small")
model.set_generation_params(duration=8)  # generate 8 seconds.

descriptions = ["happy rock", "energetic EDM"]

wav = model.generate(descriptions)  # generates 2 samples.

for idx, one_wav in enumerate(wav):
    # Will save under {idx}.wav, with loudness normalization at -14 dB LUFS.
    audio_write(f"{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```