nicolaus625 committed: update README based on Hugging Face template
README.md CHANGED
@@ -6,18 +6,22 @@ tags:
 - music
 - art
 ---
-
-
-This repo contains the code for the following paper.
-__[MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response](https://arxiv.org/abs/2309.08730)__
-
-You can refer to more information at the [GitHub repo](https://github.com/zihaod/MusiLingo)
-
-
-
-This checkpoint is developped on the MI-short.
-
-
+# Model Card for Model ID
+## Model Details
+### Model Description
+The model consists of a music encoder `MERT-v1-300M`, a natural language decoder `vicuna-7b-delta-v0`, and a linear projection layer between the two.
+
+This checkpoint of MusiLingo is developed on MusicInstruct (MI)-short and can answer short instructions about raw music audio, such as queries about tempo, emotion, genre, or tags. You can use the [MI](https://huggingface.co/datasets/m-a-p/Music-Instruct) dataset for the following demo.
+
+
+### Model Sources [optional]
+- **Repository:** [GitHub repo](https://github.com/zihaod/MusiLingo)
+- **Paper [optional]:** __[MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response](https://arxiv.org/abs/2309.08730)__
+<!-- - **Demo [optional]:** [More Information Needed] -->
+
+
+
+## Getting Started
 ```
 from tqdm.auto import tqdm