Checkpoints for MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training.
![Multimodal Art Projection's profile picture](https://cdn-avatars.huggingface.co/v1/production/uploads/63839e9962badff4326cf360/k4Q7R4XLDMp_1VF4C6GEd.jpeg)
Multimodal Art Projection
Organization Card
Multimodal Art Projection (M-A-P) is an open-source research community. The community members work on Artificial Intelligence-Generated Content (AIGC) topics, including the text, audio, and vision modalities. We aim to promote open research on large language/music/multimodal model (LLM/LMM) training, data collection, and the development of fun applications.
Welcome to join us!
- Organization page: https://m-a-p.ai
- Discord Channel
- Our Full Paper List
The development log of our Multimodal Art Projection (m-a-p) model family:
- 🔥11/04/2024: MuPT paper and demo are out. HF collection.
- 🔥08/04/2024: Chinese Tiny LLM is out. HF collection.
- 🔥28/02/2024: The release of ChatMusician's demo, code, model, data, and benchmark. 😆
- 🔥23/02/2024: The release of OpenCodeInterpreter, which beats the GPT-4 code interpreter on HumanEval.
- 23/01/2024: we released CMMMU for better evaluation of Chinese LMMs.
- 13/01/2024: we released a series of Music Pretrained Transformer (MuPT) checkpoints, with sizes up to 1.3B parameters and an 8192-token context length. The models are LLaMA2-based and pre-trained on the world's largest symbolic music dataset of 10B tokens (ABC notation format). We currently support the Megatron-LM format and will release Hugging Face checkpoints soon.
- 02/06/2023: we officially released the MERT preprint and training code.
- 17/03/2023: we released two advanced music understanding models, MERT-v1-95M and MERT-v1-330M, trained with a new paradigm and dataset. They outperform the previous models and generalize better to more tasks (see the loading sketch after this list).
- 14/03/2023: we retrained the MERT-v0 model on an open-source-only music dataset and released it as MERT-v0-public.
- 29/12/2022: we released MERT-v0, a music understanding model trained with the MLM paradigm, which performs better on downstream tasks.
- 29/10/2022: we released music2vec, a pre-trained MIR model trained with the BYOL paradigm.
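A minimal loading sketch for the MERT-v1 checkpoints, assuming the public `m-a-p/MERT-v1-95M` repo and the custom-code path described on its model card; the silent dummy audio and the layer stacking are illustrative only:

```python
import torch
from transformers import AutoModel, Wav2Vec2FeatureExtractor

# MERT ships custom modeling code on the Hub, so trust_remote_code=True is required.
repo = "m-a-p/MERT-v1-95M"
model = AutoModel.from_pretrained(repo, trust_remote_code=True)
processor = Wav2Vec2FeatureExtractor.from_pretrained(repo, trust_remote_code=True)

# Five seconds of silent dummy audio at the extractor's expected sampling rate
# (24 kHz for MERT-v1); replace with real waveforms for actual MIR tasks.
sr = processor.sampling_rate
waveform = torch.zeros(sr * 5)

inputs = processor(waveform.numpy(), sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Per-layer hidden states; downstream probes typically time-pool each layer
# and learn a weighted combination across layers.
hidden_states = torch.stack(outputs.hidden_states)  # [layers+1, batch, frames, dim]
print(hidden_states.shape)
```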
Models (100)
- m-a-p/neo_scalinglaw_460M
- m-a-p/neo_scalinglaw_980M
- m-a-p/neo_2b_general
- m-a-p/neo_scalinglaw_250M
- m-a-p/neo_7b_intermediate
- m-a-p/neo_7b (Text Generation)
- m-a-p/neo_7b_decay
- m-a-p/neo_7b_instruct_v0.1 (Text Generation)
- m-a-p/neo_7b_sft_v0.1 (Text Generation)
- m-a-p/OpenCodeInterpreter-CL-70B (Text Generation)
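The Text Generation checkpoints above (the neo_7b family and OpenCodeInterpreter) should load through the standard causal-LM path. A hedged sketch, assuming `m-a-p/neo_7b_instruct_v0.1` follows the usual `transformers` conventions; check the individual model card for the exact prompt or chat format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "m-a-p/neo_7b_instruct_v0.1"  # any of the Text Generation repos above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

# Plain-text prompting; instruct models usually expect the chat template
# documented on their model card, so treat this prompt format as a placeholder.
prompt = "Briefly explain what ABC notation is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```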
Datasets (18)
- m-a-p/GIEBench
- m-a-p/neo_sft_phase2
- m-a-p/II-Bench
- m-a-p/MusicPile
- m-a-p/Matrix
- m-a-p/COIG-CQIA
- m-a-p/MAP-CC
- m-a-p/COIG-Kun
- m-a-p/CHC-Bench
- m-a-p/CodeEditorBench
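These datasets are standard Hub repos and should be loadable with the `datasets` library. A minimal sketch, assuming a `train` split; some of the datasets above (e.g. m-a-p/COIG-CQIA) expose multiple named subsets and then also need a config name passed to `load_dataset`:

```python
from datasets import load_dataset

# Stream to avoid downloading a large corpus up front; the repo name and
# "train" split are taken from the listing above and assumed to be valid.
ds = load_dataset("m-a-p/MusicPile", split="train", streaming=True)
print(next(iter(ds)))
```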