Weiyun1025 committed
Commit a9b4b89 (parent: b544f90)

Update README.md

Files changed (1): README.md (+1 −0)
README.md CHANGED
@@ -25,6 +25,7 @@ Welcome to OpenGVLab! We are a research group from Shanghai AI Lab focused on Vi
 
 - [ShareGPT4o](https://sharegpt4o.github.io/): a groundbreaking large-scale resource that we plan to open-source with 200K meticulously annotated images, 10K videos with highly descriptive captions, and 10K audio files with detailed descriptions.
 - [InternVid](https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVid): a large-scale video-text dataset for multimodal understanding and generation.
+- [MMPR](https://huggingface.co/datasets/OpenGVLab/MMPR): a high-quality, large-scale multimodal preference dataset.
 
 # Benchmarks
 - [MVBench](https://github.com/OpenGVLab/Ask-Anything/tree/main/video_chat2): a comprehensive benchmark for multimodal video understanding.
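
The added entry points to the MMPR dataset on the Hugging Face Hub. Below is a minimal sketch of loading it with the `datasets` library; the repo ID is taken from the URL above, while the split name, streaming option, and field names are assumptions, not confirmed by this commit.

```python
# Minimal sketch (assumptions noted): load the MMPR preference dataset with the
# Hugging Face `datasets` library. The repo ID comes from the link above; the
# "train" split and the example field names are assumed, not taken from this commit.
from datasets import load_dataset

# Stream records to avoid downloading the full dataset up front.
mmpr = load_dataset("OpenGVLab/MMPR", split="train", streaming=True)

# Inspect the fields of the first record (e.g. prompt, chosen/rejected responses).
first = next(iter(mmpr))
print(sorted(first.keys()))
```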