---
license: apache-2.0
---
# ASMv2 Model Card

This is a pretrained checkpoint; you can use it to instruction-tune your own multimodal models.

Check out the instructions [here](https://github.com/OpenGVLab/all-seeing/tree/main/all-seeing-v2#stage2-pretraining).

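For quick reference, here is a minimal sketch of downloading this checkpoint with the `huggingface_hub` client. The repo id below is an assumption (this card does not state it), so substitute the actual id of this repository.

```python
# Minimal sketch: download this pretrained checkpoint locally.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="Weiyun1025/ASMv2-Pretrain",  # assumed repo id; replace if different
    local_dir="checkpoints/asmv2-pretrain",
)
print(f"Checkpoint files downloaded to: {local_path}")
```

The resulting directory can then serve as the base checkpoint for the stage-2 instruction tuning described in the instructions linked above.
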
## Model details

**Model type:**
ASMv2 is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on multimodal instruction-following data.
It integrates the Relation Conversation (ReC) capability while maintaining powerful general capabilities.
The model is also endowed with grounding and referring capabilities, exhibits state-of-the-art performance on region-level tasks, and can be naturally adapted to the Scene Graph Generation task in an open-ended manner.

**Model date:**
ASMv2-Pretrain was trained in January 2024.

**Paper or resources for more information:**
https://github.com/OpenGVLab/all-seeing

## License
ASMv2-Pretrain is open-sourced under the Apache License 2.0.

**Where to send questions or comments about the model:**
https://github.com/OpenGVLab/all-seeing/issues

## Intended use
**Primary intended uses:**
The primary use of ASMv2 is research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
The pretraining phase uses [5M filtered samples](https://storage.googleapis.com/sfr-vision-language-research/BLIP/datasets/ccs_filtered.json) from CC12M, [10M filtered samples](https://huggingface.co/datasets/Weiyun1025/AS-V2/blob/main/as_pretrain_10m.json) from AS-1B, and 15M filtered samples from [GRiT](https://huggingface.co/datasets/zzliang/GRIT).

See [here](https://github.com/OpenGVLab/all-seeing/tree/main/all-seeing-v2#training) for more details.

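As a quick sanity check, here is a sketch of pulling the filtered AS-1B annotation file referenced above from the Hugging Face Hub. The repo id and filename come from the dataset link in this section; the record schema is not documented in this card, so the sketch only verifies the download.

```python
# Sketch: fetch the filtered AS-1B pretraining annotations referenced above.
import os

from huggingface_hub import hf_hub_download

json_path = hf_hub_download(
    repo_id="Weiyun1025/AS-V2",       # dataset repo from the link above
    filename="as_pretrain_10m.json",  # filtered AS-1B annotation file
    repo_type="dataset",
)
# The file holds roughly 10M samples, so report its size rather than loading it.
print(f"Downloaded to {json_path} ({os.path.getsize(json_path) / 1e9:.2f} GB)")
```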