wenyi committed on
Commit
9ab2ccb
•
1 Parent(s): d88b352

Update README.md

Files changed (1)
  1. README.md +24 -15
README.md CHANGED
@@ -19,7 +19,7 @@ inference: false
  <img src=https://raw.githubusercontent.com/THUDM/CogVLM2/53d5d5ea1aa8d535edffc0d15e31685bac40f878/resources/logo.svg width="40%"/>
  </div>
  <p align="center">
- 👋 <a href="resources/WECHAT.md" target="_blank">Wechat</a> · 💡<a href="http://36.103.203.44:7861/" target="_blank">Online Demo</a> · 🎈<a href="https://github.com/THUDM/CogVLM2" target="_blank">Github Page</a>
+ 👋 <a href="resources/WECHAT.md" target="_blank">Wechat</a> · 💡<a href="http://36.103.203.44:7861/" target="_blank">Online Demo</a> · 🎈<a href="https://github.com/THUDM/CogVLM2" target="_blank">Github Page</a> · 📑 <a href="https://arxiv.org/pdf/2408.16500" target="_blank">Paper</a>
  </p>
  <p align="center">
  📍Experience the larger-scale CogVLM model on the <a href="https://open.bigmodel.cn/dev/api#glm-4v">ZhipuAI Open Platform</a>.
@@ -50,20 +50,20 @@ You can see the details of the CogVLM2 family of open source models in the table

  Our open source models have achieved good results in many lists compared to the previous generation of CogVLM open source models. Its excellent performance can compete with some non-open source models, as shown in the table below:

- | Model | Open Source | LLM Size | TextVQA | DocVQA | ChartQA | OCRbench | MMMU | MMVet | MMBench |
- |--------------------------------|-------------|----------|----------|----------|----------|----------|----------|----------|----------|
- | CogVLM1.1 | ✅ | 7B | 69.7 | - | 68.3 | 590 | 37.3 | 52.0 | 65.8 |
- | LLaVA-1.5 | ✅ | 13B | 61.3 | - | - | 337 | 37.0 | 35.4 | 67.7 |
- | Mini-Gemini | ✅ | 34B | 74.1 | - | - | - | 48.0 | 59.3 | 80.6 |
- | LLaVA-NeXT-LLaMA3 | ✅ | 8B | - | 78.2 | 69.5 | - | 41.7 | - | 72.1 |
- | LLaVA-NeXT-110B | ✅ | 110B | - | 85.7 | 79.7 | - | 49.1 | - | 80.5 |
- | InternVL-1.5 | ✅ | 20B | 80.6 | 90.9 | **83.8** | 720 | 46.8 | 55.4 | **82.3** |
- | QwenVL-Plus | ❌ | - | 78.9 | 91.4 | 78.1 | 726 | 51.4 | 55.7 | 67.0 |
- | Claude3-Opus | ❌ | - | - | 89.3 | 80.8 | 694 | **59.4** | 51.7 | 63.3 |
- | Gemini Pro 1.5 | ❌ | - | 73.5 | 86.5 | 81.3 | - | 58.5 | - | - |
- | GPT-4V | ❌ | - | 78.0 | 88.4 | 78.5 | 656 | 56.8 | **67.7** | 75.0 |
- | CogVLM2-LLaMA3 (Ours) | ✅ | 8B | 84.2 | **92.3** | 81.0 | 756 | 44.3 | 60.4 | 80.5 |
- | CogVLM2-LLaMA3-Chinese (Ours) | ✅ | 8B | **85.0** | 88.4 | 74.7 | **780** | 42.8 | 60.5 | 78.9 |
+ | Model | Open Source | LLM Size | TextVQA | DocVQA | ChartQA | OCRbench | VCR_EASY | VCR_HARD | MMMU | MMVet | MMBench |
+ |----------------------------|-------------|----------|----------|----------|----------|----------|-------------|-------------|----------|----------|----------|
+ | CogVLM1.1 | ✅ | 7B | 69.7 | - | 68.3 | 590 | 73.9 | 34.6 | 37.3 | 52.0 | 65.8 |
+ | LLaVA-1.5 | ✅ | 13B | 61.3 | - | - | 337 | - | - | 37.0 | 35.4 | 67.7 |
+ | Mini-Gemini | ✅ | 34B | 74.1 | - | - | - | - | - | 48.0 | 59.3 | 80.6 |
+ | LLaVA-NeXT-LLaMA3 | ✅ | 8B | - | 78.2 | 69.5 | - | - | - | 41.7 | - | 72.1 |
+ | LLaVA-NeXT-110B | ✅ | 110B | - | 85.7 | 79.7 | - | - | - | 49.1 | - | 80.5 |
+ | InternVL-1.5 | ✅ | 20B | 80.6 | 90.9 | **83.8** | 720 | 14.7 | 2.0 | 46.8 | 55.4 | **82.3** |
+ | QwenVL-Plus | ❌ | - | 78.9 | 91.4 | 78.1 | 726 | - | - | 51.4 | 55.7 | 67.0 |
+ | Claude3-Opus | ❌ | - | - | 89.3 | 80.8 | 694 | 63.85 | 37.8 | **59.4** | 51.7 | 63.3 |
+ | Gemini Pro 1.5 | ❌ | - | 73.5 | 86.5 | 81.3 | - | 62.73 | 28.1 | 58.5 | - | - |
+ | GPT-4V | ❌ | - | 78.0 | 88.4 | 78.5 | 656 | 52.04 | 25.8 | 56.8 | **67.7** | 75.0 |
+ | **CogVLM2-LLaMA3** | ✅ | 8B | 84.2 | **92.3** | 81.0 | 756 | **83.3** | **38.0** | 44.3 | 60.4 | 80.5 |
+ | **CogVLM2-LLaMA3-Chinese** | ✅ | 8B | **85.0** | 88.4 | 74.7 | **780** | 79.9 | 25.1 | 42.8 | 60.5 | 78.9 |

  All reviews were obtained without using any external OCR tools ("pixel only").
  ## Quick Start
@@ -159,6 +159,15 @@ This model is released under the CogVLM2 [LICENSE](LICENSE). For models built wi
  If you find our work helpful, please consider citing the following papers

  ```
+ @misc{hong2024cogvlm2,
+       title={CogVLM2: Visual Language Models for Image and Video Understanding},
+       author={Hong, Wenyi and Wang, Weihan and Ding, Ming and Yu, Wenmeng and Lv, Qingsong and Wang, Yan and Cheng, Yean and Huang, Shiyu and Ji, Junhui and Xue, Zhao and others},
+       year={2024},
+       eprint={2408.16500},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV}
+ }
+
  @misc{wang2023cogvlm,
        title={CogVLM: Visual Expert for Pretrained Language Models},
        author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
 