Commit 77ea1c3 (verified) · committed by czczup
Parent(s): 0e3c44a

Update README.md

Files changed (1):
  1. README.md +7 -15
README.md CHANGED
@@ -11,10 +11,6 @@ language:
 - multilingual
 tags:
 - internvl
- - vision
- - ocr
- - multi-image
- - video
 - custom_code
 ---
 
@@ -60,16 +56,12 @@ As shown in the figure below, we adopted the same model architecture as InternVL
 
 - We use both the [InternVL](https://github.com/OpenGVLab/InternVL) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) repositories for model evaluation. Specifically, the results reported for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet, and SEED-Image were tested using the InternVL repository, while OCRBench, RealWorldQA, HallBench, and MathVista were evaluated using VLMEvalKit.
 
- - Please note that evaluating the same model with different testing toolkits, such as [InternVL](https://github.com/OpenGVLab/InternVL) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), can result in slight differences, which is normal. Updates to code versions and variations in environment and hardware can also cause minor discrepancies in results.
-
 Limitations: Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We are not responsible for any consequences resulting from the dissemination of harmful information.
 
 ## Quick Start
 
 We provide example code to run Mini-InternVL-Chat-2B-V1-5 using `transformers` (see the loading sketch after this hunk).
 
- We also welcome you to experience the InternVL2 series models in our [online demo](https://internvl.opengvlab.com/).
-
 > Please use transformers>=4.37.2 to ensure the model works normally.
 
 ### Model Loading
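The Quick Start section this hunk touches runs the model through `transformers`. Below is a minimal sketch of the loading step only, not the model card's verbatim example: the repo id is assumed from this card's name, and the actual chat/generation interface lives in the repository's custom remote code, which is why `trust_remote_code=True` is needed.

```python
# Requires transformers>=4.37.2, per the note in the model card.
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed Hugging Face repo id for this model card.
path = "OpenGVLab/Mini-InternVL-Chat-2B-V1-5"

# InternVL ships custom modeling code, so trust_remote_code=True is required.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bfloat16 support
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).eval().cuda()

tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
```

From here, generation goes through the method defined in the remote code (a `chat`-style call in InternVL models); the model card's full example covers image preprocessing and generation arguments.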
@@ -548,7 +540,7 @@ print(response)
 
 ## License
 
- This project is released under the MIT license, while InternLM2 is licensed under the Apache-2.0 license.
+ This project is released under the MIT License. It uses the pre-trained internlm2-chat-1_8b as a component, which is licensed under the Apache License 2.0.
 
 ## Citation
 
@@ -561,16 +553,16 @@ If you find this project useful in your research, please consider citing:
   journal={arXiv preprint arXiv:2410.16261},
   year={2024}
 }
- @article{chen2023internvl,
-   title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
-   author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
-   journal={arXiv preprint arXiv:2312.14238},
-   year={2023}
- }
 @article{chen2024far,
   title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
   author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
   journal={arXiv preprint arXiv:2404.16821},
   year={2024}
 }
+ @article{chen2023internvl,
+   title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
+   author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
+   journal={arXiv preprint arXiv:2312.14238},
+   year={2023}
+ }
 ```
 