czczup committed
Commit 9a9ef7c · verified · 1 Parent(s): 46bbe53

fix compatibility issue for transformers 4.46+

README.md CHANGED
@@ -5,6 +5,7 @@ library_name: transformers
  base_model:
  - OpenGVLab/InternViT-6B-448px-V1-5
  - NousResearch/Nous-Hermes-2-Yi-34B
+ new_version: OpenGVLab/InternVL2_5-38B
  base_model_relation: merge
  language:
  - multilingual
@@ -19,13 +20,13 @@ tags:
 
  # InternVL2-40B
 
- [\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821)
+ [\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5\]](https://arxiv.org/abs/2404.16821) [\[📜 Mini-InternVL\]](https://arxiv.org/abs/2410.16261)
 
  [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/706547971) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
 
- [Switch to the Chinese version](#简介)
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/_mLpMwsav5eMeNcZdrIQl.png)
+ <div align="center">
+ <img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
+ </div>
 
  ## Introduction
 
@@ -65,7 +66,7 @@ InternVL 2.0 is a multimodal large language model series, featuring models of va
  | MME<sub>sum</sub> | 2070.2 | 2110.6 | 2260.7 | 2315.0 |
  | RealWorldQA | 68.0 | 67.5 | 68.3 | 71.8 |
  | AI2D<sub>test</sub> | 89.4 | 80.3 | 84.5 | 87.1 |
- | MMMU<sub>val</sub> | 63.1 / 61.7 | 58.5 / 60.6 | 48.3 / 51.2 | 53.9 / 55.2 |
+ | MMMU<sub>val</sub> | 63.1 | 58.5 | 51.2 | 55.2 |
  | MMBench-EN<sub>test</sub> | 81.0 | 73.9 | 83.4 | 86.8 |
  | MMBench-CN<sub>test</sub> | 80.2 | 73.8 | 82.0 | 86.5 |
  | CCBench<sub>dev</sub> | 57.3 | 28.4 | 73.5 | 80.6 |
@@ -78,9 +79,7 @@ InternVL 2.0 is a multimodal large language model series, featuring models of va
 
  - For more details and evaluation reproduction, please refer to our [Evaluation Guide](https://internvl.readthedocs.io/en/latest/internvl2.0/evaluation.html).
 
- - We simultaneously use [InternVL](https://github.com/OpenGVLab/InternVL) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) repositories for model evaluation. Specifically, the results reported for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet, and SEED-Image were tested using the InternVL repository. OCRBench, RealWorldQA, HallBench, and MathVista were evaluated using the VLMEvalKit.
-
- - For MMMU, we report both the original scores (left side: evaluated using the InternVL codebase for InternVL series models, and sourced from technical reports or webpages for other models) and the VLMEvalKit scores (right side: collected from the OpenCompass leaderboard).
+ - We simultaneously use [InternVL](https://github.com/OpenGVLab/InternVL) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) repositories for model evaluation. Specifically, the results reported for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet (GPT-4-0613), and SEED-Image were tested using the InternVL repository. MMMU, OCRBench, RealWorldQA, HallBench, MMVet (GPT-4-Turbo), and MathVista were evaluated using the VLMEvalKit.
 
  - Please note that evaluating the same model using different testing toolkits like [InternVL](https://github.com/OpenGVLab/InternVL) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) can result in slight differences, which is normal. Updates to code versions and variations in environment and hardware can also cause minor discrepancies in results.
 
@@ -130,7 +129,7 @@ We provide an example code to run InternVL2-40B using `transformers`.
 
  We also welcome you to experience the InternVL2 series models in our [online demo](https://internvl.opengvlab.com/).
 
- > Please use transformers==4.37.2 to ensure the model works normally.
+ > Please use transformers>=4.37.2 to ensure the model works normally.
 
  ### Model Loading
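The loading code itself sits outside this hunk; as a minimal sketch, the checkpoint can be loaded in bfloat16 with standard `transformers` APIs (the `device_map='auto'` placement below is an illustrative choice that requires `accelerate`, not something this diff prescribes):

```python
import torch
from transformers import AutoModel, AutoTokenizer

path = 'OpenGVLab/InternVL2-40B'
# trust_remote_code is required because the modeling files live in this repository.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    device_map='auto',
    trust_remote_code=True,
).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
```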
 
@@ -462,7 +461,7 @@ response, history = model.chat(tokenizer, pixel_values, question, generation_con
  print(f'User: {question}\nAssistant: {response}')
  ```
 
- #### Streaming output
+ #### Streaming Output
 
  Besides this method, you can also use the following code to get streamed output.
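A minimal sketch of the idea, reusing `model`, `tokenizer`, `pixel_values`, `question`, and `generation_config` from the quick-start code above, and assuming `model.chat` accepts the `history`/`return_history` keywords and forwards a `streamer` supplied through `generation_config` to `generate`:

```python
from threading import Thread
from transformers import TextIteratorStreamer

# Stream decoded text as it is generated instead of waiting for the full reply.
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True, timeout=10)
generation_config = dict(max_new_tokens=1024, do_sample=False, streamer=streamer)

# Run chat in a background thread so the main thread can consume the stream.
thread = Thread(target=model.chat, kwargs=dict(
    tokenizer=tokenizer, pixel_values=pixel_values, question=question,
    history=None, return_history=False, generation_config=generation_config))
thread.start()

generated_text = ''
for new_text in streamer:
    generated_text += new_text
    print(new_text, end='', flush=True)
```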
 
@@ -502,12 +501,12 @@ Many repositories now support fine-tuning of the InternVL series models, includi
  LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by the MMRazor and MMDeploy teams.
 
  ```sh
- pip install lmdeploy==0.5.3
+ pip install lmdeploy>=0.5.3
  ```
 
  LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
 
- #### A 'Hello, world' example
+ #### A 'Hello, world' Example
 
  ```python
  from lmdeploy import pipeline, TurbomindEngineConfig
@@ -522,7 +521,7 @@ print(response.text)
 
  If `ImportError` occurs while executing this case, please install the required dependency packages as prompted.
 
- #### Multi-images inference
+ #### Multi-images Inference
 
  When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
 
@@ -547,7 +546,7 @@ response = pipe((f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe thes
  print(response.text)
  ```
 
- #### Batch prompts inference
+ #### Batch Prompts Inference
 
  Conducting inference with batch prompts is quite straightforward; just place them within a list structure:
 
@@ -567,7 +566,7 @@ response = pipe(prompts)
  print(response)
  ```
 
- #### Multi-turn conversation
+ #### Multi-turn Conversation
 
  There are two ways to do the multi-turn conversations with the pipeline. One is to construct messages according to the format of OpenAI and use above introduced method, the other is to use the `pipeline.chat` interface.
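For the first of these, a minimal sketch with OpenAI-format messages passed to the same `pipe` object created earlier (the image URL and sampling settings are placeholders):

```python
from lmdeploy import GenerationConfig

# First turn: a user message mixing text and an image, in OpenAI format.
messages = [dict(role='user', content=[
    dict(type='text', text='describe this image'),
    dict(type='image_url', image_url=dict(url='https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')),
])]
gen_config = GenerationConfig(top_k=40, top_p=0.8, temperature=0.8)
response = pipe(messages, gen_config=gen_config)
print(response.text)

# Second turn: append the assistant reply and ask a follow-up question.
messages.append(dict(role='assistant', content=response.text))
messages.append(dict(role='user', content='What is the woman doing?'))
response = pipe(messages, gen_config=gen_config)
print(response.text)
```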
 
@@ -637,271 +636,12 @@ This project is released under the MIT license, while InternLM2 is licensed unde
  If you find this project useful in your research, please consider citing:
 
  ```BibTeX
- @article{chen2023internvl,
- title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
- author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
- journal={arXiv preprint arXiv:2312.14238},
- year={2023}
- }
- @article{chen2024far,
- title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
- author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
- journal={arXiv preprint arXiv:2404.16821},
+ @article{gao2024mini,
+ title={Mini-internvl: A flexible-transfer pocket multimodal model with 5\% parameters and 90\% performance},
+ author={Gao, Zhangwei and Chen, Zhe and Cui, Erfei and Ren, Yiming and Wang, Weiyun and Zhu, Jinguo and Tian, Hao and Ye, Shenglong and He, Junjun and Zhu, Xizhou and others},
+ journal={arXiv preprint arXiv:2410.16261},
  year={2024}
  }
- ```
-
- ## Introduction
-
- We are excited to announce the release of InternVL 2.0, the latest version in the InternVL series of multimodal large language models. InternVL 2.0 features a variety of **instruction-tuned** models, ranging from 1 billion to 108 billion parameters. This repository contains the instruction-tuned InternVL2-40B model.
-
- Compared with state-of-the-art open-source multimodal large language models, InternVL 2.0 surpasses most open-source models. It demonstrates competitive performance on par with proprietary commercial models across a variety of capabilities, including document and chart comprehension, infographics QA, scene text understanding and OCR tasks, scientific and mathematical problem solving, as well as cultural understanding and integrated multimodal capabilities.
-
- InternVL 2.0 is trained with an 8k context window and with training data consisting of long texts, multiple images, and videos, which significantly improves its ability to handle these types of inputs compared to InternVL 1.5. For more details, please refer to our blog and GitHub.
-
- | Model Name | Vision Part | Language Part | HF Link | MS Link |
- | :------------------: | :---------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------: | :--------------------------------------------------------------: | :--------------------------------------------------------------------: |
- | InternVL2-1B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-1B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-1B) |
- | InternVL2-2B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-2B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-2B) |
- | InternVL2-4B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-4B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-4B) |
- | InternVL2-8B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2_5-7b-chat](https://huggingface.co/internlm/internlm2_5-7b-chat) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-8B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-8B) |
- | InternVL2-26B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [internlm2-chat-20b](https://huggingface.co/internlm/internlm2-chat-20b) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-26B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-26B) |
- | InternVL2-40B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-40B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-40B) |
- | InternVL2-Llama3-76B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-Llama3-76B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-Llama3-76B) |
-
- ## Model Details
-
- InternVL 2.0 is a multimodal large language model series with models of various sizes. For each size, we release instruction-tuned models optimized for multimodal tasks. InternVL2-40B consists of [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5), an MLP projector, and [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B).
-
- ## Performance
-
- ### Image Benchmarks
-
- | Benchmark | GPT-4T-20240409 | Gemini-1.5-Pro | InternVL2-26B | InternVL2-40B |
- | :--------------------------: | :-------------: | :------------: | :-----------: | :-----------: |
- | Model Size | - | - | 25.5B | 40B |
- | | | | | |
- | DocVQA<sub>test</sub> | 87.2 | 86.5 | 92.9 | 93.9 |
- | ChartQA<sub>test</sub> | 78.1 | 81.3 | 84.9 | 86.2 |
- | InfoVQA<sub>test</sub> | - | 72.7 | 75.9 | 78.7 |
- | TextVQA<sub>val</sub> | - | 73.5 | 82.3 | 83.0 |
- | OCRBench | 678 | 754 | 825 | 837 |
- | MME<sub>sum</sub> | 2070.2 | 2110.6 | 2260.7 | 2315.0 |
- | RealWorldQA | 68.0 | 67.5 | 68.3 | 71.8 |
- | AI2D<sub>test</sub> | 89.4 | 80.3 | 84.5 | 87.1 |
- | MMMU<sub>val</sub> | 63.1 / 61.7 | 58.5 / 60.6 | 48.3 / 51.2 | 53.9 / 55.2 |
- | MMBench-EN<sub>test</sub> | 81.0 | 73.9 | 83.4 | 86.8 |
- | MMBench-CN<sub>test</sub> | 80.2 | 73.8 | 82.0 | 86.5 |
- | CCBench<sub>dev</sub> | 57.3 | 28.4 | 73.5 | 80.6 |
- | MMVet<sub>GPT-4-0613</sub> | - | - | 64.2 | 68.5 |
- | MMVet<sub>GPT-4-Turbo</sub> | 67.5 | 64.0 | 62.1 | 65.5 |
- | SEED-Image | - | - | 76.8 | 78.2 |
- | HallBench<sub>avg</sub> | 43.9 | 45.6 | 50.7 | 56.9 |
- | MathVista<sub>testmini</sub> | 58.1 | 57.7 | 59.4 | 63.7 |
- | OpenCompass<sub>avg</sub> | 63.5 | 64.4 | 66.4 | 69.7 |
-
- - For more details and evaluation reproduction, please refer to our [Evaluation Guide](https://internvl.readthedocs.io/en/latest/internvl2.0/evaluation.html).
-
- - We simultaneously use the InternVL and VLMEvalKit repositories for model evaluation. Specifically, the results reported for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet, and SEED-Image were tested using the InternVL repository, while OCRBench, RealWorldQA, HallBench, and MathVista were evaluated using VLMEvalKit.
-
- - For MMMU, we report both the original scores (left side: evaluated with the InternVL codebase for InternVL series models, and taken from technical reports or webpages for other models) and the VLMEvalKit scores (right side: collected from the OpenCompass leaderboard).
-
- - Please note that evaluating the same model with different testing toolkits (such as InternVL and VLMEvalKit) can lead to slight differences, which is normal. Updates to code versions and variations in environment and hardware can also cause minor discrepancies in results.
-
- ### Video Benchmarks
-
- | Benchmark | GPT-4V | VILA-1.5 | LLaVA-NeXT-Video | InternVL2-26B | InternVL2-40B |
- | :-------------------------: | :----: | :------: | :--------------: | :-----------: | :-----------: |
- | Model Size | - | 34B | 34B | 25.5B | 40B |
- | | | | | | |
- | MVBench | - | - | - | 67.5 | 72.5 |
- | MMBench-Video<sub>8f</sub> | 1.53 | - | - | 1.27 | 1.32 |
- | MMBench-Video<sub>16f</sub> | 1.68 | - | - | 1.41 | 1.45 |
- | Video-MME<br>w/o subs | 59.9 | 59.0 | 52.0 | 54.8 | 61.2 |
- | Video-MME<br>w subs | 63.3 | 59.4 | 54.9 | 57.1 | 62.4 |
-
- - We evaluate our models on MVBench and Video-MME by extracting 16 frames from each video, and each frame is resized to a 448x448 image.
-
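As an illustrative sketch of that frame-sampling scheme (the `decord` backend and the helper name are assumptions, not the repository's evaluation code):

```python
import numpy as np
from decord import VideoReader, cpu  # assumed video-decoding backend
from PIL import Image

def sample_video_frames(video_path, num_segments=16, input_size=448):
    # Pick num_segments uniformly spaced frames and resize each to input_size x input_size.
    vr = VideoReader(video_path, ctx=cpu(0))
    indices = np.linspace(0, len(vr) - 1, num_segments).round().astype(int)
    return [Image.fromarray(vr[int(i)].asnumpy()).resize((input_size, input_size)) for i in indices]
```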
725
- ### 定位相关评测
726
-
727
- | 模型 | avg. | RefCOCO<br>(val) | RefCOCO<br>(testA) | RefCOCO<br>(testB) | RefCOCO+<br>(val) | RefCOCO+<br>(testA) | RefCOCO+<br>(testB) | RefCOCO‑g<br>(val) | RefCOCO‑g<br>(test) |
728
- | :----------------------------: | :--: | :--------------: | :----------------: | :----------------: | :---------------: | :-----------------: | :-----------------: | :----------------: | :-----------------: |
729
- | UNINEXT-H<br>(Specialist SOTA) | 88.9 | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 |
730
- | | | | | | | | | | |
731
- | Mini-InternVL-<br>Chat-2B-V1-5 | 75.8 | 80.7 | 86.7 | 72.9 | 72.5 | 82.3 | 60.8 | 75.6 | 74.9 |
732
- | Mini-InternVL-<br>Chat-4B-V1-5 | 84.4 | 88.0 | 91.4 | 83.5 | 81.5 | 87.4 | 73.8 | 84.7 | 84.6 |
733
- | InternVL‑Chat‑V1‑5 | 88.8 | 91.4 | 93.7 | 87.1 | 87.0 | 92.3 | 80.9 | 88.5 | 89.3 |
734
- | | | | | | | | | | |
735
- | InternVL2‑1B | 79.9 | 83.6 | 88.7 | 79.8 | 76.0 | 83.6 | 67.7 | 80.2 | 79.9 |
736
- | InternVL2‑2B | 77.7 | 82.3 | 88.2 | 75.9 | 73.5 | 82.8 | 63.3 | 77.6 | 78.3 |
737
- | InternVL2‑4B | 84.4 | 88.5 | 91.2 | 83.9 | 81.2 | 87.2 | 73.8 | 84.6 | 84.6 |
738
- | InternVL2‑8B | 82.9 | 87.1 | 91.1 | 80.7 | 79.8 | 87.9 | 71.4 | 82.7 | 82.7 |
739
- | InternVL2‑26B | 88.5 | 91.2 | 93.3 | 87.4 | 86.8 | 91.0 | 81.2 | 88.5 | 88.6 |
740
- | InternVL2‑40B | 90.3 | 93.0 | 94.7 | 89.2 | 88.5 | 92.8 | 83.6 | 90.3 | 90.6 |
741
- | InternVL2-<br>Llama3‑76B | 90.0 | 92.2 | 94.8 | 88.4 | 88.8 | 93.1 | 82.8 | 89.5 | 90.3 |
742
-
743
- - 我们使用以下 Prompt 来评测 InternVL 的 Grounding 能力: `Please provide the bounding box coordinates of the region this sentence describes: <ref>{}</ref>`
744
-
745
- 限制:尽管在训练过程中我们非常注重模型的安全性,尽力促使模型输出符合伦理和法律要求的文本,但受限于模型大小以及概率生成范式,模型可能会产生各种不符合预期的输出,例如回复内容包含偏见、歧视等有害内容,请勿传播这些内容。由于传播不良信息导致的任何后果,本项目���承担责任。
746
-
747
- ### 邀请评测 InternVL
748
-
749
- 我们欢迎各位 MLLM benchmark 的开发者对我们的 InternVL1.5 以及 InternVL2 系列模型进行评测。如果需要在此处添加评测结果,请与我联系([wztxy89@163.com](mailto:wztxy89@163.com))。
750
-
751
- ## 快速启动
752
-
753
- 我们提供了一个示例代码,用于使用 `transformers` 运行 InternVL2-40B。
754
-
755
- 我们也欢迎你在我们的[在线demo](https://internvl.opengvlab.com/)中体验InternVL2的系列模型。
756
-
757
- > 请使用 transformers==4.37.2 以确保模型正常运行。
758
-
759
- 示例代码请[点击这里](#quick-start)。
760
-
761
- ## 微调
762
-
763
- 许多仓库现在都支持 InternVL 系列模型的微调,包括 [InternVL](https://github.com/OpenGVLab/InternVL)、[SWIFT](https://github.com/modelscope/ms-swift)、[XTurner](https://github.com/InternLM/xtuner) 等。请参阅它们的文档以获取更多微调细节。
764
-
765
- ## 部署
766
-
767
- ### LMDeploy
768
-
769
- LMDeploy 是由 MMRazor 和 MMDeploy 团队开发的用于压缩、部署和服务大语言模型(LLM)的工具包。
770
-
771
- ```sh
772
- pip install lmdeploy==0.5.3
773
- ```
774
-
775
- LMDeploy 将多模态视觉-语言模型(VLM)的复杂推理过程抽象为一个易于使用的管道,类似于大语言模型(LLM)的推理管道。
776
-
777
- #### 一个“你好,世界”示例
778
-
779
- ```python
780
- from lmdeploy import pipeline, TurbomindEngineConfig
781
- from lmdeploy.vl import load_image
782
-
783
- model = 'OpenGVLab/InternVL2-40B'
784
- image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
785
- pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=8192))
786
- response = pipe(('describe this image', image))
787
- print(response.text)
788
- ```
789
-
790
- 如果在执行此示例时出现 `ImportError`,请按照提示安装所需的依赖包。
791
-
792
- #### 多图像推理
793
-
794
- 在处理多张图像时,可以将它们全部放入一个列表中。请注意,多张图像会导致输入 token 数量增加,因此通常需要增加上下文窗口的大小。
795
-
796
- ```python
797
- from lmdeploy import pipeline, TurbomindEngineConfig
798
- from lmdeploy.vl import load_image
799
- from lmdeploy.vl.constants import IMAGE_TOKEN
800
-
801
- model = 'OpenGVLab/InternVL2-40B'
802
- pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=8192))
803
-
804
- image_urls=[
805
- 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
806
- 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg'
807
- ]
808
-
809
- images = [load_image(img_url) for img_url in image_urls]
810
- # Numbering images improves multi-image conversations
811
- response = pipe((f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe these two images', images))
812
- print(response.text)
813
- ```
814
-
815
- #### 批量Prompt推理
816
-
817
- 使用批量Prompt进行推理非常简单;只需将它们放在一个列表结构中:
818
-
819
- ```python
820
- from lmdeploy import pipeline, TurbomindEngineConfig
821
- from lmdeploy.vl import load_image
822
-
823
- model = 'OpenGVLab/InternVL2-40B'
824
- pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=8192))
825
-
826
- image_urls=[
827
- "https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg",
828
- "https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg"
829
- ]
830
- prompts = [('describe this image', load_image(img_url)) for img_url in image_urls]
831
- response = pipe(prompts)
832
- print(response)
833
- ```
834
-
835
- #### 多轮对话
836
-
837
- 使用管道进行多轮对话有两种方法。一种是根据 OpenAI 的格式构建消息并使用上述方法,另一种是使用 `pipeline.chat` 接口。
838
-
839
- ```python
840
- from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig
841
- from lmdeploy.vl import load_image
842
-
843
- model = 'OpenGVLab/InternVL2-40B'
844
- pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=8192))
845
-
846
- image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')
847
- gen_config = GenerationConfig(top_k=40, top_p=0.8, temperature=0.8)
848
- sess = pipe.chat(('describe this image', image), gen_config=gen_config)
849
- print(sess.response.text)
850
- sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config)
851
- print(sess.response.text)
852
- ```
853
-
854
- #### API部署
855
-
856
- LMDeploy 的 `api_server` 使模型能够通过一个命令轻松打包成服务。提供的 RESTful API 与 OpenAI 的接口兼容。以下是服务启动的示例:
857
-
858
- ```shell
859
- lmdeploy serve api_server OpenGVLab/InternVL2-40B --backend turbomind --server-port 23333
860
- ```
861
-
862
- 为了使用OpenAI风格的API接口,您需要安装OpenAI:
863
-
864
- ```shell
865
- pip install openai
866
- ```
867
-
868
- 然后,使用下面的代码进行API调用:
869
-
870
- ```python
871
- from openai import OpenAI
872
-
873
- client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
874
- model_name = client.models.list().data[0].id
875
- response = client.chat.completions.create(
876
- model=model_name,
877
- messages=[{
878
- 'role':
879
- 'user',
880
- 'content': [{
881
- 'type': 'text',
882
- 'text': 'describe this image',
883
- }, {
884
- 'type': 'image_url',
885
- 'image_url': {
886
- 'url':
887
- 'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
888
- },
889
- }],
890
- }],
891
- temperature=0.8,
892
- top_p=0.8)
893
- print(response)
894
- ```
895
-
896
- ## 开源许可证
897
-
898
- 该项目采用 MIT 许可证发布,而 InternLM2 则采用 Apache-2.0 许可证。
899
-
900
- ## 引用
901
-
902
- 如果您发现此项目对您的研究有用,可以考虑引用我们的论文:
903
-
904
- ```BibTeX
905
  @article{chen2023internvl,
906
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
907
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
 
config.json CHANGED
@@ -17,6 +17,7 @@
  "architectures": [
  "LlamaForCausalLM"
  ],
+ "_attn_implementation": "flash_attention_2",
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bad_words_ids": null,
configuration_internvl_chat.py CHANGED
@@ -38,11 +38,11 @@ class InternVLChatConfig(PretrainedConfig):
  super().__init__(**kwargs)
 
  if vision_config is None:
- vision_config = {}
+ vision_config = {'architectures': ['InternVisionModel']}
  logger.info('vision_config is None. Initializing the InternVisionConfig with default values.')
 
  if llm_config is None:
- llm_config = {}
+ llm_config = {'architectures': ['LlamaForCausalLM']}
  logger.info('llm_config is None. Initializing the LlamaConfig config with default values (`LlamaConfig`).')
 
  self.vision_config = InternVisionConfig(**vision_config)
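A quick way to inspect the sub-config `architectures` fields touched by this fix (the `vision_config`/`llm_config` attribute names follow this file; `trust_remote_code` is required so `transformers` builds `InternVLChatConfig` from the repository code):

```python
from transformers import AutoConfig

# Build the composite InternVLChatConfig for this checkpoint via remote code.
cfg = AutoConfig.from_pretrained('OpenGVLab/InternVL2-40B', trust_remote_code=True)

# Both sub-configs now expose explicit architecture names.
print(cfg.vision_config.architectures)  # e.g. ['InternVisionModel']
print(cfg.llm_config.architectures)     # e.g. ['LlamaForCausalLM']
```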
modeling_intern_vit.py CHANGED
@@ -3,6 +3,7 @@
  # Copyright (c) 2024 OpenGVLab
  # Licensed under The MIT License [see LICENSE for details]
  # --------------------------------------------------------
+
  from typing import Optional, Tuple, Union
 
  import torch
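Together with the `_attn_implementation: flash_attention_2` default added in `config.json`, these edits target newer `transformers` releases (the commit message names 4.46+). If the `flash-attn` package is not installed, the attention backend can be overridden at load time; a hedged sketch using standard `transformers` keyword arguments rather than code from this repository:

```python
import torch
from transformers import AutoModel

# Fall back to the default attention kernels when flash-attn is not available.
model = AutoModel.from_pretrained(
    'OpenGVLab/InternVL2-40B',
    torch_dtype=torch.bfloat16,
    attn_implementation='eager',
    trust_remote_code=True,
)
```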