czczup committed on
Commit
b5e11b0
•
1 Parent(s): 988ea31

fix compatibility issue for transformers 4.46+

Files changed (3)
  1. README.md +45 -317
  2. config.json +2 -1
  3. configuration_internvl_chat.py +2 -2
README.md CHANGED
@@ -5,6 +5,7 @@ library_name: transformers
5
  base_model:
6
  - OpenGVLab/InternViT-300M-448px
7
  - Qwen/Qwen2-0.5B-Instruct
 
8
  base_model_relation: merge
9
  language:
10
  - multilingual
@@ -19,13 +20,13 @@ tags:
19
 
20
  # InternVL2-1B
21
 
22
- [\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821) [\[📜 Mini-InternVL Report\]](https://arxiv.org/abs/2410.16261)
23
 
24
  [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/706547971) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
25
 
26
- [ๅˆ‡ๆข่‡ณไธญๆ–‡็‰ˆ](#็ฎ€ไป‹)
27
-
28
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/_mLpMwsav5eMeNcZdrIQl.png)
29
 
30
  ## Introduction
31
 
@@ -53,48 +54,46 @@ InternVL 2.0 is a multimodal large language model series, featuring models of va
53
 
54
  ### Image Benchmarks
55
 
56
- | Benchmark | PaliGemma-3B | Mini-InternVL-2B-1-5 | InternVL2-2B | InternVL2-1B |
57
- | :--------------------------: | :----------: | :------------------: | :----------: | :----------: |
58
- | Model Size | 2.9B | 2.2B | 2.2B | 0.9B |
59
- | | | | | |
60
- | DocVQA<sub>test</sub> | - | 85.0 | 86.9 | 81.7 |
61
- | ChartQA<sub>test</sub> | - | 74.8 | 76.2 | 72.9 |
62
- | InfoVQA<sub>test</sub> | - | 55.4 | 58.9 | 50.9 |
63
- | TextVQA<sub>val</sub> | 68.1 | 70.5 | 73.4 | 70.5 |
64
- | OCRBench | 614 | 654 | 784 | 754 |
65
- | MME<sub>sum</sub> | 1686.1 | 1901.5 | 1876.8 | 1794.4 |
66
- | RealWorldQA | 55.2 | 57.9 | 57.3 | 50.3 |
67
- | AI2D<sub>test</sub> | 68.3 | 69.8 | 74.1 | 64.1 |
68
- | MMMU<sub>val</sub> | 34.9 | 34.6 / 37.4 | 34.3 / 36.3 | 35.4 / 36.7 |
69
- | MMBench-EN<sub>test</sub> | 71.0 | 70.9 | 73.2 | 65.4 |
70
- | MMBench-CN<sub>test</sub> | 63.6 | 66.2 | 70.9 | 60.7 |
71
- | CCBench<sub>dev</sub> | 29.6 | 63.5 | 74.7 | 75.7 |
72
- | MMVet<sub>GPT-4-0613</sub> | - | 39.3 | 44.6 | 37.8 |
73
- | MMVet<sub>GPT-4-Turbo</sub> | 33.1 | 35.5 | 39.5 | 33.3 |
74
- | SEED-Image | 69.6 | 69.8 | 71.6 | 65.6 |
75
- | HallBench<sub>avg</sub> | 32.2 | 37.5 | 37.9 | 33.4 |
76
- | MathVista<sub>testmini</sub> | 28.7 | 41.1 | 46.3 | 37.7 |
77
- | OpenCompass<sub>avg</sub> | 46.6 | 49.8 | 54.0 | 48.3 |
78
 
79
  - For more details and evaluation reproduction, please refer to our [Evaluation Guide](https://internvl.readthedocs.io/en/latest/internvl2.0/evaluation.html).
80
 
81
- - We simultaneously use [InternVL](https://github.com/OpenGVLab/InternVL) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) repositories for model evaluation. Specifically, the results reported for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet, and SEED-Image were tested using the InternVL repository. OCRBench, RealWorldQA, HallBench, and MathVista were evaluated using the VLMEvalKit.
82
-
83
- - For MMMU, we report both the original scores (left side: evaluated using the InternVL codebase for InternVL series models, and sourced from technical reports or webpages for other models) and the VLMEvalKit scores (right side: collected from the OpenCompass leaderboard).
84
 
85
  - Please note that evaluating the same model using different testing toolkits like [InternVL](https://github.com/OpenGVLab/InternVL) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) can result in slight differences, which is normal. Updates to code versions and variations in environment and hardware can also cause minor discrepancies in results.
86
 
87
  ### Video Benchmarks
88
 
89
- | Benchmark | VideoChat2-Phi3 | Mini-InternVL-2B-1-5 | InternVL2-2B | InternVL2-1B |
90
- | :-------------------------: | :-------------: | :------------------: | :----------: | :----------: |
91
- | Model Size | 4B | 2.2B | 2.2B | 0.9B |
92
- | | | | | |
93
- | MVBench | 55.1 | 37.0 | 60.2 | 57.9 |
94
- | MMBench-Video<sub>8f</sub> | - | 0.99 | 0.97 | 0.95 |
95
- | MMBench-Video<sub>16f</sub> | - | 1.04 | 1.03 | 0.98 |
96
- | Video-MME<br>w/o subs | - | 42.9 | 45.0 | 42.6 |
97
- | Video-MME<br>w subs | - | 44.7 | 47.3 | 44.7 |
98
 
99
  - We evaluate our models on MVBench and Video-MME by extracting 16 frames from each video, and each frame is resized to a 448x448 image.
100
 
@@ -130,7 +129,7 @@ We provide an example code to run InternVL2-1B using `transformers`.
130
 
131
  We also welcome you to experience the InternVL2 series models in our [online demo](https://internvl.opengvlab.com/).
132
 
133
- > Please use transformers==4.37.2 to ensure the model works normally.
134
 
135
  ### Model Loading
136
 
@@ -442,7 +441,7 @@ response, history = model.chat(tokenizer, pixel_values, question, generation_con
442
  print(f'User: {question}\nAssistant: {response}')
443
  ```
444
 
445
- #### Streaming output
446
 
447
  Besides this method, you can also use the following code to get streamed output.
448
 
@@ -482,12 +481,12 @@ Many repositories now support fine-tuning of the InternVL series models, includi
482
  LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by the MMRazor and MMDeploy teams.
483
 
484
  ```sh
485
- pip install lmdeploy==0.5.3
486
  ```
487
 
488
  LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
489
 
490
- #### A 'Hello, world' example
491
 
492
  ```python
493
  from lmdeploy import pipeline, TurbomindEngineConfig
@@ -502,7 +501,7 @@ print(response.text)
502
 
503
  If an `ImportError` occurs while executing this example, please install the required dependency packages as prompted.
504
 
505
- #### Multi-images inference
506
 
507
  When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
508
 
@@ -527,7 +526,7 @@ response = pipe((f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe thes
527
  print(response.text)
528
  ```
529
 
530
- #### Batch prompts inference
531
 
532
  Conducting inference with batch prompts is quite straightforward; just place them within a list structure:
533
 
@@ -547,7 +546,7 @@ response = pipe(prompts)
547
  print(response)
548
  ```
549
 
550
- #### Multi-turn conversation
551
 
552
  There are two ways to do multi-turn conversations with the pipeline. One is to construct messages according to the OpenAI format and use the method introduced above; the other is to use the `pipeline.chat` interface.
553
 
@@ -610,7 +609,7 @@ print(response)
610
 
611
  ## License
612
 
613
- This project is released under the MIT license, while Qwen2 is licensed under the Tongyi Qianwen LICENSE.
614
 
615
  ## Citation
616
 
@@ -636,274 +635,3 @@ If you find this project useful in your research, please consider citing:
636
  year={2024}
637
  }
638
  ```
639
-
640
- ## Introduction
641
-
642
- We are excited to announce the release of InternVL 2.0, the latest addition to the InternVL series of multimodal large language models. InternVL 2.0 features a variety of **instruction-tuned** models, ranging from 1 billion to 108 billion parameters. This repository contains the instruction-tuned InternVL2-1B model.
643
-
644
- Compared to state-of-the-art open-source multimodal large language models, InternVL 2.0 surpasses most open-source models. It demonstrates competitive performance on par with proprietary commercial models across a wide range of capabilities, including document and chart comprehension, infographics QA, scene text understanding and OCR tasks, scientific and mathematical problem solving, as well as cultural understanding and integrated multimodal capabilities.
645
-
646
- InternVL 2.0 is trained with an 8k context window and with training data consisting of long texts, multiple images, and videos, significantly improving its ability to handle these types of inputs compared to InternVL 1.5. For more details, please refer to our blog and GitHub.
647
-
648
- | Model Name | Vision Part | Language Part | HF Link | MS Link |
649
- | :------------------: | :---------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------: | :--------------------------------------------------------------: | :--------------------------------------------------------------------: |
650
- | InternVL2-1B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-1B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-1B) |
651
- | InternVL2-2B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2-chat-1_8b](https://huggingface.co/internlm/internlm2-chat-1_8b) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-2B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-2B) |
652
- | InternVL2-4B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-4B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-4B) |
653
- | InternVL2-8B | [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px) | [internlm2_5-7b-chat](https://huggingface.co/internlm/internlm2_5-7b-chat) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-8B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-8B) |
654
- | InternVL2-26B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [internlm2-chat-20b](https://huggingface.co/internlm/internlm2-chat-20b) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-26B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-26B) |
655
- | InternVL2-40B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-40B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-40B) |
656
- | InternVL2-Llama3-76B | [InternViT-6B-448px-V1-5](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | [Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B) | [🤗 link](https://huggingface.co/OpenGVLab/InternVL2-Llama3-76B) | [🤖 link](https://modelscope.cn/models/OpenGVLab/InternVL2-Llama3-76B) |
657
-
658
- ## Model Details
659
-
660
- InternVL 2.0 is a multimodal large language model series, featuring models of various sizes. For each size, we release instruction-tuned models optimized for multimodal tasks. InternVL2-1B consists of [InternViT-300M-448px](https://huggingface.co/OpenGVLab/InternViT-300M-448px), an MLP projector, and [Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
661
-
662
- ## Performance
663
-
664
- ### Image Benchmarks
665
-
666
- | Benchmark | PaliGemma-3B | Mini-InternVL-2B-1-5 | InternVL2-2B | InternVL2-1B |
667
- | :--------------------------: | :----------: | :------------------: | :----------: | :----------: |
668
- | Model Size | 2.9B | 2.2B | 2.2B | 0.9B |
669
- | | | | | |
670
- | DocVQA<sub>test</sub> | - | 85.0 | 86.9 | 81.7 |
671
- | ChartQA<sub>test</sub> | - | 74.8 | 76.2 | 72.9 |
672
- | InfoVQA<sub>test</sub> | - | 55.4 | 58.9 | 50.9 |
673
- | TextVQA<sub>val</sub> | 68.1 | 70.5 | 73.4 | 70.5 |
674
- | OCRBench | 614 | 654 | 784 | 754 |
675
- | MME<sub>sum</sub> | 1686.1 | 1901.5 | 1876.8 | 1794.4 |
676
- | RealWorldQA | 55.2 | 57.9 | 57.3 | 50.3 |
677
- | AI2D<sub>test</sub> | 68.3 | 69.8 | 74.1 | 64.1 |
678
- | MMMU<sub>val</sub> | 34.9 | 34.6 / 37.4 | 34.3 / 36.3 | 35.4 / 36.7 |
679
- | MMBench-EN<sub>test</sub> | 71.0 | 70.9 | 73.2 | 65.4 |
680
- | MMBench-CN<sub>test</sub> | 63.6 | 66.2 | 70.9 | 60.7 |
681
- | CCBench<sub>dev</sub> | 29.6 | 63.5 | 74.7 | 75.7 |
682
- | MMVet<sub>GPT-4-0613</sub> | - | 39.3 | 44.6 | 37.8 |
683
- | MMVet<sub>GPT-4-Turbo</sub> | 33.1 | 35.5 | 39.5 | 37.3 |
684
- | SEED-Image | 69.6 | 69.8 | 71.6 | 65.6 |
685
- | HallBench<sub>avg</sub> | 32.2 | 37.5 | 37.9 | 33.4 |
686
- | MathVista<sub>testmini</sub> | 28.7 | 41.1 | 46.3 | 37.7 |
687
- | OpenCompass<sub>avg</sub> | 46.6 | 49.8 | 54.0 | 48.3 |
688
-
689
- For more details and evaluation reproduction, please refer to our [Evaluation Guide](https://internvl.readthedocs.io/en/latest/internvl2.0/evaluation.html).
690
-
691
- - ๆˆ‘ไปฌๅŒๆ—ถไฝฟ็”จ InternVL ๅ’Œ VLMEvalKit ไป“ๅบ“่ฟ›่กŒๆจกๅž‹่ฏ„ไผฐใ€‚ๅ…ทไฝ“ๆฅ่ฏด๏ผŒDocVQAใ€ChartQAใ€InfoVQAใ€TextVQAใ€MMEใ€AI2Dใ€MMBenchใ€CCBenchใ€MMVet ๅ’Œ SEED-Image ็š„็ป“ๆžœๆ˜ฏไฝฟ็”จ InternVL ไป“ๅบ“ๆต‹่ฏ•็š„ใ€‚OCRBenchใ€RealWorldQAใ€HallBench ๅ’Œ MathVista ๆ˜ฏไฝฟ็”จ VLMEvalKit ่ฟ›่กŒ่ฏ„ไผฐ็š„ใ€‚
692
-
693
- For MMMU, we report both the original scores (left side: evaluated using the InternVL codebase for InternVL series models, and sourced from technical reports or webpages for other models) and the VLMEvalKit scores (right side: collected from the OpenCompass leaderboard).
694
-
695
- Please note that evaluating the same model with different testing toolkits (such as InternVL and VLMEvalKit) can result in slight differences, which is normal. Updates to code versions and variations in environment and hardware can also cause minor discrepancies in results.
696
-
697
- ### Video Benchmarks
698
-
699
- | Benchmark | VideoChat2-Phi3 | Mini-InternVL-2B-1-5 | InternVL2-2B | InternVL2-1B |
700
- | :-------------------------: | :-------------: | :------------------: | :----------: | :----------: |
701
- | Model Size | 4B | 2.2B | 2.2B | 0.9B |
702
- | | | | | |
703
- | MVBench | 55.1 | 37.0 | 60.2 | 57.9 |
704
- | MMBench-Video<sub>8f</sub> | - | 0.99 | 0.97 | 0.95 |
705
- | MMBench-Video<sub>16f</sub> | - | 1.04 | 1.03 | 0.98 |
706
- | Video-MME<br>w/o subs | - | 42.9 | 45.0 | 42.6 |
707
- | Video-MME<br>w subs | - | 44.7 | 47.3 | 44.7 |
708
-
709
- We evaluate our models on MVBench and Video-MME by extracting 16 frames from each video, and each frame is resized to a 448x448 image.
710
-
711
- ### Grounding Benchmarks
712
-
713
- | Model | avg. | RefCOCO<br>(val) | RefCOCO<br>(testA) | RefCOCO<br>(testB) | RefCOCO+<br>(val) | RefCOCO+<br>(testA) | RefCOCO+<br>(testB) | RefCOCO-g<br>(val) | RefCOCO-g<br>(test) |
714
- | :----------------------------: | :--: | :--------------: | :----------------: | :----------------: | :---------------: | :-----------------: | :-----------------: | :----------------: | :-----------------: |
715
- | UNINEXT-H<br>(Specialist SOTA) | 88.9 | 92.6 | 94.3 | 91.5 | 85.2 | 89.6 | 79.8 | 88.7 | 89.4 |
716
- | | | | | | | | | | |
717
- | Mini-InternVL-<br>Chat-2B-V1-5 | 75.8 | 80.7 | 86.7 | 72.9 | 72.5 | 82.3 | 60.8 | 75.6 | 74.9 |
718
- | Mini-InternVL-<br>Chat-4B-V1-5 | 84.4 | 88.0 | 91.4 | 83.5 | 81.5 | 87.4 | 73.8 | 84.7 | 84.6 |
719
- | InternVL-Chat-V1-5 | 88.8 | 91.4 | 93.7 | 87.1 | 87.0 | 92.3 | 80.9 | 88.5 | 89.3 |
720
- | | | | | | | | | | |
721
- | InternVL2-1B | 79.9 | 83.6 | 88.7 | 79.8 | 76.0 | 83.6 | 67.7 | 80.2 | 79.9 |
722
- | InternVL2-2B | 77.7 | 82.3 | 88.2 | 75.9 | 73.5 | 82.8 | 63.3 | 77.6 | 78.3 |
723
- | InternVL2-4B | 84.4 | 88.5 | 91.2 | 83.9 | 81.2 | 87.2 | 73.8 | 84.6 | 84.6 |
724
- | InternVL2-8B | 82.9 | 87.1 | 91.1 | 80.7 | 79.8 | 87.9 | 71.4 | 82.7 | 82.7 |
725
- | InternVL2-26B | 88.5 | 91.2 | 93.3 | 87.4 | 86.8 | 91.0 | 81.2 | 88.5 | 88.6 |
726
- | InternVL2-40B | 90.3 | 93.0 | 94.7 | 89.2 | 88.5 | 92.8 | 83.6 | 90.3 | 90.6 |
727
- | InternVL2-<br>Llama3-76B | 90.0 | 92.2 | 94.8 | 88.4 | 88.8 | 93.1 | 82.8 | 89.5 | 90.3 |
728
-
729
- We use the following prompt to evaluate InternVL's grounding ability: `Please provide the bounding box coordinates of the region this sentence describes: <ref>{}</ref>`
730
-
731
- ้™ๅˆถ๏ผšๅฐฝ็ฎกๅœจ่ฎญ็ปƒ่ฟ‡็จ‹ไธญๆˆ‘ไปฌ้žๅธธๆณจ้‡ๆจกๅž‹็š„ๅฎ‰ๅ…จๆ€ง๏ผŒๅฐฝๅŠ›ไฟƒไฝฟๆจกๅž‹่พ“ๅ‡บ็ฌฆๅˆไผฆ็†ๅ’Œๆณ•ๅพ‹่ฆๆฑ‚็š„ๆ–‡ๆœฌ๏ผŒไฝ†ๅ—้™ไบŽๆจกๅž‹ๅคงๅฐไปฅๅŠๆฆ‚็Ž‡็”Ÿๆˆ่Œƒๅผ๏ผŒๆจกๅž‹ๅฏ่ƒฝไผšไบง็”Ÿๅ„็งไธ็ฌฆๅˆ้ข„ๆœŸ็š„่พ“ๅ‡บ๏ผŒไพ‹ๅฆ‚ๅ›žๅคๅ†…ๅฎนๅŒ…ๅซๅ่งใ€ๆญง่ง†็ญ‰ๆœ‰ๅฎณๅ†…ๅฎน๏ผŒ่ฏทๅ‹ฟไผ ๆ’ญ่ฟ™ไบ›ๅ†…ๅฎนใ€‚็”ฑไบŽไผ ๆ’ญไธ่‰ฏไฟกๆฏๅฏผ่‡ด็š„ไปปไฝ•ๅŽๆžœ๏ผŒๆœฌ้กน็›ฎไธๆ‰ฟๆ‹…่ดฃไปปใ€‚
732
-
733
- ### Invitation to Evaluate InternVL
734
-
735
- We welcome MLLM benchmark developers to evaluate our InternVL1.5 and InternVL2 series models. If you would like your evaluation results to be added here, please contact me at [wztxy89@163.com](mailto:wztxy89@163.com).
736
-
737
- ## Quick Start
738
-
739
- We provide an example code to run InternVL2-1B using `transformers`.
740
-
741
- We also welcome you to experience the InternVL2 series models in our [online demo](https://internvl.opengvlab.com/).
742
-
743
- > Please use transformers==4.37.2 to ensure the model works normally.
744
-
745
- For example code, please [click here](#quick-start).
746
-
747
- ## Finetune
748
-
749
- Many repositories now support fine-tuning of the InternVL series models, including [InternVL](https://github.com/OpenGVLab/InternVL), [SWIFT](https://github.com/modelscope/ms-swift), [XTurner](https://github.com/InternLM/xtuner), and others. Please refer to their documentation for more fine-tuning details.
750
-
751
- ## Deployment
752
-
753
- ### LMDeploy
754
-
755
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.
756
-
757
- ```sh
758
- pip install lmdeploy==0.5.3
759
- ```
760
-
761
- LMDeploy abstracts the complex inference process of multimodal Vision-Language Models (VLMs) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
762
-
763
- #### A 'Hello, world' Example
764
-
765
- ```python
766
- from lmdeploy import pipeline, TurbomindEngineConfig
767
- from lmdeploy.vl import load_image
768
-
769
- model = 'OpenGVLab/InternVL2-1B'
770
- image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
771
- pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=8192))
772
- response = pipe(('describe this image', image))
773
- print(response.text)
774
- ```
775
-
776
- If an `ImportError` occurs while executing this example, please install the required dependency packages as prompted.
777
-
778
- #### Multi-image Inference
779
-
780
- When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will increase the number of input tokens, so the size of the context window typically needs to be increased.
781
-
782
- ```python
783
- from lmdeploy import pipeline, TurbomindEngineConfig
784
- from lmdeploy.vl import load_image
785
- from lmdeploy.vl.constants import IMAGE_TOKEN
786
-
787
- model = 'OpenGVLab/InternVL2-1B'
788
- pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=8192))
789
-
790
- image_urls=[
791
- 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg',
792
- 'https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg'
793
- ]
794
-
795
- images = [load_image(img_url) for img_url in image_urls]
796
- # Numbering images improves multi-image conversations
797
- response = pipe((f'Image-1: {IMAGE_TOKEN}\nImage-2: {IMAGE_TOKEN}\ndescribe these two images', images))
798
- print(response.text)
799
- ```
800
-
801
- #### Batch Prompts Inference
802
-
803
- Conducting inference with batch prompts is straightforward; just place them within a list structure:
804
-
805
- ```python
806
- from lmdeploy import pipeline, TurbomindEngineConfig
807
- from lmdeploy.vl import load_image
808
-
809
- model = 'OpenGVLab/InternVL2-1B'
810
- pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=8192))
811
-
812
- image_urls=[
813
- "https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg",
814
- "https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/det.jpg"
815
- ]
816
- prompts = [('describe this image', load_image(img_url)) for img_url in image_urls]
817
- response = pipe(prompts)
818
- print(response)
819
- ```
820
-
821
- #### Multi-turn Conversation
822
-
823
- There are two ways to do multi-turn conversations with the pipeline. One is to construct messages according to the OpenAI format and use the method introduced above; the other is to use the `pipeline.chat` interface.
824
-
825
- ```python
826
- from lmdeploy import pipeline, TurbomindEngineConfig, GenerationConfig
827
- from lmdeploy.vl import load_image
828
-
829
- model = 'OpenGVLab/InternVL2-1B'
830
- pipe = pipeline(model, backend_config=TurbomindEngineConfig(session_len=8192))
831
-
832
- image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')
833
- gen_config = GenerationConfig(top_k=40, top_p=0.8, temperature=0.8)
834
- sess = pipe.chat(('describe this image', image), gen_config=gen_config)
835
- print(sess.response.text)
836
- sess = pipe.chat('What is the woman doing?', session=sess, gen_config=gen_config)
837
- print(sess.response.text)
838
- ```
839
-
840
- #### API Deployment
841
-
842
- LMDeploy's `api_server` enables models to be easily packed into services with a single command. The provided RESTful APIs are compatible with OpenAI's interfaces. Below is an example of starting the service:
843
-
844
- ```shell
845
- lmdeploy serve api_server OpenGVLab/InternVL2-1B --backend turbomind --server-port 23333
846
- ```
847
-
848
- To use the OpenAI-style API interface, you need to install OpenAI:
849
-
850
- ```shell
851
- pip install openai
852
- ```
853
-
854
- ็„ถๅŽ๏ผŒไฝฟ็”จไธ‹้ข็š„ไปฃ็ ่ฟ›่กŒAPI่ฐƒ็”จ:
855
-
856
- ```python
857
- from openai import OpenAI
858
-
859
- client = OpenAI(api_key='YOUR_API_KEY', base_url='http://0.0.0.0:23333/v1')
860
- model_name = client.models.list().data[0].id
861
- response = client.chat.completions.create(
862
- model=model_name,
863
- messages=[{
864
- 'role':
865
- 'user',
866
- 'content': [{
867
- 'type': 'text',
868
- 'text': 'describe this image',
869
- }, {
870
- 'type': 'image_url',
871
- 'image_url': {
872
- 'url':
873
- 'https://modelscope.oss-cn-beijing.aliyuncs.com/resource/tiger.jpeg',
874
- },
875
- }],
876
- }],
877
- temperature=0.8,
878
- top_p=0.8)
879
- print(response)
880
- ```
881
-
882
- ## License
883
-
884
- This project is released under the MIT license, while Qwen2 is licensed under the Tongyi Qianwen LICENSE.
885
-
886
- ## Citation
887
-
888
- If you find this project useful in your research, please consider citing:
889
-
890
- ```BibTeX
891
- @article{gao2024mini,
892
- title={Mini-internvl: A flexible-transfer pocket multimodal model with 5\% parameters and 90\% performance},
893
- author={Gao, Zhangwei and Chen, Zhe and Cui, Erfei and Ren, Yiming and Wang, Weiyun and Zhu, Jinguo and Tian, Hao and Ye, Shenglong and He, Junjun and Zhu, Xizhou and others},
894
- journal={arXiv preprint arXiv:2410.16261},
895
- year={2024}
896
- }
897
- @article{chen2023internvl,
898
- title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
899
- author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
900
- journal={arXiv preprint arXiv:2312.14238},
901
- year={2023}
902
- }
903
- @article{chen2024far,
904
- title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
905
- author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
906
- journal={arXiv preprint arXiv:2404.16821},
907
- year={2024}
908
- }
909
- ```
 
5
  base_model:
6
  - OpenGVLab/InternViT-300M-448px
7
  - Qwen/Qwen2-0.5B-Instruct
8
+ new_version: OpenGVLab/InternVL2_5-1B
9
  base_model_relation: merge
10
  language:
11
  - multilingual
 
20
 
21
  # InternVL2-1B
22
 
23
+ [\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[🆕 Blog\]](https://internvl.github.io/blog/) [\[📜 InternVL 1.0\]](https://arxiv.org/abs/2312.14238) [\[📜 InternVL 1.5\]](https://arxiv.org/abs/2404.16821) [\[📜 Mini-InternVL\]](https://arxiv.org/abs/2410.16261)
24
 
25
  [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 中文解读\]](https://zhuanlan.zhihu.com/p/706547971) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
26
 
27
+ <div align="center">
28
+ <img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
29
+ </div>
30
 
31
  ## Introduction
32
 
 
54
 
55
  ### Image Benchmarks
56
 
57
+ | Benchmark | PaliGemma-3B | Mini-InternVL-2B-1-5 | InternVL2-1B |
58
+ | :--------------------------: | :----------: | :------------------: | :----------: |
59
+ | Model Size | 2.9B | 2.2B | 0.9B |
60
+ | | | | |
61
+ | DocVQA<sub>test</sub> | - | 85.0 | 81.7 |
62
+ | ChartQA<sub>test</sub> | - | 74.8 | 72.9 |
63
+ | InfoVQA<sub>test</sub> | - | 55.4 | 50.9 |
64
+ | TextVQA<sub>val</sub> | 68.1 | 70.5 | 70.5 |
65
+ | OCRBench | 614 | 654 | 754 |
66
+ | MME<sub>sum</sub> | 1686.1 | 1901.5 | 1794.4 |
67
+ | RealWorldQA | 55.2 | 57.9 | 50.3 |
68
+ | AI2D<sub>test</sub> | 68.3 | 69.8 | 64.1 |
69
+ | MMMU<sub>val</sub> | 34.9 | 37.4 | 36.7 |
70
+ | MMBench-EN<sub>test</sub> | 71.0 | 70.9 | 65.4 |
71
+ | MMBench-CN<sub>test</sub> | 63.6 | 66.2 | 60.7 |
72
+ | CCBench<sub>dev</sub> | 29.6 | 63.5 | 75.7 |
73
+ | MMVet<sub>GPT-4-0613</sub> | - | 39.3 | 37.8 |
74
+ | MMVet<sub>GPT-4-Turbo</sub> | 33.1 | 35.5 | 33.3 |
75
+ | SEED-Image | 69.6 | 69.8 | 65.6 |
76
+ | HallBench<sub>avg</sub> | 32.2 | 37.5 | 33.4 |
77
+ | MathVista<sub>testmini</sub> | 28.7 | 41.1 | 37.7 |
78
+ | OpenCompass<sub>avg</sub> | 46.6 | 49.8 | 48.3 |
79
 
80
  - For more details and evaluation reproduction, please refer to our [Evaluation Guide](https://internvl.readthedocs.io/en/latest/internvl2.0/evaluation.html).
81
 
82
+ - We simultaneously use [InternVL](https://github.com/OpenGVLab/InternVL) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) repositories for model evaluation. Specifically, the results reported for DocVQA, ChartQA, InfoVQA, TextVQA, MME, AI2D, MMBench, CCBench, MMVet (GPT-4-0613), and SEED-Image were tested using the InternVL repository. MMMU, OCRBench, RealWorldQA, HallBench, MMVet (GPT-4-Turbo), and MathVista were evaluated using the VLMEvalKit.
 
 
83
 
84
  - Please note that evaluating the same model using different testing toolkits like [InternVL](https://github.com/OpenGVLab/InternVL) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) can result in slight differences, which is normal. Updates to code versions and variations in environment and hardware can also cause minor discrepancies in results.
85
 
86
  ### Video Benchmarks
87
 
88
+ | Benchmark | VideoChat2-Phi3 | Mini-InternVL-2B-1-5 | InternVL2-1B |
89
+ | :-------------------------: | :-------------: | :------------------: | :----------: |
90
+ | Model Size | 4B | 2.2B | 0.9B |
91
+ | | | | |
92
+ | MVBench | 55.1 | 37.0 | 57.5 |
93
+ | MMBench-Video<sub>8f</sub> | - | 0.99 | 0.95 |
94
+ | MMBench-Video<sub>16f</sub> | - | 1.04 | 0.98 |
95
+ | Video-MME<br>w/o subs | - | 42.9 | 42.6 |
96
+ | Video-MME<br>w subs | - | 44.7 | 44.7 |
97
 
98
  - We evaluate our models on MVBench and Video-MME by extracting 16 frames from each video, and each frame is resized to a 448x448 image.
99
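As a rough illustration of this sampling scheme, a minimal sketch (assuming OpenCV and NumPy are available; the official evaluation code in the InternVL repository may differ in details such as frame-index selection) could look like:

```python
# Sketch: uniformly sample 16 frames from a video and resize each to 448x448.
# Assumes OpenCV (cv2) and NumPy are installed; not the official evaluation code.
import cv2
import numpy as np

def sample_frames(video_path: str, num_frames: int = 16, size: int = 448):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Evenly spaced frame indices across the whole video.
    indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.resize(frame, (size, size)))
    cap.release()
    return frames  # up to 16 BGR arrays, each 448x448
```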
 
 
129
 
130
  We also welcome you to experience the InternVL2 series models in our [online demo](https://internvl.opengvlab.com/).
131
 
132
+ > Please use transformers>=4.37.2 to ensure the model works normally.
133
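As a quick sanity check before loading the model, a minimal sketch along these lines (the lower bound comes from the note above; `packaging` ships as a dependency of `transformers`) can confirm the installed version:

```python
# Sketch: verify the installed transformers version before loading InternVL2-1B.
from packaging import version
import transformers

if version.parse(transformers.__version__) < version.parse("4.37.2"):
    raise RuntimeError(
        f"transformers {transformers.__version__} found, but >= 4.37.2 is required"
    )
print(f"transformers {transformers.__version__} looks OK")
```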
 
134
  ### Model Loading
135
 
 
441
  print(f'User: {question}\nAssistant: {response}')
442
  ```
443
 
444
+ #### Streaming Output
445
 
446
  Besides this method, you can also use the following code to get streamed output.
447
 
 
481
  LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by the MMRazor and MMDeploy teams.
482
 
483
  ```sh
484
+ pip install "lmdeploy>=0.5.3"
485
  ```
486
 
487
  LMDeploy abstracts the complex inference process of multi-modal Vision-Language Models (VLM) into an easy-to-use pipeline, similar to the Large Language Model (LLM) inference pipeline.
488
 
489
+ #### A 'Hello, world' Example
490
 
491
  ```python
492
  from lmdeploy import pipeline, TurbomindEngineConfig
 
501
 
502
  If an `ImportError` occurs while executing this example, please install the required dependency packages as prompted.
503
 
504
+ #### Multi-image Inference
505
 
506
  When dealing with multiple images, you can put them all in one list. Keep in mind that multiple images will lead to a higher number of input tokens, and as a result, the size of the context window typically needs to be increased.
507
 
 
526
  print(response.text)
527
  ```
528
 
529
+ #### Batch Prompts Inference
530
 
531
  Conducting inference with batch prompts is quite straightforward; just place them within a list structure:
532
 
 
546
  print(response)
547
  ```
548
 
549
+ #### Multi-turn Conversation
550
 
551
  There are two ways to do multi-turn conversations with the pipeline. One is to construct messages according to the OpenAI format and use the method introduced above; the other is to use the `pipeline.chat` interface.
552
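For the first approach, a hedged sketch of what OpenAI-format messages passed to the pipeline might look like is shown below (the image URL and questions are placeholders reused from the examples above; see the LMDeploy documentation for the exact message schema):

```python
from lmdeploy import pipeline, TurbomindEngineConfig

pipe = pipeline('OpenGVLab/InternVL2-1B',
                backend_config=TurbomindEngineConfig(session_len=8192))

# First turn: a user message carrying both text and an image URL.
messages = [dict(role='user', content=[
    dict(type='text', text='describe this image'),
    dict(type='image_url', image_url=dict(
        url='https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/demo/resources/human-pose.jpg')),
])]
out = pipe(messages)

# Second turn: append the assistant reply and a follow-up question, then call again.
messages.append(dict(role='assistant', content=out.text))
messages.append(dict(role='user', content='What is the woman doing?'))
out = pipe(messages)
print(out.text)
```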
 
 
609
 
610
  ## License
611
 
612
+ This project is released under the MIT license, while Qwen2-0.5B is licensed under the Apache-2.0 license.
613
 
614
  ## Citation
615
 
 
635
  year={2024}
636
  }
637
  ```
 
config.json CHANGED
@@ -17,6 +17,7 @@
17
  "architectures": [
18
  "Qwen2ForCausalLM"
19
  ],
 
20
  "attention_dropout": 0.0,
21
  "bad_words_ids": null,
22
  "begin_suppress_tokens": null,
@@ -80,7 +81,7 @@
80
  "temperature": 1.0,
81
  "tf_legacy_loss": false,
82
  "tie_encoder_decoder": false,
83
- "tie_word_embeddings": true,
84
  "tokenizer_class": null,
85
  "top_k": 50,
86
  "top_p": 1.0,
 
17
  "architectures": [
18
  "Qwen2ForCausalLM"
19
  ],
20
+ "_attn_implementation": "flash_attention_2",
21
  "attention_dropout": 0.0,
22
  "bad_words_ids": null,
23
  "begin_suppress_tokens": null,
 
81
  "temperature": 1.0,
82
  "tf_legacy_loss": false,
83
  "tie_encoder_decoder": false,
84
+ "tie_word_embeddings": false,
85
  "tokenizer_class": null,
86
  "top_k": 50,
87
  "top_p": 1.0,
configuration_internvl_chat.py CHANGED
@@ -38,11 +38,11 @@ class InternVLChatConfig(PretrainedConfig):
38
  super().__init__(**kwargs)
39
 
40
  if vision_config is None:
41
- vision_config = {}
42
  logger.info('vision_config is None. Initializing the InternVisionConfig with default values.')
43
 
44
  if llm_config is None:
45
- llm_config = {}
46
  logger.info('llm_config is None. Initializing the LlamaConfig config with default values (`LlamaConfig`).')
47
 
48
  self.vision_config = InternVisionConfig(**vision_config)
 
38
  super().__init__(**kwargs)
39
 
40
  if vision_config is None:
41
+ vision_config = {'architectures': ['InternVisionModel']}
42
  logger.info('vision_config is None. Initializing the InternVisionConfig with default values.')
43
 
44
  if llm_config is None:
45
+ llm_config = {'architectures': ['Qwen2ForCausalLM']}
46
  logger.info('llm_config is None. Initializing the LlamaConfig config with default values (`LlamaConfig`).')
47
 
48
  self.vision_config = InternVisionConfig(**vision_config)
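For context, a small hypothetical check of what these new defaults imply is sketched below (attribute names follow the constructor shown above; the behavior of transformers 4.46+ is the motivation stated in the commit title, not something this snippet verifies):

```python
# Sketch: the released config.json already pins the architectures, so loading it
# should surface the same values that the new defaults fall back to.
from transformers import AutoConfig

config = AutoConfig.from_pretrained('OpenGVLab/InternVL2-1B', trust_remote_code=True)
print(config.vision_config.architectures)  # expected: ['InternVisionModel']
print(config.llm_config.architectures)     # expected: ['Qwen2ForCausalLM']
```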