---
license: mit
datasets:
- laion/laion2B-en
- laion/laion-coco
- laion/laion2B-multi
- kakaobrain/coyo-700m
- conceptual_captions
- wanng/wukong100m
pipeline_tag: image-feature-extraction
---

# Model Card for InternVL-14B-224px

<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/2yzk5wUY-obL6H4rKiHlU.webp" alt="Image Description" width="300" height="300">
</p>

[\[🆕 Blog\]](https://internvl.github.io/blog/)  [\[📜 InternVL 1.0 Paper\]](https://arxiv.org/abs/2312.14238)  [\[📜 InternVL 1.5 Report\]](https://arxiv.org/abs/2404.16821)  [\[🗨️ Chat Demo\]](https://internvl.opengvlab.com/)

[\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL)  [\[🚀 Quick Start\]](#model-usage)  [\[🌎 Community-hosted API\]](https://rapidapi.com/adushar1320/api/internvl-chat)  [\[📖 中文解读 (Chinese write-up)\]](https://zhuanlan.zhihu.com/p/675877376)

| Model                   | Date       | Download                                                               | Note                             |
| ----------------------- | ---------- | ---------------------------------------------------------------------- | -------------------------------- |
| InternViT-6B-448px-V1-5 | 2024.04.20 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-5) | supports dynamic resolution, very strong OCR (🔥new) |
| InternViT-6B-448px-V1-2 | 2024.02.11 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2) | 448 resolution                   |
| InternViT-6B-448px-V1-0 | 2024.01.30 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0) | 448 resolution                   |
| InternViT-6B-224px      | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-224px)      | vision foundation model          |
| InternVL-14B-224px      | 2023.12.22 | 🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-14B-224px)      | vision-language foundation model |
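
To pre-fetch any checkpoint above instead of downloading it on first use, the repository can be mirrored locally with `huggingface_hub`; a minimal sketch (the repo ID is interchangeable with any row of the table):

```python
from huggingface_hub import snapshot_download

# Downloads weights, configs, and the custom modeling code that
# `trust_remote_code=True` loads later; returns the local cache path.
local_dir = snapshot_download(repo_id='OpenGVLab/InternVL-14B-224px')
print(local_dir)
```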


## Model Details
- **Model Type:** vision-language foundation model
- **Supported Tasks:** zero-shot image/video classification, image-text/video retrieval, image captioning
- **Model Stats:**
  - Params: 14B
  - Image size: 224 x 224
- **Pretrain Dataset:** LAION-en, LAION-COCO, COYO, CC12M, CC3M, SBU, Wukong, LAION-multi
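
To sanity-check the 224 x 224 input size, the bundled image processor can be applied to any image and the resulting tensor shape inspected; a minimal sketch, assuming a local example image:

```python
from PIL import Image
from transformers import CLIPImageProcessor

# The processor resizes and crops every input to the model's 224 x 224 resolution.
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternVL-14B-224px')
pixel_values = image_processor(
    images=Image.open('./examples/image1.jpg').convert('RGB'),
    return_tensors='pt').pixel_values
print(pixel_values.shape)  # expected: torch.Size([1, 3, 224, 224])
```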

## Zero-Shot Performance

See this [document](https://github.com/OpenGVLab/InternVL/tree/main/clip_benchmark#-evaluation-zero-shot-image-classification) for more details about the zero-shot evaluation.
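
For a rough local check, zero-shot classification needs nothing beyond the forward call documented under Model Usage below: encode one `summarize:`-prefixed prompt per class and take the softmax over the image-to-text logits. A minimal sketch reusing the `model`, `tokenizer`, and `image_processor` set up in that section; the label set and image path are placeholders:

```python
import torch
from PIL import Image

classes = ['red panda', 'giant panda', 'tiger']           # placeholder labels
texts = ['summarize:a photo of a ' + c for c in classes]  # required prefix

pixel_values = image_processor(
    images=Image.open('./examples/image1.jpg').convert('RGB'),
    return_tensors='pt').pixel_values.to(torch.bfloat16).cuda()
input_ids = tokenizer(texts, return_tensors='pt', max_length=80,
                      truncation=True, padding='max_length').input_ids.cuda()

with torch.no_grad():
    logits_per_image, _ = model(image=pixel_values, text=input_ids, mode='InternVL-C')
probs = logits_per_image.softmax(dim=-1)  # one probability per class label
print(classes[probs.argmax(dim=-1).item()])
```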

![zero-shot performance](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/KfsrXioPU77T48sRb60oL.png)

![zero-shot performance](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/q5UkfrEix6w3mnn_1w4ja.png)


## Model Usage

**Note: the text prefix `'summarize:'` and `tokenizer.pad_token_id = 0` are both required; omitting either leads to abnormal results.**

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
from transformers import AutoTokenizer


model = AutoModel.from_pretrained(
    'OpenGVLab/InternVL-14B-224px',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternVL-14B-224px')

tokenizer = AutoTokenizer.from_pretrained(
    'OpenGVLab/InternVL-14B-224px', use_fast=False, add_eos_token=True)
tokenizer.pad_token_id = 0  # set pad_token_id to 0

images = [
    Image.open('./examples/image1.jpg').convert('RGB'),
    Image.open('./examples/image2.jpg').convert('RGB'),
    Image.open('./examples/image3.jpg').convert('RGB')
]
prefix = 'summarize:'
texts = [
    prefix + 'a photo of a red panda',  # English
    prefix + '一张熊猫的照片',  # Chinese: "a photo of a panda"
    prefix + '二匹の猫の写真'  # Japanese: "a photo of two cats"
]

pixel_values = image_processor(images=images, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()
input_ids = tokenizer(texts, return_tensors='pt', max_length=80,
                      truncation=True, padding='max_length').input_ids.cuda()

# InternVL-C: contrastive image-text matching (CLIP-style)
logits_per_image, logits_per_text = model(
    image=pixel_values, text=input_ids, mode='InternVL-C')
probs = logits_per_image.softmax(dim=-1)
# tensor([[9.9609e-01, 5.2185e-03, 6.0070e-08],
#         [2.2949e-02, 9.7656e-01, 5.9903e-06],
#         [3.2932e-06, 7.4863e-05, 1.0000e+00]], device='cuda:0',
#        dtype=torch.bfloat16, grad_fn=<SoftmaxBackward0>)

# InternVL-G: matching with image features produced via the QLLaMA middleware
logits_per_image, logits_per_text = model(
    image=pixel_values, text=input_ids, mode='InternVL-G')
probs = logits_per_image.softmax(dim=-1)
# tensor([[9.9609e-01, 3.1738e-03, 3.6322e-08],
#         [8.6060e-03, 9.9219e-01, 2.8759e-06],
#         [1.7583e-06, 3.1233e-05, 1.0000e+00]], device='cuda:0',
#        dtype=torch.bfloat16, grad_fn=<SoftmaxBackward0>)

# please set add_eos_token to False for generation
tokenizer.add_eos_token = False
image = Image.open('./examples/image1.jpg').convert('RGB')
pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

tokenized = tokenizer("English caption:", return_tensors='pt')
pred = model.generate(
    pixel_values=pixel_values,
    input_ids=tokenized.input_ids.cuda(),
    attention_mask=tokenized.attention_mask.cuda(),
    num_beams=5,
    min_new_tokens=8,
)
caption = tokenizer.decode(pred[0].cpu(), skip_special_tokens=True).strip()
# English caption: a red panda sitting on top of a wooden platform
```
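
Since this checkpoint is tagged for image feature extraction, the same similarity logits can drive simple image-text retrieval. A minimal sketch reusing `model`, `image_processor`, `tokenizer`, `images`, and `texts` from the block above; note the captioning step disabled the EOS token, so it is re-enabled first:

```python
import torch

tokenizer.add_eos_token = True  # restore EOS for contrastive matching

pixel_values = image_processor(images=images, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()
input_ids = tokenizer(texts, return_tensors='pt', max_length=80,
                      truncation=True, padding='max_length').input_ids.cuda()

with torch.no_grad():
    logits_per_image, logits_per_text = model(
        image=pixel_values, text=input_ids, mode='InternVL-C')

# Image-to-text retrieval: the highest-scoring caption for each image.
for i, t in enumerate(logits_per_image.argmax(dim=-1).tolist()):
    print(f'image {i} -> {texts[t]}')

# Text-to-image retrieval: the highest-scoring image for each caption.
print(logits_per_text.argmax(dim=-1).tolist())
```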

## Citation

If you find this project useful in your research, please consider citing:

```BibTeX
@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}
@article{chen2024far,
  title={How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites},
  author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
  journal={arXiv preprint arXiv:2404.16821},
  year={2024}
}
```


## Acknowledgement

InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!