---
license: mit
datasets:
- laion/laion2B-en
- laion/laion-coco
- laion/laion2B-multi
- kakaobrain/coyo-700m
- conceptual_captions
- wanng/wukong100m
---

# Model Card for InternViT-6B-448px

## What is InternVL?

\[[Paper](https://arxiv.org/abs/2312.14238)\] \[[GitHub](https://github.com/OpenGVLab/InternVL)\]

InternVL scales the ViT up to _**6B parameters**_ and aligns it with a large language model.

It is _**the largest open-source vision/vision-language foundation model (14B)**_ to date, achieving _**32 state-of-the-art**_ results on a wide range of tasks such as visual perception, cross-modal retrieval, and multimodal dialogue.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/k5UATwX5W2b5KJBN5C58x.png)

## Model Details
- **Model Type:** feature backbone
- **Model Stats:**
  - Params (M): 5903
  - Image size: 448 x 448
- **Pretrain Dataset:** LAION-en, LAION-COCO, COYO, CC12M, CC3M, SBU, Wukong, LAION-multi
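
The parameter count above can be sanity-checked once the weights are downloaded. Below is a minimal sketch, assuming the model loads exactly as in the usage example that follows; summing `numel()` over all parameter tensors should land near 5903M.

```python
import torch
from transformers import AutoModel

# Load on CPU in bfloat16; no GPU is needed just to count parameters.
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True)

# Sum the element counts of every parameter tensor.
n_params = sum(p.numel() for p in model.parameters())
print(f'{n_params / 1e6:.0f}M parameters')  # expected: roughly 5903M
```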

## Model Usage (Image Embeddings)

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

# Load the 6B-parameter ViT backbone in bfloat16 on the GPU.
model = AutoModel.from_pretrained(
    'OpenGVLab/InternViT-6B-448px',
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True).cuda().eval()

image = Image.open('./examples/image1.jpg').convert('RGB')

# The processor resizes and normalizes inputs to the model's 448 x 448 resolution.
image_processor = CLIPImageProcessor.from_pretrained('OpenGVLab/InternViT-6B-448px')

pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

# No gradients are needed for feature extraction.
with torch.no_grad():
    outputs = model(pixel_values)
```
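
The snippet above stops at the raw model outputs. A minimal sketch for turning them into one embedding per image follows; it assumes the remote code returns a standard Transformers output object with a `pooler_output` field of shape `(batch, hidden)` (an assumption — inspect `outputs` if the custom model exposes different attributes). Unit-normalizing makes dot products directly usable as cosine similarities.

```python
import torch.nn.functional as F

# Pooled image embedding, shape (batch, hidden). `pooler_output` is an
# assumption based on standard Transformers output classes; check the
# fields of `outputs` if the model's remote code names them differently.
embedding = outputs.pooler_output.float()

# Unit-normalize so dot products between embeddings are cosine similarities.
embedding = F.normalize(embedding, dim=-1)

# For two images embedded this way, similarity would be:
# sim = (embedding_a * embedding_b).sum(dim=-1)
```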

## Citation

If you find this project useful in your research, please consider citing:

```BibTeX
@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}
```

## Acknowledgement

InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!