---
license: mit
datasets:
  - laion/laion2B-en
  - laion/laion-coco
  - laion/laion2B-multi
  - kakaobrain/coyo-700m
  - conceptual_captions
  - wanng/wukong100m
---

Model Card for InternVL-Chat-Chinese-V1.1

What is InternVL?

[Paper] [GitHub] [Demo]

InternVL scales up the vision transformer (ViT) to 6B parameters and aligns it with an LLM.

It is the largest open-source vision/vision-language foundation model (14B) to date, achieving state-of-the-art performance on 32 benchmarks covering tasks such as visual perception, cross-modal retrieval, and multimodal dialogue.


Model Details

  • Model Type: multimodal chatbot

  • Model Stats:

    • Architecture: InternViT-6B + MLP + LLaMA2-13B
    • Params: 19B
    • Image size: 448 x 448
    • Number of visual tokens: 256
  • Training Strategy:

    • Pretraining Stage
      • Learnable Component: InternViT-6B + MLP
      • Data: Trained on 72M samples, including COYO, LAION, CC12M, CC3M, SBU, Wukong, GRIT, Objects365, OpenImages, and OCR data.
      • Note: In this stage, we load the pretrained weights of InternViT-6B-224px and interpolate its position embedding to match the 448 x 448 input resolution. Moreover, to reduce the number of visual tokens, we apply a pixel shuffle (space-to-depth) operation that reduces the 1024 tokens to 256; see the sketch after this list.
    • SFT Stage
      • Learnable Component: MLP + LLM
      • Data: A comprehensive collection of open-source SFT datasets, along with their Chinese translation versions, totaling approximately 10M samples.
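
The pixel-shuffle step mentioned above can be reconstructed roughly as follows. This is a minimal sketch, not the repository's actual code: the patch size (14), the InternViT-6B hidden width (3200), and the downscale factor (0.5) are assumptions inferred from the figures quoted in this card (448 x 448 input, 1024 -> 256 tokens).

```python
# Sketch only: a pixel-shuffle (space-to-depth) reduction of visual tokens, plus a
# bicubic interpolation of ViT position embeddings to a larger grid. Shapes assume a
# 448x448 input, patch size 14 (32x32 = 1024 tokens) and hidden width 3200.
import torch
import torch.nn.functional as F

def pixel_shuffle_tokens(x: torch.Tensor, scale: float = 0.5) -> torch.Tensor:
    """x: (batch, h, w, c) grid of visual tokens; returns (batch, h*scale, w*scale, c/scale^2)."""
    b, h, w, c = x.shape
    x = x.reshape(b, h, int(w * scale), int(c / scale))                         # fold columns into channels
    x = x.permute(0, 2, 1, 3)
    x = x.reshape(b, int(w * scale), int(h * scale), int(c / (scale * scale)))  # fold rows as well
    return x.permute(0, 2, 1, 3)

def interpolate_pos_embed(pos: torch.Tensor, new_grid: int) -> torch.Tensor:
    """pos: (1, old_grid**2, dim) position embeddings (class token excluded)."""
    _, n, dim = pos.shape
    old_grid = int(n ** 0.5)
    pos = pos.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    pos = F.interpolate(pos, size=(new_grid, new_grid), mode="bicubic", align_corners=False)
    return pos.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)

tokens = torch.randn(1, 32, 32, 3200)         # 1024 patch tokens from a 448x448 image
reduced = pixel_shuffle_tokens(tokens)        # -> (1, 16, 16, 12800): 256 tokens, 4x wider channels
pos_224 = torch.randn(1, 16 * 16, 3200)       # 224px-grid (16x16) position embeddings
pos_448 = interpolate_pos_embed(pos_224, 32)  # -> (1, 1024, 3200) for the 448px grid
print(reduced.shape, pos_448.shape)
```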

Model Usage

We will provide a minimal code example for running InternVL-Chat using only the transformers library.

TODO
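
Until the official snippet is added, here is a minimal sketch of how such a model is typically loaded with transformers. It assumes the repository ships custom modeling code (hence trust_remote_code=True), that a CLIP-style image processor resizing inputs to 448 x 448 is bundled, and that the custom class exposes a chat() helper; the checkpoint id, example image path, and exact chat() signature are assumptions, so check the repository files before relying on them.

```python
# Minimal sketch (not the official example): load InternVL-Chat with transformers and run
# one image-grounded question. trust_remote_code=True pulls the custom InternVL classes
# shipped in the model repo; the chat() call below assumes the interface those classes expose.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer, CLIPImageProcessor

path = "OpenGVLab/InternVL-Chat-V1-1"  # adjust to the actual checkpoint id

model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
image_processor = CLIPImageProcessor.from_pretrained(path)  # assumes a CLIP-style processor is bundled

# Preprocess a single image into 448x448 pixel values.
image = Image.open("example.jpg").convert("RGB")  # hypothetical local image path
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
pixel_values = pixel_values.to(torch.bfloat16).cuda()

# Single-turn chat; the signature of chat() is defined by the repo's custom code.
generation_config = dict(num_beams=1, max_new_tokens=512)
question = "Please describe the image in detail."
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(response)
```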

Examples

In this update, InternVL-Chat has improved support for Chinese and OCR.

As you can see, although some letters of "Lynyrd Skynyrd" in the image fall outside the camera frame and the "T" in "TOUR" is partially blocked, the model still recognizes the text correctly.


Evaluation

MultiModal Benchmark

| MME | MMB (dev/test) | MMB-CN (dev/test) | POPE | MMMU (val/test) | CMMMU (val/test) | TinyLVLM | LLaVA-Bench |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1672.3 / 341.1 | 76.6 / 75.4 | 71.5 / 70.1 | 87.2 | 39.1 / 35.3 | TODO | 344.5 | 76.3 |

Visual Question Answering

| VQAv2 (test) | OKVQA (val) | TextVQA (val) | VizWiz (val/test) | AI2D (test) | GQA (test) | SQA (test) |
| --- | --- | --- | --- | --- | --- | --- |
| 80.9 | 64.2 | 65.8 | 58.3 / 57.3 | 70.23 | 62.4 | 91.2 |

Image Captioning

| COCO (test) | Flickr30K (test) | NoCaps (val) |
| --- | --- | --- |
| 141.8 | 84.3 | 120.4 |

Citation

If you find this project useful in your research, please consider citing:

@article{chen2023internvl,
  title={InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks},
  author={Chen, Zhe and Wu, Jiannan and Wang, Wenhai and Su, Weijie and Chen, Guo and Xing, Sen and Zhong, Muyan and Zhang, Qinglong and Zhu, Xizhou and Lu, Lewei and Li, Bin and Luo, Ping and Lu, Tong and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2312.14238},
  year={2023}
}

License

This project is released under the MIT license. Parts of this project contain code and models (e.g., LLaMA2) from other sources, which are subject to their respective licenses.

Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.

Acknowledgement

InternVL is built with reference to the code of the following projects: OpenAI CLIP, Open CLIP, CLIP Benchmark, EVA, InternImage, ViT-Adapter, MMSegmentation, Transformers, DINOv2, BLIP-2, Qwen-VL, and LLaVA-1.5. Thanks for their awesome work!