---
pipeline_tag: text-generation
---

OmniLMM 12B

OmniLMM-12B is the most capable version of the OmniLMM series. The model is built on EVA02-5B and Zephyr-7B-β, connected with a perceiver resampler layer, and trained on multimodal data in a curriculum fashion. The model has three notable features:
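
For intuition, the sketch below shows how a perceiver-resampler-style connector compresses a variable number of vision-encoder patch features into a fixed set of visual tokens for the language model. This is an illustrative PyTorch sketch, not the released OmniLMM code; the hidden sizes, the number of query tokens, and the single cross-attention layer are assumptions.

import torch
import torch.nn as nn

class PerceiverResampler(nn.Module):
    # Illustrative connector: the real OmniLMM-12B module may differ in depth and dimensions.
    def __init__(self, vision_dim=1792, llm_dim=4096, num_queries=64, num_heads=8):
        super().__init__()
        # Learnable query tokens that become the visual tokens fed to the language model.
        self.queries = nn.Parameter(torch.randn(num_queries, llm_dim) * 0.02)
        # Project vision-encoder features (e.g., patch embeddings) into the LLM hidden size.
        self.proj = nn.Linear(vision_dim, llm_dim)
        # Cross-attention: queries attend to the projected image features.
        self.attn = nn.MultiheadAttention(llm_dim, num_heads, batch_first=True)

    def forward(self, image_feats):
        # image_feats: (batch, num_patches, vision_dim) from the vision encoder.
        kv = self.proj(image_feats)
        q = self.queries.unsqueeze(0).expand(image_feats.size(0), -1, -1)
        visual_tokens, _ = self.attn(q, kv, kv)
        return visual_tokens  # (batch, num_queries, llm_dim), prepended to the LLM input

# Example: compress 576 patch features into 64 visual tokens.
resampler = PerceiverResampler()
print(resampler(torch.randn(1, 576, 1792)).shape)  # torch.Size([1, 64, 4096])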

  • 🔥 Strong Performance.

    OmniLMM-12B achieves leading performance among models of comparable size, surpassing established LMMs on multiple benchmarks (including MME, MMBench, SEED-Bench, etc.). The model also supports OCR and possesses rich multimodal world knowledge.

  • 🏆 Trustworthy Behavior.

    LMMs are known to suffer from hallucination, often generating text that is not factually grounded in images (e.g., confidently describing objects that do not exist in the image). OmniLMM-12B is the first state-of-the-art open-source LMM aligned via multimodal RLHF for trustworthy behavior (using our recent RLHF-V technique), and it ranks #1 among open-source models on MMHal-Bench.

  • 🕹 Real-time Multimodal Interaction.

    We combine OmniLMM-12B and GPT-3.5 into a real-time multimodal interactive assistant. The assistant accepts video streams from the camera and speech streams from the microphone and emits speech output. While still preliminary, we find the model can replicate some of the fun cases shown in the Gemini demo video, without any video editing. A minimal sketch of such an interaction loop is given after the benchmark table below.

| Model | Size | MME | MMMU val | MMHal-Bench | SeedBench-I | LLaVA Bench W | MathVista | MMB dev (en) |
|---|---|---|---|---|---|---|---|---|
| GPT-4V† | - | 1409 | 56.8 | 3.53 / 70.8 | 71.6 | 93.1 | 47.8 | 75.1 |
| Qwen-VL-Plus† | - | 1681 | 45.2 | - | 65.7 | 73.7 | 36.0 | 66.2 |
| Yi-VL 6B | 6.7B | - | 39.1 | - | 66.1 | 39.9 | 28.0 | 68.2 |
| CogVLM | 17.4B | 1438 | 32.1 | 2.68 / 52.1 | 68.8 | 73.9 | 34.7 | 63.7 |
| Qwen-VL-Chat | 9.6B | 1488 | 35.9 | 2.93 / 59.4 | 64.8 | 67.7 | 33.8 | 60.6 |
| LLaVA 1.5 | 13.6B | 1531 | 36.4 | 2.71 / 51.0 | 68.1 | 64.6 | 26.4 | 68.2 |
| OmniLMM-12B | 11.6B | 1637 | 40.7 | 3.45 / 68.8 | 71.1 | 72.0 | 34.9 | 71.6 |

†: closed-source models
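
For illustration only, the following sketch outlines one possible per-turn loop for the real-time assistant described above. It is not the released demo code: grab_frame_as_base64, listen, and speak are hypothetical placeholders for a camera grabber, streaming speech recognition, and text-to-speech, and the role GPT-3.5 plays in the actual assistant (e.g., speech understanding and response planning) is not shown.

import json
from chat import OmniLMMChat, img2base64

chat_model = OmniLMMChat('openbmb/OmniLMM-12B')

def grab_frame_as_base64():
    # Hypothetical placeholder: encode the latest camera frame for the model (path is illustrative).
    return img2base64('./data/latest_frame.jpg')

def listen():
    # Hypothetical placeholder for streaming speech recognition on the microphone input.
    return "What am I holding in my hand?"

def speak(text):
    # Hypothetical placeholder for text-to-speech playback of the assistant's reply.
    print(text)

# One interaction turn; a real assistant would run this continuously on the incoming streams.
msgs = [{"role": "user", "content": listen()}]
inputs = {"image": grab_frame_as_base64(), "question": json.dumps(msgs)}
speak(chat_model.process(inputs))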

Demo

Click here to try out the Demo of OmniLMM-12B.

Install

  1. Clone this repository and navigate to the source folder
git clone https://github.com/OpenBMB/OmniLMM.git
cd OmniLMM
  2. Create a conda environment
conda create -n OmniLMM python=3.10 -y
conda activate OmniLMM
  3. Install dependencies
pip install -r requirements.txt

Inference

Multi-turn Conversation

Please refer to the following code to run OmniLMM.

OmniLMM-12B
import json

from chat import OmniLMMChat, img2base64

chat_model = OmniLMMChat('openbmb/OmniLMM-12B')

im_64 = img2base64('./data/COCO_test2015_000000262144.jpg')

# First round chat 
msgs = [{"role": "user", "content": "What are the people doing?"}]

inputs = {"image": im_64, "question": json.dumps(msgs)}
answer = chat_model.process(inputs)
print(answer)

# Second round chat 
# pass history context of multi-turn conversation
msgs.append({"role": "assistant", "content": answer})
msgs.append({"role": "user", "content": "Describe the image"})

inputs = {"image": im_64, "question": json.dumps(msgs)}
answer = chat_model.process(inputs)
print(answer)

We can obtain the following results:

"The people in the image are playing baseball. One person is pitching a ball, another one is swinging a bat to hit it, and there's also an umpire present who appears to be watching the game closely."

"The image depicts a baseball game in progress. A pitcher is throwing the ball, while another player is swinging his bat to hit it. An umpire can be seen observing the play closely."