Yirany committed bc00fa1 (1 parent: 01d64d7)

Update README.md

Files changed (1): README.md +2 -0
README.md CHANGED
@@ -8,6 +8,8 @@ datasets:
 
 ## OmniLMM 12B
 
+[GitHub](https://github.com/OpenBMB/MiniCPM-V) | [Demo](http://120.92.209.146:8081/)
+
 **OmniLMM-12B** is currently the most capable version of OmniLMM. The model is built on EVA02-5B and Zephyr-7B-β, connected by a perceiver resampler layer, and trained on multimodal data in a curriculum fashion. The model has three notable features:
 
 - 🔥 **Strong Performance.**
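
The perceiver resampler mentioned in the README text connects the vision encoder (EVA02-5B) to the language model (Zephyr-7B-β) by letting a fixed set of learned latent queries cross-attend to a variable-length sequence of visual features. A minimal PyTorch sketch of that idea is below; all dimensions, layer counts, and names are illustrative assumptions, not OmniLMM-12B's actual configuration.

```python
import torch
import torch.nn as nn

class PerceiverResampler(nn.Module):
    """Sketch of a perceiver-resampler block: learned latent queries
    cross-attend to visual features, so any number of image patch tokens
    is compressed to a fixed number of tokens for the language model.
    Hyperparameters here are illustrative, not OmniLMM-12B's real config."""

    def __init__(self, dim=1024, num_latents=64, num_heads=8):
        super().__init__()
        # Fixed-size set of learned queries (the resampler's output "slots")
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, visual_feats):
        # visual_feats: (batch, seq_len, dim) from the vision encoder;
        # seq_len can vary, the output length is always num_latents.
        b = visual_feats.size(0)
        q = self.latents.unsqueeze(0).expand(b, -1, -1)
        attended, _ = self.cross_attn(q, visual_feats, visual_feats)
        x = self.norm(q + attended)
        return x + self.ffn(x)

feats = torch.randn(2, 257, 1024)   # e.g. ViT patch tokens for 2 images
out = PerceiverResampler()(feats)
print(out.shape)                    # torch.Size([2, 64, 1024])
```

However many patch tokens the encoder emits, the language model always receives the same small, fixed number of visual tokens, which keeps the multimodal sequence length bounded.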