Commit 568fb5b (verified) by alanzhuly · Parent: 84dd548

Update README.md

Files changed (1): README.md (+4 −1)
README.md CHANGED
@@ -8,6 +8,9 @@ tags:
 ---
 # Omnivision
 
+## Latest Update
+- [Nov 21, 2024] We improved Omnivision-968M based on your feedback! 🚀 Test the preview in our [Hugging Face Space](https://huggingface.co/spaces/NexaAIDev/omnivlm-dpo-demo). The updated GGUF and **safetensors** will be released after final alignment tweaks.
+
 ## Introduction
 
 Omnivision is a compact, sub-billion (968M) multimodal model for processing both visual and text inputs, optimized for edge devices. Improved on LLaVA's architecture, it features:
@@ -16,7 +19,7 @@ Omnivision is a compact, sub-billion (968M) multimodal model for processing both
 - **Trustworthy Result**: Reduces hallucinations using **DPO** training from trustworthy data.
 
 **Quick Links:**
-1. Interactive Demo in our [Hugging Face Space](https://huggingface.co/spaces/NexaAIDev/omnivlm-dpo-demo).
+1. Interactive Demo in our [Hugging Face Space](https://huggingface.co/spaces/NexaAIDev/omnivlm-dpo-demo). (Updated 2024 Nov 21)
 2. [Quickstart for local setup](#how-to-use-on-device)
 3. Learn more in our [Blogs](https://nexa.ai/blogs/omni-vision)
 