---
license: mit
datasets:
- Vi-VLM/Vista
language:
- vi
library_name: adapter-transformers
pipeline_tag: image-text-to-text
---

<p align="center">
  <div style="display: flex;text-align: center;">
    <div>
      <img src="https://firebasestorage.googleapis.com/v0/b/database-7ca5c.appspot.com/o/llm%2F68747470733a2f2f7331312e617831782e636f6d2f323032332f31322f32382f70697176444d562e706e67.png?alt=media&token=30a2470d-861e-4295-a7f4-da48231724cf" width="250" style="margin-bottom: 0.2;"/>
    </div>
    <div>
      <img src="https://firebasestorage.googleapis.com/v0/b/database-7ca5c.appspot.com/o/llm%2Flogo_qwen.jpg?alt=media&token=fd2cd557-2f45-4f94-86d3-a5e7c9eef630" width="600" style="margin-bottom: 1rem;"/>
    </div>
  </div>
</p>

<h1 align="center">MoE-LLaVA-Qwen1.5-1.8B×4-Top2: When Vision Meets a Small-scale Language Model and a Vietnamese Synthetic Dataset</h1>

# Introducing MoE-LLaVA-Qwen1.5-1.8B×4-Top2 for Vietnamese

We are excited to present MoE-LLaVA-Qwen1.5-1.8B×4-Top2, tailored for the Vietnamese language. This model is part of our ongoing effort to develop Vision Language Models (VLMs) for Vietnamese, a space that is still underserved and dominated by larger models (**~7B parameters**). Because each token is routed through only the top 2 of the 4 expert FFNs, the model activates approximately **2.2B** 🤗😎 parameters per forward pass, significantly reducing the memory footprint, and it can be quantized for local execution.

## Training Dataset

Our model is trained on the comprehensive [Vi-VLM/Vista dataset](https://huggingface.co/datasets/Vi-VLM/Vista), which includes around 700,000 Vietnamese vision-language samples curated by Gemini Pro. We employed various prompt engineering techniques, including:

- **Few-shot Learning**
- **Caption-based Prompting**
- **Image-based Prompting**

For the COCO dataset, we used LLaVA-style prompts to generate data; for the ShareGPT4V dataset, we applied translation prompts. A sketch of the caption-based approach follows.

### Techniques Used

- **MoE-LLaVA**: [PKU-YuanGroup/MoE-LLaVA](https://github.com/PKU-YuanGroup/MoE-LLaVA/tree/main) (see the inference sketch below)

## Evaluation

- Coming soon 🫡

## Bias, Risks, and Limitations

The training data may contain biases originating from its sources. Users should remain aware of these potential biases when using the dataset or the model.

## More Information

This model represents the first stage of a two-stage development process toward a larger model. Stay tuned for future developments.