Update README.md
README.md CHANGED

```diff
@@ -2,14 +2,13 @@
 license: apache-2.0
 datasets:
 - liuhaotian/LLaVA-CC3M-Pretrain-595K
-- liuhaotian/LLaVA-Instruct-150K
 library_name: transformers
 pipeline_tag: image-text-to-text
 ---
 
 # Model Card: LLaVA_MORE-llama_3_1-8B-pretrain
 
-```LLaVA-MORE``` enhances the well-known LLaVA architecture by integrating
+```LLaVA-MORE``` enhances the well-known LLaVA architecture by integrating the use of LLaMA 3.1 as the language model. We are publicly releasing the checkpoints for stages one and two for the first model with 8B parameters.
 
 In this model space, you will find the stage one (pretrain) weights of LLaVA-MORE LLaMA 3.1 8B.
 
```
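The card's metadata declares `library_name: transformers` and `pipeline_tag: image-text-to-text`. A minimal sketch of how such a checkpoint is typically consumed follows; the repo id and image URL below are placeholders (the source does not give the full repo path), and stage-one (pretrain) weights may need the project's own loading code rather than a stock pipeline:

```python
# Sketch only: MODEL_ID is a placeholder, not a confirmed repo path.
MODEL_ID = "LLaVA_MORE-llama_3_1-8B-pretrain"


def build_messages(image_url: str, question: str) -> list:
    """Chat-style input for an image-text-to-text pipeline:
    one user turn holding an image reference plus a text question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": question},
            ],
        }
    ]


if __name__ == "__main__":
    # Heavy download (8B parameters), so the model call is guarded here.
    from transformers import pipeline

    pipe = pipeline("image-text-to-text", model=MODEL_ID)
    result = pipe(
        text=build_messages("https://example.com/image.jpg", "What is shown here?")
    )
    print(result)
```

The message structure is the part worth noting: each content item is typed (`"image"` or `"text"`), so a single user turn can interleave an image with its question.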