jjhsnail0822
committed
Update README.md
README.md CHANGED
@@ -33,7 +33,7 @@ We used the [LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT) technique to ad
 
 ## Training Datasets
 
-In the multi-modal pretrain stage, images filtered from LAION/CC/SBU dataset
+In the multi-modal pretraining stage, images filtered from the LAION/CC/SBU dataset were used. For the visual instruction tuning stage, we prepared the training data from the COCO, GQA, and Visual Genome datasets, together with the EKVQA dataset from AI-Hub. About 90 GB of compressed image data was used for the whole training process.
 
 ## Chat Template
 
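As a rough illustration of the data-preparation step the new paragraph describes (combining instruction-tuning annotations from COCO, GQA, Visual Genome, and EKVQA), here is a minimal sketch. It assumes LLaVA-style JSON annotation lists with `image` and `conversations` fields; the file names are hypothetical placeholders and nothing below comes from this repository.

```python
# Minimal sketch: merge LLaVA-style annotation files into one training manifest.
# Assumption: each source file is a JSON list of records with an "image" path
# and a "conversations" list. All file names are hypothetical placeholders.
import json
from pathlib import Path

SOURCES = [
    "coco_instruct.json",   # COCO-based instruction data (hypothetical name)
    "gqa_instruct.json",    # GQA-based instruction data (hypothetical name)
    "vg_instruct.json",     # Visual Genome-based data (hypothetical name)
    "ekvqa_instruct.json",  # EKVQA (AI-Hub) data (hypothetical name)
]

def merge_annotations(paths, out_path="visual_instruct_train.json"):
    """Concatenate LLaVA-style annotation lists into a single manifest."""
    merged = []
    for p in paths:
        records = json.loads(Path(p).read_text(encoding="utf-8"))
        # Keep only records whose image file actually exists on disk.
        merged.extend(r for r in records if Path(r["image"]).exists())
    Path(out_path).write_text(
        json.dumps(merged, ensure_ascii=False, indent=2), encoding="utf-8"
    )
    return len(merged)

if __name__ == "__main__":
    print(f"merged {merge_annotations(SOURCES)} samples")
```

In a LLaVA-NeXT-style pipeline, a merged manifest like this would typically be passed to the instruction-tuning script as the training data path; the exact format used for this model is not specified in the diff.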