jjhsnail0822 committed
Commit c65e141 · verified · Parent(s): 9e243f2

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -33,7 +33,7 @@ We used the [LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT) technique to ad
 
 ## Training Datasets
 
-In the multi-modal pretrain stage, images filtered from LAION/CC/SBU dataset was used. For the visual instruction tuning stage, we prepared the training dataset from COCO, GQA, Visual Genome datasets and EKVQA dataset from AI-Hub. About 90GB of compressed image data was used for the whole training process.
+In the multi-modal pretrain stage, images filtered from LAION/CC/SBU dataset were used. For the visual instruction tuning stage, we prepared the training dataset from COCO, GQA, Visual Genome datasets and EKVQA dataset from AI-Hub. About 90GB of compressed image data was used for the whole training process.
 
 ## Chat Template