Commit by jjhsnail0822: Update README.md
We used the [LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT) technique to ad…
In the multi-modal pretraining stage, images filtered from the LAION/CC/SBU dataset were used. For the visual instruction tuning stage, we prepared the training dataset from the COCO, GQA, and Visual Genome datasets, together with the EKVQA dataset from AI-Hub. About 90 GB of compressed image data was used for the whole training process.
## Chat Template
```
A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.###Human: 질문내용###Assistant: 답변내용
```

In the template, the Korean placeholders 질문내용 and 답변내용 stand for the user's question and the assistant's answer, respectively.
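As a minimal sketch, the template above can be filled in with plain string formatting before passing the prompt to the model. The `build_prompt` helper below is a hypothetical illustration, not part of this repository; it assumes single-turn use and that generation is expected to continue right after the trailing `###Assistant: ` marker.

```python
# Sketch of building a single-turn prompt in this model's chat template.
# The system preamble and the "###Human:" / "###Assistant:" separators are
# taken verbatim from the template above; build_prompt itself is hypothetical.

SYSTEM = (
    "A chat between a curious human and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the human's "
    "questions."
)

def build_prompt(question: str) -> str:
    """Return a prompt that ends where the assistant's answer should begin."""
    return f"{SYSTEM}###Human: {question}###Assistant: "

prompt = build_prompt("Describe this image.")
print(prompt)
```

When decoding, one would stop generation at the next `###` separator so the model does not continue the conversation on its own.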
## Model Benchmark
TBA