Zhiqiang007 committed
Commit 2a22130
1 Parent(s): 40a42fa

Update README.md

Files changed (1)
  1. README.md +17 -0
README.md CHANGED
@@ -16,3 +16,20 @@ Math-LLaVA-13B was trained in June 2024.
 
 **Paper or resources for more information:**
 [[Paper](http://arxiv.org/abs/2406.17294)] [[Code](https://github.com/HZQ950419/Math-LLaVA)]
+
+ ## License
+ Llama 2 is licensed under the LLAMA 2 Community License,
+ Copyright (c) Meta Platforms, Inc. All Rights Reserved.
+
+ ## Intended use
+ **Primary intended uses:**
+ The primary use of Math-LLaVA is research on multimodal large language models, multimodal reasoning and question answering.
+
+ **Primary intended users:**
+ The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
+
+ ## Training dataset
+ - MathV360K instruction-tuning data
+
+ ## Evaluation dataset
+ A collection of 3 benchmarks, including 2 multimodal mathematical reasoning benchmarks and 1 benchmark for multi-discipline multimodal reasoning.