---
license: apache-2.0
tags:
- image-text-to-text
- medical
- vision
---

# LLaVA-Med v1.5, using [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as the LLM for a more permissive commercial license

LLaVA-Med combines a pre-trained large language model with a pre-trained image encoder for biomedical multimodal chatbot use cases.
LLaVA-Med was proposed in [LLaVA-Med: Training a Large Language-and-Vision Assistant for Biomedicine in One Day](https://arxiv.org/abs/2306.00890) by Chunyuan Li, Cliff Wong, Sheng Zhang, Naoto Usuyama, Haotian Liu, Jianwei Yang, Tristan Naumann, Hoifung Poon, and Jianfeng Gao.

**Model date:**
LLaVA-Med-v1.5-Mistral-7B was trained in April 2024.

**Paper or resources for more information:**
https://aka.ms/llava-med

**Where to send questions or comments about the model:**
https://github.com/microsoft/LLaVA-Med/issues

## License
This model is released under the license of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) (Apache 2.0).

## Intended use
**Primary intended uses:**
The primary use of LLaVA-Med is biomedical research on large multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
- 500K filtered image-text pairs from PubMed.
- 60K GPT-generated multimodal instruction-following examples (an illustrative record sketch follows this list).

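The GPT-generated portion follows the conversation-style instruction format popularized by LLaVA. As a rough illustration, a single record might look like the sketch below; the field names and values here are assumptions based on the open LLaVA data convention, not a specification of the released LLaVA-Med files.

```python
# Illustrative only: one LLaVA-style instruction-following record as a Python dict.
# The keys ("id", "image", "conversations", "from", "value") follow the open LLaVA
# data convention; the released LLaVA-Med files may differ in detail.
record = {
    "id": "pmc_example_0001",                 # hypothetical identifier
    "image": "figures/pmc_example_0001.jpg",  # hypothetical image path
    "conversations": [
        {"from": "human", "value": "<image>\nWhat does this chest X-ray show?"},
        {"from": "gpt", "value": "The X-ray shows ..."},
    ],
}

# In the LLaVA convention, the "<image>" placeholder marks where the encoded
# image features are spliced into the text prompt.
print(record["conversations"][0]["value"])
```
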
## Evaluation dataset
[Medical Visual Chat](https://github.com/microsoft/LLaVA-Med?tab=readme-ov-file#medical-visual-chat-gpt-assisted-evaluation)

### How to use
See [Serving](https://github.com/microsoft/LLaVA-Med?tab=readme-ov-file#serving) and [Evaluation](https://github.com/microsoft/LLaVA-Med?tab=readme-ov-file#evaluation). A minimal quick-start sketch follows.
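As quick-start context, here is a minimal sketch of fetching the checkpoint and handing it to the repo's serving stack. It assumes the weights are hosted at `microsoft/llava-med-v1.5-mistral-7b` and that the LLaVA-Med codebase is installed per its README; treat the worker commands in the comments as paraphrased placeholders and defer to the Serving instructions linked above.

```python
# Minimal sketch: download the checkpoint locally, then serve it with the
# LLaVA-Med codebase. Assumes `pip install huggingface_hub` and that the
# LLaVA-Med repo (https://github.com/microsoft/LLaVA-Med) is installed.
# The repo id below is an assumption about where the weights are hosted.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="microsoft/llava-med-v1.5-mistral-7b")
print(f"Checkpoint downloaded to: {local_dir}")

# From here, the repo's Serving section launches a controller, a model worker
# pointed at this checkpoint, and a web UI (commands paraphrased; see the
# linked Serving docs for the authoritative flags):
#
#   python -m llava.serve.controller --host 0.0.0.0 --port 10000
#   python -m llava.serve.model_worker --controller http://localhost:10000 \
#       --port 40000 --worker http://localhost:40000 \
#       --model-path microsoft/llava-med-v1.5-mistral-7b
#   python -m llava.serve.gradio_web_server --controller http://localhost:10000
```
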
### BibTeX entry and citation info

```bibtex
@article{li2023llavamed,
  title={{LLaVA-Med}: Training a Large Language-and-Vision Assistant for Biomedicine in One Day},
  author={Li, Chunyuan and Wong, Cliff and Zhang, Sheng and Usuyama, Naoto and Liu, Haotian and Yang, Jianwei and Naumann, Tristan and Poon, Hoifung and Gao, Jianfeng},
  journal={arXiv preprint arXiv:2306.00890},
  year={2023}
}
```