---
license: apache-2.0
datasets:
- FreedomIntelligence/PubMedVision
language:
- en
- zh
pipeline_tag: text-generation
---

<div align="center">
<h1>
HuatuoGPT-Vision-34B
</h1>
</div>

<div align="center">
<a href="https://github.com/FreedomIntelligence/HuatuoGPT-Vision" target="_blank">GitHub</a> | <a href="https://arxiv.org/abs/2406.19280" target="_blank">Our Paper</a>
</div>

# <span id="Start">Introduction</span>

HuatuoGPT-Vision is a multimodal LLM for medical applications, trained with the [PubMedVision dataset](https://huggingface.co/datasets/FreedomIntelligence/PubMedVision). HuatuoGPT-Vision-34B is built on Yi-1.5-34B and uses the LLaVA-v1.5 architecture.
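PubMedVision is hosted on the Hugging Face Hub, so it can be inspected with the standard `datasets` library. A minimal sketch (streaming avoids a full download; loading without an explicit config and the `train` split name are our assumptions, not guarantees from the dataset card):

```python
from datasets import load_dataset

# Stream the first record instead of downloading the full dataset.
# If the dataset defines multiple configurations, pass a config name
# as the second positional argument.
ds = load_dataset("FreedomIntelligence/PubMedVision", streaming=True)
sample = next(iter(ds["train"]))  # assumes a 'train' split exists
print(sample.keys())
```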


# <span id="Start">Quick Start</span>

- Get the model inference code from [GitHub](https://github.com/FreedomIntelligence/HuatuoGPT-Vision).
```bash
git clone https://github.com/FreedomIntelligence/HuatuoGPT-Vision.git
```
- Model inference
```python
from cli import HuatuoChatbot  # cli.py ships with the cloned repository

# Point this at your local HuatuoGPT-Vision-34B checkpoint directory
# (see the download sketch below for fetching the weights)
huatuogpt_vision_model_path = 'path/to/HuatuoGPT-Vision-34B'
bot = HuatuoChatbot(huatuogpt_vision_model_path)

query = 'What does the picture show?'
image_paths = ['image_path1']  # replace with the path(s) to your image(s)

output = bot.inference(query, image_paths)
print(output)  # prints the model's answer
```
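The 34B checkpoint can be fetched directly from this repository, for example with `huggingface_hub` (the local directory name below is just an example):

```python
from huggingface_hub import snapshot_download

# Download the weights from this repo into a local folder, then pass
# that folder as huatuogpt_vision_model_path above.
local_dir = snapshot_download(
    "FreedomIntelligence/HuatuoGPT-Vision-34B",
    local_dir="HuatuoGPT-Vision-34B",
)
print(local_dir)
```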



## Citation

```
@misc{chen2024huatuogptvisioninjectingmedicalvisual,
      title={HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale},
      author={Junying Chen and Ruyi Ouyang and Anningzhe Gao and Shunian Chen and Guiming Hardy Chen and Xidong Wang and Ruifei Zhang and Zhenyang Cai and Ke Ji and Guangjun Yu and Xiang Wan and Benyou Wang},
      year={2024},
      eprint={2406.19280},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.19280},
}
```
