Update README.md
README.md CHANGED
@@ -156,4 +156,33 @@ Please look at [GitHub](https://github.com/OpenBMB/OmniLMM) for more detail abou
#### Statement

* As an LMM, OmniLMM generates content by learning from a large amount of text, but it cannot comprehend, express personal opinions, or make value judgments. Anything generated by OmniLMM does not represent the views and positions of the model developers.
* We will not be liable for any problems arising from the use of the OmniLMM open-source model, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misguidance, misuse, dissemination, or improper use of the model.
## Multimodal Projects of Our Team <!-- omit in toc -->

[VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [Muffin](https://github.com/thunlp/Muffin/tree/main)
## Citation

If you find our work helpful, please consider citing the following papers.
```bib
@article{yu2023rlhf,
  title={Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback},
  author={Yu, Tianyu and Yao, Yuan and Zhang, Haoye and He, Taiwen and Han, Yifeng and Cui, Ganqu and Hu, Jinyi and Liu, Zhiyuan and Zheng, Hai-Tao and Sun, Maosong and others},
  journal={arXiv preprint arXiv:2312.00849},
  year={2023}
}

@article{viscpm,
  title={Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages},
  author={Jinyi Hu and Yuan Yao and Chongyi Wang and Shan Wang and Yinxu Pan and Qianyu Chen and Tianyu Yu and Hanghao Wu and Yue Zhao and Haoye Zhang and Xu Han and Yankai Lin and Jiao Xue and Dahai Li and Zhiyuan Liu and Maosong Sun},
  journal={arXiv preprint arXiv:2308.12038},
  year={2023}
}

@article{xu2024llava-uhd,
  title={{LLaVA-UHD}: an LMM Perceiving Any Aspect Ratio and High-Resolution Images},
  author={Xu, Ruyi and Yao, Yuan and Guo, Zonghao and Cui, Junbo and Ni, Zanlin and Ge, Chunjiang and Chua, Tat-Seng and Liu, Zhiyuan and Huang, Gao},
  journal={arXiv preprint arXiv:2403.11703},
  year={2024}
}
```