Improve model card, add link to paper
This PR ensures the model can be found at https://huggingface.co/papers/2412.03565.
Feel free to link the other models/datasets too :)
Cheers
README.md
CHANGED
```diff
@@ -1,6 +1,13 @@
----
-license: other
-license_name: tongyi-qwen
-license_link: >-
-  https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
----
+---
+license: other
+license_name: tongyi-qwen
+license_link: >-
+  https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT
+pipeline_tag: image-text-to-text
+---
+
+This repository contains the model described in [Inst-IT: Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning](https://huggingface.co/papers/2412.03565).
+
+Project page: https://inst-it.github.io/
+
+Code: https://github.com/inst-it/inst-it
```