---
task_categories:
  - visual-question-answering
language:
  - en
pretty_name: Vision-Flan
size_categories:
  - 100K<n<1M
---

# 🚀 Vision-Flan Dataset

**vision-flan_191-task-1k** is a human-labeled visual instruction tuning dataset consisting of 191 diverse tasks with 1,000 examples per task. It is constructed for visual instruction tuning and for building large-scale vision-language models.

## Paper or blog for more information

- https://github.com/VT-NLP/MultiInstruct/
- https://vision-flan.github.io/

Paper coming soon 😊

## Citation

Paper coming soon 😊. If you use Vision-Flan, please use the following citations:

```bibtex
@misc{visionFlan2023,
  title = {Vision-Flan: Scaling Visual Instruction Tuning},
  url = {https://vision-flan.github.io/},
  author = {Zhiyang Xu and Trevor Ashby and Chao Feng and Rulin Shao and Ying Shen and Di Jin and Qifan Wang and Lifu Huang},
  month = {Sep},
  year = {2023}
}

@inproceedings{DBLP:conf/acl/XuSH23,
  author = {Zhiyang Xu and Ying Shen and Lifu Huang},
  editor = {Anna Rogers and Jordan L. Boyd{-}Graber and Naoaki Okazaki},
  title = {MultiInstruct: Improving Multi-Modal Zero-Shot Learning via Instruction Tuning},
  booktitle = {Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), {ACL} 2023, Toronto, Canada, July 9-14, 2023},
  pages = {11445--11465},
  publisher = {Association for Computational Linguistics},
  year = {2023},
  url = {https://doi.org/10.18653/v1/2023.acl-long.641},
  doi = {10.18653/v1/2023.acl-long.641},
  timestamp = {Thu, 10 Aug 2023 12:35:59 +0200},
  biburl = {https://dblp.org/rec/conf/acl/XuSH23.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

## License

Please carefully check the licenses of all the constituent datasets on this page before use.

## Contact

If you have any questions or concerns, please contact us at zhiyangx@vt.edu.