---
dataset_info:
  features:
  - name: question_id
    dtype: string
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: answers
    sequence: string
  - name: category
    dtype: string
  splits:
  - name: val
    num_bytes: 2097998373.0
    num_examples: 4319
  - name: test
    num_bytes: 3982325314.0
    num_examples: 8000
  download_size: 6050372614
  dataset_size: 6080323687.0
---
# Dataset Card for "VizWiz-VQA"
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
# This Dataset
This is a formatted version of [VizWiz-VQA](https://vizwiz.org/tasks-and-datasets/vqa/). It is used in our `lmms-eval` pipeline for one-click evaluation of large multi-modality models.
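For reference, the splits can also be inspected directly with the Hugging Face `datasets` library. This is a minimal sketch; the repository id `lmms-lab/VizWiz-VQA` is assumed from this card's location under the lmms-lab organization.

```python
# Minimal sketch: load the formatted VizWiz-VQA validation split.
# Assumes the dataset repository id is "lmms-lab/VizWiz-VQA".
from datasets import load_dataset

val = load_dataset("lmms-lab/VizWiz-VQA", split="val")

sample = val[0]
print(sample["question_id"], sample["question"])
print(sample["answers"])       # list of reference answer strings
print(sample["category"])      # answer category label
print(sample["image"].size)    # decoded PIL image taken by a blind user
```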
```bibtex
@inproceedings{gurari2018vizwiz,
title={Vizwiz grand challenge: Answering visual questions from blind people},
author={Gurari, Danna and Li, Qing and Stangl, Abigale J and Guo, Anhong and Lin, Chi and Grauman, Kristen and Luo, Jiebo and Bigham, Jeffrey P},
booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
pages={3608--3617},
year={2018}
}
```