---
language:
- en
dataset_info:
  features:
  - name: id
    dtype: string
  - name: instance_id
    dtype: int64
  - name: question
    dtype: string
  - name: answer
    list:
      dtype: string
  - name: A
    dtype: string
  - name: B
    dtype: string
  - name: C
    dtype: string
  - name: D
    dtype: string
  - name: category
    dtype: string
  - name: img
    dtype: image
configs:
- config_name: 1_correct
  data_files:
  - split: validation
    path: 1_correct/validation/0000.parquet
  - split: test
    path: 1_correct/test/0000.parquet
- config_name: 1_correct_var
  data_files:
  - split: validation
    path: 1_correct_var/validation/0000.parquet
  - split: test
    path: 1_correct_var/test/0000.parquet
- config_name: n_correct
  data_files:
  - split: validation
    path: n_correct/validation/0000.parquet
  - split: test
    path: n_correct/test/0000.parquet
---
# DARE

DARE (Diverse Visual Question Answering with Robustness Evaluation) is a carefully curated multiple-choice VQA benchmark.
DARE evaluates VLM performance on five diverse categories and includes four robustness-oriented evaluations based on variations of:
- the prompt
- the subset of answer options
- the output format
- the number of correct answers

The validation split of the dataset contains images, questions, answer options, and correct answers. To prevent contamination, we do not publish the correct answers for the test split.

## Load the Dataset

To load the dataset, use the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Pick one of the configs: "1_correct", "1_correct_var", or "n_correct"
subset = "1_correct"
dataset = load_dataset("cambridgeltl/DARE", subset)
```
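
Each config exposes `validation` and `test` splits, and every example carries the fields declared in the front matter: `id`, `instance_id`, `question`, the options `A`-`D`, `category`, an `answer` list, and an `img` decoded to a PIL image. Below is a minimal sketch of inspecting one validation example (assuming `answer` holds the correct option letter(s); recall that answers are withheld for the test split):

```python
# Inspect a single example from the validation split
example = dataset["validation"][0]

print(example["question"])
for option in ("A", "B", "C", "D"):
    print(f"{option}: {example[option]}")

# `answer` is a list of strings; in the "n_correct" config it may
# contain more than one entry (assumption based on the schema above)
print(example["answer"])

# `img` is decoded by the datasets library to a PIL image
print(example["img"].size)
```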

## Citation

If you use this dataset, please cite our paper:
```bibtex
@article{sterz2024dare,
  title={DARE: Diverse Visual Question Answering with Robustness Evaluation},
  author={Sterz, Hannah and Pfeiffer, Jonas and Vuli{\'c}, Ivan},
  journal={arXiv preprint arXiv:2409.18023},
  year={2024}
}
```