size_categories:
- 1K<n<10K
tags:
- arxiv:2409.13592
---
# *YesBut Dataset*

Understanding satire and humor is a challenging task even for current Vision-Language Models. In this paper, we propose the challenging tasks of Satirical Image Detection (detecting whether an image is satirical), Understanding (generating the reason behind the image being satirical), and Completion (given one half of the image, selecting the other half from two given options such that the complete image is satirical), and release a high-quality dataset, YesBut, consisting of 2547 images (1084 satirical and 1463 non-satirical) in different artistic styles to evaluate these tasks. Each satirical image in the dataset depicts a normal scenario alongside a conflicting scenario that is funny or ironic. Despite the success of current Vision-Language Models on multimodal tasks such as Visual QA and Image Captioning, our benchmarking experiments show that such models perform poorly on the proposed tasks on the YesBut dataset in zero-shot settings, under both automated and human evaluation. Additionally, we release a dataset of 119 real, satirical photographs for further research. The dataset and code are available at https://github.com/abhi1nandy2/yesbut_dataset.

## Dataset Details

The YesBut dataset consists of 2547 images (1084 satirical, 1463 non-satirical) in different artistic styles. Each satirical image is posed in a "Yes, But" format: the left half depicts a normal scenario, while the right half depicts a conflicting scenario that is funny or ironic.

**Currently, the Hugging Face version of the dataset contains the satirical images from three annotation stages, along with corresponding metadata and image descriptions.**

> Download the non-satirical images from the following Google Drive links:
>
> - https://drive.google.com/file/d/1Tzs4OcEJK469myApGqOUKPQNUtVyTRDy/view?usp=sharing - Non-Satirical Images annotated in Stage 3
> - https://drive.google.com/file/d/1i4Fy01uBZ_2YGPzyVArZjijleNbt8xRu/view?usp=sharing - Non-Satirical Images annotated in Stage 4
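
A minimal loading sketch, under stated assumptions: the Hugging Face dataset id is assumed to mirror the GitHub repository name, the third-party `gdown` package is used for the Google Drive files, and the output file names (including the archive format) are placeholders:

```python
# pip install datasets gdown
import gdown
from datasets import load_dataset

# Satirical images and metadata from the Hugging Face Hub
# (dataset id assumed to mirror the GitHub repository name).
dataset = load_dataset("abhi1nandy2/yesbut_dataset")
print(dataset)

# Non-satirical images hosted on Google Drive; file ids are taken
# from the links above (output names and .zip format are assumptions).
gdown.download(id="1Tzs4OcEJK469myApGqOUKPQNUtVyTRDy",
               output="non_satirical_stage3.zip")
gdown.download(id="1i4Fy01uBZ_2YGPzyVArZjijleNbt8xRu",
               output="non_satirical_stage4.zip")
```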

### Dataset Description

The YesBut dataset is a high-quality annotated dataset designed to evaluate the satire comprehension capabilities of Vision-Language Models. It consists of 2547 multimodal images, 1084 of which are satirical and 1463 non-satirical. The dataset covers a variety of artistic styles, such as colorized sketches, 2D stick figures, and 3D stick figures, making it highly diverse. The satirical images follow a "Yes, But" structure, where the left half depicts a normal scenario and the right half contains an ironic or humorous twist. The dataset is curated to challenge models on three main tasks: Satirical Image Detection, Understanding, and Completion. An additional set of 119 real, satirical photographs is also provided to further assess real-world satire comprehension.

- **Curated by:** Annotators who met the qualification criteria of being undergraduate sophomore students or above, enrolled in English-medium colleges.
- **Language(s) (NLP):** English
- **License:** Apache License 2.0

### Dataset Sources

- **Repository:** https://github.com/abhi1nandy2/yesbut_dataset
- **Paper:**
  - Hugging Face: https://huggingface.co/papers/2409.13592
  - arXiv: https://arxiv.org/abs/2409.13592
  - Accepted **as a long paper at EMNLP 2024 (Main)**

## Uses

### Direct Use

The YesBut dataset is intended for benchmarking the performance of Vision-Language Models on satire and humor comprehension tasks. Researchers and developers can use it to test models on Satirical Image Detection (classifying an image as satirical or non-satirical), Satirical Image Understanding (generating the reason behind the satire), and Satirical Image Completion (choosing the half-image that makes the completed image satirical). The dataset is particularly suitable for models developed for image-text reasoning and cross-modal understanding.
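
A minimal zero-shot Satirical Image Detection loop, as a sketch only: `classify_image` is a hypothetical stand-in for whatever Vision-Language Model you benchmark, and the split name and the `image`/`label` field names are assumptions about the released schema, not confirmed by this card:

```python
from datasets import load_dataset

def classify_image(image) -> bool:
    """Hypothetical VLM call: return True if the model judges the
    image satirical. Replace with your model's inference code."""
    raise NotImplementedError

# Dataset id and split name are assumed.
dataset = load_dataset("abhi1nandy2/yesbut_dataset", split="train")

correct = 0
for example in dataset:
    predicted = classify_image(example["image"])          # assumed field name
    correct += int(predicted == bool(example["label"]))   # assumed field name

print(f"Detection accuracy: {correct / len(dataset):.3f}")
```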

### Out-of-Scope Use

The YesBut dataset is not recommended for tasks unrelated to multimodal understanding, such as basic image classification without the context of humor or satire.

## Dataset Structure

The YesBut dataset contains two types of images: satirical and non-satirical. Satirical images are annotated with textual descriptions for both the left and right sub-images, as well as an overall description containing the punchline. Non-satirical images are also annotated but lack the element of irony or contradiction. The dataset is divided into multiple annotation stages (Stage 2, Stage 3, Stage 4), with each stage including a mix of original and generated sub-images. The data is stored as images plus metadata, with the annotations being key to the benchmarking tasks.
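
A sketch of inspecting the annotations of a single satirical example; the field names `stage`, `left_description`, `right_description`, and `overall_description` are assumptions about the metadata schema, not confirmed by this card:

```python
from datasets import load_dataset

dataset = load_dataset("abhi1nandy2/yesbut_dataset", split="train")
example = dataset[0]  # a plain dict of one annotated image

# Assumed annotation fields: per-half descriptions plus the overall
# description that carries the punchline.
print("Stage:", example.get("stage"))
print("Left sub-image:", example.get("left_description"))
print("Right sub-image:", example.get("right_description"))
print("Punchline:", example.get("overall_description"))
```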

## Dataset Creation

### Curation Rationale

The YesBut dataset was curated to address the gap in the ability of existing Vision-Language Models to comprehend satire and humor. Satire comprehension involves a higher level of reasoning about context, irony, and social cues, making it a challenging task for models. By including a diverse range of images and satirical scenarios, the dataset aims to push the boundaries of multimodal understanding in artificial intelligence.

### Source Data

#### Data Collection and Processing

The images in the YesBut dataset were collected from social media, particularly from the X (formerly Twitter) handle @_yesbut_. The original satirical images were manually annotated, and additional synthetic images were generated using DALL-E 3. These images were then manually labeled as satirical or non-satirical. The annotations include textual descriptions, binary features (e.g., presence of text), and ratings of how difficult the satire/irony in each image is to understand. Data processing involved adding new variations of sub-images to expand the dataset's diversity.

#### Who are the source data producers?

Prior to annotation and expansion of the dataset, images were downloaded (with proper consent) from posts by the X (formerly Twitter) handle @_yesbut_.

#### Annotation process

The annotation process was carried out in four stages: (1) collecting satirical images from social media, (2) manually annotating the images, (3) generating additional 2D stick figure images using DALL-E 3, and (4) generating 3D stick figure images.

#### Who are the annotators?

YesBut was curated by annotators who met the qualification criteria of being undergraduate sophomore students or above, enrolled in English-medium colleges.

#### Personal and Sensitive Information

The images in the YesBut dataset do not include any personally identifiable information, and the annotations are general descriptions related to the satirical content of the images.

## Bias, Risks, and Limitations

- **Subjectivity of annotations:** The annotation task draws on background knowledge that may differ among annotators. We manually reviewed the annotations to minimize the number of incorrect ones, but some subjectivity still remains.
- **Extension to languages other than English:** This work is in English. However, we plan to extend it to other languages.

## Citation

**BibTeX:**

```bibtex
@article{nandy2024yesbut,
  title={YesBut: A High-Quality Annotated Multimodal Dataset for evaluating Satire Comprehension capability of Vision-Language Models},
  author={Nandy, Abhilash and Agarwal, Yash and Patwa, Ashish and Das, Millon Madhur and Bansal, Aman and Raj, Ankit and Goyal, Pawan and Ganguly, Niloy},
  journal={arXiv preprint arXiv:2409.13592},
  year={2024}
}
```

**APA:**

Nandy, A., Agarwal, Y., Patwa, A., Das, M. M., Bansal, A., Raj, A., ... & Ganguly, N. (2024). YesBut: A High-Quality Annotated Multimodal Dataset for evaluating Satire Comprehension capability of Vision-Language Models. *arXiv preprint arXiv:2409.13592*.

## Dataset Card Contact

Get in touch at nandyabhilash@gmail.com.