---
language:
- en
license: unknown
size_categories:
- 10K<n<100K
task_categories:
- feature-extraction
- text-classification
- image-classification
- image-feature-extraction
- zero-shot-classification
- zero-shot-image-classification
pretty_name: multimodal-sarcasm-dataset
tags:
- sarcasm
- sarcasm-detection
- multimodal-sarcasm-detection
- sarcasm detection
- multimodal sarcasm detection
- tweets
dataset_info:
- config_name: mmsd-original
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 1816409874.384
    num_examples: 19816
  - name: validation
    num_bytes: 260024770.0
    num_examples: 2410
  - name: test
    num_bytes: 262626922.717
    num_examples: 2409
  download_size: 2690054686
  dataset_size: 2339061567.101
- config_name: mmsd-v1
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  - name: label
    dtype: int64
  - name: id
    dtype: string
  splits:
  - name: train
    num_bytes: 1797951865.232
    num_examples: 19557
  - name: validation
    num_bytes: 259504817.817
    num_examples: 2387
  - name: test
    num_bytes: 261609842.749
    num_examples: 2373
  download_size: 2668004199
  dataset_size: 2319066525.798
- config_name: mmsd-v2
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 1816105257.384
    num_examples: 19816
  - name: validation
    num_bytes: 259989983
    num_examples: 2410
  - name: test
    num_bytes: 262588464.717
    num_examples: 2409
  download_size: 2689804711
  dataset_size: 2338683705.101
configs:
- config_name: mmsd-original
  data_files:
  - split: train
    path: mmsd-original/train-*
  - split: validation
    path: mmsd-original/validation-*
  - split: test
    path: mmsd-original/test-*
- config_name: mmsd-v1
  data_files:
  - split: train
    path: mmsd-v1/train-*
  - split: validation
    path: mmsd-v1/validation-*
  - split: test
    path: mmsd-v1/test-*
- config_name: mmsd-v2
  data_files:
  - split: train
    path: mmsd-v2/train-*
  - split: validation
    path: mmsd-v2/validation-*
  - split: test
    path: mmsd-v2/test-*
---
# MMSD2.0: Towards a Reliable Multi-modal Sarcasm Detection System
This is a copy of the MMSD2.0 dataset, uploaded to Hugging Face for easy access. The original data comes from [MMSD2.0 (Qin et al., 2023)](https://aclanthology.org/2023.findings-acl.689/) [2], which improves upon the [earlier multimodal sarcasm dataset of Cai et al. (2019)](https://aclanthology.org/P19-1239) [1].
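
The dataset exposes three configurations (`mmsd-original`, `mmsd-v1`, and `mmsd-v2`), each with `image`, `text`, and `label` columns (`mmsd-v1` additionally has an `id` column) and train/validation/test splits. A minimal sketch for loading a configuration and inspecting one example:

```python
from datasets import load_dataset

# Pick one of: "mmsd-original", "mmsd-v1", "mmsd-v2".
dataset = load_dataset("coderchen01/MMSD2.0", name="mmsd-v2")

print(dataset)  # DatasetDict with train / validation / test splits

example = dataset["train"][0]
print(example["text"])        # tweet text
print(example["label"])       # integer sarcasm label
print(example["image"].size)  # decoded PIL image
```

The longer example below wraps the same loading logic in a PyTorch Lightning data module with CLIP preprocessing.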
## Usage
```python
from typing import TypedDict, cast

import pytorch_lightning as pl
from datasets import DatasetDict, load_dataset
from torch import Tensor
from torch.utils.data import DataLoader
from transformers import CLIPProcessor


class MMSDModelInput(TypedDict):
    """Structure of one preprocessed batch."""

    pixel_values: Tensor
    input_ids: Tensor
    attention_mask: Tensor
    label: Tensor
    id: list[str]


class MMSDDatasetModule(pl.LightningDataModule):
    def __init__(
        self,
        clip_ckpt_name: str = "openai/clip-vit-base-patch32",
        dataset_version: str = "mmsd-v2",
        max_length: int = 77,
        train_batch_size: int = 32,
        val_batch_size: int = 32,
        test_batch_size: int = 32,
        num_workers: int = 19,
    ) -> None:
        super().__init__()
        self.clip_ckpt_name = clip_ckpt_name
        self.dataset_version = dataset_version
        self.train_batch_size = train_batch_size
        self.val_batch_size = val_batch_size
        self.test_batch_size = test_batch_size
        self.num_workers = num_workers
        self.max_length = max_length

    def setup(self, stage: str) -> None:
        processor = CLIPProcessor.from_pretrained(self.clip_ckpt_name)

        def preprocess(example):
            # Jointly tokenize the tweet text and preprocess the image with CLIP.
            inputs = processor(
                text=example["text"],
                images=example["image"],
                return_tensors="pt",
                padding="max_length",
                truncation=True,
                max_length=self.max_length,
            )
            return {
                "pixel_values": inputs["pixel_values"],
                "input_ids": inputs["input_ids"],
                "attention_mask": inputs["attention_mask"],
                "label": example["label"],
            }

        self.raw_dataset = cast(
            DatasetDict,
            load_dataset("coderchen01/MMSD2.0", name=self.dataset_version),
        )
        self.dataset = self.raw_dataset.map(
            preprocess,
            batched=True,
            remove_columns=["text", "image"],
        )
        # Return torch tensors so the default DataLoader collation stacks the
        # fixed-size columns (pixel_values, input_ids, attention_mask, label).
        self.dataset.set_format("torch")

    def train_dataloader(self) -> DataLoader:
        return DataLoader(
            self.dataset["train"],
            batch_size=self.train_batch_size,
            shuffle=True,
            num_workers=self.num_workers,
        )

    def val_dataloader(self) -> DataLoader:
        return DataLoader(
            self.dataset["validation"],
            batch_size=self.val_batch_size,
            num_workers=self.num_workers,
        )

    def test_dataloader(self) -> DataLoader:
        return DataLoader(
            self.dataset["test"],
            batch_size=self.test_batch_size,
            num_workers=self.num_workers,
        )
```
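
For completeness, here is one way the data module might be wired into a training run. The `SarcasmClassifier` below is a hypothetical baseline (frozen-architecture CLIP features plus a linear head), not the model from the MMSD2.0 paper; substitute your own LightningModule.

```python
import pytorch_lightning as pl
import torch
from torch import nn
from transformers import CLIPModel


class SarcasmClassifier(pl.LightningModule):
    """Hypothetical baseline: concatenated CLIP image/text features -> linear head."""

    def __init__(self, clip_ckpt_name: str = "openai/clip-vit-base-patch32") -> None:
        super().__init__()
        self.clip = CLIPModel.from_pretrained(clip_ckpt_name)
        self.head = nn.Linear(self.clip.config.projection_dim * 2, 2)
        self.loss_fn = nn.CrossEntropyLoss()

    def training_step(self, batch, batch_idx):
        image_emb = self.clip.get_image_features(pixel_values=batch["pixel_values"])
        text_emb = self.clip.get_text_features(
            input_ids=batch["input_ids"], attention_mask=batch["attention_mask"]
        )
        logits = self.head(torch.cat([image_emb, text_emb], dim=-1))
        loss = self.loss_fn(logits, batch["label"])
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=1e-5)


datamodule = MMSDDatasetModule(dataset_version="mmsd-v2", num_workers=4)
model = SarcasmClassifier()
trainer = pl.Trainer(max_epochs=3, accelerator="auto", devices=1)
trainer.fit(model, datamodule=datamodule)
```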
## References
[1] Yitao Cai, Huiyu Cai, and Xiaojun Wan. 2019. Multi-Modal Sarcasm Detection in Twitter with Hierarchical Fusion Model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2506–2515, Florence, Italy. Association for Computational Linguistics.
[2] Libo Qin, Shijue Huang, Qiguang Chen, Chenran Cai, Yudi Zhang, Bin Liang, Wanxiang Che, and Ruifeng Xu. 2023. MMSD2.0: Towards a Reliable Multi-modal Sarcasm Detection System. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10834–10845, Toronto, Canada. Association for Computational Linguistics.