---
language:
- en
license: unknown
size_categories:
- 10K<n<100K
task_categories:
- feature-extraction
- text-classification
- image-classification
- image-feature-extraction
- zero-shot-classification
- zero-shot-image-classification
pretty_name: mmsd_v2
tags:
- sarcasm
- sarcasm-detection
- multimodal-sarcasm-detection
- sarcasm detection
- multimodal sarcasm detection
- tweets
dataset_info:
  config_name: mmsd-v1
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 1797521611.232
    num_examples: 19557
  - name: validation
    num_bytes: 259452303.817
    num_examples: 2387
  - name: test
    num_bytes: 261557636.749
    num_examples: 2373
  download_size: 2667548595
  dataset_size: 2318531551.798
configs:
- config_name: mmsd-v1
  data_files:
  - split: train
    path: mmsd-v1/train-*
  - split: validation
    path: mmsd-v1/validation-*
  - split: test
    path: mmsd-v1/test-*
---
# MMSD2.0: Towards a Reliable Multi-modal Sarcasm Detection System
This is a copy of the dataset uploaded to Hugging Face for easy access. The original data comes from this work.
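The card metadata above defines a single config, `mmsd-v1`, whose examples carry an `image`, a `text` string, and an integer `label`. A minimal sketch of loading that config and inspecting one example (the printed field values are illustrative):

```python
from datasets import load_dataset

# Load the "mmsd-v1" config described in the card metadata.
dataset = load_dataset("quaeast/multimodal_sarcasm_detection", name="mmsd-v1")

print(dataset)              # train / validation / test splits
print(dataset["train"][0])  # {'image': <PIL.Image ...>, 'text': '...', 'label': ...}
```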
## Usage
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import CLIPImageProcessor, CLIPTokenizer

# Any CLIP checkpoint works here, e.g. the widely used ViT-B/32 weights.
clip_path = "openai/clip-vit-base-patch32"

image_processor = CLIPImageProcessor.from_pretrained(clip_path)
tokenizer = CLIPTokenizer.from_pretrained(clip_path)

def tokenization(example):
    # Pad to CLIP's fixed 77-token context so the default collate_fn
    # can stack examples with different text lengths into one batch.
    text_inputs = tokenizer(
        example["text"],
        truncation=True,
        padding="max_length",
        max_length=77,
        return_tensors="pt",
    )
    image_inputs = image_processor(example["image"], return_tensors="pt")
    return {
        "pixel_values": image_inputs["pixel_values"],
        "input_ids": text_inputs["input_ids"],
        "attention_mask": text_inputs["attention_mask"],
        "label": example["label"],
    }

dataset = load_dataset("quaeast/multimodal_sarcasm_detection")
dataset.set_transform(tokenization)  # applied on the fly at access time

# Build torch dataloaders; only the training split needs shuffling.
train_dl = DataLoader(dataset["train"], batch_size=256, shuffle=True)
val_dl = DataLoader(dataset["validation"], batch_size=256, shuffle=False)
test_dl = DataLoader(dataset["test"], batch_size=256, shuffle=False)
```
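As a quick sanity check that the transform and default collation line up, pulling one batch should yield fixed-size tensors. The shapes below assume the ViT-B/32 processor's 224x224 images and CLIP's 77-token context:

```python
# Fetch a single batch and inspect tensor shapes.
batch = next(iter(train_dl))
print(batch["pixel_values"].shape)    # torch.Size([256, 3, 224, 224])
print(batch["input_ids"].shape)       # torch.Size([256, 77])
print(batch["attention_mask"].shape)  # torch.Size([256, 77])
print(batch["label"].shape)           # torch.Size([256])
```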