---
language:
- en
license: unknown
size_categories:
- 10K<n<100K
task_categories:
- feature-extraction
- text-classification
- image-classification
- image-feature-extraction
- zero-shot-classification
- zero-shot-image-classification
pretty_name: mmsd_v2
tags:
- sarcasm
- sarcasm-detection
- multimodal-sarcasm-detection
- sarcasm detection
- multimodal sarcasm detection
- tweets
dataset_info:
  config_name: mmsd-v1
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 1797521611.232
    num_examples: 19557
  - name: validation
    num_bytes: 259452303.817
    num_examples: 2387
  - name: test
    num_bytes: 261557636.749
    num_examples: 2373
  download_size: 2667548595
  dataset_size: 2318531551.798
configs:
- config_name: mmsd-v1
  data_files:
  - split: train
    path: mmsd-v1/train-*
  - split: validation
    path: mmsd-v1/validation-*
  - split: test
    path: mmsd-v1/test-*
---

# MMSD2.0: Towards a Reliable Multi-modal Sarcasm Detection System

This is a copy of the MMSD2.0 dataset, uploaded to Hugging Face for easy access. The original data comes from this [work](https://aclanthology.org/2023.findings-acl.689/).
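
Each example pairs an `image` with the tweet `text` and an integer `label`, split into train (19,557), validation (2,387), and test (2,373) examples. A minimal sketch for loading the dataset and inspecting one example:

```python
from datasets import load_dataset

# Loads all three splits: train (19,557), validation (2,387), test (2,373).
dataset = load_dataset("quaeast/multimodal_sarcasm_detection")
print(dataset)

# Each example is a dict with a PIL image, the tweet text, and an integer label.
sample = dataset["train"][0]
print(sample["text"], sample["label"])
print(sample["image"].size)
```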

## Usage

```python
from datasets import load_dataset
from transformers import CLIPImageProcessor, CLIPTokenizer
from torch.utils.data import DataLoader

# Any CLIP checkpoint can be used here; "openai/clip-vit-base-patch32" is just an example.
clip_path = "openai/clip-vit-base-patch32"
image_processor = CLIPImageProcessor.from_pretrained(clip_path)
tokenizer = CLIPTokenizer.from_pretrained(clip_path)

def tokenization(example):
    # Pad the text to a fixed length so the default DataLoader collate can stack examples.
    text_inputs = tokenizer(
        example["text"],
        truncation=True,
        padding="max_length",
        max_length=tokenizer.model_max_length,
        return_tensors="pt",
    )
    image_inputs = image_processor(example["image"], return_tensors="pt")
    return {
        "pixel_values": image_inputs["pixel_values"],
        "input_ids": text_inputs["input_ids"],
        "attention_mask": text_inputs["attention_mask"],
        "label": example["label"],
    }

dataset = load_dataset("quaeast/multimodal_sarcasm_detection")
# The transform is applied lazily, on the fly, whenever examples are accessed.
dataset.set_transform(tokenization)

# Get torch dataloaders (shuffle only the training split).
train_dl = DataLoader(dataset["train"], batch_size=256, shuffle=True)
val_dl = DataLoader(dataset["validation"], batch_size=256, shuffle=False)
test_dl = DataLoader(dataset["test"], batch_size=256, shuffle=False)
```
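
The batches produced above match the inputs expected by `CLIPModel`, so they can be passed straight to a CLIP encoder. A minimal sketch, reusing the example `clip_path` checkpoint from above, of extracting per-example text and image embeddings for a downstream sarcasm classifier:

```python
import torch
from transformers import CLIPModel

model = CLIPModel.from_pretrained(clip_path)
model.eval()

batch = next(iter(val_dl))
with torch.no_grad():
    outputs = model(
        input_ids=batch["input_ids"],
        attention_mask=batch["attention_mask"],
        pixel_values=batch["pixel_values"],
    )

# Projected embeddings (one row per example); labels come along in the same batch.
text_emb = outputs.text_embeds
image_emb = outputs.image_embeds
labels = batch["label"]
```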