|
---
license: cc-by-nc-sa-4.0
task_categories:
- image-classification
- text-classification
- visual-question-answering
language:
- ar
pretty_name: ArMeme
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: image
    dtype: image
  - name: img_path
    dtype: string
  - name: class_label
    dtype:
      class_label:
        names:
          '0': not_propaganda
          '1': propaganda
          '2': not-meme
          '3': other
  splits:
  - name: train
    num_bytes: 288878900.171
    num_examples: 4007
  - name: dev
    num_bytes: 45908447.0
    num_examples: 584
  - name: test
    num_bytes: 81787436.176
    num_examples: 1134
  download_size: 423396230
  dataset_size: 416574783.347
---
|
# ArMeme Dataset |
|
|
|
## Overview |
|
|
|
ArMeme is the first multimodal Arabic memes dataset, pairing text with images collected from various social media platforms, and the first resource of its kind for Arabic multimodal research. While the dataset was annotated to identify propagandistic content in memes, it is versatile and can be used for a wide range of other research purposes, including sentiment analysis, hate speech detection, cultural studies, meme generation, and cross-lingual transfer learning. It opens new avenues for exploring the intersection of language, culture, and visual communication.
|
|
|
## Dataset Structure |
|
|
|
The dataset is divided into three splits (each can also be loaded on its own, as sketched below):

- **Train**: The training set (4,007 examples)
- **Dev**: The development/validation set (584 examples)
- **Test**: The test set (1,134 examples)
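
As a quick check of the split layout, here is a minimal sketch that loads only the development split by name; the `split` argument and the example count come from the dataset configuration above.

```python
from datasets import load_dataset

# Load a single split by name ("train", "dev", or "test")
dev = load_dataset("QCRI/ArMeme", split="dev")

print(dev)       # features: id, text, image, img_path, class_label
print(len(dev))  # 584 examples in the dev split
```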
|
|
|
Each entry in the dataset includes:

- `id`: A unique identifier for the entry.
- `text`: The textual content associated with the image.
- `image`: The corresponding image.
- `img_path`: The file path to the image.
- `class_label`: The propaganda annotation, stored as an integer index over `not_propaganda`, `propaganda`, `not-meme`, and `other` (see the example below).
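
For illustration, the following minimal sketch loads the dataset and prints one training example; since `class_label` is a `ClassLabel` feature, its integer value can be mapped back to the label name via the feature's `names` list.

```python
from datasets import load_dataset

# Load all splits (train/dev/test) as a DatasetDict
dataset = load_dataset("QCRI/ArMeme")

# Map the integer class_label back to its string name
label_names = dataset["train"].features["class_label"].names

example = dataset["train"][0]
print(example["id"], example["img_path"])
print("text:", example["text"])
print("label:", label_names[example["class_label"]])
```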
|
|
|
|
|
## How to Use |
|
|
|
You can load the dataset using the `datasets` library from Hugging Face: |
|
|
|
```python
from datasets import load_dataset
from PIL import Image
import os
import json

dataset = load_dataset("QCRI/ArMeme")

# Directory where the dataset (and the extracted images) will be saved
output_dir = "./ArMeme/"
os.makedirs(output_dir, exist_ok=True)

# Save all splits to the output directory in Hugging Face format
dataset.save_to_disk(output_dir)

# If you want to export the raw images and per-split JSONL files from the
# HF dataset format, iterate over the dataset and save each image alongside its metadata
for split in ['train', 'dev', 'test']:
    jsonl_path = os.path.join(output_dir, f"arabic_memes_categorization_{split}.jsonl")
    with open(jsonl_path, 'w', encoding='utf-8') as f:
        for item in dataset[split]:
            # Access the image directly as it's already a PIL.Image object
            image = item['image']
            image_path = os.path.join(output_dir, item['img_path'])
            # Ensure the image's directory exists before saving
            os.makedirs(os.path.dirname(image_path), exist_ok=True)
            image.save(image_path)
            # Drop the PIL object so the remaining fields are JSON-serializable
            del item['image']
            f.write(json.dumps(item, ensure_ascii=False) + '\n')
```
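
Once saved, the local copy can be reloaded without re-downloading. The sketch below assumes the same `./ArMeme/` directory used above and prints the label distribution of the training split.

```python
from collections import Counter
from datasets import load_from_disk

# Reload the copy written by save_to_disk() above
dataset = load_from_disk("./ArMeme/")

# class_label is stored as an integer index; map the counts back to label names
label_names = dataset["train"].features["class_label"].names
for label_id, count in Counter(dataset["train"]["class_label"]).most_common():
    print(label_names[label_id], count)
```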
|
|
|
**Language:** Arabic |
|
|
|
**Modality:** Multimodal (text + image) |
|
|
|
**Number of Samples:** 5,725 (4,007 train, 584 dev, 1,134 test)
|
|
|
|
|
## License |
|
|
|
This dataset is licensed under the **CC BY-NC-SA 4.0** license.
|
|
|
## Citation |
|
The paper is available on [arXiv](https://arxiv.org/pdf/2406.03916v2); please use the BibTeX entry below to cite it.
|
|
|
```bibtex
@inproceedings{alam2024armeme,
  title = {{ArMeme}: Propagandistic Content in Arabic Memes},
  author = {Alam, Firoj and Hasnat, Abul and Ahmed, Fatema and Hasan, Md Arid and Hasanain, Maram},
  booktitle = {Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year = {2024},
  address = {Miami, Florida},
  month = {November 12--16},
  publisher = {Association for Computational Linguistics},
}
```