---
license: cc-by-nc-sa-4.0
task_categories:
  - image-classification
  - text-classification
  - visual-question-answering
language:
  - ar
pretty_name: ArMeme
size_categories:
  - 1K<n<10K
dataset_info:
  description: Arabic multimodal memes dataset.
  features:
    - name: id
      dtype: string
    - name: text
      dtype: string
    - name: image
      dtype: Image
    - name: img_path
      dtype: string
    - name: class_label
      dtype: string
  splits:
    - name: train
      num_examples: 4007
    - name: dev
      num_examples: 584
    - name: test
      num_examples: 1134
---

ArMeme Dataset

Overview

ArMeme is the first multimodal Arabic memes dataset, pairing meme text with images collected from various social media platforms, and the first resource dedicated to Arabic multimodal meme research. While the dataset has been annotated to identify propaganda in memes, it is versatile and can be used for a wide range of other research purposes, including sentiment analysis, hate speech detection, cultural studies, meme generation, and cross-lingual transfer learning. The dataset opens new avenues for exploring the intersection of language, culture, and visual communication.

Dataset Structure

The dataset is divided into three splits:

  • Train: The training set (4,007 examples)
  • Dev: The development/validation set (584 examples)
  • Test: The test set (1,134 examples)

Each entry in the dataset includes:

  • id: The unique identifier of the entry.
  • text: The textual content associated with the image.
  • image: The corresponding image.
  • img_path: The file path to the image.
  • class_label: The annotated class label (the propaganda-related annotation described above).
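For illustration, here is a minimal sketch that prints these fields for one training example (it assumes the dataset is loaded with the datasets library, as described in the How to Use section below):

from datasets import load_dataset

dataset = load_dataset("QCRI/ArMeme")

# Inspect the fields of a single training example
example = dataset["train"][0]
print(example["id"])           # entry identifier
print(example["text"])         # textual content associated with the image
print(example["img_path"])     # file path to the image
print(example["class_label"])  # annotated class label
print(example["image"].size)   # the image is decoded as a PIL.Image object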

How to Use

You can load the dataset using the datasets library from Hugging Face:

from datasets import load_dataset

dataset = load_dataset("QCRI/ArMeme")

# Specify the directory where you want to save the dataset
output_dir = "./ArMeme/"

# Save the dataset to the specified directory. This will save all splits to the output directory.
dataset.save_to_disk(output_dir)
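
# To reload the saved copy later without re-downloading (a minimal sketch;
# the path is the same output_dir used above):
from datasets import load_from_disk
reloaded = load_from_disk(output_dir)
print(reloaded)  # shows the train/dev/test splits and their sizes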

# If you want to export the raw images and metadata from the HF dataset format

from PIL import Image
import os
import json

# Directory to save the images and JSONL files
output_dir = "./ArMeme/"
os.makedirs(output_dir, exist_ok=True)

# Iterate over the dataset, save each image to disk, and write the remaining
# fields of each entry to a per-split JSONL file
for split in ['train', 'dev', 'test']:
    jsonl_path = os.path.join(output_dir, f"arabic_memes_categorization_{split}.jsonl")
    with open(jsonl_path, 'w', encoding='utf-8') as f:
        for item in dataset[split]:
            # The image is already decoded as a PIL.Image object
            image = item['image']
            image_path = os.path.join(output_dir, item['img_path'])
            # Ensure the image's directory exists before saving
            os.makedirs(os.path.dirname(image_path), exist_ok=True)
            image.save(image_path)
            # Drop the image object so the record can be serialized as JSON
            del item['image']
            f.write(json.dumps(item, ensure_ascii=False) + '\n')
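
# The JSONL files written above can be read back alongside the exported images.
# A minimal usage sketch (same paths and fields as above):
with open(os.path.join(output_dir, "arabic_memes_categorization_train.jsonl"), encoding='utf-8') as f:
    for line in f:
        record = json.loads(line)
        image = Image.open(os.path.join(output_dir, record['img_path']))
        print(record['id'], record['class_label'], image.size)
        break  # inspect just the first record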

Language: Arabic

Modality: Multimodal (text + image)

Number of Samples: 5,725 (4,007 train / 584 dev / 1,134 test)

License

This dataset is licensed under the CC BY-NC-SA 4.0 license.

Citation

@article{alam2024armeme,
  title={{ArMeme}: Propagandistic Content in Arabic Memes},
  author={Alam, Firoj and Hasnat, Abul and Ahmed, Fatema and Hasan, Md Arid and Hasanain, Maram},
  year={2024},
  journal={arXiv preprint arXiv:2406.03916},
}