---
task_categories:
  - text-classification
language:
  - en
---

Note:

This dataset contains the EVAL (test) portion of the Jigsaw Toxic Comment Dataset.

It is intended for model evaluation only. For training, use the original Jigsaw dataset: https://huggingface.co/datasets/google/jigsaw_toxicity_pred

Overview:

The Jigsaw Toxic Comment Dataset is a large collection of Wikipedia comments labeled by human raters for toxic behavior.

It contains approximately 159,000 comments from Wikipedia talk pages, annotated for six types of toxicity: toxic, severe toxic, obscene, threat, insult, and identity hate. Each comment can have one or more of these labels. The dataset is part of the Toxic Comment Classification Challenge originally hosted on Kaggle and is designed to help develop models that can identify and classify toxic online comments.
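Because each comment can carry several of the six labels at once, models are typically scored per label rather than with a single accuracy number. Below is a minimal sketch of per-label evaluation; the label names follow the original Jigsaw column headers (`toxic`, `severe_toxic`, etc.), and the example records are made up for illustration — adapt the field names to however this dataset actually exposes them.

```python
# Per-label accuracy for a multi-label toxicity classifier.
# Label names follow the original Jigsaw column headers; the
# example records below are invented for illustration only.
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def per_label_accuracy(y_true, y_pred):
    """y_true, y_pred: lists of dicts mapping label name -> 0/1."""
    scores = {}
    for label in LABELS:
        correct = sum(t[label] == p[label] for t, p in zip(y_true, y_pred))
        scores[label] = correct / len(y_true)
    return scores

# Toy example: two comments; predictions match on every label
# except "insult" on the second comment.
truth = [
    {l: 0 for l in LABELS},
    {**{l: 0 for l in LABELS}, "toxic": 1, "insult": 1},
]
preds = [
    {l: 0 for l in LABELS},
    {**{l: 0 for l in LABELS}, "toxic": 1},  # missed "insult"
]
scores = per_label_accuracy(truth, preds)
# "insult" accuracy is 0.5; every other label is 1.0
```

In practice you would also report per-label F1 or AUC (as the Kaggle competition did), since toxic labels are heavily imbalanced and plain accuracy can look deceptively high.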

The original dataset was split into training and testing sets, with about 80% for training and 20% for testing. The data has been used in various research projects and competitions aimed at improving online content moderation and creating safer online spaces.

Citations:

```bibtex
@misc{jigsaw-toxic-comment-classification-challenge,
    author = {cjadams and Jeffrey Sorensen and Julia Elliott and Lucas Dixon and Mark McDonald and nithum and Will Cukierski},
    title = {Toxic Comment Classification Challenge},
    year = {2017},
    howpublished = {\url{https://kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge}},
    note = {Kaggle}
}
```