|
---
license: apache-2.0
task_categories:
- text-classification
language:
- en
size_categories:
- 1K<n<10K
pretty_name: 'DeTexD: A Benchmark Dataset for Delicate Text Detection'
dataset_info:
  features:
  - name: text
    dtype: string
  - name: annotator_1
    dtype: int32
  - name: annotator_2
    dtype: int32
  - name: annotator_3
    dtype: int32
  - name: label
    dtype:
      class_label:
        names:
          '0': negative
          '1': positive
  splits:
  - name: test
    num_examples: 1023
---
|
# Dataset Card for DeTexD: A Benchmark Dataset for Delicate Text Detection |
|
|
|
## Dataset Description |
|
|
|
- **Repository:** [DeTexD repository](https://github.com/grammarly/detexd) |
|
- **Paper:** [DeTexD: A Benchmark Dataset for Delicate Text Detection](TODO) |
|
|
|
### Dataset Summary |
|
|
|
We define *delicate text* as any text that is emotionally charged or potentially triggering, such that engaging with it has the potential to result in harm. This broad term covers a range of sensitive texts that vary across four major dimensions: 1) riskiness, 2) explicitness, 3) topic, and 4) target.
|
|
|
This dataset contains texts with fine-grained individual annotator scores from 0 to 5 (where 0 indicates no risk and 5 indicates high risk) and an averaged binary label. See the paper for more details.
|
|
|
## Dataset Structure |
|
|
|
### Data Instances |
|
|
|
```
{'text': '"He asked me and the club if we could give him a couple of days off just to clear up his mind and he will be back in the group, I suppose, next Monday, back for training and then be a regular part of the whole squad again," Rangnick said.',
 'annotator_1': 0,
 'annotator_2': 0,
 'annotator_3': 0,
 'label': 0}
```
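
A record like the one above can be loaded with the 🤗 `datasets` library. This is a minimal sketch; the repository ID `grammarly/detexd-benchmark` is an assumption and should be replaced with this dataset's actual Hub ID if it differs.

```python
from datasets import load_dataset

# Hypothetical Hub ID -- replace with this dataset's actual repository ID.
dataset = load_dataset("grammarly/detexd-benchmark", split="test")

example = dataset[0]
print(example["text"])   # text to be classified
print(example["label"])  # 0 = negative, 1 = positive
```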
|
|
|
### Data Fields |
|
|
|
- `text`: Text to be classified |
|
- `annotator_1`: Annotator 1 score (0-5) |
|
- `annotator_2`: Annotator 2 score (0-5) |
|
- `annotator_3`: Annotator 3 score (0-5) |
|
- `label`: Binary label derived from the annotator scores, either "negative" (0) or "positive" (1); a text is positive when the average annotator score is at least 3 (see the sketch below)
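
As an illustration of how the aggregated `label` relates to the three annotator scores, here is a minimal sketch. The `binarize` helper is hypothetical (not part of the dataset or its tooling) and assumes, per the field description above, that a text is labeled positive when the average annotator score is at least 3.

```python
def binarize(annotator_1: int, annotator_2: int, annotator_3: int) -> int:
    """Return 1 ("positive") if the mean annotator score is >= 3, else 0 ("negative")."""
    mean_score = (annotator_1 + annotator_2 + annotator_3) / 3
    return int(mean_score >= 3)

# The record shown under "Data Instances" has all-zero scores, hence label 0.
assert binarize(0, 0, 0) == 0
assert binarize(4, 3, 5) == 1  # high-risk scores map to the positive class under this rule
```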
|
|
|
### Data Splits |
|
|
|
|                    | test |
|--------------------|-----:|
| Number of examples | 1023 |
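
To verify the split size locally (again assuming the hypothetical Hub ID used above), a quick check could look like:

```python
from datasets import load_dataset

dataset = load_dataset("grammarly/detexd-benchmark")  # hypothetical Hub ID
print({split: ds.num_rows for split, ds in dataset.items()})
# Expected, per the table above: {'test': 1023}
```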
|
|
|
### Citation Information |
|
|
|
TODO |