Alexis Palmer, Christine Carr, Melissa Robinson, and Jordan Sanders. 2020 (to appear). JLCL.

The COLD data set is intended for researchers to diagnose and assess their automatic hate speech detection systems. The corpus highlights four types of complex offensive language (slurs, reclaimed slurs, adjective nominalization, and distancing) and also includes non-offensive texts. The corpus contains tweets drawn from three existing data sets: Davidson et al. (2017), Waseem and Hovy (2016), and Robinson (2017). The data were annotated by six annotators, with each instance labeled by at least three different annotators.

1. **COLD-2016** is the data set used for the analyses and experimental results described in the JLCL paper. This version of the data set contains 2016 instances, selected using filters aimed at capturing the complex offensive language types listed above.

2. **COLD-all** is the full COLD data set of 2500 instances. Instances COLDID#1 through COLDID#2152 are the original tweets, selected using filters aimed at capturing the complex offensive language types listed above. Some instances originally had fewer than three annotations; we acquired the missing annotations after finishing the work described in the paper. In addition, we randomly selected another 348 tweets from Davidson et al. (2017) to bring the total number of instances up to 2500. These instances were annotated by three of the original annotators and begin at COLDID#2153.

## Format and annotations

The data are made available here as .tsv files. The columns fall into four groups: informational columns, majority-vote columns, individual annotator columns, and a category column.

### Informational columns:

1. **ID** - identifies the instance's source: a letter indicating which data set it originates from, followed by a hyphen and the instance's ID in that original data set. For example, D-63 means the instance comes from the Davidson et al. (2017) data set, where it had ID number 63.
2. **Dataset** - a letter indicating from which data set the instance originates.
3. **Text** - the text of the instance.
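The ID convention above is easy to split apart programmatically. A minimal sketch in Python (the function name is ours; only the `D-63` example and the letter-plus-hyphen format come from this README):

```python
def parse_cold_id(cold_id: str):
    """Split a COLD ID such as 'D-63' into its source-data-set letter
    and the instance's numeric ID in that original data set."""
    letter, _, original = cold_id.partition("-")
    return letter, int(original)

# 'D-63' -> instance 63 from the Davidson et al. (2017) data set
print(parse_cold_id("D-63"))  # ('D', 63)
```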
### Majority Vote Columns:

For each instance, annotators were asked to answer Yes or No to each of four questions. These columns give the majority vote over three annotators (see the paper for a much more detailed discussion, as well as distributions, etc.).

1. **Off** - Is this text offensive?
2. **Slur** - Is there a slur in the text?
3. **Nom** - Is there an adjectival nominalization in the text?
4. **Dist** - Is there (linguistic) distancing in the text?

### Individual Annotator Columns:

For each instance, annotators were asked to answer Yes or No to each of four questions. These columns give each annotator's individual response (see the paper for a much more detailed discussion, as well as distributions, etc.).

1. **Off1/2/3** - Is this text offensive?
2. **Slur1/2/3** - Is there a slur in the text?
3. **Nom1/2/3** - Is there an adjectival nominalization in the text?
4. **Dist1/2/3** - Is there (linguistic) distancing in the text?

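Given the individual annotator columns, the majority-vote columns can be recomputed. A minimal sketch, assuming the answers are stored as the strings "Y" and "N" (the value encoding is an assumption; this README does not specify it):

```python
def majority_vote(votes):
    """Return the answer given by at least two of the three annotators."""
    yes = sum(1 for v in votes if v == "Y")
    return "Y" if yes >= 2 else "N"

# e.g. Off1="Y", Off2="N", Off3="Y" -> majority Off="Y"
print(majority_vote(["Y", "N", "Y"]))  # Y
```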
### Category

1. **Cat** - this column is derived from the majority votes for OFF/SLUR/NOM/DIST (see the paper for a detailed explanation of the categories, as well as distributions, etc.).

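Any standard TSV reader can load the files. The sketch below tallies the Cat column using Python's standard library; the two inline rows and the category labels in them are invented for illustration, and the real files contain all of the columns described above:

```python
import csv
import io
from collections import Counter

# Hypothetical two-row sample; real COLD .tsv files hold the full column set.
sample = (
    "ID\tDataset\tText\tOff\tSlur\tNom\tDist\tCat\n"
    "D-63\tD\tfirst example text\tY\tN\tN\tN\tOFF\n"
    "W-7\tW\tsecond example text\tN\tN\tN\tN\tNONE\n"
)

with io.StringIO(sample) as f:
    rows = list(csv.DictReader(f, delimiter="\t"))

category_counts = Counter(row["Cat"] for row in rows)
print(category_counts)  # Counter({'OFF': 1, 'NONE': 1})
```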
## Contact

If you have any questions, please contact carrc9953@gmail.com, alexis.palmer@unt.edu, or melissa.robinson@my.unt.edu.
nominalization in pejorative meaning. Master's thesis, Department of Linguistics.

Waseem, Z., & Hovy, D. (2016). Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter. In Proceedings of the NAACL Student Research Workshop. San Diego, California. [the paper](https://www.aclweb.org/anthology/N16-2013/)