Modalities: Text | Formats: parquet | Libraries: Datasets, pandas
dardem committed
Commit f90fe51 (1 parent: 323aa7e)

Update README.md
Files changed (1): README.md (+3 -3)
````diff
--- a/README.md
+++ b/README.md
@@ -33,12 +33,12 @@ This is the first of its kind toxicity classification dataset for the Ukrainian
 
 Due to the subjective nature of toxicity, definitions of toxic language will vary. We include items that are commonly referred to as vulgar or profane language. ([NLLB paper](https://arxiv.org/pdf/2207.04672.pdf))
 
-Dataset formation:
-1. Filtering Ukrainian tweets so that only tweets containing toxic language remain with toxic keywords. Source of Ukrainian data: https://github.com/saganoren/ukr-twi-corpus
+## Dataset formation:
+1. Filtering Ukrainian tweets so that only tweets containing toxic language remain with toxic keywords. Source data: https://github.com/saganoren/ukr-twi-corpus
 2. Non-toxic sentences were obtained from a previous dataset of tweets as well as sentences from news and fiction from UD Ukrainian IU: https://universaldependencies.org/treebanks/uk_iu/index.html
 3. After that, the dataset was split into a train-test-val and all data were balanced both by the toxic/non-toxic criterion and by data source.
 
-Load dataset:
+## Load dataset:
 ```
 from datasets import load_dataset
 dataset = load_dataset("ukr-detect/ukr-toxicity-dataset")
````
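
For a concrete picture of the "Dataset formation" steps described in the README, here is a minimal Python sketch of keyword filtering and class balancing. The keyword list, the substring matching rule, and the function names are illustrative assumptions; the README does not publish the actual lexicon or balancing code.

```python
import random

# Hypothetical placeholder lexicon; the actual toxic keyword list used by the
# authors is not published in this README.
TOXIC_KEYWORDS = ["слово1", "слово2"]

def contains_toxic_keyword(text: str) -> bool:
    """Step 1 (sketch): keep a tweet only if it contains a toxic keyword."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in TOXIC_KEYWORDS)

def balance_classes(toxic_texts, non_toxic_texts, seed=42):
    """Step 3 (sketch): downsample the larger class so toxic/non-toxic
    labels are balanced, then shuffle."""
    rng = random.Random(seed)
    n = min(len(toxic_texts), len(non_toxic_texts))
    rows = [(t, 1) for t in rng.sample(toxic_texts, n)]
    rows += [(t, 0) for t in rng.sample(non_toxic_texts, n)]
    rng.shuffle(rows)
    return rows
```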
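
The "Load dataset" snippet added in this commit is already runnable as shown. A slightly longer usage sketch follows, assuming the standard `datasets` API and the train/test/val layout described above; the column names are not confirmed in the README, so the code reads them at runtime.

```python
from collections import Counter

from datasets import load_dataset

# Download all splits of the dataset from the Hugging Face Hub.
dataset = load_dataset("ukr-detect/ukr-toxicity-dataset")
print(dataset)  # lists the available splits and their columns

train = dataset["train"]
print(train.column_names)  # check the actual text/label column names
print(train[0])            # first example as a dict

# Label distribution; per the README the classes should be roughly balanced.
# "toxic" is an assumed label column name, so fall back to the last column.
label_column = "toxic" if "toxic" in train.column_names else train.column_names[-1]
print(Counter(train[label_column]))
```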