cointegrated committed
Commit 77f6923 • Parent: f9844b9
Update README.md
README.md CHANGED
@@ -60,11 +60,11 @@ print(text2toxicity(['я люблю нигеров', 'я люблю африка
 
 ## Training
 
-The model has been trained on the joint dataset of [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) and [Babakov et.al.](https://arxiv.org/abs/2103.05345) with `Adam` optimizer, learning rate of `1e-5`, and batch size of `
+The model has been trained on the joint dataset of [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) and [Babakov et al.](https://arxiv.org/abs/2103.05345) with the `Adam` optimizer, a learning rate of `1e-5`, and a batch size of `64` for `15` epochs. A text was considered inappropriate if its inappropriateness score was higher than 0.8, and appropriate if it was lower than 0.2. The per-label ROC AUC on the dev set is:
 ```
-non-toxic : 0.
-insult : 0.
-obscenity : 0.
-threat : 0.
-dangerous : 0.
+non-toxic : 0.9937
+insult : 0.9912
+obscenity : 0.9881
+threat : 0.9910
+dangerous : 0.8295
 ```
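The added paragraph and the table above fully specify the labelling rule and the reported metric, so a minimal sketch may help make them concrete. Everything below is illustrative: the data is random, the array names are hypothetical, and `scikit-learn`'s `roc_auc_score` is assumed only as the metric implementation. The README states the two-sided 0.8/0.2 rule for the inappropriateness score; for simplicity this sketch applies the same rule to every label.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

LABELS = ['non-toxic', 'insult', 'obscenity', 'threat', 'dangerous']

# Hypothetical dev-set data: gold scores in [0, 1] and model-predicted
# probabilities, one column per label.
rng = np.random.default_rng(0)
gold_scores = rng.random((1000, len(LABELS)))
model_probs = rng.random((1000, len(LABELS)))

for i, name in enumerate(LABELS):
    # The labelling rule from the README: a text counts as positive if its
    # score is above 0.8 and negative if below 0.2; the ambiguous middle
    # (0.2..0.8) is discarded rather than forced into either class.
    keep = (gold_scores[:, i] > 0.8) | (gold_scores[:, i] < 0.2)
    y_true = (gold_scores[keep, i] > 0.8).astype(int)
    # Per-label ROC AUC, as reported in the table above.
    print(f'{name}: {roc_auc_score(y_true, model_probs[keep, i]):.4f}')
```

Discarding the ambiguous middle trades away some training examples in exchange for cleaner binary labels, which is consistent with the two-sided threshold described in the updated paragraph.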