cointegrated committed 77f6923 (parent f9844b9): Update README.md

Files changed (1): README.md (+6 −6)
````diff
@@ -60,11 +60,11 @@ print(text2toxicity(['я люблю нигеров', 'я люблю африка
 
 ## Training
 
-The model has been trained on the joint dataset of [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) and [Babakov et al.](https://arxiv.org/abs/2103.05345) with the `Adam` optimizer, a learning rate of `1e-5`, and a batch size of `128` for `5` epochs. The data was not filtered in any way. A text was considered inappropriate if its inappropriateness score was higher than 0.2. The per-label ROC AUC on the dev set is:
+The model has been trained on the joint dataset of [OK ML Cup](https://cups.mail.ru/ru/tasks/1048) and [Babakov et al.](https://arxiv.org/abs/2103.05345) with the `Adam` optimizer, a learning rate of `1e-5`, and a batch size of `64` for `15` epochs. A text was considered inappropriate if its inappropriateness score was higher than 0.8, and appropriate if it was lower than 0.2. The per-label ROC AUC on the dev set is:
 ```
-non-toxic : 0.9909
-insult    : 0.9882
-obscenity : 0.9824
-threat    : 0.9868
-dangerous : 0.7758
+non-toxic : 0.9937
+insult    : 0.9912
+obscenity : 0.9881
+threat    : 0.9910
+dangerous : 0.8295
 ```
````
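The updated paragraph changes how training labels were derived from the inappropriateness score: above 0.8 a text counts as inappropriate, below 0.2 as appropriate. A minimal sketch of that rule, assuming a float score in [0, 1] (the function name is hypothetical, and the treatment of scores between the two thresholds is an assumption, since the commit does not say how they were handled):

```python
def label_appropriateness(score: float):
    """Map an inappropriateness score to a training label.

    Thresholds follow the updated README: >0.8 -> inappropriate,
    <0.2 -> appropriate. Scores in between return None here
    (assumed to be left unlabeled; the commit does not specify).
    """
    if score > 0.8:
        return "inappropriate"
    if score < 0.2:
        return "appropriate"
    return None
```

Under this reading, the two-threshold scheme drops ambiguous mid-range texts from supervision, unlike the previous single 0.2 cutoff, which labeled everything above it as inappropriate.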