sberbank-ai committed
Commit c9bafea
1 Parent(s): 1e3415b

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -16,7 +16,7 @@ It trained on Russian language corpus (300GB). Dataset is the same as for ruT5
 
 Bbpe tokenizer. 50257 + special tokens 107. Prefix tokens: '\<LM\>', '\<SC1>',.. '\<SC6>'
 
- First half of the time model trained on the small part of all datasets (1%,3GB) and without prefixes in each task.
+ First half of the time model trained on the small part of all datasets (1%,3GB) and without prefixes in each tasks.
 
 For RSG we trained as described in the T5 paper. First, we trained multitask for all tasks. Then we took the best checkpoint for the task and trained it further.
 RSG submit here https://russiansuperglue.com/login/submit_info/1936
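
For readers of the diff: the '\<LM\>', '\<SC1>' .. '\<SC6>' prefixes mentioned above are task-control tokens that are prepended to the input text. Below is a minimal usage sketch with the Hugging Face `transformers` API; the checkpoint id is a hypothetical placeholder and is not stated in this commit.

```python
# Minimal sketch, assuming a T5-style checkpoint from this organization.
# "sberbank-ai/FRED-T5-large" is a placeholder repo id, not taken from this commit.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

MODEL_ID = "sberbank-ai/FRED-T5-large"  # assumption: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = T5ForConditionalGeneration.from_pretrained(MODEL_ID)
model.eval()

# Prepend one of the task prefixes described in the README ('<LM>', '<SC1>' .. '<SC6>')
# to steer the model toward the corresponding task, here plain language modelling.
text = "<LM> Принято считать, что"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```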