UzRoBerta model. A pretrained model for Uzbek (Cyrillic script), trained for masked language modeling and next sentence prediction.
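A minimal usage sketch for the masked-language-modeling head, assuming the checkpoint is published on the Hugging Face Hub; the Hub identifier "UzRoBerta" and the example sentence below are placeholders, not from the original card:

```python
from transformers import pipeline

# Placeholder Hub identifier; substitute the model's actual repository path.
fill_mask = pipeline("fill-mask", model="UzRoBerta")

# RoBERTa-style tokenizers use "<mask>" as the mask token.
# Hypothetical Cyrillic-script Uzbek sentence:
# "Tashkent is the <mask> city of Uzbekistan."
for prediction in fill_mask("Тошкент — Ўзбекистоннинг <mask> шаҳри."):
    print(prediction["token_str"], prediction["score"])
```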
Training data. The UzRoBerta model was pretrained on ≈167K news articles (≈568 MB).