
This model was finetuned on errorful sentences from the train subset of the UA-GEC corpus, introduced in the paper "UA-GEC: Grammatical Error Correction and Fluency Corpus for the Ukrainian Language".

Only sentences containing errors were used: 8,874 sentences for training and 987 for validation. The training arguments were defined as follows:

```python
batch_size = 4
num_train_epochs = 3
learning_rate = 5e-5
weight_decay = 0.01
optim = "adamw_hf"
```
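A minimal inference sketch, assuming the `transformers` library is installed. The `correct` helper name, the `uk_UA` source-language code, and the generation settings are illustrative choices, not taken from the model card:

```python
MODEL_ID = "schhwmn/mbart-large-50-finetuned-ukr-gec"

def correct(text: str) -> str:
    """Return a corrected version of a Ukrainian sentence.

    Imports are kept inside the function so the module can be
    loaded without pulling model weights. max_length is an
    illustrative choice, not from the model card.
    """
    from transformers import MBart50Tokenizer, MBartForConditionalGeneration

    tokenizer = MBart50Tokenizer.from_pretrained(MODEL_ID, src_lang="uk_UA")
    model = MBartForConditionalGeneration.from_pretrained(MODEL_ID)
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model.generate(**inputs, max_length=128)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
```

Calling `correct("...")` downloads the checkpoint on first use and returns the model's corrected sentence.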