---
license: apache-2.0
datasets:
- thebogko/bulgarian-grammar-mistakes
language:
- bg
pipeline_tag: text2text-generation
---

# mt5-base finetuned bulgarian-grammar-mistakes

This model is a fine-tuned checkpoint of mt5-base, trained on [bulgarian-grammar-mistakes](https://huggingface.co/datasets/thebogko/bulgarian-grammar-mistakes) using only two of the four error types:

- article_misuse, and
- pronoun_misuse

This is done so the model can focus on these mistakes more clearly, as they are more common among native Bulgarian speakers, whereas the other two types are more common among Bulgarian learners.

## Model Details

### Model Description

- **Model type:** Sequence-to-sequence generation
- **Language(s) (NLP):** Bulgarian
- **License:** apache-2.0
- **Finetuned from model:** [google/mt5-base](https://huggingface.co/google/mt5-base)

## Uses

Intended use of the model includes, but is not limited to:

- comparison and development of Bulgarian error correction NLP systems by developers
- incorporation into Bulgarian language learner applications
- research in the field of Bulgarian NLP grammar error correction

### Direct Use

The model can be used directly without further fine-tuning, unless additional error types need to be covered.

### Out-of-Scope Use

The model should not be used to intentionally create hostile or alienating environments for people, especially Bulgarian learners. In addition, the model was not trained to produce factual or true representations of people or events, so using it to generate such content is out of scope for this model.

## Bias, Risks, and Limitations

The model specialises in identifying and correcting Bulgarian grammar errors related to article and pronoun misuse, so it will likely not perform well on other types of errors. Additionally, the dataset used for fine-tuning does not cover all possible errors of those types, so the grammatical validity of the output should be checked before use.

### Recommendations

Users are strongly advised to double-check the validity of the model's outputs and to strive to understand the underlying grammatical rules of the language, instead of taking the model's outputs as given.

## How to Get Started with the Model

Use the code below to get started with the model.

```
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = 'cuda' if torch.cuda.is_available() else 'cpu'

tokenizer = AutoTokenizer.from_pretrained("thebogko/mt5-finetuned-bulgarian-grammar-mistakes")
model = AutoModelForSeq2SeqLM.from_pretrained("thebogko/mt5-finetuned-bulgarian-grammar-mistakes").to(device)

erroneous_sentence = 'Владетеля умря още млад.'

# Encode the erroneous sentence and move it to the same device as the model.
encoded_source = tokenizer(erroneous_sentence, return_tensors='pt', max_length=100, padding='max_length')
encoded_source = encoded_source.to(device)

# Generate the corrected sentence and decode it back to text.
output_ids = model.generate(**encoded_source, max_length=100)
correct_sentence = [tokenizer.decode(t, skip_special_tokens=True) for t in output_ids][0]

print(correct_sentence)
# "Владетелят умря още млад."
```
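To correct several sentences at once, the same calls can be wrapped into a small batch helper. This is a minimal sketch; the `correct` helper and the dynamic-padding settings are illustrative and not part of the original example:

```
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = 'cuda' if torch.cuda.is_available() else 'cpu'

tokenizer = AutoTokenizer.from_pretrained("thebogko/mt5-finetuned-bulgarian-grammar-mistakes")
model = AutoModelForSeq2SeqLM.from_pretrained("thebogko/mt5-finetuned-bulgarian-grammar-mistakes").to(device)

def correct(sentences, max_length=100):
    # Tokenize a list of sentences with padding so they can be batched together.
    encoded = tokenizer(sentences, return_tensors='pt', padding=True,
                        truncation=True, max_length=max_length).to(device)
    output_ids = model.generate(**encoded, max_length=max_length)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in output_ids]

print(correct(['Владетеля умря още млад.']))
```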
## Training Details

### Training Data

The training data comes from a collection of [Bulgarian grammar mistakes](https://huggingface.co/datasets/thebogko/bulgarian-grammar-mistakes), which contains 7.59k rows of data spanning four different types of grammar errors:

1) **Misuse of articles**
2) **Misuse of pronouns**
3) Incorrect appending of 'me' for plural verbs in the first person
4) Disagreement between nouns and adjectives in terms of grammatical gender and number

Only the first two were used in the fine-tuning of this model; the rationale is that these two types of errors are much more common overall (especially among native Bulgarian speakers), and restricting to them allows the model to focus on these. After filtering to only these two types, 3090 pairs remain, which were then split into training/validation/test (72/18/10). With this split there are 2224 training pairs.

### Training Procedure

The standard fine-tuning procedure was applied: batches were created from the training samples, the model was evaluated after each epoch, and the model weights were optimised using cross-entropy loss.

#### Training Hyperparameters

A grid search was applied to find the best learning rate, epoch number, weight decay and batch size. The grid searched is as follows:

```
gridSpace = {
    'batch_size': [4, 8],
    'lr_rate': [0.002, 0.0002, 0.00002],
    'w_decay': [0.1, 0.01, 0.001]
}
```

along with an epoch number from 1 to 16. The setup chosen at the end of the experimentation stage was:

1) **batch_size**: 8
2) **learning_rate**: 0.0002
3) **weight_decay**: 0.01
4) **epoch number**: 4

The grid search was performed 3 separate times, and this setup resulted in the lowest average validation loss of 0.01431.

## Evaluation

Evaluation was performed against three other models:

- a bespoke RNN encoder-decoder model with attention
- the [gpt3.5 Turbo model](https://platform.openai.com/docs/models/gpt-3-5-turbo) by [OpenAI](https://openai.com)
- the [BgGPT model](https://huggingface.co/INSAIT-Institute/BgGPT-7B-Instruct-v0.1) by [INSAIT](https://insait.ai)

### Testing Data, Factors & Metrics

#### Testing Data

The testing data consists of 309 pairs, taken from the original train/validation/test split of (72/18/10) over the 3090 pairs.

#### Metrics

The models were evaluated using recall, precision, f1 score, f0.5 score and BLEU.

### Results

The results are averaged over the testing pairs.

**mt5-base finetuned bulgarian-grammar-mistakes**:
- precision: **0.6812**
- recall: **0.6861**
- f1 score: **0.6828**
- f0.5 score: **0.6818**
- BLEU: **0.9623**

**gpt3.5 Turbo**
- precision: 0.3751
- recall: 0.6052
- f1 score: 0.4331
- f0.5 score: 0.3934
- BLEU: 0.7666

**BgGPT**
- precision: 0.3307
- recall: 0.5987
- f1 score: 0.3934
- f0.5 score: 0.3503
- BLEU: 0.7110

**RNN encoder-decoder model with attention**
- precision: 0.1717
- recall: 0.2362
- f1 score: 0.1820
- f0.5 score: 0.1748
- BLEU: 0.2087

#### Summary

The evaluation shows that the fine-tuned model outperforms all other models across the chosen metrics, particularly precision. This implies that the model's strength lies in ensuring that the corrections it makes are, in fact, valid, as opposed to the other models, all of which exhibit a recall value much higher than their respective precision.
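For reference, the corpus-level BLEU figure can be reproduced along the following lines. This is a minimal sketch only: it assumes the sacrebleu library and in-memory lists of model corrections and gold corrections for the 309 test pairs, and it is not the exact evaluation script behind the numbers above.

```
import sacrebleu

# Hypothetical placeholder lists: the model's corrections and the gold
# corrected sentences from the test split (not shipped with this card).
predictions = ['Владетелят умря още млад.']
references = ['Владетелят умря още млад.']

# sacrebleu reports BLEU on a 0-100 scale; divide by 100 to match the
# 0-1 scale used in the results above.
bleu = sacrebleu.corpus_bleu(predictions, [references])
print(bleu.score / 100)
```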