---
license: apache-2.0
datasets:
- thebogko/bulgarian-grammar-mistakes
language:
- bg
pipeline_tag: text2text-generation
---
# mt5-base finetuned bulgarian-grammar-mistakes

<!-- Provide a quick summary of what the model is/does. -->
This model is a fine-tuned checkpoint of mt5-base, trained on [bulgarian-grammar-mistakes](https://huggingface.co/datasets/thebogko/bulgarian-grammar-mistakes) using only two of the four error types:
- article_misuse, and
- pronoun_misuse

This was done so the model can focus on these mistakes more clearly, as they are more common among native Bulgarian speakers, whereas the other two types are more common among Bulgarian learners.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. 
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]-->
- **Model type:** Sequence-to-sequence generation
- **Language(s) (NLP):** Bulgarian
- **License:** Apache 2.0
- **Finetuned from model:** [google/mt5-base](https://huggingface.co/google/mt5-base)

<!--### Model Sources [optional]-->

<!-- Provide the basic links for the model. -->

<!--- **Paper [optional]:** [More Information Needed]-->

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Intended use of the model includes but is not limited to:
  - comparison and development of Bulgarian error correction NLP systems by developers
  - incorporation in Bulgarian language learner applications
  - research in the field of Bulgarian NLP grammar error correction

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
No further fine-tuning is needed, unless additional error types are to be covered.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
The model should not be used to intentionally create hostile or alienating environments for people, especially Bulgarian learners. In addition, the model was not trained to produce factual or true representations of people or events, so using it to generate such content is out of scope for its abilities.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The model specialises in identifying and correcting Bulgarian grammar errors involving article and pronoun misuse, so it will likely not perform well on other types of errors. Additionally, the dataset used for fine-tuning does not cover every possible error of those two types, so the grammatical validity of the model's output should be checked before use.

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users are strongly advised to double-check the validity of the model's outputs and to strive to understand the underlying grammatical rules of the language, rather than accepting the outputs as given.

## How to Get Started with the Model

Use the code below to get started with the model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

tokenizer = AutoTokenizer.from_pretrained("thebogko/mt5-finetuned-bulgarian-grammar-mistakes")
model = AutoModelForSeq2SeqLM.from_pretrained("thebogko/mt5-finetuned-bulgarian-grammar-mistakes")
model.to(device)

erroneous_sentence = 'Владетеля умря още млад.'
encoded_source = tokenizer(erroneous_sentence,
                           return_tensors='pt',
                           max_length=100,
                           padding='max_length').to(device)

output_ids = model.generate(**encoded_source, max_length=100)
correct_sentence = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(correct_sentence)  # "Владетелят умря още млад."
```

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The training data comes from a collection of [Bulgarian grammar mistakes](https://huggingface.co/datasets/thebogko/bulgarian-grammar-mistakes), which contains 7.59k rows of data spanning four different types of grammar errors:
1) **Misuse of articles**
2) **Misuse of pronouns**
3) Incorrect appending of '-ме' to first-person plural verbs
4) Word disagreement between nouns and adjectives in terms of grammatical gender and number

Only the first two were used in the fine-tuning of this model, as the rationale was that these two types of errors are much more common overall (especially with native Bulgarian speakers), and it would allow the model to focus on these.

After filtering for only these two types, 3090 pairs remain, which were then split into training/validation/test sets (72/18/10). This split yields 2224 training pairs.
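The split sizes above can be reproduced with a minimal sketch; the shuffling step and the exact handling of rounding remainders are assumptions, not details from the original setup:

```python
# Compute train/validation/test sizes for a 72/18/10 split of 3090 pairs.
n_pairs = 3090
n_train = int(n_pairs * 0.72)       # 2224 training pairs
n_test = int(n_pairs * 0.10)        # 309 testing pairs
n_val = n_pairs - n_train - n_test  # remainder goes to validation

indices = list(range(n_pairs))      # in practice, shuffle before slicing
train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]

print(n_train, n_val, n_test)
```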

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
The standard fine-tuning procedure was applied: the training samples were batched, the model was evaluated after each epoch, and the model weights were optimised using cross-entropy loss.
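As a reminder of the objective, token-level cross-entropy averages the negative log-probability the model assigns to each reference token. A minimal, framework-free illustration (the probabilities below are made up for demonstration only):

```python
import math

def cross_entropy(probs_of_targets):
    """Average negative log-probability assigned to the reference tokens."""
    return -sum(math.log(p) for p in probs_of_targets) / len(probs_of_targets)

# Probabilities the model assigned to each correct target token (illustrative).
loss = cross_entropy([0.9, 0.8, 0.95])
print(round(loss, 4))  # ≈ 0.1266
```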

#### Training Hyperparameters

A grid search was applied to find the best learning rate, number of epochs, weight decay and batch size. The grid searched is as follows:

```python
gridSpace = {
    'batch_size': [4, 8],
    'lr_rate': [0.002, 0.0002, 0.00002],
    'w_decay': [0.1, 0.01, 0.001]
}
```

An epoch count from 1 to 16 was also searched.
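The size of this search space can be sketched by simple enumeration (the training calls themselves are omitted; in practice, intermediate epoch counts can be evaluated within a single training run):

```python
from itertools import product

grid_space = {
    'batch_size': [4, 8],
    'lr_rate': [0.002, 0.0002, 0.00002],
    'w_decay': [0.1, 0.01, 0.001]
}
epochs = range(1, 17)  # epoch numbers 1 to 16

# Every (batch_size, lr, weight_decay, epochs) configuration in the grid.
combos = list(product(grid_space['batch_size'],
                      grid_space['lr_rate'],
                      grid_space['w_decay'],
                      epochs))
print(len(combos))  # 2 * 3 * 3 * 16 = 288 configurations
```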

The final setup chosen after experimentation was:
  1) **batch_size**: 8
  2) **learning_rate**: 0.0002
  3) **weight_decay**: 0.01
  4) **epoch number**: 4

This grid search was performed 3 separate times, and this setup resulted in the lowest average validation loss: 0.01431.

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->
Evaluation was performed against three other models:
  - bespoke RNN encoder-decoder model with attention
  - [gpt3.5 Turbo model](https://platform.openai.com/docs/models/gpt-3-5-turbo) by [OpenAI](https://openai.com)
  - [BgGPT model](https://huggingface.co/INSAIT-Institute/BgGPT-7B-Instruct-v0.1) by [INSAIT](https://insait.ai)

### Testing Data, Factors & Metrics

#### Testing Data

The testing data consists of 309 pairs, taken from the original train/validation/test split (72/18/10) of the 3090 pairs.

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->
The models were evaluated using precision, recall, F1 score, F0.5 score and BLEU.
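For reference, the F-beta score combines precision and recall as (1 + β²)·P·R / (β²·P + R); F1 weights them equally, while F0.5 weights precision more heavily. A minimal sketch (note that the scores reported below are averaged per test pair, so they need not equal F-scores computed from the averaged precision and recall):

```python
def f_beta(precision, recall, beta):
    """F-beta score: recall is weighted beta times as much as precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.6812, 0.6861, 1.0))  # F1 from the averaged P/R
print(f_beta(0.6812, 0.6861, 0.5))  # F0.5 favours precision
```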

### Results

The results are averaged over the testing pairs.

**mt5-base finetuned bulgarian-grammar-mistakes**:
  - precision: **0.6812**
  - recall: **0.6861**
  - f1 score: **0.6828**
  - f0.5 score: **0.6818**
  - BLEU: **0.9623**

**gpt3.5 Turbo**
  - precision: 0.3751
  - recall: 0.6052
  - f1 score: 0.4331
  - f0.5 score: 0.3934
  - BLEU: 0.7666

**BgGPT**
  - precision: 0.3307
  - recall: 0.5987
  - f1 score: 0.3934
  - f0.5 score: 0.3503
  - BLEU: 0.7110

**RNN encoder-decoder model with attention**
  - precision: 0.1717
  - recall: 0.2362
  - f1 score: 0.1820
  - f0.5 score: 0.1748
  - BLEU: 0.2087

#### Summary

The evaluation shows that the fine-tuned model outperforms all other models across the chosen metrics, particularly in precision. This implies that the model's strength lies in ensuring that the corrections it makes are, in fact, valid, as opposed to the other models, all of which exhibit a recall value much higher than their respective precision.

<!--
## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]-->