Update README.md
README.md CHANGED
```diff
@@ -41,4 +41,4 @@ model = BertForMaskedLM.from_pretrained("Sifal/DzarbiBert")
 
 ## Limitations
 
-The pre-training data used in
+The pre-training data used in the base model comes from social media (Twitter). The masked language modeling objective may therefore predict offensive words in some situations. Modeling these words can be either an advantage (e.g. when training a hate-speech detection model) or a disadvantage (e.g. when generating answers that are sent directly to the end user). Depending on your downstream task, you may need to filter out such words, especially when returning automatically generated text to the end user.
```
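The added Limitations paragraph recommends filtering offensive predictions before returning generated text to an end user. Below is a minimal sketch of such a post-filtering step. It assumes predictions shaped like the output of a `transformers` fill-mask pipeline (a list of dicts with `token_str` and `score` keys); the blocklist entries and the `filter_predictions` helper are hypothetical, not part of the model card.

```python
# Hypothetical post-filtering of masked-LM predictions before they are
# shown to an end user. The blocklist contents are placeholders; real
# predictions would come from e.g. a transformers fill-mask pipeline
# loaded from "Sifal/DzarbiBert".

OFFENSIVE_BLOCKLIST = {"badword1", "badword2"}  # placeholder entries


def filter_predictions(predictions):
    """Drop candidate tokens whose text appears in the blocklist.

    `predictions` mimics fill-mask pipeline output: a list of dicts
    with `token_str` (the predicted token) and `score` keys.
    """
    return [
        p for p in predictions
        if p["token_str"].lower() not in OFFENSIVE_BLOCKLIST
    ]


# Example with mocked pipeline output:
candidates = [
    {"token_str": "badword1", "score": 0.41},
    {"token_str": "friend", "score": 0.33},
]
safe = filter_predictions(candidates)
print([p["token_str"] for p in safe])  # → ['friend']
```

Whether such a blocklist is appropriate depends on the downstream task: for a hate-speech detector, keeping these tokens is the point; for user-facing generation, filtering (or a more robust toxicity classifier) is advisable.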