---
language:
- ar
- dz

tags:
- pytorch
- bert
- multilingual
- ar
- dz

license: apache-2.0

widget:
- text: " أنا من الجزائر من ولاية [MASK] "
- text: "rabi [MASK] khouya sami"
- text: " ربي [MASK] خويا لعزيز"
- text: "tahya el [MASK]."
- text: "rouhi ya dzayer [MASK]"

inference: true
---

<img src="https://raw.githubusercontent.com/alger-ia/dziribert/main/dziribert_drawing.png" alt="drawing" width="25%" height="25%" align="right"/>

# DzarbiBert

DzarbiBert is a pruned version of DziriBERT, the first Transformer-based language model pre-trained specifically for the Algerian dialect. It handles Algerian text written in both Arabic and Latin characters. It sets new state-of-the-art results on Algerian text classification datasets, even though it was pre-trained on far less data (~1 million tweets).

For more information, please see the original DziriBERT paper: https://arxiv.org/pdf/2109.12346.pdf.

## How to use

```python
from transformers import BertTokenizer, BertForMaskedLM

# Load the tokenizer and masked language model from the Hugging Face Hub
tokenizer = BertTokenizer.from_pretrained("Sifal/DzarbiBert")
model = BertForMaskedLM.from_pretrained("Sifal/DzarbiBert")
```
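As a quick sketch, the model can also be queried through the `fill-mask` pipeline, here with one of the widget examples from this card:

```python
from transformers import pipeline

# Build a fill-mask pipeline around the model hosted on the Hub
fill_mask = pipeline("fill-mask", model="Sifal/DzarbiBert")

# Predict candidates for the [MASK] token in an Algerian-dialect sentence
for prediction in fill_mask("rouhi ya dzayer [MASK]"):
    print(prediction["token_str"], round(prediction["score"], 3))
```

Each prediction is a dictionary containing the candidate token (`token_str`), its score, and the completed sequence.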

## Limitations

The pre-training data used in this project comes from social media (Twitter). As a result, the masked language modeling objective may predict offensive words in some situations. Modeling such words can be either an advantage (e.g., when training a hate speech detection model) or a disadvantage (e.g., when the generated text is sent directly to the end user). Depending on your downstream task, you may need to filter out such words, especially when returning automatically generated text to the end user.
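One simple mitigation is to over-generate candidates and discard any that appear in a blocklist. The sketch below assumes a hypothetical `BLOCKLIST` set (a placeholder, not part of this project); in practice you would substitute a real offensive-word lexicon for the dialect:

```python
from transformers import pipeline

# Placeholder blocklist: replace with a real offensive-word lexicon
BLOCKLIST = {"example_offensive_word"}

fill_mask = pipeline("fill-mask", model="Sifal/DzarbiBert")

def safe_fill(text, top_k=5):
    # Request extra candidates so filtering still leaves up to top_k results
    preds = fill_mask(text, top_k=top_k + len(BLOCKLIST))
    # Drop any prediction whose token is blocklisted, then trim to top_k
    return [p for p in preds if p["token_str"].strip() not in BLOCKLIST][:top_k]
```

This only filters single predicted tokens; offensive multi-word expressions would need a more elaborate check on the completed `sequence` field.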