Update README.md
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base configuration and now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch format.
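Since the variants are hosted on the HuggingFace hub, any of them can be loaded for masked-token prediction with the `transformers` library. A minimal sketch, assuming the repo id `Ebtihal/AraBertMo_base_V8` and the standard `fill-mask` pipeline (the repo id is an assumption here; substitute whichever variant you need):

```python
from transformers import pipeline

# Load one AraBERTMo variant from the hub for masked-token prediction.
# "Ebtihal/AraBertMo_base_V8" is an assumed repo id; any of the 10 variants works.
fill_mask = pipeline("fill-mask", model="Ebtihal/AraBertMo_base_V8")

# BERT-style models use the [MASK] token; the pipeline returns the top candidates.
predictions = fill_mask("السلام عليكم ورحمة [MASK] وبركاته")
for p in predictions:
    print(p["token_str"], round(p["score"], 4))
```

Each prediction is a dict with the filled token (`token_str`), its probability (`score`), and the completed sequence.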

## Pretraining Corpus
The `AraBertMo_base_V8` model was pre-trained on ~3 million words from [OSCAR](https://traces1.inria.fr/oscar/), Arabic version "unshuffled_deduplicated_ar".

## Training results
This model achieves the following results: