Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

gpt2-small-arabic - bnb 4bits
- Model creator: https://huggingface.co/akhooli/
- Original model: https://huggingface.co/akhooli/gpt2-small-arabic/
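
For reference, a minimal sketch of 4-bit loading with `transformers` and `bitsandbytes`. It quantizes the original checkpoint on the fly rather than naming this upload's repo id; `transformers`, `accelerate`, `bitsandbytes`, and a CUDA GPU are assumed, and the Arabic prompt is an arbitrary example:

```python
# Minimal sketch: load akhooli/gpt2-small-arabic in 4-bit NF4 via bitsandbytes.
# Assumes transformers, accelerate, and bitsandbytes are installed and a CUDA
# GPU is available; the fp32 checkpoint is quantized at load time.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "akhooli/gpt2-small-arabic"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Generate a short continuation of an arbitrary Arabic prompt.
inputs = tokenizer("كان يا ما كان", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```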

Original model description:
---
language: "ar"
datasets:
- Arabic Wikipedia
metrics:
- none
---

# GPT2-Small-Arabic

## Model description

A GPT-2 model pretrained on the Arabic Wikipedia dataset, based on gpt2-small (using Fastai2).

## Intended uses & limitations

#### How to use

An example is provided in this [Colab notebook](https://colab.research.google.com/drive/1mRl7c-5v-Klx27EEAEOAbrfkustL4g7a?usp=sharing).
Both text generation and poetry generation (with a fine-tuned model) are included.
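
For a quick local try-out in full precision, a minimal `pipeline` sketch (the Arabic prompt is an arbitrary example, not taken from the notebook):

```python
# Minimal sketch: generate Arabic text with the full-precision model,
# assuming only the transformers library is installed.
from transformers import pipeline

generator = pipeline("text-generation", model="akhooli/gpt2-small-arabic")

# Arbitrary Arabic prompt ("once upon a time"); sampling varies the output.
result = generator("كان يا ما كان", max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])
```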

#### Limitations and bias

GPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipedia quality, no diacritics) and training performance.
Use it as a demonstration or proof of concept, not as production code.

## Training data

This pretrained model used the Arabic Wikipedia dump (around 900 MB).
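
The card does not name the exact dump, but a comparable snapshot can be pulled with the `datasets` library; the `wikimedia/wikipedia` repo and the `20231101.ar` config below are assumptions, not the author's original source:

```python
# Hypothetical sketch: load an Arabic Wikipedia snapshot from the Hub.
# "20231101.ar" is an assumed stand-in for the dump actually used in training.
from datasets import load_dataset

wiki_ar = load_dataset("wikimedia/wikipedia", "20231101.ar", split="train")
print(wiki_ar[0]["title"])
print(wiki_ar[0]["text"][:200])
```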

## Training procedure

Training was done with the [Fastai2](https://github.com/fastai/fastai2/) library on Kaggle, using a free GPU.

## Eval results

Final perplexity reached was 72.19 (loss: 4.28, accuracy: 0.307).
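
As a sanity check, perplexity is the exponential of the cross-entropy loss, and exp(4.28) ≈ 72.2, consistent with the reported 72.19:

```python
# Perplexity equals exp(cross-entropy loss).
import math
print(math.exp(4.28))  # ~72.24, matching the reported perplexity of 72.19
```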

### BibTeX entry and citation info

```bibtex
@inproceedings{abed_khooli_2020,
  author = {Abed Khooli},
  year   = {2020}
}
```