---
language: hi
---

# Releasing Hindi ELECTRA model
This is a first attempt at a Hindi language model trained with Google Research's [ELECTRA](https://github.com/google-research/electra). **I don't modify ELECTRA until we get into finetuning.**

<a href="https://colab.research.google.com/drive/1R8TciRSM7BONJRBc9CBZbzOmz39FTLl_">Tokenization and training CoLab</a>

<a href="https://medium.com/@mapmeld/teaching-hindi-to-electra-b11084baab81">Blog post</a>

I was greatly influenced by: https://huggingface.co/blog/how-to-train
## Corpus

Download: https://drive.google.com/drive/folders/1SXzisKq33wuqrwbfp428xeu_hDxXVUUu?usp=sharing

The corpus is two files:
- Hindi CommonCrawl deduped by OSCAR https://traces1.inria.fr/oscar/
- latest Hindi Wikipedia ( https://dumps.wikimedia.org/hiwiki/ ) + WikiExtractor to txt (the two files are combined in the sketch below)
Bonus notes:
- Adding English wiki text or parallel corpus could help with cross-lingual tasks and training
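
To make the preprocessing concrete, here is a minimal sketch of concatenating the two downloads into a single pretraining file. The file names are placeholders for your own paths; the blank line between sources keeps a document boundary, which matters for the TF Records step below.

```python
# Concatenate the two corpus files into one pretraining text file.
# File names are hypothetical; substitute your own paths.
corpus_files = ["hi_oscar_dedup.txt", "hiwiki_extracted.txt"]

with open("hi_corpus.txt", "w", encoding="utf-8") as out:
    for path in corpus_files:
        with open(path, encoding="utf-8") as f:
            for line in f:
                out.write(line)
        out.write("\n")  # blank line keeps the sources as separate documents
```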
## Vocabulary

https://drive.google.com/file/d/1-6tXrii3tVxjkbrpSJE9MOG_HhbvP66V/view?usp=sharing
Bonus notes:
- Created with HuggingFace Tokenizers; the vocab could be longer or shorter, so review ELECTRA's vocab_size param (see the sketch below)
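
A minimal sketch of building a BERT-style vocab.txt with HuggingFace Tokenizers, assuming the combined corpus file from above. The vocab_size of 30000 is only an illustration, and for Devanagari you likely want strip_accents off so combining marks survive.

```python
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(
    lowercase=False,      # Devanagari has no case
    strip_accents=False,  # keep matras / combining marks intact
)
tokenizer.train(
    files=["hi_corpus.txt"],   # hypothetical path from the corpus step
    vocab_size=30000,          # assumption; mirror it in the ELECTRA config
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
tokenizer.save_model(".")  # writes vocab.txt
```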
## Pretrain TF Records
[build_pretraining_dataset.py](https://github.com/google-research/electra/blob/master/build_pretraining_dataset.py) splits the corpus into training documents
Set the ELECTRA model size and whether to split the corpus by newlines. This process can take hours on its own.
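
A sketch of the invocation, with flag names taken from the ELECTRA README and paths assuming the "trainer" layout described under Training; as I read the script, --blanks-separate-docs controls whether blank lines mark document boundaries.

```python
# Hypothetical paths; flags follow the ELECTRA repo's README.
import subprocess

subprocess.run([
    "python3", "build_pretraining_dataset.py",
    "--corpus-dir", "trainer/corpus",             # dir holding the .txt corpus
    "--vocab-file", "trainer/vocab.txt",
    "--output-dir", "trainer/pretrain_tfrecords",
    "--max-seq-length", "128",
    "--num-processes", "5",
    "--blanks-separate-docs", "True",  # blank lines as document boundaries
    "--no-lower-case",                 # match the tokenizer's casing choice
], check=True)
```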
https://drive.google.com/drive/u/1/folders/1--wBjSH59HSFOVkYi4X-z5bigLnD32R5
Bonus notes:
- I am not sure what the newline split of the corpus means (what is the alternative?), or which setting creates the better training docs for this corpus
## Training
Structure your files, with the data dir named "trainer" here.

CoLab notebook gives examples of GPU vs. TPU setup

Pretraining hyperparameters are defined in [configure_pretraining.py](https://github.com/google-research/electra/blob/master/configure_pretraining.py)
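
As a sketch, pretraining is launched through run_pretraining.py, with the defaults from configure_pretraining.py overridden by a JSON --hparams string; the model name and hparam values below are assumptions.

```python
import json
import subprocess

hparams = {
    "model_size": "small",  # assumption: small is friendlier to one GPU/TPU
    "vocab_size": 30000,    # must match the tokenizer's vocab.txt
}
subprocess.run([
    "python3", "run_pretraining.py",
    "--data-dir", "trainer",          # the data dir named "trainer" above
    "--model-name", "hindi_electra",  # hypothetical model name
    "--hparams", json.dumps(hparams),
], check=True)
```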
Model: https://drive.google.com/drive/folders/1cwQlWryLE4nlke4OixXA7NK8hzlmUR0c?usp=sharing
## Using this model with Transformers
Sample movie reviews classifier: https://colab.research.google.com/drive/1mSeeSfVSOT7e-dVhPlmSsQRvpn6xC05w
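
For a minimal starting point before that notebook, something like the following should load the model; the hub ID "monsoon-nlp/hindi-bert" is my assumption about where these weights are published, and the classification head is randomly initialized until you finetune it.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "monsoon-nlp/hindi-bert"  # assumed hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

inputs = tokenizer("यह फ़िल्म बहुत अच्छी थी", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)  # untrained head: finetune on movie reviews first
```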