---
language: pl
thumbnail: https://raw.githubusercontent.com/kldarek/polbert/master/img/polbert.png
---

# Polbert - Polish BERT
The Polish version of the BERT language model is here! It is now available in two variants, cased and uncased, both of which can be downloaded and used via the HuggingFace transformers library. I recommend using the cased model; more info on the differences and benchmark results is below.

![PolBERT image](https://raw.githubusercontent.com/kldarek/polbert/master/img/polbert.png)

## Cased and uncased variants

* I initially trained the uncased model; the corpus and training details are referenced below. Here are some issues I found after I published the uncased model:
    * Some Polish characters and accents are not tokenized correctly through the BERT tokenizer when lowercasing is applied. This doesn't impact sequence classification much, but it may significantly affect token classification tasks (see the tokenization check after this list).
    * I noticed a lot of duplicates in the Open Subtitles dataset, which dominates the training corpus.
    * I didn't use Whole Word Masking.
* The cased model improves on the uncased model in the following ways:
    * All Polish characters and accents should now be tokenized correctly.
    * I removed duplicates from the Open Subtitles dataset. The corpus is smaller, but more balanced now.
    * The model is trained with Whole Word Masking.

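A quick way to see the tokenization difference is to compare both tokenizers on a word with Polish diacritics. This is only a minimal sketch; the example word is arbitrary and the exact subword splits depend on the released vocabularies:

```python
from transformers import BertTokenizer

# Load the vocabularies of both released variants.
uncased = BertTokenizer.from_pretrained("dkleczek/bert-base-polish-uncased-v1")
cased = BertTokenizer.from_pretrained("dkleczek/bert-base-polish-cased-v1")

# A word full of Polish diacritics; inspect how each tokenizer splits it.
word = "zażółcić"
print("uncased:", uncased.tokenize(word))
print("cased:  ", cased.tokenize(word))
```
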
## Pre-training corpora

Below is the list of corpora used, along with the output of the `wc` command (counting lines, words and characters). These corpora were divided into sentences with srxsegmenter (see references), concatenated and tokenized with the HuggingFace BERT Tokenizer.

### Uncased

| Corpus | Lines | Words | Characters |
| ------------- |--------------:| -----:| -----:|
| [Polish subset of Open Subtitles](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 236635408 | 1431199601 | 7628097730 |
| [Polish subset of ParaCrawl](http://opus.nlpl.eu/ParaCrawl.php) | 8470950 | 176670885 | 1163505275 |
| [Polish Parliamentary Corpus](http://clip.ipipan.waw.pl/PPC) | 9799859 | 121154785 | 938896963 |
| [Polish Wikipedia - Feb 2020](https://dumps.wikimedia.org/plwiki/latest/plwiki-latest-pages-articles.xml.bz2) | 8014206 | 132067986 | 1015849191 |
| Total | 262920423 | 1861093257 | 10746349159 |

### Cased

| Corpus | Lines | Words | Characters |
| ------------- |--------------:| -----:| -----:|
| [Polish subset of Open Subtitles (Deduplicated)](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 41998942 | 213590656 | 1424873235 |
| [Polish subset of ParaCrawl](http://opus.nlpl.eu/ParaCrawl.php) | 8470950 | 176670885 | 1163505275 |
| [Polish Parliamentary Corpus](http://clip.ipipan.waw.pl/PPC) | 9799859 | 121154785 | 938896963 |
| [Polish Wikipedia - Feb 2020](https://dumps.wikimedia.org/plwiki/latest/plwiki-latest-pages-articles.xml.bz2) | 8014206 | 132067986 | 1015849191 |
| Total | 68283960 | 646479197 | 4543124667 |

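For reference, the Lines, Words and Characters columns above are plain `wc`-style counts. A rough Python equivalent, shown here only as a sketch with a hypothetical `corpus.txt` path, would be:

```python
# Count lines, words and characters in a corpus file (wc-style).
lines = words = chars = 0
with open("corpus.txt", encoding="utf-8") as f:
    for line in f:
        lines += 1
        words += len(line.split())
        chars += len(line)
print(lines, words, chars)
```
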
## Pre-training details

### Uncased

* Polbert was trained with the code provided in Google BERT's GitHub repository (https://github.com/google-research/bert).
* The currently released model follows the bert-base-uncased architecture (12 layers, 768 hidden units, 12 heads, 110M parameters).
* Training set-up: 1 million training steps in total:
    * 100,000 steps - 128 sequence length, batch size 512, learning rate 1e-4 (10,000 warmup steps)
    * 800,000 steps - 128 sequence length, batch size 512, learning rate 5e-5
    * 100,000 steps - 512 sequence length, batch size 256, learning rate 2e-5
* The model was trained on a single Google Cloud TPU v3-8.

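As a quick sanity check on the schedule above (a sketch; the numbers are simply copied from the list), the three phases add up to 1 million steps, and you can also estimate how many sequences the model processed:

```python
# (steps, sequence length, batch size) for each uncased pre-training phase.
phases = [(100_000, 128, 512), (800_000, 128, 512), (100_000, 512, 256)]

total_steps = sum(steps for steps, _, _ in phases)
total_sequences = sum(steps * batch for steps, _, batch in phases)
print(total_steps)      # 1000000
print(total_sequences)  # 486400000 sequences processed in total
```
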
### Cased

* Same approach as for the uncased model, with the following differences:
    * Whole Word Masking
    * Training set-up:
        * 100,000 steps - 128 sequence length, batch size 2048, learning rate 1e-4 (10,000 warmup steps)
        * 100,000 steps - 128 sequence length, batch size 2048, learning rate 5e-5
        * 100,000 steps - 512 sequence length, batch size 256, learning rate 2e-5

## Usage
Polbert is released via the [HuggingFace Transformers library](https://huggingface.co/transformers/).

For an example of use as a language model, see [this notebook](/LM_testing.ipynb).

### Uncased

```python
from transformers import BertForMaskedLM, BertTokenizer, pipeline

# Load the uncased model and tokenizer, then predict the masked token.
model = BertForMaskedLM.from_pretrained("dkleczek/bert-base-polish-uncased-v1")
tokenizer = BertTokenizer.from_pretrained("dkleczek/bert-base-polish-uncased-v1")
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"Adam Mickiewicz wielkim polskim {nlp.tokenizer.mask_token} był."):
    print(pred)
# Output:
# {'sequence': '[CLS] adam mickiewicz wielkim polskim poeta był. [SEP]', 'score': 0.47196975350379944, 'token': 26596}
# {'sequence': '[CLS] adam mickiewicz wielkim polskim bohaterem był. [SEP]', 'score': 0.09127858281135559, 'token': 10953}
# {'sequence': '[CLS] adam mickiewicz wielkim polskim człowiekiem był. [SEP]', 'score': 0.0647173821926117, 'token': 5182}
# {'sequence': '[CLS] adam mickiewicz wielkim polskim pisarzem był. [SEP]', 'score': 0.05232388526201248, 'token': 24293}
# {'sequence': '[CLS] adam mickiewicz wielkim polskim politykiem był. [SEP]', 'score': 0.04554257541894913, 'token': 44095}
```

### Cased

```python
from transformers import BertForMaskedLM, BertTokenizer, pipeline

# Same as above, but with the cased model and tokenizer.
model = BertForMaskedLM.from_pretrained("dkleczek/bert-base-polish-cased-v1")
tokenizer = BertTokenizer.from_pretrained("dkleczek/bert-base-polish-cased-v1")
nlp = pipeline('fill-mask', model=model, tokenizer=tokenizer)
for pred in nlp(f"Adam Mickiewicz wielkim polskim {nlp.tokenizer.mask_token} był."):
    print(pred)
# Output:
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim pisarzem był. [SEP]', 'score': 0.5391148328781128, 'token': 37120}
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim człowiekiem był. [SEP]', 'score': 0.11683262139558792, 'token': 6810}
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim bohaterem był. [SEP]', 'score': 0.06021466106176376, 'token': 17709}
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim mistrzem był. [SEP]', 'score': 0.051870670169591904, 'token': 14652}
# {'sequence': '[CLS] Adam Mickiewicz wielkim polskim artystą był. [SEP]', 'score': 0.031787533313035965, 'token': 35680}
```

See the next section for an example of using Polbert in downstream tasks.

## Evaluation
Thanks to Allegro, we now have the [KLEJ benchmark](https://klejbenchmark.com/leaderboard/), a set of nine evaluation tasks for Polish language understanding. The following results were achieved by running the standard set of evaluation scripts (no tricks!) with both the cased and uncased variants of Polbert.

| Model | Average | NKJP-NER | CDSC-E | CDSC-R | CBD | PolEmo2.0-IN | PolEmo2.0-OUT | DYK | PSC | AR |
| ------------- |--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|--------------:|
| Polbert cased | 81.7 | 93.6 | 93.4 | 93.8 | 52.7 | 87.4 | 71.1 | 59.1 | 98.6 | 85.2 |
| Polbert uncased | 81.4 | 90.1 | 93.9 | 93.5 | 55.0 | 88.1 | 68.8 | 59.4 | 98.8 | 85.4 |

Note how the uncased model performs better than the cased one on some tasks. My guess is that this is caused by the oversampling of the Open Subtitles dataset and its similarity to the data in some of these tasks. All these benchmark tasks are sequence classification, so the relative strength of the cased model is less visible here.

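For downstream use, a typical starting point is fine-tuning Polbert for sequence classification. The snippet below is only a minimal sketch, not the KLEJ evaluation scripts: the example texts, labels, label count and hyperparameters are placeholders, and it assumes a reasonably recent version of transformers and PyTorch.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Placeholder data: in practice, load a real task dataset (e.g. a KLEJ task).
texts = ["Świetny film, polecam!", "Strata czasu."]
labels = torch.tensor([1, 0])  # hypothetical binary sentiment labels

tokenizer = BertTokenizer.from_pretrained("dkleczek/bert-base-polish-cased-v1")
model = BertForSequenceClassification.from_pretrained(
    "dkleczek/bert-base-polish-cased-v1", num_labels=2
)

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One illustrative training step: forward pass, loss, backward pass, update.
model.train()
loss = model(**inputs, labels=labels)[0]  # first output is the loss when labels are passed
loss.backward()
optimizer.step()
```
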
## Bias
The data used to train the model is biased. It may reflect stereotypes related to gender, ethnicity, etc. Please be careful when using the model for downstream tasks: take these biases into account and mitigate them.

## Acknowledgements
* I'd like to express my gratitude to the Google [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) for providing free TPU credits - thank you!
* I also appreciate the help from Timo Möller from [deepset](https://deepset.ai), who shared tips and scripts based on their experience training German BERT.
* Big thanks to Allegro for releasing the KLEJ Benchmark, and specifically to Piotr Rybak for help with the evaluation and for pointing out some issues with the tokenization.
* Finally, thanks to Rachel Thomas, Jeremy Howard and Sylvain Gugger from [fastai](https://www.fast.ai) for their NLP and Deep Learning courses!

## Author
Darek Kłeczek - contact me on Twitter [@dk21](https://twitter.com/dk21)

## References
* https://github.com/google-research/bert
* https://github.com/narusemotoki/srx_segmenter
* SRX rules file for sentence splitting in Polish, written by Marcin Miłkowski: https://raw.githubusercontent.com/languagetool-org/languagetool/master/languagetool-core/src/main/resources/org/languagetool/resource/segment.srx
* [KLEJ benchmark](https://klejbenchmark.com/leaderboard/)