---
language:
- sv
---

# Megatron-BERT-large Swedish 165k

This BERT model was trained using the Megatron-LM library.
The model is a regular BERT-large with 340M parameters.
It was trained on about 70GB of data, consisting mostly of OSCAR and Swedish newspaper text curated by the National Library of Sweden.

Training was done for 165k training steps using a batch size of 8k; the total number of training steps is set to 500k, so this version is an intermediate checkpoint.
The hyperparameters for training followed the settings used for RoBERTa.
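
For convenience, here is a minimal usage sketch with the Hugging Face `transformers` library. It assumes the checkpoint is published in the standard Hugging Face format under the repository ID `KBLab/megatron-bert-large-swedish-cased-165k`, which is inferred from the naming of the sister models listed below; check the model page for the exact ID.

```python
# Minimal usage sketch (not part of the original card).
# The repository ID below is assumed from the naming of the related
# KBLab checkpoints; adjust it if the actual repository differs.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_id = "KBLab/megatron-bert-large-swedish-cased-165k"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# The checkpoint is a masked language model, so fill-mask is a natural sanity check.
unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(unmasker("Huvudstaden i Sverige är [MASK]."))
```

For downstream tasks, the same checkpoint can be loaded with, for example, `AutoModelForSequenceClassification` and fine-tuned as usual.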

The model has three sister models trained on the same dataset:
- [🤗 BERT Swedish](https://huggingface.co/KBLab/bert-base-swedish-cased-new)
- [Megatron-BERT-base-600k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-600k)
- [Megatron-BERT-base-125k](https://huggingface.co/KBLab/megatron-bert-base-swedish-cased-125k)

and an earlier checkpoint of this model:
- [Megatron-BERT-large-110k](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-110k)

## Acknowledgements

We gratefully acknowledge the HPC RIVR consortium (https://www.hpc-rivr.si) and EuroHPC JU (https://eurohpc-ju.europa.eu) for funding this research by providing computing resources of the HPC system Vega at the Institute of Information Science (https://www.izum.si).