---
license: cc-by-sa-4.0
pipeline_tag: fill-mask
---
# Model Card for Silesian HerBERT Base

Silesian HerBERT Base is a [HerBERT Base](https://huggingface.co/allegro/herbert-base-cased) model with a Silesian tokenizer, fine-tuned on Silesian Wikipedia.

## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ipipan/silesian-herbert-base")
model = AutoModel.from_pretrained("ipipan/silesian-herbert-base")

# Tokenize a Silesian sentence and run it through the encoder
# to obtain contextual embeddings.
inputs = tokenizer(
    "Wielgŏ Piyramida we Gizie, mianowanŏ tyż Piyramida ôd Cheopsa, to je nojsrogszŏ a nojbarzij znanŏ ze egipskich piyramid we Gizie.",
    padding="longest",
    add_special_tokens=True,
    return_tensors="pt",
)
output = model(**inputs)
```
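
Since the card's `pipeline_tag` is `fill-mask`, the model can also be queried for masked-token predictions. The snippet below is a minimal sketch, not an official example: the masked sentence is adapted from the one above, and the mask token is read from the model's own tokenizer rather than hard-coded.
```python
from transformers import pipeline

# Build a fill-mask pipeline on the same checkpoint.
fill_mask = pipeline("fill-mask", model="ipipan/silesian-herbert-base")

# Illustrative masked sentence; the mask token comes from the
# model's tokenizer, so it always matches the vocabulary.
masked = f"Wielgŏ Piyramida we Gizie to je nojsrogszŏ ze egipskich {fill_mask.tokenizer.mask_token}."

for prediction in fill_mask(masked, top_k=5):
    print(prediction["token_str"], round(prediction["score"], 3))
```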

## License
CC BY-SA 4.0

## Citation
If you use this model, please cite the following paper:
```
@inproceedings{rybak-2024-transferring-bert,
    title = "Transferring {BERT} Capabilities from High-Resource to Low-Resource Languages Using Vocabulary Matching",
    author = "Rybak, Piotr",
    editor = "Calzolari, Nicoletta  and
      Kan, Min-Yen  and
      Hoste, Veronique  and
      Lenci, Alessandro  and
      Sakti, Sakriani  and
      Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lrec-main.1456",
    pages = "16745--16750",
    abstract = "Pre-trained language models have revolutionized the natural language understanding landscape, most notably BERT (Bidirectional Encoder Representations from Transformers). However, a significant challenge remains for low-resource languages, where limited data hinders the effective training of such models. This work presents a novel approach to bridge this gap by transferring BERT capabilities from high-resource to low-resource languages using vocabulary matching. We conduct experiments on the Silesian and Kashubian languages and demonstrate the effectiveness of our approach to improve the performance of BERT models even when the target language has minimal training data. Our results highlight the potential of the proposed technique to effectively train BERT models for low-resource languages, thus democratizing access to advanced language understanding models.",
}
```

## Authors
The model was created by Piotr Rybak from the [Linguistic Engineering Group at the Institute of Computer Science, Polish Academy of Sciences](http://zil.ipipan.waw.pl/).

This work was supported by the European Regional Development Fund as a part of 2014–2020 Smart Growth Operational Programme, CLARIN — Common Language Resources and Technology Infrastructure, project no. POIR.04.02.00-00C002/19.