---
language: la
license: apache-2.0
inference: false
---
|
# LaBerta
|
|
|
The paper [Exploring Large Language Models for Classical Philology](https://arxiv.org/abs/2305.13698) is the first effort to systematically provide state-of-the-art language models for Classical Philology. LaBerta is a RoBERTa-base-sized, monolingual, encoder-only variant.
|
|
|
This model was trained on the [Corpus Corporum](https://mlat.uzh.ch/).
|
|
|
Further information can be found in our paper or in our [GitHub repository](https://github.com/Heidelberg-NLP/ancient-language-models).
|
|
|
## Usage
|
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the pre-trained tokenizer and masked-language model
tokenizer = AutoTokenizer.from_pretrained('bowphs/LaBerta')
model = AutoModelForMaskedLM.from_pretrained('bowphs/LaBerta')
```
|
Please check out the awesome Hugging Face tutorials on how to fine-tune our models.
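
As a quick sanity check, you can query the masked-language-modeling head directly. The snippet below is a minimal sketch: the `<mask>` token follows the standard RoBERTa convention, and the Latin example sentence is purely illustrative.

```python
from transformers import pipeline

# Minimal sketch: RoBERTa-style models conventionally use '<mask>'.
fill_mask = pipeline('fill-mask', model='bowphs/LaBerta')

# Illustrative example (the opening of Caesar's De Bello Gallico);
# a well-trained model should rank 'tres' among the top predictions.
for prediction in fill_mask('Gallia est omnis divisa in partes <mask>.'):
    print(prediction['token_str'], prediction['score'])
```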
|
|
|
## Evaluation Results
|
When fine-tuned on part-of-speech (PoS) data from [EvaLatin 2022](https://universaldependencies.org/), LaBerta achieves the following results; a sketch of such a fine-tuning setup follows the table.
|
|
|
| Task | Classical | Cross-genre | Cross-time |
|:--:|:--:|:--:|:--:|
| PoS tagging | 98.11 | 96.73 | 93.33 |
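
Since PoS tagging is a token-classification task, fine-tuning starts from a token-classification head on top of LaBerta. The sketch below makes assumptions beyond this card: the label set is the Universal Dependencies UPOS inventory, and the newly added head is randomly initialized, so it only produces meaningful tags after fine-tuning (e.g., on EvaLatin data).

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed label set for illustration: the 17 UD UPOS tags.
upos = ['ADJ', 'ADP', 'ADV', 'AUX', 'CCONJ', 'DET', 'INTJ', 'NOUN', 'NUM',
        'PART', 'PRON', 'PROPN', 'PUNCT', 'SCONJ', 'SYM', 'VERB', 'X']

tokenizer = AutoTokenizer.from_pretrained('bowphs/LaBerta')

# The classification head is freshly initialized; fine-tune before use.
model = AutoModelForTokenClassification.from_pretrained(
    'bowphs/LaBerta',
    num_labels=len(upos),
    id2label=dict(enumerate(upos)),
    label2id={tag: i for i, tag in enumerate(upos)},
)
```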
|
|
|
## Contact
|
If you have any questions or problems, feel free to [reach out](mailto:riemenschneider@cl.uni-heidelberg.de).
|
|
|
## Citation
|
```bibtex
@incollection{riemenschneiderfrank:2023,
  address = "Toronto, Canada",
  author = "Riemenschneider, Frederick and Frank, Anette",
  booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL’23)",
  note = "to appear",
  publisher = "Association for Computational Linguistics",
  title = "Exploring Large Language Models for Classical Philology",
  url = "https://arxiv.org/abs/2305.13698",
  year = "2023"
}
```