# Logion: Machine Learning for Greek Philology
The most advanced Ancient Greek BERT model trained to date! Read the paper on [arXiv](https://arxiv.org/abs/2305.01099) by Charlie Cowen-Breen, Creston Brooks, Johannes Haubold, and Barbara Graziosi.
We train a WordPiece tokenizer (with a vocab size of 50,000) on a corpus of over 70 million words of premodern Greek. Using this tokenizer and the same corpus, we train a BERT model.
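For reference, a tokenizer with this configuration can be trained with the HuggingFace `tokenizers` library. The sketch below is an assumption about the setup (the corpus path and output directory are placeholders), not the project's exact training script:
```python
from tokenizers import BertWordPieceTokenizer

# Keep case and polytonic accents, which carry meaning in premodern Greek.
tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)

# Placeholder path; the 70M-word corpus itself is not distributed with the model.
tokenizer.train(files=["premodern_greek_corpus.txt"], vocab_size=50000)

# Writes vocab.txt to the given directory (placeholder name).
tokenizer.save_model("logion-tokenizer")
```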
Further information on this project and code for error detection can be found on [GitHub](https://github.com/charliecb/Logion).
We're adding more models trained with cleaner data and different tokenizations - keep an eye out!
## How to use
Requirements:
```bash
pip install transformers
```
Load the model and tokenizer directly from the HuggingFace Model Hub:
```python
from transformers import BertTokenizer, BertForMaskedLM

# Load the 50k-vocab WordPiece tokenizer and the masked-LM weights from the Hub.
tokenizer = BertTokenizer.from_pretrained("cabrooks/LOGION-50k_wordpiece")
model = BertForMaskedLM.from_pretrained("cabrooks/LOGION-50k_wordpiece")
```
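Once loaded, the model can be queried as a standard masked language model. A minimal sketch of filling a masked token; the Greek sentence and the top-5 cutoff are illustrative choices, not from the paper:
```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("cabrooks/LOGION-50k_wordpiece")
model = BertForMaskedLM.from_pretrained("cabrooks/LOGION-50k_wordpiece")
model.eval()

# Illustrative premodern Greek input; [MASK] marks the token to predict.
text = "ἐν ἀρχῇ ἦν ὁ [MASK]"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and print its five most probable fillers.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0].tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```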
## Cite
If you use this model in your research, please cite the paper:
```bibtex
@misc{logion-base,
  title={Logion: Machine Learning for Greek Philology},
  author={Cowen-Breen, C. and Brooks, C. and Haubold, J. and Graziosi, B.},
  year={2023},
  eprint={2305.01099},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```