Logion: Machine Learning for Greek Philology
The most advanced Ancient Greek BERT model trained to date! Read the paper on arXiv by Charlie Cowen-Breen, Creston Brooks, Johannes Haubold, and Barbara Graziosi.
We train a WordPiece tokenizer (with a vocab size of 50,000) on a corpus of over 70 million words of premodern Greek. Using this tokenizer and the same corpus, we train a BERT model.
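For context, a tokenizer of this kind can be trained with the Hugging Face tokenizers library. The following is a minimal sketch under stated assumptions, not the project's actual training script: the corpus file name and output directory are placeholders, and the settings for lowercasing and accent-stripping are our own suggestion for preserving polytonic Greek.

from tokenizers import BertWordPieceTokenizer

# Minimal sketch (not the actual training script): train a WordPiece
# tokenizer with a 50,000-token vocabulary on a plain-text corpus.
# "premodern_greek.txt" is a placeholder for the 70M-word corpus;
# disabling lowercasing and accent-stripping preserves polytonic diacritics.
tokenizer = BertWordPieceTokenizer(lowercase=False, strip_accents=False)
tokenizer.train(files=["premodern_greek.txt"], vocab_size=50_000)
tokenizer.save_model("logion-50k-wordpiece")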
Further information on this project and code for error detection can be found on GitHub.
We're adding more models trained with cleaner data and different tokenizations; keep an eye out!
How to use
Requirements:
pip install transformers
Load the model and tokenizer directly from the Hugging Face Model Hub:
from transformers import BertTokenizer, BertForMaskedLM

# Load the 50k-WordPiece tokenizer and the masked-language model from the Hub
tokenizer = BertTokenizer.from_pretrained("cabrooks/LOGION-50k_wordpiece")
model = BertForMaskedLM.from_pretrained("cabrooks/LOGION-50k_wordpiece")
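Once loaded, the model can be used for masked-token prediction via the standard transformers fill-mask pipeline. The sketch below reuses the model and tokenizer loaded above; the example sentence (the opening of John 1:1) and the top_k value are only illustrative, and outputs will depend on the model.

from transformers import pipeline

# Illustrative sketch: rank candidate tokens for a masked position.
# Any premodern Greek text containing one [MASK] token works here;
# the example sentence (John 1:1) is our own choice.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for pred in fill_mask("ἐν ἀρχῇ ἦν ὁ [MASK].", top_k=5):
    print(pred["token_str"], f'{pred["score"]:.3f}')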
Cite
If you use this model in your research, please cite the paper:
@inproceedings{logion-base,
  author = {Cowen-Breen, Charlie and Brooks, Creston and Haubold, Johannes and Graziosi, Barbara},
  title  = {Logion: Machine Learning for Greek Philology},
  year   = {2023},
  url    = {https://arxiv.org/abs/2305.01099}
}