## RoBERTa Latin model, version 2 (model card not finished yet)

This is version 2 of a Latin RoBERTa-based language model.

The Transformer-based LM is intended to serve two purposes: on the one hand, it will be used to evaluate HTR results; on the other, it will act as the decoder in a TrOCR architecture.
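As a sketch of the second use case, a RoBERTa-style decoder can be wired into a TrOCR-style `VisionEncoderDecoderModel` from Hugging Face `transformers`. The tiny configuration sizes below are placeholders chosen for illustration, not the actual model's hyperparameters:

```python
import torch
from transformers import (RobertaConfig, ViTConfig,
                          VisionEncoderDecoderConfig, VisionEncoderDecoderModel)

# Tiny placeholder configs (NOT the real model sizes) so the sketch
# runs without downloading any pretrained weights.
encoder_cfg = ViTConfig(hidden_size=32, num_hidden_layers=2,
                        num_attention_heads=2, intermediate_size=64,
                        image_size=32, patch_size=8)
decoder_cfg = RobertaConfig(hidden_size=32, num_hidden_layers=2,
                            num_attention_heads=2, intermediate_size=64,
                            vocab_size=100,
                            is_decoder=True, add_cross_attention=True)

config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(
    encoder_cfg, decoder_cfg)
model = VisionEncoderDecoderModel(config=config)

# One dummy line image in, next-token logits over the decoder vocab out.
pixel_values = torch.randn(1, 3, 32, 32)
decoder_input_ids = torch.tensor([[0, 1, 2]])
out = model(pixel_values=pixel_values, decoder_input_ids=decoder_input_ids)
print(out.logits.shape)  # (batch, sequence length, vocab_size)
```

In practice one would load pretrained weights instead, e.g. via `VisionEncoderDecoderModel.from_encoder_decoder_pretrained(...)`, pairing a pretrained vision encoder with this Latin LM as decoder.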

The training data is largely the same as that used by [Bamman and Burns (2020)](https://arxiv.org/pdf/2009.10053.pdf), although more heavily filtered (see below). It comprises several born-digital texts from online Latin archives, as well as Latin texts crawled by [Bamman and Smith](https://www.cs.cmu.edu/~dbamman/latin.html), which therefore contain many OCR errors.

The overall downsampled corpus contains 577 MB of text data.

### Preprocessing

I undertook the following preprocessing steps:

  - Removal of all "pseudo-Latin" text ("Lorem ipsum ...").
  - Use of [CLTK](http://www.cltk.org) for sentence splitting and normalisation.
  - Retention of only those lines containing letters of the Latin alphabet, numerals, and certain punctuation, via `grep -P '^[A-z0-9ÄÖÜäöüÆ挜ᵫĀāūōŌ.,;:?!\- Ęę]+$' la.nolorem.tok.txt`.
  - Deduplication of the corpus.
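The filtering and deduplication steps can be sketched in Python as follows. This is a minimal illustration of the logic behind the `grep` call above, not the exact pipeline; in particular, the original pattern's `A-z` range is assumed here to have meant `A-Za-z` (in regex, `A-z` also matches a few punctuation characters between `Z` and `a`):

```python
import re

# Character class mirroring the grep pattern above; `A-Za-z` is an
# assumption standing in for the original `A-z` range.
LINE_RE = re.compile(r'^[A-Za-z0-9ÄÖÜäöüÆ挜ᵫĀāūōŌ.,;:?!\- Ęę]+$')

def filter_and_dedup(lines):
    """Keep only lines made up of allowed characters, then deduplicate
    while preserving the order of first occurrence."""
    seen = set()
    result = []
    for line in lines:
        line = line.rstrip('\n')
        if LINE_RE.match(line) and line not in seen:
            seen.add(line)
            result.append(line)
    return result

sample = [
    "Gallia est omnis divisa in partes tres.",
    "Gallia est omnis divisa in partes tres.",   # duplicate -> dropped
    "Arma virumque cano §§",                     # disallowed chars -> dropped
]
print(filter_and_dedup(sample))  # -> ["Gallia est omnis divisa in partes tres."]
```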

The result is a corpus of ~390 million tokens.

The dataset used to train this model is available [HERE](https://huggingface.co/datasets/pstroe/cc100-latin).

### Contact

For questions or comments, reach out to Phillip Ströbel [via mail](mailto:pstroebel@cl.uzh.ch) or [via Twitter](https://twitter.com/CLingophil).