Update README.md
README.md CHANGED
@@ -4,9 +4,9 @@ This is a Latin RoBERTa-based LM model, version 2.
 
 The intention of the Transformer-based LM is twofold: on the one hand, it will be used for the evaluation of HTR results; on the other, it should be used as a decoder for the TrOCR architecture.
 
-The training data is the same data as has been used by [Bamman and Burns (2020)](https://arxiv.org/pdf/2009.10053.pdf), although more heavily filtered (see below)
+The training data is more or less the same data as was used by [Bamman and Burns (2020)](https://arxiv.org/pdf/2009.10053.pdf), although more heavily filtered (see below). Several of the texts are digital-born, from online Latin archives; other Latin texts were crawled by [Bamman and Smith](https://www.cs.cmu.edu/~dbamman/latin.html) and thus contain many OCR errors.
 
-The overall corpus contains
+The overall downsampled corpus contains 577 MB of text data.
 
 ### Preprocessing
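For readers unfamiliar with the second use case, here is a minimal sketch, assuming the Hugging Face `transformers` `VisionEncoderDecoderModel` API, of how a RoBERTa checkpoint is typically plugged in as a TrOCR-style text decoder. The checkpoint ids are placeholders, not the actual Latin model:

```python
# A minimal sketch, not the authors' code, of wiring a RoBERTa checkpoint in
# as the text decoder of a TrOCR-style model via Hugging Face transformers.
# Both checkpoint ids below are placeholder assumptions; swap in the actual
# Latin RoBERTa repo id for "roberta-base".
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

encoder_id = "google/vit-base-patch16-384"  # assumed vision encoder, not specified in the card
decoder_id = "roberta-base"                 # placeholder for the Latin RoBERTa LM

# Builds an image-to-text model whose decoder cross-attention is randomly
# initialized, so the combined model still needs fine-tuning on line images
# paired with transcriptions before it produces useful output.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id)

tokenizer = AutoTokenizer.from_pretrained(decoder_id)
image_processor = AutoImageProcessor.from_pretrained(encoder_id)

# Required config for generation with an encoder-decoder assembled this way;
# RoBERTa uses <s> as its BOS/CLS token and </s> as its EOS/SEP token.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
```

For the first use case, evaluating HTR output, the checkpoint would instead be loaded as a plain masked LM (e.g. via `RobertaForMaskedLM`) and used to score transcriptions, for instance with pseudo-perplexity.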