## RoBERTa Latin model, version 3 (model card not finished yet)

This is a Latin RoBERTa-based language model, version 3.
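
A minimal usage sketch with the Hugging Face Transformers fill-mask pipeline; the model id `pstroe/roberta-base-latin-cased3` is taken from the citation below, and the Latin example sentence is only illustrative:

```python
from transformers import pipeline

# Load the model from the Hugging Face Hub.
fill_mask = pipeline("fill-mask", model="pstroe/roberta-base-latin-cased3")

# RoBERTa uses <mask> as its mask token.
for prediction in fill_mask("Gallia est omnis divisa in partes <mask>."):
    print(prediction["token_str"], prediction["score"])
```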

The intention behind this Transformer-based LM is twofold: on the one hand, it will be used to evaluate HTR results; on the other, it is meant to serve as the decoder in the TrOCR architecture.
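
As a sketch of the TrOCR use case, the model can be plugged in as the text decoder of a vision encoder-decoder setup in Transformers; the ViT encoder checkpoint below is an assumption for illustration, not necessarily the encoder the model will be paired with:

```python
from transformers import VisionEncoderDecoderModel, AutoTokenizer

# Pair an image encoder with this RoBERTa model as the text decoder.
# The ViT checkpoint is an assumption; any compatible vision encoder works.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k",
    "pstroe/roberta-base-latin-cased3",
)

tokenizer = AutoTokenizer.from_pretrained("pstroe/roberta-base-latin-cased3")

# Generation needs these token ids set before fine-tuning on HTR line images.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
```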

The training data differs from that used for RoBERTa Base Latin Cased V1 and V2, and therefore also from what was used by [Bamman and Burns (2020)](https://arxiv.org/pdf/2009.10053.pdf). We exclusively used the text from the [Corpus Corporum](https://www.mlat.uzh.ch).

The overall corpus contains 1.5 GB of text data (three times as much as was used for V2, and very likely of better quality).

### Preprocessing

I undertook the following preprocessing steps (a sketch of the language-filtering step follows the list):

- Normalisation of all lines with [CLTK](http://www.cltk.org), incl. sentence splitting
- Language identification with [langid](https://github.com/saffsd/langid.py)
- Retention of only the Latin lines
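
A minimal sketch of the language-filtering step with langid; the exact CLTK normalisation and sentence-splitting calls are omitted here since they are version-dependent, and the file name is hypothetical:

```python
import langid

def keep_latin_lines(lines):
    """Yield only the lines that langid classifies as Latin ('la')."""
    for line in lines:
        lang, _score = langid.classify(line)
        if lang == "la":
            yield line

# Hypothetical usage after CLTK normalisation and sentence splitting:
# with open("corpus_corporum_normalised.txt") as f:
#     latin_lines = list(keep_latin_lines(f))
```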

The result is a corpus of ~232 million tokens.

The dataset used to train this model will be made available on Hugging Face later [HERE (does not work yet)]().

### Contact

For questions, reach out to Phillip Ströbel [via mail](mailto:pstroebel@cl.uzh.ch) or [via Twitter](https://twitter.com/CLingophil).

### How to cite

If you use this model, please cite it as:

```bibtex
@online{stroebel-roberta-base-latin-cased3,
  author  = {Ströbel, Phillip Benjamin},
  title   = {RoBERTa Base Latin Cased V3},
  year    = 2022,
  url     = {https://huggingface.co/pstroe/roberta-base-latin-cased3},
  urldate = {YYYY-MM-DD}
}
```