---
license: cc-by-nc-sa-4.0
language:
- ga
library_name: transformers
tags:
- tokenizer
- irish
---

**Historical Irish Byte-Level BPE tokenizer** was trained on Old, Middle, Early Modern, Classical Modern, and pre-reform Modern Irish texts from the St. Gall Glosses, the Würzburg Glosses, CELT, and the book subcorpus of the Historical Irish Corpus. The training data spans ca. 550–1926 and covers a wide variety of genres, such as bardic poetry, native Irish stories, translations and adaptations of continental epic and romance, annals, genealogies, grammatical and medical tracts, diaries, and religious writing. Because some texts contain code-switching, the vocabulary also includes some Latin.

Byte-Pair Encoding (BPE) was introduced in [Neural Machine Translation of Rare Words with Subword Units (Sennrich et al., 2015)](https://arxiv.org/abs/1508.07909). BPE relies on a pre-tokenizer that splits the training data into words. Pre-tokenization yields a set of unique words together with the frequency of each word in the training data. BPE then creates a base vocabulary consisting of all symbols that occur in the set of unique words and learns merge rules that form a new symbol from two existing symbols. It does so until the vocabulary reaches the desired size, which is a hyperparameter. This tokenizer was trained with `vocab_size=25000` and `min_frequency=2`.
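
The card does not include the training script; the snippet below is only a minimal sketch of how a byte-level BPE tokenizer with these hyperparameters could be trained with the 🤗 `tokenizers` library. The corpus file paths and the special-token list are placeholders, not the actual training setup.

```python
from tokenizers import ByteLevelBPETokenizer

# Minimal sketch, not the original training script
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["corpus/old_irish.txt", "corpus/modern_irish.txt"],  # hypothetical corpus files
    vocab_size=25000,   # hyperparameters stated in the card
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],  # assumed RoBERTa-style specials
)
tokenizer.save_model("historical-irish-tokenizer-bytelevel-bpe")
```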

To get a better base vocabulary, [GPT-2](https://huggingface.co/docs/transformers/en/model_doc/gpt) uses bytes as the base vocabulary, which is a clever trick to force the base vocabulary to be of size 256 while ensuring that every base character is included in the vocabulary. With some additional rules to deal with punctuation, GPT-2's tokenizer can tokenize every text without needing the `<unk>` symbol. This tokenizer uses the same byte-level approach.
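
As a quick illustration (not part of the original card) of why 256 byte values suffice: any Unicode string, including Irish text with diacritics, reduces to a sequence of UTF-8 bytes, so a byte-level base vocabulary can cover every character without `<unk>`.

```python
# Each character maps to one or more byte values in the range 0-255,
# so "Ó" becomes two bytes (195, 147) rather than an unknown symbol.
print(list("Óengus".encode("utf-8")))  # [195, 147, 101, 110, 103, 117, 115]
```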

### Use

```python
from transformers import AutoTokenizer

# Load the tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("ancatmara/historical-irish-tokenizer-bytelevel-bpe")
texts = ['Boí Óengus in n-aidchi n-aili inna chotlud.', 'Co n-accae ní, in n-ingin cucci for crunn síuil dó.']

# Tokenize a batch of sentences, truncating each to at most 128 tokens
tokenizer(texts, max_length=128, truncation=True)
```

Out:

```python
>>> {'input_ids': [[0, 19093, 5413, 323, 272, 19, 4672, 272, 19, 935, 1940, 13074, 20, 2], [0, 1936, 272, 19, 8716, 75, 439, 18, 323, 272, 19, 3886, 833, 7328, 382, 553, 1097, 685, 466, 507, 20, 2]],
 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
```

```python
tokenizer.decode([0, 19093, 5413, 323, 272, 19, 4672, 272, 19, 935, 1940, 13074, 20, 2])
```

Out:

```python
>>> '<s>Boí Óengus in n-aidchi n-aili inna chotlud.</s>'
```
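
A couple of optional follow-ups (typical `transformers` usage, not shown in the original card): the special tokens can be dropped when decoding, and the learned subword splits can be inspected directly.

```python
# Decode without the <s>/</s> wrappers
tokenizer.decode([0, 19093, 5413, 323, 272, 19, 4672, 272, 19, 935, 1940, 13074, 20, 2], skip_special_tokens=True)

# Inspect the byte-level subword pieces for a sentence
tokenizer.tokenize('Boí Óengus in n-aidchi n-aili inna chotlud.')
```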