# ULM-32k SlimPajama-3M
ULM tokeniser with vocabulary size 32768, trained on the first 3 million examples in [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B).

## Tokeniser details
ULM trainer implementation (a minimal sketch follows the list):
- Back-end: [SentencePiece](https://github.com/google/sentencepiece)'s `SentencePieceTrainer`.
- Front-end: [TkTkT](https://github.com/bauwenst/TkTkT)'s [`KudoPieceTrainer`](https://github.com/bauwenst/TkTkT/blob/341ae85980a5a9a2d60dbdc88645f8828b5c3a06/src/tktkt/models/kudopiece/vocabularisation.py#L40).
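
Since the back-end is standard SentencePiece, a vocabulary of this shape can be reproduced in spirit with the `sentencepiece` Python package alone. The snippet below is only a sketch: the input file name is a placeholder, and it omits everything the `KudoPieceTrainer` front-end adds (notably the preprocessing described next).

```python
# Minimal sketch of the back-end call: training a 32k unigram-LM ("ULM")
# vocabulary with SentencePiece alone. The input file name is a placeholder;
# the real run fed a TkTkT-preprocessed corpus into this trainer.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="slimpajama_3M_preprocessed.txt",  # placeholder: one example per line
    model_prefix="ulm32k",                   # produces ulm32k.model + ulm32k.vocab
    model_type="unigram",                    # ULM is SentencePiece's unigram model
    vocab_size=32768,
)
```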

Preprocessor:
- During training: TkTkT's [`SentencePiecePreprocessor`](https://github.com/bauwenst/TkTkT/blob/341ae85980a5a9a2d60dbdc88645f8828b5c3a06/src/tktkt/preparation/instances.py#L181)
- During inference: TkTkT's [`ModernEnglishPreprocessor`](https://github.com/bauwenst/TkTkT/blob/341ae85980a5a9a2d60dbdc88645f8828b5c3a06/src/tktkt/preparation/instances.py#L105), which applies, in order (sketched after this list):
  1. NFKC normalisation
  2. Punctuation splitting, whitespace splitting, and English contraction splitting
  3. GPT-2's pseudo-byte mapping
  4. A start-of-word marker `Ġ`
  5. Digit and hyphen isolation
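
To make those steps concrete, here is a self-contained sketch of the pipeline. The byte table is GPT-2's published pseudo-byte mapping; the splitting regex is a rough stand-in for TkTkT's actual splitters, not its implementation.

```python
# Illustrative sketch of the inference-time preprocessing steps above.
# Under GPT-2's pseudo-byte mapping, every byte becomes a printable Unicode
# character; the pseudo-byte for a space is exactly the marker Ġ.
import unicodedata
import regex

def bytes_to_unicode() -> dict[int, str]:
    """GPT-2's byte -> printable pseudo-character table."""
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("¡"), ord("¬") + 1))
          + list(range(ord("®"), ord("ÿ") + 1)))
    cs = list(bs)
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return {b: chr(c) for b, c in zip(bs, cs)}

BYTE_MAP = bytes_to_unicode()
assert BYTE_MAP[ord(" ")] == "Ġ"  # step 4's marker is the pseudo-byte of a space

# Steps 2 and 5 (rough stand-in): split off punctuation and English
# contractions, and isolate digits and punctuation (incl. hyphens) one by one.
PRETOKEN = regex.compile(r"'(?:s|t|re|ve|m|ll|d)|\p{L}+|\p{N}|[^\s\p{L}\p{N}]")

def preprocess(text: str) -> list[str]:
    text = unicodedata.normalize("NFKC", text)                        # step 1
    pretokens = []
    for m in PRETOKEN.finditer(text):                                 # steps 2 + 5
        mapped = "".join(BYTE_MAP[b] for b in m.group().encode("utf-8"))  # step 3
        if m.start() == 0 or text[m.start() - 1].isspace():
            mapped = "Ġ" + mapped                                     # step 4
        pretokens.append(mapped)
    return pretokens

print(preprocess("Tokenisers aren't from 1970!"))
# ['ĠTokenisers', 'Ġaren', "'t", 'Ġfrom', 'Ġ1', '9', '7', '0', '!']
```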

## Training details
**Time:** 3h40m
- Preprocessing and counting the 3M corpus: 2h45m
- ULM algorithm: 55m

**Memory:** 257 GiB peak usage (roughly 85 GiB of RAM per million examples).

**Data sizes:**
- Examples considered: 3 000 000
- Examples used: 2 609 893 (390 107 examples dropped for being > 8192 characters)
- Characters counted: 6 685 212 190
- Unique words after whitespace splitting: 9 254 839
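
## Usage
For quick inspection, the trained model can be loaded with the `sentencepiece` package directly. The file name below is a placeholder, and this bypasses the `ModernEnglishPreprocessor` that should wrap the tokeniser at inference time (see "Tokeniser details"), so raw text must be pretokenised and pseudo-byte-mapped first.

```python
# Sketch: loading the trained SentencePiece model for inspection.
# The model file name is an assumed placeholder, not a file in this repo.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="ulm32k.model")
print(sp.vocab_size())  # expected: 32768
# Encode an already-preprocessed pretoken (Ġ marks the start of a word):
print(sp.encode("Ġtokenisation", out_type=str))
```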