# Miniature Chinese Llama2 Basic Model
[English](./readme_en.md) [简体中文](./readme.md)

This is an ultra-mini model with approximately 58M parameters, built on the Llama2 architecture. The uploaded version is the pre-trained base model and has not yet undergone SFT; a chat version fine-tuned with SFT will be released soon.

The goals of developing this ultra-mini model are:

1. To practice the full process of pre-training a basic large language model from scratch.
2. To provide a fast-loading environment for developing large-parameter models, since loading a full-size model is time-consuming and hinders rapid iterative development and debugging.
3. To enable quick parameter tuning and the reproduction of various optimization algorithms on consumer-grade graphics cards.
## Training Data

We have collected 429 Chinese online fantasy novels and organized them into plain text format. Lines with fewer than 10 characters and those exceeding 4096 characters were removed, serving as the base data for pre-training.

The organized txt file is 3.3 GB and contains 868M Chinese characters across 18M lines.
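
The cleaning step described above amounts to a simple length filter. Below is a minimal Python sketch of that filter; the file names are placeholders, not the author's actual paths.

```python
# Keep only lines between 10 and 4096 characters long.
# "novels_raw.txt" / "novels_clean.txt" are placeholder file names.
with open("novels_raw.txt", "r", encoding="utf-8") as src, \
     open("novels_clean.txt", "w", encoding="utf-8") as dst:
    for line in src:
        text = line.strip()
        if 10 <= len(text) <= 4096:
            dst.write(text + "\n")
```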
## Chinese Tokenizer

The model's tokenizer was also trained from scratch, without relying on any existing tokenizer.

Training parameters:

1. Maximum sentence length: 4096
2. Vocabulary size: 65534
3. Normalization rule: nfkc
4. Character coverage: ... (and so on)
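
The training script itself is not included in this README, but a tokenizer with the settings above could be trained with the sentencepiece library roughly as follows; the corpus path, model_type, and character_coverage values are assumptions rather than the author's exact configuration.

```python
import sentencepiece as spm

# Train a SentencePiece tokenizer on the cleaned novel corpus.
# Listed settings mirror the parameters above; unlisted ones are guesses.
spm.SentencePieceTrainer.train(
    input="novels_clean.txt",            # placeholder corpus path
    model_prefix="baby_llama2_tokenizer",
    vocab_size=65534,
    max_sentence_length=4096,
    normalization_rule_name="nfkc",
    model_type="bpe",                    # assumption
    character_coverage=0.9995,           # assumption
)
```
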
| | Llama2 | Baby Llama2 |
| ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| tokens | 32000 | 65534 |
| model_max_length | 4096 | 4096 |
| 白日依山尽,黄河入海流。欲穷千里目,更上一层楼。 | :['▁', '白', '日', '<0xE4>', '<0xBE>', '<0x9D>', '山', '<0xE5>', '<0xB0>', '<0xBD>', ',', '黄', '河', '入', '海', '流', '。', '<0xE6>', '<0xAC>', '<0xB2>', '<0xE7>', '<0xA9>', '<0xB7>', '千', '里', '目', ',', '更', '上', '一', '<0xE5>', '<0xB1>', '<0x82>', '<0xE6>', '<0xA5>', '<0xBC>', '。'] | ['▁白', '日', '依山', '尽', ',', '黄河', '入海', '流', '。', '欲', '穷', '千里', '目', ',', '更', '上一层', '楼', '。'] |
| | [1, 29871, 30868, 30325, 231, 193, 160, 30329, 232, 179, 192, 30214, 31491, 30828, 30752, 30581, 31151, 30267, 233, 175, 181, 234, 172, 186, 31159, 30755, 30895, 30214, 31100, 30429, 30287, 232, 180, 133, 233, 168, 191, 30267] | [65534, 1764, 63106, 62484, 63203, 62793, 14729, 29082, 63130, 62795, 63920, 64266, 3271, 63038, 62793, 63007, 17116, 63636, 62795] |
| The primary use of LLaMA is research on large language models, including BERT, XLNet, and RoBERTa. | :['▁The', '▁primary', '▁use', '▁of', '▁L', 'La', 'MA', '▁is', '▁research', '▁on', '▁large', '▁language', '▁models', ',', '▁including', '▁B', 'ERT', ',', '▁X', 'L', 'Net', ',', '▁and', '▁Ro', 'BER', 'T', 'a', '.'] | :['▁T', 'h', 'e', '▁p', 'ri', 'm', 'ar', 'y', '▁', 'u', 'se', '▁o', 'f', '▁', '<0x4C>', '<0x4C>', 'a', 'M', 'A', '▁i', 's', '▁', 're', 'se', 'ar', 'ch', '▁o', 'n', '▁', 'l', 'ar', 'g', 'e', '▁', 'l', 'ang', 'ua', 'g', 'e', '▁m', 'od', 'e', 'ls', ',', '▁', 'in', 'c', 'lu', 'd', 'i', 'ng', '▁', '<0x42>', '<0x45>', '<0x52>', 'T', ',', '▁', 'X', '<0x4C>', '<0x4E>', 'e', 't', ',', '▁', 'an', 'd', '▁', '<0x52>', 'o', '<0x42>', '<0x45>', '<0x52>', 'T', 'a', '.'] |
| | [1, 450, 7601, 671, 310, 365, 5661, 1529, 338, 5925, 373, 2919, 4086, 4733, 29892, 3704, 350, 20161, 29892, 1060, 29931, 6779, 29892, 322, 1528, 13635, 29911, 29874, 29889] | [65534, 14962, 63590, 64211, 27052, 16426, 63475, 13594, 64158, 62797, 63569, 11279, 13719, 65368, 62797, 81, 81, 63518, 64918, 64752, 24145, 63338, 62797, 44186, 11279, 13594, 9251, 13719, 63541, 62797, 64399, 13594, 64101, 64211, 62797, 64399, 37035, 36500, 64101, 64211, 2939, 11320, 64211, 53670, 62793, 62797, 18944, 63603, 14575, 64096, 63484, 1171, 62797, 71, 74, 87, 64760, 62793, 62797, 65257, 81, 83, 64211, 63073, 62793, 62797, 6604, 64096, 62797, 87, 63143, 71, 74, 87, 64760, 63518, 62801] |

The Llama2 tokenizer has 32,000 tokens and is optimized for English, while the Baby Llama2 tokenizer has 65,534 tokens and covers Chinese only.

As the examples above show, Baby Llama2 tokenizes Chinese text more efficiently than standard Llama2, while its English tokenization is considerably weaker.
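
The comparison in the table can be reproduced by loading both SentencePiece models and encoding the same sentences; the model file paths below are placeholders for wherever the original Llama2 tokenizer.model and the retrained Chinese tokenizer are stored.

```python
import sentencepiece as spm

texts = [
    "白日依山尽,黄河入海流。欲穷千里目,更上一层楼。",
    "The primary use of LLaMA is research on large language models, including BERT, XLNet, and RoBERTa.",
]

# Placeholder paths for the two tokenizer models being compared.
llama2_sp = spm.SentencePieceProcessor(model_file="llama2_tokenizer.model")
baby_sp = spm.SentencePieceProcessor(model_file="baby_llama2_tokenizer.model")

for text in texts:
    print("Llama2     :", llama2_sp.encode(text, out_type=str))
    print("Baby Llama2:", baby_sp.encode(text, out_type=str))
```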
## Full Training Corpus Processing

Before full training, the corpus is tokenized. Using the newly trained tokenizer, the txt files of online novels are read line by line; each line is encoded into token ids and an eos_token_id is appended to the end of the line as a separator. All processed data is then stored on disk as a two-dimensional np.uint16 array with shape [-1, max_sentence_length].
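
A minimal sketch of this preprocessing step, assuming the sentencepiece tokenizer trained above; the file names, the packed sequence length of 512 (the pre-training max_seq_len below), and the decision to drop the ragged tail are assumptions.

```python
import numpy as np
import sentencepiece as spm

SEQ_LEN = 512  # assumed; the README does not state which length the packing uses

sp = spm.SentencePieceProcessor(model_file="baby_llama2_tokenizer.model")
eos_id = sp.eos_id()

ids = []
with open("novels_clean.txt", "r", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        ids.extend(sp.encode(line))
        ids.append(eos_id)  # mark the end of each line

# Pack into a [-1, SEQ_LEN] uint16 array (vocab size 65534 fits in uint16).
n = (len(ids) // SEQ_LEN) * SEQ_LEN
data = np.array(ids[:n], dtype=np.uint16).reshape(-1, SEQ_LEN)
data.tofile("pretrain_data.bin")
```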
## Pre-training

Pre-training is done on a single RTX 3090. The model uses the Llama2 architecture with the following parameters:

1. max_seq_len = 512
2. dim = 512
3. n_heads = 8
4. n_layers = 8
5. n_kv_heads = 8
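
For a rough sanity check of the stated ~58M parameters, the hyperparameters above can be plugged into a back-of-the-envelope count. The config class, the tied input/output embedding, and the SwiGLU hidden size of 1376 (roughly (8/3)·dim rounded up) are assumptions, since the README does not specify them.

```python
from dataclasses import dataclass

@dataclass
class BabyLlama2Config:
    # Values taken from the list above; unspecified fields are omitted.
    max_seq_len: int = 512
    dim: int = 512
    n_heads: int = 8
    n_layers: int = 8
    n_kv_heads: int = 8
    vocab_size: int = 65534  # from the tokenizer section

def approx_param_count(cfg: BabyLlama2Config, ffn_hidden: int) -> int:
    """Rough Llama-style count assuming tied input/output embeddings."""
    embed = cfg.vocab_size * cfg.dim      # token embedding, shared with the output head
    attn = 4 * cfg.dim * cfg.dim          # wq, wk, wv, wo
    ffn = 3 * cfg.dim * ffn_hidden        # w1, w2, w3 of a SwiGLU block
    return embed + cfg.n_layers * (attn + ffn)

cfg = BabyLlama2Config()
print(approx_param_count(cfg, ffn_hidden=1376))  # 58,850,304 -> close to the stated ~58M
```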
## Demonstration

[Huggingface Space For Baby Llama2](https://huggingface.co/spaces/wangqi777/wangqi777-chinese-baby-llama2)
## Citation

[llama2.c](https://github.com/karpathy/llama2.c)

[baby-llama2-chinese](https://github.com/DLLXW/baby-llama2-chinese)