---
library_name: transformers
license: cc-by-sa-3.0
datasets:
- wikimedia/wikipedia
- maywell/korean_textbooks
- nampdn-ai/tiny-codes
- Open-Orca/OpenOrca
language:
- ko
- en
inference: false
---

# phi-2-ko-v0.1

## Model Details
This is a Korean-specific model built on phi-2 by adding a Korean tokenizer and continuing pre-training on Korean data (English is still supported).
Although phi-2 performs very well, it does not support Korean and its tokenizer was not trained on a Korean corpus, so tokenizing Korean text uses many times more tokens than tokenizing comparable English text.

To overcome these limitations, I trained the model on an openly licensed Korean corpus together with some English corpus.
The English corpus was included for two reasons:

1. To preserve the strong performance of the base model by preventing catastrophic forgetting.
2. Mixing English and Korean prompts usually produces better results than using Korean-only prompts.

Since my role is not that of a working developer but of a solutions architect helping customers with quick PoCs/prototypes, and I was limited by the AWS GPU resources available to me, I trained on only about 5 GB of data instead of hundreds of GB.

### Vocab Expansion

| Model Name | Vocabulary Size | Description | 
| --- | --- | --- |
| Original phi-2 | 50,295 | BBPE (Byte-level BPE) |
| **phi-2-ko** | 66,676 | BBPE. Added Korean vocab and merges |
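
The exact vocabulary-merging procedure is not included in this card, but a minimal sketch of the general idea (appending new Korean tokens to the phi-2 tokenizer and resizing the embedding matrix) is shown below. Note that `add_tokens()` only appends whole tokens, whereas the actual phi-2-ko tokenizer also adds Korean BBPE merges; the tokens listed here are purely illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch, not the exact procedure used for phi-2-ko.
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

new_korean_tokens = ["아마존", "세이지", "메이커"]  # hypothetical additions
num_added = tokenizer.add_tokens(new_korean_tokens)

# Grow the embedding matrices so the new token ids have trainable rows.
model.resize_token_embeddings(len(tokenizer))
print(f"Added {num_added} tokens; new vocab size: {len(tokenizer)}")
```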

**Tokenizing "아마존 세이지메이커" ("Amazon SageMaker")**

| Model | # of tokens | Tokens |
| --- | --- | --- |
| Original phi-2 | 25 | `[168, 243, 226, 167, 100, 230, 168, 94, 112, 23821, 226, 116, 35975, 112, 168, 100, 222, 167, 102, 242, 35975, 112, 168, 119, 97]` |
| **phi-2-ko** |6| `[57974, 51299, 50617, 51005, 52027, 51446]` |
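
The comparison above can be reproduced roughly with the snippet below (exact token ids may vary slightly across tokenizer versions).

```python
from transformers import AutoTokenizer

text = "아마존 세이지메이커"  # "Amazon SageMaker"

for name in ["microsoft/phi-2", "daekeun-ml/phi-2-ko-v0.1"]:
    tok = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
    ids = tok(text)["input_ids"]
    print(f"{name}: {len(ids)} tokens -> {ids}")
```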

### Continued pre-training

The datasets used for training are listed below (a minimal loading sketch follows the list). To prevent catastrophic forgetting, I included some English corpus in the training data.

- Wikipedia Korean dataset (https://huggingface.co/datasets/wikimedia/wikipedia) 
- Massive Korean synthetic dataset (https://huggingface.co/datasets/maywell/korean_textbooks)
- Tiny code dataset (https://huggingface.co/datasets/nampdn-ai/tiny-codes)
- OpenOrca dataset (https://huggingface.co/datasets/Open-Orca/OpenOrca)
- Some sentences I wrote myself (personal blog, chat logs, etc.)
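
The snippet below is a minimal sketch of how these datasets could be loaded with the `datasets` library; the Wikipedia snapshot date and the `korean_textbooks` subset name are assumptions for illustration, not the exact configurations used for training.

```python
from datasets import load_dataset

# Illustrative only: the snapshot date and subset name are assumptions,
# not the exact configurations used to train phi-2-ko.
wiki_ko   = load_dataset("wikimedia/wikipedia", "20231101.ko", split="train")
textbooks = load_dataset("maywell/korean_textbooks", "ko_wikidata", split="train")
tiny_code = load_dataset("nampdn-ai/tiny-codes", split="train")
openorca  = load_dataset("Open-Orca/OpenOrca", split="train")

print(len(wiki_ko), len(textbooks), len(tiny_code), len(openorca))
```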


Note that performance is not guaranteed, since only a small amount of data was used for this experiment. The training set contains only around 5 million samples after tokenization.
For distributed training, all weights were trained directly (no adapter techniques), and sharded data parallelism was performed with DeepSpeed ZeRO-2. Since this is a base model that has not been fine-tuned, it is recommended to perform fine-tuning such as instruction tuning or alignment tuning for your use case. The DeepSpeed presets are as follows.

```json
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    
    "bf16": {
        "enabled": "auto"
    },    

    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },

    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    },

    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": true,
        "allgather_bucket_size": 2e8,
        "overlap_comm": true,
        "reduce_scatter": true,
        "reduce_bucket_size": 2e8,
        "contiguous_gradients": true,
        "cpu_offload": true
    },

    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto"
}
```

Some hyperparameters are listed below.
```
batch_size: 2
num_epochs: 1
learning_rate: 3e-4
gradient_accumulation_steps: 8
lr_scheduler_type: "linear"
group_by_length: False
```

## How to Get Started with the Model
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("daekeun-ml/phi-2-ko-v0.1", torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("daekeun-ml/phi-2-ko-v0.1", trust_remote_code=True)

# Korean 
inputs = tokenizer("머신러닝은 ", return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)

# English 
inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

### References
- Base model: [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)

## Notes 

### License

cc-by-sa-3.0. The license of phi-2 itself is MIT, but I chose cc-by-sa-3.0 to respect the licenses of the datasets used for training.

### Caution
This model was created as a personal experiment and is unrelated to the organization I work for. It may not operate correctly because no separate verification was performed. Please use it with caution, and only for personal experimentation or PoC (proof of concept) purposes!