---
datasets:
- mc4
license: apache-2.0
---

# ByT5-Korean - small

ByT5-Korean is a Korean-specific extension of Google's [ByT5](https://github.com/google-research/byt5).

A Korean syllable has three components (called Jamo): a beginning consonant, a middle vowel, and an optional final consonant; they are like individual letters of an alphabet. While ByT5's UTF-8 encoding allows generic encoding for multiple languages, it is unnatural for Korean because it splits the bit representation of each Jamo in the middle.
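
To see the mismatch concretely, here is a minimal Python illustration (the syllable 한 is an arbitrary example, not taken from the original card); it relies only on standard Unicode Hangul arithmetic:

```python
# The three UTF-8 bytes of a Hangul syllable do not line up with its three Jamo.
syllable = '한'  # ㅎ + ㅏ + ㄴ composed into the single code point U+D55C
print(syllable.encode('utf-8').hex(' '))               # ed 95 9c
print([f'{b:08b}' for b in syllable.encode('utf-8')])
# ['11101101', '10010101', '10011100']
# The Jamo indices (lead 18, vowel 0, tail 4) are packed arithmetically into
# 0xAC00 + 18*588 + 0*28 + 4 = 0xD55C, so each byte mixes bits of several Jamo.
```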

ByT5-Korean extends ByT5's UTF-8 encoding with special care for Korean syllables; each Jamo is represented with an extra token. ByT5-Korean was pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) with 70% Korean and 30% English.

## Encoding Scheme
```text
id: token
0: <pad>
1: <eos>
2: <unk>
3~258: UTF-8 encoding
259~277: beginning consonants (초성), 19 tokens (ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ)
278~298: middle vowels (중성), 21 tokens (ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ)
299~326: final consonants (종성), no final (무종성) + 27 tokens (ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ)
327~384: from <extra_id_0> to <extra_id_57>
```
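
The Jamo ids follow from the standard decomposition of a precomposed Hangul syllable (code point = 0xAC00 + lead×588 + vowel×28 + tail). The sketch below shows the mapping this table implies; `hangul_to_ids` is a hypothetical helper for illustration only, and the released tokenizer.py is the authoritative implementation:

```python
def hangul_to_ids(text: str) -> list[int]:
    """Map text to token ids per the table above (illustrative sketch)."""
    ids = []
    for ch in text:
        cp = ord(ch)
        if 0xAC00 <= cp <= 0xD7A3:  # precomposed Hangul syllable block
            offset = cp - 0xAC00
            lead, vowel, tail = offset // 588, (offset % 588) // 28, offset % 28
            # tail == 0 means "no final consonant" (무종성), giving id 299
            ids += [259 + lead, 278 + vowel, 299 + tail]
        else:
            # everything else falls back to byte encoding: byte value + 3
            ids += [b + 3 for b in ch.encode('utf-8')]
    return ids

print(hangul_to_ids('한'))  # [277, 278, 303] -> ㅎ, ㅏ, ㄴ
print(hangul_to_ids('A'))   # [68] -> 0x41 + 3
```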

## Example Inference

```python
import torch
from tokenizer import ByT5KoreanTokenizer # https://huggingface.co/everdoubling/byt5-Korean-small/blob/main/tokenizer.py
from transformers import T5ForConditionalGeneration

tokenizer_jamo = ByT5KoreanTokenizer()
model_jamo = T5ForConditionalGeneration.from_pretrained('everdoubling/byt5-Korean-small')

# A sentence from the Korean Wikipedia article about itself, with two spans
# masked out as <extra_id_0> and <extra_id_1>.
input_sentence = '한국어 위키백과(영어: Korean Wikipedia)는 한국어로 운영되는 위키백과의 다언어판 가운데 하나로서, 2002년 10월 11일에 <extra_id_0>. 또한 현재 한국어 위키백과에는 넘겨주기, 토론, 그림 등 페이지로 불리는 모든 문서를 포함하면 총 2,629,860개가 <extra_id_1>되어 있으며, 넘겨주기를 포함한 일반 문서 수는 1,278,560개,[1] 그중 넘겨주기, 막다른 문서를 제외한 일반 문서 수는 573,149개이다.'

input_ids_jamo = tokenizer_jamo(input_sentence).input_ids
outputs_jamo = model_jamo.generate(torch.tensor([input_ids_jamo]))
print(tokenizer_jamo.decode(outputs_jamo[0]))
# <pad><extra_id_0>설립되었다<extra_id_1>ΔΔ
```
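
In this run the model fills <extra_id_0> with 설립되었다 ("was established"), a fitting completion for the clause about the October 11, 2002 founding date.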

Additional information coming soon...