---
license: cc-by-nc-sa-4.0
datasets:
- squarelike/sharegpt_deepl_ko_translation
language:
- ko
pipeline_tag: translation
tags:
- translate
---
## **Seagull-13b-translation 📇**
![Seagull-typewriter](./Seagull-typewriter.png)
**Seagull-13b-translation** is yet another translation model, but one that carefully addresses the following issues found in existing translation models:
- Exact preservation of `newline` and `space` characters
- Not training on datasets whose first letter has been stripped
- Code
- Markdown format
- LaTeX format
- etc.

These issues were checked thoroughly during training, but when using the model I still recommend inspecting the output closely for these cases (e.g., text that contains code).
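
For example, a quick post-hoc sanity check along these lines (an illustrative sketch, not shipped with this model) can flag translations that drop or add newlines:

```python
def preserves_whitespace(source: str, translation: str) -> bool:
    """Illustrative check (not part of this repo): a faithful translation
    of code or Markdown should keep the source's newline count."""
    return source.count("\n") == translation.count("\n")

# Hypothetical source/output pair for demonstration.
src = "# Title\n\n- item 1\n- item 2\n"
out = "# 제목\n\n- 항목 1\n- 항목 2\n"
print(preserves_whitespace(src, out))  # True
```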

> If you're interested in building large-scale language models to solve a wide variety of problems in a wide variety of domains, you should consider joining [Allganize](https://allganize.career.greetinghr.com/o/65146).
> For a coffee chat or if you have any questions, please do not hesitate to contact me as well! - kuotient.dev@gmail.com

This model was created as a personal experiment, unrelated to the organization I work for.

## **License**
#### **From the original model author:**
- Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT
- Full License available at: https://huggingface.co/beomi/llama-2-koen-13b/blob/main/LICENSE

# **Model Details**
#### **Developed by**
Jisoo Kim(kuotient)
#### **Base Model**  
[beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)
#### **Datasets**
- [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation)
- [KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3)
- AIHUB
  - 기술과학 분야 한-영 번역 병렬 말뭉치 데이터 (Korean-English parallel translation corpus for the science and technology domain)
  - 일상생활 및 구어체 한-영 번역 병렬 말뭉치 데이터 (Korean-English parallel translation corpus for everyday life and colloquial speech)

## **Usage**
#### **Format**
It follows the **ChatML** format only.

```
<|im_start|>system
주어진 문장을 한국어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
# Don't miss newline here
```
```
<|im_start|>system
주어진 문장을 영어로 번역하세요.<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
# Don't miss newline here
```
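
If you build the prompt string by hand instead of using the tokenizer's chat template, a hypothetical helper like the one below (the function name is illustrative) makes the required trailing newline explicit:

```python
def build_prompt(instruction: str, to_korean: bool = True) -> str:
    # Pick the translation direction via the system message.
    system = (
        "주어진 문장을 한국어로 번역하세요."
        if to_korean
        else "주어진 문장을 영어로 번역하세요."
    )
    # The trailing "\n" after <|im_start|>assistant is the newline the
    # templates above warn about; generation continues right after it.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{instruction}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```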

#### **Output example**
Source text:
> A particle's wave function, $\psi(x)$, is given by $$\psi(x)=\begin{cases} 3x & \text{if } -1 \leq x \leq 0 \\ 3(1-x) & \text{if } 0 < x \leq 1 \\ 0 & \text{otherwise} \end{cases}$$ Compute the Fourier transform, $\tilde{\psi}(k)$, of the wave function $\psi(x)$ and show that it satisfies the Fourier inversion theorem, i.e., $\psi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \tilde{\psi}(k) e^{ikx} \mathrm{d}k$.

Seagull-13b-translation:
> 입자의 파동 함수 $\psi(x)$는 다음과 같이 주어집니다. $$\psi(x)=\begin{cases} 3x & \text{if } -1 \leq x \leq 0 \\ 3(1-x) & \text{if } 0 < x \leq 1 \\ 0 & \text{otherwise} \end{cases}$$ 파동 함수 $\psi(x)$의 푸리에 변환 $\tilde{\psi}(k)$를 계산하고 푸리에 반전 정리, 즉 $\psi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \tilde{\psi}(k) e^{ikx} \mathrm{d}k$를 만족합니다.

DeepL:
> 입자의 파동 함수 $\psi(x)$는 $$\psi(x)=\begin{cases}로 주어집니다. 3x & \text{if } -1 \leq x \leq 0 \\ 3(1-x) & \text{if } 0 < x \leq 1 \\ 0 & \text{기타} \end{cases}$$ 파동 함수 $\psi(x)$의 푸리에 변환인 $\tilde{\psi}(k)$를 계산하고 푸리에 반전 정리, 즉 $\psi(x) = \frac{1}{\sqrt{2\pi}}를 만족함을 증명합니다. \int_{-\infty}^{\infty} \물결표{\psi}(k) e^{ikx} \mathrm{d}k$.

...and many more impressive cases involving SQL queries, code, and Markdown!

#### **How to**
**I highly recommend running inference with vLLM; a brief sketch follows the example below. I will write a guide for quick and easy inference if requested.**
Since the `chat_template` already contains the instruction format above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("kuotient/Seagull-13B-translation")
tokenizer = AutoTokenizer.from_pretrained("kuotient/Seagull-13B-translation")

messages = [
    {"role": "system", "content": "주어진 문장을 한국어로 번역하세요."},
    {"role": "user", "content": "Here are five examples of nutritious foods to serve your kids."},
]
# add_generation_prompt=True appends the ChatML assistant header,
# including the trailing newline the format above requires.
encodeds = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
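
If you prefer vLLM, as recommended above, a minimal offline-inference sketch might look like the following; the sampling settings and the `<|im_end|>` stop string are assumptions based on the ChatML format above, not settings from this repository:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="kuotient/Seagull-13B-translation")

# Greedy decoding and the ChatML end-of-turn stop string are
# assumptions; tune them for your workload.
sampling_params = SamplingParams(temperature=0.0, max_tokens=1000, stop=["<|im_end|>"])

prompt = (
    "<|im_start|>system\n주어진 문장을 한국어로 번역하세요.<|im_end|>\n"
    "<|im_start|>user\nHere are five examples of nutritious foods to serve your kids.<|im_end|>\n"
    "<|im_start|>assistant\n"  # don't miss the trailing newline
)

outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```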