---
license: apache-2.0
tags:
- generated_from_trainer
base_model: yanolja/EEVE-Korean-2.8B-v1.0
model-index:
- name: yanolja/EEVE-Korean-Instruct-2.8B-v1.0
  results: []
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

<p align="left">
  <img src="https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0/resolve/main/eeve_logo.webp" width="50%"/>
</p>

# EEVE-Korean-Instruct-2.8B-v1.0

## Join Our Community on Discord!

If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. Please note that Korean is the primary language used on this server. The LLM landscape is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated quickly. Let's collaborate and drive greater impact together! Join us here: [Discord Link](https://discord.gg/b27bAHg95m).

## Our Dedicated Team (Alphabetical Order)
| Research        | Engineering     | Product Management | UX Design   |
|-----------------|-----------------|--------------------|-------------|
| Myeongho Jeong  | Geon Kim        | Bokyung Huh        | Eunsue Choi |
| Seungduk Kim    | Rifqi Alfi      |                    |             |
| Seungtaek Choi  | Sanghoon Han    |                    |             |
|                 | Suhyun Kang     |                    |             |

## About the Model

This model is a fine-tuned version of [yanolja/EEVE-Korean-2.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-2.8B-v1.0), a Korean vocabulary-extended version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2). Specifically, we fine-tuned it with Direct Preference Optimization (DPO) using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl).

For more details, please refer to our technical report: [Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models](https://arxiv.org/abs/2402.14714).
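
As a brief refresher (this is the standard DPO objective from the original DPO paper, not a detail specific to our report), DPO skips explicit reward modeling and optimizes the policy directly on preference pairs:

$$
\mathcal{L}_\mathrm{DPO}(\pi_\theta; \pi_\mathrm{ref}) = -\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma \left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_\mathrm{ref}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_\mathrm{ref}(y_l \mid x)} \right) \right]
$$

where \\(y_w\\) and \\(y_l\\) are the preferred and rejected responses for prompt \\(x\\), \\(\pi_\mathrm{ref}\\) is the frozen reference model (typically the supervised fine-tuned starting point), and \\(\beta\\) controls how far the fine-tuned policy may drift from the reference.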

## Prompt Template
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: {prompt}
Assistant:
```
## How to Use It
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yanolja/EEVE-Korean-Instruct-2.8B-v1.0"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Wrap the user message in the prompt template shown above.
prompt_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\nHuman: {prompt}\nAssistant:\n"
# "What is the capital of Korea? Choose from the options below.
#  (A) Gyeongseong (B) Busan (C) Pyongyang (D) Seoul (E) Jeonju"
text = 'ν•œκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μΈκ°€μš”? μ•„λž˜ 선택지 쀑 κ³¨λΌμ£Όμ„Έμš”.\n\n(A) κ²½μ„±\n(B) λΆ€μ‚°\n(C) 평양\n(D) μ„œμšΈ\n(E) μ „μ£Ό'
model_inputs = tokenizer(prompt_template.format(prompt=text), return_tensors='pt')

# Generate up to 256 new tokens and decode the full sequence (prompt + answer).
outputs = model.generate(**model_inputs, max_new_tokens=256)
output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(output_text)
```
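
For faster inference on a GPU, the snippet below is a minimal sketch rather than part of the official instructions: it assumes fp16 weights, `device_map="auto"` placement (which requires the `accelerate` package), and illustrative sampling parameters you should tune for your use case.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yanolja/EEVE-Korean-Instruct-2.8B-v1.0"
# Assumption: half-precision weights and automatic device placement (needs `accelerate`).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

prompt_template = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"
    "Human: {prompt}\nAssistant:\n"
)
# "What is the population of Seoul?" (hypothetical example prompt)
inputs = tokenizer(prompt_template.format(prompt="μ„œμšΈμ˜ μΈκ΅¬λŠ” λͺ‡ λͺ…μΈκ°€μš”?"),
                   return_tensors="pt").to(model.device)

# Illustrative sampling settings; greedy decoding (the default) also works.
outputs = model.generate(**inputs, max_new_tokens=256,
                         do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```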

### Example Output
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: ν•œκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μΈκ°€μš”? μ•„λž˜ 선택지 쀑 κ³¨λΌμ£Όμ„Έμš”.

(A) κ²½μ„±
(B) λΆ€μ‚°
(C) 평양
(D) μ„œμšΈ
(E) μ „μ£Ό
Assistant:
ν•œκ΅­μ˜ μˆ˜λ„λŠ” (D) μ„œμšΈμž…λ‹ˆλ‹€. μ„œμšΈμ€ μˆ˜λ„κΆŒκ³Ό μˆ˜λ„κΆŒ λ‚΄μ˜ μ£Όμš” λ„μ‹œλ“€μ„ ν¬ν•¨ν•˜λŠ” κ΄‘μ—­ ν–‰μ •κ΅¬μ—­μœΌλ‘œ, λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„μž…λ‹ˆλ‹€. μ„œμšΈμ€ μˆ˜λ„κΆŒ 인ꡬ의 μ•½ 70%λ₯Ό μ°¨μ§€ν•˜λ©°, λŒ€ν•œλ―Όκ΅­μ˜ 경제, μ •μΉ˜, λ¬Έν™”μ˜ μ€‘μ‹¬μ§€μž…λ‹ˆλ‹€.
```


## Training Data
  - Korean-translated version of [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
  - Korean-translated version of [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned)
  - No other dataset was used
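
The Korean translations above are not published separately, but the English source datasets are available on the Hugging Face Hub. As a hedged sketch for inspecting what the preference data looks like, the snippet below loads the original UltraFeedback split; the field names (`prompt`, `chosen`, `rejected`) follow that dataset's card at the time of writing and should be verified against the current schema.

```python
from datasets import load_dataset

# English source of the preference data used for DPO (the Korean translation is internal).
ds = load_dataset("argilla/ultrafeedback-binarized-preferences-cleaned", split="train")

example = ds[0]
print(example["prompt"])
# `chosen` and `rejected` hold the preferred and dispreferred conversations.
print(example["chosen"][-1]["content"][:200])
print(example["rejected"][-1]["content"][:200])
```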

## Citation
```
@misc{kim2024efficient,
      title={Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models}, 
      author={Seungduk Kim and Seungtaek Choi and Myeongho Jeong},
      year={2024},
      eprint={2402.14714},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
```
@misc{cui2023ultrafeedback,
      title={UltraFeedback: Boosting Language Models with High-quality Feedback}, 
      author={Ganqu Cui and Lifan Yuan and Ning Ding and Guanming Yao and Wei Zhu and Yuan Ni and Guotong Xie and Zhiyuan Liu and Maosong Sun},
      year={2023},
      eprint={2310.01377},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
```
@misc{SlimOrcaDedup,
      title={SlimOrca Dedup: A Deduplicated Subset of SlimOrca},
      author={Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium" and Nathan Hoos},
      year={2023},
      publisher={HuggingFace},
      url={https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup/}
}
```
```
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, 
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_yanolja__EEVE-Korean-Instruct-2.8B-v1.0).

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |58.71|
|AI2 Reasoning Challenge (25-Shot)|58.28|
|HellaSwag (10-Shot)              |72.42|
|MMLU (5-Shot)                    |53.35|
|TruthfulQA (0-shot)              |48.32|
|Winogrande (5-shot)              |74.82|
|GSM8k (5-shot)                   |45.11|