myeongho-jeong committed on
Commit 6712245 • 1 Parent(s): 7383911

Update README.md

Files changed (1): README.md (+37, -0)
  This model is a fine-tuned version of [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0), which is a Korean vocabulary-extended version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0). Specifically, we employed Direct Preference Optimization (DPO) based on [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
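For reference, DPO fine-tunes the policy directly on preference pairs, with no separately trained reward model. The objective below is the standard DPO loss (from the original DPO formulation, not a detail taken from this training run):

```
L_DPO(π_θ; π_ref) = −E_{(x, y_w, y_l) ~ D} [ log σ( β log(π_θ(y_w|x) / π_ref(y_w|x)) − β log(π_θ(y_l|x) / π_ref(y_l|x)) ) ]
```

where `y_w` and `y_l` are the chosen and rejected responses for prompt `x`, `π_ref` is a frozen reference model (typically the model as it was before DPO), `β` controls how far the policy may drift from the reference, and `σ` is the logistic function.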
## Prompt Template
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: {prompt}
Assistant:
```

## How to Use it
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("yanolja/EEVE-Korean-Instruct-10.8B-v1.0")
tokenizer = AutoTokenizer.from_pretrained("yanolja/EEVE-Korean-Instruct-10.8B-v1.0")

prompt_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\nHuman: {prompt}\nAssistant:\n"
# "What is the capital of Korea? Choose from the options below.
#  (A) Gyeongseong (B) Busan (C) Pyongyang (D) Seoul (E) Jeonju"
text = '한국의 수도는 어디인가요? 아래 선택지 중 골라주세요.\n\n(A) 경성\n(B) 부산\n(C) 평양\n(D) 서울\n(E) 전주'
model_inputs = tokenizer(prompt_template.format(prompt=text), return_tensors='pt')

outputs = model.generate(**model_inputs, max_new_tokens=256)
output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(output_text)
```
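Note that `generate` returns the prompt tokens followed by the newly generated tokens, so the decoded string echoes the full template before the answer. If only the assistant's reply is needed, a small string helper can trim it (a minimal sketch; `extract_answer` is not part of this repository):

```python
def extract_answer(output_text: str) -> str:
    """Return only the text after the final 'Assistant:' marker.

    The decoded output repeats the prompt template; everything after the
    last occurrence of the marker is the model's completion. If the marker
    is absent, the input is returned unchanged (stripped).
    """
    marker = "Assistant:"
    return output_text.rsplit(marker, 1)[-1].strip()

# Works on any decoded string that contains the template marker:
demo = "Human: What is the capital of Korea?\nAssistant:\n(D) Seoul"
print(extract_answer(demo))  # (D) Seoul
```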

### Example Output
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: 한국의 수도는 어디인가요? 아래 선택지 중 골라주세요.

(A) 경성
(B) 부산
(C) 평양
(D) 서울
(E) 전주
Assistant:
(D) 서울이 한국의 수도입니다. 서울은 나라의 북동부에 위치해 있으며, 정치, 경제, 문화의 중심지입니다. 약 1,000만 명이 넘는 인구를 가진 세계에서 가장 큰 도시 중 하나입니다. 서울은 높은 빌딩, 현대적인 인프라, 활기 문화 장면으로 유명합니다. 또한, 많은 역사적 명소와 박물관이 있어 방문객들에게 풍부한 문화 체험을 제공합니다.
```

In English: the model answers "(D) Seoul is the capital of Korea," explaining that Seoul is the country's political, economic, and cultural center, one of the largest cities in the world with a population of over ten million, famous for its tall buildings, modern infrastructure, and vibrant cultural scene, and home to many historic sites and museums.

### Training Data
- Korean-translated version of [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- Korean-translated version of [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned)
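As context for the preference dataset above, DPO training data consists of preference pairs: each record couples a prompt with a preferred ("chosen") and a dispreferred ("rejected") response. The sketch below is purely illustrative of that shape; the field names follow the ultrafeedback-binarized convention, and the actual columns and message formats in the translated dataset may differ:

```python
# Illustrative only: the minimal structure of one DPO preference record.
# Field names ("prompt", "chosen", "rejected") are the common convention;
# they are an assumption here, not taken from this repository.
record = {
    "prompt": "What is the capital of Korea?",
    "chosen": "The capital of Korea is Seoul.",
    "rejected": "The capital of Korea is Busan.",
}

# A DPO trainer contrasts the log-probabilities the policy assigns to
# "chosen" versus "rejected" for the same "prompt".
print(sorted(record))  # ['chosen', 'prompt', 'rejected']
```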