---
license: llama3
language:
- ko
- en
library_name: transformers
pipeline_tag: text-generation
---
- Base model: [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B)
- Dataset: [AI Hub - Development of a hyperscale AI language model with improved Korean performance, and its data](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=71748)
### Python code with Pipeline
```python
import transformers
import torch

model_id = "VIRNECT/llama-3-Korean-8B-r-v1"

# Load the model in bfloat16 and let accelerate place it on available devices
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
pipeline.model.eval()
PROMPT = '''You are a helpful AI assistant. Please answer the user's questions kindly. 당신은 유능한 AI 어시스턴트 입니다. 사용자의 질문에 대해 친절하게 답변해주세요.'''
instruction = "화학공학이 다른 공학 분야와 어떻게 다른가요?"
messages = [
    {"role": "system", "content": PROMPT},
    {"role": "user", "content": instruction},
]

# Render the chat messages with the model's chat template
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
# Llama-3 chat models end turns with <|eot_id|>, so stop on it as well as EOS
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]
outputs = pipeline(
    prompt,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Drop the echoed prompt, keeping only the generated answer
print(outputs[0]["generated_text"][len(prompt):])
```
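
### Python code with AutoModel

For lower-level control over generation (custom stopping criteria, batching, streaming), the same chat flow can be driven through `AutoModelForCausalLM` directly. The sketch below is an illustrative equivalent of the pipeline example above, not an official snippet from the model authors; it assumes the repository ships the standard Llama-3 tokenizer and chat template that the pipeline example already relies on.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VIRNECT/llama-3-Korean-8B-r-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()

PROMPT = '''You are a helpful AI assistant. Please answer the user's questions kindly. 당신은 유능한 AI 어시스턴트 입니다. 사용자의 질문에 대해 친절하게 답변해주세요.'''
instruction = "화학공학이 다른 공학 분야와 어떻게 다른가요?"

messages = [
    {"role": "system", "content": PROMPT},
    {"role": "user", "content": instruction},
]

# Tokenize the chat directly to tensors and move them to the model's device
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=2048,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the echoed prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Sampling with `temperature=0.6` and `top_p=0.9` matches the pipeline example above; set `do_sample=False` instead if you need deterministic, reproducible output.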