---
|
language: |
|
- ko |
|
pipeline_tag: text-generation |
|
tags: |
|
- llama2 |
|
--- |
|
|
|
### Model Generation |
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "AIdenU/LLAMA-2-13b-ko-Y24_v0.1",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "AIdenU/LLAMA-2-13b-ko-Y24_v0.1",
    use_fast=True,
)

text = "안녕하세요."  # "Hello." in Korean
inputs = tokenizer(
    f"### Instruction: {text}\n\n### output:",
    return_tensors="pt",
).to(model.device)  # place inputs on the same device as the model

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.2,
    top_p=1,
    do_sample=True,
)
print(tokenizer.decode(outputs[0]))
```
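
The generation code wraps the user text in a simple `### Instruction:` / `### output:` template and decodes the full sequence, prompt included. A minimal sketch of helpers for building that prompt and keeping only the model's answer (the function names here are illustrative, not part of the model's API):

```python
def build_prompt(text: str) -> str:
    # Wrap user text in the instruction template used above.
    return f"### Instruction: {text}\n\n### output:"

def extract_answer(decoded: str) -> str:
    # Keep only what follows the "### output:" marker and drop the
    # end-of-sequence token the tokenizer may leave in the decoded text.
    answer = decoded.split("### output:", 1)[-1]
    return answer.replace("</s>", "").strip()

prompt = build_prompt("안녕하세요.")
# A decoded string would look like: prompt + " <model answer></s>"
print(extract_answer(prompt + " 안녕하세요!</s>"))
```

Passing `skip_special_tokens=True` to `tokenizer.decode` achieves the same token cleanup without the manual `</s>` replacement.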