
Model Generation

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("AIdenU/LLAMA-2-13b-ko-Y24_v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("AIdenU/LLAMA-2-13b-ko-Y24_v0.1", use_fast=True)

text = "์•ˆ๋…•ํ•˜์„ธ์š”."  # Korean example prompt: "Hello."
outputs = model.generate(
  **tokenizer(
    f"### Instruction: {text}\n\n### output:",  # prompt format expected by this model
    return_tensors='pt'
  ).to('cuda'),
  max_new_tokens=256,
  temperature=0.2,
  top_p=1,
  do_sample=True
)
print(tokenizer.decode(outputs[0]))
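
The decoded string above includes the prompt as well as the generation. If you only want the model's answer, you can slice off the prompt tokens before decoding. The following is a minimal sketch (not part of the original card) that assumes the same prompt format and generation settings shown above:

# Decode only the newly generated tokens (assumes the prompt format used above)
inputs = tokenizer(f"### Instruction: {text}\n\n### output:", return_tensors='pt').to('cuda')
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.2, top_p=1, do_sample=True)
answer = tokenizer.decode(outputs[0][inputs['input_ids'].shape[-1]:], skip_special_tokens=True)
print(answer)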