
davidkim205/Ko-Llama-3-8B-Instruct

Ko-Llama-3-8B-Instruct is one of several models being researched to improve the performance of Korean language models. It was trained with supervised fine-tuning (SFT) on a dataset built using rejection sampling.
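
As a rough sketch of the idea: rejection sampling draws several candidate responses per prompt and keeps only the highest-scoring one for fine-tuning. The snippet below is a minimal illustration under assumed components; the generator, reward model, and sampling settings are placeholders, not the actual pipeline behind sft_rs_140k.

# Minimal sketch of rejection sampling for SFT data construction.
# The generator, reward model, and settings here are illustrative
# assumptions, not the exact pipeline used for sft_rs_140k.
from transformers import pipeline

generator = pipeline("text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct")
scorer = pipeline("text-classification", model="OpenAssistant/reward-model-deberta-v3-large-v2")

def best_of_n(prompt, n=8):
    # Sample n candidate responses for the same prompt ...
    candidates = [
        generator(prompt, do_sample=True, temperature=0.8,
                  max_new_tokens=512, return_full_text=False)[0]["generated_text"]
        for _ in range(n)
    ]
    # ... and keep only the candidate the reward model scores highest.
    scores = [scorer({"text": prompt, "text_pair": c})[0]["score"] for c in candidates]
    return candidates[scores.index(max(scores))]

prompts = ["Recommend places to visit in Seoul."]  # placeholder prompt list
sft_dataset = [{"prompt": p, "response": best_of_n(p)} for p in prompts]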

Model Details

  • Model Developers: davidkim (Changyeon Kim)
  • Repository: -
  • Base model: meta-llama/Meta-Llama-3-8B-Instruct
  • SFT dataset: sft_rs_140k

Requirements

If the undefined symbol error below occurs, reinstall torch and flash-attn with matching versions:

...
RuntimeError: Failed to import transformers.models.llama.modeling_llama because of the following error (look up to see its traceback):
/home/david/anaconda3/envs/spaces/lib/python3.10/site-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c104cuda9SetDeviceEi

pip install torch==2.2.0
pip install flash-attn==2.5.9.post1
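
If the install succeeded, a quick import check (a sanity test added here for convenience, not part of the original fix) should print both versions without the undefined symbol error:

python -c "import torch, flash_attn; print(torch.__version__, flash_attn.__version__)"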

How to use

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch


model_id = "davidkim205/Ko-Llama-3-8B-Instruct"

# Load the tokenizer and the model in bfloat16, sharded across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)


while True:
    prompt = input('>')
    # System prompt (Korean): "You are a chatbot that answers concretely."
    messages = [
        {"role": "system", "content": "당신은 ꡬ체적으둜 λ‹΅λ³€ν•˜λŠ” μ±—λ΄‡μž…λ‹ˆλ‹€."},
        {"role": "user", "content": prompt},
    ]
    # Render the Llama 3 chat template and append the assistant header
    # so generation starts at the model's reply.
    input_ids = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        return_tensors="pt"
    ).to(model.device)

    # Stop on either the regular EOS token or Llama 3's end-of-turn token.
    terminators = [
        tokenizer.eos_token_id,
        tokenizer.convert_tokens_to_ids("<|eot_id|>")
    ]

    outputs = model.generate(
        input_ids,
        max_new_tokens=1024,
        eos_token_id=terminators,
        do_sample=True,
        temperature=0.6,
        top_p=0.9,
    )
    # Decode only the newly generated tokens, excluding the prompt.
    response = outputs[0][input_ids.shape[-1]:]
    print(tokenizer.decode(response, skip_special_tokens=True))
μ‚¬κ³Όμ˜ 의미λ₯Ό μ„€λͺ…ν•˜μ‹œμ˜€
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:128009 for open-end generation.
μ‚¬κ³ΌλŠ” 일반적으둜 맛과 μ˜μ–‘κ°€ μžˆλŠ” 과일둜 μ•Œλ €μ Έ μžˆμŠ΅λ‹ˆλ‹€. μ‚¬κ³ΌλŠ” μ‹ μ„ ν•œ μƒνƒœμ—μ„œ 주둜 λ¨Ήκ±°λ‚˜, μš”κ±°νŠΈλ‚˜ μŠ€λ¬΄λ”” λ“±μ˜ μŒλ£Œμ— ν˜Όν•©ν•˜μ—¬ μ„­μ·¨λ˜κΈ°λ„ ν•©λ‹ˆλ‹€. λ˜ν•œ, μ‚¬κ³ΌλŠ” λ‹€μ–‘ν•œ μ’…λ₯˜κ°€ 있으며, 각각의 μ’…λ₯˜λŠ” λ‹€λ₯Έ 색상과 맛을 가지고 μžˆμŠ΅λ‹ˆλ‹€.

μ‚¬κ³ΌλŠ” κ³ΌμΌμ΄μ§€λ§Œ, μ’…μ’… λ‹€λ₯Έ μ˜λ―Έλ‘œλ„ μ‚¬μš©λ©λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄, "사과"λΌλŠ” λ‹¨μ–΄λŠ” μ–΄λ–€ 것이 잘λͺ»λ˜κ±°λ‚˜ λΆ€μ‘±ν•œ 것을 μ‹œμ‚¬ν•˜λŠ” μƒν™©μ—μ„œ μ‚¬μš©λ  μˆ˜λ„ μžˆμŠ΅λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄, "사과"λ₯Ό μ£ΌλŠ” 것은 잘λͺ»λœ ν–‰λ™μ΄λ‚˜ λΆ€μ‘±ν•œ μ‚¬κ³ λ‘œ μΈν•œ 사과λ₯Ό μ˜λ―Έν•  수 μžˆμŠ΅λ‹ˆλ‹€.

λ˜ν•œ, "사과"λŠ” μ–΄λ–€ μƒν™©μ—μ„œ λ‹€λ₯Έ μ‚¬λžŒμ—κ²Œμ„œ 사과λ₯Ό λ°›λŠ” 것을 μ˜λ―Έν•˜κΈ°λ„ ν•©λ‹ˆλ‹€. 예λ₯Ό λ“€μ–΄, "사과"λ₯Ό κ΅¬ν•˜μ§€ μ•ŠμœΌλ©΄ μ–΄λ–€ μƒν™©μ—μ„œ λ‹€λ₯Έ μ‚¬λžŒμ—κ²Œμ„œ 사과λ₯Ό 받지 λͺ»ν•  μˆ˜λ„ μžˆμŠ΅λ‹ˆλ‹€.

λ”°λΌμ„œ, "사과"λŠ” λ‹€μ–‘ν•œ 의미둜 μ‚¬μš©λ˜λŠ” 단어이며, λ§₯락에 따라 λ‹€λ₯Έ 의미λ₯Ό κ°€μ§ˆ 수 μžˆμŠ΅λ‹ˆλ‹€.

Benchmark

kollm_evaluation

https://github.com/davidkim205/kollm_evaluation
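
kollm_evaluation builds on lm-evaluation-harness, so reproducing scores like those below would look roughly like the following harness-style invocation (the repo's own scripts and exact task names may differ; treat this as an assumed command line, not the repo's documented usage):

lm_eval --model hf \
    --model_args pretrained=davidkim205/Ko-Llama-3-8B-Instruct,dtype=bfloat16 \
    --tasks kobest_boolq,kobest_copa,kobest_hellaswag,kobest_sentineg,kobest_wic \
    --device cuda:0 \
    --batch_size 8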

task               acc
average            0.47
kobest             0.54
kobest_boolq       0.57
kobest_copa        0.62
kobest_hellaswag   0.42
kobest_sentineg    0.57
kobest_wic         0.49
ko_truthfulqa      0.29
ko_mmlu            0.34
ko_hellaswag       0.36
ko_common_gen      0.76
ko_arc_easy        0.33

Evaluation with keval

keval is an evaluation model trained on the prompts and datasets used in benchmarks for Korean language models. It is one of several approaches to evaluating models with a ChatGPT-style judge, intended to compensate for the shortcomings of the existing lm-evaluation-harness.

https://huggingface.co/davidkim205/keval-7b
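
As a minimal sketch of using keval-7b as a judge, it can be loaded like any causal LM and prompted to score a question/answer pair. The judging prompt below is an illustrative assumption; the actual template keval-7b was trained on is defined in its own model card.

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

judge_id = "davidkim205/keval-7b"
tokenizer = AutoTokenizer.from_pretrained(judge_id)
judge = AutoModelForCausalLM.from_pretrained(
    judge_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

question = "Explain the meaning of the word apple."
answer = "An apple is a fruit; in Korean, the same word can also mean an apology."
# Hypothetical judging prompt; keval-7b's real template may differ.
prompt = f"Score the following answer from 1 to 10.\nQuestion: {question}\nAnswer: {answer}\nScore:"

inputs = tokenizer(prompt, return_tensors="pt").to(judge.device)
outputs = judge.generate(**inputs, max_new_tokens=8, do_sample=False)
# Decode only the judge's verdict, excluding the prompt tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))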

model                                average   kullm   logickor   wandb
openai/gpt-4                         6.79      4.66    8.51       7.21
openai/gpt-3.5-turbo                 6.25      4.48    7.29       6.99
davidkim205/Ko-Llama-3-8B-Instruct   5.59      4.24    6.46       6.06

Evaluation with ChatGPT

model                                average   kullm   logickor   wandb
openai/gpt-4                         7.30      4.57    8.76       8.57
openai/gpt-3.5-turbo                 6.53      4.26    7.50       7.82
davidkim205/Ko-Llama-3-8B-Instruct   5.45      4.22    6.49       5.64