---
base_model: beomi/Llama-3-Open-Ko-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

- **Developed by:** lwef
- **License:** apache-2.0
- **Finetuned from model:** beomi/Llama-3-Open-Ko-8B

# Korean dialogue summary fine-tuned model

## How to use

```python
from unsloth import FastLanguageModel

# Prompt template used for fine-tuning. In English it reads:
# "Please summarize the dialogue below. Each turn is formatted as '#speaker#: utterance'."
prompt_template = '''아래 대화를 요약해 주세요. 대화 형식은 '#대화 참여자#: 대화 내용'입니다.
### 대화 >>>{dialogue}
### 요약 >>>'''

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "lwef/llama3-8B-ko-dialogue-summary-finetuned",
    max_seq_length = 2048,
    dtype = None,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # enable native 2x faster inference

# Sample Korean chat dialogue to summarize.
dialogue = '''#P01#: 아 행삶 과제 너무 어려워... 5쪽 쓸게 없는데 ㅡㅡ
#P02#: 몬냐몬냐너가더잘써 ㅎㅎ
#P01#: 5쪽 대충 의식의 흐름대로 쭉 써야지..이제 1쪽씀 ;; 5쪽 에는 네줄만 적어야지
#P02#: 안대... 뭔가분량중요할거같아 거의꽉채워서쓰셈
#P01#: 못써 쓸말업써
#P02#: 이거중간대체여??
#P01#: ㄴㄴ 그냥 과제임 그래서 더 짜증남'''

formatted_prompt = prompt_template.format(dialogue=dialogue)

# Tokenize the prompt and move it to the GPU.
inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda")

outputs = model.generate(
    **inputs,
    max_new_tokens = 128,
    eos_token_id = tokenizer.eos_token_id,  # stop generation explicitly at the EOS token
    use_cache = True,
)

decoded_outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)
result = decoded_outputs[0]
print(result)

# Keep only the text generated after the '### 요약 >>>' (summary) marker.
result = result.split('### 요약 >>>')[-1].strip()
print(result)
```

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. I highly recommend checking out the Unsloth notebooks.
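If Unsloth is not available in your environment, the checkpoint can most likely be loaded with the plain transformers + bitsandbytes stack as well. The sketch below is not part of the original recipe: it assumes the repository contains standard merged Llama weights (if it only holds a LoRA adapter, load it with PEFT instead), and it reuses the `prompt_template` and `dialogue` defined in the snippet above.

```python
# Minimal sketch, assuming the repo ships merged Llama weights (not only a LoRA adapter).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "lwef/llama3-8B-ko-dialogue-summary-finetuned"

# Mirror the 4-bit loading used in the Unsloth example.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Reuse prompt_template and dialogue from the example above.
prompt = prompt_template.format(dialogue=dialogue)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, eos_token_id=tokenizer.eos_token_id)

# Keep only the generated summary after the '### 요약 >>>' marker.
summary = tokenizer.decode(outputs[0], skip_special_tokens=True).split('### 요약 >>>')[-1].strip()
print(summary)
```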