
Yi-Ko-6B-Instruct-v1.0

Model Details

Base Model

beomi/Yi-Ko-6B

Training Dataset

  1. kyujinpy/KOR-OpenOrca-Platypus-v3 🙇
  2. beomi/KoAlpaca-v1.1a 🙇
  3. maywell/ko_wikidata_QA 🙇
  4. AIHub MRC data, curated and converted to the Instruction Format before use

Benchmark Results

AI-Harness Evaluation

https://github.com/Beomi/ko-lm-evaluation-harness

Zero-shot results:

| Model                  | kobest_boolq | kobest_copa | kobest_hellaswag | kobest_sentineg | korunsmile | pawsx_ko |
|------------------------|--------------|-------------|------------------|-----------------|------------|----------|
| Yi-Ko-6B-Instruct-v1.0 | 0.6619       | 0.7794      | 0.4858           | 0.4589          | 0.3520     | 0.5545   |
| Yi-Ko-6B               | 0.7070       | 0.7696      | 0.5009           | 0.4044          | 0.3828     | 0.5145   |

Instruction Format

### User:
{instruction}

### Assistant:
{response}
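
A minimal sketch of assembling this template in Python (the build_prompt helper name is mine, not part of this repository):

def build_prompt(instruction: str) -> str:
    # Wrap a user instruction in the template above; the model is expected
    # to continue generating after the final "### Assistant:" header.
    return f"### User:\n{instruction}\n\n### Assistant:\n"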

Loading the Model

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("wkshin89/Yi-Ko-6B-Instruct-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "wkshin89/Yi-Ko-6B-Instruct-v1.0",
    device_map="auto",           # place layers on available GPUs/CPU automatically
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
)
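
Continuing from the snippet above, a hedged generation example using the instruction format; the sampling settings and the example question are illustrative assumptions, not values documented for this model:

# Prompt in the documented instruction format (the question is illustrative).
prompt = "### User:\n대한민국의 수도는 어디인가요?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=256,  # illustrative limit, not a documented default
    do_sample=True,      # illustrative sampling settings
    temperature=0.7,
)

# Decode only the tokens generated after the prompt.
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(response)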
Model size: 6.18B params (Safetensors, BF16)
