
kogrammar-distil

Dataset: 국립국어원 맞춤법 교정 말뭉치 (National Institute of Korean Language spelling-correction corpus)

Backbone Model:

Contributors:

GitHub Repo:
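
A minimal usage sketch follows. It assumes the checkpoint is a standard Hugging Face seq2seq model under the `theSOL1/kogrammar-distil` repository id; the example sentence and generation settings are illustrative, not taken from the card.

```python
# Hypothetical inference sketch: load the distilled corrector and fix one sentence.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "theSOL1/kogrammar-distil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "나는 어제 학교에 갓다"  # contains a spelling error ("갓다" should be "갔다")
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```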

Training Method

Approximately 67.5% of the full dataset was used as training data.
Using SFT distillation, the decoder layers of the kogrammar-base model were reduced from 6 to 3, and the smaller model was retrained.
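
The sketch below illustrates one way this layer-reduction plus SFT-distillation step could look. It assumes kogrammar-base is a BART-style encoder-decoder on the Hugging Face Hub; the hub id, the indices of the layers kept, and the training details are assumptions, not taken from the card.

```python
# Hypothetical sketch: build a 3-layer student from a 6-layer teacher,
# then fine-tune it on the teacher's outputs (sequence-level SFT distillation).
import copy
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

teacher = AutoModelForSeq2SeqLM.from_pretrained("theSOL1/kogrammar-base")  # assumed id
tokenizer = AutoTokenizer.from_pretrained("theSOL1/kogrammar-base")

# Keep 3 of the teacher's 6 decoder layers (which 3 were kept is not stated).
student = copy.deepcopy(teacher)
kept = [0, 2, 5]
student.model.decoder.layers = torch.nn.ModuleList(
    [student.model.decoder.layers[i] for i in kept]
)
student.config.decoder_layers = len(kept)

# One SFT step: the teacher's generated correction serves as the target.
src = tokenizer("맞춤법이 틀린 문장", return_tensors="pt")
with torch.no_grad():
    target_ids = teacher.generate(**src, max_new_tokens=64)
labels = target_ids[:, 1:].clone()            # drop the decoder start token
labels[labels == tokenizer.pad_token_id] = -100  # mask padding out of the loss
loss = student(**src, labels=labels).loss
loss.backward()  # in practice, wrapped in a full optimizer/training loop
```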

Metric

| BLEU-2 | ROUGE-2 F1 |
| ------ | ---------- |
| 77.8%  | 55.0%      |
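
For reference, the sketch below shows one way these two metrics can be computed for a single prediction/reference pair. The tokenization behind the reported numbers is not stated on the card; simple whitespace tokens are assumed here.

```python
# Hypothetical metric sketch: BLEU-2 via NLTK, ROUGE-2 F1 computed directly
# from bigram overlap (the rouge_score package's default tokenizer drops
# non-Latin characters, so it is avoided for Korean text).
from collections import Counter
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def rouge2_f1(ref_tokens, pred_tokens):
    ref_bi = Counter(zip(ref_tokens, ref_tokens[1:]))
    pred_bi = Counter(zip(pred_tokens, pred_tokens[1:]))
    overlap = sum((ref_bi & pred_bi).values())  # clipped bigram matches
    if overlap == 0:
        return 0.0
    p = overlap / sum(pred_bi.values())
    r = overlap / sum(ref_bi.values())
    return 2 * p * r / (p + r)

reference = "나는 어제 학교에 갔다".split()
prediction = "나는 어제 학교에 갔다".split()

# BLEU-2: uniform weights over unigram and bigram precision.
bleu2 = sentence_bleu([reference], prediction, weights=(0.5, 0.5),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-2: {bleu2:.3f}, ROUGE-2 F1: {rouge2_f1(reference, prediction):.3f}")
```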