Update README.md
README.md CHANGED
@@ -36,20 +36,7 @@ ollama run benedict/linkbricks-mistral-nemo-korean:12b
 Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics, fine-tuned the Mistral-Nemo-Instruct-2407 base model with SFT->DPO using four H100-80Gs on KT-CLOUD.
 It is a Korean language model trained to handle complex Korean logic problems through Korean-Chinese-English-Japanese cross-training data and logical data; the tokenizer is the base model's, without vocabulary expansion.
 <br><br>
-Benchmark (Open Ko LLM Leader Board Season 2 : No. 1)<br>
-Model : Saxo/Linkbricks-Horizon-AI-Korean-Gemma-2-sft-dpo-27B<br>
-Average : 51.37<br>
-Ko-GPQA : 25.25<br>
-Ko-Winogrande : 68.27<br>
-Ko-GSM8k : 70.96<br>
-Ko-EQ Bench : 50.25<br>
-Ko-IFEval : 49.84<br>
-KorNAT-CKA : 34.59<br>
-KorNAT-SVA : 48.42<br>
-Ko-Harmlessness : 65.66<br>
-Ko-Helpfulness : 49.12<br>
 
-<br><br>
 
 
 www.linkbricks.com, www.linkbricks.vc
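The hunk header above carries the README's own usage line, `ollama run benedict/linkbricks-mistral-nemo-korean:12b`. A minimal sketch of querying that same tag programmatically through Ollama's local HTTP API, assuming a default daemon on port 11434 and an already-pulled model (the Korean prompt string is only an illustration):

```python
import json
import urllib.request

# Prompt the locally served model tag from the README's usage line.
# Assumes the Ollama daemon is running on its default port (11434) and that
# `ollama run benedict/linkbricks-mistral-nemo-korean:12b` (or `ollama pull`)
# has already downloaded the model; the prompt text is only an illustration.
payload = {
    "model": "benedict/linkbricks-mistral-nemo-korean:12b",
    "prompt": "한국어로 간단히 자기소개를 해주세요.",
    "stream": False,  # return one JSON object instead of streamed chunks
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read().decode("utf-8"))["response"])
```

With `"stream": False`, Ollama replies with a single JSON object whose `response` field holds the full completion; omitting it yields a stream of partial-chunk objects instead.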