
Model Card for Saxo/Linkbricks-Horizon-AI-Korean-Gemma-2-sft-dpo-27B

AI ์™€ ๋น…๋ฐ์ดํ„ฐ ๋ถ„์„ ์ „๋ฌธ ๊ธฐ์—…์ธ Linkbricks์˜ ๋ฐ์ดํ„ฐ์‚ฌ์ด์–ธํ‹ฐ์ŠคํŠธ์ธ ์ง€์œค์„ฑ(Saxo) ๋ฐ•์‚ฌ๊ฐ€ gemma-2-27b-it ๋ฒ ์ด์Šค๋ชจ๋ธ์„ H100-80G 8๊ฐœ๋ฅผ ํ†ตํ•ด SFT->DPO ํŒŒ์ธ ํŠœ๋‹์„ ํ•œ ํ•œ๊ธ€ ์–ธ์–ด ๋ชจ๋ธ๋กœ ํ•œ๊ตญ์–ด-์ค‘๊ตญ์–ด-์˜์–ด-์ผ๋ณธ์–ด ๊ต์ฐจ ํ•™์Šต ๋ฐ์ดํ„ฐ์™€ ๋กœ์ง€์ปฌ ๋ฐ์ดํ„ฐ๋ฅผ ํ†ตํ•˜์—ฌ ํ•œ์ค‘์ผ์˜ ์–ธ์–ด ๊ต์ฐจ ์ฆ๊ฐ• ์ฒ˜๋ฆฌ์™€ ๋ณต์žกํ•œ ํ•œ๊ธ€ ๋…ผ๋ฆฌ ๋ฌธ์ œ ์—ญ์‹œ ๋Œ€์‘ ๊ฐ€๋Šฅํ•˜๋„๋ก ํ›ˆ๋ จํ•œ ๋ชจ๋ธ์ด๋ฉฐ ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‹จ์–ด ํ™•์žฅ ์—†์ด ๋ฒ ์ด์Šค ๋ชจ๋ธ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉ. ํŠนํžˆ ๊ณ ๊ฐ ๋ฆฌ๋ทฐ๋‚˜ ์†Œ์…œ ํฌ์ŠคํŒ… ๊ณ ์ฐจ์› ๋ถ„์„ ๋ฐ ์ฝ”๋”ฉ๋“ฑ์ด ๊ฐ•ํ™”๋œ ๋ชจ๋ธ -Deepspeed Stage=3, rslora ์‚ฌ์šฉ
ollama run benedict/linkbricks-gemma2-korean:27b
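Once the command above has pulled the model and the local Ollama server is running, it can also be queried programmatically over Ollama's local REST API. A minimal sketch, assuming the default port 11434 and an illustrative prompt:

```python
# Minimal sketch: query the Ollama model over the local REST API.
# Assumes `ollama run benedict/linkbricks-gemma2-korean:27b` has pulled the
# model and the server is listening on the default port 11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "benedict/linkbricks-gemma2-korean:27b",
        "prompt": "서울에서 가장 유명한 관광지를 알려줘.",  # "Tell me Seoul's most famous tourist spots."
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=300,
)
print(resp.json()["response"])
```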

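For direct use with transformers, a minimal loading sketch follows; it assumes a recent transformers release with Gemma 2 support and enough GPU memory for the 27B checkpoint in BF16 (its native tensor type), and the prompt is illustrative.

```python
# Minimal sketch: load the checkpoint in BF16 and generate a reply
# using the Gemma 2 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Saxo/Linkbricks-Horizon-AI-Korean-Gemma-2-sft-dpo-27B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "한국어로 자기소개를 해줘."}]  # illustrative prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```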

www.linkbricks.com, www.linkbricks.vc

Model size: 27.2B params (Safetensors, BF16)

Model tree for Saxo/Linkbricks-Horizon-AI-Korean-Gemma-2-sft-dpo-27B

Base model: google/gemma-2-27b
Quantizations of this model: 41

Datasets used to train Saxo/Linkbricks-Horizon-AI-Korean-Gemma-2-sft-dpo-27B