
Model Card for Saxo/Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B

Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big-data analytics, fine-tuned the Meta-Llama-3.1-8B-Instruct base model via SFT->DPO on eight H100-80G GPUs on KT-CLOUD. It is a Korean-language model trained on Korean-Chinese-English-Japanese cross-lingual data together with logical-reasoning data, so it supports cross-lingual augmentation across Korean, Chinese, and Japanese and can handle complex Korean logic problems. The tokenizer is the base model's, used as-is with no vocabulary expansion. The model is especially strengthened for high-level analysis of customer reviews and social media posts and for coding; it offers a 128k-token context window and tool calling, and was trained with DeepSpeed Stage 3 and rsLoRA.

Quick start with Ollama:

ollama run benedict/linkbricks-llama3.1-korean:8b
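
Below is a minimal inference sketch using the Hugging Face transformers library. The repo id is taken from the model tree below; the dtype matches the published BF16 tensors, but the generation parameters are illustrative choices, not the author's documented settings.

```python
# Minimal sketch: load the model in BF16 (the published tensor type) and chat.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Saxo/Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",           # spread across available GPUs
)

messages = [
    {"role": "system", "content": "You are a helpful Korean-language assistant."},
    {"role": "user", "content": "안녕하세요! 간단히 자기소개를 해주세요."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Since the card advertises tool calling, here is a hedged sketch of exposing a tool through the chat template. It assumes a transformers version with tool-use template support (>= 4.42); get_current_weather is a hypothetical function defined only for illustration, not part of this model's release.

```python
# Hypothetical tool for illustration; apply_chat_template derives the JSON
# schema from the type hints and the Google-style docstring.
def get_current_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: The city to look up.
    """
    return "sunny, 22C"  # stubbed result for the demo

tool_prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "서울 날씨 어때?"}],
    tools=[get_current_weather],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
# Generate as above; the model may emit a JSON tool call that your code
# executes before feeding the result back as a "tool" role message.
```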

www.linkbricks.com, www.linkbricks.vc

Model size: 8.03B params (Safetensors, BF16)

Model tree for Saxo/Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B

Base model: Meta-Llama-3.1-8B-Instruct
Merges: 1 model
Quantizations: 2 models

Datasets used to train Saxo/Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B

Spaces using Saxo/Linkbricks-Horizon-AI-Korean-llama-3.1-sft-dpo-8B: 4