Model Card for lianghsun/Llama-3.2-Taiwan-3B-Instruct
Image generated with OpenArt: An anime-style 🦙 standing proudly atop the summit of Taiwan's Yushan (Jade Mountain), gazing forward.
Built on lianghsun/Llama-3.2-Taiwan-3B as the foundation model, this model was instruction fine-tuned and then aligned over multiple rounds of direct preference optimization (DPO) on a large corpus of Traditional Chinese dialogues from Taiwan (ROC) together with multilingual dialogue datasets, with the goal of producing a small language model (SLM) chat model with knowledge of Taiwan and a Taiwanese style.
Model Change Log
Update Date | Model Version | Key Changes |
---|---|---|
2024/12/13 | v2024.12.13 | Completed 1st round DPO training (10/10 epochs). Preparing for next round DPO training. |
2024/11/27 | v2024.11.27 | Completed SFT training (5/5 epochs). Preparing for multi-round DPO training. |
2024/11/25 | v2024.11.25 | Updated model version to v2024.11.25, training progressed to (3/5) epochs. Still in SFT stage, DPO training remains pending. |
2024/11/22 | v2024.11.22 | Initial upload: Model version v2024.11.22, training completed up to (1/5) epochs. Currently trained only on SFT, DPO training not yet performed. |
Model Details
Model Description
- Developed by: Huang Liang Hsun
- Model type: LlamaForCausalLM
- Language(s) (NLP): Traditional Chinese (zh-tw), English
- License: llama3.2
- Fine-tuned from model: lianghsun/Llama-3.2-Taiwan-3B
Model Sources
- Repository: lianghsun/Llama-3.2-Taiwan-3B
- Paper: (WIP, show me the time)
- Playground: 🦉 Tawian SmolLM Chat 👈🏼 give it a try 😻
- Demo:
user: 請介紹台灣
assistant: 台灣,位於亞洲東部,地處太平洋與菲律賓海之間,面積約36,000平方公里,人口約2,300萬,是民主自由的國家,經濟實力強勁,擁有世界第10大經濟體。台灣以美食、文化、自然美景著稱,還有豐富的歷史與傳統,吸引全球遊客。台灣語為官方語言,但中文也廣為使用,英語也常用於國際交流。台灣政治多元,執政黨為民進黨,台灣是全球科技產業的重鎮,擁有先進的製造業與服務業。台灣氣候溫暖潮濕,四季分明,夏季炎熱,冬季涼爽,雨季則在5月至10月。台灣的美食以小吃為主,如滷肉飯、珍珠
Uses
Direct Use
The model already supports conversation in Traditional Chinese; users can deploy it directly to an inference endpoint.
Downstream Use
To strengthen the model's knowledge of a specific domain, it can be further fine-tuned to improve performance and domain expertise.
Out-of-Scope Use
This model is intended to provide information only; it does not take positions on, or pass judgment over, political or legal questions.
Bias, Risks, and Limitations
Because of the diversity of the training data, generated content may contain biases, particular viewpoints, or statements that are factually incorrect. Users should carefully verify the accuracy and neutrality of the output.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
How to Get Started with the Model
To serve this model with the vLLM Docker image, run:
```shell
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model lianghsun/Llama-3.2-Taiwan-3B-Instruct
```
Note: to load a different checkpoint version, append --revision <tag_name>:
```shell
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model lianghsun/Llama-3.2-Taiwan-3B-Instruct --revision <tag_name>
```
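Once the container is up, the server exposes an OpenAI-compatible chat completions API on port 8000. A minimal Python sketch of building such a request (the localhost URL, prompt, and sampling parameters are illustrative assumptions, not part of the model card):

```python
import json
from urllib import request

# The vLLM container above serves an OpenAI-compatible API on port 8000.
# Prompt and sampling parameters below are illustrative.
payload = {
    "model": "lianghsun/Llama-3.2-Taiwan-3B-Instruct",
    "messages": [{"role": "user", "content": "請介紹台灣"}],
    "max_tokens": 256,
    "temperature": 0.7,
}

def build_request(url: str = "http://localhost:8000/v1/chat/completions") -> request.Request:
    """Build the POST request without sending it (no running server required)."""
    return request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request()
# Against a running server you would then do:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```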
Training Details
Training Data
Traditional Chinese dialogue datasets
- lianghsun/tw-legal-nlp
- lianghsun/tw-legal-synthetic-qa
- lianghsun/tw-law-article-qa
- lianghsun/tw-judgment-qa
- lianghsun/tw-bar-examination-2020-chat
- lianghsun/tw-emergency-medicine-bench
- lianghsun/tw-structured-law-article
- lianghsun/tw-judgment-gist-chat
- lianghsun/vulnerability-mitigation-qa-zh_tw
- lianghsun/tw-legal-qa-chat
- lianghsun/reasoning-base-20k-chat
- lianghsun/tw-contract-review-chat
- lianghsun/tw-legal-methodology-chat
- benchang1110/Belle-Taide
- minyichen/glaive_toolcall_zh_tw
Multilingual dialogue datasets
Training Procedure
Preprocessing [optional]
[More Information Needed]
Training Hyperparameters
SFT stage for v2024.11.27
Note: the hyperparameter settings below also apply to v2024.11.22 and v2024.11.25.
- learning_rate: 5e-05
- min_learning_rate: 5e-07
- train_batch_size: 105
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 50
- total_train_batch_size: 21,000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 5.0
- global_step: 590
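The reported total train batch size follows directly from the other hyperparameters: per-device batch size times number of GPUs times gradient accumulation steps. A quick check:

```python
# Effective (total) train batch size from the listed hyperparameters:
# per-device batch size x number of devices x gradient accumulation steps.
train_batch_size = 105
num_devices = 4
gradient_accumulation_steps = 50

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)  # 21000, matching the value reported above
```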
Speeds, Sizes, Times
SFT stage for v2024.11.27
Note: the figures below also cover v2024.11.22 and v2024.11.25.
- Duration: 5 days, 16:15:11.17
- Train runtime: 490,511.1789 seconds
- Train samples per second: 25.37
- Train steps per second: 0.001
- Total training FLOPs: 26,658,386,120,540,160
- Train loss: 0.8533
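The duration and train runtime above describe the same wall-clock span, which a unit conversion confirms:

```python
from datetime import timedelta

# 490,511.1789 seconds corresponds to the reported duration of
# 5 days, 16:15:11 (plus fractional seconds).
runtime = timedelta(seconds=490511.1789)
print(runtime)
```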
Evaluation
Testing Data, Factors & Metrics
Formosa Taiwan Knowledge Bench
Testing Data
Factors
[More Information Needed]
Metrics
[More Information Needed]
Results
[More Information Needed]
Summary
lianghsun/tw-legal-benchmark-v1
Testing Data
- Dataset: lianghsun/tw-legal-benchmark-v1
- Revision: 66c3a5f3ff2298f6a1cf23201070b5317bdd1893
Factors
[More Information Needed]
Metrics
Accuracy
Results
- Model Revision: v2024.11.27
Subset | Split | Score |
---|---|---|
lianghsun/tw-legal-benchmark-v1 | train | 31.1 |
Summary
tmmlu++
Testing Data
- Dataset: ikala/tmmluplus
- Revision: c0e8ae955997300d5dbf0e382bf0ba5115f85e8c
Factors
[More Information Needed]
Metrics
Accuracy
Results
- Model Revision: v2024.11.27
Summary
For model version v2024.11.27, neither the foundation model (lianghsun/Llama-3.2-Taiwan-3B) nor the instruction-tuned model (lianghsun/Llama-3.2-Taiwan-3B-Instruct) was trained on the tmmlu++ dataset, to keep the evaluation fair. Current results on tmmlu++ are generally below a passing score; incorporating domain-specific datasets may be needed to strengthen the foundation model.
Model Examination [optional]
[More Information Needed]
Environmental Impact
- Hardware Type: 🚀
- Hours used: ⏳⏳⌛
- Cloud Provider: 鴻鵠國際股份有限公司
- Compute Region: 🇹🇼
- Carbon Emitted: ♻️
Technical Specifications
Model Architecture and Objective
[More Information Needed]
Compute Infrastructure
[More Information Needed]
Hardware
- CPU count: 32
- Logical CPU count: 64
- GPU count: 4
- GPU type: NVIDIA H100 NVL
Software
- OS version: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.12.7
Citation
```bibtex
@misc{lianghsun2024llama32taiwan3binstruct,
  author       = {Huang, Liang Hsun},
  title        = {Llama-3.2-Taiwan-3B-Instruct},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B-Instruct}},
  note         = {Accessed: 2024-11-25}
}
```
Glossary [optional]
N/A
More Information
Acknowledge
Many thanks to Mr. 蔡長明 of 鴻鵠國際股份有限公司 for generously sponsoring the compute free of charge, and to the friends who helped along the way: 廖振翔, chweng, Ben, kevin, Maxxchu, Lam, 陳林彥, and others. Without them this model could not have been trained; those who provide compute are like family.
Usage
If you fine-tune on top of this instruction model, please kindly mark the base model in your model card as:
base_model: lianghsun/Llama-3.2-Taiwan-3B-Instruct
An attribution and a ❤️ are the greatest encouragement you can give us. Thank you. 😀
Model Card Authors
Model Card Contact
Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.20.0
Evaluation results
All scores are self-reported, task type single choice.

Dataset | Split | Score |
---|---|---|
tw-legal-benchmark-v1 | train | 31.10 |
Formosa Taiwan Knowledge Bench (Society) | test | 60.42 |
Formosa Taiwan Knowledge Bench (Government) | test | 44.25 |
Formosa Taiwan Knowledge Bench (Geography) | test | 47.54 |
Formosa Taiwan Knowledge Bench (History) | test | 60.00 |
tmmlu++ (geography_of_taiwan) | test | 36.20 |
tmmlu++ (dentistry) | test | 33.83 |
tmmlu++ (technical) | test | 35.07 |
tmmlu++ (statistics_and_machine_learning) | test | 28.57 |
tmmlu++ (clinical_psychology) | test | 29.60 |