
Model Card for lianghsun/Llama-3.2-Taiwan-3B-Instruct


(Cover image generated with OpenArt: an anime-style 🦙 standing proudly atop the summit of Taiwan's Yushan (Jade Mountain), gazing forward.)

Built on the lianghsun/Llama-3.2-Taiwan-3B foundation model, this model was trained with instruction fine-tuning and multiple rounds of direct preference optimization (DPO) on a large corpus of Traditional Chinese (Taiwan, R.O.C.) dialogue data plus multilingual dialogue data. The goal is a small language model (SLM) chat model with Taiwanese knowledge and style.

Model Change Log
| Update Date | Model Version | Key Changes |
|---|---|---|
| 2024/12/13 | v2024.12.13 | Completed 1st round of DPO training (10/10 epochs). Preparing the next DPO round. |
| 2024/11/27 | v2024.11.27 | Completed SFT training (5/5 epochs). Preparing for multi-round DPO training. |
| 2024/11/25 | v2024.11.25 | Training progressed to 3/5 epochs. Still in the SFT stage; DPO training pending. |
| 2024/11/22 | v2024.11.22 | Initial upload; trained up to 1/5 epochs. SFT only; DPO not yet performed. |

Model Details

Model Description

Model Sources

user: 請介紹台灣
assistant: 台灣,位於亞洲東部,地處太平洋與菲律賓海之間,面積約36,000平方公里,人口約2,300萬,是民主自由的國家,經濟實力強勁,擁有世界第10大經濟體。台灣以美食、文化、自然美景著稱,還有豐富的歷史與傳統,吸引全球遊客。台灣語為官方語言,但中文也廣為使用,英語也常用於國際交流。台灣政治多元,執政黨為民進黨,台灣是全球科技產業的重鎮,擁有先進的製造業與服務業。台灣氣候溫暖潮濕,四季分明,夏季炎熱,冬季涼爽,雨季則在5月至10月。台灣的美食以小吃為主,如滷肉飯、珍珠
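Like other Llama 3.x Instruct models, this model consumes prompts in the Llama 3 chat format. A minimal sketch of how an exchange like the one above is rendered into a prompt string (in practice `tokenizer.apply_chat_template` does this for you; the helper name here is illustrative):

```python
# Sketch of the Llama 3 chat prompt format; normally produced by
# tokenizer.apply_chat_template rather than by hand.

def render_llama3_prompt(messages):
    """Render a list of {role, content} messages into a Llama 3 prompt string."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # A trailing assistant header cues the model to start generating.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = render_llama3_prompt([{"role": "user", "content": "請介紹台灣"}])
print(prompt)
```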

Uses

Direct Use

The model already has Traditional Chinese conversational ability; users can deploy it directly to an inference endpoint.

Downstream Use

To strengthen the model's knowledge of a specific domain, it can be further fine-tuned to improve performance and domain expertise.

Out-of-Scope Use

This model is intended to provide information; it does not judge or take positions on political or legal questions.

Bias, Risks, and Limitations

Due to the diversity of the training data, generated content may carry biases or particular viewpoints, or include statements that are factually incorrect. Users should carefully verify the accuracy and neutrality of the output.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

How to Get Started with the Model

To serve this model with the vLLM Docker image, run:

docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model lianghsun/Llama-3.2-Taiwan-3B-Instruct

Note: to serve a different checkpoint version, append --revision <tag_name>:

docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model lianghsun/Llama-3.2-Taiwan-3B-Instruct --revision <tag_name>
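Once the container is up, vLLM exposes an OpenAI-compatible API on port 8000. A minimal client sketch using only the standard library (the base URL assumes the `docker run` above; the payload follows the OpenAI chat-completions schema that vLLM implements):

```python
import json
import urllib.request

def build_chat_request(base_url, model, user_message):
    """Build an OpenAI-style chat-completions request for the vLLM server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
        "max_tokens": 256,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request(
    "http://localhost:8000",
    "lianghsun/Llama-3.2-Taiwan-3B-Instruct",
    "請介紹台灣",
)
# Uncomment once the server is running:
# resp = urllib.request.urlopen(req)
# print(json.loads(resp.read())["choices"][0]["message"]["content"])
```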

Training Details

Training Data

Traditional Chinese (Taiwan) dialogue datasets
Multilingual dialogue datasets

Training Procedure

Preprocessing [optional]

[More Information Needed]

Training Hyperparameters

SFT stage for v2024.11.27

Note: the hyperparameter settings below also apply to v2024.11.22 and v2024.11.25.

  • learning_rate: 5e-05
  • min_learning_rate: 5e-07
  • train_batch_size: 105
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 50
  • total_train_batch_size: 21,000
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.01
  • num_epochs: 5.0
  • global_step: 590
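The total train batch size follows from the per-device batch size, device count, and gradient accumulation steps; a quick sanity check of the figures above:

```python
# Effective (total) train batch size = per-device batch × #GPUs × grad accumulation
train_batch_size = 105
num_devices = 4
gradient_accumulation_steps = 50

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)  # 21000, matching the reported value
```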

Speeds, Sizes, Times

SFT stage for v2024.11.27

Note: the figures below also cover v2024.11.22 and v2024.11.25.

  • Duration: 5 days, 16:15:11.17
  • Train runtime: 490,511.1789 seconds
  • Train samples per second: 25.37
  • Train steps per second: 0.001
  • Total training FLOPs: 26,658,386,120,540,160
  • Train loss: 0.8533
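The reported train runtime is in seconds and is consistent with the wall-clock duration above (up to the truncated fraction):

```python
# 5 days, 16:15:11.17 expressed in seconds matches the reported
# train runtime of 490,511.1789 s.
duration_s = 5 * 86400 + 16 * 3600 + 15 * 60 + 11.17
print(duration_s)  # 490511.17
```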

Evaluation

Testing Data, Factors & Metrics

Formosa Taiwan Knowledge Bench

Testing Data

lianghsun/Formosa-bench

Factors

[More Information Needed]

Metrics

[More Information Needed]

Results

[More Information Needed]

Summary

lianghsun/tw-legal-benchmark-v1

Testing Data

Factors

[More Information Needed]

Metrics

Accuracy

Results

  • Model Revision: v2024.11.27
| Subset | Split | Score |
|---|---|---|
| lianghsun/tw-legal-benchmark-v1 | train | 31.1 |

Summary

tmmlu++

Testing Data

  • Dataset: ikala/tmmluplus
  • Revision: c0e8ae955997300d5dbf0e382bf0ba5115f85e8c

Factors

[More Information Needed]

Metrics

Accuracy

Results

  • Model Revision: v2024.11.27
| Subset | Split | Score |
|---|---|---|
| geography_of_taiwan | test | 36.2 |
| dentistry | test | 33.83 |
| technical | test | 35.07 |
| statistics_and_machine_learning | test | 28.57 |
| clinical_psychology | test | 29.6 |
| tve_design | test | 38.54 |
| three_principles_of_people | test | 48.2 |
| introduction_to_law | test | 29.96 |
| linear_algebra | test | 21.43 |
| agriculture | test | 24.5 |
| jce_humanities | test | 38.89 |
| music | test | 25.9 |
| secondary_physics | test | 33.04 |
| physics | test | 27.84 |
| advance_chemistry | test | 27.64 |
| junior_science_exam | test | 30.05 |
| veterinary_pathology | test | 25.09 |
| financial_analysis | test | 25.13 |
| national_protection | test | 42.65 |
| macroeconomics | test | 26.76 |
| politic_science | test | 27.44 |
| ttqav2 | test | 61.06 |
| junior_chinese_exam | test | 30.86 |
| traditional_chinese_medicine_clinical_medicine | test | 25.9 |
| junior_math_exam | test | 21.71 |
| auditing | test | 21.82 |
| anti_money_laundering | test | 37.31 |
| pharmacology | test | 30.68 |
| trust_practice | test | 28.18 |
| tve_mathematics | test | 18.67 |
| human_behavior | test | 32.04 |
| pharmacy | test | 22.76 |
| tve_chinese_language | test | 36.65 |
| optometry | test | 25.11 |
| physical_education | test | 30.73 |
| organic_chemistry | test | 35.78 |
| tve_natural_sciences | test | 33.73 |
| education | test | 37.9 |
| mechanical | test | 42.37 |
| taiwanese_hokkien | test | 14.73 |
| nautical_science | test | 30.49 |
| business_management | test | 39.57 |
| logic_reasoning | test | 27.34 |
| marketing_management | test | 39.78 |
| economics | test | 25.95 |
| basic_medical_science | test | 28.41 |
| occupational_therapy_for_psychological_disorders | test | 35.73 |
| general_principles_of_law | test | 31.13 |
| junior_chemistry | test | 24.88 |
| veterinary_pharmacology | test | 36.3 |
| educational_psychology | test | 33.52 |
| finance_banking | test | 32.59 |
| official_document_management | test | 32.43 |
| fire_science | test | 30.65 |
| junior_social_studies | test | 47.62 |
| accounting | test | 20.94 |
| engineering_math | test | 27.18 |
| education_(profession_level) | test | 24.07 |
| chinese_language_and_literature | test | 27.64 |
| management_accounting | test | 24.19 |
| culinary_skills | test | 39.38 |
| administrative_law | test | 25.71 |
| insurance_studies | test | 33.42 |
| real_estate | test | 22.83 |
| computer_science | test | 31.61 |
| taxation | test | 27.47 |
| trade | test | 20.32 |

Summary

As of model version v2024.11.27, neither the foundation model (lianghsun/Llama-3.2-Taiwan-3B) nor the instruction-tuned model (lianghsun/Llama-3.2-Taiwan-3B-Instruct) was trained on the tmmlu++ dataset, to keep the evaluation fair. In testing, the model generally performs poorly on tmmlu++ and falls short of a passing score; incorporating domain-specific datasets to strengthen the foundation model may be necessary.

Model Examination [optional]

[More Information Needed]

Environmental Impact

Technical Specifications

Model Architecture and Objective

[More Information Needed]

Compute Infrastructure

[More Information Needed]

Hardware

  • CPU count: 32
  • Logical CPU count: 64
  • GPU count: 4
  • GPU type: NVIDIA H100 NVL

Software

  • OS version: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
  • Python version: 3.12.7

Citation

@misc{lianghsun2024llama32taiwan3binstruct,
  author = {Huang, Liang Hsun},
  title = {Llama-3.2-Taiwan-3B-Instruct},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/lianghsun/Llama-3.2-Taiwan-3B-Instruct}},
  note = {Accessed: 2024-11-25}
}

Glossary [optional]

N/A

More Information

Acknowledge

Many thanks to Mr. 蔡長明 of 鴻鵠國際股份有限公司 for sponsoring the compute at no cost, and to the friends who helped along the way: 廖振翔, chweng, Ben, kevin, Maxxchu, Lam, 陳林彥, and others. Without them this model could not have been trained; those who provide compute are true benefactors.

Usage

If you fine-tune on top of this instruct model, please consider noting the base model in your model card:

base_model: lianghsun/Llama-3.2-Taiwan-3B-Instruct

A base-model tag and a ❤️ are the greatest encouragement you can give us. Thank you. 😀

Model Card Authors

Huang Liang Hsun

Model Card Contact

Huang Liang Hsun

Framework versions

  • Transformers 4.45.2
  • Pytorch 2.4.1+cu121
  • Datasets 2.21.0
  • Tokenizers 0.20.0
Model size: 3.21B parameters (Safetensors, BF16)