Taiwan LLM based on LLaMA-2-7b
Continued pretraining on 20 billion tokens of Traditional Chinese text, followed by instruction fine-tuning on millions of conversations.
This version does NOT include Common Crawl data.
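Assuming the model is hosted on the Hugging Face Hub, the snippet below is a minimal sketch of loading and querying it with `transformers`; the repository id is a placeholder assumption, and generation settings are illustrative, not recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id (assumption) — replace with the actual Taiwan LLM repo.
model_id = "yentinglin/Taiwan-LLM-7B-v2.0-chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so a 7B model fits on a single GPU
    device_map="auto",          # let accelerate place layers across available devices
)

# Traditional Chinese prompt, matching the model's pretraining corpus.
prompt = "你好，請介紹台灣的夜市文化。"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```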
🌟 Check out the new Taiwan-LLM Demo Chat-UI 🌟
Collaboration with Ubitus K.K. 💪💪💪
Taiwan LLM v2 is developed in collaboration with Ubitus K.K., which provides valuable technical support and compute resources for the project.