**TiLamb-7B** is a large-scale base language model for Tibetan, built on LLaMA2-7B and incrementally pre-trained with the LoRA method on a 26.43 GB Tibetan corpus. It expands the vocabulary from the original 32,000 tokens to 61,221 by adding Tibetan entries, and initializes the new embedding and lm_head entries with mean expansion. For more information, please visit the [TiLamb-7B GitHub page](https://github.com/NLP-Learning/TiLamb).
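The mean-expansion initialization mentioned above can be sketched as follows. This is not the authors' code, only a minimal illustration of the idea: new vocabulary rows of the embedding (or lm_head) weight matrix are initialized to the mean of the existing rows. The function name `mean_expand` and the small hidden size are illustrative assumptions.

```python
import torch


def mean_expand(weight: torch.Tensor, new_vocab_size: int) -> torch.Tensor:
    """Grow an embedding (or lm_head) weight matrix to `new_vocab_size` rows,
    initializing each new row with the mean of the existing rows."""
    old_vocab_size, hidden = weight.shape
    mean_row = weight.mean(dim=0, keepdim=True)                 # (1, hidden)
    extra = mean_row.repeat(new_vocab_size - old_vocab_size, 1)  # new rows
    return torch.cat([weight, extra], dim=0)


# Toy check with the sizes from this model card: 32,000 -> 61,221 entries.
w = torch.randn(32_000, 8)
w2 = mean_expand(w, 61_221)
assert w2.shape == (61_221, 8)
assert torch.allclose(w2[32_000], w.mean(dim=0))
```

Mean initialization keeps the output distribution over new tokens close to uniform at the start of incremental pre-training, rather than leaving the new rows at random values.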
**Important Notes**:
- TiLamb-7B is an unsupervised, incrementally pre-trained base model and **lacks conversational capabilities**.
- For adaptation to Tibetan dialogue and Tibetan NLP downstream tasks (verified tasks include Tibetan news classification, Tibetan entity relation classification, Tibetan machine reading comprehension, Tibetan word segmentation, Tibetan summarization, Tibetan question answering, and Tibetan question generation), it is recommended to use the [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/tree/main) framework for fine-tuning.

**Usage Notice**: