---
license: mit
language:
- ja
- en
- zh
tags:
- LLaMA2
- Japanese
- LLM
---
This model was trained on the [guanaco](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset) dataset, using only 49,000 chat samples. It improves performance in Chinese and Japanese.
The vanilla [LLaMA2-7B](https://huggingface.co/NousResearch/Llama-2-7b-hf) was fine-tuned with QLoRA.
You can use test.py to try the model.
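The QLoRA fine-tuning setup could be sketched roughly as below. The base model name comes from this card; the LoRA hyperparameters (`r`, `lora_alpha`, `lora_dropout`, target modules) are illustrative assumptions, not the authors' exact values.

```python
# Hypothetical sketch of a QLoRA setup for LLaMA2-7B.
# The hyperparameters below are assumptions for illustration only.
QLORA_CONFIG = {
    "r": 16,               # LoRA rank (assumed)
    "lora_alpha": 32,      # LoRA scaling factor (assumed)
    "lora_dropout": 0.05,  # (assumed)
    "target_modules": ["q_proj", "v_proj"],  # attention projections (assumed)
}

def build_qlora_model(base_model="NousResearch/Llama-2-7b-hf"):
    # Imports are kept local so the sketch can be read without
    # torch/transformers/peft/bitsandbytes installed.
    import torch
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    # QLoRA = 4-bit NF4 quantized base weights + trainable LoRA adapters.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    model = AutoModelForCausalLM.from_pretrained(
        base_model, quantization_config=bnb_config
    )
    lora_config = LoraConfig(task_type="CAUSAL_LM", **QLORA_CONFIG)
    return get_peft_model(model, lora_config)
```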
### Recommended generation parameters:
* temperature: 0.5~0.7
* top p: 0.65~1.0
* top k: 30~50
* repeat penalty: 1.03~1.17
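The ranges above can be passed to `model.generate` as sampling kwargs; the concrete values below are midpoints of the recommended ranges, chosen here for illustration.

```python
# Sampling kwargs picked from within the recommended ranges above.
GEN_KWARGS = {
    "do_sample": True,
    "temperature": 0.6,        # recommended 0.5-0.7
    "top_p": 0.8,              # recommended 0.65-1.0
    "top_k": 40,               # recommended 30-50
    "repetition_penalty": 1.1, # recommended 1.03-1.17
    "max_new_tokens": 256,
}

def chat(prompt, model_id):
    # model_id: this model's Hugging Face repo id.
    # Imports are local so the sketch can be read without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, **GEN_KWARGS)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```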
Contributed by the Yokohama National University Mori Lab.