---
license: apache-2.0
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/3ZEZkVjboJRi2Z2ymiQkO.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/K8c138jtaTA4qJeGRm0dO.png)

# Base checkpoint

[augmxnt/shisa-7b-v1](https://huggingface.co/augmxnt/shisa-7b-v1)

* Mistral-7B base
* Pre-trained on a further 8B tokens of Japanese text from MADLAD-400 (MADLAD-Ja)
* Fine-tuned on Japanese instruction data
* Highest-scoring 7B model on the JA MT-Bench conversation benchmark
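
Below is a minimal sketch of loading this base checkpoint with the Hugging Face `transformers` library. The dtype, device settings, and prompt are illustrative assumptions, not settings from the shisa-7b-v1 release.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base checkpoint referenced above (illustrative settings).
tokenizer = AutoTokenizer.from_pretrained("augmxnt/shisa-7b-v1")
model = AutoModelForCausalLM.from_pretrained(
    "augmxnt/shisa-7b-v1",
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference
    device_map="auto",
)

# "日本の首都は" = "The capital of Japan is" -- a throwaway Japanese prompt.
inputs = tokenizer("日本の首都は", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```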

# Training datasets (total ~7B tokens)

* Aozora Bunko
* Japanese Law Precedent Dataset
* Japanese Wikipedia
* Webscrapes of the .lg.jp, .go.jp, and .ac.jp domains from CulturaX (documents sharing the same first 25 characters were de-duplicated; see the sketch after this list)
* English Ultrachat200K-gen (included so the model does not forget the English and chat abilities learned in the base checkpoint)
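
The CulturaX domain filter and first-25-character de-duplication can be sketched as follows. This is a hypothetical reconstruction, not the released pipeline: the dataset ID and column names follow the public `uonlp/CulturaX` release, and the exact rules used for this model may differ.

```python
from urllib.parse import urlparse

from datasets import load_dataset

# Domains kept from CulturaX, per the list above.
KEEP_SUFFIXES = (".lg.jp", ".go.jp", ".ac.jp")

# Stream the Japanese split so the full corpus never has to fit in memory.
culturax_ja = load_dataset("uonlp/CulturaX", "ja", split="train", streaming=True)

seen_prefixes = set()
kept_docs = []
for doc in culturax_ja:
    host = urlparse(doc["url"]).netloc
    if not host.endswith(KEEP_SUFFIXES):
        continue  # drop documents outside the target domains
    prefix = doc["text"][:25]  # de-duplication key: first 25 characters
    if prefix in seen_prefixes:
        continue  # drop documents whose prefix was already seen
    seen_prefixes.add(prefix)
    kept_docs.append(doc)
```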