---
language:
- en
---

# Input files for generating the Importance Matrix

## Which file to use for generating the importance matrix

Not all importance matrices are equal. The best results are obtained when using a source file similar to the
training data. Size also matters: the bigger the model (e.g. 70b vs 13b) and the higher the quant (e.g. q6_k vs iq3_xs),
the bigger the source file needs to be to make an impact. Multiple input files can be combined if needed;
for example:

```
cat technical.txt multilingual.txt wiki.txt > custom.matrix
```
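
As a rough sanity check on size, you can estimate how many 512-token chunks a file will provide to the imatrix tool. The sketch below assumes roughly 1.3 tokens per word, which varies by tokenizer and language, so treat the result as a ballpark figure only.

```
# Rough estimate of how many 512-token chunks custom.matrix will yield
# (assumes ~1.3 tokens per word; the real ratio depends on the model's tokenizer)
words=$(wc -w < custom.matrix)
echo "approx. 512-token chunks: $(( words * 13 / 10 / 512 ))"
```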

Below you will find descriptions of the various input files provided, to help you choose the correct one.

## Community provided files

**8k_random_data**\

**20k_random_data**\

**groups_merged**\

**group_10_merged**\

**ptb.train**\
Penn Treebank (PTB) is a widely used preprocessed large dataset designed for language training. Casing,
punctuation and numbers have been removed from the training data. It has recently been largely superseded
by WikiText, which does not have these removals, features a larger vocabulary, and consists of full articles
(better suited for models that can take advantage of long-term dependencies). However, for importance matrix
generation, PTB is still a valid dataset, with the advantage of being manually curated and of being similar
to WikiText without being WikiText; this can help against bias.

**WikiText**\
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of
verified Good and Featured articles on Wikipedia. Compared to PTB, WikiText-2 is over 2 times larger and
WikiText-103 is over 110 times larger. As it is composed of full articles, the dataset is well suited for
models that can take advantage of long-term dependencies.\
https://huggingface.co/datasets/wikitext

**WikiText_FR**\
70 million tokens extracted from the set of French Wikipedia articles that are classified as "quality articles"
or "good articles".\
https://huggingface.co/datasets/asi/wikitext_fr

## exllamav2 calibration data

https://github.com/turboderp/exllamav2/tree/master/conversion/standard_cal_data

**c4**\

**code**\
Programming.

**multilingual**\
English, Arabic, Chinese, French, German, Japanese, Polish, Russian, Spanish, Swedish, Turkish, Hebrew,
Macedonian, Norwegian, Lithuanian, Greek, Italian, Afrikaans, Dutch, Danish.

**technical**\
Technical writing.

**tiny**\
Very short stories.

**wiki**\
Wikipedia dump.

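To get local copies of these calibration files, one option is to clone the repository and copy them from the directory linked above (a minimal sketch, assuming git is available; a shallow clone keeps the download small):

```
# Fetch only the latest revision of the exllamav2 repository
git clone --depth 1 https://github.com/turboderp/exllamav2
ls exllamav2/conversion/standard_cal_data
```
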
## How to quantize with an imatrix in llama.cpp

1. Get one of the input files collected here, or elsewhere.
2. Convert or download the model you want to quantize, in fp16 GGUF format; see the conversion sketch below.
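
If you are starting from a Hugging Face checkpoint, the conversion can be done with the conversion script bundled with llama.cpp. This is a minimal sketch: the script name and its flags have changed across llama.cpp versions (older checkouts ship convert.py instead), and `<model_dir>` is a placeholder for the downloaded model directory.

```
# Convert a Hugging Face model directory to an fp16 GGUF file
python convert-hf-to-gguf.py <model_dir> --outtype f16 --outfile <model_path>/ggml-model-f16.gguf
```
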
3. Generate an imatrix file specific to the model you want to quantize:
```
cd <llama.cpp directory>
./imatrix -m <model_path>/ggml-model-f16.gguf -f <matrix_training_path>/<plain_text_matrix_file> -o <output_binary_file.matrix> -t 12 -ngl 144 --chunks 100 -b 512 -c 512

# -ngl : number of layers offloaded to the GPU (recommended: the number of layers the model contains)
# -t 12 : number of threads (should roughly match the number of CPU cores)
# -c 512 : context size; testing seems to show 512 is recommended (default=512, 0=loaded from model)
# -b 512 : batch size (default=512)
# --chunks 100 : maximum number of chunks to process (100 is recommended)
# --mlock : keep the model in RAM (only use if you have sufficient RAM for the whole fp16 model)
```

4. Use the generated binary matrix file to quantize the model:
```
./quantize --imatrix <matrix_file> <model_path>/ggml-model-f16.gguf <output_model_path>/ggml-model-IQ4_XS.gguf IQ4_XS
```

Note: normal quantization also benefits from using an imatrix file. It also seems that larger input data works
better for higher quants.
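
For example, a regular k-quant can be produced with the same imatrix file. The sketch below uses Q5_K_M purely as an illustration; any of the usual quant types can be substituted.

```
./quantize --imatrix <matrix_file> <model_path>/ggml-model-f16.gguf <output_model_path>/ggml-model-Q5_K_M.gguf Q5_K_M
```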