---
language:
- en
---

# Input files for generating the Importance Matrix


## Which file to use for generating the importance matrix

Not all importance matrices are equal. The best results are obtained when using a source file similar to the
training data. Size also matters: the bigger the model (e.g. 70b vs 13b) and the higher the quant (e.g. q6_k vs iq3_xs),
the bigger the source file needs to be to make an impact. Multiple input files can be combined if needed; 
for example:
```
cat technical.txt multilingual.txt wiki.txt >custom.matrix
```
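
Since size matters, it can help to check roughly how much text the combined file contains before generating the
imatrix. A minimal sketch using standard shell tools, assuming the `custom.matrix` file created above (word count
is only a rough proxy for token count):
```
# rough size check of the combined input file
wc -c custom.matrix   # size in bytes
wc -w custom.matrix   # number of words, a rough proxy for token count
```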

Below you will find descriptions of the various input files provided, to help you choose the correct one.

## Community provided files

**8k_random_data**\

**20k_random_data**\

**groups_merged**\

**group_10_merged**\

**ptb.train**\
Penn Treebank (PTB) is a widely used, preprocessed large dataset designed for language model training. Casing,
punctuation and numbers have been removed from the training data. It has recently been largely superseded by
WikiText, which does not apply these removals, features a larger vocabulary, and consists of full articles (better
suited for models that can take advantage of long-term dependencies). However, for importance matrix generation,
PTB is still a valid dataset; it has the advantage of being manually curated and of being similar to WikiText
without being WikiText, which can help against bias.

**WikiText**\
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of
verified Good and Featured articles on Wikipedia. Compared to PTB, WikiText-2 is over 2 times larger and
WikiText-103 is over 110 times larger. As it is composed of full articles, the dataset is well suited for models
that can take advantage of long term dependencies.\
https://huggingface.co/datasets/wikitext  

**WikiText_FR**\
70 million tokens extracted from the set of French Wikipedia articles that are classified as "quality articles"
or "good articles".\
https://huggingface.co/datasets/asi/wikitext_fr

## exllamav2 calibration data

https://github.com/turboderp/exllamav2/tree/master/conversion/standard_cal_data

**c4**\

**code**\
Programming code.

**multilingual**\
English, Arabic, Chinese, French, German, Japanese, Polish, Russian, Spanish, Swedish, Turkish, Hebrew,
Macedonian, Norwegian, Lithuanian, Greek, Italian, Afrikaans, Dutch, Danish. 

**technical**\
Technical writing.

**tiny**\
Very short stories.

**wiki**\
Wikipedia dump.

## How to quantize with an imatrix in llama.cpp

1. Get one of the input files collected here, or elsewhere.
2. Convert or download the model you want to quantize, in fp16 GGUF format (see the conversion sketch at the end of this section).
3. Generate an imatrix file specific to the model you want to quantize:
```
cd <llama.cpp directory>
./imatrix -m <model_path>/ggml-model-f16.gguf -f <matrix_training_path>/<plain_text_matrix_file> -o <output_binary_file.matrix> -t 12 -ngl 144 --chunks 100 -b 512 -c 512

# -ngl    : number of layers offloaded to the GPU (recommended: all the layers the model contains)
# -t 12   : number of threads (should match the number of CPU cores)
# -c 512  : context size; testing seems to show 512 works well (default=512, 0=loaded from model)
# -b 512  : batch size (default=512)
# --chunks 100 (recommended)
# --mlock : keep the model in RAM (only use if you have sufficient RAM for the whole fp16 model)
```
4. Use the generated binary matrix file to quantize the model:
```
./quantize --imatrix <matrix_file> <model_path>/ggml-model-f16.gguf <output_model_path>/ggml-model-IQ4_XS.gguf IQ4_XS
```
Note: normal quantization also benefits from using an imatrix file. It also seems that larger input data gives
better results at higher quantization levels.
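
For step 2 above, a minimal conversion sketch is shown below. It assumes you start from a local Hugging Face model
directory and use the `convert.py` script shipped with llama.cpp; the exact script name and flags may differ
between llama.cpp versions, so check `--help` for your checkout.
```
cd <llama.cpp directory>

# convert a local Hugging Face model directory to an fp16 GGUF file
python convert.py <model_path> --outtype f16 --outfile <model_path>/ggml-model-f16.gguf
```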