froggeric committed on
Commit d507b24
Parent: fb0d715

Update README.md

Files changed (1)
  1. README.md +18 -18
README.md CHANGED
@@ -48,36 +48,36 @@ https://huggingface.co/datasets/wikitext
  or "good articles".\
  https://huggingface.co/datasets/asi/wikitext_fr

- ## exllamav2 calibration data
+ **c4** (exllamav2)\
+ Constructed from news articles?

- https://github.com/turboderp/exllamav2/tree/master/conversion/standard_cal_data
-
- **c4**\
-
- **code**\
+ **code** (exllamav2)\
  Programming

- **multilingual**\
+ **multilingual** (exllamav2)\
  English, Arabic, Chinese, French, German, Japanese, Polish, Russian, Spanish, Swedish, Turkish, Hebrew,
  Macedonian, Norwegian, Lithuanian, Greek, Italian, Afrikaans, Dutch, Danish.

- **technical**\
+ **technical** (exllamav2)\
  Technical writing.

- **tiny**\
- Very short stories.
+ **tiny** (exllamav2)\
+ Very short stories. Be mindful of the prevalence of _"Once upon a time"_ and _"<|end_of_text|>"_.

- **wiki**\
- Wikipedia dump.
+ **wiki** (exllamav2)\
+ Small Wikipedia dump. Unclean, contains many unwanted tags.
+
+ exllamav2 calibration data taken from:\
+ https://github.com/turboderp/exllamav2/tree/master/conversion/standard_cal_data

- ## How to quantize with an imatrix in llama.cpp
+ ## How to quantize using an imatrix, with llama.cpp

- 1. Get one of the input files collected here, or eleswhere.
+ 1. Get one of the input files collected here, or elsewhere.
  2. Convert or download the model you want to quantise, in fp16 GGUF format.
  3. Generate an imatrix file specific to the model you want to quantise
  ```
  cd <llama.cpp directory>
- ./imatrix -m <model_path>/ggml-model-f16.gguf -f <matrix_training_path>/<plain_text_matrix_file> -o <output_binary_file.matrix> -t 12 -ngl 144 --chunks 100 -b 512 -c 512
+ ./imatrix -m <model_path>/ggml-model-f16.gguf -f <plain_text_matrix_file> -o <output.matrix> -t 12 -ngl 144 --chunks 100 -b 512 -c 512

  # -ngl : layers offloaded to gpu (recommended to use number of layers the model contains)
  # -t 12 : number of threads (should probably match no of cpu)
@@ -86,9 +86,9 @@ cd <llama.cpp directory>
  # --chunks 100 (recommended)
  # --mlock : keep model in ram (only use if you had sufficient RAM for the whole fp16)
  ```
- 4. Use the generated binary matrix file to quantise the model
+ 4. Use the generated matrix file to quantise the model
  ```
- ./quantize <model_path>/ggml-model-f16.gguf -matrix <matrix_file> <output_model_path>/ggml-model-IQ4_XS.gguf IQ4_XS
+ ./quantize --matrix <output.matrix> <model_path>/ggml-model-f16.gguf <quantisation_level, eg:IQ4_XS>
  ```
- Note: normal quantisation also benefits from using a matrix file. It also seem that a larger input data is
+ Note: normal quantisation also benefits from using a matrix file. It also seems that a larger input text is
  better for higher quantisation.
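
For reference, a filled-in run of the two commands from the updated instructions might look like the sketch below. The model name, file paths, layer count (`-ngl`) and calibration file are placeholder assumptions rather than values from the commit, and recent llama.cpp builds name the quantize flag `--imatrix` rather than `--matrix`, so check `./quantize --help` for the exact syntax of your build.

```
cd <llama.cpp directory>

# step 3: generate the importance matrix from a plain-text calibration file
# (model and calibration file names are placeholders)
./imatrix -m models/mistral-7b/ggml-model-f16.gguf \
    -f calibration/technical.txt \
    -o mistral-7b.imatrix \
    -t 12 -ngl 33 --chunks 100 -b 512 -c 512

# step 4: quantise using the generated matrix file
# (recent llama.cpp builds call this flag --imatrix)
./quantize --imatrix mistral-7b.imatrix \
    models/mistral-7b/ggml-model-f16.gguf \
    models/mistral-7b/ggml-model-IQ4_XS.gguf IQ4_XS
```

The IQ4_XS target mirrors the example in the removed line above; any quantisation type supported by your build can be substituted.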
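
Relatedly, the caveats on the _tiny_ and _wiki_ calibration sets in the diff above can be checked quickly before using a file. This is only a rough sketch with standard shell tools, and the file names are placeholders for whichever local copies are being inspected.

```
# how prevalent are the storybook opener and the end-of-text marker? (counts matching lines)
grep -c "Once upon a time" tiny.txt
grep -c "<|end_of_text|>" tiny.txt

# list the most frequent leftover markup tags in the wiki dump
grep -oE "<[^>]+>" wiki.txt | sort | uniq -c | sort -rn | head
```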