Joseph717171 committed
Commit
74610e8
1 Parent(s): dfc0955

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -14,9 +14,9 @@ author: Joseph717171 & froggeric (https://huggingface.co/datasets/froggeric/imat
 ```
 llama.cpp % ./quantize --imatrix path_to_imatrix path_to_model/ggml-model-f16.gguf model_name-QuantType.gguf QuantType
 ```
-# Note: If you need detailed steps to convert your Large Language Model to GGUF, please scroll to the bottom of this page and check out the section: How to convert (Supported) LLMs (Large Language Model) to GGUF format
+# Note: If you need detailed steps to convert your Large Language Model to GGUF, please scroll to the bottom of this page and check out the section: How to convert Supported LLMs (Large Language Models) to GGUF format
 
-# Supplementary Learning: Training Datasets, their similarities and differences, and how to determine which one will be right for computing your imatrix.
+# Supplementary Learning: Training Datasets, Their Similarities and Differences, and How to Determine Which one will Be Right for Computing your Imatrix
 
 # Input files for generating the Importance Matrix
 
@@ -115,7 +115,7 @@ Small Wikipedia dump. Unclean, contains many unwanted tags.
 exllamav2 calibration data taken from:\
 https://github.com/turboderp/exllamav2/tree/master/conversion/standard_cal_data
 
-# How to convert (Supported) LLMs (Large Language Model) to GGUF format:
+# How to Convert Supported LLMs (Large Language Models) to GGUF Format:
 ```
 llama.cpp % python convert.py path_to_model --outtype f16
 ```
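The two commands touched by this diff form a two-step pipeline: `convert.py` first produces an f16 GGUF file, which `./quantize` then compresses using the importance matrix. A minimal sketch, run from the llama.cpp directory; `path_to_model`, `path_to_imatrix`, `model_name`, and the `IQ4_XS` quant type are placeholders, and the tool names assume the llama.cpp version this README targets:

```shell
# Placeholder paths -- substitute your own. Assumes llama.cpp is built and
# an imatrix file has already been computed for this model.
MODEL_DIR=path_to_model
IMATRIX=path_to_imatrix
QUANT=IQ4_XS   # any quant type supported by ./quantize

# Step 1: convert the source model to an f16 GGUF.
CONVERT_CMD="python convert.py $MODEL_DIR --outtype f16"

# Step 2: quantize the f16 GGUF, guided by the importance matrix.
QUANTIZE_CMD="./quantize --imatrix $IMATRIX $MODEL_DIR/ggml-model-f16.gguf model_name-$QUANT.gguf $QUANT"

# The commands are only echoed here; remove the echoes to execute them.
echo "$CONVERT_CMD"
echo "$QUANTIZE_CMD"
```

Note that step 1 writes `ggml-model-f16.gguf` into the model directory, which is why step 2 reads it from `$MODEL_DIR`.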