qwp4w3hyb committed
Commit 5166400
1 Parent(s): b55abaa

Update README.md

Files changed (1)
  1. README.md +2 -4
README.md CHANGED
@@ -14,10 +14,8 @@ base_model: google/gemma-2-9b-it
 # Quant Infos
 
 - quants done with an importance matrix for improved quantization loss
-- Currently requantizing ggufs & imatrix from bf16
-- initial version was based on f32 gguf provided by google, which has various issues
-- WIP new version should have better metadata & fixed tokenizer as its quantized from hf safetensors with llama.cpp
-- still uploading all the quants, _L & _XL are already the new version, other quants will update during the course of the day
+- Requantized ggufs & imatrix from hf bf16
+- initial version was based on f32 gguf provided by google, which had various issues
 - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S
 - experimental custom quant types
   - `_L` with `--output-tensor-type f16 --token-embedding-type f16` (same as bartowski's)
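
For reference, a minimal sketch of how an `_L`-style quant described in the diff above is typically produced with llama.cpp tooling. The binary names (`llama-imatrix`, `llama-quantize`), file names, calibration text, and the Q4_K_M base type are illustrative assumptions, not taken from this commit; only the `--output-tensor-type f16 --token-embedding-type f16` flags come from the README itself.

```bash
# Sketch under assumed paths: build an importance matrix from the bf16 GGUF,
# then quantize with f16 output and token-embedding tensors (the `_L` recipe).
./llama-imatrix -m gemma-2-9b-it-bf16.gguf -f calibration.txt -o gemma-2-9b-it.imatrix
./llama-quantize --imatrix gemma-2-9b-it.imatrix \
  --output-tensor-type f16 --token-embedding-type f16 \
  gemma-2-9b-it-bf16.gguf gemma-2-9b-it-Q4_K_L.gguf Q4_K_M
```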