Tags: Transformers · GGUF · English · Inference Endpoints
mradermacher committed on
Commit 498450f
1 Parent(s): c72230e

auto-patch README.md

Files changed (1): README.md +0 -1
README.md CHANGED
@@ -85,7 +85,6 @@ more details, including on how to concatenate multi-part files.
  | [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.Q6_K.gguf) | Q6_K | 28.9 | very good quality |
  | [GGUF](https://huggingface.co/mradermacher/bagel-34b-v0.5-GGUF/resolve/main/bagel-34b-v0.5.Q8_0.gguf) | Q8_0 | 37.1 | fast, best quality |
 
-
  Here is a handy graph by ikawrakow comparing some lower-quality quant
  types (lower is better):
 
 