TheBloke committed
Commit 3f031bc
1 Parent(s): d86dc4e

Update README.md

Files changed (1)
  1. README.md +1 -7
README.md CHANGED
@@ -48,7 +48,7 @@ This repo contains **EXPERIMENTAL** GGUF format model files for [Mistral AI_'s M
 
 These are experimental GGUF files, created using a llama.cpp PR found here: https://github.com/ggerganov/llama.cpp/pull/4406.
 
-THEY WILL NOT WORK WITH LLAMA.CPP FROM `main`, OR ANY DOWNSTREAM LLAMA.CPP CLIENT - such as llama-cpp-python, text-generation-webui, etc.
+THEY WILL NOT WORK WITH LLAMA.CPP FROM `main`, OR ANY DOWNSTREAM LLAMA.CPP CLIENT - such as LM Studio, llama-cpp-python, text-generation-webui, etc.
 
 To test these GGUFs, please build llama.cpp from the above PR.
 
@@ -115,12 +115,6 @@ Refer to the Provided Files table below to see what files use which methods, and
 
 **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
 
-The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
-
-* LM Studio
-* LoLLMS Web UI
-* Faraday.dev
-
 ### On the command line, including multiple files at once
 
 I recommend using the `huggingface-hub` Python library:
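The README text above notes that most users only want to pick and download a single quantisation file rather than cloning the whole repo. As a minimal sketch of that idea (the file names below are hypothetical, and the real README presumably follows with actual `huggingface-hub` download commands), selecting one quant from a repo's file listing is just a glob match, which the standard library can do:

```python
from fnmatch import fnmatch

# Hypothetical file listing for a GGUF repo; a real listing would come from
# the Hugging Face Hub API.
files = [
    "mixtral-8x7b.Q2_K.gguf",
    "mixtral-8x7b.Q4_K_M.gguf",
    "mixtral-8x7b.Q5_K_M.gguf",
    "README.md",
]

def pick(files, pattern):
    """Return only the files whose names match a glob pattern, e.g. '*.Q4_K_M.gguf'."""
    return [f for f in files if fnmatch(f, pattern)]

# Select a single quantisation file by pattern.
print(pick(files, "*.Q4_K_M.gguf"))
```

The same pattern-matching idea is what lets a command-line downloader fetch one file, or several at once, without pulling every quantisation format in the repo.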