TheBloke committed
Commit 8a19e07
1 Parent(s): f9a7540

Update README.md

Files changed (1): README.md (+10 -3)
README.md CHANGED
````diff
@@ -25,9 +25,12 @@ license: apache-2.0
 
 These files are **experimental** GGML format model files for [Falcon 7B Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
 
-These GGML files will **not** work in llama.cpp, and at the time of writing they will not work with any UI or library. They cannot be used from Python code.
+These GGML files will **not** work in llama.cpp, text-generation-webui or KoboldCpp.
 
-They can be used with a new fork of llama.cpp that adds Falcon GGML support: [cmp-nct/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp)
+They can be used from:
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui).
+* The ctransformers Python library, which includes LangChain support: [ctransformers](https://github.com/marella/ctransformers).
+* A new fork of llama.cpp that introduced this new Falcon GGML support: [cmp-nct/ggllm.cpp](https://github.com/cmp-nct/ggllm.cpp).
 
 Note: It is not currently possible to use the new k-quant formats with Falcon 7B. This is being worked on.
 
@@ -40,7 +43,11 @@ Note: It is not currently possible to use the new k-quant formats with Falcon 7B
 <!-- compatibility_ggml start -->
 ## Compatibility
 
-To build cmp-nct's fork of llama.cpp with Falcon 7B support plus preliminary CUDA acceleration, please try the following steps:
+The recommended UI for these GGMLs is [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui). Preliminary CUDA GPU acceleration is provided.
+
+For use from Python code, use [ctransformers](https://github.com/marella/ctransformers), again with preliminary CUDA GPU acceleration.
+
+Or to build cmp-nct's fork of llama.cpp with Falcon 7B support plus preliminary CUDA acceleration, please try the following steps:
 
 ```
 git clone https://github.com/cmp-nct/ggllm.cpp
````
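
To make the ctransformers route in the new text concrete, here is a minimal sketch of loading these GGML files from Python. The repo id `TheBloke/falcon-7b-instruct-GGML` and the q4_0 filename are assumptions; check the model repo's actual file list.

```python
# Minimal sketch of the ctransformers route (not from the README itself).
# Assumptions: the repo id and the model_file name below; verify both
# against the files actually published in the model repo.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/falcon-7b-instruct-GGML",
    model_file="falcon7b-instruct.ggmlv3.q4_0.bin",  # assumed filename
    model_type="falcon",  # selects ctransformers' Falcon GGML backend
    gpu_layers=50,        # preliminary CUDA offload; set 0 for CPU-only
)

print(llm("Write a short poem about quantisation:", max_new_tokens=64))
```

Since k-quants are not yet available for Falcon 7B (see the note in the diff), stick to the non-k quant files such as q4_0, q4_1, q5_0, q5_1 or q8_0.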
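
The LangChain support mentioned in the bullet list works through the same library. A hedged sketch, assuming the wrapper is importable as `langchain.llms.CTransformers` (as in LangChain releases of this period) and the same repo id as above:

```python
# Sketch of the LangChain path via ctransformers' integration.
# Assumptions: repo id as above; the import location may differ in
# newer LangChain versions (langchain_community.llms).
from langchain.llms import CTransformers

llm = CTransformers(
    model="TheBloke/falcon-7b-instruct-GGML",
    model_type="falcon",
    config={"max_new_tokens": 64, "temperature": 0.7},
)

print(llm("Explain in one sentence what a GGML file is."))
```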