Commit a597c42 by maddes8cht · Parent: 00a65ff

"Update README.md"

Files changed (1): README.md (+1 -3)
README.md CHANGED

@@ -25,7 +25,7 @@ Here's what you need to know:
 
  **Original Falcon Models:** I am diligently working to provide updated quantized versions of the four original Falcon models to ensure their compatibility with the new llama.cpp versions. Please keep an eye on my Hugging Face Model pages for updates on the availability of these models. Promptly downloading them is essential to maintain compatibility with the latest llama.cpp releases.
 
- **Derived Falcon Models:** It's important to note that the derived Falcon models cannot be re-converted without adjustments from the original model creators. Therefore, the compatibility of these derived models with the new llama.cpp versions depends on the actions of the original model creators. So far, these models cannot be used in recent llama.cpp versions at all.
+ **Derived Falcon Models:** Right now, the derived Falcon models cannot be re-converted without adjustments from the original model creators. So far, these models cannot be used in recent llama.cpp versions at all. **Good news!** It's in the pipeline that the capability for quantizing even the older derived Falcon models will be incorporated soon. However, the exact timeline is beyond my control.
 
  **Stay Informed:** Application software using llama.cpp libraries will follow soon. Keep an eye on the release schedules of your favorite software applications that rely on llama.cpp. They will likely provide instructions on how to integrate the new models.
 
@@ -40,8 +40,6 @@ As a solo operator of this page, I'm doing my best to expedite the process, but
 
 
 
-
-
 # About GGUF format
 
 `gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
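A `gguf` file can be recognized from its fixed-size header. The following is a minimal sketch, assuming the header layout documented in the `ggml` repository (4-byte magic `GGUF`, then a little-endian `uint32` version, `uint64` tensor count, and `uint64` metadata key-value count); the dummy header bytes built at the end are illustrative only, not a real model file:

```python
import struct

GGUF_MAGIC = b"GGUF"

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header from the start of a file's bytes."""
    if data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    # After the magic: version (uint32), tensor_count (uint64),
    # metadata_kv_count (uint64), all little-endian.
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return {
        "version": version,
        "tensor_count": n_tensors,
        "metadata_kv_count": n_kv,
    }

# Build a dummy header for illustration: version 3, 2 tensors, 5 metadata entries.
dummy = GGUF_MAGIC + struct.pack("<IQQ", 3, 2, 5)
print(read_gguf_header(dummy))
```

A check like this is enough to tell a GGUF file apart from the older GGML-family formats before handing it to a loader.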