dranger003 committed
Commit 21a2b7f
Parent: bd21e3b

Update README.md

Files changed (1):
  1. README.md +4 -2
README.md CHANGED
@@ -4,9 +4,11 @@ pipeline_tag: text-generation
 library_name: gguf
 base_model: CohereForAI/c4ai-command-r-plus
 ---
-**2024-04-09**: Support for this model has been merged into the main branch - [`PR #6491`](https://github.com/ggerganov/llama.cpp/pull/6491).
+**2024-04-09**: Support for this model has been merged into the main branch.
+[Pull request `PR #6491`](https://github.com/ggerganov/llama.cpp/pull/6491)
+[Commit `5dc9dd71`](https://github.com/ggerganov/llama.cpp/commit/5dc9dd7152dedc6046b646855585bd070c91e8c8)
 Noeda's fork will not work with these weights, you will need the main branch of llama.cpp.
-I am currently running perplexity on all the quants posted here, and will update this model page with the results.
+Also, I am currently running perplexity on all the quants posted here, and will update this model page with the results.
 
 * GGUF importance matrix (imatrix) quants for https://huggingface.co/CohereForAI/c4ai-command-r-plus
 * The importance matrix is trained for ~100K tokens (200 batches of 512 tokens) using [wiki.train.raw](https://huggingface.co/datasets/wikitext).
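For reference, a minimal sketch of loading one of these GGUF quants from Python via llama-cpp-python, assuming it is built against a llama.cpp recent enough to include the merge referenced above; the model file name below is a placeholder for whichever quant you download:

```python
# Minimal sketch: load a Command R+ GGUF quant with llama-cpp-python.
# Assumes llama-cpp-python is built against a llama.cpp that includes PR #6491;
# the model file name is a placeholder, not an actual file from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="c4ai-command-r-plus.Q4_K_M.gguf",  # placeholder file name
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm("Write one sentence about llamas.", max_tokens=64)
print(out["choices"][0]["text"])
```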