dranger003 committed
Commit c56e918
1 Parent(s): ac82c46

Update README.md

README.md CHANGED
@@ -12,6 +12,12 @@ The quants here are meant to test imatrix quantized weights.
 
 **Added `ggml-dbrx-instruct-16x12b-f16_imatrix-wiki.dat` which is 2K batches (1M tokens) on FP16 weights using wiki.train.**
 
+| Precision | Quant/Dataset | Size (GiB) | PPL |
+| -- | -- | -- | -- |
+| IQ4_XS | Q8_0/wiki.train | 65.29 | 5.2260 +/- 0.03558 |
+| IQ4_XS | FP16/wiki.train | 65.29 | 5.2241 +/- 0.03559 |
+| IQ4_XS | None | 66.05 | 5.2546 +/- 0.03570 |
+
 **2024-04-13**: Support for this model has just been merged - [`PR #6515`](https://github.com/ggerganov/llama.cpp/pull/6515).
 **<u>You will need this llama.cpp commit [`4bd0f93e`](https://github.com/ggerganov/llama.cpp/commit/4bd0f93e4ab4fe6682e7d0241c1bdec1397e954a) to run this model</u>**
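
The PPL figures in the table above are perplexity scores with an error margin. As a rough sketch of how such a `value +/- error` pair can be derived from per-token log-probabilities (a simplified illustration, not llama.cpp's exact implementation; the `perplexity` function and the toy input below are hypothetical):

```python
import math

def perplexity(logprobs):
    """Perplexity and an approximate standard error from per-token
    natural-log probabilities. Simplified sketch: llama.cpp's exact
    error estimate may differ in detail."""
    n = len(logprobs)
    nll = [-lp for lp in logprobs]               # per-token negative log-likelihood
    mean = sum(nll) / n                          # mean NLL over the corpus
    var = sum((x - mean) ** 2 for x in nll) / n  # variance of per-token NLL
    stderr = math.sqrt(var / n)                  # standard error of the mean
    ppl = math.exp(mean)                         # perplexity = exp(mean NLL)
    # first-order error propagation: d(exp(m)) ~= exp(m) * dm
    return ppl, ppl * stderr

# toy input: uniform probability 1/5 per token yields PPL of exactly 5,
# with zero spread since every token's NLL is identical
ppl, err = perplexity([math.log(0.2)] * 100)
```

Lower PPL is better, so the table suggests the imatrix-quantized weights (rows 1 and 2) slightly outperform the same IQ4_XS quant without an imatrix (row 3) on this corpus.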
23