
DarkForest 20B v2.0 - GGUF IMat quants

IMatrix file prepared with context 4096 and 5521 chunks of wiki.test.raw. Original model info: DarkForest-20B-v2.0

Perplexity (wiki.test.raw, 5521 chunks, context 4096):

| Quant   | Final estimate: PPL  |
|---------|----------------------|
| q8_0    | 8.5016 +/- 0.02134   |
| q6_K    | 8.5046 +/- 0.02136   |
| q5_0    | 8.4903 +/- 0.02132   |
| q4_K_S  | 8.5880 +/- 0.02162   |
| q4_K_M  | 8.5906 +/- 0.02163   |
| q4_0    | 8.5610 +/- 0.02151   |
| q3_K_M  | 8.7283 +/- 0.02196   |
| q2_K    | 9.2445 +/- 0.02351   |
| IQ2_XS  | 9.8329 +/- 0.02452   |
| IQ2_XXS | 10.5170 +/- 0.02651  |
| IQ1_S   | 13.9487 +/- 0.03502  |

IQ1_S is UNUSABLE: the model is too small to remain coherent at 1 bit.
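A quick way to read these numbers is the relative PPL increase over the q8_0 baseline. The sketch below (values copied from the list above; the dictionary and percentage formula are illustrative, not part of llama.cpp's output) shows how the lower-bit quants degrade:

```python
# PPL values from the measurements above (wiki.test.raw, context 4096).
ppl = {
    "q8_0": 8.5016,
    "q6_K": 8.5046,
    "q5_0": 8.4903,
    "q4_K_S": 8.5880,
    "q4_K_M": 8.5906,
    "q4_0": 8.5610,
    "q3_K_M": 8.7283,
    "q2_K": 9.2445,
    "IQ2_XS": 9.8329,
    "IQ2_XXS": 10.5170,
    "IQ1_S": 13.9487,
}

baseline = ppl["q8_0"]
for quant, value in ppl.items():
    # Percentage degradation relative to the 8-bit baseline.
    delta = 100.0 * (value - baseline) / baseline
    print(f"{quant:8s} PPL {value:8.4f}  ({delta:+.2f}% vs q8_0)")
```

By this measure the 4-bit and larger quants stay within about 1% of q8_0, q2_K loses roughly 9%, and IQ1_S loses over 60%, which is why it is marked unusable.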

All comments are greatly appreciated. Download, test, and if you appreciate my work, consider buying me my fuel: Buy Me A Coffee
