Quantization

Perplexity

  • causallm_14b.IQ4_XS.gguf: PPL = 13.4127 +/- 0.13762
  • causallm_14b.IQ3_XS.gguf: PPL = 13.3798 +/- 0.13641
  • causallm_14b.IQ2_XXS.gguf: PPL = 15.0160 +/- 0.15004

Evaluation notebook: https://www.kaggle.com/code/reginliu/perplexity

Model details

Model size: 14.2B params
Architecture: llama
Format: GGUF
