Thank you! Question about perplexity

#2
by algorithm - opened

Thank you for this, this is great.

I was also wondering whether the increase in perplexity existed prior to the conversion to GGUF.

I'm asking because there are some reports of perplexity issues when converting to GGUF (e.g. https://github.com/ggerganov/llama.cpp/issues/7062).

Owner
•
edited May 6

Hmm, good point. If I measure perplexity while the models are still in the Hugging Face library, I get a smaller difference (ignore the absolute values, since the measurement setups differ):

| model          | perplexity |
|----------------|------------|
| base           | 295.462970 |
| orthogonalized | 309.856348 |
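For reference, a comparison like the one above boils down to exponentiating the mean next-token cross-entropy. Here is a minimal sketch in plain PyTorch (the helper name and shapes are my own, not from either repo; in practice the logits would come from a forward pass over an evaluation corpus):

```python
import math

import torch
import torch.nn.functional as F


def perplexity(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Next-token perplexity for one sequence.

    logits: (seq_len, vocab_size) raw model outputs
    labels: (seq_len,) token ids of the same sequence

    Standard causal-LM shift: logits[i] is scored against labels[i+1].
    """
    shift_logits = logits[:-1]          # predictions for positions 1..seq_len-1
    shift_labels = labels[1:]           # the tokens those positions should predict
    nll = F.cross_entropy(shift_logits, shift_labels)  # mean negative log-likelihood
    return math.exp(nll.item())


# Toy sanity check: logits that put all mass on the correct next token
# should give perplexity ~1; uniform logits give perplexity = vocab size.
labels = torch.tensor([1, 2, 3, 4])
confident = torch.full((4, 5), -100.0)
for i in range(3):
    confident[i, labels[i + 1]] = 100.0
print(perplexity(confident, labels))                 # ~1.0
print(perplexity(torch.zeros(4, 5), labels))         # ~5.0 (uniform over vocab=5)
```

The absolute numbers in the table depend heavily on the corpus, tokenization, and context length used, which is why only the base-vs-orthogonalized gap is comparable here, not the values themselves against llama.cpp's `perplexity` tool.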

So perhaps you are right. Curious.
