Meta-Llama-3.1-8B-Instruct-bf16.gguf is still the old version?

#4 · opened by Firepin

Hello and thank you bartowski for the quants.
All the other files are from 1 day ago, but the bf16 is from 6 days ago.
Is it outdated, or did you forget to update/upload it?
Thanks in advance!

Yes, it's old; I didn't make that one this time around. Is it something you want? I'm gonna delete it for now, but if others want it I can reupload it (I don't necessarily recommend using it)

I will test the 32-bit one first ;)

sounds good, thanks for pointing it out :)

@bartowski I am trying to understand what the point of the 32-bit model is, and why there's no bf16 or fp16. I thought the original was a 16-bit model.

@lostmsu when converting with llama.cpp, you can go to BF16, FP16, or FP32
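
For anyone curious, here's a rough sketch of scripting that conversion, assuming llama.cpp's `convert_hf_to_gguf.py`; the paths and output names below are placeholders, and the converter's flags can change between versions, so check them against your checkout:

```python
import subprocess

# Placeholder paths -- point these at your llama.cpp checkout and the
# downloaded Hugging Face model directory.
CONVERTER = "llama.cpp/convert_hf_to_gguf.py"
MODEL_DIR = "Meta-Llama-3.1-8B-Instruct"

# --outtype selects the GGUF storage type: bf16, f16, or f32.
for outtype in ("bf16", "f16", "f32"):
    subprocess.run(
        [
            "python", CONVERTER, MODEL_DIR,
            "--outtype", outtype,
            "--outfile", f"Meta-Llama-3.1-8B-Instruct-{outtype}.gguf",
        ],
        check=True,  # stop if a conversion fails
    )
```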

models are typically uploaded in BF16, but llama.cpp can't use CUDA with BF16 models, so when calculating the imatrix it made more sense to upcast to FP32, since that conversion is lossless
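
The upcast is lossless because bfloat16 is literally the top 16 bits of a float32: same sign bit, same 8 exponent bits, mantissa truncated from 23 bits to 7. Widening just appends 16 zero bits, as this pure-Python sketch shows (the helper names are made up for illustration):

```python
import struct

def bf16_to_fp32(bits: int) -> float:
    # A bfloat16 value is the top 16 bits of an IEEE-754 float32, so
    # widening to fp32 just appends 16 zero bits -- nothing is lost.
    return struct.unpack(">f", struct.pack(">I", bits << 16))[0]

def fp32_to_bf16(value: float) -> int:
    # Keep the top 16 bits (real converters round to nearest-even;
    # plain truncation is used here for brevity).
    return struct.unpack(">I", struct.pack(">f", value))[0] >> 16

x = 3.140625  # exactly representable in bf16
assert bf16_to_fp32(fp32_to_bf16(x)) == x  # round-trips bit-exactly
```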

these days I just end up going to FP16, because after some math and statistical analysis, the amount of "loss" from the BF16 -> FP16 conversion is so close to 0 that it doesn't actually matter
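
That matches first principles, too: fp16 actually has more mantissa bits than bf16 (10 vs 7), so every bf16 value inside fp16's exponent range converts exactly; you only lose something for magnitudes above ~65504 or below ~6e-5, where model weights essentially never live. Here's a brute-force check (the "weight-like" range bounds are an assumption for illustration):

```python
import struct

def bf16_to_fp32(bits: int) -> float:
    # bfloat16 is the top 16 bits of a float32, so widening is exact.
    return struct.unpack(">f", struct.pack(">I", bits << 16))[0]

# Round-trip every finite positive bf16 value through fp16 and count the
# ones that change. Magnitudes outside [2**-14, 2**16) are skipped: they
# fall outside fp16's normal range, and LLM weights don't live there.
inexact = total = 0
for bits in range(0x0001, 0x7F80):  # all finite positive bf16 bit patterns
    v = bf16_to_fp32(bits)
    if not (2.0**-14 <= v < 2.0**16):
        continue
    total += 1
    roundtrip = struct.unpack(">e", struct.pack(">e", v))[0]  # via fp16
    inexact += roundtrip != v
print(f"{inexact} of {total} bf16 values change when stored as fp16")  # expect 0
```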
