params discrepancy in QLoRA?

#1
by eastwind - opened

trainable params: 2359296 || all params: 3611104128 || trainable%: 0.06533447711203746

Hey, when trying to train this model with QLoRA, I see that all params is only about 3.6 billion, whereas I expected it to be 7B. Am I missing something here?

Thanks

H2O.ai org

LoRA + quantization changes the reported parameter counts. With 4-bit quantization, bitsandbytes packs two 4-bit weights into each stored byte, so the quantized layers report roughly half their logical parameter count (which is why a 7B model shows up as ~3.6B; layers that stay unquantized, like the embeddings, keep their full counts). Different quantization precisions therefore report different totals.
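If you want counts that match the full-precision model, you can correct for the 4-bit packing yourself. A minimal sketch, assuming the quantized weights show up as bitsandbytes `Params4bit` tensors (more recent PEFT versions apply a similar correction inside `print_trainable_parameters`):

```python
import torch

def count_params(model: torch.nn.Module):
    """Count trainable/total params, correcting for 4-bit packed weights."""
    trainable, total = 0, 0
    for param in model.parameters():
        n = param.numel()
        # bitsandbytes stores two 4-bit values per byte, so the tensor's
        # element count underreports the logical parameter count by 2x.
        if param.__class__.__name__ == "Params4bit":
            n *= 2
        total += n
        if param.requires_grad:
            trainable += n
    return trainable, total

# Example usage (hypothetical `model` loaded in 4-bit with a LoRA adapter attached):
# trainable, total = count_params(model)
# print(f"trainable params: {trainable} || all params: {total} || trainable%: {100 * trainable / total}")
```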

psinger changed discussion status to closed
