How to run speculative decoding of this model with a 0.5B model
#18 · opened by Eryk-Chmielewski
I get errors in vLLM. The first is that vocab_size differs between the two models, so in the 0.5B model's config.json I set vocab_size=152064 to match the 32B model's value.
But now I get this error:
`assert loaded_weight.shape[output_dim] == self.org_vocab_size`
I don't see a way to change output_dim. Is there any way to get around this?
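For what it's worth, patching vocab_size in the draft's config.json can't fix this on its own: the 0.5B checkpoint's embedding and lm_head tensors keep their original row count, which is what loaded_weight.shape[output_dim] reports, so the assert is comparing the real tensor shape against the value you patched in. Here is a minimal sketch to see the underlying mismatch (the Qwen repo names are my assumption for "this model" and its draft; substitute whichever pair you are actually using):

```python
from transformers import AutoConfig

# Assumed repo names -- replace with your actual target/draft pair.
TARGET = "Qwen/Qwen2.5-Coder-32B-Instruct"
DRAFT = "Qwen/Qwen2.5-Coder-0.5B-Instruct"

# Compare the declared vocab sizes. The draft checkpoint's weight tensors
# match its original value, not whatever config.json has been edited to say.
print("target vocab_size:", AutoConfig.from_pretrained(TARGET).vocab_size)
print("draft  vocab_size:", AutoConfig.from_pretrained(DRAFT).vocab_size)
```

If the two values genuinely differ, the clean fix is a draft model that shares the target's tokenizer and vocabulary, not a config edit.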
How do you run speculative decoding on vLLM with this model? Can you share the full command to launch vLLM, preferably via Docker?
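For reference, a minimal sketch of launching speculative decoding in vLLM. It assumes a release in which the speculative_model / num_speculative_tokens engine arguments exist (newer versions replaced them with a speculative_config dict/JSON), and the model names below are placeholders:

```python
from vllm import LLM, SamplingParams

# Sketch for vLLM releases that expose speculative_model directly;
# check your version's docs, as these argument names have changed over time.
llm = LLM(
    model="Qwen/Qwen2.5-Coder-32B-Instruct",               # assumed target
    speculative_model="Qwen/Qwen2.5-Coder-1.5B-Instruct",  # assumed draft
    num_speculative_tokens=5,   # tokens the draft proposes per step
    tensor_parallel_size=2,     # adjust to your GPU count
)

outputs = llm.generate(
    ["Write a quicksort in Python."],
    SamplingParams(temperature=0.0, max_tokens=256),
)
print(outputs[0].outputs[0].text)

# Roughly equivalent server launch; the vllm/vllm-openai Docker image uses
# the same entrypoint, so these flags can be appended to `docker run`:
#   vllm serve Qwen/Qwen2.5-Coder-32B-Instruct \
#       --speculative-model Qwen/Qwen2.5-Coder-1.5B-Instruct \
#       --num-speculative-tokens 5
```

Whatever draft you pick still has to pass the vocab check above, so the 1.5B name here is only illustrative.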
FYI, speculative decoding "just works" with exllamav2 (via TabbyAPI); I haven't had any issues using the 1.5B model as the draft.