
rope_theta

#6 opened by viktor-ferenczi

I had to fix `rope_theta` in the `config.json` inside my local model folder to be able to use this model with vLLM:
```diff
< "rope_theta": 1000000,
---
> "rope_theta": 10000,
```


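In case anyone else needs the same workaround, here is a minimal sketch of applying the edit programmatically instead of by hand; the model folder path is a placeholder for your local copy:

```python
import json
from pathlib import Path

# Placeholder path: point this at your local copy of the model.
config_path = Path("path/to/local/model/config.json")

config = json.loads(config_path.read_text())
config["rope_theta"] = 10000  # shipped config had 1000000
config_path.write_text(json.dumps(config, indent=2) + "\n")
```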
I'm not sure whether this is a vLLM-only issue or a problem with the configuration of this model.

Related PR and discussion at vLLM:
https://github.com/vllm-project/vllm/pull/998
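For reference, this is roughly how I check that the patched folder loads under vLLM; the model path is the same placeholder as above, and the prompt and token limit are arbitrary:

```python
from vllm import LLM, SamplingParams

# Placeholder path: the local model folder with the patched config.json.
llm = LLM(model="path/to/local/model")
params = SamplingParams(max_tokens=32)

outputs = llm.generate(["def fibonacci(n):"], params)
print(outputs[0].outputs[0].text)
```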

Could you please clarify?