Architecture should be Qwen2ForCausalLM or Qwen2Model?
1
#19 opened 17 days ago by ultraxyz
How to load this model onto multiple GPUs?
1
#18 opened 19 days ago by yijiu
Inference speed is very slow, and multi-GPU inference has a bug
1
#17 opened 24 days ago by YuYuyanzu
Error when running the official code: Failed to import transformers.models.auto because of the following error
1
#15 opened about 1 month ago by haaaaaaaa1
Usage with transformers instead of sentence_transformers
1
#14 opened about 2 months ago by emanjavacas
GGUF format model needed
#10 opened 2 months ago by ehmy
Please release a 4-bit quantized version
4
#9 opened 2 months ago by piboye
How to fine-tune this model
3
#8 opened 2 months ago by enbacheng