
There are some modules missing like 'einops'

#6
by vigneshwar472 - opened

I was using the generation process given in the README and ran into import errors. Perhaps some modules are missing.


vigneshwar472 changed discussion status to closed

Sorry for the inconvenience.
I did not realize that I had to install it manually.

But why can't 'LlavaLlamaForCausalLM' be imported from 'llava.model'?

vigneshwar472 changed discussion status to open

Check my comment for more information, @vigneshwar472. You may run into the same problems that I did.

First, I had to install all of these packages:

"accelerate>=1.0.1",
"av>=13.1.0",
"boto3>=1.35.46",
"decord>=0.6.0",
"einops>=0.6.0",
"flash-attn",
"llava",
"open-clip-torch>=2.28.0",
"transformers>=4.45.2",

Second, in llava.model you need to uncomment the line for Qwen to work.

Just change the __init__.py in the llava folder to include "from .model.language_model.llava_llama import LlavaLlamaForCausalLM".
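
For reference, a minimal sketch of what the edited llava/__init__.py might look like. The Qwen line below is an assumption; the exact class and file names depend on the llava version you installed.

```python
# llava/__init__.py (sketch): re-export LlavaLlamaForCausalLM at the package root.
from .model.language_model.llava_llama import LlavaLlamaForCausalLM

# If the Qwen variant is needed, uncomment the corresponding line as well
# (assumed class/file name; check llava/model/language_model/ in your install):
# from .model.language_model.llava_qwen import LlavaQwenForCausalLM
```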
