Unable to load the model

#10
by armandal - opened

Hi,
I'm following the tutorial and installed all the required libraries (including flash_attn), but I cannot load the model:

[Screenshot attached: phi_vision_error.PNG]

Is there a mistake in the library versions required by the tutorial?

thanks!

Microsoft org

Maybe try installing the latest flash attention: pip install flash-attn --no-build-isolation.

I am facing the same problem even though I have already installed flash-attn.
The vision model is the only one not working for me.
Is anyone else experiencing this issue?

Running on Windows or without flash attention

To enable the model in these environments, here are the steps you may consider following:

Step 1: comment out the flash attention import code in modeling_phi3_v.py, from line 52 to line 56.

# if is_flash_attn_2_available():
#     from flash_attn import flash_attn_func, flash_attn_varlen_func
#     from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input  # noqa

#     _flash_supports_window_size = "window_size" in list(inspect.signature(flash_attn_func).parameters)
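
If you are not sure where modeling_phi3_v.py lives, trust_remote_code models are normally cached under the Hugging Face modules directory; a small helper like this can locate the file (the path layout below is an assumption and may differ on your setup, and the file only appears after a first load attempt):

# Locate the cached remote-code file to edit.
# Assumption: transformers stores dynamic modules under HF_HOME/modules/transformers_modules.
from pathlib import Path
from huggingface_hub import constants

modules_dir = Path(constants.HF_HOME) / "modules" / "transformers_modules"
for path in modules_dir.rglob("modeling_phi3_v.py"):
    print(path)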

Step 2: change "_attn_implementation" from "flash_attention_2" to "eager" in config.json, or disable flash attention when you create the model as shown below.

from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained('microsoft/Phi-3-vision-128k-instruct', device_map="cuda", trust_remote_code=True, torch_dtype="auto", _attn_implementation="eager")
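
To check that the model actually runs after the change, a quick inference along the lines of the model card usage should work. This is a sketch that reuses the model object from above; the image path and prompt are placeholders, not part of this thread:

from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained('microsoft/Phi-3-vision-128k-instruct', trust_remote_code=True)

# Placeholder image for the smoke test; use any local image you have.
image = Image.open("example.png")
messages = [{"role": "user", "content": "<|image_1|>\nDescribe this image."}]
prompt = processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = processor(prompt, [image], return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=100, eos_token_id=processor.tokenizer.eos_token_id)
# Decode only the newly generated tokens, not the prompt.
print(processor.batch_decode(output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])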

Maybe try installing the latest flash attention: pip install flash-attn --no-build-isolation.

thanks for this recommendation, but it did not work :(

Those steps work for me to load the model! Thx for your help

Here's a version that works with the changes mentioned above, Kukedlc/Phi-3-Vision-Win-snap.
https://huggingface.co/Kukedlc/Phi-3-Vision-Win-snap

You just need to run the Python code provided in the model card; the changes in the .py file and config.json are already made.
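
Loading that fork should look roughly like this (an untested sketch; the repo id is taken from the link above, and the usage mirrors the eager-attention example earlier in the thread):

from transformers import AutoModelForCausalLM, AutoProcessor

# Pre-patched fork: eager attention is already set in its config.json, so no extra kwargs are needed.
model_id = 'Kukedlc/Phi-3-Vision-Win-snap'
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda", trust_remote_code=True, torch_dtype="auto")
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)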

Thanks! It works after installing the following packages:

pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers.git@v4.40.2
pip install flash_attn==2.5.8
pip install numpy==1.24.4
pip install Pillow==10.3.0
pip install Requests==2.31.0
pip install torch==2.3.0
pip install torchvision==0.18.0
pip install accelerate
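
If you want to double-check your environment against this list, a quick version printout helps (the distribution names below are assumptions and may need adjusting):

# Print installed versions to compare against the list above.
from importlib.metadata import version, PackageNotFoundError

for pkg in ["transformers", "flash-attn", "numpy", "Pillow", "requests", "torch", "torchvision", "accelerate"]:
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
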
nguyenbh changed discussion status to closed
