How to run on multiple GPUs?

#25
by Chan-Y - opened

I have 4 GPUs that I want to run the Qwen2 VL models on.

```python
import torch
import torch.nn as nn
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model_name = "Qwen/Qwen2-VL-2B-Instruct"
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)
model = nn.DataParallel(model)
processor = AutoProcessor.from_pretrained(model_name)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": file},
            {"type": "text", "text": "Describe the image"},
        ],
    }
]
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
with torch.no_grad():
    generated_ids = model.module.generate(**inputs, max_new_tokens=128)
```

But I always get:

```
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [35,0,0], thread: [31,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
ERROR:  CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Traceback (most recent call last):
  File "/home/ubuntu/projects/mistral-qaC/services/VisionService.py", line 104, in ask_vision
    generated_ids = self.model.module.generate(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/projects/upper/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/projects/upper/lib/python3.12/site-packages/transformers/generation/utils.py", line 2015, in generate
    result = self._sample(
             ^^^^^^^^^^^^^
  File "/home/ubuntu/projects/upper/lib/python3.12/site-packages/transformers/generation/utils.py", line 2965, in _sample
    outputs = self(**model_inputs, return_dict=True)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/projects/upper/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/projects/upper/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/projects/upper/lib/python3.12/site-packages/accelerate/hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ubuntu/projects/upper/lib/python3.12/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 1598, in forward
    inputs_embeds[image_mask] = image_embeds
    ~~~~~~~~~~~~~^^^^^^^^^^^^
RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
I tried running my Python script with `CUDA_LAUNCH_BLOCKING=1 python script.py`, but that didn't give me a more specific error either.
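For anyone digging into the trace: below is a small CPU-only sketch (made-up shapes) of the contract behind the line that fails in `modeling_qwen2_vl.py` (`inputs_embeds[image_mask] = image_embeds`). The number of placeholder positions selected by the mask has to equal the number of vision embedding rows; my assumption is that a similar count or device mismatch, introduced by the extra `DataParallel` wrapper, is what trips the CUDA assert. On CPU a mismatch surfaces as a readable `RuntimeError` instead:

```python
import torch

hidden = 4
inputs_embeds = torch.zeros(1, 6, hidden)  # (batch, seq_len, hidden)
# three image-placeholder positions in the sequence
image_mask = torch.tensor([[False, True, True, True, False, False]])
image_embeds = torch.randn(3, hidden)      # one row per placeholder token

# counts match: the masked assignment scatters the rows into place
inputs_embeds[image_mask] = image_embeds
assert torch.equal(inputs_embeds[0, 1:4], image_embeds)

# counts mismatch: 2 rows cannot fill 3 masked positions
try:
    inputs_embeds[image_mask] = torch.randn(2, hidden)
except RuntimeError as e:
    print("mismatch raises:", type(e).__name__)
```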

My transformers and PyTorch versions are:
```bash
transformers==4.45.0.dev0
torch==2.4.1+cu124
```

Does anyone know how to fix this?
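For reference, this is the variant I plan to try next (a sketch, not verified on my box): drop the `nn.DataParallel` wrapper entirely, since `device_map="auto"` already shards the model across all visible GPUs via accelerate, and move the processor outputs onto the model's device before `generate()`. The image path is a placeholder; the block is guarded so it only runs where CUDA is available:

```python
import torch

if torch.cuda.is_available():
    from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
    from qwen_vl_utils import process_vision_info

    model_name = "Qwen/Qwen2-VL-2B-Instruct"
    # device_map="auto" alone spreads the layers over the 4 GPUs;
    # no nn.DataParallel wrapper on top of it
    model = Qwen2VLForConditionalGeneration.from_pretrained(
        model_name, torch_dtype="auto", device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(model_name)

    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": "demo.jpg"},  # placeholder path
                {"type": "text", "text": "Describe the image"},
            ],
        }
    ]
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=True,
        return_tensors="pt",
    )
    # key step: input tensors must live on the same device as the first shard
    inputs = inputs.to(model.device)
    with torch.no_grad():
        generated_ids = model.generate(**inputs, max_new_tokens=128)
    print(processor.batch_decode(generated_ids, skip_special_tokens=True))
else:
    print("no CUDA device available; skipping")
```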
