vcadillo committed
Commit 73cd2bc
1 Parent(s): 301cb65

Multi-GPU inference issue.


Running inference across multiple GPUs raises a "tensors on different devices" error, so I fixed it by moving images_features onto inputs_embeds.device at line 861, like this:

new_input_embeds.append(torch.cat(
    (inputs_embeds[i, :boi_token_pos], images_features[i].to(inputs_embeds.device), inputs_embeds[i, eoi_token_pos + 1:])))
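For context, here is a minimal standalone sketch of the failure mode. It is not part of the model code; the shapes, positions, and device ids are made up, and it assumes at least two CUDA devices are visible:

import torch

# embeddings produced on one GPU, image features on another (hypothetical shapes)
inputs_embeds = torch.randn(1, 10, 4096, device="cuda:0")
images_features = torch.randn(1, 3, 4096, device="cuda:1")
i, boi_token_pos, eoi_token_pos = 0, 4, 6

# torch.cat requires every operand to live on the same device, so mixing
# cuda:0 and cuda:1 tensors raises a "tensors on different devices" RuntimeError.
# Moving the image features onto the embeddings' device avoids it:
merged = torch.cat((inputs_embeds[i, :boi_token_pos],
                    images_features[i].to(inputs_embeds.device),
                    inputs_embeds[i, eoi_token_pos + 1:]))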

Files changed (1)
  1. modeling_chatglm.py +1 -1
modeling_chatglm.py CHANGED
@@ -858,7 +858,7 @@ class ChatGLMModel(ChatGLMPreTrainedModel):
                                      self.config.eoi_token_id)
                 assert eoi_token_pos - boi_token_pos == 2
                 new_input_embeds.append(torch.cat(
-                    (inputs_embeds[i, :boi_token_pos], images_features[i], inputs_embeds[i, eoi_token_pos + 1:])))
+                    (inputs_embeds[i, :boi_token_pos], images_features[i].to(inputs_embeds.device), inputs_embeds[i, eoi_token_pos + 1:])))
                 new_position_ids.append(torch.cat(
                     (position_ids[i, :boi_token_pos + 1], position_ids[i, boi_token_pos + 1].repeat(num_patches),
                      position_ids[i, eoi_token_pos:])
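For reference, the mismatch typically only appears when the model is sharded across GPUs, for example by loading with Transformers' device_map="auto". A hedged loading sketch follows; the checkpoint id is a placeholder, substitute the repo this modeling_chatglm.py belongs to:

import torch
from transformers import AutoModelForCausalLM

# placeholder checkpoint id, not the actual repo name
model = AutoModelForCausalLM.from_pretrained(
    "your-org/your-chatglm-vision-checkpoint",
    torch_dtype=torch.bfloat16,
    device_map="auto",       # lets accelerate place layers on all visible GPUs
    trust_remote_code=True,  # needed because modeling_chatglm.py is custom code
)

With the vision features and the text embeddings potentially ending up on different devices under such a placement, the .to(inputs_embeds.device) call above keeps the concatenation on a single device.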