Running the model on GPU

#16
by Jaglinux - opened

Has anyone tried running the model on the GPU?

device = "cuda:0" if torch.cuda.is_available() else "cpu"
batch_dict = self.tokenizer(input_texts, max_length=512, padding=True,
                            truncation=True, return_tensors='pt').to(device)

outputs = self.model(**batch_dict).to(device)

I get the error below:
File..
outputs = self.model(**batch_dict).to(device)
...
File "../python3.9/site-packages/torch/nn/functional.py", line 2233, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
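The error means the input tensors were moved to `cuda:0` but the model's weights stayed on the CPU, so the embedding lookup mixes devices. Calling `.to(device)` on the outputs happens too late; the model itself has to be moved before the forward pass. A minimal sketch of that fix (the model id and `input_texts` here are placeholders, not from this thread):

```python
import torch
from transformers import AutoTokenizer, AutoModel

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Placeholder checkpoint id -- substitute the actual model from this repo.
tokenizer = AutoTokenizer.from_pretrained("some-org/some-model")
model = AutoModel.from_pretrained("some-org/some-model").to(device)  # move the weights, not the outputs

input_texts = ["example sentence"]  # placeholder inputs
batch_dict = tokenizer(input_texts, max_length=512, padding=True,
                       truncation=True, return_tensors="pt").to(device)

with torch.no_grad():
    outputs = model(**batch_dict)  # outputs are already on `device`
```

Once the model is on the same device as the inputs, the trailing `.to(device)` on the outputs can simply be dropped (the returned `ModelOutput` object doesn't implement `.to()` anyway, so that call would fail even after the forward pass succeeds).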
