How to infer on GPU?

#4 by z-hb - opened
from transformers import ViTImageProcessor, ViTModel
from PIL import Image
import requests

# Download an example image
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Load the image processor and the ViT model
processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224-in21k')
model = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k')
inputs = processor(images=image, return_tensors="pt")

# Forward pass
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state

This is the ViT example shown on the model card. If I want to run inference on a GPU, what should I do?
I know I should put

model = model.cuda()

But how do I move the data to the GPU? What code should I use for the processor?
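One way to do this (a minimal sketch, assuming a single CUDA device and the same checkpoint as above): the processor itself stays on CPU and only prepares the pixel values; the resulting tensors can be moved to the GPU with .to(device) before the forward pass.

import torch
from transformers import ViTImageProcessor, ViTModel
from PIL import Image
import requests

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224-in21k')
# Move the model weights to the GPU
model = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k').to(device)
model.eval()

# The processor runs on CPU; move its output tensors (pixel_values) to the GPU
inputs = processor(images=image, return_tensors="pt").to(device)

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state is still on the GPU; call .cpu() if you need it on the host
last_hidden_states = outputs.last_hidden_state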

z-hb changed discussion status to closed
