Slow inference times on GPU
#17
opened by loretoparisi
While model loading is pretty fast (once downloaded), taking around 1.5 seconds, an inference with 2048 tokens (max_length) on an A10G / 24 GB took ~80 seconds.
The loading function was:
```python
import time
import torch
import transformers

def load_hf_local(model_name, device, dtype: torch.dtype = torch.float16):
    """
    Load model via Hugging Face AutoTokenizer / AutoModelForCausalLM.
    """
    start_time = time.time()
    torch.set_default_dtype(dtype)
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_name, local_files_only=True, trust_remote_code=True)
    with torch.device(device):
        model = transformers.AutoModelForCausalLM.from_pretrained(model_name, local_files_only=True, device_map="auto", torch_dtype=dtype, trust_remote_code=True)
    model.to(device)
    print(f"Loaded in {time.time() - start_time:.2f} seconds")
    return tokenizer, model
```
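As a side note, `device_map="auto"` already places the weights, so the explicit `model.to(device)` is likely redundant. Below is a minimal loading variant that skips the device map and moves the weights once; it is an assumption about a possible simplification, not the original code:

```python
import time
import torch
import transformers

def load_hf_local_nomap(model_name: str, device: str, dtype: torch.dtype = torch.float16):
    """Hypothetical variant: load the checkpoint, then move the weights to `device` once."""
    start_time = time.time()
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_name, local_files_only=True, trust_remote_code=True)
    model = transformers.AutoModelForCausalLM.from_pretrained(model_name, local_files_only=True, torch_dtype=dtype, trust_remote_code=True).to(device)
    print(f"Loaded in {time.time() - start_time:.2f} seconds")
    return tokenizer, model
```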
The generate function was:
```python
def LLM_generate(model, tokenizer, prompt, length):
    start_time = time.time()
    inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False)
    model_inputs = inputs.to(model.device)  # move the inputs to the device the model lives on
    input_token_len = len(model_inputs.tokens())
    outputs = model.generate(**model_inputs, max_length=length if length >= input_token_len else input_token_len)
    print(f"generated in {time.time() - start_time:.2f} seconds")
    return tokenizer.batch_decode(outputs)[0]
```
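For completeness, a usage sketch of how these two functions might be called; the model name, device, prompt, and length here are assumptions, not from the original post:

```python
import torch

# Hypothetical call: load a local Phi-2 checkpoint in fp16 and generate up to 512 tokens.
tokenizer, model = load_hf_local("microsoft/phi-2", device="cuda", dtype=torch.float16)
text = LLM_generate(model, tokenizer, "def fibonacci(n):", length=512)
print(text)
```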
Setting max_length to 512 tokens led to ~20 seconds.
This is a test with max_length ranging from 128 to 2048 tokens:
```
generated in 4.37 seconds
max_length:128, elapsed:4.372055530548096
generated in 9.16 seconds
max_length:256, elapsed:9.158923625946045
generated in 19.05 seconds
max_length:512, elapsed:19.05333709716797
generated in 38.90 seconds
max_length:1024, elapsed:38.89565658569336
generated in 79.18 seconds
max_length:2048, elapsed:79.17627263069153
```
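The elapsed time grows roughly linearly with max_length, which is consistent with autoregressive decoding cost scaling with the number of tokens produced. A minimal sketch of one way to bound latency, by capping only the newly generated tokens via `max_new_tokens`; this is an assumption about a possible mitigation, not part of the original post:

```python
# Sketch: cap the number of newly generated tokens rather than the total sequence
# length. `model` and `tokenizer` reuse the objects loaded above; the prompt and
# the 256-token cap are illustrative.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,                   # cap on generated tokens only
    pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
)
print(tokenizer.batch_decode(outputs)[0])
```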
Phi models are compatible with vLLM; have you considered using it?
https://docs.vllm.ai/en/latest/index.html
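For reference, a minimal sketch of what the suggested vLLM path might look like; the model name, dtype, and sampling settings are assumptions, not from the comment above:

```python
from vllm import LLM, SamplingParams

# Hypothetical vLLM usage for a Phi checkpoint.
llm = LLM(model="microsoft/phi-2", trust_remote_code=True, dtype="float16")
params = SamplingParams(max_tokens=512)
outputs = llm.generate(["def fibonacci(n):"], params)
print(outputs[0].outputs[0].text)
```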
gugarosa changed discussion status to closed
vLLM crashes with Phi 2.0: AttributeError: 'PhiConfig' object has no attribute 'layer_norm_epsilon'
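A quick way to see which epsilon-like attribute the loaded config actually exposes; this is a diagnostic sketch under the assumption that the checkpoint is microsoft/phi-2, not part of the report above:

```python
from transformers import AutoConfig

# List every epsilon-like attribute on the Phi config, since vLLM expects
# `layer_norm_epsilon` and the loaded config apparently does not define it.
config = AutoConfig.from_pretrained("microsoft/phi-2", trust_remote_code=True)
print([name for name in dir(config) if "eps" in name.lower()])
```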