Add missing torch when run with pipeline API
#5 opened by quocbao747
Just adding to this one: apart from the missing torch import, I also had to load the tokenizer manually, even after pulling the latest transformers. This code works:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-1b-pt",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-pt")

# Create pipeline
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Generate text
output = pipe("Eiffel tower is located in", max_new_tokens=50)
print(output)
```
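For reference, `pipeline()` can also resolve the model and tokenizer from the model id on its own, so the manual `AutoTokenizer` step should only be needed if that automatic lookup fails (as it apparently did above). A minimal sketch of the shorter form:

```python
import torch
from transformers import pipeline

# Let pipeline() load the model and tokenizer from the model id directly;
# pass a tokenizer explicitly only if this lookup fails in your environment.
pipe = pipeline(
    "text-generation",
    model="google/gemma-3-1b-pt",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # needs accelerate installed (see the question below)
)
print(pipe("Eiffel tower is located in", max_new_tokens=50))
```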
Do I need to install `accelerate>=0.26.0` or any other packages?
>>> model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-pt", torch_dtype=torch.bfloat16, device_map="auto")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/models/venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
return model_class.from_pretrained(
File "/root/models/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 273, in _wrapper
return func(*args, **kwargs)
File "/root/models/venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3753, in from_pretrained
raise ImportError(
ImportError: Using `low_cpu_mem_usage=True`, a `device_map` or a `tp_plan` requires Accelerate: `pip install 'accelerate>=0.26.0'`
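Yes: as the error itself says, passing `device_map` (or `low_cpu_mem_usage=True`, or a `tp_plan`) requires Accelerate, so `pip install 'accelerate>=0.26.0'` resolves it. If you would rather not add the dependency, a minimal sketch that drops `device_map` and places the model on a device manually (no Accelerate needed in this path):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Without device_map, transformers loads the model on CPU and does not
# require accelerate; move it to a device yourself afterwards.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-1b-pt",
    torch_dtype=torch.bfloat16,  # assumes your hardware supports bfloat16
).to(device)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-pt")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe("Eiffel tower is located in", max_new_tokens=50))
```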