torch.cuda.OutOfMemoryError: CUDA out of memory.

#115 opened by SlyGoblin

I want to test the Falcon 7B model locally on my PC with an Nvidia RTX 3060 and 16 GB of RAM. I thought this should be enough to run Falcon 7B, but I am getting the error below:

"torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 316.00 MiB. GPU 0 has a total capacity of 6.00 GiB of which 0 bytes is free. Of the allocated memory 12.10 GiB is allocated by PyTorch, and 34.25 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management "

I am passing a simple prompt and using the following code:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from torch.cuda.amp import autocast

model_name = "tiiuae/falcon-7b-instruct"
model_directory = "./model"  # Specify your model directory here

# Ensure torch knows to run on the GPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the tokenizer and model, specifying the cache directory
tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir=model_directory)
model = AutoModelForCausalLM.from_pretrained(model_name, cache_dir=model_directory).to(device)

# Example input
inputs = "Example input text here."

# Preprocess (the tokenizer already returns a batch dimension)
input_ids = tokenizer(inputs, return_tensors="pt").input_ids.to(device)

# Inference with mixed precision
with autocast():
    outputs = model.generate(input_ids, max_length=100)

# Post-process
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)

What do I do?

Hello, it seems that PyTorch was actually only able to allocate 12 GB. I suggest using nvtop to find out what is sitting on those missing 4 GB (if you're on Windows, see the quick check further below). In any case, you should first try loading the model in float16 or bfloat16; the full precision of float32 is rarely really necessary:

model = AutoModelForCausalLM.from_pretrained(model_name, cache_dir=model_directory, torch_dtype=torch.bfloat16).to(device)
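If you can't run nvtop (e.g. on Windows), you can also ask PyTorch directly how much VRAM is actually free. This is just a small sketch using torch.cuda.mem_get_info, which recent PyTorch releases provide:

import torch

# Returns (free, total) in bytes for the current CUDA device
free, total = torch.cuda.mem_get_info()
print(f"free: {free / 1024**3:.2f} GiB / total: {total / 1024**3:.2f} GiB")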

If that still doesn't work, then you have to use quantization:

from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=False)
# quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(model_name, cache_dir=model_directory, device_map="cuda", quantization_config=quantization_config)

I have to use 4-bit quantization and Flash Attention to run it on an 8 GB RTX 3070, roughly like the sketch below. Good luck!
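In case it helps, here is a minimal sketch of that kind of setup, assuming a recent transformers with bitsandbytes and the flash-attn package installed (attn_implementation="flash_attention_2" is the transformers switch for Flash Attention 2; exact arguments may differ for your versions):

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization with bfloat16 compute
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    cache_dir=model_directory,
    device_map="cuda",
    quantization_config=quantization_config,
    attn_implementation="flash_attention_2",  # needs flash-attn and an Ampere or newer GPU
)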
