RuntimeError: CUDA error: device-side assert triggered

#6 opened by NickyNicky

Google Colab

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/NeuralBeagle14-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])

[screenshots of the resulting traceback: RuntimeError: CUDA error: device-side assert triggered]
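In case it helps anyone debugging this: an assert like this usually comes from an embedding lookup with an out-of-range token ID, so a quick check is to compare the IDs produced by the chat template against the model's vocab size. A minimal sketch, reusing the names from the snippet above:

# Diagnostic sketch: does the chat template emit token IDs the embedding table can't index?
from transformers import AutoConfig, AutoTokenizer

model = "mlabonne/NeuralBeagle14-7B"
tokenizer = AutoTokenizer.from_pretrained(model)
config = AutoConfig.from_pretrained(model)

messages = [{"role": "user", "content": "What is a large language model?"}]
ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True)  # tokenize=True by default

print("max token id in prompt:", max(ids))
print("embedding rows (config.vocab_size):", config.vocab_size)
if max(ids) >= config.vocab_size:
    print("the prompt indexes past the embedding table -> device-side assert on GPU")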

!pip install -qU transformers accelerate bitsandbytes
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
import torch

model = "mlabonne/NeuralBeagle14-7B"

# 4-bit loading to fit the model in Colab memory
model_kwargs = {"device_map": "auto",
                "load_in_4bit": True,
                "torch_dtype": torch.float16}

tokenizer = AutoTokenizer.from_pretrained(model)

model = AutoModelForCausalLM.from_pretrained(model, **model_kwargs)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    # device=0,
)

messages = [{"role": "user", "content": "What is a large language model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
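Side note: recent transformers versions warn that the bare load_in_4bit flag is deprecated in favour of a BitsAndBytesConfig, so the same 4-bit setup can also be sketched with the quantization settings moved into a config object:

# Sketch: equivalent 4-bit settings expressed as a BitsAndBytesConfig
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mlabonne/NeuralBeagle14-7B",
    quantization_config=bnb_config,
    device_map="auto",
)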


I'm still very new to this compared to a lot of you guys :)

I get something similar, but on an AMD ROCm setup.
[screenshot of a similar device-side assertion failure on ROCm]

To get around it, I resize the token embeddings before running inference, which stops the out-of-bounds warnings/errors:

model.resize_token_embeddings(len(tokenizer))

The model's embed_tokens matrix goes from 32000 rows to 32002.

After that I can run the pipeline without it spamming device-side assertions.
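Spelled out a bit more, with a guard so the resize only happens when the tokenizer actually has more tokens than the embedding table (a sketch, assuming the stock NeuralBeagle tokenizer with its two extra chat tokens):

# Sketch: resize only if the tokenizer vocabulary outgrows the embedding matrix
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/NeuralBeagle14-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

n_rows = model.get_input_embeddings().weight.shape[0]
print("embedding rows:", n_rows, "| tokenizer size:", len(tokenizer))  # 32000 vs 32002 here

if len(tokenizer) > n_rows:
    model.resize_token_embeddings(len(tokenizer))  # new rows are untrained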

I got the same error and could only get it working by using the tokenizer_config.json from mistralai_Mistral-7B-Instruct-v0.2. I haven't figured out yet which setting exactly causes this.
Anyway, after a quick test it seems Open Hermes 2.5 still wipes the floor with this model in terms of reasoning, and it's so censored it thinks stealing an egg from my chicken is unfair. I don't expect fixing the config properly will change much.
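If you don't want to copy config files around by hand, roughly the same thing can be done by loading the tokenizer straight from the Mistral-Instruct repo and keeping this model's weights (a sketch; it assumes the two vocabularies only differ by the added chat tokens):

# Sketch: pair the upstream Mistral-Instruct tokenizer (no extra chat tokens)
# with the NeuralBeagle weights, so no token id exceeds the 32000-row embedding
import torch
import transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
pipeline = transformers.pipeline(
    "text-generation",
    model="mlabonne/NeuralBeagle14-7B",
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)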

To get around it, I resize the token embeddings before running inference, which stops the out-of-bounds warnings/errors:

model.resize_token_embeddings(len(tokenizer))

I have the same issue. Can you post exactly what files need to be changed? I am not familiar with the internals of OB.

Is it possible to just revert the config to the state in which the model was trained? Having to resize the embedding matrix (i.e. adding embeddings) seems very suboptimal. The added <|im_start|> embedding would basically be noise, requiring the model to learn it during fine-tuning. In my experiment, fine-tuning goes incredibly poorly, same as described here.

Has anybody been able to successfully fine-tune this model? Performance seems strong, but my use case needs it to be fine-tuned, which goes very poorly here.
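Not something anyone confirmed in this thread, but one workaround I've seen for the "new embeddings are noise" problem is to initialise the two added rows to the mean of the existing ones before fine-tuning, so they at least start out as an average token rather than random values. A rough sketch:

# Rough sketch: after resizing, set the new embedding rows (input and output)
# to the mean of the original 32000 rows instead of leaving them random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlabonne/NeuralBeagle14-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

old_rows = model.get_input_embeddings().weight.shape[0]   # 32000
model.resize_token_embeddings(len(tokenizer))              # 32002

with torch.no_grad():
    emb_in = model.get_input_embeddings().weight
    emb_out = model.get_output_embeddings().weight
    emb_in[old_rows:] = emb_in[:old_rows].mean(dim=0, keepdim=True)
    emb_out[old_rows:] = emb_out[:old_rows].mean(dim=0, keepdim=True)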

@reknine69
I got the same error, but restarting the session fixed it for me. I hope that solves yours too.
