Pre-train on MLM Objective
Hello,
I am trying to pre-train the model on my own dataset, starting from the pre-trained weights on Hugging Face. To test the pre-training pipeline, I am first running it on dummy data and then plan to substitute in my own dataset. Pasted below are the code I am using and the error message. I am currently testing with sequence lengths of 2K instead of 131K because my GPUs cannot handle the longer sequences.
I am not certain, but I believe the error occurs because the non-masked tokens are given the label -100 by the data collator, which trips the assertion in nll_loss_forward_reduce_cuda_kernel_2d: t >= 0 && t < n_classes. I have confirmed that the max token id in my dataset is 10, which is less than the model's vocab size (16), so out-of-range input ids should not be the issue.
I have very limited experience with Hugging Face, but I tested a masked language model training pipeline for natural language obtained from here, and that seems to work fine. I understand that the models and datasets are very different, but given the similarity of the task, I have followed the same process for loading the Caduceus model, creating the tokenized_dataset, and creating the data_collator. I have also confirmed that the data and labels look similar between the two tasks.
Maybe I am setting the padding/EOS/etc. token ids incorrectly and that is what causes the error, but I am not sure how to fix it. I would really appreciate some help with this.
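To illustrate what I think is happening, here is a toy example in plain PyTorch (not the actual model) showing that labels of -100 are only handled gracefully when the loss's ignore_index is also -100:

import torch
import torch.nn.functional as F

# Toy illustration (not the actual model): collator-style labels, with -100
# at the non-masked positions and a vocab size of 16 as in Caduceus.
logits = torch.randn(5, 16)
labels = torch.tensor([-100, 3, -100, 7, -100])

# Works: positions labelled -100 are skipped.
print(F.cross_entropy(logits, labels, ignore_index=-100))

# Fails (IndexError on CPU, the `t >= 0 && t < n_classes` assert on CUDA),
# because -100 is then treated as a real class index:
# print(F.cross_entropy(logits, labels, ignore_index=4))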
Package versions:
- Python - 3.8.15
- torch - 2.2.0+cu121
- transformers - 4.38.1
- datasets - 2.15.0
- CUDA - 12.1 (Cuda compilation tools, release 12.1, V12.1.66; Build cuda_12.1.r12.1/compiler.32415258_0)
Code:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"
import torch
from datasets import Dataset
from peft import LoraConfig, TaskType, get_peft_model
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
## Load tokenizer and model
model_name = "kuleshov-group/caduceus-ps_seqlen-131k_d_model-256_n_layer-16"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained(model_name, trust_remote_code=True)
# Set the padding token id to the pad token id of the tokenizer so that
# the trainer passes this to the ignore_index of the loss function. By
# default, pad_token_id for this model is None, and that throws an error
# in Trainer.train(): ignore_index for cross_entropy_loss cannot be None,
# it should be an int.
model.config.pad_token_id = tokenizer.pad_token_id
## Random data sequence
def get_random_sequence(length):
    nt_dict = {0: 'A', 1: 'C', 2: 'G', 3: 'T'}
    rand_seq = torch.randint(0, 4, (length,))
    return ''.join([nt_dict[int(i)] for i in rand_seq])
# Do not use LORA for now
peft_model = model
# seq_length = 131071
# Test with smaller sequence length because GPUs cannot support large
# sequence length as of now
seq_length = 1999
tokenizer.model_max_length = seq_length + 1
dataset = Dataset.from_dict({"text": [get_random_sequence(seq_length) for _ in range(100)]})
dataset = dataset.train_test_split(test_size=0.2, seed=3456)
def tokenize_function(examples):
    return tokenizer(examples["text"])

tokenized_dataset = dataset.map(tokenize_function, batched=True,
                                remove_columns=dataset['train'].column_names)
# Create a data collator for masked language modeling
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm_probability=0.15)
# Define training arguments
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    learning_rate=1e-4,
    weight_decay=0.01,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    report_to=["tensorboard"]
)
# Create Trainer
trainer = Trainer(
    model=peft_model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["test"],
    data_collator=data_collator,
    tokenizer=tokenizer
)
# Start training
trainer.train()
# Save the fine-tuned model
model.save_pretrained("./fine_tuned_model")
Error:
Map: 100%|██████████| 80/80 [00:00<00:00, 518.67 examples/s]
Map: 100%|██████████| 20/20 [00:00<00:00, 500.99 examples/s]
/home/upamanyu/.pyenv/versions/3.8.15/envs/caduceus/lib/python3.8/site-packages/accelerate/accelerator.py:447: FutureWarning: Passing the following arguments to `Accelerator` is deprecated and will be removed in version 1.0 of Accelerate: dict_keys(['dispatch_batches', 'split_batches', 'even_batches', 'use_seedable_sampler']). Please pass an `accelerate.DataLoaderConfiguration` instead:
dataloader_config = DataLoaderConfiguration(dispatch_batches=None, split_batches=False, even_batches=True, use_seedable_sampler=True)
warnings.warn(
Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
0%| | 0/6 [00:00<?, ?it/s]Traceback (most recent call last):
File "caduceus.py", line 87, in <module>
trainer.train()
File "/home/upamanyu/.pyenv/versions/3.8.15/envs/caduceus/lib/python3.8/site-packages/transformers/trainer.py", line 1624, in train
return inner_training_loop(
File "/home/upamanyu/.pyenv/versions/3.8.15/envs/caduceus/lib/python3.8/site-packages/transformers/trainer.py", line 1961, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/home/upamanyu/.pyenv/versions/3.8.15/envs/caduceus/lib/python3.8/site-packages/transformers/trainer.py", line 2911, in training_step
self.accelerator.backward(loss)
File "/home/upamanyu/.pyenv/versions/3.8.15/envs/caduceus/lib/python3.8/site-packages/accelerate/accelerator.py", line 2151, in backward
loss.backward(**kwargs)
File "/home/upamanyu/.pyenv/versions/3.8.15/envs/caduceus/lib/python3.8/site-packages/torch/_tensor.py", line 522, in backward
torch.autograd.backward(
File "/home/upamanyu/.pyenv/versions/3.8.15/envs/caduceus/lib/python3.8/site-packages/torch/autograd/__init__.py", line 266, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
../aten/src/ATen/native/cuda/Loss.cu:250: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:250: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
[... the same `t >= 0 && t < n_classes` assertion failure repeats for the remaining threads ...]
0%| | 0/6 [00:03<?, ?it/s]
This does seem like an indexing mismatch between the shape of the predictions and the labels. Can you try printing out some of the label tensors, or at least the extreme values of the labels, and checking whether they contain indices that would cause an out-of-range error?
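Something along these lines (a rough sketch using the objects from your script, assuming the config exposes vocab_size) would show whether any label falls outside the valid range:

# Rough sketch: inspect the labels the collator actually produces.
batch = data_collator([tokenized_dataset["train"][i] for i in range(8)])
labels = batch["labels"]
print("min label :", labels.min().item())
print("max label :", labels.max().item())
print("vocab size:", model.config.vocab_size)
# Any label outside [0, vocab_size) that is not the loss's ignore_index
# will trip the `t >= 0 && t < n_classes` assertion on CUDA.

Also note that device-side asserts often surface as unrelated CUDA errors like the CUBLAS one above; running with CUDA_LAUNCH_BLOCKING=1, or briefly on CPU, usually gives a more precise traceback.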
Changing the pad token id to -100 fixed the issue, but I am not sure if this is the correct way to solve it:
model.config.pad_token_id = -100
The issue was indeed that the data collator marks the non-masked tokens with a label of -100; since the model passes config.pad_token_id to the loss as ignore_index, those -100 labels were treated as real class targets and tripped the assertion. After the above modification, the model starts training, but I was wondering whether it is actually calculating the loss correctly.
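To convince myself, this is the kind of check I am planning to run: compare the loss the model reports against a manual cross-entropy over only the masked positions (a sketch; it assumes the model returns a standard masked-LM output with loss and logits over the vocab):

import torch
import torch.nn.functional as F

# Sketch: compare the model's reported loss with a manual computation that
# ignores the -100 labels, i.e. averages only over the masked positions.
batch = data_collator([tokenized_dataset["train"][i] for i in range(2)])
batch = {k: v.to(peft_model.device) for k, v in batch.items()}
with torch.no_grad():
    outputs = peft_model(input_ids=batch["input_ids"], labels=batch["labels"])
manual_loss = F.cross_entropy(
    outputs.logits.view(-1, outputs.logits.size(-1)),
    batch["labels"].view(-1),
    ignore_index=-100,
)
print("model loss :", outputs.loss.item())
print("manual loss:", manual_loss.item())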
I was not using padding in my work so I did not need to take this into account, but I believe that using pad_token_id = -100 should be fine if that is the ignore index you pass to the cross-entropy loss. Do the loss curves look reasonable?
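If you want a quick look without opening TensorBoard, the Trainer keeps the logged losses in trainer.state.log_history; something like this works after training (set logging_steps in TrainingArguments low enough that loss entries actually get recorded):

# Quick look at the training loss curve straight from the Trainer's logs.
for entry in trainer.state.log_history:
    if "loss" in entry:
        print(entry["step"], entry["loss"])

One extra sanity check: with purely random A/C/G/T sequences the masked-token loss cannot drop much below ln(4) ≈ 1.39, so a curve flattening around that value is what I would expect for this dummy data.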