Does CohereForAI/aya-expanse-8b support QLoRA?

#18 opened by Archchana

I received the following error, which I think occurs because CohereForCausalLM is not compatible with quantization and QLoRA training.

Traceback (most recent call last):
  File "Experiments/regression_head_Mat/exp-4/aya_train_multilingual-GEMBA.py", line 200, in <module>
    model = create_headed_qlora(
  File "Anaconda3/envs/transhead2/lib/python3.10/site-packages/transformer_heads/util/load_model.py", line 268, in create_headed_qlora
    model = prepare_model_for_kbit_training(
  File "Anaconda3/envs/transhead2/lib/python3.10/site-packages/peft/utils/other.py", line 116, in prepare_model_for_kbit_training
    model.enable_input_require_grads()
  File "Anaconda3/envs/transhead2/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1761, in enable_input_require_grads
    self._require_grads_hook = self.get_input_embeddings().register_forward_hook(make_inputs_require_grads)
  File "Anaconda3/envs/transhead2/lib/python3.10/site-packages/transformers/models/cohere/modeling_cohere.py", line 994, in get_input_embeddings
    return self.model.embed_tokens
  File "Anaconda3/envs/transhead2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1709, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'CohereForCausalLM' object has no attribute 'embed_tokens'
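
For reference, here is a minimal QLoRA sketch without transformer_heads, using only standard transformers/peft/bitsandbytes calls, to check whether CohereForCausalLM itself supports k-bit training or whether the failure is specific to create_headed_qlora. The target module names (q_proj/k_proj/v_proj/o_proj) and LoRA hyperparameters are my assumptions, not taken from the model card.

```python
# Minimal QLoRA sketch for aya-expanse-8b (no transformer_heads),
# to isolate whether prepare_model_for_kbit_training works on the
# unmodified CohereForCausalLM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "CohereForAI/aya-expanse-8b"

# Standard 4-bit NF4 quantization settings used for QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# This is the call that raises the AttributeError inside
# create_headed_qlora; here it runs against the plain model.
model = prepare_model_for_kbit_training(model)

# Assumed LoRA target modules for the Cohere decoder blocks.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

If this runs, the model itself seems fine with quantized training and the problem is likely in how create_headed_qlora wraps or replaces the base model before calling prepare_model_for_kbit_training.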
