This is a LLaMA-2-7B model (7.06B parameters, FP16) fine-tuned on the Math task using CorDA in KPA (knowledge-preserved adaptation) mode with nqopen.

| Method | TriviaQA | NQ open | GSM8k | Math |
|---|---|---|---|---|
| LoRA | 44.17 | 1.91 | 42.68 | 5.92 |
| CorDA (KPA with nqopen) | 45.23 | 10.44 | 45.64 | 6.94 |
| CorDA (IPA with MetaMath) | - | - | 54.59 | 8.54 |

You can evaluate the model's performance by following step 3 in the CorDA GitHub repo.
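
As a quick sanity check (a minimal sketch, not the repo's official step-3 evaluation script), the model can be loaded with `transformers` and prompted with a GSM8k-style question; `trust_remote_code=True` is required because the checkpoint ships customized code:

```python
# Minimal generation sketch (assumed prompt format; not the official eval).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "iboing/CorDA_KPA_nqopen_finetuned_math"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # the repo contains custom model code
)

prompt = "Question: If there are 3 cars and each car has 4 wheels, how many wheels are there in total?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```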

Note: The model trained with the CorDA adapter relies on customized code. To restore the original LLaMA architecture, run merge_adapter_for_corda.py from the CorDA GitHub repo.
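
Conceptually, merging folds the trained low-rank factors back into the frozen base weights so the checkpoint again uses stock LLaMA linear layers. Below is a minimal sketch of that idea with assumed tensor shapes; it is not the actual merge_adapter_for_corda.py logic, so use the repo's script for real merging:

```python
# Conceptual low-rank merge sketch (assumed shapes; not the CorDA script).
import torch

def merge_low_rank(weight: torch.Tensor, A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Fold low-rank factors into a frozen base weight.

    weight: (out_features, in_features) frozen base matrix
    A:      (rank, in_features) trained down-projection
    B:      (out_features, rank) trained up-projection
    """
    return weight + B @ A  # merged weight usable by a stock nn.Linear
```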
