iboing committed on
Commit 3e902e6 · verified · 1 Parent(s): 77f530e

Update README.md

Files changed (1)
  1. README.md +16 -3
README.md CHANGED
@@ -1,3 +1,16 @@
- ---
- license: llama2
- ---
+ ---
+ license: llama2
+ ---
+
+ The LLaMA-2-7B model fine-tuned on the Math task using [CorDA](https://arxiv.org/pdf/2406.05223) in KPA mode with the NQ open dataset.
+
+ | Method | TriviaQA | NQ open | GSM8k | Math |
+ |---|---|---|---|---|
+ | LoRA | 44.17 | 1.91 | 42.68 | 5.92 |
+ | [CorDA (KPA with nqopen)](https://huggingface.co/iboing/CorDA_KPA_nqopen_finetuned_math/tree/main) | **45.23** | **10.44** | 45.64 | 6.94 |
+ | [CorDA (IPA with MetaMath)](https://huggingface.co/iboing/CorDA_IPA_math_finetuned_math/tree/main) | - | - | **54.59** | **8.54** |
+
+ You can evaluate the model's performance by following step 3 in the [CorDA GitHub repo](https://github.com/iboing/CorDA).
+
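+ The evaluation itself is handled by the repo's scripts. Purely as a rough illustration of what GSM8k-style scoring typically involves (a hedged sketch, not the repo's actual code; the helper names below are hypothetical), the final number in the generated solution is extracted and compared against the reference answer:
+
+ ```python
+ import re
+
+ def extract_final_number(text: str):
+     """Return the last number appearing in a generated solution, or None."""
+     matches = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
+     return float(matches[-1]) if matches else None
+
+ def is_correct(generation: str, reference_answer: str) -> bool:
+     """Count a GSM8k sample as correct if the final numbers match."""
+     pred = extract_final_number(generation)
+     gold = extract_final_number(reference_answer)
+     return pred is not None and gold is not None and abs(pred - gold) < 1e-4
+ ```
+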
+ Note: The model trained with the CorDA adapter relies on customized code. To restore the original LLaMA architecture, run `merge_adapter_for_corda.py` from the [CorDA GitHub repo](https://github.com/iboing/CorDA).
+
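+ Once the adapter has been merged back into a standard LLaMA checkpoint with `merge_adapter_for_corda.py`, the result can be loaded like any other Hugging Face causal LM. Below is a minimal sketch (the local path is a placeholder and the snippet is an assumption, not an official example from the repo):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Placeholder path to the checkpoint written by merge_adapter_for_corda.py
+ model_path = "path/to/merged_corda_llama2_math"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
+
+ prompt = (
+     "Question: Natalia sold clips to 48 of her friends in April, and then she sold "
+     "half as many clips in May. How many clips did Natalia sell altogether in April and May?\nAnswer:"
+ )
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
+ print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```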