vmajor committed
Commit 6190410
1 Parent(s): 7a9e677

Updated with HuggingFace leaderboard benchmark results

Files changed (1): README.md (+11, -1)
README.md CHANGED
@@ -8,7 +8,17 @@ This merged model showed marginal improvement in perplexity scores:

The perplexity for Orca-2-13b is: 7.595028877258301
The perplexity for orca2-26B-self-merge is: 7.550178050994873
- The perplexity for orca2-39B-self-merge is:
+ The perplexity for orca2-39B-self-merge is: NC
+
+ The following table summarizes model performance across a range of benchmarks:
+
+ | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
+ |------------------------------------|-------------|-------|-----------|-------|------------|------------|-------|
+ | microsoft/Orca-2-13b | 58.64 | 60.67 | 79.81 | 60.37 | 56.41 | 76.64 | 17.97 |
+ | vmajor/Orca2-13B-selfmerge-26B | 62.24 | 60.84 | 79.84 | 60.32 | 56.38 | 76.87 | 39.2 |
+ | vmajor/Orca2-13B-selfmerge-39B | 62.24 | 60.84 | 79.84 | 60.32 | 56.38 | 76.87 | 39.2 |
+
+ Interestingly, the GSM8K score more than doubled with the first self-merge. The second self-merge, which produced the 39B model, did not yield any further gains.
 
 
  ---
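
Perplexity figures like the ones quoted above are typically produced with a sliding-window evaluation over a held-out corpus. Below is a minimal sketch of that procedure using the transformers and datasets libraries; the WikiText-2 corpus, the 4096-token window, and the 512-token stride are illustrative assumptions, since the commit does not record how these scores were actually computed.

```python
# Minimal sketch: sliding-window perplexity, following the common
# transformers recipe. The corpus (WikiText-2), window size, and stride
# are illustrative assumptions, not the setup used for the scores above.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Orca-2-13b"  # swap in the merged model to compare
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Concatenate the test split into one long token sequence.
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

max_length = 4096  # assumed context window
stride = 512       # how far the window advances each step
seq_len = encodings.input_ids.size(1)

nlls = []
prev_end = 0
for begin in range(0, seq_len, stride):
    end = min(begin + max_length, seq_len)
    trg_len = end - prev_end  # only score tokens not covered by a previous window
    input_ids = encodings.input_ids[:, begin:end].to(model.device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100  # -100 masks overlap tokens out of the loss

    with torch.no_grad():
        # loss is the mean negative log-likelihood of the unmasked tokens
        nlls.append(model(input_ids, labels=target_ids).loss)

    prev_end = end
    if end == seq_len:
        break

print(f"Perplexity: {torch.exp(torch.stack(nlls).mean()).item()}")
```

Rerunning the same loop with model_id set to the self-merged model yields a directly comparable number, though absolute values depend on the corpus and window settings chosen.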