Files changed (1): README.md (+109 −1)

README.md CHANGED
@@ -5,6 +5,101 @@ base_model:
  - SanjiWatsuki/Kunoichi-7B
  - uukuguy/speechless-instruct-mistral-7b-v0.2
  base_model_relation: merge
+ model-index:
+ - name: Inf-Silent-Kunoichi-v0.2-2x7B
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: IFEval (0-Shot)
+       type: HuggingFaceH4/ifeval
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: inst_level_strict_acc and prompt_level_strict_acc
+       value: 36.36
+       name: strict accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Jacoby746/Inf-Silent-Kunoichi-v0.2-2x7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BBH (3-Shot)
+       type: BBH
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc_norm
+       value: 32.26
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Jacoby746/Inf-Silent-Kunoichi-v0.2-2x7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MATH Lvl 5 (4-Shot)
+       type: hendrycks/competition_math
+       args:
+         num_few_shot: 4
+     metrics:
+     - type: exact_match
+       value: 5.66
+       name: exact match
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Jacoby746/Inf-Silent-Kunoichi-v0.2-2x7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GPQA (0-shot)
+       type: Idavidrein/gpqa
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 6.71
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Jacoby746/Inf-Silent-Kunoichi-v0.2-2x7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MuSR (0-shot)
+       type: TAUR-Lab/MuSR
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: acc_norm
+       value: 13.26
+       name: acc_norm
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Jacoby746/Inf-Silent-Kunoichi-v0.2-2x7B
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU-PRO (5-shot)
+       type: TIGER-Lab/MMLU-Pro
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 25.25
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Jacoby746/Inf-Silent-Kunoichi-v0.2-2x7B
+       name: Open LLM Leaderboard
  ---

  Test merge of 7B models for learning purposes. v0.2 is mostly the same, with minor prompting changes and consolidating shards from 1B to 4B to reduce the number of files.
@@ -20,4 +115,17 @@ Alpaca: Below is an instruction that describes a task. Write a response that app
  Instruction:
  {prompt}

- Response:
+ Response:
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Jacoby746__Inf-Silent-Kunoichi-v0.2-2x7B)
+
+ | Metric              | Value |
+ |---------------------|------:|
+ | Avg.                | 19.92 |
+ | IFEval (0-Shot)     | 36.36 |
+ | BBH (3-Shot)        | 32.26 |
+ | MATH Lvl 5 (4-Shot) |  5.66 |
+ | GPQA (0-shot)       |  6.71 |
+ | MuSR (0-shot)       | 13.26 |
+ | MMLU-PRO (5-shot)   | 25.25 |
+
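The Alpaca-style template in the README can be filled with plain string formatting. A minimal sketch, assuming the standard Alpaca preamble (the diff's hunk header truncates it at "app"); the helper name and example instruction are illustrative, not part of the model card:

```python
# Sketch of filling the README's Alpaca-style template.
# The preamble wording is the standard Alpaca boilerplate (an assumption here);
# build_prompt and the sample instruction are illustrative.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "Instruction:\n{prompt}\n\nResponse:\n"
)

def build_prompt(instruction: str) -> str:
    """Substitute the user's instruction into the {prompt} slot."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("List three prime numbers."))
```

The generated text is then expected to follow the trailing "Response:" marker.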
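The Avg. row in the table is the arithmetic mean of the six benchmark scores, rounded to two decimals; a quick check:

```python
# Reproduce the Avg. row of the leaderboard table from the six benchmark scores.
scores = {
    "IFEval (0-Shot)": 36.36,
    "BBH (3-Shot)": 32.26,
    "MATH Lvl 5 (4-Shot)": 5.66,
    "GPQA (0-shot)": 6.71,
    "MuSR (0-shot)": 13.26,
    "MMLU-PRO (5-shot)": 25.25,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # → 19.92, matching the Avg. row
```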