Update README.md
README.md
CHANGED
@@ -116,46 +116,82 @@ model-index:
      name: Open LLM Leaderboard
---

-# woofwolfy/WestIceLemonTeaRP-32k-7b-Q5_K_M-GGUF
This model was converted to GGUF format from [`icefog72/WestIceLemonTeaRP-32k-7b`](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b) for more details on the model.

-## Use with llama.cpp
-Install llama.cpp through brew (works on Mac and Linux)
-
-```bash
-brew install llama.cpp
-```
-Invoke the llama.cpp server or the CLI.
-
-### CLI:
-```bash
-llama-cli --hf-repo woofwolfy/WestIceLemonTeaRP-32k-7b-Q5_K_M-GGUF --hf-file westicelemontearp-32k-7b-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
-```
-
-### Server:
-```bash
-llama-server --hf-repo woofwolfy/WestIceLemonTeaRP-32k-7b-Q5_K_M-GGUF --hf-file westicelemontearp-32k-7b-q5_k_m-imat.gguf -c 2048
-```
-
-Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
-
-Step 1: Clone llama.cpp from GitHub.
-```
-git clone https://github.com/ggerganov/llama.cpp
-```
-
-Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
-```
-cd llama.cpp && LLAMA_CURL=1 make
-```
-
-Step 3: Run inference through the main binary.
-```
-./llama-cli --hf-repo woofwolfy/WestIceLemonTeaRP-32k-7b-Q5_K_M-GGUF --hf-file westicelemontearp-32k-7b-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
-```
-or
-```
-./llama-server --hf-repo woofwolfy/WestIceLemonTeaRP-32k-7b-Q5_K_M-GGUF --hf-file westicelemontearp-32k-7b-q5_k_m-imat.gguf -c 2048
-```

      name: Open LLM Leaderboard
---

+# woofwolfy/WestIceLemonTeaRP-32k-7b-Q5_K_M-GGUF-Imatrix
This model was converted to GGUF format from [`icefog72/WestIceLemonTeaRP-32k-7b`](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b) for more details on the model.
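Since this revision drops the llama.cpp usage section, the commands from the removed lines above still apply to this quant. For example (prompt and context size are only examples; the model supports a 32k context):

```bash
# Install llama.cpp (Homebrew works on macOS and Linux; building from source is also fine).
brew install llama.cpp

# Generate with the CLI:
llama-cli --hf-repo woofwolfy/WestIceLemonTeaRP-32k-7b-Q5_K_M-GGUF \
  --hf-file westicelemontearp-32k-7b-q5_k_m-imat.gguf \
  -p "The meaning to life and the universe is"

# Or start a local server:
llama-server --hf-repo woofwolfy/WestIceLemonTeaRP-32k-7b-Q5_K_M-GGUF \
  --hf-file westicelemontearp-32k-7b-q5_k_m-imat.gguf \
  -c 2048
```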
+# WestIceLemonTeaRP-32k-7b
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/63407b719dbfe0d48b2d763b/RxJ8WbYsu_OAd8sICmddp.png)
+
+This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+## Merge Details
+
+Prompt template: Alpaca (ChatML may also work).
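For reference, the standard Alpaca format looks roughly like the sketch below; the preamble wording varies between frontends, and `{prompt}` is a placeholder (ChatML instead wraps each turn in `<|im_start|>` / `<|im_end|>` tags):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```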
+
+* measurement.json for quantizing to exl2 is included (see the sketch after this list).
+
+- [4.2bpw-exl2](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b-4.2bpw-exl2)
+- [6.5bpw-exl2](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b-6.5bpw-exl2)
+- [8bpw-exl2](https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b-8bpw-exl2)
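A rough sketch of how the included measurement.json might be reused to build an exl2 quant like those listed above. The paths, the 4.2 bpw target, and the convert.py flags are assumptions based on a recent exllamav2 checkout, not commands taken from this repo:

```bash
# Sketch only: assumes a local exllamav2 checkout with its requirements installed,
# and that the model plus the provided measurement.json have been downloaded locally.
git clone https://github.com/turboderp/exllamav2
cd exllamav2

# -i: input model (HF format)   -o: working directory   -cf: final output folder
# -m: reuse the provided measurement.json   -b: target bits per weight (4.2 here)
python convert.py \
  -i /path/to/WestIceLemonTeaRP-32k-7b \
  -o /tmp/exl2-work \
  -cf /path/to/WestIceLemonTeaRP-32k-7b-4.2bpw-exl2 \
  -m /path/to/WestIceLemonTeaRP-32k-7b/measurement.json \
  -b 4.2
```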
+
+Thanks to mradermacher and SilverFan for these GGUF quants:
+* [mradermacher/WestIceLemonTeaRP-32k-GGUF](https://huggingface.co/mradermacher/WestIceLemonTeaRP-32k-GGUF)
+* [SilverFan/WestIceLemonTeaRP-7b-32k-GGUF](https://huggingface.co/SilverFan/WestIceLemonTeaRP-7b-32k-GGUF)
+
+### Merge Method
+
+This model was merged using the SLERP (spherical linear interpolation) merge method.
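For reference, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line. With interpolation factor t (the `t` values in the configuration below) and angle Ω between tensors p and q:

```latex
% Spherical linear interpolation (SLERP) between weight tensors p and q,
% treated as flattened vectors, with interpolation factor t in [0, 1]:
\Omega = \arccos\!\left( \frac{p \cdot q}{\lVert p \rVert \, \lVert q \rVert} \right),
\qquad
\operatorname{slerp}(p, q; t) = \frac{\sin\!\big((1 - t)\,\Omega\big)}{\sin \Omega}\, p
  + \frac{\sin(t\,\Omega)}{\sin \Omega}\, q
```

When the tensors are nearly parallel (sin Ω ≈ 0), implementations typically fall back to plain linear interpolation.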
+
+### Models Merged
+
+The following models were included in the merge:
+* [IceLemonTeaRP-32k-7b](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b)
+* WestWizardIceLemonTeaRP
+* [SeverusWestLake-7B-DPO](https://huggingface.co/s3nh/SeverusWestLake-7B-DPO)
+* WizardIceLemonTeaRP
+* [Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B)
+* [IceLemonTeaRP-32k-7b](https://huggingface.co/icefog72/IceLemonTeaRP-32k-7b)
+
+### Configuration
+
+The following YAML configuration was used to produce this model:
+
+```yaml
+slices:
+  - sources:
+      - model: IceLemonTeaRP-32k-7b
+        layer_range: [0, 32]
+      - model: WestWizardIceLemonTeaRP
+        layer_range: [0, 32]
+merge_method: slerp
+base_model: IceLemonTeaRP-32k-7b
+parameters:
+  t:
+    - filter: self_attn
+      value: [0, 0.5, 0.3, 0.7, 1]
+    - filter: mlp
+      value: [1, 0.5, 0.7, 0.3, 0]
+    - value: 0.5
+dtype: float16
+```
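As a sketch (not taken from this repo) of how a config like the one above is typically applied, mergekit's `mergekit-yaml` entry point takes the YAML file and an output directory; the file name, output path, and flags here are illustrative:

```bash
# Sketch: apply the SLERP config above with mergekit's CLI.
# Assumes the referenced models resolve locally or from the Hugging Face Hub.
pip install mergekit

# config.yaml contains the YAML block above; the output directory name is illustrative.
mergekit-yaml config.yaml ./WestIceLemonTeaRP-32k-7b --cuda --copy-tokenizer
```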
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/63407b719dbfe0d48b2d763b/GX-kV-H8_zAJz5hHL8A7G.png)
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_icefog72__WestIceLemonTeaRP-32k-7b)
+
+| Metric                            | Value |
+|-----------------------------------|------:|
+| Avg.                              | 71.27 |
+| AI2 Reasoning Challenge (25-Shot) | 68.77 |
+| HellaSwag (10-Shot)               | 86.89 |
+| MMLU (5-Shot)                     | 64.28 |
+| TruthfulQA (0-shot)               | 62.47 |
+| Winogrande (5-shot)               | 80.98 |
+| GSM8k (5-shot)                    | 64.22 |