Healed Llama-3 15B Frankenmerge
---

# Llama3-15B-HaloMaidRP-v1.33-8K
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/zL_wK8VYcR_6CDHlwsm88.jpeg)
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method

This model was merged using an iterative merging process.
### Models Merged

The following models were included in the merge:

* [grimjim/Llama-3-Instruct-abliteration-LoRA-8B](https://huggingface.co/grimjim/Llama-3-Instruct-abliteration-LoRA-8B)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [ZeusLabs/L3-Aethora-15B-V2](https://huggingface.co/ZeusLabs/L3-Aethora-15B-V2)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
- value: 0.5
dtype: bfloat16
```
## Prompt Template

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
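The `{system_prompt}`, `{input}`, and `{output}` placeholders can be filled programmatically before generation. A minimal sketch in Python, assuming you build the prompt string by hand rather than via a tokenizer's chat template (the `build_prompt` helper and its parameter names are illustrative, not part of this repository):

```python
# Fill the Llama-3 chat template shown above.
# NOTE: `build_prompt` and its parameter names are illustrative
# assumptions, not helpers shipped with this model.

def build_prompt(system_prompt: str, user_input: str) -> str:
    """Return a Llama-3-style prompt, ready for the model to complete
    the assistant turn."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

The prompt ends after the assistant header so the model generates the `{output}` turn itself, terminating with `<|eot_id|>`.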