---
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- Sao10K/L3-8B-Niitama-v1
- princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2
tags:
- mergekit
- merge
- roleplay
- sillytavern
- llama3
- not-for-all-audiences
license: cc-by-nc-4.0
language:
- en
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/L3-Rhaenys-8B-GGUF
This is a quantized version of [tannedbum/L3-Rhaenys-8B](https://huggingface.co/tannedbum/L3-Rhaenys-8B) created using llama.cpp.
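
A minimal sketch of loading one of the GGUF files with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the quant filename pattern below is an assumption, so check the repo's file list for the variant you actually want:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Download a GGUF file from the Hub and load it.
# NOTE: "*Q4_K_M.gguf" is an assumed quant choice, not the only one the repo ships.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/L3-Rhaenys-8B-GGUF",
    filename="*Q4_K_M.gguf",  # glob pattern matched against the repo's files
    n_ctx=8192,               # Llama 3 supports an 8k context window
)
```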

# Original Model Card

3.0 Farewell model. Next, I'm going to wait for Sao10K to break the bank again with a new 3.1 RP base.

## SillyTavern

## Text Completion presets
```
temp 0.9
top_k 30
top_p 0.75
min_p 0.2
rep_pen 1.1
smooth_factor 0.25
smooth_curve 1
```
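
If you run the GGUF outside SillyTavern, most of these samplers map directly onto llama-cpp-python's completion parameters. `smooth_factor` / `smooth_curve` are SillyTavern-side smoothing samplers with no direct equivalent in that API, so the sketch below omits them:

```python
# Reusing the `llm` object from the loading sketch above.
output = llm(
    "Hello there!",          # placeholder; wrap real prompts in the Llama 3 instruct template
    max_tokens=256,
    temperature=0.9,         # temp
    top_k=30,                # top_k
    top_p=0.75,              # top_p
    min_p=0.2,               # min_p
    repeat_penalty=1.1,      # rep_pen
)
print(output["choices"][0]["text"])
```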
## Advanced Formatting

[Context & Instruct preset by Virt-io](https://huggingface.co/Virt-io/SillyTavern-Presets/tree/main/Prompts/LLAMA-3/v1.9)

Instruct Mode: Enabled

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This model was merged using the slerp merge method.
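
For intuition, slerp (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, with `t` steering the result from the base model (`t = 0`) toward the other model (`t = 1`). A minimal NumPy sketch of the general technique, not mergekit's exact implementation:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    v0_n = v0 / (np.linalg.norm(v0) + eps)   # unit direction of the base weights
    v1_n = v1 / (np.linalg.norm(v1) + eps)   # unit direction of the other weights
    dot = float(np.clip(np.dot(v0_n, v1_n), -1.0, 1.0))
    omega = np.arccos(dot)                   # angle between the two directions
    if np.sin(omega) < eps:                  # nearly parallel: fall back to plain lerp
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)
```

In the configs below, the `t` lists apply different interpolation strengths to the self-attention and MLP weights across layer groups, with 0.4 as the default for everything else.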

### Models Merged

The following models were included in the merge:
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [Sao10K/L3-8B-Niitama-v1](https://huggingface.co/Sao10K/L3-8B-Niitama-v1)
* [princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2)

### Configuration

The merge was done in two sequential slerp passes: the first blends Niitama and Stheno into the intermediate tannedbum/L3-Niitama-Stheno-8B, and the second blends that intermediate with SimPO. The following YAML configurations were used:

```yaml
# Pass 1: produces the intermediate tannedbum/L3-Niitama-Stheno-8B
slices:
  - sources:
      - model: Sao10K/L3-8B-Niitama-v1
        layer_range: [0, 32]
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/L3-8B-Niitama-v1
parameters:
  t:
    - filter: self_attn
      value: [0.2, 0.4, 0.6, 0.2, 0.4]
    - filter: mlp
      value: [0.8, 0.6, 0.4, 0.8, 0.6]
    - value: 0.4
dtype: bfloat16
```

```yaml
# Pass 2: merges the intermediate with SimPO to produce L3-Rhaenys-8B
slices:
  - sources:
      - model: tannedbum/L3-Niitama-Stheno-8B
        layer_range: [0, 32]
      - model: princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2
        layer_range: [0, 32]
merge_method: slerp
base_model: tannedbum/L3-Niitama-Stheno-8B
parameters:
  t:
    - filter: self_attn
      value: [0.2, 0.4, 0.6, 0.2, 0.4]
    - filter: mlp
      value: [0.8, 0.6, 0.4, 0.8, 0.6]
    - value: 0.4
dtype: bfloat16
```
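
mergekit is typically driven from the command line (`mergekit-yaml config.yaml ./output-dir`), but it also exposes a Python entry point. A sketch based on the usage example in mergekit's README; the file path is a placeholder and the exact `MergeOptions` fields should be checked against your installed version:

```python
# pip install mergekit
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load one of the YAML passes above (the filename here is a placeholder).
with open("pass1-niitama-stheno.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./L3-Niitama-Stheno-8B",                  # output directory for the merged weights
    options=MergeOptions(copy_tokenizer=True), # carry the tokenizer into the output
)
```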

Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum