---
license: cc-by-nc-4.0
---
# MonarchLake-7B

## Description
This repo contains GGUF format model files for [macadeliccc/MonarchLake-7B](https://huggingface.co/macadeliccc/MonarchLake-7B).

## Files Provided
| Name                        | Quant   | Bits | File Size | Remark                           |
| --------------------------- | ------- | ---- | --------- | -------------------------------- |
| monarchlake-7b.IQ3_XXS.gguf | IQ3_XXS | 3    | 3.02 GB   | 3.06 bpw quantization            |
| monarchlake-7b.IQ3_S.gguf   | IQ3_S   | 3    | 3.18 GB   | 3.44 bpw quantization            |
| monarchlake-7b.IQ3_M.gguf   | IQ3_M   | 3    | 3.28 GB   | 3.66 bpw quantization mix        |
| monarchlake-7b.Q4_0.gguf    | Q4_0    | 4    | 4.11 GB   | 3.56G, +0.2166 ppl               |
| monarchlake-7b.IQ4_NL.gguf  | IQ4_NL  | 4    | 4.16 GB   | 4.25 bpw non-linear quantization |
| monarchlake-7b.Q4_K_M.gguf  | Q4_K_M  | 4    | 4.37 GB   | 3.80G, +0.0532 ppl               |
| monarchlake-7b.Q5_K_M.gguf  | Q5_K_M  | 5    | 5.13 GB   | 4.45G, +0.0122 ppl               |
| monarchlake-7b.Q6_K.gguf    | Q6_K    | 6    | 5.94 GB   | 5.15G, +0.0008 ppl               |
| monarchlake-7b.Q8_0.gguf    | Q8_0    | 8    | 7.70 GB   | 6.70G, +0.0004 ppl               |

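As a rough rule of thumb, pick the largest quant whose file fits in your available RAM/VRAM (leaving headroom for the KV cache). A small illustrative helper using the sizes from the table above (the function and selection logic are hypothetical, not part of this repo):

```python
# Hypothetical helper: choose the largest (usually highest-quality) quant
# from the table above that fits a given memory budget. Sizes in GB,
# copied from this README's "Files Provided" table.
FILES = {
    "IQ3_XXS": 3.02, "IQ3_S": 3.18, "IQ3_M": 3.28,
    "Q4_0": 4.11, "IQ4_NL": 4.16, "Q4_K_M": 4.37,
    "Q5_K_M": 5.13, "Q6_K": 5.94, "Q8_0": 7.70,
}

def pick_quant(budget_gb: float):
    """Return the largest quant whose file size fits the budget, or None."""
    fitting = {q: size for q, size in FILES.items() if size <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(5.0))  # Q4_K_M
```

The downloaded file can then be loaded with llama.cpp or any of its bindings, e.g. llama-cpp-python's `Llama(model_path=...)`.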
## Parameters
| path                       | type    | architecture       | rope_theta | sliding_window | max_position_embeddings |
| -------------------------- | ------- | ------------------ | ---------- | -------------- | ----------------------- |
| macadeliccc/MonarchLake-7B | mistral | MistralForCausalLM | 10000.0    | 4096           | 32768                   |

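The `sliding_window` value of 4096 means each token attends only to the 4096 most recent positions, even though positions run up to 32768. A minimal sketch of such an attention mask (illustrative only, not Mistral's actual implementation):

```python
import numpy as np

def sliding_window_causal_mask(seq_len: int, window: int) -> np.ndarray:
    """True where query position i may attend to key position j:
    causal (j <= i) and within the sliding window (i - j < window)."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (i - j < window)

# Toy example: 8 positions, window of 4.
mask = sliding_window_causal_mask(8, window=4)
```

With `window=4`, position 5 can attend to positions 2..5 but not to position 1 or to any future position.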
## Benchmarks
![MonarchLake-7B benchmark results](https://i.ibb.co/7Vyyhnm/Monarch-Lake-7-B.png)
![MonarchLake-7B ARC comparison](https://i.ibb.co/Ybvs1r9/Monarch-Lake-7-B-Top-ARC.png)

# Original Model Card

---
base_model:
- macadeliccc/WestLake-7b-v2-laser-truthy-dpo
- mlabonne/AlphaMonarch-7B
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
---
# MonarchLake-7B

![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/YQRHQR58ZbEywnqcysHX2.webp)

This model equips AlphaMonarch-7B with a strong base of emotional intelligence.

### Merge Method

This model was merged using the SLERP merge method.
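SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, which preserves the magnitude of the interpolated weights better than plain averaging. A minimal sketch of the idea (not mergekit's actual implementation):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two flattened weight tensors.

    t=0 returns a, t=1 returns b; intermediate t follows the great-circle
    arc between the two directions."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * a + (np.sin(t * theta) / s) * b
```

For example, interpolating halfway between two orthogonal unit vectors yields a unit vector at 45° to both, whereas linear averaging would shrink its norm.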
### Models Merged

The following models were included in the merge:
* [macadeliccc/WestLake-7b-v2-laser-truthy-dpo](https://huggingface.co/macadeliccc/WestLake-7b-v2-laser-truthy-dpo)
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: mlabonne/AlphaMonarch-7B
        layer_range: [0, 32]
      - model: macadeliccc/WestLake-7b-v2-laser-truthy-dpo
        layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/AlphaMonarch-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
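In mergekit, each `value` list under `t` is a gradient: the anchors are spread evenly across the layer range, and each layer gets an interpolated `t` (here, `t=0` keeps the base model AlphaMonarch-7B and `t=1` the other model). A sketch of how the per-layer `t` could be derived, assuming evenly spaced anchors (illustrative, not mergekit's code):

```python
import numpy as np

def layer_t(anchors, n_layers: int) -> np.ndarray:
    """Linearly interpolate a list of gradient anchors across n_layers layers."""
    x = np.linspace(0.0, 1.0, num=len(anchors))   # anchor positions in [0, 1]
    xs = np.linspace(0.0, 1.0, num=n_layers)      # one sample point per layer
    return np.interp(xs, x, anchors)

# self_attn gradient from the config above, over the 32-layer range:
t_attn = layer_t([0, 0.5, 0.3, 0.7, 1], 32)
```

So under this reading, early self-attention layers stay close to AlphaMonarch-7B while the deepest ones lean toward WestLake, and the MLP gradient runs in the opposite direction.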