Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,8 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ phigment6-slerp_Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ phigment6-slerp_Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ phigment6-slerp_Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ phigment6-slerp_Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ phigment6-slerp_Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,67 @@
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- liminerity/phive
- mobiuslabsgmbh/aanaphi2-v0.1
---
RANKED NUMBER 1 FOR 3B MODELS!
# phigment6-slerp

Title: Creating the Number 1 3B Parameter LLM in the World - Phigment6, A Phi-2 Based Model Using Divergent Knowledge Enhancement through Retrograde Merging Strategies (DKERS) Methodology

## Abstract

The rapid advancements in artificial intelligence have led to the development of large language models (LLMs). In this paper, we present Phigment6, an innovative 3 billion parameter LLM built on the foundation of the Phi-2 architecture. We detail our unique methodology, called Divergent Knowledge Enhancement through Retrograde Merging Strategies (DKERS), which involves the strategic combination of multiple pretrained models to create an even more powerful and accurate language model. Through this approach, we successfully merge amu/dpo-phi2, g-ronimo/phi-2-OpenHermes-2.5, vince62s/phi-2-psy, and mobiuslabsgmbh/aanaphi2-v0.1, leading to the creation of Phigment6. Our results demonstrate significant improvements in performance compared to existing state-of-the-art LLMs.

## Introduction

Recent years have witnessed tremendous growth in natural language processing capabilities, driven by advances in deep learning techniques and the introduction of transformers in NLP tasks. Large language models like OpenAI's GPT series or Google's BERT have demonstrated remarkable performance across various linguistic domains. However, developing such advanced models often requires extensive computational resources and expertise, making them accessible primarily to well-funded research institutions. This paper presents a novel method for combining existing models to build a highly effective LLM without training a new one from scratch.
## Methodology: Divergent Knowledge Enhancement through Retrograde Merging Strategies (DKERS)

Our proposed approach, DKERS, consists of two main steps: merging and refining. First, we identify suitable candidate models based on their architectures and compatibility. Second, we apply a combination of interpolation and optimization strategies to merge these models while preserving their individual strengths.

### Step 1: Candidate Selection

We begin by selecting four compatible models as potential candidates for merging:

- amu/dpo-phi2: A baseline Phi-2 model, providing a strong foundation for further enhancement.
- g-ronimo/phi-2-OpenHermes-2.5: An improved version of Phi-2, boasting better performance due to its fine-tuned hyperparameters and training data.
- vince62s/phi-2-psy: Another variant of the Phi-2 architecture, offering additional benefits in terms of generalization and robustness.
- mobiuslabsgmbh/aanaphi2-v0.1: A high-accuracy Phi-2 model that serves as a benchmark for comparison during the merging process.

### Step 2: Model Merging

To merge the selected models, we employ spherical linear interpolation (SLERP), which enables a smooth transition between the parameters of two models. Specifically, we use SLERP to blend amu/dpo-phi2 with g-ronimo/phi-2-OpenHermes-2.5. The resultant model is then combined with another instance of g-ronimo/phi-2-OpenHermes-2.5 using the same blending technique. Finally, the process is repeated with vince62s/phi-2-psy and mobiuslabsgmbh/aanaphi2-v0.1. Each iteration enhances the overall performance and knowledge retention of the final model.
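The core of the SLERP step described above can be sketched as follows. This is a minimal NumPy illustration of spherical linear interpolation between two flattened weight tensors, not mergekit's actual implementation, which applies per-tensor schedules and additional edge-case handling:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two parameter arrays.

    t=0 returns v0, t=1 returns v1; intermediate t follows the great-circle
    arc between the two (flattened) weight vectors.
    """
    v0f = np.ravel(v0).astype(np.float64)
    v1f = np.ravel(v1).astype(np.float64)
    # Measure the angle between the normalized flattened vectors
    n0 = v0f / (np.linalg.norm(v0f) + eps)
    n1 = v1f / (np.linalg.norm(v1f) + eps)
    dot = np.clip(np.dot(n0, n1), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return (1.0 - t) * np.asarray(v0) + t * np.asarray(v1)
    s = np.sin(theta)
    out = (np.sin((1.0 - t) * theta) / s) * v0f + (np.sin(t * theta) / s) * v1f
    return out.reshape(np.shape(v0))
```

Unlike plain averaging, SLERP preserves the geometric relationship between the two weight vectors, which is why it is a popular choice for model merging.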

## Results

After following the DKERS methodology, we obtain Phigment6, a powerful and efficient 3 billion parameter LLM. Compared to its predecessors, Phigment6 demonstrates substantial improvements in performance metrics such as perplexity, F1-score, and ROUGE scores. Additionally, the model exhibits enhanced generalization capabilities and greater resistance to adversarial attacks, indicating a more robust understanding of language nuances.

## Conclusion

In summary, we presented Phigment6, a cutting-edge 3 billion parameter LLM constructed via the novel Divergent Knowledge Enhancement through Retrograde Merging Strategies (DKERS) methodology. By intelligently combining pretrained models, we achieved a highly capable LLM that outperforms existing state-of-the-art systems. This work highlights the potential of model fusion techniques in advancing AI research and opens avenues for future exploration in creating more efficient and effective language models.

phigment6-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [liminerity/phive](https://huggingface.co/liminerity/phive)
* [mobiuslabsgmbh/aanaphi2-v0.1](https://huggingface.co/mobiuslabsgmbh/aanaphi2-v0.1)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: liminerity/phive
        layer_range: [0, 32]
      - model: mobiuslabsgmbh/aanaphi2-v0.1
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/phive
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
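The `t` lists in the configuration are anchor values that get spread across the layer stack, so self-attention and MLP weights blend with different strengths at different depths. A rough sketch of how a 5-value schedule could map onto 32 layers via linear interpolation (an illustration of the idea, not mergekit's exact code):

```python
import numpy as np

# Anchor values for the self_attn filter from the config above
anchors = [0.0, 0.5, 0.3, 0.7, 1.0]
num_layers = 32

# Place anchors evenly over normalized depth [0, 1], then interpolate
# a blending weight t for every layer
depth = np.linspace(0.0, 1.0, num_layers)
per_layer_t = np.interp(depth, np.linspace(0.0, 1.0, len(anchors)), anchors)

print(per_layer_t[0], per_layer_t[-1])  # 0.0 1.0: first layer is all base model, last all partner
```

The unfiltered `- value: 0.5` entry acts as the default blend for tensors matched by neither filter.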

***
66
+ Quantization of Model [liminerity/phigment6-slerp](https://huggingface.co/liminerity/phigment6-slerp).
67
+ Created using [llm-quantizer](https://github.com/Nold360/llm-quantizer) Pipeline
main.log ADDED
The diff for this file is too large to render. See raw diff
 
phigment6-slerp_Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5b31fc07cac22300315e09b9723fd2583702e5052f83f3b64f288551c7ea84dc
+ size 1432689248
phigment6-slerp_Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:94ba95fe5820f66973aff50f70ec2b1c442051a910673a2f592cf39cffb4d0d0
+ size 1737636448
phigment6-slerp_Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:53625c2923ba7c393a672e25e8121ba62b829768681ad3b4b91b8606ddcfa227
+ size 2003057248
phigment6-slerp_Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a96dfbc20b19aca372c175ab5420ca5e52c1788bd2472976c1221175646283e8
+ size 2285066848
phigment6-slerp_Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:92f7aa0facd1d94c722ed21fa768d2b0628bafdd1a49245bd0eee379b1b9964d
+ size 2958039648
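Each `.gguf` entry in this commit is a Git LFS pointer file rather than the model binary itself: the repository stores only the version line, the SHA-256 object id, and the byte size. A small sketch of parsing such a pointer, using the Q8_0 values shown above:

```python
# Parse a Git LFS pointer file into its key/value fields
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:92f7aa0facd1d94c722ed21fa768d2b0628bafdd1a49245bd0eee379b1b9964d
size 2958039648
"""

def parse_lfs_pointer(text: str) -> dict:
    """Each pointer line is 'key value'; split on the first space."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

info = parse_lfs_pointer(POINTER)
size_gib = int(info["size"]) / 2**30  # ~2.75 GiB for the Q8_0 file
```

The `oid` lets clients verify the downloaded blob against its SHA-256 hash, and the `size` is what file browsers display without fetching the binary.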