---
tags:
- merge
- mergekit
- louisbrulenaudet/Pearl-7B-slerp
- WizardLM/WizardMath-7B-V1.1
- cognitivecomputations/WestLake-7B-v2-laser
- CultriX/NeuralTrix-7B-dpo
- chemistry
- biology
- math
base_model:
- louisbrulenaudet/Pearl-7B-slerp
- WizardLM/WizardMath-7B-V1.1
- cognitivecomputations/WestLake-7B-v2-laser
- CultriX/NeuralTrix-7B-dpo
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: Pearl-7B-0211-ties
  results:
  - task:
      type: text-generation
    metrics:
    - name: Average
      type: Average
      value: 75.11
    - name: ARC
      type: ARC
      value: 71.42
    - name: GSM8K
      type: GSM8K
      value: 70.66
    - name: Winogrande
      type: Winogrande
      value: 84.37
    - name: TruthfulQA
      type: TruthfulQA
      value: 71.46
    - name: HellaSwag
      type: HellaSwag
      value: 88.86
    source:
      name: Open LLM Leaderboard
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
---

<center><img src='https://i.imgur.com/0xFTuAX.png' width='450px'></center>

# Pearl-7B-0211-ties, an extraordinary 7B model

Pearl-7B-0211-ties is a TIES merge of the following models:
* [louisbrulenaudet/Pearl-7B-slerp](https://huggingface.co/louisbrulenaudet/Pearl-7B-slerp)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser)
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)

## Evaluation

The evaluation was performed using the HuggingFace [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

| Model                                       | Average   | ARC       | HellaSwag | MMLU      | TruthfulQA | Winogrande | GSM8K     | #Params (B) |
|---------------------------------------------|-----------|-----------|-----------|-----------|------------|------------|-----------|-------------|
| **louisbrulenaudet/Pearl-34B-ties**         | **75.48** | 70.99     | 84.83     | **76.63** | 70.32      | 82.64      | 67.48     | 34.39       |
| **louisbrulenaudet/Pearl-7B-0211-ties**     | **75.11** | **71.42** | **88.86** | 63.91     | **71.46**  | **84.37**  | 70.66     | 7.24        |
| NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO | 73.35     | 71.08     | 87.29     | 72.17     | 54.83      | 83.11      | 71.65     | 46.7        |
| argilla/notus-8x7b-experiment               | 73.18     | 70.99     | 87.73     | 71.33     | 65.79      | 81.61      | 61.64     | 46.7        |
| **louisbrulenaudet/Pearl-7B-slerp**         | 72.75     | 68.00     | 87.16     | 64.04     | 62.35      | 81.29      | **73.62** | 7.24        |
| mistralai/Mixtral-8x7B-Instruct-v0.1        | 72.70     | 70.14     | 87.55     | 71.40     | 64.98      | 81.06      | 61.11     | 46.7        |
| microsoft/Orca-2-13b                        | 61.98     | 60.92     | 79.85     | 60.30     | 56.42      | 76.56      | 37.83     | 13          |
| microsoft/phi-2                             | 61.33     | 61.09     | 75.11     | 58.11     | 44.47      | 74.35      | 54.81     | 2.78        |
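
The leaderboard runs these benchmarks through EleutherAI's lm-evaluation-harness. For a rough local reproduction, a sketch along the following lines may work; it assumes the harness's `simple_evaluate` Python API (recent `lm-eval` versions) and omits the per-task few-shot settings the leaderboard pins, so scores will only approximate the table above.

```python
# pip install lm-eval
import lm_eval

# Rough local approximation of the leaderboard tasks
# (few-shot counts differ per task on the leaderboard and are omitted here).
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=louisbrulenaudet/Pearl-7B-0211-ties,dtype=float16",
    tasks=["arc_challenge", "hellaswag", "mmlu", "truthfulqa_mc2", "winogrande", "gsm8k"],
)
print(results["results"])
```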

### TIES merging

TIES-Merging is a method for efficiently merging multiple task-specific models into a single consolidated multitask model. It addresses two primary sources of interference that arise when merging model parameters.

The first is redundancy in model parameters. TIES-Merging identifies and eliminates redundant parameters within each task-specific model by focusing on the changes made during fine-tuning: it selectively retains the top-k% most significant changes and discards the rest.

The second is conflict between parameter signs across different models. TIES-Merging resolves these disagreements by creating a unified sign vector that represents the most dominant direction of change across all models.

The TIES-Merging process consists of three steps, illustrated by the sketch that follows this list:

- Trim: reduces redundancy in task-specific models by retaining a fraction of the most significant parameters (the density parameter) and resetting the remaining parameters to zero.
- Elect Sign: resolves sign conflicts across different models by creating a unified sign vector based on the most dominant direction (positive or negative) in terms of cumulative magnitude.
- Disjoint Merge: averages the parameter values that align with the unified sign vector, excluding zero values.
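
To make the three steps concrete, here is a minimal, illustrative PyTorch sketch of TIES on raw tensors. It is not mergekit's implementation: `ties_merge`, its arguments, and the toy tensors are hypothetical, and sign election is done via the sign of the weighted sum, a common simplification of the paper's cumulative-magnitude election.

```python
import torch


def ties_merge(
    base: torch.Tensor,
    tuned: list[torch.Tensor],
    densities: list[float],
    weights: list[float],
) -> torch.Tensor:
    # Work on "task vectors": the delta each fine-tune applied to the base.
    deltas = [t - base for t in tuned]

    # 1. Trim: keep only the top-density fraction of each delta by magnitude,
    #    resetting everything else to zero.
    trimmed = []
    for delta, density in zip(deltas, densities):
        k = max(1, int(density * delta.numel()))
        threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        trimmed.append(torch.where(delta.abs() >= threshold, delta, torch.zeros_like(delta)))

    # 2. Elect Sign: per parameter, the sign with the larger cumulative
    #    weighted magnitude wins (simplified here as the sign of the sum).
    stacked = torch.stack([w * t for w, t in zip(weights, trimmed)])
    elected = torch.sign(stacked.sum(dim=0))

    # 3. Disjoint Merge: average only the contributions whose sign agrees
    #    with the elected one (zeroed parameters never agree, so they drop out).
    agree = torch.sign(stacked) == elected
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged


# Toy example: three fictional "fine-tunes" of a 2x2 base weight.
base = torch.zeros(2, 2)
tuned = [base + torch.randn(2, 2) for _ in range(3)]
print(ties_merge(base, tuned, densities=[0.6, 0.55, 0.55], weights=[0.3, 0.2, 0.25]))
```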

## Configuration

```yaml
models:
  - model: OpenPipe/mistral-ft-optimized-1227
  - model: louisbrulenaudet/Pearl-7B-slerp
    parameters:
      density: 0.6
      weight: 0.3
  - model: WizardLM/WizardMath-7B-V1.1
    parameters:
      density: 0.55
      weight: 0.2
  - model: cognitivecomputations/WestLake-7B-v2-laser
    parameters:
      density: 0.55
      weight: 0.25
  - model: CultriX/NeuralTrix-7B-dpo
    parameters:
      density: 0.6
      weight: 0.25
merge_method: ties
base_model: OpenPipe/mistral-ft-optimized-1227
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
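
The merge itself can be reproduced with mergekit. Below is a minimal sketch using the Python entry point documented in mergekit's README at the time of writing; the config path and output directory are placeholders.

```python
# pip install mergekit
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the TIES configuration shown above (the path is a placeholder).
with open("pearl-ties.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Merge the checkpoints and write the result to disk.
run_merge(
    merge_config,
    out_path="./Pearl-7B-0211-ties",
    options=MergeOptions(
        cuda=False,           # set to True to merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```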

## Usage

```python
# pip install -qU transformers accelerate

import torch
import transformers
from transformers import AutoTokenizer

model = "louisbrulenaudet/Pearl-7B-0211-ties"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

## Citing & Authors

If you use this model in your research, please use the following BibTeX entry.

```BibTeX
@misc{louisbrulenaudet2023,
  author = {Louis Brulé Naudet},
  title = {Pearl-7B-0211-ties, an extraordinary 7B model},
  year = {2023},
  howpublished = {\url{https://huggingface.co/louisbrulenaudet/Pearl-7B-0211-ties}},
}
```

## Feedback

If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com).