nwhamed committed
Commit b5e8a09
1 parent: 19c30c7

Upload folder using huggingface_hub

Files changed (1): README.md (+22, -14)
README.md CHANGED
@@ -4,26 +4,34 @@ tags:
  - merge
  - mergekit
  - lazymergekit
- - roberta-base
+ - AIDC-ai-business/Marcoroni-7B-v3
+ - EmbeddedLLM/Mistral-7B-Merge-14-v0.1
  ---
 
- # bert-roberta-merged
+ # Marcoro14-7B-slerp
 
- bert-roberta-merged is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
- * [roberta-base](https://huggingface.co/roberta-base)
+ Marcoro14-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
+ * [AIDC-ai-business/Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3)
+ * [EmbeddedLLM/Mistral-7B-Merge-14-v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1)
 
  ## 🧩 Configuration
 
  ```yaml
- models:
-   - model: bert-base-uncased
-   - model: roberta-base
-     parameters:
-       density: 0.5
-       weight: 0.5
- merge_method: ties
- base_model: bert-base-uncased
+ slices:
+   - sources:
+       - model: AIDC-ai-business/Marcoroni-7B-v3
+         layer_range: [0, 32]
+       - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.1
+         layer_range: [0, 32]
+ merge_method: slerp
+ base_model: AIDC-ai-business/Marcoroni-7B-v3
  parameters:
-   normalize: true
- dtype: float16
+   t:
+     - filter: self_attn
+       value: [0, 0.5, 0.3, 0.7, 1]
+     - filter: mlp
+       value: [1, 0.5, 0.7, 0.3, 0]
+     - value: 0.5
+ dtype: bfloat16
+ 
  ```
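The new config's `merge_method: slerp` blends each pair of parent tensors by spherical linear interpolation, with the `t` schedule weighting self-attention and MLP layers differently across network depth (`t = 0` keeps the base model, `t = 1` keeps the other parent). As an illustrative sketch of the interpolation itself — not mergekit's actual implementation — slerp between two flattened weight vectors looks like:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    # Illustrative spherical linear interpolation between two weight
    # vectors; a sketch of the idea only, not mergekit's code.
    norm0 = math.sqrt(sum(x * x for x in v0)) + eps
    norm1 = math.sqrt(sum(x * x for x in v1)) + eps
    # Angle between the two weight directions.
    dot = sum((a / norm0) * (b / norm1) for a, b in zip(v0, v1))
    theta = math.acos(max(-1.0, min(1.0, dot)))
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

In practice a config like the one above is typically applied with mergekit's `mergekit-yaml` command rather than by hand; the sketch only shows why slerp, unlike a plain weighted average, preserves the magnitude of the interpolated weights.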