Commit b453c4a by nilq (1 parent: 1ba7388)

Upload folder using huggingface_hub

README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 base_model:
-- nilq/lua-mistral-1L-tiny
 - nilq/mistral-1L-tiny
+- nilq/lua-mistral-1L-tiny
 library_name: transformers
 tags:
 - mergekit
@@ -20,8 +20,8 @@ This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge
 ### Models Merged
 
 The following models were included in the merge:
-* [nilq/lua-mistral-1L-tiny](https://huggingface.co/nilq/lua-mistral-1L-tiny)
 * [nilq/mistral-1L-tiny](https://huggingface.co/nilq/mistral-1L-tiny)
+* [nilq/lua-mistral-1L-tiny](https://huggingface.co/nilq/lua-mistral-1L-tiny)
 
 ### Configuration
 
@@ -31,10 +31,10 @@ The following YAML configuration was used to produce this model:
 models:
 - model: nilq/mistral-1L-tiny
   parameters:
-    weight: 1.0
+    weight: 0.5
 - model: nilq/lua-mistral-1L-tiny
   parameters:
-    weight: 0.3
+    weight: 0.5
 merge_method: linear
 dtype: float32
 ```
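The change gives both parents equal weight (0.5 each) in place of the earlier 1.0/0.3 split. For intuition, the linear method is just a weighted average of matching tensors. Below is a minimal sketch of that arithmetic in plain PyTorch, not mergekit's actual implementation; the `linear_merge` helper is an illustrative assumption (mergekit's linear method exposes a `normalize` option, and with equal 0.5 weights the normalization is a no-op).

```python
# Sketch of a linear merge: a weighted average of parameters.
# Assumes both checkpoints share the same architecture and tensor
# names, as both are 1-layer tiny Mistral variants.
import torch
from transformers import AutoModelForCausalLM

def linear_merge(state_dicts, weights, normalize=True):
    """Weighted average of matching tensors across state dicts (illustrative helper)."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
    return merged

base = AutoModelForCausalLM.from_pretrained("nilq/mistral-1L-tiny")
lua = AutoModelForCausalLM.from_pretrained("nilq/lua-mistral-1L-tiny")

merged = linear_merge([base.state_dict(), lua.state_dict()], [0.5, 0.5])
base.load_state_dict(merged)  # float32 tensors, matching the merge config's dtype
```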
config.json CHANGED
@@ -1,9 +1,9 @@
 {
-  "_name_or_path": "nilq/lua-mistral-1L-tiny",
+  "_name_or_path": "nilq/mistral-1L-tiny",
   "architectures": [
     "MistralForCausalLM"
   ],
-  "attention_dropout": 0.1,
+  "attention_dropout": 0,
   "bos_token_id": 1,
   "eos_token_id": 2,
   "hidden_act": "silu",
mergekit_config.yml CHANGED
@@ -1,9 +1,9 @@
 models:
 - model: nilq/mistral-1L-tiny
   parameters:
-    weight: 1.0
+    weight: 0.5
 - model: nilq/lua-mistral-1L-tiny
   parameters:
-    weight: 0.3
+    weight: 0.5
 merge_method: linear
 dtype: float32
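This standalone mergekit_config.yml is the same configuration embedded in the README, so the merge can be reproduced directly from it. A sketch following the Python usage shown in mergekit's own README; the `./merged` output path and `copy_tokenizer=True` are illustrative choices, and the `mergekit-yaml mergekit_config.yml ./merged` CLI is the equivalent one-liner:

```python
# Sketch: re-run this merge via mergekit's documented Python entry points.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("mergekit_config.yml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(merge_config, "./merged", options=MergeOptions(copy_tokenizer=True))
```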
model-00001-of-00001.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:24461af4722928bd3b22667648efffd09f1f344257a4fde7b37f84015a7b35d2
+oid sha256:d0abb159360fe6d75c1d481b3c04ba5363841e0a96d6d274f55d92ef88a511e7
 size 140516640
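Only the Git LFS pointer changes here: re-merged weights hash to a new SHA-256 while the file size stays identical, as expected when only tensor values change. The pointer's `oid` is the SHA-256 of the full file contents, so a downloaded copy can be verified with a short sketch (the local filename is assumed to match the repo path):

```python
# Sketch: verify a downloaded safetensors file against its LFS pointer.
import hashlib

EXPECTED = "d0abb159360fe6d75c1d481b3c04ba5363841e0a96d6d274f55d92ef88a511e7"

h = hashlib.sha256()
with open("model-00001-of-00001.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == EXPECTED, "checksum mismatch"
```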