TeeZee committed on
Commit
eef199a
1 Parent(s): ca708f8

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -14,7 +14,7 @@ tags:
 
 - To create this model a two-step procedure was used. First, a new 20B model was created using [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b)
 and [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3), details of the merge in [mergekit-config_step1.yml](https://huggingface.co/TeeZee/DarkForest-20B-v1.0/resolve/main/mergekit-config_step1.yml)
-- then [jebcarter/psyonic-cetacean-20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B) was used to produce the final model, merge config in [mergekit-config_step2.yml](https://huggingface.co/TeeZee/DarkForest-20B-v1.1/resolve/main/mergekit-config_step2.yml)
+- then [jebcarter/psyonic-cetacean-20B](https://huggingface.co/jebcarter/psyonic-cetacean-20B) was used to produce the final model, merge config in [mergekit-config-step2.yml](https://huggingface.co/TeeZee/DarkForest-20B-v1.1/resolve/main/mergekit-config-step2.yml)
 - instead of the linear merge method used in v1.0, this time the DARE TIES method was used for step 2
 - The resulting model has approximately 20 billion parameters.
 
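
For context, step 2 applies mergekit's `dare_ties` merge method. Below is a minimal sketch of what such a config can look like; the choice of base model, weight, and density here are illustrative placeholders only, and the actual settings used for this model are in the linked mergekit-config-step2.yml.

```yaml
# Hypothetical sketch of a DARE TIES merge in mergekit -- NOT the
# actual step-2 config; see mergekit-config-step2.yml in the repo.
models:
  - model: jebcarter/psyonic-cetacean-20B
    parameters:
      weight: 0.5    # placeholder mixing weight
      density: 0.5   # placeholder fraction of delta weights kept by DARE
merge_method: dare_ties
# Assumption: the step-1 20B merge serves as the base model that
# task vectors are computed against.
base_model: TeeZee/DarkForest-20B-v1.0
dtype: float16
```

Unlike the linear merge used in v1.0, DARE TIES rescales and sparsifies each model's delta from the base model before combining, which tends to reduce parameter interference between the merged models.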