Skylaude committed on
Commit fb7804f
1 Parent(s): aea2a68

Update README.md

Files changed (1)
  1. README.md +10 -2
README.md CHANGED
@@ -10,9 +10,17 @@ tags:
 
 # WizardLM-2-4x7B-MoE
 
-This is an experimental MoE model made with [Mergekit](https://github.com/arcee-ai/mergekit). It was made by combining four [WizardLM-2-7B](https://huggingface.co/microsoft/WizardLM-2-7B) models using the random gate mode. Please be sure to set experts per token to 4 for the best results! Context length should be the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates Vicuna-v1.1 is recommended.
+This is an experimental MoE model made with [Mergekit](https://github.com/arcee-ai/mergekit). It was made by combining four [WizardLM-2-7B](https://huggingface.co/microsoft/WizardLM-2-7B) models using the random gate mode. Please be sure to set experts per token to 4 for the best results! Context length should be the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates, Vicuna-v1.1 is recommended.
 
-# Mergekit config:
+# Quantized versions
+
+Hopefully coming soon.
+
+# Evaluation
+
+I don't expect this model to be that great since it's something that I made as an experiment. However, I will submit it to the Open LLM Leaderboard to see how it matches up against some other models (particularly WizardLM-2-7B and WizardLM-2-70B).
+
+# Mergekit config
 ```
 base_model: models/WizardLM-2-7B
 gate_mode: random
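The config in the diff is cut off at the hunk boundary. For context, a complete mergekit-moe config for a four-expert, random-gate merge like this one might look as follows. This is a sketch, not the author's actual file: the repeated `source_model` entries and the `dtype` choice are assumptions.

```yaml
# Hypothetical mergekit-moe config sketch for a 4x7B random-gate merge.
# With gate_mode: random the router weights are initialized randomly,
# so no positive_prompts are needed to steer the experts.
base_model: models/WizardLM-2-7B
gate_mode: random
dtype: bfloat16            # assumed; match the dtype of the source weights
experts:
  - source_model: models/WizardLM-2-7B
  - source_model: models/WizardLM-2-7B
  - source_model: models/WizardLM-2-7B
  - source_model: models/WizardLM-2-7B
```

A config like this would be run with the `mergekit-moe` command from the Mergekit repository, pointing it at the YAML file and an output directory.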