---
license: apache-2.0
tags:
- MoE
- merge
- mergekit
- Mistral
- Microsoft/WizardLM-2-7B
---
|
|
|
# WizardLM-2-4x7B-MoE-exl2-3_5bpw
|
|
|
This is a quantized version of [WizardLM-2-4x7B-MoE](https://huggingface.co/Skylaude/WizardLM-2-4x7B-MoE), an experimental MoE model made with [Mergekit](https://github.com/arcee-ai/mergekit). Quantization was done using version 0.0.18 of [ExLlamaV2](https://github.com/turboderp/exllamav2).
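For reference, ExLlamaV2 quantization of this kind is typically performed with the repository's `convert.py` script. The sketch below is an assumption about how this particular quant may have been produced (the paths are placeholders, not the actual directories used); `-b 3.5` matches the 3.5 bits per weight in the model name.

```shell
# Hypothetical invocation of ExLlamaV2's convert.py (paths are placeholders):
#   -i   directory containing the unquantized source model
#   -o   scratch/working directory for intermediate measurement files
#   -cf  output directory for the compiled quantized model
#   -b   target average bits per weight
python convert.py \
    -i /path/to/WizardLM-2-4x7B-MoE \
    -o /path/to/working_dir \
    -cf /path/to/WizardLM-2-4x7B-MoE-exl2-3_5bpw \
    -b 3.5
```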
|
|
|
Please be sure to set the number of experts per token to 4 for the best results! Context length is the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates, Vicuna-v1.1 is recommended.
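For clarity, the Vicuna-v1.1 template prepends a fixed system preamble and separates turns with `USER:` / `ASSISTANT:` markers. A minimal single-turn sketch (`build_prompt` is a hypothetical helper, not part of any library):

```python
# Standard Vicuna-v1.1 system preamble.
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(user_message: str) -> str:
    """Format a single-turn prompt in the Vicuna-v1.1 style.

    Generation should begin immediately after the trailing "ASSISTANT:".
    """
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("What is a Mixture-of-Experts model?"))
```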
|
|
|
For more information see the [original repository](https://huggingface.co/Skylaude/WizardLM-2-4x7B-MoE).
|
|
|