---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
---
|
# Model Card for Mixtral-8x7B
|
The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts model. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
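
As an informal illustration of what "Sparse Mixture of Experts" means here (each token is routed through 2 of 8 expert feed-forward blocks), a toy top-2 routing layer might look like the sketch below. The function name, shapes, and loop structure are illustrative assumptions, not Mixtral's actual implementation:

```python
import torch

def top2_moe_layer(x, gate, experts):
    """Toy top-2 expert routing (illustrative sketch, not Mixtral's code).

    x:       (tokens, hidden) activations
    gate:    nn.Linear(hidden, num_experts) router
    experts: list of feed-forward modules, each mapping (hidden) -> (hidden)
    """
    logits = gate(x)                                # (tokens, num_experts)
    weights, idx = torch.topk(logits, k=2, dim=-1)  # pick 2 experts per token
    weights = torch.softmax(weights, dim=-1)        # renormalize over the chosen 2
    out = torch.zeros_like(x)
    for slot in range(2):
        for e, expert in enumerate(experts):
            mask = idx[:, slot] == e                # tokens whose slot-th pick is expert e
            if mask.any():
                out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
    return out
```

Because only 2 experts run per token, the layer uses a fraction of the total parameters at inference time while keeping the capacity of all 8 experts.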
|
|
|
For full details of this model please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).
|
|
|
## Warning
|
This repo contains weights that are compatible with [vLLM](https://github.com/vllm-project/vllm) serving of the model as well as the Hugging Face [transformers](https://github.com/huggingface/transformers) library. It is based on the original Mixtral [torrent release](magnet:?xt=urn:btih:5546272da9065eddeb6fcd7ffddeef5b75be79a7&dn=mixtral-8x7b-32kseqlen&tr=udp%3A%2F%2Fopentracker.i2p.rocks%3A6969%2Fannounce&tr=http%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce), but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF.
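
For reference, a minimal vLLM offline-inference sketch could look like the following. The model path is a placeholder assumption (point it at this repo's weights), and it presumes a vLLM build with Mixtral support and enough GPU memory for the model:

```python
# Minimal vLLM offline-inference sketch. The model path below is a
# placeholder assumption; substitute the location of this repo's weights.
# Requires a vLLM version with Mixtral support and sufficient GPU memory.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mixtral-8x7B-v0.1", tensor_parallel_size=2)
params = SamplingParams(temperature=0.7, max_tokens=128)

# This is the base (non-instruct) model, so use a plain completion-style prompt.
outputs = llm.generate(["Mixtral is a sparse Mixture of Experts model that"], params)
print(outputs[0].outputs[0].text)
```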