---
license: apache-2.0
language:
  - fr
  - it
  - de
  - es
  - en
---

# Model Card for Mixtral-8x7B

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.

For full details of this model, please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).

## Warning

This repo contains weights that are compatible with vLLM serving of the model as well as with the Hugging Face transformers library. It is based on the original Mixtral torrent release, but the file format and parameter names are different. Please note that the model cannot (yet) be instantiated with HF transformers.
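
Since the card says the weights are vLLM-compatible, here is a minimal offline-inference sketch, not an official recipe. The repo id `mistralai/Mixtral-8x7B-v0.1` and the two-GPU tensor-parallel setting are assumptions; adjust both to your actual checkout and hardware.

```python
# Minimal vLLM offline-inference sketch for Mixtral-8x7B.
# Assumptions: repo id "mistralai/Mixtral-8x7B-v0.1" (or a local path to the
# converted weights) and 2 GPUs for tensor parallelism.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mixtral-8x7B-v0.1",  # assumed repo id / local path
    tensor_parallel_size=2,               # shard the experts across 2 GPUs (assumption)
)

params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = llm.generate(["Mixture-of-experts models work by"], params)
print(outputs[0].outputs[0].text)
```

For serving rather than offline inference, vLLM's OpenAI-compatible server (`python -m vllm.entrypoints.openai.api_server --model <repo-or-path> --tensor-parallel-size 2`) accepts the same model argument.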