# Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf

This model was converted to GGUF format from [`mistralai/Mixtral-8x7B-Instruct-v0.1`](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) using llama.cpp.
Refer to the [original model card](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) for more details on the model.

## Available Versions

- `Mixtral-8x7B-Instruct-v0.1.q4_0.gguf` (q4_0)
- `Mixtral-8x7B-Instruct-v0.1.q4_1.gguf` (q4_1)
- `Mixtral-8x7B-Instruct-v0.1.q5_0.gguf` (q5_0)
- `Mixtral-8x7B-Instruct-v0.1.q5_1.gguf` (q5_1)
- `Mixtral-8x7B-Instruct-v0.1.q8_0.gguf` (q8_0)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_s.gguf` (q3_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_m.gguf` (q3_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q3_k_l.gguf` (q3_K_L)
- `Mixtral-8x7B-Instruct-v0.1.q4_k_s.gguf` (q4_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q4_k_m.gguf` (q4_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q5_k_s.gguf` (q5_K_S)
- `Mixtral-8x7B-Instruct-v0.1.q5_k_m.gguf` (q5_K_M)
- `Mixtral-8x7B-Instruct-v0.1.q6_k.gguf` (q6_K)
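As a rough guide for choosing among these files, a quantized GGUF's size in bytes is approximately parameter count × bits per weight ÷ 8. The bits-per-weight figure used below is an assumed nominal value (K-quants mix precisions per tensor, so real files run somewhat larger); a minimal sketch:

```shell
# Rough GGUF size estimate: params * bits-per-weight / 8 bytes.
# 4.5 bits/weight for q4_0 is an assumed nominal figure (each block of
# 4-bit weights also stores a scale), not an exact specification.
params=46700000000            # 46.7B parameters (from the model card)
bpw_tenths=45                 # q4_0 at roughly 4.5 bits per weight
bytes=$((params * bpw_tenths / 10 / 8))
gib=$((bytes / 1024 / 1024 / 1024))
echo "q4_0 estimate: ~${gib} GiB"
```

By the same arithmetic, q8_0 lands near twice that, and the q3 variants a bit below, which is why the smaller quantizations are the usual choice for single-GPU or CPU setups.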

## Use with llama.cpp

Replace `FILENAME` with one of the filenames above.

### CLI

```bash
llama-cli --hf-repo Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf --hf-file FILENAME -p "Your prompt here"
```

### Server

```bash
llama-server --hf-repo Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf --hf-file FILENAME -c 2048
```
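If you prefer a local copy instead of having llama.cpp resolve the repo on each start, one workflow (assuming `huggingface-cli` is installed; the q4_K_M file below is just a hypothetical pick from the list above) is to download the file once and point the tools at it:

```shell
# Assumed workflow: download once with huggingface-cli, then run locally.
# The chosen quantization is an example, not a recommendation.
REPO="Supa-AI/Mixtral-8x7B-Instruct-v0.1-gguf"
FILE="Mixtral-8x7B-Instruct-v0.1.q4_k_m.gguf"
huggingface-cli download "$REPO" "$FILE" --local-dir .
llama-server -m "./$FILE" -c 2048
```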
    
## Model Details

- **Original Model:** [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
- **Format:** GGUF
- **Model size:** 46.7B params
- **Architecture:** llama