# miquliz-120b - Q4 GGUF
- Model creator: Wolfram Ravenwolf
- Original model: miquliz-120b
## Description
This repo contains Q4_K_S and Q4_K_M GGUF format model files for Wolfram Ravenwolf's miquliz-120b.
## Prompt template: Mistral

```
[INST] {prompt} [/INST]
```
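As a usage illustration, here is a minimal sketch of applying this template with llama-cpp-python (one of several GGUF-capable runtimes, not necessarily the one you use); the local model path is hypothetical and assumes the split files have already been joined:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical local path; point this at the joined GGUF file.
llm = Llama(model_path="miquliz-120b.Q4_K_M.gguf", n_ctx=4096)

# Mistral-style prompt template from this card.
prompt = "[INST] Write a haiku about quantization. [/INST]"
output = llm(prompt, max_tokens=128, stop=["</s>"])
print(output["choices"][0]["text"])
```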
## Provided files
| Name | Quant method | Bits | Size |
| ---- | ------------ | ---- | ---- |
| miquliz-120b.Q4_K_S.gguf | Q4_K_S | 4 | 66.81 GB |
| miquliz-120b.Q4_K_M.gguf | Q4_K_M | 4 | 70.64 GB |
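To fetch one quant without cloning the whole repo, a minimal sketch using huggingface_hub's `snapshot_download`; the glob pattern is an assumption, so adjust it to the quant you want:

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

# Download only the Q4_K_S files (including any split parts) from this repo.
snapshot_download(
    repo_id="NanoByte/miquliz-120b-Q4-GGUF",
    allow_patterns=["*Q4_K_S*"],
    local_dir=".",
)
```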
Note: Hugging Face does not support uploading files larger than 50 GB, so the files are uploaded as split files.
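Before loading, the split parts need to be joined back into a single file. A minimal sketch in Python, assuming a hypothetical `.part*` suffix (check the repo's file listing for the actual part names):

```python
import glob
import shutil

# Hypothetical part naming; adjust the glob to match the actual suffixes
# shown in the repo's file listing.
parts = sorted(glob.glob("miquliz-120b.Q4_K_S.gguf.part*"))

# Concatenate the parts in order into a single GGUF file.
with open("miquliz-120b.Q4_K_S.gguf", "wb") as joined:
    for part in parts:
        with open(part, "rb") as chunk:
            shutil.copyfileobj(chunk, joined)
```

If the files were instead split with llama.cpp's `gguf-split` tool, recent llama.cpp builds can load the first shard directly, so joining may not be needed.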
## Model tree for NanoByte/miquliz-120b-Q4-GGUF

Base model: [wolfram/miquliz-120b](https://huggingface.co/wolfram/miquliz-120b)