miiqu-105b-v1.0

Developed by Infinimol AI GmbH

8th place on EQ-Bench, beating Qwen1.5-72B-Chat, miqudev/miqu-1-70b, mistral-medium, and claude-3-sonnet-20240229, all without fine-tuning or additional training.

Thanks to turboderp, silphendio, sqrkl, and ngxson for their support!

โ— Q4_K_M files are split and require joining

Note: Hugging Face does not support uploading files larger than 50GB, so the Q4_K_M quant is supplied as split files.


Process

Please download:

  • miiqu.gguf-split-aa
  • miiqu.gguf-split-ab
  • miiqu.gguf-split-ac
  • miiqu.gguf-split-ad
  • miiqu.gguf-split-ae
  • miiqu.gguf-split-af

To join the files, do the following:

Linux and macOS:

cat miiqu.gguf-split-a* > miiqu_Q4_K_M.gguf && rm miiqu.gguf-split-a*

Windows command line:

COPY /B miiqu.gguf-split-aa + miiqu.gguf-split-ab + miiqu.gguf-split-ac + miiqu.gguf-split-ad + miiqu.gguf-split-ae + miiqu.gguf-split-af miiqu_Q4_K_M.gguf
DEL miiqu.gguf-split-aa miiqu.gguf-split-ab miiqu.gguf-split-ac miiqu.gguf-split-ad miiqu.gguf-split-ae miiqu.gguf-split-af
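
Alternatively, a minimal cross-platform Python sketch (assuming the six split files sit in the current directory) performs the same join while streaming the data, so memory usage stays small:

import glob
import shutil

# Concatenate the split parts in lexical order (aa, ab, ..., af)
with open("miiqu_Q4_K_M.gguf", "wb") as out:
    for part in sorted(glob.glob("miiqu.gguf-split-a*")):
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream-copy in chunks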

Model Details

  • Max Context: 32768 tokens
  • Layers: 105
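
As a rough sketch of putting these numbers to use, assuming the joined Q4_K_M file and the llama-cpp-python bindings, the model can be loaded at its full context like so:

from llama_cpp import Llama

# n_ctx matches the model's 32768-token maximum context;
# n_gpu_layers=-1 offloads all 105 layers to the GPU if one is available
llm = Llama(model_path="miiqu_Q4_K_M.gguf", n_ctx=32768, n_gpu_layers=-1)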

Prompt template: ChatML or Mistral

ChatML:

<|im_start|><|user|>\n<|user-message|><|im_end|>\n<|im_start|><|bot|>\n<|bot-message|><|im_end|>\n

Mistral:

[INST] <|user|><|user-message|>[/INST]<|bot|><|bot-message|></s>
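
For illustration, here is a small, hypothetical Python helper that fills in these placeholders for a single user turn; it assumes the <|user|>/<|bot|> markers expand to the standard role labels ("user"/"assistant" in ChatML, nothing in the Mistral format):

def chatml_prompt(user_message: str) -> str:
    # One ChatML turn, leaving the assistant side open for generation
    return (f"<|im_start|>user\n{user_message}<|im_end|>\n"
            f"<|im_start|>assistant\n")

def mistral_prompt(user_message: str) -> str:
    # Mistral-style instruction wrapping; the model's reply ends with </s>
    return f"[INST] {user_message}[/INST]"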