
MiquMaid-v1-70B 3.5bpw

Description

EXL2 (ExLlamaV2) quant of NeverSleep/MiquMaid-v1-70B

Other quants:

EXL2: 6bpw, 5bpw, 4bpw, 3.5bpw, 3bpw, 2.4bpw

2.4bpw is probably the largest quant that fits on a 24 GB card

GGUF: 2-bit imatrix quant

Custom format:

```
### Instruction:
{system prompt}

### Input:
{input}

### Response:
{reply}
```
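The template above can be assembled with simple string formatting. A minimal sketch in Python; the `build_prompt` helper name is illustrative, not part of the model card:

```python
def build_prompt(system_prompt: str, user_input: str) -> str:
    """Assemble the Alpaca-style prompt described in the model card.

    The model is expected to generate its reply after the
    "### Response:" header, so the prompt ends there.
    """
    return (
        f"### Instruction:\n{system_prompt}\n\n"
        f"### Input:\n{user_input}\n\n"
        f"### Response:\n"
    )

prompt = build_prompt("You are a helpful assistant.", "Say hello.")
```

Pass the resulting string to whatever backend loads the quant (e.g. an ExLlamaV2-based loader), and stop generation on the next `###` header if your frontend supports stop strings.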

Contact

Kooten on Discord

ko-fi.com/kooten
