---
base_model: MaziyarPanahi/Mistral-7B-Instruct-Aya-101
datasets:
  - CohereForAI/aya_dataset
language:
  - afr
  - amh
  - ara
  - aze
  - bel
  - ben
  - bul
  - cat
  - ceb
  - ces
  - cym
  - dan
  - deu
  - ell
  - eng
  - epo
  - est
  - eus
  - fin
  - fil
  - fra
  - fry
  - gla
  - gle
  - glg
  - guj
  - hat
  - hau
  - heb
  - hin
  - hun
  - hye
  - ibo
  - ind
  - isl
  - ita
  - jav
  - jpn
  - kan
  - kat
  - kaz
  - khm
  - kir
  - kor
  - kur
  - lao
  - lav
  - lat
  - lit
  - ltz
  - mal
  - mar
  - mkd
  - mlg
  - mlt
  - mon
  - mri
  - msa
  - mya
  - nep
  - nld
  - nor
  - nso
  - nya
  - ory
  - pan
  - pes
  - pol
  - por
  - pus
  - ron
  - rus
  - sin
  - slk
  - slv
  - smo
  - sna
  - snd
  - som
  - sot
  - spa
  - sqi
  - srp
  - sun
  - swa
  - swe
  - tam
  - tel
  - tgk
  - tha
  - tur
  - twi
  - ukr
  - urd
  - uzb
  - vie
  - xho
  - yid
  - yor
  - zho
  - zul
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
  - axolotl
  - mistral
  - 7b
  - generated_from_trainer
---

## About

static quants of https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-Aya-101

Weighted/imatrix quants do not appear to be available from me at this time. If they do not show up within a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
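As a minimal sketch, the snippet below loads one of the quants with the llama-cpp-python package and runs a chat completion. The file name is an assumption based on the quant table below, and the multi-part concatenation step (with hypothetical part names) is only needed if a quant ships in several pieces.

```python
# Minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and the quant file is already downloaded.
from llama_cpp import Llama

# If a quant ships as multiple parts, join them into one file first.
# The part names below are hypothetical, shown only for illustration:
# with open("model.gguf", "wb") as out:
#     for part in ["model.gguf.part1of2", "model.gguf.part2of2"]:
#         with open(part, "rb") as f:
#             out.write(f.read())

llm = Llama(
    model_path="Mistral-7B-Instruct-Aya-101.Q4_K_M.gguf",  # assumed file name
    n_ctx=4096,  # context window; adjust to available RAM
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate 'good morning' into Swahili."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```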

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants; see the download sketch after the table.)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | Q2_K | 2.8 | |
| GGUF | Q3_K_S | 3.3 | |
| GGUF | Q3_K_M | 3.6 | lower quality |
| GGUF | Q3_K_L | 3.9 | |
| GGUF | IQ4_XS | 4.0 | |
| GGUF | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| GGUF | Q4_K_S | 4.2 | fast, recommended |
| GGUF | Q4_K_M | 4.5 | fast, recommended |
| GGUF | Q5_K_S | 5.1 | |
| GGUF | Q5_K_M | 5.2 | |
| GGUF | Q6_K | 6.0 | very good quality |
| GGUF | Q8_0 | 7.8 | fast, best quality |
| GGUF | f16 | 14.6 | 16 bpw, overkill |
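As a hedged sketch of fetching one of these files programmatically: the `repo_id` and `filename` below are assumptions inferred from this card's naming pattern, so verify them against the repository's actual file list before relying on them.

```python
# Sketch using huggingface_hub (pip install huggingface_hub).
# Both repo_id and filename are assumptions based on this card's
# naming pattern; check the file list on the Hub before use.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Mistral-7B-Instruct-Aya-101-GGUF",  # assumed repo id
    filename="Mistral-7B-Instruct-Aya-101.Q4_K_M.gguf",       # assumed file name
)
print(f"Downloaded to: {path}")
```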

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

*(figure: ikawrakow's comparison of lower-quality quant types; lower is better)*

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for answers to common questions and for requesting quants of other models.

## Thanks

I thank my company, nethype GmbH, for letting me use its servers and for providing upgrades to my workstation, which enable me to do this work in my free time.