---
base_model: LLaMAX/LLaMAX2-7B-Alpaca
language:
- af
- am
- ar
- hy
- as
- ast
- az
- be
- bn
- bs
- bg
- my
- ca
- ceb
- zho
- hr
- cs
- da
- nl
- en
- et
- tl
- fi
- fr
- ff
- gl
- lg
- ka
- de
- el
- gu
- ha
- he
- hi
- hu
- is
- ig
- id
- ga
- it
- ja
- jv
- kea
- kam
- kn
- kk
- km
- ko
- ky
- lo
- lv
- ln
- lt
- luo
- lb
- mk
- ms
- ml
- mt
- mi
- mr
- mn
- ne
- ns
- no
- ny
- oc
- or
- om
- ps
- fa
- pl
- pt
- pa
- ro
- ru
- sr
- sn
- sd
- sk
- sl
- so
- ku
- es
- sw
- sv
- tg
- ta
- te
- th
- tr
- uk
- umb
- ur
- uz
- vi
- cy
- wo
- xh
- yo
- zu
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- Multilingual
---

## About

Static quants of https://huggingface.co/LLaMAX/LLaMAX2-7B-Alpaca

Weighted/imatrix quants are not available from me at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files. A minimal download-and-run sketch follows the quant table below.

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/LLaMAX2-7B-Alpaca-GGUF/resolve/main/LLaMAX2-7B-Alpaca.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
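As a concrete illustration, here is a minimal sketch of fetching one of the quants above and running a completion in Python. It assumes the `huggingface_hub` and `llama-cpp-python` packages; any GGUF-capable runtime, such as llama.cpp itself, works just as well. The Alpaca-style prompt template is an assumption based on the base model's name, not something this repo prescribes:

```python
# Minimal sketch: download one quant from this repo and run a completion.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M file ("fast, recommended" in the table above).
model_path = hf_hub_download(
    repo_id="mradermacher/LLaMAX2-7B-Alpaca-GGUF",
    filename="LLaMAX2-7B-Alpaca.Q4_K_M.gguf",
)

# Load the model; n_ctx is an illustrative context-window setting.
llm = Llama(model_path=model_path, n_ctx=2048)

# Alpaca-style prompt template (an assumption based on the base model's name).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTranslate the following sentence into French: "
    "Hello, how are you?\n\n### Response:\n"
)
out = llm(prompt, max_tokens=64)
print(out["choices"][0]["text"])
```

Smaller quants from the table trade quality for memory in the same way; only the `filename` argument changes.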
Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for answers to questions you might have, or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.