---
base_model: Salesforce/xLAM-8x22b-r
datasets:
- Salesforce/xlam-function-calling-60k
extra_gated_button_content: Agree and access repository
extra_gated_heading: Acknowledge to follow corresponding license to access the repository
inference: false
language:
- en
library_name: gguf
license: cc-by-nc-4.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- function-calling
- LLM Agent
- tool-use
- mistral
- pytorch
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---

# xLAM-8x22b-r-IMat-GGUF
_Llama.cpp imatrix quantization of Salesforce/xLAM-8x22b-r_

Original Model: [Salesforce/xLAM-8x22b-r](https://huggingface.co/Salesforce/xLAM-8x22b-r)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3649](https://github.com/ggerganov/llama.cpp/releases/tag/b3649)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
    - [IMatrix](#imatrix)
    - [Common Quants](#common-quants)
    - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
    - [Simple chat template](#simple-chat-template)
    - [Chat template with system prompt](#chat-template-with-system-prompt)
    - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
    - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
    - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/blob/main/imatrix.dat)

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [xLAM-8x22b-r.Q8_0/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q8_0) | Q8_0 | 149.43GB | ✅ Available | ⚪ Static | ✂ Yes |
| [xLAM-8x22b-r.Q6_K/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q6_K) | Q6_K | 115.54GB | ✅ Available | ⚪ Static | ✂ Yes |
| [xLAM-8x22b-r.Q4_K/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q4_K) | Q4_K | 85.60GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.Q3_K/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q3_K) | Q3_K | 67.80GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.Q2_K/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q2_K) | Q2_K | 52.11GB | ✅ Available | 🟢 IMatrix | ✂ Yes |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [xLAM-8x22b-r.BF16/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.BF16) | BF16 | 281.27GB | ✅ Available | ⚪ Static | ✂ Yes |
| [xLAM-8x22b-r.FP16/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.FP16) | F16 | 281.27GB | ✅ Available | ⚪ Static | ✂ Yes |
| [xLAM-8x22b-r.Q8_0/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q8_0) | Q8_0 | 149.43GB | ✅ Available | ⚪ Static | ✂ Yes |
| [xLAM-8x22b-r.Q6_K/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q6_K) | Q6_K | 115.54GB | ✅ Available | ⚪ Static | ✂ Yes |
| [xLAM-8x22b-r.Q5_K/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q5_K) | Q5_K | 99.98GB | ✅ Available | ⚪ Static | ✂ Yes |
| [xLAM-8x22b-r.Q5_K_S/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q5_K_S) | Q5_K_S | 96.99GB | ✅ Available | ⚪ Static | ✂ Yes |
| [xLAM-8x22b-r.Q4_K/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q4_K) | Q4_K | 85.60GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.Q4_K_S/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q4_K_S) | Q4_K_S | 80.49GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.IQ4_NL/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.IQ4_NL) | IQ4_NL | 79.79GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.IQ4_XS/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.IQ4_XS) | IQ4_XS | 75.49GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.Q3_K/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q3_K) | Q3_K | 67.80GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.Q3_K_L/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q3_K_L) | Q3_K_L | 72.59GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.Q3_K_S/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q3_K_S) | Q3_K_S | 61.51GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.IQ3_M/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.IQ3_M) | IQ3_M | 64.50GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.IQ3_S/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.IQ3_S) | IQ3_S | 61.51GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.IQ3_XS/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.IQ3_XS) | IQ3_XS | 58.24GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.IQ3_XXS/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.IQ3_XXS) | IQ3_XXS | 54.91GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.Q2_K/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q2_K) | Q2_K | 52.11GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.Q2_K_S/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.Q2_K_S) | Q2_K_S | 48.10GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.IQ2_M/*](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/tree/main/xLAM-8x22b-r.IQ2_M) | IQ2_M | 46.72GB | ✅ Available | 🟢 IMatrix | ✂ Yes |
| [xLAM-8x22b-r.IQ2_S.gguf](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/blob/main/xLAM-8x22b-r.IQ2_S.gguf) | IQ2_S | 42.60GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [xLAM-8x22b-r.IQ2_XS.gguf](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/blob/main/xLAM-8x22b-r.IQ2_XS.gguf) | IQ2_XS | 42.01GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [xLAM-8x22b-r.IQ2_XXS.gguf](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/blob/main/xLAM-8x22b-r.IQ2_XXS.gguf) | IQ2_XXS | 37.89GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [xLAM-8x22b-r.IQ1_M.gguf](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/blob/main/xLAM-8x22b-r.IQ1_M.gguf) | IQ1_M | 32.74GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [xLAM-8x22b-r.IQ1_S.gguf](https://huggingface.co/legraphista/xLAM-8x22b-r-IMat-GGUF/blob/main/xLAM-8x22b-r.IQ1_S.gguf) | IQ1_S | 29.65GB | ✅ Available | 🟢 IMatrix | 📦 No |

## Downloading using huggingface-cli

If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want (for quants that are not split, e.g. `IQ2_S`):
```
huggingface-cli download legraphista/xLAM-8x22b-r-IMat-GGUF --include "xLAM-8x22b-r.IQ2_S.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/xLAM-8x22b-r-IMat-GGUF --include "xLAM-8x22b-r.Q8_0/*" --local-dir ./
# see FAQ for merging GGUF's
```
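For a split quant, the end-to-end flow is download then merge. Below is a minimal sketch using `Q4_K_S` as an example, assuming `gguf-split` is already on your `PATH` (see the FAQ below for where to get it); the chunk count in the first filename will differ in practice:
```
# download all chunks of the Q4_K_S quant into ./xLAM-8x22b-r.Q4_K_S/
huggingface-cli download legraphista/xLAM-8x22b-r-IMat-GGUF --include "xLAM-8x22b-r.Q4_K_S/*" --local-dir ./

# merge the chunks into a single GGUF, pointing gguf-split at the first chunk
gguf-split --merge xLAM-8x22b-r.Q4_K_S/xLAM-8x22b-r.Q4_K_S-00001-of-XXXXX.gguf xLAM-8x22b-r.Q4_K_S.gguf
```
Recent llama.cpp builds can also load a split model directly by pointing `-m` at the first chunk, in which case the merge step is optional.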
---

## Inference

### Simple chat template
```
[INST] {user_prompt}[/INST] {assistant_response}[INST] {next_user_prompt}[/INST]
```

### Chat template with system prompt
```
[INST] {user_prompt}[/INST] {assistant_response}[INST] {system_prompt}

{next_user_prompt}[/INST]
```

### Llama.cpp
```
llama.cpp/main -m xLAM-8x22b-r.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
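As a fuller (hypothetical) invocation, the sketch below plugs a sample prompt into the chat template and adds two common llama.cpp flags: `-c` (context size) and `-ngl` (number of layers to offload to the GPU). The model filename, prompt, and flag values are placeholders; adjust them to the quant you downloaded and your hardware:
```
llama.cpp/main -m xLAM-8x22b-r.Q4_K_S.gguf \
    --color -i \
    -c 4096 \
    -ngl 16 \
    -p "[INST] What is the weather in New York?[/INST]"
```
Note that `-ngl` only has an effect when llama.cpp was built with GPU support; on CPU-only builds it is ignored.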
---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that lower quantizations are the only ones that benefit from the imatrix input (as per hellaswag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `xLAM-8x22b-r.Q8_0`)
3. Run `gguf-split --merge xLAM-8x22b-r.Q8_0/xLAM-8x22b-r.Q8_0-00001-of-XXXXX.gguf xLAM-8x22b-r.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split.

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!