Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

L3-8B-Helium3-baseLlama - GGUF
- Model creator: https://huggingface.co/inflatebot/
- Original model: https://huggingface.co/inflatebot/L3-8B-Helium3-baseLlama/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [L3-8B-Helium3-baseLlama.Q2_K.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q2_K.gguf) | Q2_K | 2.96GB |
| [L3-8B-Helium3-baseLlama.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [L3-8B-Helium3-baseLlama.IQ3_S.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [L3-8B-Helium3-baseLlama.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [L3-8B-Helium3-baseLlama.IQ3_M.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [L3-8B-Helium3-baseLlama.Q3_K.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q3_K.gguf) | Q3_K | 3.74GB |
| [L3-8B-Helium3-baseLlama.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [L3-8B-Helium3-baseLlama.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [L3-8B-Helium3-baseLlama.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [L3-8B-Helium3-baseLlama.Q4_0.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q4_0.gguf) | Q4_0 | 4.34GB |
| [L3-8B-Helium3-baseLlama.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [L3-8B-Helium3-baseLlama.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [L3-8B-Helium3-baseLlama.Q4_K.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q4_K.gguf) | Q4_K | 4.58GB |
| [L3-8B-Helium3-baseLlama.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [L3-8B-Helium3-baseLlama.Q4_1.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q4_1.gguf) | Q4_1 | 4.78GB |
| [L3-8B-Helium3-baseLlama.Q5_0.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q5_0.gguf) | Q5_0 | 5.21GB |
| [L3-8B-Helium3-baseLlama.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [L3-8B-Helium3-baseLlama.Q5_K.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q5_K.gguf) | Q5_K | 5.34GB |
| [L3-8B-Helium3-baseLlama.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [L3-8B-Helium3-baseLlama.Q5_1.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q5_1.gguf) | Q5_1 | 5.65GB |
| [L3-8B-Helium3-baseLlama.Q6_K.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q6_K.gguf) | Q6_K | 6.14GB |
| [L3-8B-Helium3-baseLlama.Q8_0.gguf](https://huggingface.co/RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf/blob/main/L3-8B-Helium3-baseLlama.Q8_0.gguf) | Q8_0 | 7.95GB |

A minimal Python sketch for downloading and running one of these files appears at the end of this card.

Original model description:
---
base_model:
- NousResearch/Meta-Llama-3-8B
- inflatebot/helide-beta-r1
- inflatebot/helide-beta-r0
- inflatebot/helide-beta-r4
library_name: transformers
tags:
- mergekit
- merge
---

# helium-3-r2

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

Helium3, but the base model is Llama-3. Ended up being too dry, but if He3's too horny for you, try this one.

[GGUFs by mradermacher](https://huggingface.co/mradermacher/L3-8B-Helium3-baseLlama-GGUF)

### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base.

### Models Merged

The following models were included in the merge:
* [inflatebot/helide-beta-r1](https://huggingface.co/inflatebot/helide-beta-r1)
* [inflatebot/helide-beta-r0](https://huggingface.co/inflatebot/helide-beta-r0)
* [inflatebot/helide-beta-r4](https://huggingface.co/inflatebot/helide-beta-r4)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: inflatebot/helide-beta-r4
  - model: inflatebot/helide-beta-r1
  - model: inflatebot/helide-beta-r0
merge_method: model_stock
base_model: NousResearch/Meta-Llama-3-8B
dtype: bfloat16
```
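### Using the GGUF quants

The sketch below shows one way to fetch a quant from the table above and run it locally. It is not part of the original card: it assumes the `huggingface_hub` and `llama-cpp-python` packages are installed, and the choice of the Q4_K_M file is arbitrary; any row of the table works the same way.

```python
# Hedged usage sketch (not from the original card). Assumes huggingface_hub and
# llama-cpp-python are installed; the Q4_K_M quant is an arbitrary example pick.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single GGUF file from this repository into the local HF cache.
model_path = hf_hub_download(
    repo_id="RichardErkhov/inflatebot_-_L3-8B-Helium3-baseLlama-gguf",
    filename="L3-8B-Helium3-baseLlama.Q4_K_M.gguf",
)

# Load the quantized model and generate a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Helium-3 is", max_tokens=64)
print(out["choices"][0]["text"])
```

As a rough rule of thumb, the smaller quants (Q2_K, IQ3_*) save memory at the cost of output quality, while Q8_0 stays closest to the original bfloat16 weights.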