Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

llama3-8B-DarkIdol-2.3-Uncensored-32K - GGUF

- Model creator: https://huggingface.co/aifeifei798/
- Original model: https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q2_K.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q2_K.gguf) | Q2_K | 2.96GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.IQ3_S.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.IQ3_M.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q3_K.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q4_0.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q4_0.gguf) | Q4_0 | 4.34GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q4_K.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q4_K.gguf) | Q4_K | 4.58GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q4_1.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q4_1.gguf) | Q4_1 | 4.78GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q5_0.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q5_0.gguf) | Q5_0 | 5.21GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q5_K.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q5_K.gguf) | Q5_K | 5.34GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q5_1.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q5_1.gguf) | Q5_1 | 5.65GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q6_K.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q6_K.gguf) | Q6_K | 6.14GB |
| [llama3-8B-DarkIdol-2.3-Uncensored-32K.Q8_0.gguf](https://huggingface.co/RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.Q8_0.gguf) | Q8_0 | 7.95GB |
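A minimal sketch for fetching one of the files above programmatically, assuming the `huggingface_hub` package is installed; the repo id and filename are copied from the Q4_K_M row of the table, and any other row works the same way:

```python
# Minimal sketch: download one quantized GGUF file from this repo.
# Assumes `pip install huggingface_hub` has been run.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="RichardErkhov/aifeifei798_-_llama3-8B-DarkIdol-2.3-Uncensored-32K-gguf",
    filename="llama3-8B-DarkIdol-2.3-Uncensored-32K.Q4_K_M.gguf",
)
print(model_path)  # local cache path of the ~4.58GB file
```

Smaller quants such as Q2_K (2.96GB) trade quality for memory; pick according to the table above.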
Original model description:

---
license: llama3
language:
- en
tags:
- roleplay
- llama3
- sillytavern
- idol
---

# The final version of Llama 3.0 will be followed by the next iteration, starting from Llama 3.1.

# Special Thanks:
- Lewdiculous's superb GGUF version, thank you for your conscientious and responsible dedication.
  - https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF-IQ-Imatrix-Request
- mradermacher's superb GGUF version, thank you for your conscientious and responsible dedication.
  - https://huggingface.co/mradermacher/llama3-8B-DarkIdol-2.3-Uncensored-32K-i1-GGUF
  - https://huggingface.co/mradermacher/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF

# These are my own quantizations (updated almost daily).
The difference from normal quantizations is that I quantize the output and embedding tensors to f16, and the other tensors to q5_k, q6_k, or q8_0. This creates models that are degraded very little or not at all, while having a smaller size. They run at about 3-6 tokens/sec on CPU only using llama.cpp, and obviously faster on computers with potent GPUs.
- the fast cat at [ZeroWw/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF](https://huggingface.co/ZeroWw/llama3-8B-DarkIdol-2.2-Uncensored-32K-GGUF)

# Model Description:
The module combination has been readjusted to better fulfill various roles, and the model has been adapted for mobile phones.
- Saving money (Llama 3)
- Tested in English only
- Input: models input text only. Output: models generate text and code only.
- Uncensored
- Quick response
- The underlying model used is winglian/Llama-3-8b-64k-PoSE (the theoretical context length is 64k, but I have only tested up to 32k :)
- A scholarly response akin to a thesis (I tend to write songs extensively, to the point where one song almost becomes as detailed as a thesis :)
- DarkIdol: roles that you can imagine and those that you cannot imagine
- Roleplay: specialized in various role-playing scenarios
  - For more, look at the test roles (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/test)
  - For more, look at the LM Studio presets (https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.2/tree/main/config-presets)

![image/png](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.3-Uncensored-32K/resolve/main/llama3-8B-DarkIdol-2.3-Uncensored-32K.png)

## virtual idol Twitter
- https://x.com/aifeifei799

# Questions
- The model's responses are for reference only; please do not fully trust them.

# Stop Strings
```python
stop = [
    "## Instruction:",
    "### Instruction:",
    "<|end_of_text|>",
    " //:",
    "</s>",
    "<3```",
    "### Note:",
    "### Input:",
    "### Response:",
    "### Emoticons:"
]
```

# Model Use
- Koboldcpp https://github.com/LostRuins/koboldcpp
  - Since KoboldCpp is taking a while to update with the latest llama.cpp commits, I'll recommend this [fork](https://github.com/Nexesenex/kobold.cpp) if anyone has issues.
- LM Studio https://lmstudio.ai/
  - Please test again using the default LM Studio Windows preset.
- llama.cpp https://github.com/ggerganov/llama.cpp (a minimal loading sketch follows this list)
- Backyard AI https://backyard.ai/
- Meet Layla, an AI chatbot that runs offline on your device. No internet connection required, no censorship, complete privacy. Layla Lite: https://www.layla-network.ai/
  - Layla Lite build: llama3-8B-DarkIdol-2.3-Uncensored-32K-Q4_K_S-imat.gguf https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K/blob/main/llama3-8B-DarkIdol-2.3-Uncensored-32K-Q4_K_S-imat.gguf?download=true
  - More GGUF files at https://huggingface.co/LWDCLS/llama3-8B-DarkIdol-2.3-Uncensored-32K-GGUF-IQ-Imatrix-Request
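As a concrete starting point for the llama.cpp route, here is a minimal sketch using the `llama-cpp-python` bindings (my assumption; the card itself does not prescribe them). The model path and prompt format are illustrative, and the stop list reuses entries from the Stop Strings section above:

```python
# Minimal sketch: load a downloaded GGUF quant with llama-cpp-python and
# generate text using stop strings from the section above.
# Assumes `pip install llama-cpp-python`; the model path is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="llama3-8B-DarkIdol-2.3-Uncensored-32K.Q4_K_M.gguf",
    n_ctx=32768,  # the card reports testing up to a 32k context
)

output = llm(
    "### Instruction:\nIntroduce yourself in one paragraph.\n\n### Response:\n",
    max_tokens=256,
    stop=["## Instruction:", "### Instruction:", "<|end_of_text|>"],
)
print(output["choices"][0]["text"])
```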
# character
- https://character-tavern.com/
- https://characterhub.org/
- https://pygmalion.chat/
- https://aetherroom.club/
- https://backyard.ai/
- Layla AI chatbot

### If you want to use vision functionality:
* You must use the latest versions of [Koboldcpp](https://github.com/Nexesenex/kobold.cpp).

### To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo: [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16)
* You can load the **mmproj** by using the corresponding section in the interface:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)

### Thank you:
To the authors for their hard work, which has given me more options to easily create what I want. Thank you for your efforts.
- Hastagaras
- Gryphe
- cgato
- ChaoticNeutrals
- mergekit
- merge
- transformers
- llama
- Nitral-AI
- MLP-KTLim
- rinna
- hfl
- Rupesh2
- stephenlzc
- theprint
- Sao10K
- turboderp
- TheBossLevel123
- winglian
- .........

---
# llama3-8B-DarkIdol-2.3-Uncensored-32K

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with ./llama3-8B-DarkIdol-2.3b as the base.

### Configuration

The following YAML configurations were used to produce this model. Three Model Stock merges are chained; the intermediate checkpoints ./llama3-8B-DarkIdol-2.3a and ./llama3-8B-DarkIdol-2.3b consumed by the second and third stages appear to be the outputs of the stages before them.

```yaml
models:
  - model: Sao10K/L3-8B-Niitama-v1
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
  - model: turboderp/llama3-turbcat-instruct-8b
  - model: winglian/Llama-3-8b-64k-PoSE
merge_method: model_stock
base_model: winglian/Llama-3-8b-64k-PoSE
dtype: bfloat16
```

```yaml
models:
  - model: maldv/badger-writer-llama-3-8b
  - model: underwoods/writer-8b
  - model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
  - model: vicgalle/Roleplay-Llama-3-8B
  - model: cgato/TheSalt-RP-L3-8b-DPO-v0.3.2-e0.15.2
  - model: ./llama3-8B-DarkIdol-2.3a
merge_method: model_stock
base_model: ./llama3-8B-DarkIdol-2.3a
dtype: bfloat16
```

```yaml
models:
  - model: Rupesh2/Meta-Llama-3-8B-abliterated
  - model: Orenguteng/Llama-3-8B-LexiFun-Uncensored-V1
  - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
  - model: theprint/Llama-3-8B-Lexi-Smaug-Uncensored
  - model: vicgalle/Unsafe-Llama-3-8B
  - model: vicgalle/Configurable-Hermes-2-Pro-Llama-3-8B
  - model: ./llama3-8B-DarkIdol-2.3b
merge_method: model_stock
base_model: ./llama3-8B-DarkIdol-2.3b
dtype: bfloat16
```
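To rerun one merge stage, the usual entry point is mergekit's `mergekit-yaml` command. A minimal sketch driving it from Python, where the config filename and output directory are illustrative assumptions:

```python
# Minimal sketch: run one merge stage with mergekit's CLI entry point.
# Assumes `pip install mergekit`, which provides the `mergekit-yaml` command,
# and that one of the YAML blocks above has been saved to stage1.yaml.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",
        "stage1.yaml",                # merge config (one YAML block above)
        "./llama3-8B-DarkIdol-2.3a",  # output directory for merged weights
    ],
    check=True,
)
```

Each stage's output directory is then referenced as `base_model` by the next stage's config.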