--- license: apache-2.0 language: - en ---
2B-ad
2B-ad This is a Gemma-2 2B Finetune with surprisingly good Role-Play capabilities for its small 2B size. **Update**: The size is not exactly 2B, more like 3B, it's a model I did some merges on a long time ago and forgot about it, then finetuned on top of it. Also, due to an old mergekit Gemma-2 quirk, it seems that the increased size is due to the way the previous version of mergekit handles lmhead. Anyway, it turned out pretty awesome, even for a 3B size. The base is presented in FP32. # Model Details - Censorship level: Low - 7.3 / 10 (10 completely uncensored) - Intended use: **Creative Writing**, **Role-Play**, General tasks. 2B-ad_UGI ## 2B-ad is available at the following quantizations: - Original: [FP16](https://huggingface.co/SicariusSicariiStuff/2B-ad) - GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/2B-ad_GGUFs) | [iMatrix_GGUF](https://huggingface.co/SicariusSicariiStuff/2B-ad_iMatrix) - EXL2: [3.0 bpw](https://huggingface.co/SicariusSicariiStuff/2B-ad-EXL2-3.0bpw) | [4.0 bpw](https://huggingface.co/SicariusSicariiStuff/2B-ad-EXL2-4.0bpw) | [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/2B-ad-EXL2-5.0bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/2B-ad-EXL2-6.0bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/2B-ad-EXL2-7.0bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/2B-ad-EXL2-8.0bpw) - Specialized: [FP8](https://huggingface.co/SicariusSicariiStuff/2B-ad_FP8) - Mobile (ARM): [Q4_0_X_X](https://huggingface.co/SicariusSicariiStuff/2B-ad_ARM) ## Recommended generation and sampler settings: Use **Debug Deterministic** or **min_p** for consistent results. Feel free to experiment with different settings. ## Model instruction template: # ChatML ``` <|im_start|>system You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|> <|im_start|>User request {prompt}<|im_end|> <|im_start|>AI answer ``` ## Benchmarks | Metric |Value| |-------------------|----:| |Avg. |15.76| |IFEval (0-Shot) |43.79| |BBH (3-Shot) |16.01| |MATH Lvl 5 (4-Shot)| 4.00| |GPQA (0-shot) | 4.14| |MuSR (0-shot) | 8.12| |MMLU-PRO (5-shot) |18.47| ### Support GPUs too expensive - [My Ko-fi page](https://ko-fi.com/sicarius) ALL donations will go for research resources and compute, every bit is appreciated 🙏🏻 ## Other stuff - [Blog and updates](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) Some updates, some rambles, sort of a mix between a diary and a blog. - [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) The grand project that started it all.