---
tags:
- llama
- uncensored
- merge
- mix
- slerp
- spherical linear interpolation merge
- mistral
- hermes
- openhermes
- dolphin
- zephyr
- naberius
- 7b
- llama2
license: apache-2.0
language:
- en
---

# Naberius-7B

##### [Uncensored, Pliant, Logic-Based, & Imaginative Instruct-Based Spherically Interpolated Tri-Merge]
#### Legal Notice & Gripes: This resulting AI model is capable of outputting what can be perceived to be harmful information to those under the age of 18, those who have trouble discerning fiction from reality, and those who use AI to nurse a habitual problem of replacing potential interaction with people with automated facsimiles. We expressly supersede the Apache 2.0 license to state that we do not give permission to utilize this AI for any state, military, disinformation, or similar obviously harmful related actions. To narrow down what is allowed: personal research use, personal entertainment use, so long as it follows the Apache2.0 license. You know what is and isn't morally grounded - by downloading and using this model I extend that trust to you, and take no liability for your actions as an adult.
## Composition:

Naberius-7B is a Mistral-class spherical linear interpolation (SLERP) merge of three high-performance models:

- [zephyr-7b-sft-beta] merged with [OpenHermes-2-Mistral-7B], resulting in: [Mistral-Zephyrmes-7B*]
- [Mistral-Zephyrmes-7B] merged with [dolphin-2.2.1-mistral-7b], resulting in: [Naberius-7B]

*(Zephyrmes is a merge byproduct model not intended for release.)

These models were hand-picked after careful review of claims, datasets, and user postings. The core values that dictated which models to accept were logic, imagination, and aversion to censorship, such as railroading/gaslighting users instead of accommodating them.

## Our implementation of Spherical Linear Interpolation used for this project:

Visit our project Git here: https://github.com/Digitous/LLM-SLERP-Merge

Spherical linear interpolation merging produces more coherently smooth merges than a standard weight merge, also known as LERP (linear interpolation). An illustrative sketch of the technique appears after the credits below.

## What Makes Naberius Special?

Combining zephyr-7b-sft-beta and OpenHermes-2-Mistral-7B, then adding dolphin-2.2.1-mistral-7b to the result using a minimally destructive merge technique, preserves a large amount of the behavior of all three models in a cohesive fashion.

Naberius can: do coherent roleplay far beyond any previous 7B-parameter model, follow instructions exceptionally well (especially for a 7B), and, as a bonus of being lightweight, deliver incredible inference speed. Naberius has shown some signs of spatial awareness and does adapt to nuance in conversation. All around, a pliable, imaginative, and logic-oriented 7B that at times punches upward to what feels like a 30B or more.

Naberius can't: walk your dog, do your homework, clean your dishes, or tell you to turn off the computer and go to bed at a reasonable time.

## Use Example

```
test
test
```

A hedged inference sketch appears after the credits below.

# Ensemble Credits:

All models merged to create Naberius-7B are [LLaMAv2-7B | Mistral-7B series] and include the following:

- zephyr-7b-sft-beta; https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta [Spherical-LI merge doesn't support safetensors yet, which the full Zephyr beta was released as.]
- OpenHermes-2-Mistral-7B; https://huggingface.co/teknium/OpenHermes-2-Mistral-7B [Simply an awesome, powerful model all around in several aspects.]
- dolphin-2.2.1-mistral-7b; https://huggingface.co/ehartford/dolphin-2.2.1-mistral-7b [After reading the debates in the comments between 2.1 and 2.2.1, we bet on 2.2.1 being the better candidate.]

Thanks to Mistral AI for the amazing Mistral LM, and also thanks to Meta for LLaMAv2. Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community.
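## Appendix: Illustrative SLERP Merge Sketch

For readers curious what the spherical interpolation step looks like in practice, below is a minimal sketch of SLERP applied tensor-by-tensor to two checkpoints, followed by the two-stage merge order described in the Composition section. This is illustrative only and is not the code from the LLM-SLERP-Merge repository; the function names, the fixed t=0.5 interpolation factor, the plain PyTorch state-dict handling, and the file paths are assumptions made for the example.

```python
# Illustrative sketch of spherical linear interpolation (SLERP) between two
# model checkpoints. NOT the actual LLM-SLERP-Merge implementation; names,
# the t=0.5 factor, and the file paths are assumptions.
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two weight vectors, measured on normalized copies.
    a_norm = a_flat / (a_flat.norm() + eps)
    b_norm = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(torch.dot(a_norm, b_norm), -1.0, 1.0)
    omega = torch.acos(dot)
    if omega.abs() < eps:
        # Nearly parallel vectors: fall back to plain LERP.
        out = (1 - t) * a_flat + t * b_flat
    else:
        so = torch.sin(omega)
        out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)

def slerp_state_dicts(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    """Merge two state dicts key-by-key with SLERP."""
    return {k: slerp(sd_a[k], sd_b[k], t) for k in sd_a}

# Two-stage merge order used for Naberius-7B (paths are placeholders):
zephyr  = torch.load("zephyr-7b-sft-beta/pytorch_model.bin", map_location="cpu")
hermes  = torch.load("OpenHermes-2-Mistral-7B/pytorch_model.bin", map_location="cpu")
dolphin = torch.load("dolphin-2.2.1-mistral-7b/pytorch_model.bin", map_location="cpu")

zephyrmes = slerp_state_dicts(zephyr, hermes)      # byproduct, not released
naberius  = slerp_state_dicts(zephyrmes, dolphin)  # final merge
torch.save(naberius, "Naberius-7B/pytorch_model.bin")
```

The key difference from a linear weight average is that SLERP interpolates along the arc between the two weight vectors rather than the straight chord, which is what the "more coherently smooth merges" claim above refers to.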
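## Appendix: Hedged Inference Sketch

The following is a hedged sketch for the Use Example section above, using the standard transformers API. The ChatML-style prompt is an assumption drawn from two of the three constituent models (OpenHermes-2 and dolphin-2.2.1 are ChatML-trained); it is not a confirmed prompt template for Naberius-7B, and the repository id and sampling settings are likewise assumptions. Adjust to whatever format works best in your frontend.

```python
# Hedged usage sketch. The ChatML prompt format, repo id, and sampling
# settings below are assumptions, not confirmed details of Naberius-7B.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CalderaAI/Naberius-7B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = (
    "<|im_start|>system\n"
    "You are Naberius, an imaginative and logic-oriented assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a short scene where a lighthouse keeper argues with the sea.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, temperature=0.8, do_sample=True)
# Print only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```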
#### Bonus Content Rant

When merging, I use every technique available, from model selection to brute-force randomized layer mixing with automated samples, to stamp out this shit: "Everything must be positive at all times, even if the user requests a story with horrible events - end it on a positive note, as if everyone being happy at all times is my obsession." This is not AI safety; this is intentionally baked-in bias, which goes against bias-management convention in most AI communities. Stop training models on this and stop using datasets that bias toward this weird behavior. If you care so much about a sanitized language model, then don't use one pretrained on mass-scraped internet hauls. Put a warning on it that captures its essence. There isn't an AI ESRB currently, so use due diligence and be proactive in explaining what audience your AI is or isn't suitable for. End Rant.