---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
---
# Cognitron-8B
Cognitron-8B is an experimental large language model (LLM) created by combining three pre-existing models: Llama-3-8B-Lexi-Uncensored, Einstein-v6.1-Llama3-8B, and dolphin-2.9-llama3-8b. This combination aims to achieve a unique blend of capabilities:
- Uncensored Knowledge: By incorporating Llama-3-8B-Lexi-Uncensored, Cognitron-8B has access to a wider range of information without filtering.
- Enhanced Intelligence: The inclusion of Einstein-v6.1-Llama3-8B is intended to boost Cognitron-8B's reasoning and problem-solving abilities.
- Creative Fluency: The dolphin-2.9-llama3-8b component is designed to contribute creativity and unconventional thinking to Cognitron-8B's responses.
Note that combining these models is an experiment, and the performance of the resulting model is unknown.
GGUF: https://huggingface.co/mradermacher/Cognitron-8B-GGUF
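If you want to run the quantized GGUF files locally, the sketch below shows one way to do it with the `llama-cpp-python` bindings. It is a minimal, untested example: the filename is an assumption, so substitute whichever quant file you actually download from the GGUF repo linked above.

```python
# Minimal sketch: run a quantized Cognitron-8B GGUF with llama-cpp-python.
# The model_path filename is an assumption -- replace it with the quant file
# you downloaded from the GGUF repository linked above.
from llama_cpp import Llama

llm = Llama(
    model_path="Cognitron-8B.Q4_K_M.gguf",  # assumed filename
    n_ctx=8192,       # Llama-3 context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```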
Cognitron-8B is a merge of the following models using mergekit:
- Orenguteng/Llama-3-8B-Lexi-Uncensored
- Weyaxi/Einstein-v6.1-Llama3-8B
- cognitivecomputations/dolphin-2.9-llama3-8b
## Potential Biases and Limitations
- Uncensored Content: Due to the inclusion of uncensored models, Cognitron-8B may generate outputs containing biases, hate speech, or offensive language.
## Importance of Uncensored Models
The inclusion of an uncensored model in Cognitron-8B reflects a growing interest in exploring the potential benefits of unfiltered information for LLMs. Here's why uncensored models are important:
- Comprehensiveness: Unrestricted access to information allows LLMs to capture a more complete picture of the world, even if it includes controversial or sensitive topics.
- Real-World Applicability: In situations where internet access is limited, uncensored LLMs could serve as a valuable source of unfiltered knowledge, allowing users to make informed decisions based on the available data.
## 🧩 Configuration
```yaml
models:
  - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
  - model: Weyaxi/Einstein-v6.1-Llama3-8B
  - model: cognitivecomputations/dolphin-2.9-llama3-8b
merge_method: model_stock
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
dtype: bfloat16
```
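## 💻 Usage

The merge itself can be reproduced by saving the configuration above to a YAML file and running it through mergekit's `mergekit-yaml` command. Below is a minimal, untested sketch of loading the merged model with 🤗 Transformers (requires `accelerate` for `device_map="auto"`); the repo id is a placeholder, so point it at wherever the merged weights are actually hosted.

```python
# Minimal sketch: chat with the merged model via 🤗 Transformers.
# "your-username/Cognitron-8B" is a placeholder repo id -- replace it with
# the actual location of the merged weights (or a local directory).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Cognitron-8B"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "What is a large language model?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Strip the prompt tokens and decode only the generated continuation.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```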