---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralFusion-7b-Dare-Ties
- paulml/OmniBeagleSquaredMBX-v3-7B-v2
- macadeliccc/MBX-7B-v3-DPO
- Kukedlc/Fasciculus-Arcuatus-7B-slerp
- liminerity/Neurotic-Jomainotrik-7b-slerp
base_model:
- Kukedlc/NeuralFusion-7b-Dare-Ties
- paulml/OmniBeagleSquaredMBX-v3-7B-v2
- macadeliccc/MBX-7B-v3-DPO
- Kukedlc/Fasciculus-Arcuatus-7B-slerp
- liminerity/Neurotic-Jomainotrik-7b-slerp
---

# Neural-4-Wino-7b

Neural-4-Wino-7b is a `dare_ties` merge of the following models, made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [Kukedlc/NeuralFusion-7b-Dare-Ties](https://huggingface.co/Kukedlc/NeuralFusion-7b-Dare-Ties)
* [paulml/OmniBeagleSquaredMBX-v3-7B-v2](https://huggingface.co/paulml/OmniBeagleSquaredMBX-v3-7B-v2)
* [macadeliccc/MBX-7B-v3-DPO](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO)
* [Kukedlc/Fasciculus-Arcuatus-7B-slerp](https://huggingface.co/Kukedlc/Fasciculus-Arcuatus-7B-slerp)
* [liminerity/Neurotic-Jomainotrik-7b-slerp](https://huggingface.co/liminerity/Neurotic-Jomainotrik-7b-slerp)

## 🧩 Configuration

```yaml
models:
  - model: liminerity/Neurotic-Jomainotrik-7b-slerp
    # No parameters necessary for the base model
  - model: Kukedlc/NeuralFusion-7b-Dare-Ties
    parameters:
      density: 0.66
      weight: 0.2
  - model: paulml/OmniBeagleSquaredMBX-v3-7B-v2
    parameters:
      density: 0.55
      weight: 0.2
  - model: macadeliccc/MBX-7B-v3-DPO
    parameters:
      density: 0.55
      weight: 0.2
  - model: Kukedlc/Fasciculus-Arcuatus-7B-slerp
    parameters:
      density: 0.44
      weight: 0.2
  - model: liminerity/Neurotic-Jomainotrik-7b-slerp
    parameters:
      density: 0.66
      weight: 0.2
merge_method: dare_ties
base_model: liminerity/Neurotic-Jomainotrik-7b-slerp
parameters:
  int8_mask: true
dtype: bfloat16
```

## 💻 Usage

```python
# The leading `!` assumes a notebook environment (Colab/Jupyter);
# in a shell, run `pip install -qU transformers accelerate` instead.
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kukedlc/Neural-4-Wino-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline and sample a response
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
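
## 🔄 Reproducing the merge

The merge itself can be rebuilt by passing the configuration above to mergekit's `mergekit-yaml` command. Below is a minimal sketch, assuming a notebook environment (as in the usage snippet), that the YAML from the Configuration section has been saved as `config.yaml`, and that mergekit is installed from its GitHub repository; the output directory name `merge` is illustrative.

```python
# Install mergekit (the leading `!` assumes a notebook, as in the usage snippet)
!pip install -qU git+https://github.com/arcee-ai/mergekit.git

# Run the dare_ties merge described in config.yaml, writing the merged model
# to ./merge. --copy-tokenizer carries the base model's tokenizer into the
# output; --lazy-unpickle reduces peak memory while loading the checkpoints.
!mergekit-yaml config.yaml merge --copy-tokenizer --lazy-unpickle
```

The resulting directory can then be loaded with `transformers` exactly as in the usage snippet, substituting the local path for the Hub model id.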