
Note: This repository contains the GGUF 4-bit quantized variant of halbihn/NeuralPipe-7B-ties. For the full-precision version, see the original model repository.

NeuralPipe-7B-ties

NeuralPipe-7B-ties is a TIES merge of the following models using mergekit:

- OpenPipe/mistral-ft-optimized-1218
- halbihn/NeuralHermes-2.5-Mistral-7B

with mistralai/Mistral-7B-v0.1 as the base model.

🧩 Configuration

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
  - model: OpenPipe/mistral-ft-optimized-1218
    parameters:
      density: 0.5
      weight: 0.5
  - model: halbihn/NeuralHermes-2.5-Mistral-7B
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: true
dtype: float16
```
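With `normalize: true`, mergekit rescales the per-model weights so they sum to 1 before combining task vectors. The arithmetic can be sketched as follows (a minimal illustration of the normalization step only, not mergekit's actual implementation; TIES additionally trims low-magnitude deltas by `density` and resolves sign conflicts, which is not shown):

```python
# Weights taken from the config above; normalization is a simple rescale.
weights = {
    "OpenPipe/mistral-ft-optimized-1218": 0.5,
    "halbihn/NeuralHermes-2.5-Mistral-7B": 0.3,
}

total = sum(weights.values())  # 0.8
normalized = {name: round(w / total, 3) for name, w in weights.items()}

print(normalized)
# 0.5 -> 0.625 and 0.3 -> 0.375, so the merged deltas sum to full weight
```

So the effective contributions after normalization are 62.5% and 37.5%, on top of the shared Mistral-7B-v0.1 base.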
Model details:

- Format: GGUF (4-bit quantization)
- Model size: 7.24B parameters
- Architecture: llama