mistral-7b-instruct-code-ties

mistral-7b-instruct-code-ties is a TIES merge of the following code-specialized fine-tunes, built with mergekit on top of unsloth/mistral-7b-instruct-v0.2-bnb-4bit:

* jiayihao03/mistral-7b-instruct-Javascript-4bit
* jiayihao03/mistral-7b-instruct-python-4bit
* akameswa/mistral-7b-instruct-java-4bit
* akameswa/mistral-7b-instruct-go-4bit

🧩 Configuration

```yaml
models:
  - model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
  - model: jiayihao03/mistral-7b-instruct-Javascript-4bit
    parameters:
      density: 0.85
      weight: 0.25
  - model: jiayihao03/mistral-7b-instruct-python-4bit
    parameters:
      density: 0.85
      weight: 0.25
  - model: akameswa/mistral-7b-instruct-java-4bit
    parameters:
      density: 0.85
      weight: 0.25
  - model: akameswa/mistral-7b-instruct-go-4bit
    parameters:
      density: 0.85
      weight: 0.25
merge_method: ties
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
parameters:
  normalize: true
dtype: float16
```
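
💻 Usage

With merge_method: ties, each fine-tune contributes its parameter delta from the base model: density: 0.85 keeps the top 85% of each delta, weight: 0.25 gives the four fine-tunes equal influence, and normalize: true rescales the weights to sum to 1. Below is a minimal sketch of loading and prompting the merged model with 🤗 Transformers; the repo id and the generation settings are placeholders, not taken from this card.

```python
# Minimal usage sketch. The repo id below is hypothetical -- replace it with
# wherever the merged weights are actually published.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/mistral-7b-instruct-code-ties"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge's dtype: float16
    device_map="auto",          # requires `accelerate`
)

# Mistral-Instruct v0.2 uses the [INST] ... [/INST] chat format; the tokenizer's
# chat template handles it.
messages = [
    {"role": "user", "content": "Write a Go function that reverses a string."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Loading in torch.float16 matches the dtype used for the merge, so no extra cast is needed at load time.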