
Quantization made by Richard Erkhov.

Github | Discord | Request more models

MN-Halide-12b-v1.0 - GGUF

Original model description:

```yaml
base_model:
- TheDrummer/Rocinante-12B-v1
- jtatman/mistral_nemo_12b_reasoning_psychology_lora
- Epiculous/Azure_Dusk-v0.2
- nbeerbower/mistral-nemo-bophades-12B
- Epiculous/Crimson_Dawn-v0.2
- nbeerbower/mistral-nemo-wissenschaft-12B
- anthracite-org/magnum-v2-12b
- TheDrummer/Rocinante-12B-v1.1
- anthracite-org/magnum-v2.5-12b-kto
- mpasila/Mistral-freeLiPPA-LoRA-12B
- anthracite-org/magnum-v2.5-12b-kto
- jeiku/Aura-NeMo-12B
- nbeerbower/mistral-nemo-cc-12B
- UsernameJustAnother/Nemo-12B-Marlin-v8
- elinas/Chronos-Gold-12B-1.0
- SillyTilly/mistralai_Mistral-Nemo-Base-2407
- nbeerbower/Lyra4-Gutenberg-12B
- nbeerbower/mistral-nemo-gutenberg-12B-v4
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
```

MN-Halide-12b-v1.0

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with SillyTilly/mistralai_Mistral-Nemo-Base-2407 as the base.
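Model Stock (Jang et al., 2024) interpolates between the average of the fine-tuned weights and the base weights, picking the interpolation ratio from the geometry of the fine-tunes' task vectors. The following is a minimal illustrative sketch of that rule in PyTorch; it is not mergekit's actual implementation, and applying it per tensor as shown is an assumption for clarity:

```python
# Illustrative sketch of the Model Stock interpolation rule (Jang et al., 2024),
# applied to one weight tensor. NOT mergekit's actual code.
import itertools
import torch

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge N fine-tuned tensors with their shared base tensor."""
    n = len(finetuned)
    assert n >= 2, "Model Stock needs at least two fine-tuned models"
    # Task vectors: how far each fine-tune moved away from the base.
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine similarity between task vectors.
    cos = torch.stack([
        torch.nn.functional.cosine_similarity(a.flatten(), b.flatten(), dim=0)
        for a, b in itertools.combinations(deltas, 2)
    ]).mean()
    # Interpolation weight from the paper: near-orthogonal task vectors
    # (low cos) pull the merge back toward the base weights.
    t = n * cos / (1 + (n - 1) * cos)
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```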

Models Merged

The following models were included in the merge:

* nbeerbower/Lyra4-Gutenberg-12B
* nbeerbower/mistral-nemo-gutenberg-12B-v4
* elinas/Chronos-Gold-12B-1.0
* UsernameJustAnother/Nemo-12B-Marlin-v8
* TheDrummer/Rocinante-12B-v1.1
* Epiculous/Azure_Dusk-v0.2
* Epiculous/Crimson_Dawn-v0.2
* TheDrummer/Rocinante-12B-v1 + jtatman/mistral_nemo_12b_reasoning_psychology_lora
* nbeerbower/mistral-nemo-wissenschaft-12B
* nbeerbower/mistral-nemo-bophades-12B
* anthracite-org/magnum-v2.5-12b-kto + mpasila/Mistral-freeLiPPA-LoRA-12B
* nbeerbower/mistral-nemo-cc-12B
* anthracite-org/magnum-v2-12b
* anthracite-org/magnum-v2.5-12b-kto + jeiku/Aura-NeMo-12B

Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: SillyTilly/mistralai_Mistral-Nemo-Base-2407
dtype: float32
merge_method: model_stock
slices:
- sources:
  - layer_range: [0, 40]
    model: nbeerbower/Lyra4-Gutenberg-12B
  - layer_range: [0, 40]
    model: nbeerbower/mistral-nemo-gutenberg-12B-v4
  - layer_range: [0, 40]
    model: elinas/Chronos-Gold-12B-1.0
  - layer_range: [0, 40]
    model: UsernameJustAnother/Nemo-12B-Marlin-v8
  - layer_range: [0, 40]
    model: TheDrummer/Rocinante-12B-v1.1
  - layer_range: [0, 40]
    model: Epiculous/Azure_Dusk-v0.2
  - layer_range: [0, 40]
    model: Epiculous/Crimson_Dawn-v0.2
  - layer_range: [0, 40]
    model: TheDrummer/Rocinante-12B-v1+jtatman/mistral_nemo_12b_reasoning_psychology_lora
  - layer_range: [0, 40]
    model: nbeerbower/mistral-nemo-wissenschaft-12B
  - layer_range: [0, 40]
    model: nbeerbower/mistral-nemo-bophades-12B
  - layer_range: [0, 40]
    model: anthracite-org/magnum-v2.5-12b-kto+mpasila/Mistral-freeLiPPA-LoRA-12B
  - layer_range: [0, 40]
    model: nbeerbower/mistral-nemo-cc-12B
  - layer_range: [0, 40]
    model: anthracite-org/magnum-v2-12b
  - layer_range: [0, 40]
    model: anthracite-org/magnum-v2.5-12b-kto+jeiku/Aura-NeMo-12B
  - layer_range: [0, 40]
    model: SillyTilly/mistralai_Mistral-Nemo-Base-2407
tokenizer_source: unsloth/Mistral-Nemo-Base-2407
```
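A merge like this is normally executed with mergekit's CLI (`mergekit-yaml config.yaml ./output-dir`) or its Python API. A minimal sketch of the latter, assuming the configuration above is saved as `halide-v1.0.yaml` (a hypothetical filename):

```python
# Minimal sketch of running the merge via mergekit's Python API.
# Requires mergekit installed (pip install mergekit); paths are placeholders.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("halide-v1.0.yaml", encoding="utf-8") as fp:  # hypothetical filename
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./MN-Halide-12b-v1.0",
    options=MergeOptions(copy_tokenizer=True, lazy_unpickle=True),
)
```

Note that the configuration uses `dtype: float32`, so merging fifteen 12B sources requires substantial disk and memory headroom.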

Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants at much higher speed than I would otherwise be able to.

GGUF, 12.2B params, llama architecture. Quantizations are provided in 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit variants.

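To run one of these quants locally, llama.cpp or its Python bindings can load the GGUF file directly. A minimal sketch with llama-cpp-python; the exact quant filename below is an assumption, so check the repo's file list:

```python
# Minimal sketch: loading a GGUF quant with llama-cpp-python
# (pip install llama-cpp-python). The filename is an assumed example
# of this repo's naming; verify against the actual file list.
from llama_cpp import Llama

llm = Llama(
    model_path="MN-Halide-12b-v1.0.Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm("Write a haiku about model merging.", max_tokens=64)
print(out["choices"][0]["text"])
```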