mlabonne/Chimera-8B AWQ

Model Summary

Built with the DARE-TIES merge method.

The full list of source models and the merge path is coming soon.

The merge combines the strongest ("thickest") weights from Mistral-based models trained with techniques such as Direct Preference Optimization (DPO) and reinforcement learning.
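
Since the exact recipe has not been published yet, here is a minimal sketch of what a DARE-TIES merge looks like with mergekit. The source model names are hypothetical placeholders and the Mistral-7B base is an assumption; this is not the author's actual configuration.

```python
# Hypothetical DARE-TIES merge config for mergekit.
# All model names below are placeholders, not the real merge recipe.
import subprocess
import yaml

config = {
    "merge_method": "dare_ties",
    "base_model": "mistralai/Mistral-7B-v0.1",  # assumed base checkpoint
    "models": [
        {"model": "mistralai/Mistral-7B-v0.1"},  # base model takes no parameters
        {
            "model": "placeholder/mistral-7b-dpo",  # hypothetical DPO fine-tune
            "parameters": {"density": 0.5, "weight": 0.5},
        },
        {
            "model": "placeholder/mistral-7b-rl",  # hypothetical RL fine-tune
            "parameters": {"density": 0.5, "weight": 0.3},
        },
    ],
    "dtype": "bfloat16",
}

with open("dare_ties.yaml", "w") as f:
    yaml.safe_dump(config, f)

# mergekit's CLI reads the YAML and writes the merged model to ./merged-model.
subprocess.run(["mergekit-yaml", "dare_ties.yaml", "./merged-model"], check=True)
```

Here `density` is the fraction of each fine-tune's delta weights that survives DARE's random pruning, and `weight` scales its contribution before TIES-style sign election resolves conflicts.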

I spent countless hours studying the latest research papers, attending conferences, and networking with experts in the field. I experimented with different algorithms and tactics, fine-tuned hyperparameters, tried various optimizers, and optimized code until I achieved the best possible results.

Thank you, OpenChat 3.5, for showing me the way.

Here is my contribution.

Prompt Template

Replace {system} with your system prompt and {prompt} with your instruction.

```
### System:
{system}

### User:
{prompt}

### Assistant:
```
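
As a usage sketch, the snippet below loads this AWQ quant with transformers (requires the autoawq package and a CUDA GPU) and fills in the template; the system and user strings are only illustrative examples.

```python
# Minimal sketch: run the AWQ quant locally with transformers + autoawq.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/Chimera-8B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system = "You are a helpful assistant."           # example system prompt
prompt = "Summarize the DARE-TIES merge method."  # example instruction

# Fill the prompt template exactly as documented above.
text = f"### System:\n{system}\n\n### User:\n{prompt}\n\n### Assistant:\n"

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```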