  e88 88e                               d8     
 d888 888b  8888 8888  ,"Y88b 888 8e   d88     
C8888 8888D 8888 8888 "8" 888 888 88b d88888   
 Y888 888P  Y888 888P ,ee 888 888 888  888     
  "88 88"    "88 88"  "88 888 888 888  888     
      b                                        
      8b,                                      
 
  e88'Y88                  d8           888    
 d888  'Y  ,"Y88b 888,8,  d88    ,e e,  888    
C8888     "8" 888 888 "  d88888 d88 88b 888    
 Y888  ,d ,ee 888 888     888   888   , 888    
  "88,d88 "88 888 888     888    "YeeP" 888    
                                               
PROUDLY PRESENTS         

Luca-MN-iMat-GGUF

Quantized with love from fp32.

  • Importance Matrix calculated using groups_merged.txt
  • 92 chunks
  • n_ctx=512
  • Importance Matrix calculated from fp32-precision model weights; the fp32.imatrix file will be added to this repo
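
For reference, here is a minimal sketch of how an importance matrix like this one is typically produced and applied with llama.cpp's llama-imatrix and llama-quantize tools, driven from Python. The file paths and the Q4_K_M target are placeholders, not the exact commands used for this repo:

```python
import subprocess

# Compute the importance matrix from the fp32 model over the calibration
# text, in 512-token chunks (matching n_ctx=512 above).
subprocess.run([
    "llama-imatrix",
    "-m", "Luca-MN-f32.gguf",    # placeholder path to the fp32 GGUF
    "-f", "groups_merged.txt",   # calibration data named above
    "-c", "512",                 # context size per chunk
    "-o", "fp32.imatrix",        # output imatrix file
], check=True)

# Apply the imatrix while quantizing, e.g. to Q4_K_M.
subprocess.run([
    "llama-quantize",
    "--imatrix", "fp32.imatrix",
    "Luca-MN-f32.gguf",
    "Luca-MN-Q4_K_M.gguf",
    "Q4_K_M",
], check=True)
```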

Original model README reproduced below:


Luca-MN

This thing was just intended as an experiment, but it turned out quite well. I had it both name itself and write its own image-generation prompt.

Created by running a high-r LoRA pass over Nemo-Base for 2 epochs on some RP data, then a low-r pass for 0.5 epochs on the c2 data, then 3 epochs of DPO using jondurbin/gutenberg-dpo-v0.1.
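
As a rough illustration only, the two LoRA passes might look like the following peft configs; the actual ranks, alphas, and target modules are not stated above, so the values here are hypothetical:

```python
from peft import LoraConfig

# Hypothetical "high-r" pass for the RP data (rank not stated by the author).
high_r = LoraConfig(r=128, lora_alpha=128, target_modules="all-linear",
                    task_type="CAUSAL_LM")

# Hypothetical "low-r" pass for the c2 data.
low_r = LoraConfig(r=16, lora_alpha=16, target_modules="all-linear",
                   task_type="CAUSAL_LM")
```

The final DPO stage on jondurbin/gutenberg-dpo-v0.1 would typically be run with a preference-optimization trainer such as trl's DPOTrainer.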

Prompting

Use the Mistral V3-Tekken context and instruct templates. A temperature of about 1.25 seems to be the sweet spot, with either MinP at 0.05 or TopP at 0.9. Add DRY/smoothing etc. depending on your preference.
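
Below is a minimal sketch of these sampler settings using the llama-cpp-python bindings. The quant filename and the exact V3-Tekken prompt string are assumptions; rely on your frontend's built-in template for the authoritative format:

```python
from llama_cpp import Llama

# Placeholder filename; pick whichever quant from this repo you downloaded.
llm = Llama(model_path="Luca-MN-Q5_K_M.gguf", n_ctx=8192)

# Mistral-style instruct tags (assumed shape of the V3-Tekken template).
prompt = "[INST]Write a short scene introduction.[/INST]"

out = llm(
    prompt,
    max_tokens=256,
    temperature=1.25,  # suggested sweet spot
    min_p=0.05,        # MinP variant ...
    top_p=1.0,         # ... or drop min_p and set top_p=0.9 instead
)
print(out["choices"][0]["text"])
```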

Format: GGUF
Model size: 12.2B params
Architecture: llama

Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

