---
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- mistral
inference: false
license: apache-2.0
---

This repository hosts GGUF-IQ-Imatrix quants for [vicgalle/RoleBeagle-11B](https://huggingface.co/vicgalle/RoleBeagle-11B).
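
To grab one of these quants programmatically, a minimal sketch using `huggingface_hub` is shown below; the `repo_id` and `filename` are illustrative assumptions, so check this repository's actual file list for the exact names.

```python
from huggingface_hub import hf_hub_download

# Minimal sketch: download a single quant from this repository.
# NOTE: repo_id and filename are illustrative assumptions; substitute the
# actual repository id and the exact .gguf file name from the file list.
model_path = hf_hub_download(
    repo_id="Lewdiculous/RoleBeagle-11B-GGUF-IQ-Imatrix",
    filename="RoleBeagle-11B-Q4_K_M-imat.gguf",
)
print(model_path)  # local path, ready to load with llama.cpp
```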

**What does "Imatrix" mean?**

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process.
The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
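
For intuition, here is a minimal toy sketch (not llama.cpp's actual code; all names and numbers are made up for illustration) of how per-channel importance derived from calibration activations can weight the quantization error:

```python
import numpy as np

# Toy illustration of the idea (NOT llama.cpp's implementation): estimate
# per-channel importance from calibration activations, then use it to
# weight the quantization error so that channels which fire strongly
# during calibration are rounded more carefully.

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 8))        # 4 output rows, 8 input channels
activations = rng.normal(size=(512, 8))  # calibration activations per channel
activations[:, 0] *= 10.0                # pretend channel 0 matters a lot

# llama.cpp's imatrix tool accumulates sums of squared activations per
# channel; this toy "importance matrix" is just that diagonal.
importance = np.mean(activations ** 2, axis=0)

def quantize(w, scale):
    """Crude symmetric round-to-nearest quantization."""
    return np.round(w / scale) * scale

def weighted_error(w, wq):
    """Quantization error weighted by channel importance."""
    return float(np.sum(importance * (w - wq) ** 2))

# Choose the scale that minimizes the *weighted* error rather than the
# plain squared error, preserving the important channel more accurately.
scales = np.linspace(0.05, 0.5, 10)
best = min(scales, key=lambda s: weighted_error(weights, quantize(weights, s)))
print(f"chosen scale: {best:.3f}")
```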

For the imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used; you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt). The roleplay chats were added just to bring a bit more diversity to the calibration data.

**Steps:**

```
Base ⇢ GGUF(F16) ⇢ Imatrix-Data(F16) ⇢ GGUF(Imatrix-Quants)
```
*Using the latest llama.cpp at the time.*
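
In more concrete terms, the first two stages might look like the sketch below, assuming a local llama.cpp build; the converter script name, model paths, and output file names are assumptions (the converter script in particular has been renamed across llama.cpp versions).

```python
import subprocess

# Sketch of the first two pipeline stages, assuming a local llama.cpp
# checkout. Script/binary names and file paths are assumptions.

# Base -> GGUF(F16): convert the HF checkpoint to a full-precision GGUF.
subprocess.run(
    ["python", "convert-hf-to-gguf.py", "RoleBeagle-11B",
     "--outtype", "f16", "--outfile", "RoleBeagle-11B-F16.gguf"],
    check=True,
)

# GGUF(F16) -> Imatrix-Data(F16): compute the importance matrix from the
# calibration text described above.
subprocess.run(
    ["./imatrix", "-m", "RoleBeagle-11B-F16.gguf",
     "-f", "imatrix-with-rp-format-data.txt", "-o", "imatrix.dat"],
    check=True,
)
```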

```python
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
]
```
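
Each entry in that list corresponds to one run of llama.cpp's `quantize` tool with the `--imatrix` flag; a rough sketch, reusing the assumed file names from above:

```python
import subprocess

# Sketch: produce one imatrix quant per requested option.
# File names are assumptions carried over from the sketch above.
for option in quantization_options:
    subprocess.run(
        ["./quantize", "--imatrix", "imatrix.dat",
         "RoleBeagle-11B-F16.gguf",
         f"RoleBeagle-11B-{option}-imat.gguf", option],
        check=True,
    )
```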

**Submitted card image:**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/oTFe-bLD1qiOHMeAZMkRr.png)

## Original model information:

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vicgalle__RoleBeagle-11B).

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |76.06|
|AI2 Reasoning Challenge (25-Shot)|72.35|
|HellaSwag (10-Shot)              |89.77|
|MMLU (5-Shot)                    |66.35|
|TruthfulQA (0-shot)              |77.92|
|Winogrande (5-shot)              |84.06|
|GSM8k (5-shot)                   |65.88|