Update README.md
README.md CHANGED:

````diff
@@ -1,21 +1,19 @@
 ---
 license: apache-2.0
-tags:
-- mlx
 ---
+# GreenBit LLMs
 
-This quantized low-bit model was converted to MLX format from [`GreenBitAI/Llama-3-8B-layer-mix-bpw-2.2`]().
-Refer to the [original model card](https://huggingface.co/GreenBitAI/Llama-3-8B-layer-mix-bpw-2.2) for more details on the model.
-## Use with mlx
+These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance.
 
-```
-pip install gbx-lm
-```
+Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
 
+| **Repository (Llama 3 Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** |
+|:----------------------------------------|:------------:|:----------:|:---------:|:-----------:|:-----------:|:---------:|:--------:|:---------:|:--------:|:-----------:|:-----------:|:-----------:|:-------:|
+| `Llama-3-8B-layer-mix-bpw-2.2`          | 0.499        | 0.302      | 0.739     | 0.674       | 0.509       | 0.396     | 0.725    | 0.743     | 0.406    | 0.327       | 0.337       | 0.340       | 0.500   |
+| `Llama-3-8B-layer-mix-bpw-2.5`          | 0.506        | 0.298      | 0.760     | 0.684       | 0.513       | 0.418     | 0.744    | 0.756     | 0.389    | 0.335       | 0.335       | 0.335       | 0.509   |
+| `Llama-3-8B-layer-mix-bpw-3.0`          | 0.523        | 0.318      | 0.770     | 0.708       | 0.540       | 0.441     | 0.767    | 0.784     | 0.407    | 0.333       | 0.345       | 0.343       | 0.526   |
+| `Llama-3-8B-layer-mix-bpw-4.0`          | 0.542        | 0.338      | 0.791     | 0.729       | 0.591       | 0.484     | 0.797    | 0.799     | 0.398    | 0.337       | 0.345       | 0.352       | 0.545   |
+| `Llama-3-8B-instruct-layer-mix-bpw-2.2` | 0.514        | 0.292      | 0.645     | 0.672       | 0.499       | 0.367     | 0.698    | 0.775     | 0.423    | 0.417       | 0.424       | 0.398       | 0.565   |
+| `Llama-3-8B-instruct-layer-mix-bpw-2.5` | 0.528        | 0.304      | 0.741     | 0.681       | 0.512       | 0.412     | 0.749    | 0.798     | 0.425    | 0.417       | 0.410       | 0.390       | 0.498   |
+| `Llama-3-8B-instruct-layer-mix-bpw-3.0` | 0.547        | 0.316      | 0.787     | 0.690       | 0.530       | 0.459     | 0.768    | 0.800     | 0.437    | 0.435       | 0.417       | 0.387       | 0.548   |
+| `Llama-3-8B-instruct-layer-mix-bpw-4.0` | 0.576        | 0.344      | 0.808     | 0.716       | 0.569       | 0.513     | 0.778    | 0.825     | 0.449    | 0.462       | 0.449       | 0.432       | 0.578   |
````
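For context, the usage snippet removed by this diff only showed `pip install gbx-lm`. A minimal sketch of what running one of these MLX conversions might look like, assuming `gbx-lm` exposes an `mlx-lm`-style `load`/`generate` API (the function names, parameters, and model id below are assumptions, not confirmed by this diff, and MLX requires Apple silicon):

```python
# Sketch only: assumes gbx-lm mirrors the mlx-lm API (load/generate).
# Requires `pip install gbx-lm` and an Apple-silicon machine with MLX.
from gbx_lm import load, generate

# Hypothetical usage; model id taken from the table above.
model, tokenizer = load("GreenBitAI/Llama-3-8B-layer-mix-bpw-2.2")
text = generate(model, tokenizer, prompt="What is low-bit quantization?")
print(text)
```

The GreenBitAI [GitHub page](https://github.com/GreenBitAI/green-bit-llm) linked in the new README is the authoritative source for the actual inference code.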