legraphista committed 1d94f42 (1 parent: a8b3289)

Upload README.md with huggingface_hub

Files changed (1): README.md added (+122, -0)
---
base_model: nvidia/Minitron-8B-Base
inference: false
library_name: gguf
license: other
license_link: https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
license_name: nvidia-open-model-license
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- quantization
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
---

# Minitron-8B-Base-GGUF
_Llama.cpp static quantization of nvidia/Minitron-8B-Base_

Original Model: [nvidia/Minitron-8B-Base](https://huggingface.co/nvidia/Minitron-8B-Base)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3600](https://github.com/ggerganov/llama.cpp/releases/tag/b3600)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
  - [Common Quants](#common-quants)
  - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
  - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
  - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
  - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Minitron-8B-Base.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q4_K | Q4_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q3_K | Q3_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q2_K | Q2_K | - | ⏳ Processing | ⚪ Static | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Minitron-8B-Base.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.FP16 | F16 | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q4_K | Q4_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q4_K_S | Q4_K_S | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.IQ4_NL | IQ4_NL | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.IQ4_XS | IQ4_XS | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q3_K | Q3_K | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q3_K_L | Q3_K_L | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q3_K_S | Q3_K_S | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.IQ3_M | IQ3_M | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.IQ3_S | IQ3_S | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.IQ3_XS | IQ3_XS | - | ⏳ Processing | ⚪ Static | - |
| Minitron-8B-Base.Q2_K | Q2_K | - | ⏳ Processing | ⚪ Static | - |

## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Minitron-8B-Base-GGUF --include "Minitron-8B-Base.Q8_0.gguf" --local-dir ./
```
If the model file is large, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/Minitron-8B-Base-GGUF --include "Minitron-8B-Base.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
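
If you prefer to script the download, the same can be done from Python with the `huggingface_hub` library. This is a minimal sketch, not the only way to do it; the filenames below assume the Q8_0 quant shown above:
```
# Sketch: download a quant with the huggingface_hub Python API instead of the CLI.
from huggingface_hub import hf_hub_download, snapshot_download

# Single-file quant: returns the local path of the downloaded GGUF.
path = hf_hub_download(
    repo_id="legraphista/Minitron-8B-Base-GGUF",
    filename="Minitron-8B-Base.Q8_0.gguf",
    local_dir="./",
)
print(path)

# Split quant: download every chunk in the folder, then merge as described in the FAQ.
snapshot_download(
    repo_id="legraphista/Minitron-8B-Base-GGUF",
    allow_patterns=["Minitron-8B-Base.Q8_0/*"],
    local_dir="./",
)
```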

---

## Inference

### Llama.cpp
```
llama.cpp/main -m Minitron-8B-Base.Q8_0.gguf --color -i -p "prompt here"
```
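
The quant can also be loaded from Python via the `llama-cpp-python` bindings (`pip install llama-cpp-python`). A minimal sketch, assuming the Q8_0 file sits in the working directory:
```
# Sketch: run the GGUF quant through the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(
    model_path="Minitron-8B-Base.Q8_0.gguf",  # path to the quant you downloaded
    n_ctx=4096,                               # context window, adjust to taste
)

# Minitron-8B-Base is a base model, so use plain text completion (no chat template).
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```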

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as per hellaswag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
    - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
    - Download the appropriate zip for your system from the latest release
    - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Minitron-8B-Base.Q8_0`)
3. Run `gguf-split --merge Minitron-8B-Base.Q8_0/Minitron-8B-Base.Q8_0-00001-of-XXXXX.gguf Minitron-8B-Base.Q8_0.gguf`
    - Make sure to point `gguf-split` to the first chunk of the split (a scripted version is sketched below).

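If you would rather script the merge, here is a minimal Python sketch; it assumes `gguf-split` is on your `PATH` and that the chunk folder follows the naming above:
```
# Sketch: merge a split GGUF by pointing gguf-split at the first chunk.
import glob
import subprocess

chunk_dir = "Minitron-8B-Base.Q8_0"    # folder holding the chunks
output = "Minitron-8B-Base.Q8_0.gguf"  # merged output file

# The first chunk is the one named *-00001-of-*.gguf.
first_chunk = sorted(glob.glob(f"{chunk_dir}/*-00001-of-*.gguf"))[0]

# Equivalent to: gguf-split --merge <first_chunk> <output>
subprocess.run(["gguf-split", "--merge", first_chunk, output], check=True)
```
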
---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!