---
base_model: Writer/Palmyra-Fin-70B-32K
extra_gated_fields:
  Email: text
  I acknowledge that this model is for non-commercial use only unless I acquire a separate license from Writer: checkbox
  Name: text
  Organization or Affiliation: text
  Receive email updates and promotions on Writer products, services, and research?:
    options:
    - 'Yes'
    - 'No'
    type: select
extra_gated_prompt: By clicking "Agree", you agree to the [License Agreement](https://writer.com/legal/open-model-license/)
  and acknowledge Writer's [Privacy Policy](https://writer.com/legal/acceptable-use/).
inference: false
language:
- en
library_name: gguf
license: other
license_link: https://writer.com/legal/open-model-license/
license_name: writer-open-model-license
model-index:
- name: Palmyra-Fin-70B-32k
  results: []
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- instruct
- pytorch
- finance
- stock market
- candlesticks
- FinGPT
- option trading
- future stock prediction
- trends prediction
- Enterprise LLM
- Enterprise
- Enterprise ready
- Banks
- Wealth Management
- quantized
- GGUF
- quantization
- imat
- imatrix
- static
- 32bit
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---

# Palmyra-Fin-70B-32K-IMat-GGUF
_Llama.cpp imatrix quantization of Writer/Palmyra-Fin-70B-32K_

Original Model: [Writer/Palmyra-Fin-70B-32K](https://huggingface.co/Writer/Palmyra-Fin-70B-32K)
Original dtype: `BF16` (`bfloat16`)
Quantized by: llama.cpp [b3504](https://github.com/ggerganov/llama.cpp/releases/tag/b3504)
IMatrix dataset: [here](https://gist.githubusercontent.com/bartowski1182/eb213dccb3571f863da82e99418f81e8/raw/b2869d80f5c16fd7082594248e80144677736635/calibration_datav3.txt)

- [Files](#files)
  - [IMatrix](#imatrix)
  - [Common Quants](#common-quants)
  - [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
  - [Simple chat template](#simple-chat-template)
  - [Chat template with system prompt](#chat-template-with-system-prompt)
  - [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
  - [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
  - [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)

---

## Files

### IMatrix
Status: ⏳ Processing
Link: [here](https://huggingface.co/legraphista/Palmyra-Fin-70B-32K-IMat-GGUF/blob/main/imatrix.dat)

### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Palmyra-Fin-70B-32K.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| Palmyra-Fin-70B-32K.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| Palmyra-Fin-70B-32K.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |

### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| Palmyra-Fin-70B-32K.F32 | F32 | - | ⏳ Processing | ⚪ Static | - |
| Palmyra-Fin-70B-32K.BF16 | BF16 | - | ⏳ Processing | ⚪ Static | - |
| Palmyra-Fin-70B-32K.FP16 | F16 | - | ⏳ Processing | ⚪ Static | - |
| Palmyra-Fin-70B-32K.Q8_0 | Q8_0 | - | ⏳ Processing | ⚪ Static | - |
| Palmyra-Fin-70B-32K.Q6_K | Q6_K | - | ⏳ Processing | ⚪ Static | - |
| Palmyra-Fin-70B-32K.Q5_K | Q5_K | - | ⏳ Processing | ⚪ Static | - |
| Palmyra-Fin-70B-32K.Q5_K_S | Q5_K_S | - | ⏳ Processing | ⚪ Static | - |
| Palmyra-Fin-70B-32K.Q4_K | Q4_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.Q3_K | Q3_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.Q2_K | Q2_K | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 IMatrix | - |
| Palmyra-Fin-70B-32K.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |

## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Palmyra-Fin-70B-32K-IMat-GGUF --include "Palmyra-Fin-70B-32K.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. To download them all to a local folder, run:
```
huggingface-cli download legraphista/Palmyra-Fin-70B-32K-IMat-GGUF --include "Palmyra-Fin-70B-32K.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
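
If you prefer Python, the same download can be scripted with the `huggingface_hub` library. This is a minimal sketch; the pattern and target directory are placeholders, so swap in the quant you actually want:
```python
# Minimal sketch: fetch one quant (a single file, or all chunks of a split
# quant) via huggingface_hub. The pattern and local_dir are placeholders.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="legraphista/Palmyra-Fin-70B-32K-IMat-GGUF",
    allow_patterns=["Palmyra-Fin-70B-32K.Q8_0*"],  # matches the file or the split folder
    local_dir="./",
)
```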

---

## Inference

### Simple chat template
```
<|begin_of_text|><|start_header_id|>user<|end_header_id|>

{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant_response}<|eot_id|><|start_header_id|>user<|end_header_id|>

{next_user_prompt}<|eot_id|>
```

### Chat template with system prompt
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant_response}<|eot_id|><|start_header_id|>user<|end_header_id|>

{next_user_prompt}<|eot_id|>
```
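
If you build the prompt string yourself, a small helper along these lines (a sketch, not part of llama.cpp or this model card) renders the template above from a list of messages:
```python
# Sketch: render the Llama-3-style chat template shown above from
# {"role": ..., "content": ...} messages, ending with an assistant header
# so the model continues as the assistant.
def render_chat(messages: list[dict]) -> str:
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
        prompt += f"{msg['content']}<|eot_id|>"
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

print(render_chat([
    {"role": "system", "content": "You are a financial analyst."},
    {"role": "user", "content": "Explain a bullish engulfing candlestick."},
]))
```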

### Llama.cpp
```
llama.cpp/llama-cli -m Palmyra-Fin-70B-32K.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
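
The same GGUF can also be driven from Python through the third-party `llama-cpp-python` bindings, which apply the chat template for you. A sketch under assumptions: the bindings are installed, and the model path, context size, and GPU offload suit your machine:
```python
# Sketch: inference via the llama-cpp-python bindings
# (pip install llama-cpp-python). Paths and sizes are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="Palmyra-Fin-70B-32K.Q8_0.gguf",
    n_ctx=32768,      # the model supports a 32K context window
    n_gpu_layers=-1,  # offload all layers to GPU when available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What drives option time decay?"}]
)
print(out["choices"][0]["message"]["content"])
```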

---

## FAQ

### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as measured by HellaSwag results).

### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
   - Download the appropriate zip for your system from the latest release
   - Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Palmyra-Fin-70B-32K.Q8_0`)
3. Run `gguf-split --merge Palmyra-Fin-70B-32K.Q8_0/Palmyra-Fin-70B-32K.Q8_0-00001-of-XXXXX.gguf Palmyra-Fin-70B-32K.Q8_0.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split.

---

Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)!