TheBloke committed on
Commit
9c9441a
1 Parent(s): 71dc61f

Upload README.md

Files changed (1)
  1. README.md +42 -23
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  inference: false
- license: other
  model_creator: Austism
  model_link: https://huggingface.co/Austism/chronos-hermes-13b-v2
  model_name: Chronos Hermes 13B v2
@@ -16,17 +16,20 @@ tags:
  ---

  <!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
  <!-- header end -->

  # Chronos Hermes 13B v2 - GGML
@@ -37,6 +40,13 @@ tags:

  This repo contains GGML format model files for [Austism's Chronos Hermes 13B v2](https://huggingface.co/Austism/chronos-hermes-13b-v2).

  GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
  * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
  * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for story telling.
@@ -48,7 +58,8 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
  ## Repositories available

  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/Austism/chronos-hermes-13b-v2-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML)
  * [Austism's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Austism/chronos-hermes-13b-v2)

  ## Prompt template: Alpaca
@@ -60,20 +71,19 @@ Below is an instruction that describes a task. Write a response that appropriate
  {prompt}

  ### Response:
  ```

  <!-- compatibility_ggml start -->
  ## Compatibility

- ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
-
- These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.

- ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

- These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.

- They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.

  ## Explanation of the new k-quant methods
  <details>
@@ -96,17 +106,17 @@ Refer to the Provided Files table below to see what files use which methods, and
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
  | [chronos-hermes-13b-v2.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q2_K.bin) | q2_K | 2 | 5.74 GB| 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
- | [chronos-hermes-13b-v2.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 7.14 GB| 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
- | [chronos-hermes-13b-v2.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.53 GB| 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
  | [chronos-hermes-13b-v2.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 5.87 GB| 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
  | [chronos-hermes-13b-v2.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q4_0.bin) | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
- | [chronos-hermes-13b-v2.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
- | [chronos-hermes-13b-v2.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 8.06 GB| 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
  | [chronos-hermes-13b-v2.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 7.56 GB| 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
  | [chronos-hermes-13b-v2.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q5_0.bin) | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | [chronos-hermes-13b-v2.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
- | [chronos-hermes-13b-v2.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.40 GB| 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
  | [chronos-hermes-13b-v2.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 9.15 GB| 11.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
  | [chronos-hermes-13b-v2.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q6_K.bin) | q6_K | 6 | 10.83 GB| 13.33 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
  | [chronos-hermes-13b-v2.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q8_0.bin) | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

@@ -114,22 +124,29 @@ Refer to the Provided Files table below to see what files use which methods, and

  ## How to run in `llama.cpp`

- I use the following command line; adjust for your tastes and needs:

  ```
- ./main -t 10 -ngl 32 -m chronos-hermes-13b-v2.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
  ```
  Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.

  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

  ## How to run in `text-generation-webui`

- Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

  <!-- footer start -->
  ## Discord

  For further support, and discussions on these models and AI in general, join us at:
@@ -149,13 +166,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

- **Patreon special mentions**: Willem Michiel, Ajan Kanaga, Cory Kujawski, Alps Aficionado, Nikolai Manek, Jonathan Leane, Stanislav Ovsiannikov, Michael Levine, Luke Pendergrass, Sid, K, Gabriel Tamborski, Clay Pascal, Kalila, William Sang, Will Dee, Pieter, Nathan LeClaire, ya boyyy, David Flickinger, vamX, Derek Yates, Fen Risland, Jeffrey Morgan, webtim, Daniel P. Andersen, Chadd, Edmond Seymore, Pyrater, Olusegun Samson, Lone Striker, biorpg, alfie_i, Mano Prime, Chris Smitley, Dave, zynix, Trenton Dambrowitz, Johann-Peter Hartmann, Magnesian, Spencer Kim, John Detwiler, Iucharbius, Gabriel Puliatti, LangChain4j, Luke @flexchar, Vadim, Rishabh Srivastava, Preetika Verma, Ai Maven, Femi Adebogun, WelcomeToTheClub, Leonard Tan, Imad Khwaja, Steven Wood, Stefan Sabev, Sebastain Graf, usrbinkat, Dan Guido, Sam, Eugene Pentland, Mandus, transmissions 11, Slarti, Karl Bernard, Spiking Neurons AB, Artur Olbinski, Joseph William Delisle, ReadyPlayerEmma, Olakabola, Asp the Wyvern, Space Cruiser, Matthew Berman, Randy H, subjectnull, danny, John Villwock, Illia Dulskyi, Rainer Wilmers, theTransient, Pierre Kircher, Alexandros Triantafyllidis, Viktor Bowallius, terasurfer, Deep Realms, SuperWojo, senxiiz, Oscar Rangel, Alex, Stephen Murray, Talal Aujan, Raven Klaugh, Sean Connelly, Raymond Fosdick, Fred von Graf, chris gileta, Junyu Yang, Elle


  Thank you to all my generous patrons and donaters!

  <!-- footer end -->

  # Original model card: Austism's Chronos Hermes 13B v2
 
@@ -1,6 +1,6 @@
  ---
  inference: false
+ license: llama2
  model_creator: Austism
  model_link: https://huggingface.co/Austism/chronos-hermes-13b-v2
  model_name: Chronos Hermes 13B v2
@@ -16,17 +16,20 @@ tags:
  ---

  <!-- header start -->
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </div>
  <div style="display: flex; justify-content: space-between; width: 100%;">
  <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
  </div>
  <div style="display: flex; flex-direction: column; align-items: flex-end;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
  </div>
  </div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
  <!-- header end -->

  # Chronos Hermes 13B v2 - GGML
@@ -37,6 +40,13 @@ tags:

  This repo contains GGML format model files for [Austism's Chronos Hermes 13B v2](https://huggingface.co/Austism/chronos-hermes-13b-v2).

+ ### Important note regarding GGML files.
+
+ The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third party clients and libraries are expected to still support it for a time, but many may also drop support.
+
+ Please use the GGUF models instead.
+ ### About GGML
+
  GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
  * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVidia CUDA GPU acceleration.
  * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for story telling.
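
As a quick illustration of the kind of library support referred to above, here is a minimal sketch (not part of the committed README) of loading one of these GGML files from Python with `ctransformers`, one of the GGML-capable libraries this README mentions. The file choice and generation settings below are just examples:

```python
# Minimal sketch: load a GGML quant of this model with ctransformers
# and run a single completion. ctransformers fetches the file from the
# Hugging Face Hub when given a repo id plus model_file.
from ctransformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Chronos-Hermes-13B-v2-GGML",
    model_file="chronos-hermes-13b-v2.ggmlv3.q4_K_M.bin",
    model_type="llama",   # Chronos Hermes 13B v2 is a Llama-family model
    gpu_layers=32,        # set to 0 for CPU-only inference
)

print(llm("Write a short story about llamas.", max_new_tokens=128))
```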
 
@@ -48,7 +58,8 @@ GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/gger
  ## Repositories available

  * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/Austism/chronos-hermes-13b-v2-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronos-Hermes-13b-v2-GGUF)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML)
  * [Austism's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Austism/chronos-hermes-13b-v2)

  ## Prompt template: Alpaca
@@ -60,20 +71,19 @@ Below is an instruction that describes a task. Write a response that appropriate
  {prompt}

  ### Response:
+
  ```
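
For reference, here is a small sketch (not from the original README) of assembling this Alpaca-style prompt in Python before passing it to any of the clients above; the instruction text is only a placeholder:

```python
# Wrap a user instruction in the Alpaca template used by this model.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Return the full prompt string expected by Chronos Hermes 13B v2."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Write a short story about llamas."))
```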

  <!-- compatibility_ggml start -->
  ## Compatibility

+ These quantised GGML files are compatible with llama.cpp between June 6th (commit `2d43387`) and August 21st 2023.

+ For support with latest llama.cpp, please use GGUF files instead.

+ The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)

+ As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.

  ## Explanation of the new k-quant methods
  <details>
@@ -96,17 +106,17 @@ Refer to the Provided Files table below to see what files use which methods, and
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
  | [chronos-hermes-13b-v2.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q2_K.bin) | q2_K | 2 | 5.74 GB| 8.24 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
  | [chronos-hermes-13b-v2.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 5.87 GB| 8.37 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+ | [chronos-hermes-13b-v2.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.53 GB| 9.03 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | [chronos-hermes-13b-v2.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 7.14 GB| 9.64 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
  | [chronos-hermes-13b-v2.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q4_0.bin) | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
  | [chronos-hermes-13b-v2.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 7.56 GB| 10.06 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+ | [chronos-hermes-13b-v2.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 8.06 GB| 10.56 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+ | [chronos-hermes-13b-v2.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
  | [chronos-hermes-13b-v2.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q5_0.bin) | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
  | [chronos-hermes-13b-v2.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 9.15 GB| 11.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
+ | [chronos-hermes-13b-v2.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.40 GB| 11.90 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+ | [chronos-hermes-13b-v2.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
  | [chronos-hermes-13b-v2.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q6_K.bin) | q6_K | 6 | 10.83 GB| 13.33 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
  | [chronos-hermes-13b-v2.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML/blob/main/chronos-hermes-13b-v2.ggmlv3.q8_0.bin) | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
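
If it helps, here is a small sketch (not part of the original README) of fetching a single quant file from this repo with the `huggingface_hub` Python package, so you only download the variant whose size and RAM requirements suit your machine; the chosen filename is just an example:

```python
# Download one specific GGML quant file rather than the whole repo.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/Chronos-Hermes-13B-v2-GGML",
    filename="chronos-hermes-13b-v2.ggmlv3.q4_K_M.bin",  # ~8.06 GB on disk, ~10.56 GB max RAM per the table above
)
print(f"Model saved to: {model_path}")
```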

@@ -114,22 +124,29 @@ Refer to the Provided Files table below to see what files use which methods, and

  ## How to run in `llama.cpp`

+ Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
+
+ For compatibility with latest llama.cpp, please use GGUF files instead.

  ```
+ ./main -t 10 -ngl 32 -m chronos-hermes-13b-v2.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
  ```
  Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.

  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

+ Change `-c 2048` to the desired sequence length for this model. For example, `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
+
  If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
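
Equivalently, if you prefer Python over the `./main` binary, a rough sketch with `llama-cpp-python` (one of the libraries mentioned above) looks like the following. This is only an illustration: you would need a `llama-cpp-python` release old enough to still load GGML files, since newer versions expect GGUF.

```python
# Rough sketch: the same settings as the ./main example, via llama-cpp-python.
# Requires a llama-cpp-python build that still supports GGML (pre-GGUF).
from llama_cpp import Llama

llm = Llama(
    model_path="chronos-hermes-13b-v2.ggmlv3.q4_K_M.bin",
    n_ctx=2048,        # sequence length, like -c 2048
    n_threads=10,      # physical CPU cores, like -t 10
    n_gpu_layers=32,   # layers offloaded to GPU, like -ngl 32 (0 = CPU only)
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a story about llamas\n\n### Response:\n"
)

output = llm(
    prompt,
    max_tokens=512,
    temperature=0.7,       # like --temp 0.7
    repeat_penalty=1.1,    # like --repeat_penalty 1.1
    stop=["### Instruction:"],
)
print(output["choices"][0]["text"])
```
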
+
  ## How to run in `text-generation-webui`

+ Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

  <!-- footer start -->
+ <!-- 200823 -->
  ## Discord

  For further support, and discussions on these models and AI in general, join us at:
@@ -149,13 +166,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
  * Patreon: https://patreon.com/TheBlokeAI
  * Ko-Fi: https://ko-fi.com/TheBlokeAI

+ **Special thanks to**: Aemon Algiz.

+ **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser


  Thank you to all my generous patrons and donaters!

+ And thank you again to a16z for their generous grant.
+
  <!-- footer end -->

  # Original model card: Austism's Chronos Hermes 13B v2