TheBloke committed
Commit 061e307
1 Parent(s): ff1b473

Upload README.md

Files changed (1): README.md (+71 -41)
README.md CHANGED
@@ -5,62 +5,80 @@ inference: false
language:
- en
- de
- license: other
model_type: llama
---

<!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

- # flozi00's Llama 2 13B German Assistant v2 GGML

- These files are GGML format model files for [flozi00's Llama 2 13B German Assistant v2](https://huggingface.co/flozi00/Llama-2-13B-german-assistant-v2).

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
- * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with full GPU acceleration out of the box. Especially good for story telling.
- * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with GPU acceleration via the c_transformers backend.
- * [LM Studio](https://lmstudio.ai/), a fully featured local GUI. Supports full GPU accel on macOS. Also supports Windows, without GPU accel.
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Requires extra steps to enable GPU accel via llama.cpp backend.
- * [ctransformers](https://github.com/marella/ctransformers), a Python library with LangChain support and OpenAI-compatible AI server.
- * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with OpenAI-compatible API server.

Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware used to make and upload these files!

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML)
- * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/flozi00/Llama-2-13B-german-assistant-v2)

## Prompt template: OpenAssistant

```
- <|prompter|>{prompt} <|endoftext|> <|assistant|>
```

<!-- compatibility_ggml start -->
## Compatibility

- ### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
-
- These are guaranteed to be compatible with any UIs, tools and libraries released since late May. They may be phased out soon, as they are largely superseded by the new k-quant methods.

- ### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`

- These new quantisation methods are compatible with llama.cpp as of June 6th, commit `2d43387`.

- They are now also compatible with recent releases of text-generation-webui, KoboldCpp, llama-cpp-python, ctransformers, rustformers and most others. For compatibility with other tools and libraries, please check their documentation.

## Explanation of the new k-quant methods
<details>
@@ -79,43 +97,51 @@ Refer to the Provided Files table below to see what files use which methods, and
<!-- compatibility_ggml end -->

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
- | llama-2-13b-german-assistant-v2.ggmlv3.q2_K.bin | q2_K | 2 | 5.51 GB| 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
- | llama-2-13b-german-assistant-v2.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 6.93 GB| 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
- | llama-2-13b-german-assistant-v2.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 6.31 GB| 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
- | llama-2-13b-german-assistant-v2.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 5.66 GB| 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
- | llama-2-13b-german-assistant-v2.ggmlv3.q4_0.bin | q4_0 | 4 | 7.32 GB| 9.82 GB | Original quant method, 4-bit. |
- | llama-2-13b-german-assistant-v2.ggmlv3.q4_1.bin | q4_1 | 4 | 8.14 GB| 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
- | llama-2-13b-german-assistant-v2.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 7.87 GB| 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
- | llama-2-13b-german-assistant-v2.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 7.37 GB| 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
- | llama-2-13b-german-assistant-v2.ggmlv3.q5_0.bin | q5_0 | 5 | 8.95 GB| 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | llama-2-13b-german-assistant-v2.ggmlv3.q5_1.bin | q5_1 | 5 | 9.76 GB| 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
- | llama-2-13b-german-assistant-v2.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 9.23 GB| 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
- | llama-2-13b-german-assistant-v2.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 8.97 GB| 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
- | llama-2-13b-german-assistant-v2.ggmlv3.q6_K.bin | q6_K | 6 | 10.68 GB| 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
- | llama-2-13b-german-assistant-v2.ggmlv3.q8_0.bin | q8_0 | 8 | 13.83 GB| 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

## How to run in `llama.cpp`

- I use the following command line; adjust for your tastes and needs:

```
- ./main -t 10 -ngl 32 -m llama-2-13b-german-assistant-v2.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|prompter|>{prompt} <|endoftext|> <|assistant|>"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`

## How to run in `text-generation-webui`

- Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:
@@ -135,13 +161,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

- **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

- **Patreon special mentions**: Slarti, Chadd, John Detwiler, Pieter, zynix, K, Mano Prime, ReadyPlayerEmma, Ai Maven, Leonard Tan, Edmond Seymore, Joseph William Delisle, Luke @flexchar, Fred von Graf, Viktor Bowallius, Rishabh Srivastava, Nikolai Manek, Matthew Berman, Johann-Peter Hartmann, ya boyyy, Greatston Gnanesh, Femi Adebogun, Talal Aujan, Jonathan Leane, terasurfer, David Flickinger, William Sang, Ajan Kanaga, Vadim, Artur Olbinski, Raven Klaugh, Michael Levine, Oscar Rangel, Randy H, Cory Kujawski, RoA, Dave, Alex, Alexandros Triantafyllidis, Fen Risland, Eugene Pentland, vamX, Elle, Nathan LeClaire, Khalefa Al-Ahmad, Rainer Wilmers, subjectnull, Junyu Yang, Daniel P. Andersen, SuperWojo, LangChain4j, Mandus, Kalila, Illia Dulskyi, Trenton Dambrowitz, Asp the Wyvern, Derek Yates, Jeffrey Morgan, Deep Realms, Imad Khwaja, Pyrater, Preetika Verma, biorpg, Gabriel Tamborski, Stephen Murray, Spiking Neurons AB, Iucharbius, Chris Smitley, Willem Michiel, Luke Pendergrass, Sebastain Graf, senxiiz, Will Dee, Space Cruiser, Karl Bernard, Clay Pascal, Lone Striker, transmissions 11, webtim, WelcomeToTheClub, Sam, theTransient, Pierre Kircher, chris gileta, John Villwock, Sean Connelly, Willian Hasse

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: flozi00's Llama 2 13B German Assistant v2
@@ -149,6 +177,8 @@ Thank you to all my generous patrons and donaters!

## This project is sponsored by [ ![PrimeLine](https://www.primeline-solutions.com/skin/frontend/default/theme566/images/primeline-solutions-logo.png) ](https://www.primeline-solutions.com/de/server/nach-einsatzzweck/gpu-rendering-hpc/)

# Model Card

This model is a finetuned version for German instructions and conversations in the style of the Open Assistant tokens: "<|prompter|>", "<|endoftext|>", "<|assistant|>".

language:
- en
- de
+ license: llama2
+ model_creator: Florian Zimmermeister
+ model_link: https://huggingface.co/flozi00/Llama-2-13B-german-assistant-v2
+ model_name: Llama 2 13B German Assistant v2
model_type: llama
+ quantized_by: TheBloke
---

<!-- header start -->
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

+ # Llama 2 13B German Assistant v2 - GGML
+ - Model creator: [Florian Zimmermeister](https://huggingface.co/flozi00)
+ - Original model: [Llama 2 13B German Assistant v2](https://huggingface.co/flozi00/Llama-2-13B-german-assistant-v2)

+ ## Description
+
+ This repo contains GGML format model files for [flozi00's Llama 2 13B German Assistant v2](https://huggingface.co/flozi00/Llama-2-13B-german-assistant-v2).
+
+ ### Important note regarding GGML files
+
+ The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support it for a time, but many may also drop support.
+
+ Please use the GGUF models instead.
+
+ ### About GGML

GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most popular web UI. Supports NVIDIA CUDA GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
+ * [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration on both Windows (NVIDIA and AMD) and macOS.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with CUDA GPU acceleration via the c_transformers backend.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and an OpenAI-compatible AI server; see the sketch after this list.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server.
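
Of the Python options above, ctransformers can load these GGML files directly from this repo. A minimal sketch, assuming a GGML-era ctransformers release (current GGUF-only versions will not load these `.bin` files); the repo and file names come from the Provided Files table below, and `gpu_layers` is an assumed value to tune for your hardware:

```
from ctransformers import AutoModelForCausalLM

# Load a GGML quant straight from the Hugging Face repo.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/llama-2-13B-German-Assistant-v2-GGML",
    model_file="llama-2-13b-german-assistant-v2.ggmlv3.q4_K_M.bin",
    model_type="llama",  # use the Llama backend
    gpu_layers=32,       # set to 0 for CPU-only inference
)

prompt = "<|prompter|>Was ist die Hauptstadt von Deutschland?<|endoftext|><|assistant|>"
print(llm(prompt, max_new_tokens=128, temperature=0.7, repetition_penalty=1.1))
```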

Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware used to make and upload these files!

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGUF)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML)
+ * [Florian Zimmermeister's original unquantised fp16 model in PyTorch format, for GPU inference and for further conversions](https://huggingface.co/flozi00/Llama-2-13B-german-assistant-v2)

## Prompt template: OpenAssistant

```
+ <|prompter|>{prompt}<|endoftext|><|assistant|>
```
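
If you are scripting prompts, applying the template is a one-liner. A trivial helper (not part of the original README; the function name is illustrative):

```
def build_prompt(user_message: str) -> str:
    """Wrap a user message in the OpenAssistant-style template above."""
    return f"<|prompter|>{user_message}<|endoftext|><|assistant|>"

print(build_prompt("Schreibe eine kurze Geschichte über Lamas."))
# -> <|prompter|>Schreibe eine kurze Geschichte über Lamas.<|endoftext|><|assistant|>
```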

<!-- compatibility_ggml start -->
## Compatibility

+ These quantised GGML files are compatible with llama.cpp between June 6th 2023 (commit `2d43387`) and August 21st 2023.

+ For support with the latest llama.cpp, please use GGUF files instead.

+ The final llama.cpp commit with support for GGML was: [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)

+ As of August 23rd 2023 they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.

## Explanation of the new k-quant methods
<details>

<!-- compatibility_ggml end -->

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q2_K.bin) | q2_K | 2 | 5.51 GB | 8.01 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 5.66 GB | 8.16 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 6.31 GB | 8.81 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 6.93 GB | 9.43 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q4_0.bin) | q4_0 | 4 | 7.32 GB | 9.82 GB | Original quant method, 4-bit. |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 7.37 GB | 9.87 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 7.87 GB | 10.37 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q4_1.bin) | q4_1 | 4 | 8.14 GB | 10.64 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q5_0.bin) | q5_0 | 5 | 8.95 GB | 11.45 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 8.97 GB | 11.47 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 9.23 GB | 11.73 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q5_1.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q5_1.bin) | q5_1 | 5 | 9.76 GB | 12.26 GB | Original quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q6_K.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q6_K.bin) | q6_K | 6 | 10.68 GB | 13.18 GB | New k-quant method. Uses GGML_TYPE_Q8_K for all tensors - 6-bit quantization |
+ | [llama-2-13b-german-assistant-v2.ggmlv3.q8_0.bin](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGML/blob/main/llama-2-13b-german-assistant-v2.ggmlv3.q8_0.bin) | q8_0 | 8 | 13.83 GB | 16.33 GB | Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
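
To fetch a single quant file rather than cloning the whole repo, the `huggingface_hub` Python package can download one file by name. A sketch, using the q4_K_M file from the table above:

```
from huggingface_hub import hf_hub_download

# Downloads one file from the repo and returns its local path.
model_path = hf_hub_download(
    repo_id="TheBloke/llama-2-13B-German-Assistant-v2-GGML",
    filename="llama-2-13b-german-assistant-v2.ggmlv3.q4_K_M.bin",
)
print(model_path)
```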

## How to run in `llama.cpp`

+ Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
+
+ For compatibility with the latest llama.cpp, please use GGUF files instead.

```
+ ./main -t 10 -ngl 32 -m llama-2-13b-german-assistant-v2.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|prompter|>Write a story about llamas<|endoftext|><|assistant|>"
```
Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

Change `-ngl 32` to the number of layers to offload to the GPU. Remove it if you don't have GPU acceleration.

+ Change `-c 2048` to the desired sequence length for this model. For example, use `-c 4096` for a Llama 2 model. For models that use RoPE scaling, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
+
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

+ For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
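
The CLI flags above map directly onto llama-cpp-python for anyone driving the model from Python instead of the shell. A sketch, assuming a GGML-era llama-cpp-python release (roughly 0.1.78 or earlier; later releases load only GGUF):

```
from llama_cpp import Llama

# n_threads ~ -t, n_gpu_layers ~ -ngl, n_ctx ~ -c in the command above.
llm = Llama(
    model_path="llama-2-13b-german-assistant-v2.ggmlv3.q4_K_M.bin",
    n_ctx=2048,
    n_threads=10,
    n_gpu_layers=32,  # remove or set to 0 if you have no GPU acceleration
)

output = llm(
    "<|prompter|>Write a story about llamas<|endoftext|><|assistant|>",
    max_tokens=256,
    temperature=0.7,
    repeat_penalty=1.1,
    stop=["<|endoftext|>"],
)
print(output["choices"][0]["text"])
```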
## How to run in `text-generation-webui`

+ Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).

<!-- footer start -->
+ <!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:
 
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

+ **Special thanks to**: Aemon Algiz.

+ **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

Thank you to all my generous patrons and donaters!

+ And thank you again to a16z for their generous grant.
<!-- footer end -->

# Original model card: flozi00's Llama 2 13B German Assistant v2

## This project is sponsored by [ ![PrimeLine](https://www.primeline-solutions.com/skin/frontend/default/theme566/images/primeline-solutions-logo.png) ](https://www.primeline-solutions.com/de/server/nach-einsatzzweck/gpu-rendering-hpc/)

+ Please use V3 of this model instead.
+
# Model Card

This model is a finetuned version for German instructions and conversations in the style of the Open Assistant tokens: "<|prompter|>", "<|endoftext|>", "<|assistant|>".