TheBloke committed on
Commit bb79e9c
1 Parent(s): ef753f4

Update README.md

Files changed (1):
  1. README.md +2 -4
README.md CHANGED

@@ -14,9 +14,7 @@ license: cc-by-nc-4.0
 model_creator: Charles Goddard
 model_name: MixtralRPChat ZLoss
 model_type: mixtral
-prompt_template: '***System: {system_message} ***Query: {prompt} ***Response:
-
-  '
+prompt_template: '***System: {system_message} ***Query: {prompt} ***Response:'
 quantized_by: TheBloke
 tags:
 - mixtral

@@ -66,7 +64,7 @@ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization metho
 
 AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
 
-AWQ models are supported by (note that note all of these may support Mixtral models yet):
+AWQ models are supported by (note that not all of these may support Mixtral models yet):
 
 - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
 - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
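
For context, a minimal sketch of how the corrected `prompt_template` and the vLLM support noted in the diff might be used together. The repo id `TheBloke/MixtralRPChat-ZLoss-AWQ` is an assumption inferred from the `model_name` and `quantized_by` metadata fields, not something this diff confirms, and vLLM 0.2.2 or later is required as stated above:

```python
# Sketch only: the repo id below is an assumed name based on the metadata fields,
# and vLLM >= 0.2.2 is required, as the README excerpt above notes.
from vllm import LLM, SamplingParams

# The prompt_template from the corrected YAML metadata above.
template = "***System: {system_message} ***Query: {prompt} ***Response:"
prompt = template.format(
    system_message="You are a helpful roleplay assistant.",
    prompt="Introduce yourself in one sentence.",
)

# quantization="awq" tells vLLM to load the AWQ-quantized weights.
llm = LLM(model="TheBloke/MixtralRPChat-ZLoss-AWQ", quantization="awq")
outputs = llm.generate([prompt], SamplingParams(temperature=0.7, max_tokens=128))
print(outputs[0].outputs[0].text)
```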