Update README.md
README.md CHANGED
@@ -13,6 +13,11 @@ The code for merging is provided in the [WizardLM official Github repo](https://
 
 This repo contains GGML files for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
 
+## Other repositories available
+
+* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/wizardLM-7B-GPTQ)
+* [Unquantised model in HF format](https://huggingface.co/TheBloke/wizardLM-7B-HF)
+
 ## Provided files
 | Name | Quant method | Bits | Size | RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
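
As a usage sketch (not part of the commit itself), the GGML files this README describes can be run on CPU via llama.cpp or its bindings. The example below assumes the third-party llama-cpp-python bindings and a hypothetical quantised filename taken from the "Provided files" table; it is only a minimal illustration, not the repository's documented workflow.

```python
# Minimal CPU-inference sketch using the llama-cpp-python bindings
# (an assumption -- the README only references llama.cpp itself).
# The model filename below is hypothetical; substitute whichever
# quantised GGML file from the "Provided files" table you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="wizardLM-7B.ggml.q4_0.bin", n_threads=8)

prompt = "### Instruction: Explain what GGML quantisation is.\n### Response:"
result = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(result["choices"][0]["text"])
```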