Update README.md
README.md CHANGED
@@ -15,7 +15,7 @@ I can't guarantee that the two 128g files will work in only 40GB of VRAM.
 
 I haven't specifically tested VRAM requirements yet but will aim to do so at some point. If you have any experiences to share, please do so in the comments.
 
-If you want to try CPU inference instead,
+If you want to try CPU inference instead, check out my GGML repo: [TheBloke/alpaca-lora-65B-GGML](https://huggingface.co/TheBloke/alpaca-lora-65B-GGML).
 
 ## GIBBERISH OUTPUT IN `text-generation-webui`?
 