Refer to the Provided Files table below to see what files use which methods, and how.

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | -------- |
| [llama-2-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-7B-GGUF/blob/main/llama-2-7b.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB | 9.66 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
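For example, with the ctransformers loader shown in the how-to-run section below, the RAM/VRAM split is controlled entirely by the `gpu_layers` argument. The following is a minimal sketch only: the value of 35 is an illustrative layer count, not a recommendation, and the exact RAM saving depends on which quant file you load.

```python
from ctransformers import AutoModelForCausalLM

# gpu_layers=0 keeps every layer in system RAM (the table figures above apply unchanged).
# A positive value moves that many layers into VRAM, reducing system RAM usage accordingly.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-GGUF",
    model_file="llama-2-7b.Q8_0.gguf",
    model_type="llama",
    gpu_layers=35,  # illustrative value: 0 = CPU only, higher if you have spare VRAM
)
```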
<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-run start -->
Install ctransformers, in this case with Metal GPU acceleration for macOS:

```
CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
```

Then load and run the model from Python:

```python
from ctransformers import AutoModelForCausalLM

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7B-GGUF", model_file="llama-2-7b.q4_K_M.gguf", model_type="llama", gpu_layers=50)

print(llm("AI is going to"))
```
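As a possible follow-up, ctransformers can also stream tokens as they are generated instead of returning the whole completion at once. A small sketch, reusing the `llm` object created above; `stream=True` is the streaming flag documented by ctransformers, but check the behaviour of the version you installed:

```python
# Stream tokens as they are generated rather than waiting for the full string.
for token in llm("AI is going to", stream=True):
    print(token, end="", flush=True)
print()
```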