Transformers · GGUF · English · llama · text-generation-inference
TheBloke committed
Commit acc10ef
1 Parent(s): 28dd506

Upload README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -170,7 +170,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
 from ctransformers import AutoModelForCausalLM
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
-llm = AutoModelForCausalLM.from_pretrained("None", model_file="samantha-1.11-codellama-34b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
+llm = AutoModelForCausalLM.from_pretrained("TheBloke/Samantha-1.11-CodeLlama-34B-GGUF", model_file="samantha-1.11-codellama-34b.q4_K_M.gguf", model_type="llama", gpu_layers=50)
 
 print(llm("AI is going to"))
 ```
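
For reference, the README's ctransformers snippet as it reads after this commit is sketched below. The repo ID, model file, and model type come directly from the diff above; gpu_layers=50 is the README's example value and should be lowered, or set to 0, on systems without enough GPU memory.

```python
from ctransformers import AutoModelForCausalLM

# Load the q4_K_M GGUF file from the Hub repo fixed by this commit.
# Set gpu_layers to the number of layers to offload to GPU; use 0 for CPU-only.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Samantha-1.11-CodeLlama-34B-GGUF",
    model_file="samantha-1.11-codellama-34b.q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,
)

# Calling the model generates a completion for the prompt.
print(llm("AI is going to"))
```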