MaziyarPanahi committed
Commit 1fbd53f
1 Parent(s): e2b4ad7

Update README.md (#2)

- Update README.md (7f63b892a621e3a5ce7e7145dd79157ee56a7ef7)

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
````diff
@@ -117,7 +117,7 @@ pip3 install hf_transfer
 And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
 
 ```shell
-HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Experiment26-7B-GGUF Experiment26-7B-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Experiment26-7B-GGUF Experiment26-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
 
 Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
@@ -128,7 +128,7 @@ Windows Command Line users: You can set the environment variable by running `set
 Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
 
 ```shell
-./main -ngl 35 -m Experiment26-7B-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
+./main -ngl 35 -m Experiment26-7B.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
 {system_message}<|im_end|>
 <|im_start|>user
 {prompt}<|im_end|>
@@ -185,7 +185,7 @@ from llama_cpp import Llama
 
 # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
 llm = Llama(
-  model_path="./Experiment26-7B-GGUF.Q4_K_M.gguf", # Download the model file first
+  model_path="./Experiment26-7B.Q4_K_M.gguf", # Download the model file first
   n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
   n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
   n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
@@ -205,7 +205,7 @@ output = llm(
 
 # Chat Completion API
 
-llm = Llama(model_path="./Experiment26-7B-GGUF.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
+llm = Llama(model_path="./Experiment26-7B.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
 llm.create_chat_completion(
     messages = [
         {"role": "system", "content": "You are a story writing assistant."},
````