TheBloke committed on
Commit
ca3c949
1 Parent(s): 5f8a081

Update README.md

Files changed (1): README.md +4 -2
README.md CHANGED
@@ -1,7 +1,7 @@
 This model is still uploading. README will be here shortly.
 
 If you're too impatient to wait for that (of course you are), to run these files you need:
-1. llama.cpp as of this commit: https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb
+1. llama.cpp as of [this commit or later](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb)
 2. To add new command line parameter `-gqa 8`
 
 Example command:
@@ -9,4 +9,6 @@ Example command:
 /workspace/git/llama.cpp/main -m llama-2-70b-chat/ggml/llama-2-70b-chat.ggmlv3.q4_0.bin -gqa 8 -t 13 -p "[INST] <<SYS>>You are a helpful assistant<</SYS>>Write a story about llamas[/INST]"
 ```
 
 There is no CUDA support at this time, but it should hopefully be coming soon.
+
+There is no support in third-party UIs or Python libraries (llama-cpp-python, ctransformers) yet. That will come in due course.
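The `-p` argument in the example command uses LLaMA-2's chat template, with the system message wrapped in `<<SYS>>` tags inside an `[INST]` block. A minimal sketch of how that prompt string is assembled (the `SYS` and `USER` variable names are illustrative, not part of llama.cpp; the strings follow the example command):

```shell
#!/bin/sh
# Build the LLaMA-2 chat prompt used in the example command above.
SYS="You are a helpful assistant"
USER="Write a story about llamas"

# System message goes inside <<SYS>> tags, both wrapped in one [INST] block.
PROMPT="[INST] <<SYS>>${SYS}<</SYS>>${USER}[/INST]"

printf '%s\n' "$PROMPT"
```

Substituting your own system and user messages into `SYS` and `USER` yields a prompt you can pass directly to `-p`.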