IlyaGusev committed
Commit
9bd8bdc
1 Parent(s): 992e33d

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -15,17 +15,17 @@ license: llama2
 
 Llama.cpp compatible versions of an original [13B model](https://huggingface.co/IlyaGusev/saiga2_13b_lora).
 
-* Download one of the versions, for example `ggml-model-q4_1.bin`.
+* Download one of the versions, for example `ggml-model-q4_K.gguf`.
 * Download [interact_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py)
 
 How to run:
 ```
 sudo apt-get install git-lfs
-pip install llama-cpp-python==0.1.78 fire
+pip install llama-cpp-python fire
 
-python3 interact_llamacpp.py ggml-model-q4_1.bin
+python3 interact_llamacpp.py ggml-model-q4_K.gguf
 ```
 
 System requirements:
-* 18GB RAM for q8_0
-* 13GB RAM for q4_1
+* 18GB RAM for q8_K
+* 8GB RAM for q4_K
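
For context, a minimal sketch of loading one of these GGUF files directly with llama-cpp-python, assuming `ggml-model-q4_K.gguf` has already been downloaded into the working directory. It does not reproduce the conversation template that interact_llamacpp.py applies; the prompt text below is only an illustration.

```
# Minimal sketch (not part of the original README): load a quantized GGUF
# file with llama-cpp-python and run a single completion.
# Assumes ggml-model-q4_K.gguf is in the current directory and that the
# installed llama-cpp-python is recent enough to read GGUF files
# (the old 0.1.78 pin removed in this commit reads GGML only).
from llama_cpp import Llama

llm = Llama(model_path="ggml-model-q4_K.gguf", n_ctx=2048)

# The model is tuned for Russian; interact_llamacpp.py wraps prompts in the
# proper chat template, which is omitted here.
result = llm("Вопрос: Почему трава зелёная?\nОтвет:", max_tokens=128)
print(result["choices"][0]["text"])
```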