Text Generation
GGUF
Russian
conversational
IlyaGusev committed
Commit f6b3866 • 1 Parent(s): 04c0205

Update to v2

.gitattributes CHANGED
@@ -33,13 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
-ggml-model-q2_K.bin filter=lfs diff=lfs merge=lfs -text
-ggml-model-q3_K.bin filter=lfs diff=lfs merge=lfs -text
-ggml-model-q4_1.bin filter=lfs diff=lfs merge=lfs -text
-ggml-model-q5_1.bin filter=lfs diff=lfs merge=lfs -text
-ggml-model-q8_0.bin filter=lfs diff=lfs merge=lfs -text
-ggml-model-q2_K.gguf filter=lfs diff=lfs merge=lfs -text
-ggml-model-q3_K.gguf filter=lfs diff=lfs merge=lfs -text
-ggml-model-q4_K.gguf filter=lfs diff=lfs merge=lfs -text
-ggml-model-q5_K.gguf filter=lfs diff=lfs merge=lfs -text
-ggml-model-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+*.gguf filter=lfs diff=lfs merge=lfs -text
+model-q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+model-q3_K.gguf filter=lfs diff=lfs merge=lfs -text
+model-q4_K.gguf filter=lfs diff=lfs merge=lfs -text
+model-q5_K.gguf filter=lfs diff=lfs merge=lfs -text
+model-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
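
With the long list of per-file rules replaced by a single `*.gguf` pattern, any GGUF file added to this repository is stored through Git LFS rather than as a regular blob. As a rough sketch of how such wildcard rules select files, assuming plain glob matching (the `lfs_patterns` and `tracked_by_lfs` helpers are illustrative, not part of Git or LFS, and ignore paths, escapes, and attribute macros):
```
from fnmatch import fnmatch

ATTRIBUTES = """\
*.gguf filter=lfs diff=lfs merge=lfs -text
model-q4_K.gguf filter=lfs diff=lfs merge=lfs -text
"""

def lfs_patterns(text):
    """Collect patterns whose attributes route matching files through the LFS filter."""
    patterns = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) > 1 and "filter=lfs" in fields[1:]:
            patterns.append(fields[0])
    return patterns

def tracked_by_lfs(name, patterns):
    """True if the file name matches any LFS pattern (glob semantics only)."""
    return any(fnmatch(name, pattern) for pattern in patterns)

patterns = lfs_patterns(ATTRIBUTES)
print(tracked_by_lfs("model-q5_K.gguf", patterns))  # True: caught by *.gguf
print(tracked_by_lfs("README.md", patterns))        # False: stored as a normal blob
```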
 
 
 
 
README.md CHANGED
@@ -15,16 +15,23 @@ license: llama2
 
 Llama.cpp compatible versions of an original [7B model](https://huggingface.co/IlyaGusev/saiga2_7b_lora).
 
-* Download one of the versions, for example `ggml-model-q4_K.gguf`.
+* Download one of the versions, for example `model-q4_K.gguf`.
+```
+wget https://huggingface.co/IlyaGusev/saiga2_7b_gguf/resolve/main/model-q4_K.gguf
+```
+
 * Download [interact_llamacpp.py](https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py)
+```
+wget https://raw.githubusercontent.com/IlyaGusev/rulm/master/self_instruct/src/interact_llamacpp.py
+```
 
 How to run:
 ```
 sudo apt-get install git-lfs
 pip install llama-cpp-python fire
 
-python3 interact_llamacpp.py ggml-model-q4_K.gguf
+python3 interact_llamacpp.py model-q4_K.gguf
 ```
 
 System requirements:
-* 10GB RAM for q8_0 and less for smaller quantizations
+* 10GB RAM for q8_0 and less for smaller quantizations
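
The revised README still drives everything through `interact_llamacpp.py`, but because `llama-cpp-python` is installed anyway, the downloaded file can also be loaded directly from Python. A minimal sketch under that assumption, using a bare prompt instead of the chat prompt format that `interact_llamacpp.py` applies; the `n_ctx`, `max_tokens`, and `temperature` values are illustrative:
```
from llama_cpp import Llama

# Load the quantized checkpoint; the q4_K file itself is about 4 GB on disk.
llm = Llama(model_path="model-q4_K.gguf", n_ctx=2048, verbose=False)

# One-shot completion with a Russian prompt ("Hi! Tell me about GGUF.");
# interact_llamacpp.py instead wraps calls like this in the model's chat format.
out = llm("Привет! Расскажи про GGUF.", max_tokens=128, temperature=0.2)
print(out["choices"][0]["text"])
```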
ggml-model-q2_K.gguf → model-q2_K.gguf RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:db5e836e165a30324d59a8c0628c6482019fad3ccc859559f5140960ad80a8a0
+oid sha256:ecdb8ac229f6cdc3f21d5a3e95dcd26f31c355b727299e96b44ab7a7dbc34fb4
 size 2825940544

ggml-model-q3_K.gguf → model-q3_K.gguf RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:863334afda6984017dd4e867179cc3e5880e655c71bddabe89a3e7f4065f6f03
+oid sha256:04ec17b11219d55186d69deae515ec909ea54ecd5918b803f8deabd7a575396c
 size 3298004544

ggml-model-q4_K.gguf → model-q4_K.gguf RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:279b2015313bae92d58575d1bd5b8655bbd89ae4df5899a1ba6f836bb4a18320
+oid sha256:b6d9ab608346c90e70eae6ab086524550c7ff5d5a34e6eb98b5b0531ef62b991
 size 4081004096

ggml-model-q5_K.gguf → model-q5_K.gguf RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:7c505438c61806b10b498623c8094de0ba0e18db23d57b9a3c9f5910efbbe767
+oid sha256:149862753771f3a096e05b3732c520d37d8a2b099b795a4434b813fe989d1f58
 size 4783156800

ggml-model-q8_0.gguf → model-q8_0.gguf RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4e1566dea76b28e20fcf7f240abdd6239ccf33e81556131ffd4a580a2c45d472
+oid sha256:1c51054fa221c692ba2cee0ebfa63e87c1fed2abdf8608bb820c5c1b58c5c22c
 size 7161089600
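
Each RENAMED entry above is a Git LFS pointer: the repository itself stores only the `oid sha256` and `size` of the weights, and LFS (or a direct `resolve/main` download) supplies the real file. A small sketch for checking a downloaded file against its pointer, using the values from the `model-q4_K.gguf` pointer above; the `verify` helper is illustrative:
```
import hashlib
from pathlib import Path

# Values copied from the model-q4_K.gguf LFS pointer in this commit.
EXPECTED_SHA256 = "b6d9ab608346c90e70eae6ab086524550c7ff5d5a34e6eb98b5b0531ef62b991"
EXPECTED_SIZE = 4081004096

def verify(path):
    """Compare a downloaded file's size and SHA-256 digest with the pointer values."""
    file = Path(path)
    if file.stat().st_size != EXPECTED_SIZE:
        return False
    digest = hashlib.sha256()
    with file.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == EXPECTED_SHA256

print(verify("model-q4_K.gguf"))
```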