mradermacher committed on
Commit
94a920d
1 Parent(s): 6da6fe6

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +10 -1
README.md CHANGED
@@ -7,9 +7,17 @@ library_name: transformers
 license: llama2
 quantized_by: mradermacher
 ---
-weighted/imatrix quants of https://huggingface.co/cognitivecomputations/Samantha-1.11-70b
+## About
 
+weighted/imatrix quants of https://huggingface.co/cognitivecomputations/Samantha-1.11-70b
 <!-- provided-files -->
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
+
 ## Provided Quants
 
 | Link | Type | Size/GB | Notes |
@@ -26,4 +34,5 @@ weighted/imatrix quants of https://huggingface.co/cognitivecomputations/Samantha
 | [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.6 | fast, medium quality |
 | [GGUF](https://huggingface.co/mradermacher/Samantha-1.11-70b-i1-GGUF/resolve/main/Samantha-1.11-70b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.7 | fast, medium quality |
 
+
 <!-- end -->