morriszms committed on
Commit b3bee14
1 Parent(s): 9d4294e

Update README.md

Files changed (1)
  1. README.md +20 -12
README.md CHANGED
@@ -47,8 +47,16 @@ This repo contains GGUF format model files for [tifa-benchmark/llama2_tifa_quest
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
+
+<div style="text-align: left; margin: 20px 0;">
+<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+Run them on the TensorBlock client using your local machine ↗
+</a>
+</div>
+
 ## Prompt template
 
+
 ```
 
 ```
@@ -57,18 +65,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [llama2_tifa_question_generation-Q2_K.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/tree/main/llama2_tifa_question_generation-Q2_K.gguf) | Q2_K | 2.359 GB | smallest, significant quality loss - not recommended for most purposes |
-| [llama2_tifa_question_generation-Q3_K_S.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/tree/main/llama2_tifa_question_generation-Q3_K_S.gguf) | Q3_K_S | 2.746 GB | very small, high quality loss |
-| [llama2_tifa_question_generation-Q3_K_M.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/tree/main/llama2_tifa_question_generation-Q3_K_M.gguf) | Q3_K_M | 3.072 GB | very small, high quality loss |
-| [llama2_tifa_question_generation-Q3_K_L.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/tree/main/llama2_tifa_question_generation-Q3_K_L.gguf) | Q3_K_L | 3.350 GB | small, substantial quality loss |
-| [llama2_tifa_question_generation-Q4_0.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/tree/main/llama2_tifa_question_generation-Q4_0.gguf) | Q4_0 | 3.563 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [llama2_tifa_question_generation-Q4_K_S.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/tree/main/llama2_tifa_question_generation-Q4_K_S.gguf) | Q4_K_S | 3.592 GB | small, greater quality loss |
-| [llama2_tifa_question_generation-Q4_K_M.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/tree/main/llama2_tifa_question_generation-Q4_K_M.gguf) | Q4_K_M | 3.801 GB | medium, balanced quality - recommended |
-| [llama2_tifa_question_generation-Q5_0.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/tree/main/llama2_tifa_question_generation-Q5_0.gguf) | Q5_0 | 4.332 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [llama2_tifa_question_generation-Q5_K_S.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/tree/main/llama2_tifa_question_generation-Q5_K_S.gguf) | Q5_K_S | 4.332 GB | large, low quality loss - recommended |
-| [llama2_tifa_question_generation-Q5_K_M.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/tree/main/llama2_tifa_question_generation-Q5_K_M.gguf) | Q5_K_M | 4.455 GB | large, very low quality loss - recommended |
-| [llama2_tifa_question_generation-Q6_K.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/tree/main/llama2_tifa_question_generation-Q6_K.gguf) | Q6_K | 5.149 GB | very large, extremely low quality loss |
-| [llama2_tifa_question_generation-Q8_0.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/tree/main/llama2_tifa_question_generation-Q8_0.gguf) | Q8_0 | 6.669 GB | very large, extremely low quality loss - not recommended |
+| [llama2_tifa_question_generation-Q2_K.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/blob/main/llama2_tifa_question_generation-Q2_K.gguf) | Q2_K | 2.359 GB | smallest, significant quality loss - not recommended for most purposes |
+| [llama2_tifa_question_generation-Q3_K_S.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/blob/main/llama2_tifa_question_generation-Q3_K_S.gguf) | Q3_K_S | 2.746 GB | very small, high quality loss |
+| [llama2_tifa_question_generation-Q3_K_M.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/blob/main/llama2_tifa_question_generation-Q3_K_M.gguf) | Q3_K_M | 3.072 GB | very small, high quality loss |
+| [llama2_tifa_question_generation-Q3_K_L.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/blob/main/llama2_tifa_question_generation-Q3_K_L.gguf) | Q3_K_L | 3.350 GB | small, substantial quality loss |
+| [llama2_tifa_question_generation-Q4_0.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/blob/main/llama2_tifa_question_generation-Q4_0.gguf) | Q4_0 | 3.563 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [llama2_tifa_question_generation-Q4_K_S.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/blob/main/llama2_tifa_question_generation-Q4_K_S.gguf) | Q4_K_S | 3.592 GB | small, greater quality loss |
+| [llama2_tifa_question_generation-Q4_K_M.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/blob/main/llama2_tifa_question_generation-Q4_K_M.gguf) | Q4_K_M | 3.801 GB | medium, balanced quality - recommended |
+| [llama2_tifa_question_generation-Q5_0.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/blob/main/llama2_tifa_question_generation-Q5_0.gguf) | Q5_0 | 4.332 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [llama2_tifa_question_generation-Q5_K_S.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/blob/main/llama2_tifa_question_generation-Q5_K_S.gguf) | Q5_K_S | 4.332 GB | large, low quality loss - recommended |
+| [llama2_tifa_question_generation-Q5_K_M.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/blob/main/llama2_tifa_question_generation-Q5_K_M.gguf) | Q5_K_M | 4.455 GB | large, very low quality loss - recommended |
+| [llama2_tifa_question_generation-Q6_K.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/blob/main/llama2_tifa_question_generation-Q6_K.gguf) | Q6_K | 5.149 GB | very large, extremely low quality loss |
+| [llama2_tifa_question_generation-Q8_0.gguf](https://huggingface.co/tensorblock/llama2_tifa_question_generation-GGUF/blob/main/llama2_tifa_question_generation-Q8_0.gguf) | Q8_0 | 6.669 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
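
As a minimal sketch of the download step referenced by the heading above, the files in the quant table can be fetched programmatically with the `huggingface_hub` Python client; the choice of the Q4_K_M file and the local directory below are illustrative assumptions, not prescribed by this README.

```python
# Minimal sketch: fetch the Q4_K_M quant (an illustrative choice) from the Hugging Face Hub.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="tensorblock/llama2_tifa_question_generation-GGUF",
    filename="llama2_tifa_question_generation-Q4_K_M.gguf",
    local_dir=".",  # assumed target directory
)
print(model_path)  # path to the downloaded GGUF file
```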
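Once downloaded, a GGUF file can be loaded by any llama.cpp build at or after the commit noted above; the sketch below uses the `llama-cpp-python` bindings as one possible route, with a placeholder prompt and generation settings.

```python
# Minimal sketch: load the downloaded quant with the llama-cpp-python bindings
# (assumes a build compatible with the GGUF files in this repo).
from llama_cpp import Llama

llm = Llama(
    model_path="llama2_tifa_question_generation-Q4_K_M.gguf",  # path from the download step
    n_ctx=2048,  # context window; adjust as needed
)
output = llm(
    "Describe the picture: a dog is running on the beach.",  # placeholder prompt
    max_tokens=128,
)
print(output["choices"][0]["text"])
```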