bartowski committed
Commit 71ef6a3
1 Parent(s): 346d1a0

Update VRAM

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -37,11 +37,11 @@ No GQA - VRAM requirements will be higher
 
 | Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
 | -------------------------------------------------------------- | ---- | ------------ | --------- | ---------- | ----------- |
-| [8_0](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/8_0) | 8.0 | 8.0 | 9.4 GB | 15.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
-| [6_5](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | 14.8 GB | Near unquantized performance at vastly reduced size, **recommended**. |
-| [5_0](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/5_0) | 5.0 | 6.0 | 7.2 GB | 13.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards with 4k context. |
-| [4_25](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/4_25) | 4.25 | 6.0 | 6.5 GB | 12.7 GB | GPTQ equivalent bits per weight. |
-| [3_5](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/3_5) | 3.5 | 6.0 | 5.9 GB | 12.1 GB | Lower quality, not recommended. |
+| [8_0](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/8_0) | 8.0 | 8.0 | 14.0 GB | 19.4 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
+| [6_5](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/6_5) | 6.5 | 8.0 | 12.5 GB | 17.9 GB | Near unquantized performance at vastly reduced size, **recommended**. |
+| [5_0](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/5_0) | 5.0 | 6.0 | 10.9 GB | 16.3 GB | Slightly lower quality vs 6.5, great for 12GB cards with 4k context. |
+| [4_25](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/4_25) | 4.25 | 6.0 | 10.2 GB | 15.7 GB | GPTQ equivalent bits per weight, ideal for 16GB cards at 16k context |
+| [3_5](https://huggingface.co/bartowski/gemma-7b-openhermes-exl2/tree/3_5) | 3.5 | 6.0 | 9.5 GB | 14.9 GB | Lower quality, not recommended. |
 
 ## Download instructions
 
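Each quant level in the table lives on its own branch of the repository, so only that branch needs to be fetched. As a minimal sketch (not part of this commit), the `revision` argument of `huggingface_hub.snapshot_download` can pull one of the branches listed above; the choice of the `6_5` branch and the local directory name are only illustrative.

```python
# Sketch: download a single quant branch from the table above.
# Assumes huggingface_hub is installed; repo id and branch names come
# from the table, the local directory name is just an example.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="bartowski/gemma-7b-openhermes-exl2",
    revision="6_5",  # branch / bits-per-weight level from the table
    local_dir="gemma-7b-openhermes-exl2-6_5",
)
```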