koesn committed
Commit 02d68ec
1 Parent(s): fdfd9c0

Update README.md

Files changed (1):
  1. README.md +14 -13

README.md CHANGED
@@ -7,21 +7,22 @@ license: apache-2.0
 This repo contains GGUF format model files for Garrulus-7B.
 
 ## Files Provided
-| Name | Quant | Bits | File Size | Remark |
-| ---- | ----- | ---- | --------- | ------ |
-| garrulus-7b.IQ3_XXS.gguf|IQ3_XXS|3|2.82 GB|3.06 bpw quantization |
-| garrulus-7b.IQ3_S.gguf|IQ3_S|3|2.96 GB|3.44 bpw quantization |
-| garrulus-7b.IQ3_M.gguf|IQ3_M|3|3.06 GB|3.66 bpw quantization mix |
-| garrulus-7b.IQ4_NL.gguf|IQ4_NL|4|3.87 GB|4.25 bpw non-linear quantization |
-| garrulus-7b.Q4_K_M.gguf|Q4_K_M|4|4.07 GB|3.80G, +0.0532 ppl |
-| garrulus-7b.Q5_K_M.gguf|Q5_K_M|5|4.78 GB|4.45G, +0.0122 ppl |
-| garrulus-7b.Q6_K.gguf|Q6_K|6|5.53 GB|5.15G, +0.0008 ppl |
-| garrulus-7b.Q8_0.gguf|Q8_0|8|7.17 GB|6.70G, +0.0004 ppl |
+| Name                     | Quant   | Bits | File Size | Remark                           |
+| ------------------------ | ------- | ---- | --------- | -------------------------------- |
+| garrulus-7b.IQ3_XXS.gguf | IQ3_XXS | 3    | 2.82 GB   | 3.06 bpw quantization            |
+| garrulus-7b.IQ3_S.gguf   | IQ3_S   | 3    | 2.96 GB   | 3.44 bpw quantization            |
+| garrulus-7b.IQ3_M.gguf   | IQ3_M   | 3    | 3.06 GB   | 3.66 bpw quantization mix        |
+| garrulus-7b.Q4_0.gguf    | Q4_0    | 4    | 3.87 GB   | legacy 4-bit quantization        |
+| garrulus-7b.IQ4_NL.gguf  | IQ4_NL  | 4    | 3.87 GB   | 4.25 bpw non-linear quantization |
+| garrulus-7b.Q4_K_M.gguf  | Q4_K_M  | 4    | 4.07 GB   | 3.80G, +0.0532 ppl               |
+| garrulus-7b.Q5_K_M.gguf  | Q5_K_M  | 5    | 4.78 GB   | 4.45G, +0.0122 ppl               |
+| garrulus-7b.Q6_K.gguf    | Q6_K    | 6    | 5.53 GB   | 5.15G, +0.0008 ppl               |
+| garrulus-7b.Q8_0.gguf    | Q8_0    | 8    | 7.17 GB   | 6.70G, +0.0004 ppl               |
 
 ## Parameters
-| path | type | architecture | rope_theta | sliding_win | max_pos_embed |
-| ---- | ---- | ------------ | ---------- | ----------- | ------------- |
-| /data/LLM/models/mlabonne_NeuralMarcoro14-7B | mistral | MistralForCausalLM | 10000.0 | 4096 | 32768 |
+| path           | type    | architecture       | rope_theta | sliding_win | max_pos_embed |
+| -------------- | ------- | ------------------ | ---------- | ----------- | ------------- |
+| udkai/Garrulus | mistral | MistralForCausalLM | 10000.0    | 4096        | 32768         |
 
 ## Benchmarks
 ![](https://i.ibb.co/Cmftwqd/Garrulus-7-B.png)
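The bpw (bits per weight) figures in the files table can be roughly sanity-checked against the listed file sizes: size in GB ≈ parameter count × bpw / 8. A minimal sketch, assuming a ~7.24B parameter count for a Mistral-7B-based model (the count is not stated in this README):

```python
# Rough file-size sanity check for the quant table above.
# N_PARAMS is an assumption: Mistral-7B-family models have ~7.24e9 weights.
N_PARAMS = 7.24e9

def est_size_gb(bpw: float) -> float:
    """Estimated GGUF file size in decimal GB at a given bits-per-weight."""
    return N_PARAMS * bpw / 8 / 1e9

# IQ3_XXS is listed at 3.06 bpw and 2.82 GB:
print(f"{est_size_gb(3.06):.2f} GB")  # close to the listed 2.82 GB
```

The small gap between the estimate and the listed size comes from GGUF metadata and from tensors (such as the output layer) that quantization schemes keep at higher precision.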