wolfram committed on
Commit c216b65
1 Parent(s): 75fa698

Update README.md


Thanks for the additional quants, [DAN™](https://huggingface.co/dranger003)!

Files changed (1): README.md (+3 −1)
README.md CHANGED
@@ -18,7 +18,7 @@ tags:
 ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6303ca537373aacccd85d8a7/vmCAhJCpF0dITtCVxlYET.jpeg)
 
 - HF: wolfram/miquliz-120b-v2.0
-- GGUF: [Q2_K | IQ3_XXS | Q4_K_M | Q5_K_M](https://huggingface.co/wolfram/miquliz-120b-v2.0-GGUF)
+- GGUF: [IQ2_XS | IQ2_XXS | IQ3_XXS](https://huggingface.co/dranger003/miquliz-120b-v2.0-iMat.GGUF) | [Q2_K | IQ3_XXS | Q4_K_M | Q5_K_M](https://huggingface.co/wolfram/miquliz-120b-v2.0-GGUF) | [Q8_0](https://huggingface.co/dranger003/miquliz-120b-v2.0-iMat.GGUF)
 - EXL2: [2.4bpw](https://huggingface.co/wolfram/miquliz-120b-v2.0-2.4bpw-h6-exl2) | [2.65bpw](https://huggingface.co/wolfram/miquliz-120b-v2.0-2.65bpw-h6-exl2) | [3.0bpw](https://huggingface.co/wolfram/miquliz-120b-v2.0-3.0bpw-h6-exl2) | [3.5bpw](https://huggingface.co/wolfram/miquliz-120b-v2.0-3.5bpw-h6-exl2) | [4.0bpw](https://huggingface.co/wolfram/miquliz-120b-v2.0-4.0bpw-h6-exl2) | [5.0bpw](https://huggingface.co/wolfram/miquliz-120b-v2.0-5.0bpw-h6-exl2)
 
 This is v2.0 of a 120b frankenmerge created by interleaving layers of [miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) with [lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf) using [mergekit](https://github.com/cg123/mergekit). Better than v1.0 thanks to the improved recipe adapted from [TheProfessor-155b](https://huggingface.co/abacusai/TheProfessor-155b) by [Eric Hartford](https://erichartford.com/), it is now achieving top rank with double perfect scores in [my LLM comparisons/tests](https://www.reddit.com/r/LocalLLaMA/search?q=author%3AWolframRavenwolf+Comparison%2FTest&sort=new&t=all).
@@ -27,6 +27,8 @@ Inspired by [goliath-120b](https://huggingface.co/alpindale/goliath-120b).
 
 Thanks for the support, [CopilotKit](https://github.com/CopilotKit/CopilotKit) – the open-source platform for building in-app AI Copilots into any product, with any LLM model. Check out their GitHub.
 
+Thanks for the additional quants, [DAN™](https://huggingface.co/dranger003)!
+
 Also available: [miqu-1-120b](https://huggingface.co/wolfram/miqu-1-120b) – Miquliz's older, purer sister; only Miqu, inflated to 120B.
 
 ## Model Details
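
For context, the "interleaving layers" technique the README describes is what mergekit's `passthrough` merge method does: stack slices of each parent model's layers into one taller model. A minimal config sketch follows — the layer ranges here are purely illustrative assumptions, not the actual miquliz-120b-v2.0 recipe:

```yaml
# Hypothetical mergekit passthrough config — layer ranges are illustrative
# only and do NOT reproduce the actual miquliz-120b-v2.0 merge recipe.
slices:
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [0, 20]
  - sources:
      - model: lizpreciatior/lzlv_70b_fp16_hf
        layer_range: [10, 30]
  - sources:
      - model: 152334H/miqu-1-70b-sf
        layer_range: [20, 40]
merge_method: passthrough   # copies layers through unchanged, no weight averaging
dtype: float16
```

Overlapping the ranges (as sketched above) is what inflates two 70B parents past their original depth; the real recipe's slice boundaries would be found in the merge config published with the model.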