froggeric committed
Commit 7ed141e
1 Parent(s): 4843929

presentation

Files changed (1): README.md (+5 -5)
README.md CHANGED
@@ -33,11 +33,11 @@ The questions can be split half-half in 2 possible ways:
 
 - **Do not use a GGUF quantisation smaller than q4**. In my testing, anything below q4 suffers from too much degradation; it is better to use a smaller model at higher quants.
 - **Importance matrix matters**. Be careful when using importance matrices. For example, if the matrix is based solely on English text, it will degrade the model's multilingual and coding capabilities. However, if English is all that matters for your use case, using an imatrix will definitely improve the model's performance (see the sketches after the diff).
- - Best **large** model: [WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B). And fast too! On my M2 Max with 38 GPU cores, I get an inference speed of **11.81 tok/s** with iq4_xs.
- - Second best **large** model: [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus). Very close to the above choice, but about 3 times slower! On my M2 Max with 38 GPU cores, I get an inference speed of **3.88 tok/s** with q5_km. However, it gives different results from WizardLM, and it can definitely be worth using.
- - Best **medium** model: [sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5)
- - Best **small** model: [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01)
- - Best **tiny** model: [froggeric/WestLake-10.7b-v2](https://huggingface.co/froggeric/WestLake-10.7B-v2-GGUF)
+ - **Best _large_ model**: [WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B). And fast too! On my M2 Max with 38 GPU cores, I get an inference speed of **11.81 tok/s** with iq4_xs.
+ - **Second best _large_ model**: [CohereForAI/c4ai-command-r-plus](https://huggingface.co/CohereForAI/c4ai-command-r-plus). Very close to the above choice, but about 3 times slower! On my M2 Max with 38 GPU cores, I get an inference speed of **3.88 tok/s** with q5_km. However, it gives different results from WizardLM, and it can definitely be worth using.
+ - **Best _medium_ model**: [sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5)
+ - **Best _small_ model**: [CohereForAI/c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01)
+ - **Best _tiny_ model**: [froggeric/WestLake-10.7b-v2](https://huggingface.co/froggeric/WestLake-10.7B-v2-GGUF)
 
 # Results
 
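To make the imatrix caveat above concrete, here is a minimal sketch of producing a q4 GGUF quant guided by an importance matrix, driving llama.cpp's tools from Python. The binary names (`llama-imatrix`, `llama-quantize`) are taken from current llama.cpp releases, and all file names are placeholders; this is an assumed workflow, not one documented in this commit.

```python
import subprocess

# Hypothetical paths; adjust to your own build and model.
FP16_MODEL = "model-f16.gguf"    # full-precision source GGUF
CALIB_TEXT = "calibration.txt"   # English-only calibration text: exactly the
                                 # case that can hurt multilingual/coding use
IMATRIX_OUT = "imatrix.dat"
QUANT_OUT = "model-q4_k_m.gguf"

# 1) Build the importance matrix from the calibration text.
subprocess.run(
    ["llama-imatrix", "-m", FP16_MODEL, "-f", CALIB_TEXT, "-o", IMATRIX_OUT],
    check=True,
)

# 2) Quantise no lower than q4, guided by the imatrix.
subprocess.run(
    ["llama-quantize", "--imatrix", IMATRIX_OUT, FP16_MODEL, QUANT_OUT, "Q4_K_M"],
    check=True,
)
```

The choice of calibration text is the whole point of the caveat: whatever domain the text covers is the domain the quantisation will preserve best.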
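The tok/s figures quoted above can be reproduced approximately with a timing sketch like the one below, using the llama-cpp-python bindings. The model path, prompt, and generation length are placeholders, and this is an assumption about how such throughput numbers are typically measured, not a transcript of the author's method.

```python
import time
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical local quant; on Apple Silicon, n_gpu_layers=-1 offloads
# every layer to Metal (the M2 Max setup described above).
llm = Llama(model_path="model-q4_k_m.gguf", n_gpu_layers=-1, verbose=False)

start = time.perf_counter()
out = llm("Explain GGUF quantisation in one paragraph.", max_tokens=256)
elapsed = time.perf_counter() - start

# The completion dict reports how many tokens were actually generated.
generated = out["usage"]["completion_tokens"]
print(f"{generated / elapsed:.2f} tok/s")
```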