MarsupialAI committed
Commit 1f787c8 (1 parent: 21d3cb1)

Update README.md

Files changed (1): README.md (+6 −4)
README.md CHANGED
@@ -9,13 +9,15 @@ So to test this crazy theory, I downloaded Undi95/Meta-Llama-3-8B-Instruct-hf an
  - fp32 specifically with `--outtype f32`
  - "Auto" with no outtype specified

- I then quantized each of these conversions to Q4_K_M and ran perplexity tests on everything using my abbreviated wiki.short.raw text file
+ I then quantized each of these conversions to Q4_K_M and ran perplexity tests on everything using my abbreviated wiki.short.raw
+ text file

  The results:




- As you can see, converting to fp32 has no meaningful effect on PPL. There will no doubt be some people who will claim
- "PpL iSn'T gOoD eNoUgH!!1!". For those people, I have uploaded all GGUFs used in this test. Feel free to do more extensive
- testing on your own time. I consider the matter resolved until somebody can conclusively demonstrate otherwise.
+ As you can see, converting to fp32 has no meaningful effect on PPL compared to converting to fp16. There will no doubt be some
+ people who will claim "PpL iSn'T gOoD eNoUgH!!1!". For those people, I have uploaded all GGUFs used in this test. Feel free to
+ use those files to do more extensive testing on your own time. I consider the matter resolved until somebody can conclusively
+ demonstrate otherwise.
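
For context, the convert → quantize → perplexity workflow the README describes maps onto llama.cpp's standard tooling roughly as follows. This is a minimal sketch, not the author's exact invocations: the model directory and output file names are placeholders, `wiki.short.raw` is the author's own abbreviated wikitext file, and newer llama.cpp builds rename the binaries to `llama-quantize` and `llama-perplexity`.

```sh
# 1. Convert the HF checkpoint to GGUF: once with explicit fp32,
#    and once with no --outtype so the script picks its default ("auto").
python convert-hf-to-gguf.py Meta-Llama-3-8B-Instruct-hf --outtype f32 --outfile llama3-8b-f32.gguf
python convert-hf-to-gguf.py Meta-Llama-3-8B-Instruct-hf --outfile llama3-8b-auto.gguf

# 2. Quantize each conversion to Q4_K_M.
./quantize llama3-8b-f32.gguf  llama3-8b-f32-Q4_K_M.gguf  Q4_K_M
./quantize llama3-8b-auto.gguf llama3-8b-auto-Q4_K_M.gguf Q4_K_M

# 3. Run the perplexity tool on every GGUF against the same test file,
#    so the PPL numbers are directly comparable.
./perplexity -m llama3-8b-f32-Q4_K_M.gguf  -f wiki.short.raw
./perplexity -m llama3-8b-auto-Q4_K_M.gguf -f wiki.short.raw
```

Running the unquantized conversions through the same perplexity command gives the fp32-vs-fp16 baseline that the Q4_K_M results are compared against.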