TKDKid1000 committed
Commit 004eab5
1 Parent(s): 3037c36

Add file explanation and attribution

Files changed (1)
1. README.md +14 -0
README.md CHANGED
@@ -25,6 +25,20 @@ Instruct: {prompt}
  Output:
  ```
 
+ ## Provided files
+ | Name | Quant method | Bits | Size | Use case |
+ | ---- | ---- | ---- | ---- | ----- |
+ | [phi-1_5-Q2_K.gguf](https://huggingface.co/TKDKid1000/phi-1_5-GGUF/blob/main/phi-1_5-Q2_K.gguf) | Q2_K | 2 | 0.61 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [phi-1_5-Q3_K_M.gguf](https://huggingface.co/TKDKid1000/phi-1_5-GGUF/blob/main/phi-1_5-Q3_K_M.gguf) | Q3_K_M | 3 | 0.77 GB | very small, high quality loss |
+ | [phi-1_5-Q4_K_M.gguf](https://huggingface.co/TKDKid1000/phi-1_5-GGUF/blob/main/phi-1_5-Q4_K_M.gguf) | Q4_K_M | 4 | 0.92 GB | medium, balanced quality - recommended |
+ | [phi-1_5-Q5_K_M.gguf](https://huggingface.co/TKDKid1000/phi-1_5-GGUF/blob/main/phi-1_5-Q5_K_M.gguf) | Q5_K_M | 5 | 1.06 GB | large, very low quality loss - recommended |
+ | [phi-1_5-Q6_K.gguf](https://huggingface.co/TKDKid1000/phi-1_5-GGUF/blob/main/phi-1_5-Q6_K.gguf) | Q6_K | 6 | 1.17 GB | very large, extremely low quality loss |
+ | [phi-1_5-Q8_0.gguf](https://huggingface.co/TKDKid1000/phi-1_5-GGUF/blob/main/phi-1_5-Q8_0.gguf) | Q8_0 | 8 | 1.51 GB | very large, extremely low quality loss - not recommended |
+
+ **Note**: the file sizes above approximate the RAM required when no layers are offloaded to the GPU. Offloading layers to the GPU reduces RAM usage and uses VRAM instead.
+
+ *Model card template from [TheBloke](https://huggingface.co/TheBloke).*
+
  # Original model card: Microsoft's Phi 1.5
 
  ## Model Summary
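
The files added above are GGUF quantizations intended for llama.cpp-compatible runtimes. As an illustrative sketch (not part of this commit), one of them might be loaded with llama-cpp-python roughly as follows; the chosen file, context size, layer count, and prompt are assumptions, and `n_gpu_layers` is what enables the GPU offloading mentioned in the note:

```python
# Illustrative sketch, not from the commit: load a quantized Phi-1.5 GGUF file
# with llama-cpp-python and optionally offload layers to the GPU.
from llama_cpp import Llama

llm = Llama(
    model_path="phi-1_5-Q4_K_M.gguf",  # assumed local path to a file from the table above
    n_ctx=2048,                        # assumed context window
    n_gpu_layers=-1,                   # offload all layers to the GPU; set 0 for CPU-only
)

# Phi-1.5 uses the "Instruct: ... / Output:" prompt format shown in the README.
result = llm(
    "Instruct: Write a short poem about quantization.\nOutput:",
    max_tokens=128,
    stop=["Instruct:"],
)
print(result["choices"][0]["text"])
```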