Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
# PowerLM-3b - GGUF
- Model creator: https://huggingface.co/ibm/
- Original model: https://huggingface.co/ibm/PowerLM-3b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [PowerLM-3b.Q2_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q2_K.gguf) | Q2_K | 1.25GB |
| [PowerLM-3b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [PowerLM-3b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.IQ3_S.gguf) | IQ3_S | 1.45GB |
| [PowerLM-3b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q3_K_S.gguf) | Q3_K_S | 1.45GB |
| [PowerLM-3b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.IQ3_M.gguf) | IQ3_M | 1.52GB |
| [PowerLM-3b.Q3_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q3_K.gguf) | Q3_K | 1.62GB |
| [PowerLM-3b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q3_K_M.gguf) | Q3_K_M | 1.62GB |
| [PowerLM-3b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q3_K_L.gguf) | Q3_K_L | 1.76GB |
| [PowerLM-3b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.IQ4_XS.gguf) | IQ4_XS | 1.79GB |
| [PowerLM-3b.Q4_0.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q4_0.gguf) | Q4_0 | 1.87GB |
| [PowerLM-3b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.IQ4_NL.gguf) | IQ4_NL | 1.89GB |
| [PowerLM-3b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q4_K_S.gguf) | Q4_K_S | 1.89GB |
| [PowerLM-3b.Q4_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q4_K.gguf) | Q4_K | 2.0GB |
| [PowerLM-3b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q4_K_M.gguf) | Q4_K_M | 2.0GB |
| [PowerLM-3b.Q4_1.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q4_1.gguf) | Q4_1 | 2.07GB |
| [PowerLM-3b.Q5_0.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q5_0.gguf) | Q5_0 | 2.27GB |
| [PowerLM-3b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q5_K_S.gguf) | Q5_K_S | 2.27GB |
| [PowerLM-3b.Q5_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q5_K.gguf) | Q5_K | 2.33GB |
| [PowerLM-3b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q5_K_M.gguf) | Q5_K_M | 2.33GB |
| [PowerLM-3b.Q5_1.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q5_1.gguf) | Q5_1 | 2.47GB |
| [PowerLM-3b.Q6_K.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q6_K.gguf) | Q6_K | 2.69GB |
| [PowerLM-3b.Q8_0.gguf](https://huggingface.co/RichardErkhov/ibm_-_PowerLM-3b-gguf/blob/main/PowerLM-3b.Q8_0.gguf) | Q8_0 | 3.48GB |
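Any of the GGUF files above can be run with a llama.cpp-based runtime. As a minimal sketch (not part of this repo), the snippet below downloads one quant with `huggingface_hub` and loads it through the `llama-cpp-python` bindings; the choice of the Q4_K_M file, the context size, and the generation settings are illustrative assumptions, and PowerLM support depends on your llama.cpp version.

```python
# Minimal sketch: fetch one of the quants above and generate locally.
# Assumes `pip install llama-cpp-python huggingface_hub` and a llama.cpp
# build recent enough to support the PowerLM architecture.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Any filename from the table works; Q4_K_M is a common size/quality tradeoff.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/ibm_-_PowerLM-3b-gguf",
    filename="PowerLM-3b.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
result = llm(
    "Write a code to find the maximum value in a list of numbers.",
    max_tokens=100,
)
print(result["choices"][0]["text"])
```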
Original model description:
---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: transformers
model-index:
- name: ibm/PowerLM-3b
  results:
  - task:
      type: text-generation
    dataset:
      type: lm-eval-harness
      name: ARC
    metrics:
    - name: accuracy-norm
      type: accuracy-norm
      value: 60.5
      verified: false
  - task:
      type: text-generation
    dataset:
      type: lm-eval-harness
      name: BoolQ
    metrics:
    - name: accuracy
      type: accuracy
      value: 72.0
      verified: false
  - task:
      type: text-generation
    dataset:
      type: lm-eval-harness
      name: Hellaswag
    metrics:
    - name: accuracy-norm
      type: accuracy-norm
      value: 74.6
      verified: false
  - task:
      type: text-generation
    dataset:
      type: lm-eval-harness
      name: OpenBookQA
    metrics:
    - name: accuracy-norm
      type: accuracy-norm
      value: 43.6
      verified: false
  - task:
      type: text-generation
    dataset:
      type: lm-eval-harness
      name: PIQA
    metrics:
    - name: accuracy-norm
      type: accuracy-norm
      value: 79.9
      verified: false
  - task:
      type: text-generation
    dataset:
      type: lm-eval-harness
      name: Winogrande
    metrics:
    - name: accuracy-norm
      type: accuracy-norm
      value: 70.0
      verified: false
  - task:
      type: text-generation
    dataset:
      type: lm-eval-harness
      name: MMLU (5 shot)
    metrics:
    - name: accuracy
      type: accuracy
      value: 49.2
      verified: false
  - task:
      type: text-generation
    dataset:
      type: lm-eval-harness
      name: GSM8k (5 shot)
    metrics:
    - name: accuracy
      type: accuracy
      value: 34.9
      verified: false
  - task:
      type: text-generation
    dataset:
      type: lm-eval-harness
      name: math (4 shot)
    metrics:
    - name: accuracy
      type: accuracy
      value: 15.2
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode-eval
      name: humaneval
    metrics:
    - name: pass@1
      type: pass@1
      value: 26.8
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode-eval
      name: MBPP
    metrics:
    - name: pass@1
      type: pass@1
      value: 33.6
      verified: false
---
## Model Summary
PowerLM-3B is a 3B-parameter, state-of-the-art small language model trained with the Power learning-rate scheduler on a mix of open-source and proprietary datasets. It has shown promising results compared with other models in its size category across various benchmarks, including natural-language multiple-choice tasks, code generation, and math reasoning.
Paper: https://arxiv.org/abs/2408.13359
## Usage
Note: requires installing HF transformers from source (e.g. `pip install git+https://github.com/huggingface/transformers.git`).
### Generation
This is a simple example of how to use the **PowerLM-3b** model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # or "cpu"
model_path = "ibm/PowerLM-3b"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
prompt = "Write a code to find the maximum value in a list of numbers."
# tokenize the text
input_tokens = tokenizer(prompt, return_tensors="pt")
# transfer tokenized inputs to the device
for i in input_tokens:
    input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens, max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
    print(i)
```
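If greedy decoding gives repetitive completions, sampling can help. The variant below continues from the snippet above (it reuses `model`, `tokenizer`, and `input_tokens`); the `temperature` and `top_p` values are illustrative assumptions, not tuned recommendations from the model authors.

```python
# Sampling variant of the generate() call above; values are illustrative.
output = model.generate(
    **input_tokens,
    max_new_tokens=100,
    do_sample=True,    # sample from the distribution instead of greedy argmax
    temperature=0.7,   # <1 sharpens the token distribution, >1 flattens it
    top_p=0.9,         # nucleus sampling: keep the smallest set covering 90%
)
print(tokenizer.batch_decode(output, skip_special_tokens=True)[0])
```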
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to. |