---
license: apache-2.0
---
Converted using ggerganov/ggml's "stablelm" conversion script and quantization code as of commit 05f3079 (2023-04-20).
These conversions are based on the updated Pythia Deduped checkpoints, not the original v0 training runs.
Should be compatible with KoboldCpp, and a good fit if you want to run a model on low-end devices like a phone or a Raspberry Pi (see the example below).
NOTE: pythia-70m-deduped gave garbage output in my testing.
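As a rough illustration of using these files, the sketch below sends a prompt to a running KoboldCpp instance over HTTP. It assumes KoboldCpp's default port (5001) and its KoboldAI-compatible `/api/v1/generate` endpoint; the exact request fields can vary between KoboldCpp versions, so treat this as a starting point rather than a reference.

```python
# Minimal sketch: query a running KoboldCpp instance over its KoboldAI-compatible
# HTTP API. Assumes the default host/port (localhost:5001) and the
# /api/v1/generate endpoint; field names may differ across KoboldCpp versions.
import requests

def generate(prompt: str, max_length: int = 80) -> str:
    resp = requests.post(
        "http://localhost:5001/api/v1/generate",
        json={"prompt": prompt, "max_length": max_length, "temperature": 0.7},
        timeout=120,
    )
    resp.raise_for_status()
    # Responses look like {"results": [{"text": "..."}]}
    return resp.json()["results"][0]["text"]

if __name__ == "__main__":
    print(generate("Once upon a time"))
```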
RAM USAGE (koboldcpp):
| Model | Loaded RAM |
|---|---|
| ggml-pythia-70m-deduped-q4_3.bin | 121.2 MiB |
| ggml-pythia-160m-deduped-q4_3.bin | 225.2 MiB |
| ggml-pythia-410m-deduped-q4_3.bin | 498.1 MiB |
| ggml-pythia-1b-deduped-q4_3.bin | 951.5 MiB |
| ggml-pythia-1.4b-deduped-q4_3.bin | 1.3 GiB |
| ggml-pythia-2.8b-deduped-q4_3.bin | 2.4 GiB |
| ggml-pythia-6.9b-deduped-q4_3.bin | 5.4 GiB |
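Figures like the "Loaded RAM" column above can be reproduced by checking the resident set size of the KoboldCpp process after the model has finished loading. The snippet below is a minimal sketch that assumes `psutil` is installed and that "koboldcpp" appears in the process command line; adjust the match to however you launch it.

```python
# Minimal sketch: find the running KoboldCpp process and report its resident
# set size in MiB. Assumes psutil is available and that "koboldcpp" appears
# in the process command line (an assumption, not a guarantee).
import psutil

def koboldcpp_rss_mib() -> float:
    for proc in psutil.process_iter(attrs=["cmdline"]):
        cmdline = " ".join(proc.info["cmdline"] or [])
        if "koboldcpp" in cmdline.lower():
            return proc.memory_info().rss / (1024 * 1024)
    raise RuntimeError("no running koboldcpp process found")

if __name__ == "__main__":
    print(f"{koboldcpp_rss_mib():.1f} MiB resident")
```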