QuantFactory / pythia-12b-GGUF

Tags: Text Generation · GGUF · PyTorch · EleutherAI/pile · English · causal-lm · pythia

arXiv: 2304.01373, 2101.00027, 2201.07311

License: apache-2.0
Files and versions — 3 contributors, 16 commits.

Latest commit: edc828a (verified) by munish0838, "Create README.md", 5 months ago.
| File | Size | Last commit |
|---|---|---|
| .gitattributes | 2.33 kB | Upload pythia-12b.Q4_K_S.gguf with huggingface_hub |
| README.md | 13.8 kB | Create README.md |
| pythia-12b.Q2_K.gguf | 4.5 GB | Upload pythia-12b.Q2_K.gguf with huggingface_hub |
| pythia-12b.Q3_K_L.gguf | 6.79 GB | Upload pythia-12b.Q3_K_L.gguf with huggingface_hub |
| pythia-12b.Q3_K_M.gguf | 6.23 GB | Upload pythia-12b.Q3_K_M.gguf with huggingface_hub |
| pythia-12b.Q3_K_S.gguf | 5.2 GB | Upload pythia-12b.Q3_K_S.gguf with huggingface_hub |
| pythia-12b.Q4_0.gguf | 6.74 GB | Upload pythia-12b.Q4_0.gguf with huggingface_hub |
| pythia-12b.Q4_1.gguf | 7.46 GB | Upload pythia-12b.Q4_1.gguf with huggingface_hub |
| pythia-12b.Q4_K_M.gguf | 7.58 GB | Upload pythia-12b.Q4_K_M.gguf with huggingface_hub |
| pythia-12b.Q4_K_S.gguf | 6.79 GB | Upload pythia-12b.Q4_K_S.gguf with huggingface_hub |
| pythia-12b.Q5_0.gguf | 8.19 GB | Upload pythia-12b.Q5_0.gguf with huggingface_hub |
| pythia-12b.Q5_1.gguf | 8.91 GB | Upload pythia-12b.Q5_1.gguf with huggingface_hub |
| pythia-12b.Q5_K_M.gguf | 8.82 GB | Upload pythia-12b.Q5_K_M.gguf with huggingface_hub |
| pythia-12b.Q5_K_S.gguf | 8.19 GB | Upload pythia-12b.Q5_K_S.gguf with huggingface_hub |
| pythia-12b.Q6_K.gguf | 9.73 GB | Upload pythia-12b.Q6_K.gguf with huggingface_hub |
| pythia-12b.Q8_0.gguf | 12.6 GB | Upload pythia-12b.Q8_0.gguf with huggingface_hub |

All .gguf files are stored via Git LFS; every file above was last updated 5 months ago.
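The quantized files above can be fetched programmatically with `huggingface_hub`, the same library used to upload them. A minimal sketch: `hf_hub_download` is the library's real download helper, while `gguf_filename` is our own convenience for building the `pythia-12b.<quant>.gguf` names seen in the table (not part of any API).

```python
REPO_ID = "QuantFactory/pythia-12b-GGUF"


def gguf_filename(quant: str) -> str:
    """Build the repo's filename for a quantization variant, e.g. 'Q4_K_M'."""
    return f"pythia-12b.{quant}.gguf"


if __name__ == "__main__":
    # Requires `pip install huggingface_hub` and network access;
    # the Q4_K_M file is ~7.58 GB, so this is a large download.
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(repo_id=REPO_ID, filename=gguf_filename("Q4_K_M"))
    print(path)  # local cache path of the downloaded GGUF file
```

The resulting local file can then be loaded by any GGUF-compatible runtime such as llama.cpp; lower quantizations (e.g. Q2_K at 4.5 GB) trade accuracy for memory, while Q8_0 (12.6 GB) is closest to the original weights.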