TheBloke/Synthia-MoE-v3-Mixtral-8x7B-GPTQ

Tags: Text Generation · Transformers · Safetensors · mixtral · text-generation-inference · 4-bit precision · GPTQ
License: apache-2.0
Branch: main · 1 contributor · History: 11 commits
Latest commit: "Update README.md" (34af7eb) by TheBloke, 10 months ago
File                       Size       Last commit          Age
.gitattributes             1.52 kB    initial commit       10 months ago
README.md                  18.3 kB    Update README.md     10 months ago
config.json                2.23 kB    Update config.json   10 months ago
generation_config.json     116 Bytes  GPTQ model commit    10 months ago
model.safetensors (LFS)    23.8 GB    GPTQ model commit    10 months ago
quantize_config.json       185 Bytes  GPTQ model commit    10 months ago
special_tokens_map.json    437 Bytes  GPTQ model commit    10 months ago
tokenizer.json             1.8 MB     GPTQ model commit    10 months ago
tokenizer.model (LFS)      493 kB     GPTQ model commit    10 months ago
tokenizer_config.json      969 Bytes  GPTQ model commit    10 months ago