MaziyarPanahi/Qwen1.5-8x7b-v0.1-GGUF

Pipeline: Text Generation
Tags: Transformers, GGUF, PyTorch, mistral, mixtral, qwen, quantized (2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit precision), Mixture of Experts, axolotl, Generated from Trainer, text-generation-inference, conversational, Inference Endpoints
Dataset: Crystalcareai/MoD-150k
License: tongyi-qianwen
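
The repo ships the model as quantized GGUF files at the bit widths listed above. Below is a minimal sketch, assuming llama-cpp-python and huggingface_hub are installed, of downloading one quant and running text generation locally. The .gguf filename is a placeholder assumption; pick an actual file from the repo's file listing.

```python
# Minimal sketch: download one GGUF quant from this repo and run it with
# llama-cpp-python. The filename below is an assumption -- check the repo's
# file listing for the quant you want (Q2_K .. Q8_0 correspond to the
# 2-bit .. 8-bit tags above).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quantized file from the Hub (assumed filename).
model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Qwen1.5-8x7b-v0.1-GGUF",
    filename="Qwen1.5-8x7b-v0.1.Q4_K_M.gguf",  # hypothetical; substitute a real file
)

# Load the model; n_gpu_layers=-1 offloads all layers to the GPU if one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Simple text-generation call.
out = llm("Explain what a Mixture-of-Experts model is in one paragraph.", max_tokens=256)
print(out["choices"][0]["text"])
```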
Community (6)
How do you convert the MoE composed of Qwen1.5 models into GGUF?
16 · #6 · opened 8 months ago by DisOOM
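
The discussion above asks how a merged Qwen1.5 MoE checkpoint can be turned into GGUF. A rough sketch of the usual llama.cpp conversion path is shown below, assuming a local HF-format checkpoint and a llama.cpp checkout whose converter supports the architecture. The script and binary names (convert_hf_to_gguf.py, llama-quantize) and all paths are assumptions that vary across llama.cpp versions; this is not the uploader's exact procedure.

```python
# Rough sketch (not the author's exact procedure) of the common llama.cpp
# route for converting an HF-format checkpoint to GGUF and then quantizing it.
# Script/binary names and paths are assumptions and differ across llama.cpp versions.
import subprocess

hf_model_dir = "path/to/Qwen1.5-8x7b-v0.1"   # local HF checkpoint (assumed path)
f16_gguf = "Qwen1.5-8x7b-v0.1.fp16.gguf"

# Step 1: convert the HF checkpoint to an unquantized (f16) GGUF file.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", hf_model_dir,
     "--outfile", f16_gguf, "--outtype", "f16"],
    check=True,
)

# Step 2: quantize the GGUF file (Q4_K_M shown; other presets yield the 2- to 8-bit variants).
subprocess.run(
    ["llama.cpp/llama-quantize", f16_gguf, "Qwen1.5-8x7b-v0.1.Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```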