Skylaude/WizardLM-2-4x7B-MoE-exl2-6_0bpw
Tags: Text Generation · Transformers · Safetensors · mixtral · MoE · Merge · mergekit · Mistral · Microsoft/WizardLM-2-7B · text-generation-inference · 6-bit · exl2
License: apache-2.0
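The exl2 and 6-bit tags indicate ExLlamaV2-format weights quantized at roughly 6.0 bits per weight, so the repository is intended for an ExLlamaV2-compatible backend rather than for loading unquantized Transformers weights. A minimal sketch for fetching the full repository locally with huggingface_hub follows; the local_dir path is a placeholder.

```python
from huggingface_hub import snapshot_download

# Download every file in the repo (the ~18 GB of exl2 shards are stored via LFS).
# local_dir is a placeholder; omit it to use the default Hugging Face cache.
model_dir = snapshot_download(
    repo_id="Skylaude/WizardLM-2-4x7B-MoE-exl2-6_0bpw",
    local_dir="models/WizardLM-2-4x7B-MoE-exl2-6_0bpw",
)
print(model_dir)
```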
Files and versions
Branch: main · 1 contributor · History: 5 commits
Latest commit: Update README.md (4fe71d2, verified) by Skylaude, 11 months ago
.gitattributes                        1.52 kB          initial commit      11 months ago
README.md                             726 Bytes        Update README.md    11 months ago
config.json                           1.08 kB          Upload 7 files      11 months ago
mergekit_moe_config.yml               251 Bytes        Upload 7 files      11 months ago
model.safetensors.index.json          53.5 kB          Upload 7 files      11 months ago
output-00001-of-00003.safetensors     8.58 GB (LFS)    Upload 3 files      11 months ago
output-00002-of-00003.safetensors     8.55 GB (LFS)    Upload 3 files      11 months ago
output-00003-of-00003.safetensors     1.18 GB (LFS)    Upload 3 files      11 months ago
special_tokens_map.json               436 Bytes        Upload 7 files      11 months ago
tokenizer.json                        1.8 MB           Upload 7 files      11 months ago
tokenizer.model                       493 kB (LFS)     Upload 7 files      11 months ago
tokenizer_config.json                 995 Bytes        Upload 7 files      11 months ago
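The three output-*.safetensors shards hold the exl2-quantized weights, while config.json, model.safetensors.index.json, and the tokenizer files carry the usual model and tokenizer metadata. A minimal loading sketch follows, assuming the exllamav2 Python package; class and argument names follow its documented examples but may differ between library versions, and the prompt and generation length are placeholders.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Directory produced by the snapshot_download sketch above (placeholder path).
model_dir = "models/WizardLM-2-4x7B-MoE-exl2-6_0bpw"

config = ExLlamaV2Config(model_dir)       # reads config.json and the exl2 shards
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate KV cache as layers are loaded
model.load_autosplit(cache)               # split layers across available GPU memory
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello, my name is", max_new_tokens=64))
```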