DanielClough/Candle_Mistral-7B-Instruct-v0.1

Text Generation · Transformers · GGUF · mistral · text-generation-inference · Inference Endpoints · License: apache-2.0
Branch: main · 2 contributors · History: 6 commits
Latest commit: "add config.json" (c78856b) by DanielClough, 7 months ago
| File | Size | LFS | Last commit message | Last updated |
|---|---|---|---|---|
| .gitattributes | 1.56 kB | | init | 10 months ago |
| Candle_Mistral-7B-Instruct-v0.1_f16.gguf | 14.5 GB | LFS | add more quantized models | 10 months ago |
| Candle_Mistral-7B-Instruct-v0.1_q2k.gguf | 2.38 GB | LFS | add more quantized models | 10 months ago |
| Candle_Mistral-7B-Instruct-v0.1_q3k.gguf | 3.11 GB | LFS | add more quantized models | 10 months ago |
| Candle_Mistral-7B-Instruct-v0.1_q4_0.gguf | 4.07 GB | LFS | add more quantized models | 10 months ago |
| Candle_Mistral-7B-Instruct-v0.1_q4_1.gguf | 4.53 GB | LFS | add more quantized models | 10 months ago |
| Candle_Mistral-7B-Instruct-v0.1_q4k.gguf | 4.07 GB | LFS | add more quantized models | 10 months ago |
| Candle_Mistral-7B-Instruct-v0.1_q5_0.gguf | 4.98 GB | LFS | add more quantized models | 10 months ago |
| Candle_Mistral-7B-Instruct-v0.1_q5_1.gguf | 5.43 GB | LFS | add more quantized models | 10 months ago |
| Candle_Mistral-7B-Instruct-v0.1_q5k.gguf | 4.98 GB | LFS | add more quantized models | 10 months ago |
| Candle_Mistral-7B-Instruct-v0.1_q6k.gguf | 5.94 GB | LFS | init | 10 months ago |
| Candle_Mistral-7B-Instruct-v0.1_q8_0.gguf | 7.7 GB | LFS | add more quantized models | 10 months ago |
| Candle_Mistral-7B-Instruct-v0.1_q8_1.gguf | 8.15 GB | LFS | add more quantized models | 10 months ago |
| Candle_Mistral-7B-Instruct-v0.1_q8k.gguf | 8.26 GB | LFS | add more quantized models | 10 months ago |
| README.md | 423 Bytes | | update readme | 10 months ago |
| config.json | 571 Bytes | | add config.json | 7 months ago |
| convert.py | 283 Bytes | | init | 10 months ago |
| model-00001-of-00002.safetensors | 9.94 GB | LFS | fix filenames | 8 months ago |
| model-00002-of-00002.safetensors | 4.54 GB | LFS | fix filenames | 8 months ago |
| tokenizer.json | 1.8 MB | | init | 10 months ago |
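The file sizes above give a rough sense of what each quantization level costs per parameter. A minimal sketch, assuming Mistral-7B-Instruct-v0.1 has roughly 7.24 billion parameters (an estimate, not stated in this repo) and ignoring GGUF metadata overhead:

```python
# Approximate bits per weight implied by the GGUF file sizes in the table above.
# ASSUMPTION: ~7.24e9 parameters for Mistral-7B-Instruct-v0.1; sizes use 1 GB = 1e9 bytes.
PARAMS = 7.24e9

SIZES_GB = {
    "f16":  14.5,
    "q2k":  2.38,
    "q4_0": 4.07,
    "q5_0": 4.98,
    "q8_0": 7.7,
}

def bits_per_weight(size_gb: float, n_params: float = PARAMS) -> float:
    """Convert a file size in GB to an approximate bits-per-parameter figure."""
    return size_gb * 1e9 * 8 / n_params

for name, gb in SIZES_GB.items():
    print(f"{name}: {bits_per_weight(gb):.2f} bits/weight")
```

Under these assumptions, f16 works out to about 16 bits/weight and q4_0 to about 4.5 bits/weight, which is a quick sanity check when choosing a quant to fit a given amount of RAM or VRAM.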