FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-GGUF
Tags: GGUF · llama.cpp · Inference Endpoints
Model Description
Model type: Text Generation
Language: English
Finetuned from model: NousResearch/Yarn-Mistral-7b-64k
Hardware Type: Ryzen 4090
Hours used: 37 hours
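The card does not include a usage snippet, so below is a minimal sketch of running one of the GGUF files with llama-cpp-python. The .gguf filename, the 64k context setting, and the Alpaca-style prompt template are assumptions (the fine-tuning data is alpaca-cleaned, per the repo name); check the Files tab for the actual filenames.

```python
# Minimal sketch using llama-cpp-python to run a GGUF quantization locally.
from llama_cpp import Llama

llm = Llama(
    model_path="yarn-mistral-7b-64k-instruct-alpaca-cleaned.Q4_K.gguf",  # hypothetical filename
    n_ctx=65536,       # YaRN-extended 64k context; lower this if you run out of memory
    n_gpu_layers=-1,   # offload all layers if llama-cpp-python was built with GPU support
)

# Alpaca-style prompt, assumed from the alpaca-cleaned fine-tuning data.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarize the YaRN context-extension method in two sentences.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```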
GGUF
Model size: 7.24B params
Architecture: llama
Available quantizations:
2-bit: Q2_K
3-bit: Q3_K
4-bit: Q4_0, Q4_K
5-bit: Q5_K
6-bit: Q6_K
8-bit: Q8_0
16-bit: F16
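Each quantization is a standalone .gguf file, so you only need to download the one you plan to run. A sketch with huggingface_hub follows; the filename passed to hf_hub_download is a placeholder, so list the repo's files first to get the real names.

```python
# Sketch: fetch a single quantization instead of cloning the whole repository.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-GGUF"

# Print the .gguf files that actually exist in the repo.
for name in list_repo_files(repo_id):
    if name.endswith(".gguf"):
        print(name)

# Download one quantization (replace with a real filename printed above).
path = hf_hub_download(
    repo_id=repo_id,
    filename="yarn-mistral-7b-64k-instruct-alpaca-cleaned.Q5_K.gguf",  # hypothetical filename
)
print("Saved to", path)
```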
Model tree for FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-GGUF
Base model: NousResearch/Yarn-Mistral-7b-64k
Quantized versions of the base model: 5 (this model is one of them)