
This is a hacked-together version of the new Mistral-7B-v0.2 FP16 weights, downloaded directly from Mistral AI's CDN.

The conversion was done by directly converting the monolithic pickle file to safetensors and building the weight index, which is a suboptimal approach.
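
Below is a minimal sketch of what such a direct conversion looks like, assuming the source checkpoint is a single consolidated PyTorch pickle file (the filename `consolidated.00.pth` and single-shard output name are assumptions, not the exact script used here):

```python
# Sketch: convert a monolithic PyTorch pickle checkpoint to safetensors
# and build the index file transformers expects. Filenames are assumed.
import json
import torch
from safetensors.torch import save_file

# Load the monolithic pickle checkpoint onto CPU.
state_dict = torch.load("consolidated.00.pth", map_location="cpu")

# Safetensors requires contiguous, non-shared tensors, so clone each one.
state_dict = {k: v.contiguous().clone() for k, v in state_dict.items()}

# Save everything into a single shard (suboptimal: no size-based sharding).
shard_name = "model-00001-of-00001.safetensors"
save_file(state_dict, shard_name, metadata={"format": "pt"})

# Build the index mapping every weight name to its shard.
index = {
    "metadata": {
        "total_size": sum(v.numel() * v.element_size() for v in state_dict.values())
    },
    "weight_map": {k: shard_name for k in state_dict},
}
with open("model.safetensors.index.json", "w") as f:
    json.dump(index, f, indent=2)
```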

Credit to Mistral AI and the amazing team over there, and to Cognitive Computations, especially Eric Hartford, for tutelage and for helping navigate the LLM landscape.

As this is a mix of Mistral 7B v0.1 and Mistral 7B v0.2 files, it should be considered a pre-alpha.

This conversion is suboptimal, and I would recommend using https://huggingface.co/alpindale/Mistral-7B-v0.2-hf for the FP16 weights until Mistral AI does their official release.
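
If you do want to try the weights, a standard transformers loading snippet like the one below should work; the repo id is the one linked above, and the generation settings are just illustrative defaults:

```python
# Sketch: load the recommended FP16 weights with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "alpindale/Mistral-7B-v0.2-hf"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # load the weights in FP16
    device_map="auto",          # place layers on available devices
)

inputs = tokenizer("Mistral 7B v0.2 is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```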

Safetensors · Model size: 7.24B params · Tensor type: BF16