
The emissions-extraction-lora adapter merged with mistralai/Mistral-7B-Instruct-v0.2, converted to GGUF format and quantized. It can be used with llama.cpp.
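A minimal sketch of running the model with llama.cpp's CLI. The GGUF filename and quantization level below are assumptions; substitute the actual file downloaded from this repository:

```shell
# Hypothetical filename -- replace with the .gguf file from this repository.
# -m: path to the quantized model; -p: prompt in Mistral's [INST] chat format.
./llama-cli \
    -m emissions-extraction-lora-merged.Q4_K_M.gguf \
    -p "[INST] Your instruction here [/INST]"
```

Older llama.cpp builds name the binary `main` instead of `llama-cli`; the flags are the same.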

Model details:

- Format: GGUF
- Model size: 7.24B params
- Architecture: llama
