Model Card for bling-stablelm-3b-gguf

bling-stablelm-3b-gguf is a 4_K_M GGUF quantized version of BLING Stable-LM 3B, providing a fast, small-footprint implementation for inference on CPUs.

bling-stablelm-3b is a fact-based question-answering model, optimized for complex business documents.

To pull the model via API:

# download the GGUF snapshot from the Hugging Face Hub
from huggingface_hub import snapshot_download
snapshot_download("llmware/bling-stablelm-3b-gguf", local_dir="/path/on/your/machine/", local_dir_use_symlinks=False)

Load the model in your favorite GGUF inference engine (a llama-cpp-python sketch follows the llmware example below), or try it with llmware as follows:

from llmware.models import ModelCatalog

model = ModelCatalog().load_model("llmware/bling-stablelm-3b-gguf")
text_sample = "your source document passage"   # context passage to query against
query = "your fact-based question"
response = model.inference(query, add_context=text_sample, add_prompt_engineering="default_with_context")
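
If you prefer a generic GGUF runtime, here is a minimal sketch using llama-cpp-python; the .gguf filename and the prompt template below are assumptions, so check the repository file listing and config.json for the exact values:

from llama_cpp import Llama

# point model_path at the downloaded file; the exact filename is assumed here
llm = Llama(model_path="/path/on/your/machine/bling-stablelm-3b.gguf", n_ctx=2048)
# BLING-style prompt wrapper assumed - confirm the template in config.json
prompt = "<human>: " + "your source document passage" + "\n" + "your fact-based question" + "\n<bot>:"
output = llm(prompt, max_tokens=200)
print(output["choices"][0]["text"])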

Note: please review config.json in the repository for prompt wrapping information, details on the model, and the full test set.
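
As a concrete illustration, a BLING-style wrapper might look like the sketch below; the <human>/<bot> template is an assumption, so verify it against config.json before relying on it:

def wrap_prompt(context: str, question: str) -> str:
    # assumed BLING-style template - confirm against config.json
    return "<human>: " + context + "\n" + question + "\n<bot>:"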

Model Description

  • Developed by: llmware
  • Model type: GGUF
  • Model size: 2.8B parameters
  • Architecture: stablelm
  • Language(s) (NLP): English
  • License: CC-BY-SA-4.0
  • Quantized from model: llmware/bling-stablelm-3b

Model Card Contact

Darren Oberst & llmware team
