---
license: apache-2.0
language:
- en
tags:
- HuggingFace
- defog/sqlcoder.gguf
- sqlcoder-7b-2.gguf
pipeline_tag: text-generation
---
Model Details
I do not claim ownership of this model.
It was converted to 8-bit (Q8_0) GGUF format from the original repository: huggingface.co/defog/sqlcoder-7b-2
Model Description
Developed by: Defog AI
Model Sources
Repository: https://huggingface.co/defog/sqlcoder-7b-2
Example usage
With LlamaCpp (llama.cpp via langchain_community):
from langchain_community.llms.llamacpp import LlamaCpp
from huggingface_hub import hf_hub_download
YOUR_MODEL_DIRECTORY = None
CONTEXT_LENGTH = None
MAX_TOKENS = None
BATCH_SIZE = None
TEMPERATURE = None
GPU_OFFLOAD = None
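# Illustrative settings only (not from the original card) -- tune them for your
# hardware and prompt sizes, for example:
#   CONTEXT_LENGTH = 4096   # prompt window; sqlcoder prompts include the schema
#   MAX_TOKENS = 512        # upper bound on the generated SQL
#   BATCH_SIZE = 512        # llama.cpp prompt-processing batch size
#   TEMPERATURE = 0.0       # deterministic output is usually preferred for SQL
#   GPU_OFFLOAD = -1        # offload all layers to GPU; use 0 for CPU-only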
def load_model(model_id, model_basename):
    model_path = hf_hub_download(
        repo_id=model_id,
        filename=model_basename,
        resume_download=True,
        cache_dir=YOUR_MODEL_DIRECTORY,
    )
    kwargs = {
        'model_path': model_path,
        'n_ctx': CONTEXT_LENGTH,
        'max_tokens': MAX_TOKENS,
        'n_batch': BATCH_SIZE,
        'n_gpu_layers': GPU_OFFLOAD,
        'temperature': TEMPERATURE,
        'verbose': True,
    }
    return LlamaCpp(**kwargs)
llm = load_model(
    model_id="whoami02/defog-sqlcoder-2-GGUF",
    model_basename="sqlcoder-7b-2.q8_0.gguf",
)
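Once loaded, the model is prompted with a natural-language question together with the relevant table schema. The snippet below is a minimal sketch: the schema and the prompt layout are invented for illustration, so refer to the original defog/sqlcoder-7b-2 card for the exact recommended prompt template.

# Minimal usage sketch -- the schema and prompt format here are illustrative,
# not the official Defog template.
schema = """
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER,
    amount DECIMAL(10, 2),
    created_at DATE
);
"""

question = "What is the total order amount per customer?"

prompt = f"""### Task
Generate a SQL query to answer the following question: {question}

### Database Schema
{schema}

### SQL
"""

sql_query = llm.invoke(prompt)
print(sql_query)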