# stablelm-zephyr-3B-localmentor-GGUF

- Model creator: remyxai
- Original model: stablelm-zephyr-3B_localmentor
- GGUF quantization: llama.cpp commit fadde6713506d9e6c124f5680ab8c7abebe31837
## Description

A fine-tune of stablelm-zephyr-3b with low-rank adapters on 25K conversational turns about tech and startups, drawn from over 800 podcast episodes.
- Developed by: Remyx.AI
- License: apache-2.0
- Finetuned from model: stablelm-zephyr-3b
- Repository: https://github.com/remyxai/LocalMentor
## Prompt Template

Following the tokenizer_config.json, the prompt template is Zephyr:

```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
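The template can be assembled programmatically before passing the prompt to a GGUF runtime. A minimal sketch (the helper name is hypothetical, not part of the repository): each turn ends with the `</s>` stop token, and the `<|assistant|>` tag is left open for the model to complete.

```python
def build_zephyr_prompt(system_prompt: str, prompt: str) -> str:
    # Assemble the Zephyr chat template: system and user turns each
    # terminate with </s>; the assistant tag stays open so the model
    # generates the reply after it.
    return (
        f"<|system|>\n{system_prompt}</s>\n"
        f"<|user|>\n{prompt}</s>\n"
        f"<|assistant|>\n"
    )

print(build_zephyr_prompt("You are a startup mentor.", "How do I validate an idea?"))
```

The resulting string can be supplied as the raw prompt to llama.cpp-based tooling that does not apply a chat template automatically.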