---
license: apache-2.0
tags:
- generated_from_trainer
- mistral
- 12b
- calme
model-index:
- name: Calme-12B-Instruct-v0.1
  results: []
model_name: Calme-12B-Instruct-v0.1
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

# MaziyarPanahi/Calme-12B-Instruct-v0.1

## Model Description

Calme-12B is a state-of-the-art language model with 12 billion parameters, merged and fine-tuned on high-quality datasets on top of Calme-7B-Instruct-v0.9. Like the Calme-7B models it builds on, it excels at generating text that resonates with clarity, calmness, and coherence.

### How to Use

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="MaziyarPanahi/Calme-12B-Instruct-v0.1")

# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Calme-12B-Instruct-v0.1")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Calme-12B-Instruct-v0.1")
```

### Quantized Models

> I love how GGUF democratizes the use of Large Language Models (LLMs) on commodity hardware, specifically personal computers without any accelerated hardware. Because of this, I am committed to converting and quantizing any models I fine-tune to make them accessible to everyone!

- GGUF (2/3/4/5/6/8 bits): [MaziyarPanahi/Calme-12B-Instruct-v0.1-GGUF](https://huggingface.co/MaziyarPanahi/Calme-12B-Instruct-v0.1-GGUF)
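As a minimal sketch of running one of these GGUF files on CPU, assuming `llama-cpp-python` and `huggingface_hub` are installed; the exact `.gguf` filename below is an assumption, so check the GGUF repository for the actual file names:

```python
# Sketch: download a quantized GGUF file and run it on CPU with llama-cpp-python.
# The filename below is hypothetical; browse the GGUF repo for the real names.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="MaziyarPanahi/Calme-12B-Instruct-v0.1-GGUF",
    filename="Calme-12B-Instruct-v0.1.Q4_K_M.gguf",  # hypothetical filename
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size here is an assumption
output = llm("Explain GGUF quantization in one sentence.", max_tokens=128)
print(output["choices"][0]["text"])
```

Lower-bit files (2/3 bits) trade quality for memory, while the 4- and 5-bit variants are often a reasonable middle ground on machines with limited RAM.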