---
base_model: stabilityai/stablelm-3b-4e1t
datasets: Open-Orca/SlimOrca
tags:
- stablelm-3b-4e1t
- instruct
- finetune
model-index:
- name: slimorca-stablelm-3b-4e1t
  results: []
license: cc-by-sa-4.0
language:
- en
---

[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.

# slimorca-stablelm-3b-4e1t - GGUF

- Model creator: [pansophic](https://huggingface.co/pansophic)
- Original model: [slimorca-stablelm-3b-4e1t](https://huggingface.co/pansophic/slimorca-stablelm-3b-4e1t)

# StableLM

This is a model based on StableLM. StableLM is a family of language models by Stability AI.

## Note:

Current (as of 2023-11-15) implementations of llama.cpp only support GPU offloading of up to 34 layers with these StableLM models. The model will crash immediately if `-ngl` is larger than 34. Without any GPU acceleration, however, the model works fine.

# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library. A growing list of software uses it and can therefore run this model. The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.

# Quantization variants

A number of quantized files are available to cater to your specific needs. Here's how to choose the best option for you:

# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types. Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.

## Note:

There is now an option to use K-quants even for previously 'incompatible' models, although this involves a fallback solution that makes them not *real* K-quants. More details can be found in the affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)

# K-quants

K-quants are designed around the idea that applying different levels of quantization to specific parts of the model can optimize performance, file size, and memory load. So, if possible, use K-quants. With a Q6_K quant, you will likely find it hard to discern any quality difference from the original model - ask your model the same question twice and you may see bigger quality differences between the two answers.

---

# Original Model Card:

# Model Card for OpenOrca-Phi

Full finetuning of Stability AI's StableLM-3B-4E1T. The model was trained on the SlimOrca dataset. All samples longer than the context size were removed.
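
---

# Usage sketch

Not part of the original card: a minimal shell sketch showing how the 34-layer `-ngl` offload limit mentioned above could be enforced before invoking llama.cpp. The quant filename and the `./main` binary path are assumptions - adjust them to your local setup.

```shell
# Hypothetical quant filename - pick whichever variant you downloaded.
MODEL="slimorca-stablelm-3b-4e1t.Q4_K_M.gguf"

# Layers the user asked to offload (default 40 for illustration).
REQUESTED_NGL=${1:-40}

# Cap at 34 to avoid the crash described in the note above.
NGL=$(( REQUESTED_NGL > 34 ? 34 : REQUESTED_NGL ))
echo "offloading $NGL layers"

# Actual llama.cpp invocation (commented out; requires the model file):
# ./main -m "$MODEL" -ngl "$NGL" -p "Hello"
```

Clamping the value in a wrapper script like this is just one way to stay under the limit; passing `-ngl 34` directly works equally well.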