---
language:
- en
license: cc-by-nc-4.0
base_model: Sao10K/L3-8B-Stheno-v3.2
datasets:
- Gryphe/Opus-WritingPrompts
- Sao10K/Claude-3-Opus-Instruct-15K
- Sao10K/Short-Storygen-v2
- Sao10K/c2-Logs-Filtered
model_name: L3-8B-Stheno-v3.2-GGUF
quantized_by: brooketh
parameter_count: 8030261248
---
The official library of GGUF format models for use in the local AI chat app, Backyard AI.

**Download Backyard AI here to get started.**

Request additional models at r/LLM_Quants.
***
# Llama 3 Stheno V3.2 8B
- **Creator:** [Sao10K](https://huggingface.co/Sao10K/)
- **Original:** [Llama 3 Stheno V3.2 8B](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
- **Date Created:** 2024-06-05
- **Trained Context:** 8192 tokens
- **Description:** High-performing roleplaying model trained on thousands of SFW and NSFW short stories curated from Reddit. Said to be more balanced than previous iterations of this model, with better multi-turn coherency but slightly less creativity.
***
## What is a GGUF?
GGUF is a large language model (LLM) format whose workload can be split between CPU and GPU. GGUFs are compatible with applications based on llama.cpp, such as Backyard AI. Where other model formats require higher-end GPUs with ample VRAM, GGUFs can be run efficiently on a wider variety of hardware. GGUF models are quantized to reduce resource usage, with a tradeoff of reduced coherence at lower quantization levels. Quantization reduces the precision of the model weights by changing the number of bits used for each weight.
***
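## Example: loading a GGUF
Backyard AI handles model loading for you, but for the curious, the CPU/GPU split described above can be seen directly in the llama-cpp-python bindings. The sketch below is illustrative only; the quant filename is hypothetical, and the layer count you can offload depends on your VRAM.

```python
from llama_cpp import Llama

# Load a quantized GGUF, splitting work between CPU and GPU.
llm = Llama(
    model_path="L3-8B-Stheno-v3.2.Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=8192,        # match the model's trained context length
    n_gpu_layers=20,   # layers offloaded to GPU; 0 = CPU-only, -1 = all layers
)

output = llm("Write a two-sentence story about a lighthouse.", max_tokens=64)
print(output["choices"][0]["text"])
```

As a rough illustration of what quantization does to the weights, the toy example below maps float32 values to signed 4-bit integers with a single scale factor. This is a deliberate simplification; llama.cpp's actual k-quant formats are block-wise, with per-block scales, which is why they preserve coherence better than this naive scheme would.

```python
import numpy as np

# Toy symmetric 4-bit quantization: each weight becomes an integer in -8..7
# plus one shared float scale, i.e. 4 bits per weight instead of 32.
weights = np.array([0.12, -0.57, 0.33, 0.91, -0.04], dtype=np.float32)
scale = np.abs(weights).max() / 7                      # map the largest weight to 7
q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
restored = q.astype(np.float32) * scale                # dequantized approximation
print(q)         # [ 1 -4  3  7  0]
print(restored)  # close to, but not exactly, the original weights
```
***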