---
base_model:
- v000000/L3-8B-Poppy-Sunspice
- ResplendentAI/BlueMoon_Llama3
library_name: transformers
tags:
- mergekit
- merge
- llama
- not-for-all-audiences
---
### ✦Llama-3-8B-Poppy-Moonfall-OG

RP model. [Original Poppy Sunspice](https://huggingface.co/v000000/L3-8B-Poppy-Sunspice) with the Bluemoon LoRA applied. Really good result.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/tg8oOLj6RRnfaYhPSZI4n.png)

# Thanks mradermacher for the quants!
* [GGUF Q2-Q8](https://huggingface.co/mradermacher/L3-8B-Poppy-Moonfall-OG-GGUF)

# Quants
* [GGUF Q8_0](https://huggingface.co/v000000/L3-8B-Poppy-Moonfall-OG-Q8_0-GGUF)

# Update/Notice:
This model has a tendency toward endless generations. If that occurs, add a bit of repetition penalty to your sampler settings (both usage sketches at the end of this card include an example value).

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:
* [v000000/L3-8B-Poppy-Sunspice](https://huggingface.co/v000000/L3-8B-Poppy-Sunspice) + [ResplendentAI/BlueMoon_Llama3](https://huggingface.co/ResplendentAI/BlueMoon_Llama3)

### Configuration

The following YAML configuration was used to produce this model (a Python sketch that reproduces the merge appears at the end of this card):

```yaml
models:
  - model: v000000/L3-8B-Poppy-Sunspice+ResplendentAI/BlueMoon_Llama3
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16
```

# Prompt Template:
```bash
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
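# Usage example (transformers)

A minimal transformers sketch, assuming the repo id `v000000/L3-8B-Poppy-Moonfall-OG` and that the bundled tokenizer carries the Llama-3 chat template shown above; the messages and sampler values are illustrative, not recommendations from the model author.

```python
# Minimal sketch: load the model and generate with the Llama-3 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v000000/L3-8B-Poppy-Moonfall-OG"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a roleplay assistant."},
    {"role": "user", "content": "Describe the moonlit garden."},
]
# apply_chat_template renders the <|start_header_id|>... format shown above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    repetition_penalty=1.1,  # per the notice above: curbs endless generations
    eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"),
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```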
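# Usage example (GGUF via llama-cpp-python)

A sketch for running one of the GGUF quants linked above with llama-cpp-python; the local filename is an assumption (pick whichever quant you downloaded).

```python
# Minimal sketch: chat completion against a local GGUF quant.
from llama_cpp import Llama

llm = Llama(
    model_path="./L3-8B-Poppy-Moonfall-OG.Q8_0.gguf",  # assumed local path
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a roleplay assistant."},
        {"role": "user", "content": "Describe the moonlit garden."},
    ],
    max_tokens=256,
    temperature=0.8,
    repeat_penalty=1.1,  # per the notice above: curbs endless generations
)
print(out["choices"][0]["message"]["content"])
```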
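# Reproducing the merge (mergekit)

Configs like the one in the Configuration section are usually run with mergekit's `mergekit-yaml` CLI; below is a Python sketch mirroring the example in mergekit's own README. The config and output paths are assumptions.

```python
# Sketch: run the linear merge from the YAML config above via mergekit's API.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumed filename containing the YAML configuration shown above.
with open("poppy-moonfall-og.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./L3-8B-Poppy-Moonfall-OG",  # assumed output directory
    options=MergeOptions(copy_tokenizer=True),
)
```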