# ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
## Overview
ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes is a cutting-edge fusion of top-tier Llama 3.1 models, meticulously crafted to balance powerful instruction-following, immersive storytelling, and logical reasoning. This merge integrates the strengths of SuperNova, EtherealHermes, and additional high-performance models, resulting in an adaptable and dynamic AI.
This model is governed by the Meta Llama 3.1 Community License Agreement and is optimized for long-form generation, multi-step reasoning, and roleplay applications.
## Key Features
- Advanced Instruction Following – Leverages high-context retention for accurate and logical responses.
- Enhanced Roleplay & Storytelling – Supports immersive dialogue, lore-building, and dynamic narrative generation.
- Long-Form Content Generation – Capable of producing detailed, coherent text over extended passages.
- Adaptive Multi-Domain Performance – Handles research, fiction writing, technical content, and conversation seamlessly.
- Highly Efficient Processing – Optimized quantization and inference mechanisms ensure smooth deployment.
## Merged Models
This model is the result of a carefully calibrated merge of the following models:
- djuna/L3.1-Purosani-2-8B – A high-performance Llama 3.1 model emphasizing instruction-following and contextual coherence.
- invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B – Focuses on creative storytelling, world-building, and conversational depth.
- ZeroXClem/L3SAO-Mix-SuperHermes-NovaPurosani-8B – A hybrid powerhouse that integrates Hermes3, SuperNova, and Purosani architectures.
- ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B – Enhances multi-step inference, logical alignment, and long-form composition.
This curated selection ensures the model is equipped with both technical precision and artistic creativity.
## Merge Configuration
The model was merged using Model Stock methodology with bfloat16 precision to ensure a seamless blend of capabilities. The YAML configuration is as follows:
```yaml
# Merge configuration for ZeroXClem-Llama-3.1-8B-SuperNova-EtherealHermes using Model Stock
name: ZeroXClem-Llama-3.1-8B-SuperNova-EtherealHermes
base_model: invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B
dtype: bfloat16
merge_method: model_stock
models:
  - model: ZeroXClem/L3SAO-Mix-SuperHermes-NovaPurosani-8B
  - model: ZeroXClem/Llama3.1-Hermes3-SuperNova-8B-L3.1-Purosani-2-8B
  - model: djuna/L3.1-Purosani-2-8B
  - model: ZeroXClem/Llama-3.1-8B-SuperTulu-LexiNova
tokenizer_source: invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B
```
This ensures logical coherence, creative diversity, and robust performance across various AI tasks.
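For reference, the same configuration can also be applied programmatically. The snippet below is a minimal sketch, not the exact procedure used for this release: it assumes `mergekit` is installed (`pip install mergekit`), that the YAML above has been saved to a hypothetical `merge_config.yaml`, and that the output directory name is arbitrary.

```python
# Minimal sketch: running the Model Stock merge above via mergekit's Python API.
# Assumes `pip install mergekit` and the YAML config saved as merge_config.yaml (hypothetical filename).
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge_config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Llama-3.1-8B-SuperNova-EtherealHermes",  # output directory (arbitrary)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy the tokenizer from tokenizer_source
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```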
## How to Use
### Ollama (Quick Inference)
You can run the model using Ollama for direct testing:
```bash
ollama run hf.co/ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes
```
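If you prefer calling the model from Python rather than the CLI, the official `ollama` client can be used. The snippet below is a minimal sketch: it assumes `pip install ollama`, a local Ollama server running, and that the model has already been pulled with the command above.

```python
# Minimal sketch: querying the model through a locally running Ollama server.
# Assumes `pip install ollama`, `ollama serve` running, and the model already pulled.
import ollama

response = ollama.chat(
    model="hf.co/ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes",
    messages=[
        {"role": "system", "content": "Think step by step with logical reasoning before providing any response."},
        {"role": "user", "content": "Describe the significance of AI ethics in modern technology."},
    ],
)
print(response["message"]["content"])
```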
### Hugging Face Transformers (Python)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

model_name = "ZeroXClem/Llama-3.1-8B-SuperNova-EtherealHermes"

# Load tokenizer & model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Initialize text generation pipeline
text_generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

# Example prompt
prompt = "Describe the significance of AI ethics in modern technology."

# Generate output
outputs = text_generator(
    prompt,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95
)

print(outputs[0]["generated_text"])
```
## Best Practices
- **Use System Prompts:** For best performance, add a system instruction before inference, for example: "Think step by step with logical reasoning before providing any response." (See the sketch after this list.)
- **Uncensored Mode:** For more unrestricted output, set the system message to "." or customize it accordingly.
- **Quantization Considerations:** `Q4` may lead to refusal issues due to loss of fine-tuning alignment. `F16` or `Q8` is recommended for optimal inference quality.
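As an illustration of the system-prompt advice above, the following sketch applies a system message through the tokenizer's chat template before generation. It reuses the `model` and `tokenizer` objects from the Transformers example and assumes the repository ships a standard Llama 3.1 chat template.

```python
# Sketch: prepending a system prompt via the tokenizer's chat template.
# Reuses `model` and `tokenizer` from the Transformers example above.
messages = [
    {"role": "system", "content": "Think step by step with logical reasoning before providing any response."},
    {"role": "user", "content": "Describe the significance of AI ethics in modern technology."},
]

# Build the prompt with the model's chat template
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)

# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```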
## License
This model is released under the Meta Llama 3.1 Community License Agreement.
**Disclaimer:** This model is highly compliant and uncensored. It is the user's responsibility to ensure ethical and appropriate usage, especially in public-facing applications.
## Future Improvements
- Enhanced ethical alignment while preserving model capabilities.
- Further fine-tuning for domain-specific reasoning tasks.
- Expanded dataset integration for better real-world knowledge representation.
## Special Thanks
A heartfelt thank you to:
- djuna for L3.1-Purosani-2-8B.
- invisietch for L3.1-EtherealRainbow.
- MergeKit Community for advancing open-source merging techniques.
- The Hugging Face & open-source AI ecosystem for continued AI innovation.
Your contributions fuel the progress of next-gen AI models!
## Feedback & Contributions
If you encounter any issues, have suggestions, or wish to contribute, feel free to open a discussion or submit a pull request.