---
license: mit
language:
  - en
base_model:
  - mistralai/Mistral-Large-Instruct-2411
tags:
  - axolotl
datasets:
  - NarrativAI/CakrawalaRP
pipeline_tag: text-generation
---

# 🎭 Cakrawala-123B

*Where Worlds Converge and Adventures Begin!*

## 🌟 What's Special About This Model?

Cakrawala-123B is a fine-tuned variant of Mistral-Large-Instruct-2411, optimised for generating rich roleplaying conversations and character interactions. It has been trained to produce detailed, contextually appropriate character dialogue with vivid descriptions of physical actions, expressions, and emotional states, while maintaining consistent character voices and perspectives throughout extended interactions.
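As an instruct-style chat model, it can be served through the standard `transformers` text-generation pipeline. The sketch below is illustrative only: the repo id `NarrativAI/Cakrawala-123B` and the persona prompt are assumptions, the pipeline call is left commented out because a 123B-parameter model requires multiple high-memory GPUs or aggressive quantization to load.

```python
def build_messages(system_prompt: str, user_turn: str) -> list[dict]:
    """Assemble a chat-format message list for a roleplay turn."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_turn},
    ]

messages = build_messages(
    "You are Kara, a sardonic sky-pirate captain. Stay in character and "
    "describe actions, expressions, and emotions in detail.",
    "*steps onto the deck* Captain, the storm is closing in fast.",
)

# Hypothetical usage (repo id is an assumption; requires substantial GPU memory):
# from transformers import pipeline
# generator = pipeline("text-generation", model="NarrativAI/Cakrawala-123B",
#                      device_map="auto", torch_dtype="auto")
# print(generator(messages, max_new_tokens=300)[0]["generated_text"])
```

The system prompt carries the character card; the model then continues the conversation in that voice.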

## 🧪 The Secret Sauce

**Training Diet:**

- Fed with the NarrativAI/CakrawalaRP dataset
- Conversation pairs with detailed interactions
- Focused on maintaining character consistency and rich descriptions

**Tech Wizardry:**

- Base Model: Mistral-Large-Instruct-2411
- Fine-tuned using QLoRA
- Trained over 2 epochs

**Training Parameters:**

- Gradient Accumulation Steps: 1
- Micro Batch Size: 4
- Learning Rate: 1.5e-5
- Optimizer: AdamW
- Scheduler: Cosine
- Mixed Precision: BF16 & FP16 with TF32 support
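
Taken together, the hyperparameters above can be summarised as a plain configuration mapping. This is a hypothetical reconstruction for readability (key names loosely follow Hugging Face/axolotl conventions; the actual training config is not published here):

```python
# Reconstruction of the hyperparameters listed above (illustrative, not the
# original axolotl config file).
training_args = {
    "micro_batch_size": 4,
    "gradient_accumulation_steps": 1,
    "learning_rate": 1.5e-5,
    "optimizer": "adamw",
    "lr_scheduler_type": "cosine",
    "num_epochs": 2,
    "bf16": True,
    "tf32": True,
}

def effective_batch_size(args: dict, num_gpus: int = 1) -> int:
    """Samples consumed per optimizer step across all devices."""
    return args["micro_batch_size"] * args["gradient_accumulation_steps"] * num_gpus
```

With gradient accumulation of 1, the effective batch size per optimizer step equals the micro batch size times the number of GPUs.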

## 🔧 Under the Hood

- LoRA Configuration:
  - Rank (r): 32
  - Alpha: 64
  - Dropout: 0.1
- Sequence Length: 2048
- Gradient Checkpointing: Enabled
- Flash Attention: Enabled
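
The adapter settings above imply a LoRA scaling factor of alpha / r, the multiplier applied to the low-rank update before it is added to the base weights. A small sketch (key names mirror PEFT-style LoRA configs; this is a reconstruction, not the original config):

```python
# Adapter hyperparameters as described above (hypothetical reconstruction).
lora_settings = {
    "r": 32,           # rank of the low-rank update matrices
    "lora_alpha": 64,  # scaling numerator
    "lora_dropout": 0.1,
}

def lora_scaling(settings: dict) -> float:
    """LoRA updates are scaled by alpha / r before merging with base weights."""
    return settings["lora_alpha"] / settings["r"]
```

Here alpha = 2r, so the adapter contribution is scaled by a factor of 2, a common choice for QLoRA-style fine-tunes.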

## 🎬 License & Credits

- Licensed under MIT
- Based on mistralai/Mistral-Large-Instruct-2411

Built with ❤️ for roleplayers, by roleplayers