---
language:
- en
pipeline_tag: text-generation
tags:
- miqu
- 70b model
- cat
- miqu cat
---

## Welcome to Miqu Cat: A 70B Miqu LoRA Fine-Tune

Introducing **Miqu Cat**, an advanced 70B model fine-tuned by Dr. Kal'tsit and then quantized for the ExLlamaV2 project, bringing the model down to an impressive 4.8 bits per weight (bpw). The quantization allows those with limited computational resources to explore the model's capabilities without compromise.

### Competitive Edge - *meow!*

Miqu Cat stands out in the arena of Miqu fine-tunes, consistently performing admirably in tests and comparisons. It's crafted to be less restrictive and more robust than its predecessors and variants, making it a versatile tool in AI-driven applications.

**48 GB of VRAM is required to load the model at 8192 context length** *(e.g., 2x3090, 1xA6000, 1xA100 80GB)*

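Below is a minimal loading sketch using the ExLlamaV2 Python API: `load_autosplit` spreads the layers across whatever GPUs are visible, so the same code covers any of the configurations above. The model path is a placeholder for wherever you store the quant, not part of this release.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Placeholder path: point this at your local copy of the 4.8bpw EXL2 quant.
config = ExLlamaV2Config()
config.model_dir = "/models/miqu-cat-4.8bpw-exl2"
config.max_seq_len = 8192                 # the 8192-token context noted above
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # defer cache allocation until layers are placed
model.load_autosplit(cache)               # split the weights across all visible GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
generator.warmup()
```
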
### How to Use Miqu Cat: The Nitty-Gritty

Miqu Cat operates on the **ChatML** prompt format, designed for straightforward and effective interaction. Whether you're integrating it into existing systems or using it for new projects, its flexible prompt structure makes it easy to work with.

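For reference, a ChatML prompt wraps each turn in `<|im_start|>` and `<|im_end|>` markers; the system and user messages below are placeholders:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Write a haiku about cats.<|im_end|>
<|im_start|>assistant
```

With the generator from the loading sketch above, one way to run such a prompt is `generator.generate_simple(prompt, ExLlamaV2Sampler.Settings(), 256)`.
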
### Training Specs

- **Dataset**: 1.5 GB
- **Compute**: two 8xA100 nodes

### Meet the Author

**Dr. Kal'tsit** has been at the forefront of this fine-tuning process, ensuring that Miqu Cat gives the user a unique feel.