---
license: apache-2.0
datasets:
- Mielikki/Erebus-87k
- FourOhFour/Instruct_Phase
- FourOhFour/RP_Phase
- anthracite-core/full-opus-chosen-hermes-rejected-kto-v1
language:
- en
base_model:
- IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml
---
## Aura-MoE-2x4B
![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/LpCTIR45g099eXDIwYmKa.png)
## Introduction
**Aura-MoE-2x4B** is a state-of-the-art dedicated roleplaying model designed to fulfill your every desire.
The finetunes used in this merge were trained on several hundred million tokens of completion, instruction, and roleplaying data. Kahneman-Tversky Optimization (KTO) was then applied both to heal the merge and to give the model a unique output style.
This model has been superseded by [Aura-MoE-2x4B-v2](https://huggingface.co/AuraIndustries/Aura-MoE-2x4B-v2), which is a direct improvement.
Developed by **Aura Industries**, with contributions from **Anthracite Org**.
## Model Details
- **Model Name**: Aura-MoE-2x4B
- **Base Model**: [IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml](https://huggingface.co/IntervitensInc/Llama-3.1-Minitron-4B-Width-Base-chatml)
- **Model Type**: Chat Completions
- **Prompt Format**: ChatML (see the usage sketch after this list)
- **License**: Apache-2.0
- **Language**: English
- **Max Context**: 8,192+ tokens
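Since the model expects ChatML, the simplest way to format prompts is through the tokenizer's chat template. A minimal sketch with `transformers`, assuming the repo id `AuraIndustries/Aura-MoE-2x4B` and that the tokenizer ships a ChatML chat template (as the base model's `-chatml` suffix suggests); message contents and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id assumed from this card's naming; adjust to your local path if needed.
model_id = "AuraIndustries/Aura-MoE-2x4B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a creative roleplaying partner."},
    {"role": "user", "content": "Describe the tavern as I step inside."},
]

# apply_chat_template renders the ChatML wrapper:
#   <|im_start|>role\n...<|im_end|>\n
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```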
## License
This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
## Quantizations
Due to this model's non-standard MoE architecture, only static GGUF quantizations are available.
[Static GGUF](https://huggingface.co/mradermacher/Aura-MoE-2x4B-GGUF)
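For the GGUF build, a minimal sketch using `llama-cpp-python` (the quant filename pattern below is an assumption; check the linked repo for the exact files):

```python
from llama_cpp import Llama

# Filename is a glob over the quant repo; verify the exact file names there.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Aura-MoE-2x4B-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical choice of quant level
    n_ctx=8192,               # matches the card's stated max context
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative roleplaying partner."},
        {"role": "user", "content": "Set the opening scene."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])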
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Coming soon...
| Metric |Value|
|-------------------|----:|
|Avg. | N/A|
|IFEval (0-Shot) | N/A|
|BBH (3-Shot) | N/A|
|MATH Lvl 5 (4-Shot)| N/A|
|GPQA (0-shot) | N/A|
|MuSR (0-shot) | N/A|
|MMLU-PRO (5-shot) | N/A|
## Training Configuration
<details><summary>Click here for Mergekit and Axolotl configs</summary>
MoE Merge
```yaml
base_model: FourOhFour/Crispy_Crab_4B
gate_mode: hidden
dtype: bfloat16
experts_per_token: 1
experts:
  - source_model: FourOhFour/Crispy_Crab_4B
    positive_prompts:
      - "Roleplaying partner"
  - source_model: FourOhFour/Zenith_4B
    positive_prompts:
      - "Instruction following assistant"
```
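Note that `experts_per_token: 1` routes each token to exactly one 4B expert, so inference cost stays close to a single dense 4B model while the merge preserves both specializations, and `gate_mode: hidden` initializes the router from hidden-state representations of the positive prompts. A config like this is normally passed to mergekit's `mergekit-moe` entry point (e.g. `mergekit-moe config.yaml ./output`; shown here as general mergekit usage, not a command taken from this card).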
KTO
```yaml
base_model: jeiku/2x4Bmoe
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
hub_model_id: jeiku/moekto
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
chat_template: chatml
rl: kto
rl_beta: 0.2
kto_desirable_weight: 0.2
datasets:
  - path: anthracite-core/full-opus-chosen-hermes-rejected-kto-v1
    type: chatml.argilla
shuffle_merged_datasets: true
val_set_size: 0.0
output_dir: ./outputs/out
sequence_len: 8192
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: false
wandb_project: moekto
wandb_entity:
wandb_watch:
wandb_name: moekto
wandb_log_model:
gradient_accumulation_steps: 16
micro_batch_size: 2
num_epochs: 2
max_steps: 500
optimizer: adamw_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
remove_unused_columns: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 2
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 1
debug:
deepspeed:
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|finetune_right_pad_id|>
```
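For reference, the config above gives an effective batch size of 32 sequences per device (`micro_batch_size: 2` × `gradient_accumulation_steps: 16`), and `max_steps: 500` caps the run, taking precedence over `num_epochs` when both are set.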
</details><br>