QuantFactory/MN-12B-Lyra-v3-GGUF
This is a quantized version of Sao10K/MN-12B-Lyra-v3, created using llama.cpp.
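As a quick start, here is a minimal sketch of loading one of these quants with llama-cpp-python. The quant filename glob is an assumption, so check the repository's file list for the levels actually published.

```python
# Minimal sketch: load a quant from this repo via llama-cpp-python.
# Requires huggingface_hub to be installed for from_pretrained().
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/MN-12B-Lyra-v3-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level -- check the repo's file list
    n_ctx=8192,               # context size; adjust to your hardware
)
```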
Original Model Card
NEW V4!
Fixes issues some users were having.
Automatic Approval
If you agree, please place future merges / derivatives under the cc-by-nc-4.0 license. Thank you.
Mistral-NeMo-12B-Lyra-v3 is built on top of Lyra-v2a2, which itself was built upon Lyra-v2a1.
Model Versioning
```
Lyra-v1 [Merge of Custom Roleplay & Instruct Trains, on Different Formats]
  |
  | [Additional SFT on 10% of Previous Data, Mixed]
  v
Lyra-v2a1
  |
  | [Low Rank SFT Step + Tokenizer Diddling]
  v
Lyra-v2a2
  |
  | [RL Step Performed on Multiturn Sets, Magpie-style Responses by Lyra-v2a2 for Rejected Data]
  v
Lyra-v3
```
This uses a custom ChatML-style prompting format! What can go wrong?
```
[INST]system
This is the system prompt.[/INST]
[INST]user
Instructions placed here.[/INST]
[INST]assistant
The model's response will be here.[/INST]
```
Why this? I had used the wrong configs by accident. The format was meant for an 8B pruned NeMo train; instead, it went into this model. Oops.
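For illustration, a small Python sketch of assembling this template. The helper and message structure are my own, not part of the model's tooling, and the newline placement between turns is an assumption.

```python
# Illustrative builder for the custom [INST]-wrapped ChatML-style format above.
def build_lyra_prompt(messages):
    """messages: list of {"role": "system"|"user"|"assistant", "content": str}."""
    parts = [f"[INST]{m['role']}\n{m['content']}[/INST]" for m in messages]
    parts.append("[INST]assistant\n")  # open a turn for the model to complete
    return "\n".join(parts)

prompt = build_lyra_prompt([
    {"role": "system", "content": "This is the system prompt."},
    {"role": "user", "content": "Instructions placed here."},
])
```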
Recommended Samplers:
- Temperature: 0.7 - 1.2
- min_p: 0.1 - 0.2 (crucial for NeMo)
Recommended Stopping Strings:
- <|im_end|>
- </s>
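Putting the sampler and stop-string recommendations together, a hedged generation call with llama-cpp-python; the llm and prompt objects come from the sketches above, and the exact values within the recommended ranges are a matter of taste.

```python
# Generation with the card's recommended samplers and stop strings.
output = llm(
    prompt,
    temperature=0.9,              # within the recommended 0.7 - 1.2 range
    min_p=0.1,                    # the card calls min_p crucial for NeMo
    stop=["<|im_end|>", "</s>"],  # recommended stopping strings
    max_tokens=512,
)
print(output["choices"][0]["text"])
```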
Blame the messed-up training configs, oops.
Training Metrics:
- Trained on 4xH100 SXM for 6 Hours.
- Trained for 2 Epochs.
- Effective Global Batch Size: 128.
- Dataset Used: A custom, cleaned mix of Stheno-v3.4's dataset, focused mainly on multiturn data.
Extras
Image Source: AI-Generated with FLUX.1 Dev.
Have a nice day.