---
license: cc-by-nc-4.0
language:
- en
---
|
GGUF: https://huggingface.co/Sao10K/Franziska-Maxtral-8x22B-v1-GGUF |
|
|
|
An experiment. A LoRA finetune of Maxtral (Mixtral 8x22B) that I trained myself, then merged at a low weight with another model — I think it was Tess — via SLERP or something similar.
|
|
|
The raw adapter output was slopped despite being trained on the base model, which is why I added Tess; it helped somewhat.
|
|
|
So yes, it is a merge, but at the same time part of the model is my own work.
|
|
|
I'm kinda meh on it, but I'm leaving it out here anyway.
|
|
|
It loves to yap, and has a slight positivity bias and some GPT-isms. Kinda expected. It's not special or unique, just another model out there.
|
|
|
Use Alpaca, Vicuna, or \[INST] blocks — whatever works for you, I don't mind.
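If you want a concrete starting point, here is a minimal sketch of an Alpaca-style prompt builder. The helper name and the system line are my own illustration (the system line is the common Alpaca default), not anything shipped with this model — any of the formats above should work:

```python
def build_alpaca_prompt(instruction: str, user_input: str = "") -> str:
    """Assemble a prompt in the common Alpaca format.

    Illustrative helper only; this model accepts several prompt
    formats, so treat this as one option among the ones listed above.
    """
    # Standard Alpaca system preamble.
    header = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
    )
    prompt = header + f"### Instruction:\n{instruction}\n\n"
    # The "### Input:" section is optional in the Alpaca format.
    if user_input:
        prompt += f"### Input:\n{user_input}\n\n"
    # The model generates its reply after this final header.
    prompt += "### Response:\n"
    return prompt

print(build_alpaca_prompt("Write a short greeting."))
```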