# Arconte-13B

Arconte is a Llama-2 merge. Arconte has gone through many iterations, trying different recipes, models, and merge methods; in particular, iterations I and Z both showed promise. This version of Arconte is iteration I redone with a more experimental approach to the merge recipe, and it shows great results.

Originally, the idea was one of those fancy setups: DARE-TIES model A, DARE-TIES model B, then SLERP models A and B together into model C. I already did the SLERP model C, but it's flawed because iterations I and Z were flawed. I still plan to do model C, so next I am remaking iteration Z.

Models used:
- [NeverSleep/X-NoroChronos-13B](https://huggingface.co/NeverSleep/X-NoroChronos-13B)
- [Undi95/Emerhyst-13B](https://huggingface.co/Undi95/Emerhyst-13B)
- [Henk717/echidna-tiefigther-25](https://huggingface.co/Henk717/echidna-tiefigther-25)

After completing model C, the roadmap is to either move on to Mistral merges or try my hand at making LoRAs/QLoRAs. No Mixtral, and nothing above 13B parameters, due to hardware limitations.

All testing was done with the Q5_K_M GGUF quant. I'll upload the full GGUF quant range along with an imatrix version soon.
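
For readers who want to reproduce a merge like this, the DARE-TIES step described above can be sketched as a mergekit config. This is an illustrative sketch only, not the actual recipe used for Arconte: the base model choice, `density`, and `weight` values here are assumptions, and the real recipe is described as more experimental than this.

```yaml
# Hypothetical DARE-TIES merge of two of the listed models onto a Llama-2 base.
# density/weight values and the base model are placeholders, not Arconte's recipe.
merge_method: dare_ties
base_model: TheBloke/Llama-2-13B-fp16
models:
  - model: NeverSleep/X-NoroChronos-13B
    parameters:
      density: 0.5   # fraction of delta weights kept after random pruning
      weight: 0.5    # contribution of this model's deltas to the merge
  - model: Undi95/Emerhyst-13B
    parameters:
      density: 0.5
      weight: 0.5
dtype: float16
```

With mergekit installed, a config like this would be run with `mergekit-yaml config.yml ./output-model`; the SLERP step toward model C would then be a second config using `merge_method: slerp` over the two DARE-TIES outputs.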