Together MoA is a really interesting approach built on open-source models!
"We introduce Mixture of Agents (MoA), an approach to harness the collective strengths of multiple LLMs to improve state-of-the-art quality. And we provide a reference implementation, Together MoA, which leverages several open-source LLM agents to achieve a score of 65.1% on AlpacaEval 2.0, surpassing prior leader GPT-4o (57.5%)."
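The core idea is simple: several "proposer" models each draft an answer, and an "aggregator" model synthesizes the drafts into one final response, optionally over multiple layers. Here is a minimal sketch of that data flow, with stand-in functions instead of real LLM calls (the function names and layering prompt format are illustrative assumptions, not the Together API):

```python
# Hypothetical stand-ins for proposer LLMs -- a real MoA setup would call
# several different open-source models here.
def proposer_a(prompt):
    return f"Draft A on: {prompt}"

def proposer_b(prompt):
    return f"Draft B on: {prompt}"

def aggregator(prompt, drafts):
    # A real aggregator is itself an LLM prompted with all drafts;
    # here we just concatenate them to show the data flow.
    joined = "\n".join(f"- {d}" for d in drafts)
    return f"Synthesized answer to '{prompt}':\n{joined}"

def mixture_of_agents(prompt, proposers, aggregate, layers=1):
    # Layer 1: every proposer answers the raw prompt.
    drafts = [p(prompt) for p in proposers]
    # Later layers: proposers see the previous round's drafts as context.
    for _ in range(layers - 1):
        context = f"{prompt}\nPrevious drafts:\n" + "\n".join(drafts)
        drafts = [p(context) for p in proposers]
    # Final step: the aggregator merges all drafts into one response.
    return aggregate(prompt, drafts)

answer = mixture_of_agents("Explain MoA", [proposer_a, proposer_b], aggregator)
print(answer)
```

The multi-layer loop mirrors the paper's layered design, where each round of proposals conditions on the previous round's outputs before the final aggregation.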
Congrats to @alvdansen for one of the nicest SD LoRAs ever. It's so sharp and beautiful! Check the model page to try it on your own prompts: alvdansen/BandW-Manga. And follow @alvdansen for more!