---
datasets:
- togethercomputer/RedPajama-Data-1T-Sample
tags:
- llama2
- llama
---

A second model merge by [chargoddard](https://huggingface.co/chargoddard). A GGML conversion of the previous merge can be found [here](https://huggingface.co/IHaveNoClueAndIMustPost/Llama-2-22B-GGML).
I have no idea what I'm doing, so if something doesn't work as it should, or at all, that's likely on me, not the models themselves.

The description below is copied from the [original repo](https://huggingface.co/chargoddard/llama2-22b-blocktriangular):

> Similar to llama2-22b, but with BLOCK_DIAGONAL=false in the merge and twice the fine-tuning tokens. Again, not intended for direct use - meant as a base for further tuning and merging.
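
If you want to poke at it anyway, here's a minimal sketch of loading it with 🤗 Transformers as a starting point for further tuning or merging. The repo id is a placeholder (substitute the actual model path), and the dtype/device settings are just illustrative assumptions:

```python
# Minimal sketch: load the merged model and run a quick generation.
# Requires: transformers, torch, accelerate (for device_map="auto").
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IHaveNoClueAndIMustPost/llama2-22b-blocktriangular"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision; a 22B model still needs ~44 GB of memory in fp16
    device_map="auto",          # shard across available GPUs/CPU via accelerate
)

prompt = "The quick brown fox"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Bear in mind the note above: this is meant as a base model, so raw generations may be rough until it's fine-tuned.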