---
license: llama2
---

This is a merge of LongAlpaca-70B-lora into Xwin-LM's Xwin-LM-70B-V0.1, replacing the embed and norm layers as described in the LongLoRA repo, and removing the extra row and pad token so that the vocabularies match.

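For illustration, a rough sketch of that procedure with `transformers` and `peft` might look like the following. The adapter repo id, the embed/norm checkpoint filename, and the state-dict key names are assumptions, not the exact steps used for this merge:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the full-precision base model.
base = AutoModelForCausalLM.from_pretrained(
    "Xwin-LM/Xwin-LM-70B-V0.1", torch_dtype=torch.float16
)

# Apply the LongAlpaca LoRA and fold it into the base weights.
model = PeftModel.from_pretrained(base, "Yukang/LongAlpaca-70B-lora")
model = model.merge_and_unload()

# Load the embed/norm tensors that LongLoRA trains alongside the LoRA
# (filename and key names are illustrative; they depend on how the
# adapter checkpoint was saved).
trained = torch.load("longalpaca_embed_norm.bin", map_location="cpu")

# Drop the extra pad-token row so the vocabulary matches the base model,
# then copy the trained embed and norm weights over.
vocab = model.get_input_embeddings().weight.shape[0]  # 32000 for Llama-2
for key in ("model.embed_tokens.weight", "lm_head.weight"):
    if key in trained:
        trained[key] = trained[key][:vocab]
model.load_state_dict(trained, strict=False)

model.save_pretrained("merged-model", safe_serialization=True)
```
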
There is no additional fine-tuning. The resulting model does not appear to be broken; you can test whether it truly behaves like the original model with 32K context capability (use linear RoPE scaling with a factor of 8).

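For reference, a minimal sketch of loading the merge with that setting through `transformers` (the model path is a placeholder, not this repo's exact id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-merge"  # placeholder: point this at the merged model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    # Linear RoPE scaling, factor 8: stretches Llama-2's 4K base context to 32K.
    rope_scaling={"type": "linear", "factor": 8.0},
)
```
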
You could also try merging this with other LongLoRA-derived models (such as Aurelian).

See this discussion for how to create merges like these.