---
license: unknown
---

This is a merge of LongAlpaca-70B-lora into royallab's Aetheria-L2-70B, replacing the embed and norm layers as described in the LongLoRA repo, and removing the extra row and pad token so that the vocabularies match.
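As a toy illustration of the steps above (replacing embed/norm weights and trimming the extra pad-token row), here is a minimal sketch on placeholder data; the sizes and weight values are invented stand-ins, not the actual 70B checkpoints:

```python
# Toy illustration of the merge steps. A real Llama-2-70B embedding is
# 32000 x 8192; these sizes are placeholders.
VOCAB, DIM = 4, 3

# Base model weights (stand-ins for the Aetheria-L2-70B checkpoint).
base_embed = [[0.0] * DIM for _ in range(VOCAB)]
base_norm = [1.0] * DIM

# LongLoRA-trained weights carry one extra embedding row (a pad token).
longlora_embed = [[1.0] * DIM for _ in range(VOCAB + 1)]
longlora_norm = [2.0] * DIM

# Replace the base embed/norm with the LongLoRA versions, dropping the
# extra pad-token row so the vocabulary sizes match again.
merged_embed = [row[:] for row in longlora_embed[:VOCAB]]
merged_norm = longlora_norm[:]

assert len(merged_embed) == VOCAB and len(merged_embed[0]) == DIM
```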

There is no additional fine-tuning. The resulting model does not appear to be broken, but you are encouraged to test whether it truly retains the original model's behavior plus 32K context capability (use linear rope scaling 8).
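A minimal sketch of what "linear rope scaling 8" means: position indices are divided by the scaling factor, so a 32K-token sequence maps into the position range the base model was pretrained on (the 4096 base context is an assumption about the underlying Llama-2 architecture):

```python
def scaled_positions(seq_len, factor=8.0):
    # Linear RoPE scaling divides each position index by the factor so
    # that long sequences stay within the positions seen in pretraining.
    return [i / factor for i in range(seq_len)]

BASE_CONTEXT = 4096               # assumed Llama-2 pretraining context
FACTOR = 8
EXTENDED = BASE_CONTEXT * FACTOR  # 32768 tokens, i.e. "32K capability"

positions = scaled_positions(EXTENDED, FACTOR)
# The largest scaled position stays below the original context length.
assert max(positions) < BASE_CONTEXT
```

In recent versions of the `transformers` library, this corresponds to loading the model with `rope_scaling={"type": "linear", "factor": 8.0}`.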

You could also try merging this with other models descended from LongLoRA (such as Aurelian).