grimulkan committed on
Commit
3f42f88
1 Parent(s): a1bf352

Update README.md

Files changed (1): README.md +6 -0
README.md CHANGED
@@ -1,3 +1,9 @@
 ---
 license: unknown
 ---
+
+ This is a merge of [LongAlpaca-70B-lora](https://huggingface.co/Yukang/LongAlpaca-70B-lora) into [Aetheria-L2-70B](https://huggingface.co/royallab/Aetheria-L2-70B), replacing the embed and norm layers as described in the [LongLoRA repo](https://github.com/dvlab-research/LongLoRA), and removing the extra row and pad token so that the vocabularies match.
+
+ There is no additional fine-tuning. The resulting model does not appear to be broken; you can test whether it truly retains the original model's behavior plus 32K context capability (use linear RoPE scaling with a factor of 8).
+
+ You could also try merging this with other LongLoRA-descended models (such as [Aurelian](https://huggingface.co/grimulkan/aurelian-v0.5-70b-rope8-32K-fp16)).