Update README.md
README.md CHANGED

@@ -2,7 +2,7 @@
 license: llama2
 ---
 
-This is an interleaved merge of [Xwin-longLORA-70b-rope8-32k-fp16](https://huggingface.co/grimulkan/Xwin-longLORA-70b-rope8-32k-fp16) and [Euryale-1.3-longLORA-70b-rope8-32k-fp16](https://huggingface.co/grimulkan/Euryale-1.3-longLORA-70b-rope8-32k-fp16),
+This is an interleaved merge of [Xwin-longLORA-70b-rope8-32k-fp16](https://huggingface.co/grimulkan/Xwin-longLORA-70b-rope8-32k-fp16) and [Euryale-1.3-longLORA-70b-rope8-32k-fp16](https://huggingface.co/grimulkan/Euryale-1.3-longLORA-70b-rope8-32k-fp16), using the same merge formula as alpindale's [goliath-120b](https://huggingface.co/alpindale/goliath-120b).
 
 There is no additional fine-tuning. The resulting model seems not to be broken... you can test whether it is truly the original model plus 32K context capability (use a linear rope scaling factor of 8).
 
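For readers curious what an "interleaved merge" looks like in practice, here is a minimal sketch of the kind of passthrough slice config used for this style of merge. Both the use of mergekit and the layer ranges below are assumptions for illustration, not goliath-120b's actual recipe; consult the goliath-120b model card for the real formula.

```python
# Illustrative sketch of an interleaved "passthrough" merge config for mergekit.
# The layer ranges are invented placeholders, NOT the goliath-120b recipe.
import yaml  # pip install pyyaml

XWIN = "grimulkan/Xwin-longLORA-70b-rope8-32k-fp16"
EURYALE = "grimulkan/Euryale-1.3-longLORA-70b-rope8-32k-fp16"

config = {
    "merge_method": "passthrough",  # stack slices verbatim, no weight averaging
    "dtype": "float16",
    "slices": [
        # Alternate overlapping blocks of transformer layers from each donor.
        {"sources": [{"model": XWIN, "layer_range": [0, 16]}]},
        {"sources": [{"model": EURYALE, "layer_range": [8, 24]}]},
        {"sources": [{"model": XWIN, "layer_range": [17, 32]}]},
        # ... further alternating slices, continuing up the donors' layer stack
    ],
}

with open("interleave.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# The merged model would then be built with mergekit's CLI:
#   mergekit-yaml interleave.yml ./merged-model
```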
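And a minimal sketch of loading the merged model with the linear rope scaling of 8 that the README calls for, assuming a transformers version whose Llama config accepts a `rope_scaling` override; the model id is a placeholder for this repo's id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-merged-model"  # placeholder: substitute this repo's id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
    # Linear RoPE scaling by 8x: 4096 native context -> ~32K tokens.
    rope_scaling={"type": "linear", "factor": 8.0},
)

prompt = "Summarize the plot of Hamlet in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```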