Update README.md
README.md CHANGED
@@ -18,8 +18,8 @@ A demo that runs in free Google Colab can be run here: https://bit.ly/3K1P4PQ ju
The [EleutherAI/gpt-j-6B](https://hf.co/EleutherAI/gpt-j-6B) model finetuned on the [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) instruction dataset with [low rank adaptation](https://arxiv.org/abs/2106.09685). This is not a model from Eleuther but a personal project.

-Don't knock LoRA, all it is is finetuning how the internal representations should change (simplified, the residual of the weights) instead of finetuning just the internal representations! All the previous weights are in tact meaning LoRA tuning makes the model less likely to forget what it was trained on, and also less likely to push the model into mode collapse. Check table 2 of the LoRA paper and you can see that LoRA
+Don't knock LoRA: all it does is finetune how the internal representations should change (simplified, the residual of the weights) instead of finetuning the internal representations themselves! All the previous weights stay intact, meaning LoRA tuning makes the model less likely to forget what it was trained on and less likely to fall into mode collapse. Check Table 2 of the LoRA paper and you can see that LoRA often outperforms full finetuning as well.

## Use:

```python
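
For anyone wondering what "finetuning the residual of the weights" means concretely, here is a minimal sketch of the idea in plain PyTorch. This is illustrative only, not the training code behind this adapter: the `LoRALinear` class, the rank, the scaling factor, and the 4096 hidden size (GPT-J's hidden dimension) are assumptions made for the example.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank residual: y = Wx + (B A)x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay intact
        # Only these two small matrices are trained.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the learned update to the weights.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale


# Example: wrap a 4096x4096 projection (GPT-J's hidden size).
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,} parameters")  # 65,536 of ~16.8M
```

Because `lora_B` starts at zero, the wrapped layer initially computes exactly what the frozen layer did; training only moves the small low-rank update, which is why the original behaviour is preserved and forgetting is less of a risk.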