Update README.md
README.md
@@ -97,8 +97,8 @@ whereas the general (non-orthogonal) "multiplicative-LoRA" method can do this by
 
 In general, the way to think about these (non-orthogonal) "multiplicative-LoRAs" is as a kind of "conditional control-vector":
 
-- Each vector in `lora_A` looks for a certain dirrection, and via the dot-product it generates a (signed) weighting factor that measures the similarity between the output of the `down_proj` transformation
-- Each corresponding vector in `lora_B` then gets added to the hidden state / residual stream
+- Each vector in `lora_A` looks for a certain direction, and via the dot-product it generates a (signed) weighting factor that measures the similarity between the output of the `down_proj` transformation and the specific vector in `lora_A`.
+- Each corresponding vector in `lora_B` then gets added to the hidden state / residual stream, scaled by the corresponding (signed) weighting factor.
 
 So instead of having just a single vector that we add (in essence we add a bias term and create an [affine transformation](https://en.wikipedia.org/wiki/Affine_transformation)), we now have many different control vectors that can be added (stored in `lora_B`), based on how well they match another set of "directional detection vectors" (stored in `lora_A`).
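To make the mechanism concrete, here is a minimal NumPy sketch of the "conditional control-vector" reading described above. The function name, shapes, and rank are illustrative assumptions rather than code from this repository: `lora_A` is taken as an `(r, d)` matrix whose rows are the directional detection vectors, and `lora_B` as a `(d, r)` matrix whose columns are the control vectors added back to the residual stream.

```python
import numpy as np

def conditional_control_vectors(h, lora_A, lora_B):
    """Sketch of the (non-orthogonal) multiplicative-LoRA update.

    h      : (d,)   output of the down_proj transformation
    lora_A : (r, d) rows are the "directional detection vectors"
    lora_B : (d, r) columns are the control vectors
    """
    # Signed weighting factors: dot-product similarity between h and
    # each detection vector in lora_A.
    weights = lora_A @ h          # shape (r,)
    # Add each control vector in lora_B, scaled by its weighting factor.
    return h + lora_B @ weights   # equivalent to (I + lora_B @ lora_A) @ h

# Toy example with hypothetical sizes (hidden dim d = 8, rank r = 2).
rng = np.random.default_rng(0)
d, r = 8, 2
h = rng.normal(size=d)
out = conditional_control_vectors(h, rng.normal(size=(r, d)), rng.normal(size=(d, r)))
```

With rank `r = 1` and a fixed weight this collapses to adding a single bias vector (the affine case above); with `r > 1`, which direction gets added, and how strongly, depends on what `down_proj` actually produced.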