Comments (3)
Yes. It's important that we have either A or B set to zero so the first forward pass doesn't deviate from the original model. It's also important that A and B are not both zeros because the gradient would be forever zero in that case. So either set A to zero and B to something else, or vice versa.

> Setting A to zeros we're essentially discarding all the knowledge encoded in the pretrained weights

Could you explain why?

> If we set B to zeros and initialize A with a Kaiming uniform distribution, or a normal distribution, we might still be stuck in a local minimum if A is filled with zeros?

We will get stuck if both A and B are zeros, but not if one of them is not all zeros.
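To make the initialization Edward describes concrete, here is a minimal sketch, not the repo's actual layers.py: the class name ToyLoRALinear, the shapes, and the rank are made up for illustration. A gets a random init, B starts at zero, so the update B @ A is zero and the first forward pass matches the frozen pretrained layer exactly.

import torch
import torch.nn as nn

class ToyLoRALinear(nn.Module):
    """Illustrative only: a frozen pretrained weight plus a rank-r update B @ A."""
    def __init__(self, in_features, out_features, r=4):
        super().__init__()
        # Pretrained weight stays frozen during fine-tuning.
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        self.lora_A = nn.Parameter(torch.zeros(r, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        nn.init.kaiming_uniform_(self.lora_A)  # A: random init
        nn.init.zeros_(self.lora_B)            # B: zeros, so B @ A == 0 at the start

    def forward(self, x):
        return x @ self.weight.T + x @ (self.lora_B @ self.lora_A).T

layer = ToyLoRALinear(16, 8)
x = torch.randn(2, 16)
# The LoRA branch contributes nothing on the first forward pass:
assert torch.allclose(layer(x), x @ layer.weight.T)

Swapping the roles (A zeros, B random) preserves the same property, which is Edward's point that only one of the two needs to start at zero.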
I have the same question; however, see Edward's comment here.
To my understanding, Edward resets the parameters of the weight matrix the same way as PyTorch's Embedding.reset_parameters():
# the code taken from torch.nn.modules.sparse.Embedding
# (init here is torch.nn.init)
def reset_parameters(self) -> None:
    init.normal_(self.weight)
    self._fill_padding_idx_with_zero()
According to Edward's comment, it doesn't really matter which adaptation matrix is set to 0, as long as we don't set both of the matrices to zeros.
However, I have a couple of questions about resetting parameters in Linear and Embedding, @edwardjhu:
- Setting A to zeros, we're essentially discarding all the knowledge encoded in the pretrained weights; doesn't it defeat the purpose of leveraging transfer learning?
- If we set B to zeros and initialize A with a Kaiming uniform distribution, or a normal distribution, we might still be stuck in a local minimum if A is filled with zeros? Did those initialized values come from experimental results?
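A quick way to sanity-check the "as long as not both are zero" point above is to look at the gradients directly. The sketch below is illustrative only (made-up shapes and loss, not code from this repo): when exactly one of the matrices is zero, that matrix still receives a non-zero gradient through the other; when both are zero, both gradients are zero, so training can never move them away from zero.

import torch

def lora_grads(A_zero: bool, B_zero: bool, d=8, k=16, r=4):
    """Gradient norms of A and B after one backward pass through y = B @ A @ x."""
    torch.manual_seed(0)
    A = torch.zeros(r, k) if A_zero else torch.randn(r, k)
    B = torch.zeros(d, r) if B_zero else torch.randn(d, r)
    A.requires_grad_(); B.requires_grad_()
    x, target = torch.randn(k), torch.randn(d)
    loss = ((B @ A @ x) - target).pow(2).sum()  # any loss whose gradient is non-zero at y = 0
    loss.backward()
    return A.grad.norm().item(), B.grad.norm().item()

print(lora_grads(A_zero=True,  B_zero=False))  # A's gradient is non-zero: it flows through B
print(lora_grads(A_zero=False, B_zero=True))   # B's gradient is non-zero: it flows through A @ x
print(lora_grads(A_zero=True,  B_zero=True))   # both gradients are zero: training is stuck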
> Yes. It's important that we have either A or B set to zero so the first forward pass doesn't deviate from the original model. It's also important that A and B are not both zeros because the gradient would be forever zero in that case. So either set A to zero and B to something else, or vice versa.
Thanks for your quick response. It sounds to me that setting one of them to zeros is critical to "preserve" the pretrained weights for the first pass. However, if A is initialized with zeros, the result of B @ A would be zeros, and the output from the previous layer multiplied by B @ A would be zeros, too? Or did I misunderstand the implementation? (I admit I haven't run the code, so I didn't check whether that's the case at runtime.)
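On that last question: yes, with A (or B) initialized to zeros, B @ A is exactly zero, so the LoRA branch contributes nothing to the first forward pass; that is by design. It doesn't stay zero, though, because the all-zero matrix generally receives a non-zero gradient through the other, non-zero matrix. A rough illustration (made-up shapes, loss, and learning rate, not the repo's code):

import torch

torch.manual_seed(0)
d, k, r = 8, 16, 4
A = torch.zeros(r, k, requires_grad=True)   # A starts at zero ...
B = torch.randn(d, r, requires_grad=True)   # ... B does not (or the other way around)
x, target = torch.randn(k), torch.randn(d)

print((B @ A @ x).abs().max())  # 0.0 -> the LoRA branch changes nothing on the first pass

loss = ((B @ A @ x) - target).pow(2).sum()
loss.backward()
with torch.no_grad():
    A -= 0.1 * A.grad           # one plain SGD step; A is no longer all zeros

print((B @ A @ x).abs().max())  # > 0 -> the LoRA branch now contributes to the output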
Related Issues (20)
- Can't reproduce the results for GLUE and hyperparameter misalignment HOT 4
- Layers.py not being executed HOT 1
- Can not reproduce the result of Roberta-Base HOT 2
- how to improve the memory ability of lora fine tuning? HOT 1
- models are the same after loading lora parameters using peft library
- Is it necessary to add `model = model.merge_and_unload()` when training a new LoRA adapter?
- How to adjust LoRA into nn.ConvTranspose2d? HOT 2
- Cannot implement LoRA on a custom model containing transformer encoder from pytorch
- _conv_forward() error
- Dynamic Lora Selection In Runtime❓ HOT 1
- Reproduce Lora results is close but not accurate HOT 2
- Guidance Needed on Continuing Training with a New Dataset via LoRA
- After joining Lora, the first few layers show a gradient of 0
- lora-dim == lora-r ?
- LORA on T5 model
- [Question about multi-gpu training] HOT 2
- question for scale!
- Parameter count on GPT-2 medium
- Where is the LoRA matrices saved?
- Questions about running the cola dataset script HOT 1