Comments (4)
I think technically we can do both, with the original model weights in int4 + fp16/fp32 and the LoRA weights in fp32. The output dtype of a QuantLinear layer can match the dtype of its input, so inserting a LoRA layer after every QuantLinear layer won't be difficult.
I made an adapter for peft to support QuantLinear:
```python
import math

import torch
import torch.nn as nn

from quant import QuantLinear
# Import path may vary across peft versions:
from peft.tuners.lora import LoraLayer


class Linear4bitLt(QuantLinear, LoraLayer):
    # LoRA implemented on top of a 4-bit QuantLinear
    def __init__(
        self,
        in_features,
        out_features,
        r: int = 0,
        lora_alpha: int = 1,
        lora_dropout: float = 0.0,
        **kwargs,
    ):
        QuantLinear.__init__(self, 4, in_features, out_features)
        LoraLayer.__init__(self, r=r, lora_alpha=lora_alpha, lora_dropout=lora_dropout, merge_weights=False)

        # Actual trainable parameters
        if r > 0:
            self.lora_A = nn.Linear(in_features, r, bias=False)
            self.lora_B = nn.Linear(r, out_features, bias=False)
            self.scaling = self.lora_alpha / self.r
            # Freeze the pre-trained (quantized) weight buffers
            self.qweight.requires_grad = False
            self.scales.requires_grad = False
            self.zeros.requires_grad = False
            if self.bias is not None:
                self.bias.requires_grad = False
        self.reset_parameters()

    def reset_parameters(self):
        if hasattr(self, "lora_A"):
            # Initialize A the same way as the default for nn.Linear, and B to zero
            nn.init.kaiming_uniform_(self.lora_A.weight, a=math.sqrt(5))
            nn.init.zeros_(self.lora_B.weight)

    def forward(self, x: torch.Tensor):
        result = super().forward(x)
        if self.disable_adapters or self.r == 0:
            return result
        if not torch.is_autocast_enabled():
            expected_dtype = result.dtype
            if x.dtype != torch.float32:
                x = x.float()
            result += self.lora_B(self.lora_A(self.lora_dropout(x))).to(expected_dtype) * self.scaling
        else:
            result += self.lora_B(self.lora_A(self.lora_dropout(x))) * self.scaling
        return result
```
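A quick sanity check of the initialization scheme above: because `lora_B` is zero-initialized, the adapter branch contributes nothing at step 0, so the wrapped layer initially behaves exactly like the frozen quantized layer. A minimal, self-contained sketch (plain `nn.Linear` layers standing in for the LoRA matrices):

```python
import math

import torch
import torch.nn as nn

# Hypothetical shapes, chosen only for illustration
r, in_features, out_features = 8, 16, 32
lora_A = nn.Linear(in_features, r, bias=False)
lora_B = nn.Linear(r, out_features, bias=False)

# Same initialization as in reset_parameters() above
nn.init.kaiming_uniform_(lora_A.weight, a=math.sqrt(5))
nn.init.zeros_(lora_B.weight)

x = torch.randn(4, in_features)
delta = lora_B(lora_A(x))  # the LoRA residual added to the base output
# delta is exactly zero before any training step
```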
The only thing left to do is to add support for the backward pass (gradients) through the 4-bit matmul.
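One way this could be done (a sketch, not the actual GPTQ kernel) is a custom `torch.autograd.Function` that dequantizes the weight on the fly and propagates the gradient only to the activations, since the quantized weight stays frozen. The `qweight`/`scales`/`zeros` names mirror the buffers in the snippet above, but in a simplified per-output-channel layout; real packed int4 layouts differ.

```python
import torch

class Matmul4bit(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, qweight, scales, zeros):
        # Dequantize per output channel: w = (q - zero) * scale, shape (out, in)
        w = (qweight.float() - zeros) * scales
        ctx.save_for_backward(w)
        return x.matmul(w.t())

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        # The quantized weight is frozen: only the input needs a gradient
        return grad_output.matmul(w), None, None, None

# Toy usage with made-up quantization parameters
x = torch.randn(2, 4, requires_grad=True)
qweight = torch.randint(0, 16, (3, 4))
scales = torch.full((3, 1), 0.1)
zeros = torch.full((3, 1), 8.0)
y = Matmul4bit.apply(x, qweight, scales, zeros)
y.sum().backward()  # grads flow to x; the quantized weight stays frozen
```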
from alpaca-lora.
Most probably yes, you can merge the LoRA weights into the model and quantize that
Training in 4-bit or just 4-bit inference?
Training in 4-bit would be a game-changer.
from alpaca-lora.
Most probably yes, you can merge the LoRA weights into the model and quantize that
May I ask how to merge the LoRA?
from alpaca-lora.
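The merge mentioned above is just folding the low-rank update into the dense weight, W ← W + scaling · (B @ A), after which the model can be quantized with no adapter attached. A minimal sketch with a hypothetical `merge_lora` helper (in practice, peft's merge utilities do this for you):

```python
import torch
import torch.nn as nn

def merge_lora(linear: nn.Linear, lora_A: nn.Linear, lora_B: nn.Linear, scaling: float) -> None:
    """Fold the LoRA update into the dense weight: W <- W + scaling * (B @ A)."""
    with torch.no_grad():
        linear.weight += scaling * (lora_B.weight @ lora_A.weight)

# Toy example: the merged layer reproduces base + adapter exactly
base = nn.Linear(4, 3)
lora_A = nn.Linear(4, 2, bias=False)
lora_B = nn.Linear(2, 3, bias=False)

x = torch.randn(5, 4)
before = base(x) + 0.5 * lora_B(lora_A(x))  # base output plus adapter residual
merge_lora(base, lora_A, lora_B, scaling=0.5)
after = base(x)  # merged weights, no adapter needed at inference
```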