fine-tuning OpenCLIP with Hugging Face's PEFT (such as LoRA) #761

Open
KyanChen opened this issue Jul 28, 2023 · 63 comments

@KyanChen

Feature request

Fine-tuning OpenCLIP with Hugging Face's PEFT (such as LoRA).

Motivation

Fine-tuning OpenCLIP with Hugging Face's PEFT (such as LoRA).

Your contribution

Refer to https://github.com/KyanChen/MakeMultiHeadNaive/tree/master for help!

@BenjaminBossan
Member

Sorry, could you please provide more details? Are you looking for help with how to achieve that, or are you suggesting that it doesn't work right now?

@KyanChen
Author

Currently, Hugging Face's PEFT (such as LoRA) cannot fine-tune the linear layers of a torch.nn.MultiheadAttention-based transformer model (such as OpenCLIP). If I want to use LoRA, I have to replace the torch.nn.MultiheadAttention layer with a self-implemented naive multi-head attention layer. Can you help integrate support for this into the official PEFT library?
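
For context, here is a minimal sketch (not the linked repo's actual code) of what such a naive replacement might look like, built from separate nn.Linear projections so that LoRA can target them. The class name and the batch-first, self-attention-only layout are assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class NaiveMultiheadAttention(nn.Module):
    # assumption: batch_first inputs of shape (B, L, E), no attention masks
    def __init__(self, embed_dim, num_heads):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        # separate nn.Linear projections -- these are what LoRA can wrap
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.out_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, query, key, value):
        B, L, E = query.shape
        def split(x):
            # (B, L, E) -> (B, num_heads, L, head_dim)
            return x.view(B, -1, self.num_heads, self.head_dim).transpose(1, 2)
        q, k, v = split(self.q_proj(query)), split(self.k_proj(key)), split(self.v_proj(value))
        attn = F.scaled_dot_product_attention(q, k, v)
        attn = attn.transpose(1, 2).reshape(B, L, E)
        # return an (output, attn_weights) tuple like nn.MultiheadAttention does
        return self.out_proj(attn), None

With a module like this in place of torch.nn.MultiheadAttention, LoRA can then be applied with target_modules=["q_proj", "k_proj", "v_proj", "out_proj"].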

@BenjaminBossan
Member

I see, thanks for explaining. Indeed, right now it is impossible as a user to change what type of LoRA layer is being used. We have ideas about exposing a "low level" API that would allow users more fine-grained control, including the possibility of using custom layers, as you suggest. I cannot say yet whether it will really work out or when it will be ready, but I'll let you know.

@duchenzhuang

Thanks for your efforts!

@github-actions

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

@ambroser53

I'd like to bump this: being unable to put LoRA weights on anything that uses nn.MultiheadAttention is a real pain, and using a naive implementation is clunky and cumbersome. It seems strange that LoRA-Torch can do it but PEFT can't.

@BenjaminBossan
Member

BenjaminBossan commented Jan 5, 2024

Hey, I created a PR to add MHA: #1324. The implementation was a bit tricky because this layer is not very "friendly" for LoRA-adaptation, but I think I got it working.

For now, this is just a rough draft, so it would be great if you could test it and tell me if it works for your use case. To install from this branch, run:

python -m pip install git+https://github.com/BenjaminBossan/peft.git@feat-add-lora-multihead-attention

So far, I did the following testing:

import torch
from torch import nn
import open_clip
from peft import LoraConfig, get_peft_model
from PIL import Image
import requests

model, preprocess = open_clip.create_model_from_pretrained('hf-hub:laion/CLIP-ViT-g-14-laion2B-s12B-b42K')
tokenizer = open_clip.get_tokenizer('hf-hub:laion/CLIP-ViT-g-14-laion2B-s12B-b42K')
# note: the LoraConfig was not shown in the original comment; targeting the MHA modules ("attn"), as discussed later in this thread
config = LoraConfig(r=16, lora_alpha=32, target_modules=["attn"])
peft_model = get_peft_model(model, config)
opt = torch.optim.SGD(peft_model.parameters(), 0.1)

# text encoder
text = tokenizer(["a diagram", "a dog", "a cat"])
text_features = peft_model.encode_text(text)
loss = text_features.sum()
loss.backward()
opt.step()

# image encoder
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
image = preprocess(image).unsqueeze(0)
image_features = model.encode_image(image)
image_features.sum().backward()
opt.step()

@BenjaminBossan
Member

@ambroser53 I think the linked LoRA-torch library has some bugs. For instance:

import torch, loratorch
import torch.nn as nn

model_torch = loratorch.Linear(5, 6, r=4, lora_alpha=1)
loratorch.mark_only_lora_as_trainable(model_torch)
print(model_torch.state_dict().keys())
# prints odict_keys(['weight', 'bias', 'w_lora_A', 'w_lora_B'])

optimizer_torch = torch.optim.SGD(model_torch.parameters(), lr=0.1)

for _ in range(3):
    model_torch.train()
    x = torch.rand(2, 5)

    loss2 = model_torch(x).sum()
    optimizer_torch.zero_grad()
    loss2.backward()
    optimizer_torch.step()

print(model_torch.state_dict().keys())
# odict_keys(['bias', 'w_lora_A', 'w_lora_B'])
# note the missing 'weight' key!

As you can see, the weight is dropped from the state_dict, making it impossible to save the model. Same is true for named_parameters(). So if you're using this package, you should be aware of this.
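
A quick way to spot this kind of issue (a generic check, not specific to LoRA-torch) is to compare the state_dict keys before and after training:

keys_before = set(model_torch.state_dict().keys())
# ... run the training loop from above ...
keys_after = set(model_torch.state_dict().keys())
print(keys_before - keys_after)  # with the bug described above, this prints {'weight'}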

@ambroser53

Hey @BenjaminBossan, cheers for the fork, I'll run some tests on Tuesday. I realised that LoRA-Torch was a bit buggy after I started trying to combine it with PEFT's LoraLayer, but if there's a way to do it without it, that'd be much better.

@BenjaminBossan
Member

@ambroser53 Did you have time to give it a try?

@ambroser53

Hi, sorry, I meant to get back to you sooner. It appears the LoRA layers are placed on the nn.MultiheadAttention blocks just fine in my model. My use case is very complicated though, as it's a custom architecture, so I will need to get back to you on how effective it is and whether the OpenCLIP fine-tuning is bottlenecked or underperforming in some way. Once I have these answers, I'll report back.

@BenjaminBossan
Member

Great, thanks for testing. Do you have an ETA for when these tests finish?

Regarding performance, I would expect a larger overhead than for simple LoRA layers like Linear because of the merging-unmerging roundtrip we have to take, but I'm not sure if it makes a difference in the grand scheme of things.
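
Schematically, the roundtrip looks something like this (a simplified sketch of the idea only, not PEFT's actual code; lora_A, lora_B and scaling stand for the usual LoRA factors):

# merge the low-rank update into the packed projection weight before the base forward
delta = (lora_B @ lora_A) * scaling
mha.in_proj_weight.data += delta
output, _ = mha(query, key, value)
# unmerge afterwards so the base weights stay unchanged
mha.in_proj_weight.data -= delta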

@ambroser53

Should get initial results early next week if there are no disasters.

Out of curiosity, is said overhead computational or memory?

@BenjaminBossan
Member

Should get initial results early next week if there are no disasters.

Thanks!

Out of curiosity, is said overhead computational or memory?

It should be computational only. However, since we take the same approach here as LoRA-torch, it shouldn't be better or worse than using that.

@ambroser53

I've dug deeper in my testing. Mine is a very specific case where LoRA weights are only placed on specific layers and the model uses mixed quantisation, so the placement needed further tinkering. However, now that I've made sure the LoRA layers go exactly where they need to, there's a logic error that seems to occur only some of the time. Essentially, say you have an nn.MultiheadAttention called attn; it will have the submodule attn.out_proj, which is an nn.Linear (or at least it should be, there's this weird NonDynamicQuantisable thing going on, but let's not get into that). If the target_modules of your LoraConfig point to both attn and attn.out_proj, and attn gets turned into a LoraLayer first, then when it tries to find attn.out_proj, it's now under attn.base_layer.out_proj.

It doesn't look like out_proj is taken into account by the merge and unmerge, which seem to deal only with in_proj_weight. The implementation of nn.MultiheadAttention doesn't actually call the forward of out_proj; it only passes its weight and bias tensors. I thought this could be fixed simply by forcing the LoraLayer onto attn.out_proj before attn, but I think that would create problems, because nn.MultiheadAttention never calls out_proj's forward, which would then neglect the LoRA weights entirely.

Could there be a simple fix that just does the same thing for out_proj.weight as is done for in_proj_weight?
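
To see the overlap, here is a small sketch that lists the modules such a target list would match (plain suffix matching for illustration, not PEFT's exact matching logic):

targets = ["attn", "attn.out_proj"]
for name, module in model.named_modules():
    if any(name == t or name.endswith("." + t) for t in targets):
        print(name, "->", type(module).__name__)
# prints pairs like <...>.attn -> MultiheadAttention and <...>.attn.out_proj -> Linear,
# so whichever module is wrapped first changes the path under which the other is found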

@BenjaminBossan
Member

Thanks a lot @ambroser53, your analysis is 100% correct. I pushed a new commit to the PR that now takes into account out_proj.

As is, we now apply LoRA to both in_proj and out_proj. There is currently no way to specify only in_proj or only out_proj. That wouldn't be easy to achieve; we would probably have to implement a new argument (or even several) on LoraConfig to allow it, which seems a bit overkill for this rather niche feature. My reasoning for applying LoRA to both instead of only in_proj is that recently the consensus seems to be converging towards applying LoRA to as many Linear layers as possible. LMK what you think.

I'll be out of office starting next week, so that PR may stall for a while unless one of the other maintainers has time to take over. Still, please try out this new PR and give us feedback if it works for you.

@ambroser53

No, that sounds perfect; I don't think having one or the other would make sense. I should be able to give it a go now and give results next week.

@BenjaminBossan
Member

I should be able to give it a go now and give results next week.

Nice. If you can give some early feedback today, I may still have time to react to it :)

@ambroser53

This may be a problem with my own complex setup, so it could be out of scope here, but does PEFT automatically cast parameters to int8 if the underlying model is loaded in int8? I'm asking because part of the model is in int8 while the rest is skipped via int8_quant_skip_modules. Now that out_proj is implemented, an error is thrown when calling get_peft_model, inside _restore_weights for lora.MultiheadAttention, because registering out_proj as "weight" seems to cast it to int8 when it was supposed to have been skipped and left as float16. Do you have any insights, or will mixed quantisation be something wholly unwieldy that I'm unlikely to find a quick fix for?

@BenjaminBossan
Member

Hmm, normally the weights should not be automatically cast to int8. If you have some way to reproduce this error, I could investigate.

Looking at this issue more generally, however, I think this implementation will not work correctly with quantized weights. As is, we merge the LoRA weights into the base weights. When the latter are quantized, this requires special treatment, similar to the bnb layers we have for LoRA; a normal merge would surely fail. So I think we would need a completely separate MHA class for quantized layers.

I'm not exactly sure what it is that you're doing with quantization, but as you've remarked earlier, the out_proj actually uses NonDynamicallyQuantizableLinear, which from my understanding exists to prevent some kind of error with quantization. I wonder if that could be related.

@ambroser53

I understand that, but the point is that the MHA isn't quantised at all. The confusing part is that the MHA and the out_proj nn.Linear are being passed to int8_quant_skip_modules. It should be okay for now; I'll train on two cards since it can't all fit on one. Hopefully I'll have some results soon.

@BenjaminBossan
Member

I understand that, but the point is that the MHA isn't quantised at all.

Ah I see, that is indeed very strange and should not happen.

The confusing part is that the MHA and the out_proj nn.Linear are being passed to int8_quant_skip_modules

Can you point me to a reference for int8_quant_skip_modules?

@ambroser53

Here's the code for the BitsAndBytesConfig configuration object where you can specify int8_quant_skip_modules, but there's no further documentation beyond what is in the initialisation comment. It does seem to be working, as prior to calling get_peft_model the correct modules are in the correct datatype.

I'll try to put together a code sample that reproduces this (the code I'm referring to right now is proprietary, for a company).

@ambroser53

One more potential bug. It seems that when using get_peft_model on a large model with an MHA inside, the internal parameters of the MHA (i.e. in_proj_weight and out_proj.weight) end up with requires_grad=True. It's actually really hard to force them back to False and I don't quite know why. I wonder whether it's because of the nested LoraLayers, or because something that ensures the base weights don't require gradients in other LoraLayers is missing here.

@ambroser53

It is very bizarre. The following code is from my script. attn_pool.attn is the (only) MHA:

model.base_model.model.model.vision_model.attn_pool.attn.base_layer.in_proj_weight.requires_grad = False
model.base_model.model.model.vision_model.attn_pool.attn.base_layer.out_proj.base_layer.weight.requires_grad = False

trainable_params = [name for name, param in model.named_parameters() if param.requires_grad]

print(model.base_model.model.model.vision_model.attn_pool.attn.base_layer.in_proj_weight.requires_grad)

This outputs True, and both in_proj_weight and out_proj.weight will be in trainable_params. It's almost like iterating through the module names causes them to be made trainable. This doesn't happen with any other parameters in the wrapped model, only these two that reside in the MHA.
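
A compact version of that check (a generic sketch): after wrapping, only parameters with "lora_" in their name should require gradients:

unexpected = [n for n, p in model.named_parameters()
              if p.requires_grad and "lora_" not in n]
print(unexpected)
# with the bug described above, this still lists the MHA's base
# in_proj_weight and out_proj weight parameters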

@BenjaminBossan
Member

BenjaminBossan commented Feb 7, 2024

Hi @ambroser53, I'm back in the office. Thanks a lot for figuring out this bug and providing a reproducer. I identified the issue and it should now be fixed. When running your example locally, I now get the correct gradients. Please take a look.

It's almost like iterating through the module names causes them to be made trainable.

This was indeed the case! The reason for this is explained here:

https://github.com/huggingface/peft/pull/1324/files#diff-24a141c266b7b714ae8fcc470f31bc283f7b0f5a671bbf6d5f092741fc374104R899-R903

Here's the code for the BitsAndBytesConfig configuration object

Sorry, did you mean to include a link here?

@sailfish009

sailfish009 commented Apr 29, 2024

@BenjaminBossan I tested this by applying the code in the link below.
https://velog.io/@blackeyes0u0/LoRA%EB%A5%BC-%ED%86%B5%ED%95%9C-PEFT

A runtime error occurs: (has no attribute 'weight')

This code works fine for similar CLIP models, like the one below:
https://dienhoa.github.io/dhblog/posts/finetune_clip.html

If you have working sample code (based on the official one), I'd be happy to test it.

## lora.py
# imports added so the snippet is self-contained
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRA_Config:
    def __init__(self, r, lora_alpha, lora_dropout, merge_weights, target_modules):
        self.r = r
        self.lora_alpha = lora_alpha
        self.lora_dropout = lora_dropout
        self.merge_weights = merge_weights
        self.target_modules = target_modules


class LoRALayer(nn.Module):
    def __init__(self, original_layer, config: LoRA_Config):
        super(LoRALayer, self).__init__()
        self.original_layer = original_layer
        input_dim = original_layer.weight.size(1)
        output_dim = original_layer.weight.size(0)

        # Initialize and then apply kaiming_uniform_
        lora_A_tensor = torch.empty(input_dim, config.r)
        torch.nn.init.kaiming_uniform_(lora_A_tensor)
        self.lora_A = nn.Parameter(lora_A_tensor)

        self.lora_B = nn.Parameter(torch.zeros(config.r, output_dim))
        self.scaling = config.lora_alpha/config.r
        if config.lora_dropout > 0:
            self.dropout = nn.Dropout(p=config.lora_dropout)
        else:
            self.dropout = lambda x: x  # No-op

    def forward(self, x):
        # Apply dropout before the matrix multiplication
        A_dropout = self.dropout(self.lora_A)
        B_dropout = self.dropout(self.lora_B)
        W_prime = self.original_layer.weight + self.scaling*A_dropout @ B_dropout
        return F.linear(x, W_prime, self.original_layer.bias)

    # The forward below failed:
    # def forward(self,x):
    #     delta_W = self.dropout(self.lora_B(self.lora_A(x)))
    #     W = self.original_layer(x)
    #     return self.scaling*delta_W + W

# Why the error occurred:
# self.lora_A and self.lora_B are defined as nn.Parameter objects, which are not directly
# callable. Instead, these parameters must be used in a matrix multiplication; in PyTorch this is done with torch.matmul or the @ operator. They have to be set up properly in __init__ from the start.
    
    def __repr__(self):
        return f'{self.__class__.__name__}(\n  (original_layer): {self.original_layer},\n  (lora_A): Parameter of size {self.lora_A.size()},\n  (lora_B): Parameter of size {self.lora_B.size()}\n)'
    
def print_trainable_parameters(model):
    trainable_params = 0
    all_param = 0
    #for param in model.parameters():
    for _, param in model.named_parameters():
        all_param += param.numel()
        if param.requires_grad: # if True, count it as a learnable parameter
            trainable_params += param.numel()
    print(
        f"trainable params: {trainable_params} || all params: {all_param} || trainable: {100 * trainable_params / all_param:.2f} %"
    )
    return trainable_params, all_param

# Helper function to apply LoRA layer by layer
def apply_lora_to_model(model, config):
    for name, module in model.named_modules():
        hierarchy = name.split('.')
        if len(hierarchy) > 1:  # Ensure the module is not the top-level module
            parent_module = model
            for submodule_name in hierarchy[:-1]:  # Navigate to the parent module
                parent_module = getattr(parent_module, submodule_name)
            
            layer_name = hierarchy[-1]
            for target_module in config.target_modules:
                if target_module in layer_name:
                    original_layer = getattr(parent_module, layer_name)
                    if isinstance(original_layer, nn.Linear):
                        setattr(parent_module, layer_name, LoRALayer(original_layer, config))
                        print(f"Replaced {name} with LoRALayer")
    return model

# Apply LoRA modifications to the model
model = apply_lora_to_model(model, lora_config)

# Helper function to make only the LoRA layers trainable
def mark_only_lora_as_trainable(model: nn.Module, bias: str = 'none') -> None:
    for n, p in model.named_parameters():
        if 'lora_' not in n:
            p.requires_grad = False
    if bias == 'none':
        return
    elif bias == 'all':
        for n, p in model.named_parameters():
            if 'bias' in n:
                p.requires_grad = True
    elif bias == 'lora_only':
        for m in model.modules():
            if isinstance(m, LoRALayer) and \
                hasattr(m, 'bias') and \
                m.bias is not None:
                    m.bias.requires_grad = True
    else:
        raise NotImplementedError

def print_trainable_parameters(model):
    trainable_params = 0
    all_param = 0
    #for param in model.parameters():
    for _, param in model.named_parameters():
        all_param += param.numel()
        if param.requires_grad: # if True, count it as a learnable parameter
            trainable_params += param.numel()
    print(
        f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param:.2f}"
    )
    return trainable_params, all_param
# from peft import get_peft_model#, LoraConfig, TaskType
from transformers import CLIPModel,AutoProcessor, AutoTokenizer
from lora import LoRA_Config,LoRALayer

model_name ="openai/clip-vit-base-patch32"
tokenizer = AutoTokenizer.from_pretrained(model_name)
preprocessor = AutoProcessor.from_pretrained(model_name)
model = CLIPModel.from_pretrained(
    model_name,
    # load_in_8bit=True,
    # device_map='auto',
)

# Check the original number of parameters
origin_num = sum(p.numel() for p in model.parameters())
print("Original number of parameters:", origin_num)

# Configuration for LoRA
lora_config = LoRA_Config(
    r=16, 
    lora_alpha=64, 
    lora_dropout=0.03, 
    merge_weights=False, 
    target_modules=["q_proj", "v_proj", "k_proj", "out_proj"],
)

# Apply LoRA to the model
model = apply_lora_to_model(model, lora_config)
mark_only_lora_as_trainable(model,bias='lora_only')
_,basic_model_params_num =  print_trainable_parameters(model)

### Success if the output looks like this:
trainable params: 1328640 || all params: 157005953 || trainable: 1.42 %

@BenjaminBossan
Member

@sailfish009 Sorry, I can't read that blog post, but it seems to be using some custom code based on (some rather old) PEFT code. In general, if you want to apply LoRA to OpenCLIP, you have to use PEFT based on PR #1324. This is because OpenCLIP uses nn.MultiheadAttention, which we don't support in PEFT but which is added in said PR.

@sailfish009

@BenjaminBossan Thank you. I checked with the branch you provided, and it's working fine.
There was one error, but it was solved by following the link below.
bitsandbytes-foundation/bitsandbytes#240

@miguelalba96

I use the LAION Hugging Face CLIP checkpoints instead of the ones from open_clip; then you can use the PEFT package without any effort:

https://huggingface.co/collections/laion/openclip-laion-2b-64fcade42d20ced4e9389b30
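
For reference, a minimal sketch of that route (the checkpoint id below is just an example from that collection); the Hugging Face CLIP implementation exposes its attention projections as separate nn.Linear modules, so standard LoRA targeting works out of the box:

from transformers import CLIPModel
from peft import LoraConfig, get_peft_model

model = CLIPModel.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K")  # example checkpoint
config = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "k_proj", "v_proj", "out_proj"])
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()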

@BenjaminBossan
Member

I checked with the branch you provided, and it's working fine.

Thanks for testing.

@mm-tpx

mm-tpx commented Aug 27, 2024

I am not sure why this issue is marked as closed, because from my understanding the PR to add multihead attention is not merged to main :(

@mm-tpx

mm-tpx commented Sep 3, 2024

Are there any plans to merge #1324 in the near future?

@BenjaminBossan
Member

Thanks for the reminder @mm-tpx.

Just to explain: The PR is not merged yet as the solution is kind of hacky (due to how MHA is implemented in torch) and people who tested it have reported a few issues, although I tried my best to address them over time, as witnessed in the discussion. The more confirmation I get that people used this implementation successfully, the higher the confidence that the "hack" works and the PR can be merged.

So if you gave that PR a try and it worked for you, please let me know about it. If not, it would be great if you could test it out and give me feedback.

@mm-tpx

mm-tpx commented Sep 17, 2024

For me it worked great :)

@BenjaminBossan
Member

Thanks for the feedback. I'll do my best to keep the branch up-to-date so that it remains usable. Hopefully I'll be able to merge it soon, perhaps as an experimental feature.


This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

@Aceticia

Replying to let our robot overlords know this issue is not stale yet and people want this feature :)

@BenjaminBossan
Member

I did not forget about this, it's still on my todo list.

@mashijie1028

mashijie1028 commented Oct 20, 2024

@BenjaminBossan
Hi! I found that LoRA does not work for in_proj_weight in attn of open_clip. I was wondering how to fix this.
To be more specific, when I implement LoRA as follows:

lora_config = LoraConfig(
    r=16,
    target_modules=["in_proj_weight"],
    lora_alpha=32,
    lora_dropout=0.05
)

The following error occurs: ValueError: Target modules {'in_proj_weight'} not found in the base model. Please check the target modules and try again.
But when I target out_proj, LoRA works fine!
Could you please tell me how to set target_modules in LoraConfig to apply LoRA to the attn layers? Thanks!

By the way, I installed peft as you mentioned before:

python -m pip install git+https://github.com/BenjaminBossan/peft.git@feat-add-lora-multihead-attention

@BenjaminBossan
Member

@mashijie1028

I found that LoRA does not work for in_proj_weight in attn of open_clip.

This is a consequence of how multihead attention is implemented and one of the reasons it is so complicated to apply LoRA to it. in_proj_weight is implemented as a Parameter, not a Module, which is why you can't target it directly like that.

By the way, I installed peft as you mentioned before:

Note that the PR you mentioned will target the whole multihead attention layer, not just one of out_proj or in_proj_weight. Take this into account when specifying the target_modules. There is no way to only target the in_proj_weight at the moment.
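
To see the difference concretely, a small standalone sketch:

import torch.nn as nn

mha = nn.MultiheadAttention(embed_dim=8, num_heads=2)
print(type(mha.in_proj_weight))  # a Parameter, not a Module, so it cannot be a target module
print(type(mha.out_proj))        # NonDynamicallyQuantizableLinear, a Module, so it can be targeted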

@mashijie1028

@BenjaminBossan
Got it! Thanks for your reply!
I still have one question: how should I set the target_modules param in LoraConfig to target the whole multihead attention layer (as you mentioned)?

Previously, I tried target_modules=["attn"]; the code works fine when adding LoRA, but when I merge the LoRA back into the original CLIP via peft_model.merge_and_unload(), the keys 'attn.out_proj.lora_A.default.weight' and 'attn.out_proj.lora_B.default.weight' are still in peft_model.state_dict(), which means the merge operation still does not work.

Could you please provide demo code for how to set the LoraConfig and how to merge back the LoRA layers for the whole multihead attention layer?

@BenjaminBossan
Member

I can investigate this issue, but I need the code for this. Could you provide a reproducer for this, please? I only need the model initialization and merging, no need for the data and training part.

@mashijie1028

@BenjaminBossan
Hi! Below is my demo code:

import open_clip
from peft import LoraConfig, get_peft_model
from peft.tuners.lora.layer import MultiheadAttention as PeftMha

lora_config = LoraConfig(
    r=16,
    target_modules=["attn"],
    lora_alpha=32,
    lora_dropout=0.05
)

model, preprocess = open_clip.create_model_from_pretrained(model_name='ViT-L-14-quickgelu', pretrained="PATH-TO-YOUR-MODEL")
tokenizer = open_clip.get_tokenizer('ViT-L-14-quickgelu')

peft_model = get_peft_model(model, lora_config)
print(len([m for m in peft_model.modules() if isinstance(m, PeftMha)]))   # 36
peft_model.print_trainable_parameters()   # trainable params: 3,244,032 || all params: 430,860,545 || trainable%: 0.7529

peft_model.merge_and_unload()
#peft_model.merge_adapter()
print(peft_model.state_dict().keys())

In my code, I use MetaCLIP via ViT-L-14-quickgelu. After LoRA merging via merge_and_unload(), I print the state_dict() of the merged model and find that some LoRA keys are still in it.

@BenjaminBossan
Member

Thanks a lot for providing the reproducer. There was indeed a bug in the code; it should now be fixed. Could you try again based on the latest commit?

Btw, the line peft_model.merge_and_unload() in your code should be changed to unloaded_model = peft_model.merge_and_unload(), and then you should check unloaded_model; merge_and_unload() is not completely in-place.

@mashijie1028

mashijie1028 commented Oct 22, 2024

@BenjaminBossan
Hi! Sorry for the late reply. I re-installed your latest peft commit and checked the keys in state_dict() carefully.
Unfortunately, I found that there might be some bugs after the merge operation. Below is my demo code:

import open_clip
import requests
import torch
from torch import nn
from peft import LoraConfig, get_peft_model
from PIL import Image
from peft.tuners.lora.layer import MultiheadAttention as PeftMha


lora_config = LoraConfig(
    r=16,
    target_modules=["attn"],
    lora_alpha=32,
    lora_dropout=0.05
)

model, preprocess = open_clip.create_model_from_pretrained(model_name='ViT-L-14-quickgelu', pretrained="CLIP-PATH")

# original model
print(len(model.state_dict().keys()))   # 446
print(len(model.visual.state_dict().keys()))   # 296

# add LoRA
peft_model = get_peft_model(model, lora_config)
print(len(peft_model.state_dict().keys()))   # 590
print(len(peft_model.visual.state_dict().keys()))   # 392
print(peft_model.visual.state_dict().keys())

# merge LoRA
merged_model = peft_model.merge_and_unload()
print(len(merged_model.state_dict().keys()))   # 374
print(len(merged_model.visual.state_dict().keys()))   # 248

As the results show, there are 248 keys in the ViT of CLIP after merging, but the original number is 296. When printing the keys, I found that attn.in_proj_weight and attn.out_proj.weight are missing after merging (2 per layer, so 48 in total, considering the MetaCLIP ViT has 24 layers).

Could you please fix this?

@BenjaminBossan
Member

Thanks for the report. I pushed a new change to the branch that should fix it. Testing your snippet locally, I get the same values now after unloading.

@mashijie1028

Thanks for your contribution.
I checked the output features and the keys in state_dict() and found the values remain consistent before and after LoRA merging (a sketch of this check follows below). I think the problem has been fixed now. Appreciate it!
By the way, will these commits be merged into a future peft release?
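
For anyone who wants to reproduce that check, a rough sketch (assuming image is a preprocessed image tensor as in the earlier snippets):

import torch

peft_model.eval()  # disable LoRA dropout so the comparison is deterministic
with torch.no_grad():
    feats_before = peft_model.encode_image(image)   # LoRA applied on the fly
merged_model = peft_model.merge_and_unload()        # folds LoRA into the base weights
with torch.no_grad():
    feats_after = merged_model.encode_image(image)
print(torch.allclose(feats_before, feats_after, atol=1e-5))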

@BenjaminBossan
Member

Thanks for testing @mashijie1028.

By the way, will these commits be merged into a future peft release?

Yes, it's planned. Right now there is a blocker that prevents MHA from working with low_cpu_mem_usage=True, which I'll have to figure out how to solve. After that, it can hopefully be merged.

@mashijie1028

Cheers! Thanks again for your contribution to the community! Hope everything goes well.

@Baijiong-Lin

Baijiong-Lin commented Nov 3, 2024

@ambroser53 I think the linked LoRA-torch library has some bugs. For instance:

import torch, loratorch
import torch.nn as nn

model_torch = loratorch.Linear(5, 6, r=4, lora_alpha=1)
loratorch.mark_only_lora_as_trainable(model_torch)
print(model_torch.state_dict().keys())
# prints odict_keys(['weight', 'bias', 'w_lora_A', 'w_lora_B'])

optimizer_torch = torch.optim.SGD(model_torch.parameters(), lr=0.1)

for _ in range(3):
    model_torch.train()
    x = torch.rand(2, 5)

    loss2 = model_torch(x).sum()
    optimizer_torch.zero_grad()
    loss2.backward()
    optimizer_torch.step()

print(model_torch.state_dict().keys())
# odict_keys(['bias', 'w_lora_A', 'w_lora_B'])
# note the missing 'weight' key!

As you can see, the weight is dropped from the state_dict, making it impossible to save the model. Same is true for named_parameters(). So if you're using this package, you should be aware of this.

@BenjaminBossan @ambroser53 Hi, I have fixed this problem (Baijiong-Lin/LoRA-Torch@e3e20a0). Could you test it again?
You only need to add loratorch.register_model_param_after_backward(model_torch) after optimizer_torch.step().
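
Applied to the loop from the quoted snippet, the fix would look roughly like this (register_model_param_after_backward being the function added in the linked commit):

for _ in range(3):
    model_torch.train()
    x = torch.rand(2, 5)

    loss2 = model_torch(x).sum()
    optimizer_torch.zero_grad()
    loss2.backward()
    optimizer_torch.step()
    loratorch.register_model_param_after_backward(model_torch)

print(model_torch.state_dict().keys())
# 'weight' should now be present again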
