
Conversation

@gslama12
Contributor

As discussed in #2153.

Code example

Member

@BenjaminBossan BenjaminBossan left a comment


Thanks for the PR. The change seems to be much smaller than I initially thought, which is great. Before we proceed, could we do the following:

Let's add a test case for this. First, let's create an entry like this one:

```python
("Conv2d 1 LoRA", "Conv2d", LoraConfig, {"target_modules": ["conv2d"]}),
```

Then we need to define a model with a conv layer that uses groups. Something similar to this with groups=5 should work:

```python
class ModelConv2D(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv2d = nn.Conv2d(5, 10, 3)
        self.relu = nn.ReLU()
        self.flat = nn.Flatten()
        self.lin0 = nn.Linear(10, 2)
        self.sm = nn.LogSoftmax(dim=-1)

    def forward(self, X):
        X = X.float().reshape(-1, 5, 3, 3)
        X = self.conv2d(X)
        X = self.relu(X)
        X = self.flat(X)
        X = self.lin0(X)
        X = self.sm(X)
        return X
```

Then make sure that the model is being used when its model ID is passed by adding an entry similar to this one:

```python
if model_id == "Conv2d":
    return ModelConv2D().to(torch_dtype)
```
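For illustration, a hedged sketch of the grouped variant of that model — the class name ModelConv2DGroups is hypothetical; only the groups=5 argument (and the matching channel counts, which must be divisible by groups) differs from the model above:

```python
import torch
import torch.nn as nn

class ModelConv2DGroups(nn.Module):
    """Same as ModelConv2D above, but the conv layer uses groups=5."""

    def __init__(self):
        super().__init__()
        # in_channels=5 and out_channels=10 are both divisible by groups=5,
        # which nn.Conv2d requires.
        self.conv2d = nn.Conv2d(5, 10, 3, groups=5)
        self.relu = nn.ReLU()
        self.flat = nn.Flatten()
        self.lin0 = nn.Linear(10, 2)
        self.sm = nn.LogSoftmax(dim=-1)

    def forward(self, X):
        X = X.float().reshape(-1, 5, 3, 3)
        X = self.conv2d(X)
        X = self.relu(X)
        X = self.flat(X)
        X = self.lin0(X)
        X = self.sm(X)
        return X

model = ModelConv2DGroups()
out = model(torch.arange(90))  # 90 values -> 2 samples of shape (5, 3, 3)
```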

LMK if anything is unclear.

Moreover, don't forget to run make style for the linter.

@gslama12
Contributor Author

Thanks for the fast reply. I implemented the tests as you described: one for LoRA and one for DoRA. LMK if there is something missing.

@BenjaminBossan
Member

Thanks for adding the tests. Unfortunately, a lot of them are failing for me locally. Do they pass for you? E.g. this one:

```shell
pytest tests/test_custom_models.py -k test_forward_output_finite_021_Conv2d_Groups_LoRA
```

@gslama12
Contributor Author

gslama12 commented Mar 1, 2025

I had a bug in the test model; I fixed it now, and the TC you stated should work. Let's see if anything else fails.

@gslama12
Contributor Author

gslama12 commented Mar 3, 2025

It seems like there is an issue with the merging TCs. I will try to look into it; if you have any suggestions, LMK.

@BenjaminBossan
Member

Thanks for the updates but some tests are still failing on CI (ignore those caused by timeouts, that's a HF Hub issue). Checking the values, it doesn't look like it's just a matter of precision/tolerance but that there's something else going on. Do these tests pass locally for you?

@gslama12
Contributor Author

gslama12 commented Mar 3, 2025

No, they don't. It seems like something in the merging procedure is off. I will try to look into it.

@gslama12
Contributor Author

gslama12 commented Mar 3, 2025

I tried to recreate the test test_merge_layers_021_Conv2d_Groups_LoRA, which is one of the failing TCs in the pipeline. Maybe you can check out this gist, which runs fine on my machine. I wonder if there is maybe something going on within the testing pipeline that causes the assertion error. 🤔

@BenjaminBossan
Member

You don't need to create a standalone script to reproduce the error, just run pytest like so:

```shell
pytest tests/test_custom_models.py -k test_merge_layers_021_Conv2d_Groups_LoRA
```

With this, I can reproduce the error locally. Dropping into the debugger, I see:

```
(Pdb) logits
tensor([-0.0053, -5.2435], grad_fn=<SelectBackward0>)
(Pdb) logits_merged
tensor([-7.0452e-04, -7.2583e+00])
```

To run all the groups-related tests, call this:

```shell
pytest tests/test_custom_models.py -k groups -v
```
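For context, what the merge tests assert is that outputs before and after folding the adapter into the base weight agree. A minimal sketch of that invariant with hand-rolled LoRA matrices on a plain linear layer (not the PEFT API, and ignoring the LoRA scaling factor):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
base = nn.Linear(10, 2, bias=False)
lora_A = torch.randn(4, 10) * 0.1  # rank-4 down-projection
lora_B = torch.randn(2, 4) * 0.1   # rank-4 up-projection

x = torch.randn(5, 10)
# Unmerged: base output plus the low-rank adapter path.
out_unmerged = base(x) + x @ lora_A.T @ lora_B.T

# Merged: fold the delta weight B @ A into the base weight.
merged = nn.Linear(10, 2, bias=False)
with torch.no_grad():
    merged.weight.copy_(base.weight + lora_B @ lora_A)
out_merged = merged(x)
```

For Linear (and ungrouped Conv2d) this holds exactly up to floating-point tolerance; for the grouped conv model it is exactly this comparison that fails.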

@gslama12
Contributor Author

gslama12 commented Mar 3, 2025

Yes, I am aware of that. I also get the same assertion error when running the test, so my first thought was that I messed up the merging with my changes. But when I create a similar scenario and run it as a local file without the pipeline, the assertion seems to work.

So I'm currently trying to figure out what the difference is.

@BenjaminBossan
Member

Ah sorry, I misunderstood you.

Yes, your script passes, but there are a few differences. Please pass init_lora_weights=False to LoraConfig or else LoRA will just be a no-op. Furthermore, I had to pass a non-zero input, so e.g. dummy_input = torch.arange(90).reshape(9, 10) as in the test. Now the first assert fails.
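The no-op behaviour mentioned above follows from LoRA's default initialization: lora_B starts at zero, so the initial delta weight is exactly zero. A minimal illustration with plain tensors (shapes are arbitrary, not PEFT internals):

```python
import torch

lora_A = torch.randn(4, 10)  # initialized randomly (Kaiming-uniform in LoRA)
lora_B = torch.zeros(2, 4)   # initialized to zero by default

delta_w = lora_B @ lora_A    # zero matrix -> the adapter changes nothing yet
```

Passing init_lora_weights=False makes both matrices random, so the adapter actually perturbs the output and the merge test has something meaningful to compare.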

@gslama12
Contributor Author

gslama12 commented Mar 3, 2025

Ahh ok, thanks for the help. I will try to debug this.

@gslama12
Contributor Author

gslama12 commented Mar 4, 2025

Ok, so I think we can't merge the layers the way I envisioned because they are of different types, which is an inherent flaw in my approach to the groups issue. I have not come up with a solution yet. Because the base layer is a conv layer with groups while the adapters are regular conv layers (without groups), we cannot simply compute the delta weight and add it to the base layer's weight the way we currently do for all other layers; doing so results in different behaviour compared to the unmerged model.
I am not quite sure if I will figure this out, as it seems quite complex to solve. I have created a small gist to outline the core issue. If we can solve this, I think merging the adapters correctly should be possible.

The other idea would be the one you proposed, where the adapters themselves are actually conv layers with groups, but since the in- and out-channels of a conv layer must be divisible by the number of groups, I feel like this will cause all kinds of other issues.
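The type mismatch described above shows up directly in the weight shapes: a grouped conv stores its weight as (out_channels, in_channels // groups, kH, kW), while a dense delta weight built from ungrouped adapter convs spans the full in_channels, so it cannot simply be added to the grouped base weight. A quick check:

```python
import torch.nn as nn

grouped = nn.Conv2d(10, 10, 3, groups=5)
plain = nn.Conv2d(10, 10, 3)

# Grouped conv: second weight dim is in_channels // groups.
print(tuple(grouped.weight.shape))  # (10, 2, 3, 3)
# Ungrouped conv: second weight dim is the full in_channels.
print(tuple(plain.weight.shape))    # (10, 10, 3, 3)
```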

@BenjaminBossan
Member

Thanks for the analysis and for isolating the problem. Personally, I see it like this: In the original LoRA paper, conv + groups was not covered at all, so there is no "right" solution in that sense. In the end, we need to take something that honors the spirit of LoRA and that actually works. I put some thought into what it would mean to have LoRA on grouped convolution and tinkered a bit with solutions, but couldn't come up with anything working, except for what I posted in the other thread.

That solution has the obvious restrictions that you mentioned. The in and out dimension should be fine, since those are the same as the ones from the original layer. What's problematic is that the rank would need to be divisible by groups, which is a big limitation. On the other hand, as is there is no solution at all. Maybe you or someone else can figure something out, but at the end of the day, I'd be willing to take a limited solution to not having any solution at all.

The alternative would be to check for groups > 1 at initialization time and, say, give a warning that it won't support merging. Then we raise an error with a nice message when we detect users trying to merge.
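A standalone sketch of how such a guard could look — check_mergeable is a hypothetical helper, not the actual PEFT code, and the message wording is illustrative:

```python
import torch.nn as nn

def check_mergeable(base_layer: nn.Module) -> None:
    """Raise if merging LoRA weights into this layer is unsupported."""
    if isinstance(base_layer, nn.Conv2d) and base_layer.groups > 1:
        raise NotImplementedError(
            "Merging is not supported for Conv2d layers with groups > 1; "
            "the dense delta weight cannot be added to the grouped base "
            "weight. See https://github.com/huggingface/peft/pull/2403."
        )

# A grouped conv is refused, a plain conv passes silently.
try:
    check_mergeable(nn.Conv2d(10, 10, 3, groups=5))
    raised = False
except NotImplementedError:
    raised = True
check_mergeable(nn.Conv2d(10, 10, 3))  # no error
```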

@gslama12
Contributor Author

gslama12 commented Mar 6, 2025

Sounds good. For now, should I proceed with implementing the alternative that raises an error during merging? If I come up with a better solution later, I can open a separate PR.

@BenjaminBossan
Member

Yes, sounds good @gslama12, thanks. At the code location where the error is raised, let's add a comment with a short explanation and a link to this PR.

@gslama12
Contributor Author

gslama12 commented Mar 9, 2025

I have added the warning at init and the error message for merging. LMK if there is anything else left to do for this PR.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@BenjaminBossan
Member

Thanks for the update. Unsurprisingly, now all the tests that involve merging are failing for the model using groups:

https://github.com/huggingface/peft/actions/runs/13751050130/job/38480944098?pr=2403#step:8:1232

I think there is no other solution but to edit all of these tests and add a skip (or early return) if this new model is being used.
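In the parametrized custom-model tests, the skip could look roughly like this — the helper name and the "Conv2dGroups" model ID are hypothetical placeholders for whatever the new entry is registered under:

```python
import pytest

def maybe_skip_merge_test(model_id: str) -> None:
    # Merging is not implemented for grouped convolutions, see PR #2403.
    if model_id == "Conv2dGroups":  # hypothetical model ID
        pytest.skip("Merging not supported for Conv2d with groups > 1.")

# pytest.skip raises a special Skipped exception that pytest catches
# and reports as a skipped test rather than a failure.
try:
    maybe_skip_merge_test("Conv2dGroups")
    skipped = False
except BaseException:
    skipped = True
```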

@gslama12
Contributor Author

I added the skip statements to the tests. Let's see if there are any other skips that are missing.

Member

@BenjaminBossan BenjaminBossan left a comment


Thanks for your great work on this, LGTM.

@BenjaminBossan BenjaminBossan merged commit 5b60154 into huggingface:main Mar 20, 2025
14 checks passed
Guy-Bilitski pushed a commit to Guy-Bilitski/peft that referenced this pull request May 13, 2025
Conv layers with groups>1 are supported, but not merging.
BenjaminBossan pushed a commit that referenced this pull request Jun 30, 2025
More generalized handling of groups argument in LoRA/DoRA conv layers
(previous solution: #2403).
efraimdahl pushed a commit to efraimdahl/peft that referenced this pull request Jul 12, 2025
Conv layers with groups>1 are supported, but not merging.
efraimdahl pushed a commit to efraimdahl/peft that referenced this pull request Jul 12, 2025
More generalized handling of groups argument in LoRA/DoRA conv layers
(previous solution: huggingface#2403).