This patch release contains a few fixes (via #2710) for the newly introduced `target_parameters` feature, which allows LoRA to target `nn.Parameter`s directly (useful for mixture-of-experts layers). Most notably:
- PEFT no longer removes parametrizations that may already exist on the targeted parameter.
- Adding multiple adapters (via `model.add_adapter` or `model.load_adapter`) did not work correctly. Since a solution is not trivial, PEFT now raises an error to prevent this situation.
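
For illustration, here is a minimal sketch of how `target_parameters` is used; the model id and parameter names below are placeholders, not taken from this release:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# placeholder model id; substitute the MoE model you actually use
model = AutoModelForCausalLM.from_pretrained("some/moe-model")

config = LoraConfig(
    # target nn.Parameters directly instead of nn.Module layers,
    # e.g. the expert weight tensors of a mixture-of-experts block
    # (the attribute names here are hypothetical)
    target_parameters=[
        "feed_forward.experts.gate_up_proj",
        "feed_forward.experts.down_proj",
    ],
)
peft_model = get_peft_model(model, config)

# Note: when target_parameters is used, adding a second adapter
# (peft_model.add_adapter / peft_model.load_adapter) now raises an error
# instead of silently producing incorrect results.
```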