
0.17.1

@BenjaminBossan released this 21 Aug 10:04
53c25fe

This patch release contains a few fixes (via #2710) for the newly introduced target_parameters feature, which allows LoRA to target nn.Parameter instances directly (useful for mixture-of-experts layers; see the usage sketch after the list below). Most notably:

  • PEFT no longer removes existing parametrizations from the targeted parameter.
  • Adding more than one adapter (via model.add_adapter or model.load_adapter) did not work correctly when using target_parameters. Since a proper fix is not trivial, PEFT now raises an error to prevent this situation.
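
For context, here is a minimal sketch of how target_parameters is typically used. The checkpoint name and parameter paths are placeholders for illustration, not taken from this release; substitute the parameter paths of your own mixture-of-experts model:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical MoE checkpoint; replace with your own model.
base_model = AutoModelForCausalLM.from_pretrained("my-org/my-moe-model")

config = LoraConfig(
    r=8,
    lora_alpha=16,
    # target_parameters takes names of nn.Parameters instead of module
    # names, letting LoRA adapt parameters directly (e.g. stacked expert
    # weights that are not exposed as nn.Linear modules).
    target_parameters=[
        "feed_forward.experts.gate_up_proj",  # hypothetical parameter path
        "feed_forward.experts.down_proj",     # hypothetical parameter path
    ],
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```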