Conversation

@OrMullerHahitti

This pull request introduces support for AdaDoRA (AdaLoRA with DoRA) to the codebase, enabling the use of DoRA (Weight-Decomposed Low-Rank Adaptation) within the AdaLoRA framework. The main changes add new modules and configuration options to support this, update the initialization and management of adapter layers, and provide a new GLUE finetuning script that demonstrates AdaLoRA/DoRA usage.

Key changes include:

AdaDoRA (DoRA for AdaLoRA) Support

  • Added the AdaDoraLinearLayer class in adalora/dora.py, implementing DoRA's magnitude vector and forward logic for AdaLoRA's SVD decomposition, including proper handling of singular values and rank pruning.
  • Updated adalora/__init__.py to export AdaDoraLinearLayer and AdaDoraLinearVariant for external use.
  • Modified adalora/config.py to allow use_dora (DoRA) in AdaLoRA by removing the previous error, enabling AdaDoRA configuration.
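
To make the AdaDoraLinearLayer bullet above concrete, here is a minimal, self-contained sketch of how DoRA's magnitude rescaling can sit on top of AdaLoRA's SVD-style update. The class and argument names are illustrative assumptions, not the actual PEFT code.

```python
import torch
import torch.nn as nn


class AdaDoraSketch(nn.Module):
    """Sketch of DoRA on AdaLoRA's SVD update, where delta_W = B @ (A * E) * scaling / ranknum."""

    def __init__(self, base_weight: torch.Tensor):
        super().__init__()
        # learnable per-output-channel magnitude, initialized from the base weight's column norms
        self.magnitude = nn.Parameter(base_weight.norm(p=2, dim=1))

    @staticmethod
    def get_weight_norm(weight: torch.Tensor, lora_weight: torch.Tensor) -> torch.Tensor:
        # column-wise L2 norm of the combined weight W + delta_W
        return (weight + lora_weight).norm(p=2, dim=1)

    def forward(self, x, base_result, weight, lora_A, lora_B, lora_E, scaling, ranknum):
        # AdaLoRA's SVD-style delta: B @ (A row-scaled by the singular values E)
        lora_weight = (lora_B @ (lora_A * lora_E)) * scaling / (ranknum + 1e-5)
        # the norm is detached so the gradient flows through the magnitude, not the norm
        weight_norm = self.get_weight_norm(weight, lora_weight.detach())
        mag_norm_scale = (self.magnitude / weight_norm).view(1, -1)
        lora_result = x @ lora_weight.T
        # contribution to add on top of the base layer output:
        # m / ||W + dW|| * (Wx + dWx)  =  Wx + (scale - 1) * Wx + scale * dWx
        return (mag_norm_scale - 1) * base_result + mag_norm_scale * lora_result
```

Rank pruning in AdaLoRA zeroes entries of E, which simply drops the corresponding rank-1 terms from lora_weight in this formulation.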

Adapter Layer and Variant Management

  • Extended AdaLoraLayer in adalora/layer.py to:
    • Track lora_magnitude_vector in adapter_layer_names for DoRA support.
    • Add a lora_variant dictionary to manage per-adapter variants (e.g., AdaDoRA).
    • Allow initialization of adapter layers with the use_dora flag and resolve the correct variant, including initialization of the AdaDoRA variant if requested.
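
As a rough illustration of the variant-resolution flow described above (method and attribute names follow the LoRA variant pattern but are assumptions here, not verbatim PEFT code):

```python
class AdaDoraLinearVariant:
    """Stand-in for the DoRA variant object; the real logic lives in adalora/variants.py."""

    def init(self, module, adapter_name: str) -> None:
        # variant-specific setup, e.g. creating the magnitude vector for this adapter
        pass


class AdaLoraLayerSketch:
    def __init__(self):
        self.lora_variant = {}  # adapter name -> variant object (e.g. the AdaDoRA variant)

    def resolve_lora_variant(self, *, use_dora: bool):
        # plain AdaLoRA: no variant; AdaDoRA: return the DoRA variant object
        return AdaDoraLinearVariant() if use_dora else None

    def update_layer(self, adapter_name: str, config) -> None:
        variant = self.resolve_lora_variant(use_dora=getattr(config, "use_dora", False))
        if variant is not None:
            self.lora_variant[adapter_name] = variant
            variant.init(self, adapter_name=adapter_name)
```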

Utilities and Imports

  • Imported LoraVariant into adalora/layer.py to support adapter variants.

Example and Usage

  • Added a new script scripts/run_glue.py that demonstrates finetuning for GLUE tasks with AdaLoRA and AdaDoRA, including argument parsing for DoRA options, correct PEFT configuration, and integration of the new AdaLoRA/DoRA callback.
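
A hedged usage example of what enabling DoRA inside AdaLoRA could look like with this PR; the model name and the AdaLoRA scheduling values below are illustrative placeholders, and use_dora=True is the option this PR adds.

```python
from transformers import AutoModelForSequenceClassification
from peft import AdaLoraConfig, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

peft_config = AdaLoraConfig(
    task_type="SEQ_CLS",
    init_r=12,        # initial rank before AdaLoRA's budget allocation
    target_r=8,       # average target rank after pruning
    total_step=1000,  # placeholder; AdaLoRA needs the total number of training steps for scheduling
    use_dora=True,    # the AdaDoRA switch added by this PR
)
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()
```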

These changes collectively enable AdaLoRA models to leverage DoRA-style adaptation, extending the existing dora.py implementation in the LoRA module, with full support in both the core library and example scripts.

Adds weight-decomposed adaptation to AdaLoRA, introducing a learnable magnitude vector and DoRA-scaled forward/merge paths. Propagates configuration to layers and model, updates rank allocation to skip magnitude params, and recalculates magnitudes after pruning/masking.
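
A small sketch of what skipping magnitude parameters during rank allocation could mean in practice; the helper and the name-based filter are assumptions, not the PR's exact code.

```python
def iter_rank_allocation_params(model):
    """Yield only the adapter parameters the rank allocator should score (sketch)."""
    for name, param in model.named_parameters():
        # DoRA magnitude vectors have no per-rank structure, so they are skipped
        # when accumulating importance statistics for lora_A / lora_E / lora_B
        if "lora_magnitude_vector" in name:
            continue
        yield name, param
```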

Introduces a GLUE finetuning script that wires AdaLoRA/DoRA, computes total steps for scheduling, and updates rank allocation via a training callback.
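
A minimal sketch of such a callback, assuming a Hugging Face Trainer; the callback class name is made up here, while AdaLoraModel.update_and_allocate(global_step) is the existing PEFT hook that drives rank allocation.

```python
from transformers import TrainerCallback


class AdaLoraRankAllocationCallback(TrainerCallback):
    """Hypothetical callback that triggers AdaLoRA's rank reallocation after each optimizer step."""

    def __init__(self, peft_model):
        self.peft_model = peft_model

    def on_step_end(self, args, state, control, **kwargs):
        # peft_model.base_model is the AdaLoraModel, which owns the rank allocator
        self.peft_model.base_model.update_and_allocate(state.global_step)
        return control
```

Once the total step count is known, the callback would be registered with trainer.add_callback(AdaLoraRankAllocationCallback(model)).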

Improves flexibility by supporting DoRA in AdaLoRA, safer merging, and clearer training integration; unmerging for DoRA remains unsupported.
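
For the merge path, a rough sketch of how the DoRA-style folding could work, assuming the standard DoRA merge formula carries over to AdaLoRA's delta; names are illustrative, and, as noted above, unmerging is not supported for this path.

```python
import torch


def merge_adadora_weight(
    weight: torch.Tensor,       # frozen base weight W, shape (out_features, in_features)
    lora_weight: torch.Tensor,  # AdaLoRA delta B @ (A * E) * scaling / ranknum, same shape as W
    magnitude: torch.Tensor,    # learned DoRA magnitude, shape (out_features,)
) -> torch.Tensor:
    # column-wise norm of the combined weight
    weight_norm = (weight + lora_weight).norm(p=2, dim=1)
    dora_factor = (magnitude / weight_norm).view(-1, 1)
    # fold magnitude and direction into a single merged weight: W' = m * (W + dW) / ||W + dW||
    return dora_factor * (weight + lora_weight)
```
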
- Added AdaDoraLinearLayer for handling AdaLoRA's 3-factor decomposition.
- Introduced AdaDoraLinearVariant for managing variant-specific operations.
- Updated AdaLoraLayer to support DoRA integration and manage variants.
- Enhanced SVDLinear to utilize the new variant pattern for merging and unmerging.
- Implemented tests for AdaDoraLinearLayer and AdaDoraLinearVariant to ensure functionality.
- Updated AdaLoraModel to handle magnitude updates after rank pruning.
- Added uv.lock file for dependency management.

@BenjaminBossan (Member) left a comment

Thanks for opening a new PR to add DoRA support to AdaLoRA. I did a first review pass focused on the general design of the implementation; I haven't done a detailed check of the DoRA logic or the example yet, as it makes more sense to finalize the integration first. Please check my comments.

I also saw that there are unrelated changes in the PR, like changing capitalization of comments or adding the uv lock file. Could you please revert all those changes? Finally, please run make style to ensure the linter is happy.

Moreover, before the PR can be merged, the docs should be updated, but we can leave this for later.

OrMullerHahitti and others added 9 commits January 6, 2026 17:40
- Merge lora_magnitude_vector check with main conditional (with parentheses)
- Remove internal exports (AdaDoraLinearLayer, AdaDoraLinearVariant)
- Remove obsolete test_adalora_use_dora_raises test
- Move run_glue.py to examples/adalora_adadora/ with README
The method was a no-op (just pass) since resetting magnitude after
pruning disabled DoRA by making the ratio m/||W+ΔW|| = 1. Removed
the dead code for cleanliness.
Per reviewer feedback: variant resolution now happens inside the base
class after _move_adapter_to_device_of_base_layer, ensuring correct
ordering. SVDLinear just overrides resolve_lora_variant() to return
the AdaDoRA variant.
Per reviewer feedback: consistent with LoRA DoRA implementation.
Added _compute_lora_weight helper to compute B @ (A * E) * scaling / ranknum
externally instead of inside get_weight_norm.
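
A hedged reading of that helper, assuming it mirrors AdaLoRA's existing forward math (the exact signature is an assumption):

```python
def _compute_lora_weight(lora_A, lora_B, lora_E, scaling, ranknum):
    # AdaLoRA's SVD-style delta weight: B @ (A row-scaled by the singular values E),
    # scaled by lora_alpha / r and normalized by the number of ranks currently kept
    return (lora_B @ (lora_A * lora_E)) * scaling / (ranknum + 1e-5)
```

Computing this once in the forward pass lets the same tensor be reused, detached, for the weight norm and again for the adapter output.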

@BenjaminBossan (Member) left a comment

Thanks for the updates. I did another review and still found a few areas for improvement; please check my comments.

OrMullerHahitti and others added 15 commits January 7, 2026 13:04
- Add AdaDoRA config to ALL_CONFIGS test matrix
- Delete test_adalora_dora.py (covered by existing tests)
- Fix variants.py get_weight_norm calls

Addresses review comments from @BenjaminBossan in PR huggingface#2969
…lue for weight normalization and adapter output
Resolved merge conflicts from upstream's update_layer refactoring.
Updated to extract use_dora from config object while maintaining
backward compatibility. Tests passing.
Resolved conflicts by adapting AdaDoRA to upstream's config-based API.
- Updated update_layer() signature to accept AdaLoraConfig parameter
- Extract use_dora from config to preserve AdaDoRA functionality
- Updated SVDLinear.__init__() to use config object
- Updated all module instantiations to pass config=lora_config
- Kept fan_in_fan_out in kwargs for Conv1D detection

All tests passing for both vanilla AdaLoRA and AdaDoRA variants.

@OrMullerHahitti (Author)

Summary of Changes
Hi @BenjaminBossan! I've addressed all your review comments and resolved merge conflicts with the latest upstream. Here's a comprehensive summary:

1. Merge Conflict Resolution

Resolved conflicts from upstream's API refactoring that changed the update_layer signature:

  • Old: individual parameters (lora_dropout, init_lora_weights, use_dora, etc.)
  • New: a config object (config: AdaLoraConfig)
  • Solution: updated AdaDoRA to extract use_dora from the config while maintaining full compatibility

Files modified:

  • src/peft/tuners/adalora/layer.py: updated update_layer() and SVDLinear.__init__() signatures
  • src/peft/tuners/adalora/model.py: updated _create_and_replace() to pass the config object
2. Optimized lora_weight Computation

Fixed redundant computation in AdaDoraLinearLayer.forward():

  • Before: computed lora_weight twice (once for the norm, once for the result)
  • After: compute it once and reuse it for both operations (detached for the norm calculation)
  • Location: src/peft/tuners/adalora/dora.py:118-133

```python
# compute lora_weight once
lora_weight = self._compute_lora_weight(lora_A, lora_B, lora_E, scaling, ranknum)

# reuse for weight norm (detached)
weight_norm = self.get_weight_norm(weight, lora_weight.detach())

# reuse for adapter output
lora_result = x @ lora_weight.T
```

3. Fixed lora_magnitude_vector Placement

Corrected the location of the lora_magnitude_vector initialization:

  • Before: incorrectly added to the base AdaLoraLayer.adapter_layer_names
  • After: dynamically added only by the DoRA variant during initialization
  • Rationale: vanilla AdaLoRA doesn't use magnitude vectors, only AdaDoRA does
  • Location: src/peft/tuners/adalora/variants.py:59-61

```python
if not module.lora_magnitude_vector:
    # first dora layer being added, add lora_magnitude_vector to learnable params
    module.adapter_layer_names = module.adapter_layer_names[:] + ("lora_magnitude_vector",)
```
4. Removed Redundant use_dora Check

Simplified logic in update_layer():

  • Before: checked both if use_dora: and if lora_variant is not None:
  • After: only check if lora_variant is not None: (sufficient since resolve_lora_variant returns None when use_dora=False)
  • Location: src/peft/tuners/adalora/layer.py:94-97
5. Enhanced Documentation

Added paper links and citations to the README:

  • Inline links: added hyperlinks to the AdaLoRA and DoRA papers in the introduction
  • Citation section: added BibTeX entries for both papers
  • Location: examples/adalora_with_dora/README.md
6. Test Integration

Integrated AdaDoRA into the main PEFT test harness:

  • Added: AdaLoraConfig(use_dora=True) to the ALL_CONFIGS test matrix (rough shape sketched at the end of this comment)
  • Deleted: the custom tests/test_adalora_dora.py (functionality now covered by the parametrized tests)
  • Coverage: around 30 test methods across 6 different models
  • Location: tests/test_decoder_models.py:86-94

Test Results: 11/12 tests passing

  • Vanilla AdaLoRA: 6/6 passing (all models)
  • AdaDoRA (use_dora=True): 5/6 passing
    • ✓ OPT, GPTJ, Llama, Qwen2, Gemma3
    • ✗ GPT2 (expected - Conv1D incompatibility, not AdaDoRA-specific)

The GPT2 failure is a known limitation with AdaLoRA and Conv1D layers, not introduced by this PR.
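
A rough shape of the new test-matrix entry referenced in point 6; the exact tuple layout and the extra kwargs used by the PEFT test harness are assumptions here.

```python
from peft import AdaLoraConfig

# Illustrative ALL_CONFIGS-style entry enabling DoRA on top of AdaLoRA.
ADADORA_TEST_CASE = (
    AdaLoraConfig,
    {
        "task_type": "CAUSAL_LM",
        "target_modules": None,  # let PEFT pick the default target modules per architecture
        "total_step": 1,         # AdaLoRA requires a total step count even in tests
        "use_dora": True,        # the new AdaDoRA switch exercised by the matrix
    },
)
```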
