XLoRA embed_scale Support #2830 #2831
Summary
This PR adds `embed_scale` support to `XLoraEmbeddingLayer`, ensuring X-LoRA correctly handles models with scaled embeddings (e.g., `Gemma3TextScaledWordEmbedding`). This is a companion PR to the LoRA/TrainableTokens embed_scale fix, following the same approach but adapted for X-LoRA's architecture.
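For context, scaled word embeddings multiply the embedding lookup by a constant factor. The toy module below is a stand-in for something like `Gemma3TextScaledWordEmbedding`, not the actual transformers class; the name `ScaledWordEmbedding` and the scale value are made up for illustration.

```python
import torch
import torch.nn as nn

class ScaledWordEmbedding(nn.Embedding):
    """Toy stand-in for a scaled embedding: the lookup result is multiplied
    by a fixed embed_scale (e.g. sqrt(hidden_size) in Gemma-style models)."""

    def __init__(self, num_embeddings, embedding_dim, embed_scale=1.0):
        super().__init__(num_embeddings, embedding_dim)
        self.embed_scale = embed_scale

    def forward(self, input_ids):
        # The base model scales its embedding output; any adapter delta added
        # on top therefore needs the same scaling, or it is off by embed_scale.
        return super().forward(input_ids) * self.embed_scale

emb = ScaledWordEmbedding(32, 8, embed_scale=8 ** 0.5)
print(emb(torch.tensor([[1, 2, 3]])).shape)  # torch.Size([1, 3, 8])
```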
Changes
Code
- Modified `XLoraEmbeddingLayer.forward()` to apply `embed_scale` to the X-LoRA adapter contributions (a sketch of the idea follows this list)
- `embed_scale` is retrieved via `self.target._get_embed_scale()` (inherited from `BaseTunerLayer`)

Tests
- Added `test_xlora_embed_scale_is_applied`
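A minimal sketch of the idea behind the `forward()` change, not the actual PEFT code: the adapter delta is multiplied by the base layer's `embed_scale` before being added to the (already scaled) base output. The attribute names (`base_layer`, `lora_embedding_A/B`, `scaling`, `active_adapters`) follow the PEFT LoRA embedding layer, but the body is simplified (the real `scalings` tensor is also indexed per layer), and the assumption that `_get_embed_scale()` returns a plain scale factor is mine.

```python
import torch
import torch.nn.functional as F

def xlora_embedding_forward(x, target, scalings):
    """Simplified X-LoRA embedding forward; `target` is the wrapped LoRA Embedding."""
    # Base path: the wrapped embedding may already apply embed_scale internally.
    result = target.base_layer(x)

    # embed_scale of the underlying embedding (assumed scalar here), via the
    # helper inherited from BaseTunerLayer.
    embed_scale = target._get_embed_scale()

    for adapter_n, adapter in enumerate(target.active_adapters):
        emb_A = target.lora_embedding_A[adapter]      # (r, num_embeddings)
        emb_B = target.lora_embedding_B[adapter]      # (embedding_dim, r)
        lora_scale = target.scaling[adapter]
        xlora_gate = scalings[..., adapter_n].unsqueeze(-1)  # per-token X-LoRA weight

        after_A = F.embedding(x, emb_A.T)             # (..., r)
        delta = (after_A @ emb_B.T) * lora_scale * xlora_gate

        # The fix: scale the adapter contribution the same way the base
        # embedding output is scaled.
        delta = delta * embed_scale

        result = result + delta
    return result
```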
Key Differences from LoRA Implementation
Implementation:
- Uses `self.target._get_embed_scale()` instead of `self._get_embed_scale()`, because X-LoRA wraps a LoRA `Embedding` layer (via `self.target`)
- The X-LoRA forward receives a `scalings` parameter, not `adapter_names`

Testing:
- The test runs the model with `use_cache=False` (an X-LoRA requirement); see the sketch below
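Roughly, the new test builds an X-LoRA model on top of a base model whose embedding carries an `embed_scale` and checks that the scaling reaches the adapter path. The sketch below only shows the overall shape (config construction and the `use_cache=False` requirement); the helper name, fixture arguments, and the final comparison are placeholders, not the actual test.

```python
import torch
from peft import XLoraConfig, get_peft_model

def run_xlora_embed_scale_check(base_model, tokenizer, adapter_dirs):
    # adapter_dirs: paths to pre-saved LoRA adapters that target the embedding.
    config = XLoraConfig(
        task_type="CAUSAL_LM",
        hidden_size=base_model.config.hidden_size,
        adapters={str(i): str(p) for i, p in enumerate(adapter_dirs)},
    )
    model = get_peft_model(base_model, config)

    inputs = tokenizer("embed_scale smoke test", return_tensors="pt")
    with torch.no_grad():
        # X-LoRA requires use_cache=False for its two-pass forward.
        out = model(**inputs, use_cache=False)

    # The actual test presumably compares these outputs against a reference
    # with the embedding scaling applied by hand; that comparison is omitted here.
    return out.logits
```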
Test Results
- `make style` passed

Fixes #2830
cc: @BenjaminBossan