# Integration of PVeRA #2952
New documentation file (diff hunk `@@ -0,0 +1,40 @@`):
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
# PVeRA: Probabilistic Vector-Based Random Matrix Adaptation

[PVeRA](https://huggingface.co/papers/2512.07703) is a parameter-efficient fine-tuning technique based on VeRA, in the family of LoRA-based adapters. It keeps VeRA's very low parameter budget but improves performance by learning a distribution over latent adaptations. This also enables models adapted with PVeRA to produce Monte Carlo confidence interval estimates by sampling from the learned distribution at inference.
**Collaborator:** Maybe also mention (not necessarily here) how we can set the required `sample_at_inference=True` for PVeRA checkpoints during load:

```python
# Setting sample_at_inference=True for PVeRA checkpoints during load
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM

peft_model_id = "path/to/pvera-checkpoint"  # placeholder; the comment leaves this unspecified

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
peft_config = PeftConfig.from_pretrained(peft_model_id)
peft_config.sample_at_inference = True
peft_model = PeftModel.from_pretrained(base_model, peft_model_id, config=peft_config)
```

**Author:** This is done in the docstring of `PveraModel`, so it should appear in the documentation.
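Beyond loading an existing checkpoint, attaching PVeRA to a model should follow the standard PEFT workflow. The following is a minimal sketch, assuming `PveraConfig` mirrors VeRA's common fields (`r`, `target_modules`); the model name and target module names are illustrative choices, not taken from this PR:

```python
# Hedged sketch: attach PVeRA adapters via the standard PEFT entry points.
# Assumes PveraConfig exposes the same common fields as VeRA's config.
from transformers import AutoModelForCausalLM
from peft import PveraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = PveraConfig(
    r=256,                                # rank of the shared frozen random matrices
    target_modules=["q_proj", "v_proj"],  # only nn.Linear layers are supported
)
peft_model = get_peft_model(base_model, config)
peft_model.print_trainable_parameters()  # only the PVeRA vectors are trainable
```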
When saving the adapter parameters, it's possible to eschew storing the low-rank matrices by setting `save_projection=False` on the `PveraConfig`. In that case, these matrices will be restored from the fixed random seed given by the `projection_prng_key` argument. This cuts down on the size of the checkpoint, but reproducibility cannot be guaranteed across all devices and all future versions of PyTorch. If you want to ensure reproducibility, set `save_projection=True` (which is the default).
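As a hedged illustration of this trade-off (the field names follow VeRA's config, which this PR appears to mirror; they are assumptions rather than confirmed API):

```python
# Hedged sketch: omit the shared random matrices from the checkpoint and rely
# on the PRNG key to regenerate them at load time.
from peft import PveraConfig

config = PveraConfig(
    r=256,
    target_modules=["q_proj", "v_proj"],
    projection_prng_key=0,   # seed used to re-create A and B when loading
    save_projection=False,   # smaller checkpoint, device-dependent reproducibility
)
# After training, model.save_pretrained(...) then stores only the learned
# per-layer parameters; A and B are re-sampled from projection_prng_key on load.
```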
To handle different shapes of adapted layers, PVeRA initializes shared A and B matrices with the largest required size for each dimension. During the forward pass, submatrices A and B for a given layer are sliced out from these shared matrices and used as described in the paper. For example, adapting two linear layers of shapes (100, 20) and (80, 50) will create A and B matrices of shapes (rank, 50) and (100, rank) respectively. Then, to adapt a layer of shape (100, 20), submatrices A and B of shapes (rank, 20) and (100, rank) will be extracted.
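This shape bookkeeping can be illustrated in isolation; the following is a standalone sketch of the slicing described above, not the actual PEFT implementation:

```python
# Standalone illustration of the shared-matrix slicing described above.
import torch

rank = 4
layer_shapes = [(100, 20), (80, 50)]  # (out_features, in_features) per adapted layer

# Shared matrices sized to the largest dimension required by any layer.
max_out = max(s[0] for s in layer_shapes)  # 100
max_in = max(s[1] for s in layer_shapes)   # 50
A = torch.randn(rank, max_in)   # shared A: shape (rank, 50)
B = torch.randn(max_out, rank)  # shared B: shape (100, rank)

# Slice the submatrices used to adapt the (100, 20) layer.
out_f, in_f = 100, 20
A_sub = A[:, :in_f]   # shape (rank, 20)
B_sub = B[:out_f, :]  # shape (100, rank)
print(A_sub.shape, B_sub.shape)  # torch.Size([4, 20]) torch.Size([100, 4])
```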
PVeRA currently has the following constraints:

- Only `nn.Linear` layers are supported.
- The latent representation is not easily accessible, which makes training with a KL divergence term difficult.
The abstract from the paper is:

> Large foundation models have emerged in the last years and are pushing performance boundaries for a variety of tasks. Training or even finetuning such models demands vast datasets and computational resources, which are often scarce and costly. Adaptation methods provide a computationally efficient solution to address these limitations by allowing such models to be finetuned on small amounts of data and computing power. This is achieved by appending new trainable modules to frozen backbones with only a fraction of the trainable parameters and fitting only these modules on novel tasks. Recently, the VeRA adapter was shown to excel in parameter-efficient adaptations by utilizing a pair of frozen random low-rank matrices shared across all layers. In this paper, we propose PVeRA, a probabilistic version of the VeRA adapter, which modifies the low-rank matrices of VeRA in a probabilistic manner. This modification naturally allows handling inherent ambiguities in the input and allows for different sampling configurations during training and testing. A comprehensive evaluation was performed on the VTAB-1k benchmark and seven adapters, with PVeRA outperforming VeRA and other adapters.
## PveraConfig

[[autodoc]] tuners.pvera.config.PveraConfig

## PveraModel

[[autodoc]] tuners.pvera.model.PveraModel
Second new file (diff hunk `@@ -0,0 +1,26 @@`):
```python
# Copyright 2025-present the HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from peft.utils import register_peft_method

from .config import PveraConfig
from .layer import Linear, PVeRALayer
from .model import PveraModel


__all__ = ["Linear", "PVeRALayer", "PveraConfig", "PveraModel"]


register_peft_method(name="pvera", config_cls=PveraConfig, model_cls=PveraModel, prefix="pvera_lambda_")
```
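The `prefix` argument tells PEFT which parameter names belong to this adapter; by analogy with VeRA (whose prefix is `vera_lambda_`), PVeRA's trainable parameters are presumably exposed under `pvera_lambda_*` names. A hedged way to inspect this after attaching the adapter, reusing the `peft_model` from the earlier sketch:

```python
# Hedged sketch: list the adapter parameters identified by the registered prefix.
for name, param in peft_model.named_parameters():
    if "pvera_lambda_" in name and param.requires_grad:
        print(name, tuple(param.shape))
```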