minor changes to OFT to make it faster #2805
Conversation
Thanks for this PR to make OFT faster, I found a few issues, please check my comments.
In my local testing, this PR increased the speed from 5726 tok/sec to 6582 tok/sec using the default MetaMath experiment for OFT (~15% faster).
src/peft/tuners/oft/layer.py
Outdated
vec = matrix[:, self.rows, self.cols]
return vec

@torch.compile
Right now, we don't torch.compile any code but instead leave it to the user to choose if they want to compile or not. Testing on my machine with a 4090, I also don't see a big difference (6582 vs 6532 tokens/sec). Did you find this to help a lot in your setting? If not, I'd suggest removing it.
Thanks for the quick reply! The thing is that I noticed it will speed it up a bit (I deliberately do not add the specific configurations like dynamic etc. to avoid errors), but we can leave it out.
General question: if we have custom kernels (for example triton kernel, is it allowed to add?)
The thing is that I noticed it will speed it up a bit
How much difference does it make for you? If it's not significantly more than I observed, I'd say, let's remove it for now.
General question: if we have custom kernels (for example triton kernel, is it allowed to add?)
Good question. So at HF, there is a general push to make it easier to optionally make use of kernels: https://huggingface.co/blog/hello-hf-kernels. In transformers, there is already an integration, though the API is still early stage (I think there is no option for fine-grained control of what kernels to use).
For PEFT, we plan to add the same functionality but we're still working out how to exactly implement it. Once it's there, we will happily accept kernels for specific PEFT methods (and migrate existing kernels there). Until then, feel free to add your OFT kernel to the HF kernel hub, we could use it for testing the kernels integration.
Thanks for the information :) In another project I am building kernels for this orthogonalization, which makes it faster and more memory-efficient than the current torch.compile, but sure, we can add it later :)
Let's just remove the torch.compile for now.
src/peft/tuners/oft/layer.py
Outdated
if required_dtype != self.weight.dtype:
    x = x.to(self.weight.dtype)

if self.rows.device != self.weight.device:
IIRC, tensor.to(device) is a no-op if the device is already the same, i.e. we can remove the device check.
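A quick sketch of this behavior, independent of the PEFT code (the variable names are just for illustration):

```python
import torch

t = torch.zeros(3)             # lives on CPU
same = t.to(t.device)          # target device matches: torch returns the very same tensor
changed = t.to(torch.float64)  # dtype differs: torch allocates a new tensor

print(same is t)     # True: the .to() call was a no-op
print(changed is t)  # False
```

So guarding the `.to()` call with an explicit device comparison doesn't save any work.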
src/peft/tuners/oft/layer.py
Outdated
self.rows = self.rows.to(self.weight.device)
self.cols = self.cols.to(self.weight.device)
I wonder: should we also move rows and cols to the correct device directly when they're initialized?
peft/src/peft/tuners/oft/layer.py
Lines 463 to 473 in 4469af5
self.oft_R[adapter_name] = OFTRotationModule(
    r if not block_share else 1,
    n_elements,
    oft_block_size,
    self.in_features,
    coft=coft,
    eps=eps,
    block_share=block_share,
    use_cayley_neumann=use_cayley_neumann,
    num_cayley_neumann_terms=num_cayley_neumann_terms,
)
As these tensors are quite small, I think the memory impact is negligible, but we avoid moving them during each forward call. I would still keep this code here to be safe (as it's a no-op if they're already on the right device).
Yes, we could also do this, but when it is initialized I think it is on CPU? (I checked the device of the base layer weight; it is on CPU.) So how should I know the correct device? Best,
I think at this point, the weights of the base model should generally already be on the accelerator. E.g. when I run these tests with a GPU:
pytest tests/test_custom_models.py -k "test_forward_float16 and oft and not boft"
and I add this check before OFTRotationModule is assigned:

base_layer_device = self.get_base_layer().weight.device
assert base_layer_device.type == "cuda"

it passes. So we can add the device as an input argument to OFTRotationModule and then ensure that it's moved to the right device. WDYT?
@BenjaminBossan I just pushed a newer version. Best
Thanks for making these changes. There are still some small issues, please check my comments.
In addition, let's add a test for the new warning. It can be added to test_initialization.py. Here is a proposal:
import json

from peft import OFTConfig

[..]

class TestOft:
    torch_device = infer_device()

    def get_model(self, bias=True):
        class MyModule(nn.Module):
            def __init__(self):
                super().__init__()
                self.lin = nn.Linear(32, 32)

        return MyModule().eval().to(self.torch_device)

    @pytest.mark.parametrize("peft_version", ["0.17.0", "0.18.0", None])
    def test_load_outdated_oft_checkpoint_warns(self, peft_version, tmp_path, recwarn):
        # In PEFT v0.18.0, there was a small change in the OFT implementation with Cayley-Neumann enabled. As the
        # outputs change slightly, users need to be warned about it if the checkpoint stems from a PEFT version below
        # 0.18.0. When the 'peft_version' key is not in the config, it means that the version is below 0.18.0.
        config = OFTConfig(target_modules=["lin"], use_cayley_neumann=True)  # only relevant when using Cayley-Neumann
        model = get_peft_model(self.get_model(), config)
        model.save_pretrained(tmp_path)
        del model

        # overwrite the peft_version
        with open(tmp_path / "adapter_config.json") as f:
            config_json = json.load(f)
        if peft_version is None:
            del config_json["peft_version"]
        else:
            config_json["peft_version"] = peft_version
        with open(tmp_path / "adapter_config.json", "w") as f:
            json.dump(config_json, f)

        msg = "TODO"  # <= replace with final warning message
        PeftModel.from_pretrained(self.get_model(), tmp_path)
        warn_messages = [str(w.message) for w in recwarn.list]
        if peft_version == "0.18.0":
            assert not any(w.startswith(msg) for w in warn_messages)
        else:
            assert any(w.startswith(msg) for w in warn_messages)
In theory, we also need to add a check in the other direction: A user trains an OFT checkpoint with version 0.18.0 (i.e. containing the change from this PR) and then tries to load this checkpoint in 0.17.0. I think this scenario is rare enough that we can ignore it, but LMK if you think it should be added.
src/peft/tuners/oft/config.py
Outdated
min_version = packaging.version.Version("0.18.0")
# note: config.peft_version was added in 0.18.0, so if it's missing, it means we're below min version
if (peft_version == "unknown") or (parsed_version < min_version):
    msg = "warning message that explains what is happening"
Well, this message was just a placeholder :-D Could you please add a short explanation here?
Sure:)
src/peft/tuners/oft/layer.py
Outdated
self.rows.to(device)
self.cols.to(device)
As rows and cols are tensors, calling .to() is not an in-place operation; you have to re-assign the values.
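A minimal illustration of the out-of-place behavior (using a dtype conversion so it runs on CPU-only machines):

```python
import torch

rows = torch.arange(4)         # int64 tensor
rows.to(torch.float32)         # the converted result is discarded here
print(rows.dtype)              # torch.int64: rows is unchanged

rows = rows.to(torch.float32)  # re-assign to actually keep the converted tensor
print(rows.dtype)              # torch.float32
```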
Done.
src/peft/tuners/oft/layer.py
Outdated
block_share=block_share,
use_cayley_neumann=use_cayley_neumann,
num_cayley_neumann_terms=num_cayley_neumann_terms,
device=self.weight.device,
Slightly more robust:

- device=self.weight.device,
+ device=self.get_base_layer().weight.device,
Done.
src/peft/tuners/oft/layer.py
Outdated
kernel_size=base_layer.kernel_size,
use_cayley_neumann=use_cayley_neumann,
num_cayley_neumann_terms=num_cayley_neumann_terms,
device=self.weight.device,
Slightly more robust:

- device=self.weight.device,
+ device=self.get_base_layer().weight.device,
Done.
src/peft/tuners/oft/config.py
Outdated
    "with the latest version of OFT. Please retrain your adapter weights with newer PEFT versions. "
    "Alternatively, downgrade PEFT to version 0.13.0 to use the old adapter weights."
)
if kwargs["use_caylay_neumann"]:
Let's make this more backwards compatible; also, there was a typo:

- if kwargs["use_caylay_neumann"]:
+ if kwargs.get("use_cayley_neumann", False):
Done.
src/peft/tuners/oft/config.py
Outdated
peft_version = kwargs.get("peft_version", "unknown")
parsed_version = packaging.version.Version(peft_version)
min_version = packaging.version.Version("0.18.0")
# note: config.peft_version was added in 0.18.0, so if it's missing, it means we're below min version
if (peft_version == "unknown") or (parsed_version < min_version):
- peft_version = kwargs.get("peft_version", "unknown")
- parsed_version = packaging.version.Version(peft_version)
- min_version = packaging.version.Version("0.18.0")
- # note: config.peft_version was added in 0.18.0, so if it's missing, it means we're below min version
- if (peft_version == "unknown") or (parsed_version < min_version):
+ peft_version = kwargs.get("peft_version", "0.0.0")  # if not present, set a low dummy version
+ parsed_version = packaging.version.Version(peft_version)
+ min_version = packaging.version.Version("0.18.0")
+ # note: config.peft_version was added in 0.18.0, so if it's missing, it means we're below min version
+ if parsed_version < min_version:

I just tested and parsing with "unknown" throws an error, so let's put a dummy value here.
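For reference, a small sketch of how the packaging library (which the config code already imports) behaves here:

```python
from packaging.version import InvalidVersion, Version

# a non-PEP-440 string like "unknown" cannot be parsed
try:
    Version("unknown")
except InvalidVersion:
    print("'unknown' is not a valid version string")

# a low dummy version parses fine and compares as expected
assert Version("0.0.0") < Version("0.18.0")
```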
Done.
@BenjaminBossan Just updated the comments. Best,
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@BenjaminBossan What do these error messages mean? Best,
What do these error messages mean?
I forgot that the stored PEFT version may contain a commit hash, like 0.17.2@123abc. My suggested change should address that.
)
if kwargs.get("use_cayley_neumann", False):
    peft_version = kwargs.get("peft_version", "0.0.0")  # if not present, set a low dummy version
    parsed_version = packaging.version.Version(peft_version)
- parsed_version = packaging.version.Version(peft_version)
+ # remove commit hash, if present
+ peft_version = peft_version.partition("@")[0]
+ parsed_version = packaging.version.Version(peft_version)
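A quick sketch of why str.partition works well here: it splits on the first "@" and is a no-op when none is present:

```python
from packaging.version import Version

with_hash = "0.17.2@123abc"
without_hash = "0.18.0"

print(with_hash.partition("@")[0])     # 0.17.2
print(without_hash.partition("@")[0])  # 0.18.0 (unchanged)

# the stripped string parses cleanly
assert Version(with_hash.partition("@")[0]) < Version("0.18.0")
```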
@BenjaminBossan Thanks. Just updated.
There is a small bug with meta devices, check my comment.
src/peft/tuners/oft/layer.py
Outdated
block_share=block_share,
use_cayley_neumann=use_cayley_neumann,
num_cayley_neumann_terms=num_cayley_neumann_terms,
device=self.get_base_layer().weight.device,
Ah, this doesn't work if the device is meta, as we'll get:

self.weight = self.weight.to(device)
E NotImplementedError: Cannot copy out of meta tensor; no data!
Let's do something like:
device = self.get_base_layer().weight.device
[...]
self.oft_R[adapter_name] = OFTRotationModule(
...
device=device if device.type != "meta" else None,
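The failure can be reproduced in isolation; a sketch unrelated to the PEFT internals:

```python
import torch

meta_tensor = torch.empty(2, 2, device="meta")  # shape/dtype only, no storage
try:
    meta_tensor.to("cpu")  # there is no data to copy out of a meta tensor
except NotImplementedError as exc:
    print(exc)

# for modules, to_empty() is the supported escape hatch: it allocates fresh
# (uninitialized) storage on the target device instead of copying
module = torch.nn.Linear(2, 2, device="meta").to_empty(device="cpu")
print(module.weight.device)  # cpu
```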
@BenjaminBossan How to reproduce this with meta?
@BenjaminBossan but then it is not correct right? During initialization it is meta, so the variables val / cols will never get shifted to gpu (this is why the code is currently slow)?
@BenjaminBossan I think I can leave out this self.weight = self.weight.to(device), since it worked correctly before; I just want to make sure that self.cols and self.rows get moved correctly
To reproduce the error, just run the failing tests:
pytest tests/test_custom_models.py::TestPeftCustomModel::test_load_model_low_cpu_mem_usage -k "oft and not boft"
During initialization it is meta, so the variables val / cols will never get shifted to gpu (this is why the code is currently slow)?
So the situation is the following: A user loads a trained PEFT adapter with low_cpu_mem_usage=True, the weights are at first created on meta device, but then when the actual weights are loaded, they are moved to the correct device. This should happen at the end of update_layer, when self._move_adapter_to_device_of_base_layer(adapter_name) is called. I think this method won't work on rows and cols though, as they are just tensors. I think what can be done:
- Make them non-persistent buffers; I believe they would then automatically be moved when oft_R is moved.
- Override _move_adapter_to_device_of_base_layer, call super()._move_adapter_to_device_of_base_layer, then handle rows and cols explicitly.
src/peft/tuners/oft/layer.py
Outdated
kernel_size=base_layer.kernel_size,
use_cayley_neumann=use_cayley_neumann,
num_cayley_neumann_terms=num_cayley_neumann_terms,
device=self.get_base_layer().weight.device,
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Same comment as above
@BenjaminBossan Ok, the newest should be working. I used the non-persistent buffer. Best,
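For context, the non-persistent-buffer approach can be sketched like this (a hypothetical module, not the actual OFTRotationModule):

```python
import torch
import torch.nn as nn

class RotationSketch(nn.Module):
    def __init__(self, n: int):
        super().__init__()
        rows, cols = torch.triu_indices(n, n, offset=1)
        # persistent=False: the buffers follow the module through .to()/.cuda(),
        # but are excluded from the state_dict, so saved checkpoints are unaffected
        self.register_buffer("rows", rows, persistent=False)
        self.register_buffer("cols", cols, persistent=False)

m = RotationSketch(4).to("cpu")  # buffers move together with the module
print(m.rows.device)             # cpu
print("rows" in m.state_dict())  # False: not saved to checkpoints
```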
Thanks for the latest updates, going with buffers is okay for me. I found a few small remaining TODOs; after that, the PR should be good. Please also run make style before committing.
tests/test_initialization.py
Outdated
with open(tmp_path / "adapter_config.json", "w") as f:
    json.dump(config_json, f)

msg = "TODO"  # <= replace with final warning message
Please replace it with the correct error message (start of the message is fine).
src/peft/tuners/oft/config.py
Outdated
min_version = packaging.version.Version("0.18.0")
# note: config.peft_version was added in 0.18.0, so if it's missing, it means we're below min version
if parsed_version < min_version:
    msg = "The cayley-neumann parameterization has been slightly changed to be more numerically stable in PEFT 0.18.0. Please retrain your adapter weights with newer PEFT versions. Alternatively, downgrade PEFT to version 0.17.0 to use the old parameterization."
- msg = "The cayley-neumann parameterization has been slightly changed to be more numerically stable in PEFT 0.18.0. Please retrain your adapter weights with newer PEFT versions. Alternatively, downgrade PEFT to version 0.17.0 to use the old parameterization."
+ msg = (
+     "The cayley-neumann parameterization has been slightly changed to be more numerically stable in "
+     "PEFT 0.18.0. Please retrain your adapter weights with newer PEFT versions. Alternatively, "
+     "downgrade PEFT to version 0.17.0 to use the old parameterization."
+ )
Let's add some line breaks to keep <= 120 chars.
OK, done
tests/test_initialization.py
Outdated
config_json = json.load(f)

if peft_version is None:
    del config_json["peft_version"]
- del config_json["peft_version"]
+ config_json.pop("peft_version", None)
In case the key does not exist.
Done.
@BenjaminBossan Thanks, just updated again. Best.
I checked the failing CI and most failing tests are unrelated (anything with "TypeError: CLIPTextModel.__init__() got an unexpected keyword argument 'offload_state_dict'" can be ignored). However, there are 2 actual failures:
I could reproduce these errors locally. What surprises me is that the model output is quite different compared to the main branch:

# main
>>> torch.testing.assert_close(outputs_before, outputs_disabled)
*** AssertionError: Tensor-likes are not close!
Mismatched elements: 2 / 4 (50.0%)
Greatest absolute difference: 0.002262115478515625 at index (1, 0) (up to 1e-05 allowed)
Greatest relative difference: 5.868273365194909e-05 at index (0, 0) (up to 1.3e-06 allowed)

# this branch
>>> torch.testing.assert_close(outputs_before, outputs_disabled)
*** AssertionError: Tensor-likes are not close!
Mismatched elements: 2 / 4 (50.0%)
Greatest absolute difference: 0.047229766845703125 at index (1, 0) (up to 1e-05 allowed)
Greatest relative difference: 0.0011932472698390484 at index (0, 0) (up to 1.3e-06 allowed)

So the greatest absolute and relative errors increased by a factor of 20. Of course, we could just increase the tolerance further and the tests would pass, but I wonder if something strange could be going on with the new implementation.
@BenjaminBossan Thanks for pointing this out. I will look into it. Best,
@zqiu24 Did you make any progress? If not, I think we can still merge with higher tolerance. It could just be that this example is degenerate but there is no issue in general. It would be good to have this PR merged before the next release (current plan: next week).
Dear All,
I have noticed some minor issues in the forward pass of OFT; the most important is that the indexing happens on CPU (instead of on the device), resulting in slower training.
I have made some minor modifications and tested on 8B fine-tuning; it is approximately 30% faster.
Hope for a quick review :) @BenjaminBossan
Thank you so much.
Best,