
Rotation-based equalization #1061

Open

Giuseppe5 wants to merge 1 commit into base: dev from eq_extension
Conversation

@Giuseppe5 (Collaborator) commented Oct 17, 2024

Reason for this PR

Implement rotation-based equalization for weights and activations. (A toy sketch of the invariance this relies on follows the highlights below.)

Highlights:

  • Graph-based
  • Layerwise
  • Merge the affine parameters of RMSNorm to make it compatible with rotation equalization
  • Extend the LLM entrypoint to support rotation equalization
  • Tests
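
For intuition, the invariance rotation equalization relies on can be shown in a few lines: folding an orthogonal matrix R into a source layer and R^T into the following sink layer leaves the floating-point output unchanged, since R^T R = I. This is a minimal sketch with toy shapes, not the PR's implementation:

import torch

torch.manual_seed(0)
src = torch.nn.Linear(8, 8, bias=False)
sink = torch.nn.Linear(8, 8, bias=False)
x = torch.randn(4, 8)
ref = sink(src(x))

# Random orthogonal rotation via QR decomposition.
rot, _ = torch.linalg.qr(torch.randn(8, 8))

# Fold R into the source output and R^T into the sink input:
# (W_sink R^T)(R W_src) == W_sink W_src.
src.weight.data = rot @ src.weight.data
sink.weight.data = sink.weight.data @ rot.t()

assert torch.allclose(ref, sink(src(x)), atol=1e-5)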

Changes Made in this PR

The graph-based region-detection algorithm has been extended and generalized so that the set of supported layers depends on the type of equalization being applied.
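
Conceptually, this can be thought of as a lookup from equalization type to the layer classes a region may contain; the names below are illustrative only, not the PR's actual API (the real logic lives in src/brevitas/graph/equalize.py):

import torch.nn as nn

# Hypothetical registry mapping equalization type to supported layers.
SUPPORTED_LAYERS = {
    'scale': (nn.Linear, nn.Conv1d, nn.Conv2d, nn.LayerNorm),
    'rotation': (nn.Linear, nn.Embedding),
}

def is_supported(module: nn.Module, equalization_type: str) -> bool:
    # A module can join a region only if its class is allowed for the
    # requested equalization type.
    return isinstance(module, SUPPORTED_LAYERS[equalization_type])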

Important considerations:

  • This PR depends on an external library (fast_hadamard_transform) for its fast implementation (I believe it should be made optional)
  • This PR reuses code from other repositories (credited, and with the appropriate license)

Testing Summary

Added one test that checks:

  • Affine RMSNorm parameter merging (a toy version is sketched after this list)
  • Rotation equalization being applied
  • Mathematical invariance of the floating-point output after rotation
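
As a toy version of the first check, merging the RMSNorm affine scale into the following linear layer and verifying the output is unchanged looks roughly like this (requires torch >= 2.4 for torch.nn.RMSNorm; this is not the PR's actual test):

import torch

torch.manual_seed(0)
norm = torch.nn.RMSNorm(16)  # torch >= 2.4
linear = torch.nn.Linear(16, 16)
norm.weight.data = torch.rand(16) + 0.5  # non-trivial affine scale
x = torch.randn(2, 16)
ref = linear(norm(x))

# Fold gamma into the linear input dimension (W @ diag(gamma)),
# then reset gamma to ones.
linear.weight.data *= norm.weight.data.unsqueeze(0)
norm.weight.data = torch.ones_like(norm.weight.data)

assert torch.allclose(ref, linear(norm(x)), atol=1e-5)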

Risk Highlight

  • This PR includes code from another work (please detail).
  • This PR contains API-breaking changes.
  • This PR depends on work in another PR (please provide links/details).
  • This PR introduces new dependencies (please detail).
  • There are coverage gaps not covered by tests.
  • Documentation updates required in subsequent PR.

Checklist

  • Code comments added to any hard-to-understand areas, if applicable.
  • Changes generate no new warnings.
  • Updated any relevant tests, if applicable.
  • No conflicts with destination dev branch.
  • I reviewed my own code changes.
  • Initial CI/CD passing.
  • 1+ reviews given, and any review issues addressed and approved.
  • Post-review full CI/CD passing.

@Giuseppe5 Giuseppe5 marked this pull request as ready for review October 20, 2024 14:59
@Giuseppe5 (Collaborator, Author)

Current limitations:

  • No support for scaled_dot_product_attention in an FX graph
  • No support for MultiheadAttention
  • Missing conversion from LayerNorm to RMSNorm to ensure rotation compatibility (the standard centering trick is sketched below)
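
On the LayerNorm point: the usual trick in rotation pipelines (e.g. QuaRot-style code; not necessarily what will land here) is that LayerNorm acts exactly like RMSNorm once its input is zero-mean, so the mean subtraction can be folded into the preceding linear layer by centering its weight columns; a sketch:

import torch

def center_linear_output(linear: torch.nn.Linear) -> None:
    # Subtracting the per-column mean makes every output of this layer
    # zero-mean across features, so a following LayerNorm's mean
    # subtraction becomes a no-op and it can be swapped for RMSNorm.
    # Caveat: every layer writing into the same residual stream (and the
    # LayerNorm bias, if any) must be handled the same way.
    with torch.no_grad():
        linear.weight -= linear.weight.mean(dim=0, keepdim=True)
        if linear.bias is not None:
            linear.bias -= linear.bias.mean()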

@Giuseppe5 force-pushed the eq_extension branch 2 times, most recently from ecf7b4f to cb691c4, on October 29, 2024 21:34
@nickfraser (Collaborator) left a comment

Could you add some basic tests for the LLM entry-point as well? Both for RMSNorm & rotations?

try:
    import fast_hadamard_transform
except ImportError:
    # Optional dependency: fall back to a slower path when missing.
    fast_hadamard_transform = None
Collaborator

Maybe print a warning if fast_hadamard_transform can't be loaded? Should we also add an extra, brevitas[fast_hadamard] or brevitas[fast_equalize], whichever makes the most sense?
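
Something along these lines would cover the warning (the extras name here just echoes the suggestion above; the final name is whatever gets picked):

import warnings

try:
    import fast_hadamard_transform
except ImportError:
    fast_hadamard_transform = None
    warnings.warn(
        'fast_hadamard_transform is not available; rotation equalization '
        'will fall back to a slower path. Install it via e.g. '
        'pip install brevitas[fast_hadamard].')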

src/brevitas/graph/equalize.py (resolved review thread)
@property
def is_valid(self):
    # A region is valid only when the largest source shape matches the
    # largest sink shape, i.e. the two sides can be equalized together.
    return self.max_shape_srcs == self.max_shape_sinks

Collaborator

Comments explaining what these do?

# Exit if source and sink have different sizes
if max_shape_srcs != max_shape_sinks and len(region.srcs) > 0:
    return _no_equalize()

Collaborator

My intuition is that the purpose of this code is to ensure that all sources and sinks have compatible shapes; I'm not quite following why this isn't needed anymore...

src/brevitas/graph/equalize.py (3 resolved review threads)
    return torch.matmul(tensor, ort)


def random_orthogonal_matrix(size):
Collaborator

Did this come from somewhere?

Collaborator Author

Yes
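
For reference, the common construction (seen in QuaRot/SpinQuant-derived code, which may be the credited source here) draws a Gaussian matrix and orthogonalizes it with QR, fixing signs so the draw is uniform over orthogonal matrices; a sketch, not necessarily the exact code referenced:

import torch

def random_orthogonal_matrix(size: int) -> torch.Tensor:
    # QR of a Gaussian matrix yields an orthogonal Q; scaling each column
    # by the sign of R's diagonal makes the distribution Haar-uniform.
    a = torch.randn(size, size, dtype=torch.float64)
    q, r = torch.linalg.qr(a)
    q *= torch.sign(torch.diag(r)).unsqueeze(0)
    return q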

set_of_layers = set(type(x) for x in model.modules() if 'RMS' in type(x).__name__)
rewriters = [
    ModuleToModuleByClass(
        rms_cls, torch.nn.RMSNorm, normalized_shape=config.hidden_size, eps=config.rms_norm_eps)
    for rms_cls in set_of_layers]
Collaborator

Is a torch version guard required?

Collaborator Author

Yes
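
torch.nn.RMSNorm only exists from PyTorch 2.4, so the guard could be as simple as this sketch:

from packaging import version

import torch

if version.parse(torch.__version__).release < (2, 4):
    raise RuntimeError(
        'Merging affine parameters into torch.nn.RMSNorm requires '
        'PyTorch >= 2.4')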

@nickfraser (Collaborator)

Other than the Hadamard matrix instantiation issue, this LGTM (once the above changes are applied).

@Giuseppe5 added the "next release" label (PRs which should be merged for the next release) on Nov 7, 2024
if require_fx:
    model = get_fx(model)
    with torch.no_grad():
        model, guards = torch._dynamo.export(model)(**calibration_loader[0])
Collaborator Author

This has an impact on how dataloaders are created: we no longer need extra kwargs for attention_mask.
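
That is, since torch._dynamo.export traces the model with exactly the inputs it is given, a calibration sample can be just the input ids. Illustrative only: `model` is assumed to be a causal LM loaded elsewhere, and the vocab/sequence sizes are made up:

import torch

# Minimal calibration sample: no attention_mask kwarg required.
calibration_loader = [{'input_ids': torch.randint(0, 32000, (1, 2048))}]

with torch.no_grad():
    graph_model, guards = torch._dynamo.export(model)(**calibration_loader[0])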
