Hotfix: fix .diagonal() calls for keops kernel matrices. #2590

Open · wants to merge 2 commits into main

Conversation

@gpleiss (Member) commented Sep 20, 2024

[Fixes #2589]

Note to @jacobrgardner: as part of the 2.0 refactor where we remove LazyEvaluatedKernelTensor, I want to add a method _symmetric_diag to all kernels that replaces the diag=True mode of the current kernels.

When the kernel matrix isn't symmetric and we call diagonal (I'm not sure this ever happens in gpytorch, but technically we support it), the best we can do is what we currently do in KernelLinearOperator.diagonal.
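
For context, a minimal sketch of the two code paths in question, using only the public kernel API (the data, kernel choice, and tolerance are made up for illustration): the diag=True mode that _symmetric_diag would replace, and the .diagonal() call on the lazy kernel matrix that this PR touches.

import torch
from gpytorch.kernels import RBFKernel

x = torch.randn(10, 3)
kern = RBFKernel()

# diag=True mode: the kernel evaluates only the matched pairs k(x_i, x_i)
# and returns an n-vector directly, without forming the n x n matrix.
d_fast = kern(x, x, diag=True)

# .diagonal() on the lazy kernel matrix: the LinearOperator path that this
# PR hotfixes for keops-backed kernels.
d_lazy = kern(x, x).diagonal(dim1=-2, dim2=-1)

assert torch.allclose(d_fast, d_lazy, atol=1e-5)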

So this PR is more of a temporary hotfix that shouldn't cause any regressions in behavior, and I'll make sure diag is fast in 2.0 with #2342.

# Test diagonal
d1 = kern1(x1, x1).diagonal(dim1=-1, dim2=-2)
d2 = kern2(x1, x1).diagonal(dim1=-1, dim2=-2)
self.assertLess(torch.norm(d1 - d2), 1e-4)
Collaborator

This only tests that the two diagonal calls return the same values, not that they actually compute the diagonal. You should add something like this here.

Suggested change
- self.assertLess(torch.norm(d1 - d2), 1e-4)
+ self.assertLess(torch.norm(d1 - d2), 1e-4)
+ self.assertTrue(torch.equal(k1.diag(), d1))
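
For reference, a standalone sketch of how the strengthened test could look, checking the lazy .diagonal() results against the explicit diag=True evaluation. The kernel choices, sizes, and tolerances here are assumptions for illustration, not the PR's actual test.

import torch
import unittest
from gpytorch.kernels import RBFKernel
from gpytorch.kernels.keops import RBFKernel as KeOpsRBFKernel  # requires pykeops

class TestKeOpsDiagonal(unittest.TestCase):
    def test_diagonal(self):
        x1 = torch.randn(20, 3)
        kern1 = KeOpsRBFKernel()  # keops-backed kernel
        kern2 = RBFKernel()       # reference (non-keops) kernel
        d1 = kern1(x1, x1).diagonal(dim1=-1, dim2=-2)
        d2 = kern2(x1, x1).diagonal(dim1=-1, dim2=-2)
        # Existing check: the two code paths agree with each other.
        self.assertLess(torch.norm(d1 - d2), 1e-4)
        # Strengthened check: the values really are the diagonal entries,
        # here taken from the explicit diag=True evaluation.
        self.assertTrue(torch.allclose(d1, kern2(x1, x1, diag=True), atol=1e-4))

if __name__ == "__main__":
    unittest.main()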

Successfully merging this pull request may close these issues.

[Bug] _diagonal() fails when covariance is computed using keops kernels