Using Gram matrices to speed up opnorm
#1185
Comments
this would only be true for `p = 2` (the spectral norm).
Yes. The default for `opnorm` is `p = 2`.
One potential concern could be accuracy. If you insert additional steps, then each comes with its own condition number and adds to the overall error accumulation. What comes to mind is solving tall linear systems via the normal equations, where forming $A^\top A$ squares the condition number.
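To make the squaring effect concrete, here is a minimal sketch; the matrix size and singular-value scaling below are arbitrary choices for illustration, not taken from this discussion:

```julia
using LinearAlgebra

# tall matrix with singular values spread over roughly six orders of magnitude
A = randn(1000, 50) * Diagonal(exp10.(range(-6, stop=0, length=50)))

cond(A)       # on the order of 1e6
cond(A' * A)  # ≈ cond(A)^2, on the order of 1e12
```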
I think the extra error is less of a concern here. The use of the Gram matrix barely affects the largest singular value:

```julia
julia> Q, _ = qr(randn(100,100));

julia> B = Q*Diagonal(exp10.(range(-200, stop=0, length = size(Q, 1))))*Q';

julia> sqrt(opnorm(B'B))
0.9999999999999997
```

I've tried a few things and couldn't make this version break down. In contrast, this approach would be fatal for `cond`:

```julia
julia> sqrt(cond(B'B))
1.259066867835988e10
```
If you want to run this kind of empirical test, I would use matrices with a controlled condition number.
@stevengj Thanks for the insight! I will keep that in mind! What's a good library to get ill-conditioned matrices? The only one I know is this: https://github.com/JuliaLinearAlgebra/MatrixDepot.jl
I've used this code in the past:

```julia
using LinearAlgebra

# generate a random (Haar-uniform) m×n real orthogonal-column matrix
# using algorithm adapted from https://arxiv.org/abs/math-ph/0609050
function randQ(m::Integer, n::Integer)
    m ≥ n || throw(ArgumentError("matrix must be tall"))
    QR = qr(randn(m,n))
    return QR.Q * Diagonal(sign.(diag(QR.R)))
end

# random m×n matrix with condition number κ and power-law singular values
function randcond(m::Integer, n::Integer, κ::Real)
    κ ≥ 1 || throw(ArgumentError("κ=$κ should be ≥ 1"))
    r = min(m,n)
    σ = exp.(range(0, -log(κ), length=r))
    U = randQ(m,r)
    V = randQ(n,r)
    return U * Diagonal(σ) * V'
end
```
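As a usage sketch, `randcond` could be combined with the Gram-matrix shortcut proposed in this issue to see how it behaves as the condition number grows; the sizes and κ below are arbitrary choices:

```julia
using LinearAlgebra

A = randcond(2000, 100, 1e12)       # tall matrix, condition number 1e12

opnorm(A)                           # ≈ 1 by construction (largest singular value is 1)
sqrt(opnorm(Symmetric(A' * A)))     # Gram-matrix shortcut; should closely agree with the value above
```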
In Julia version 1.11.2, it appears as though I can speed up the calculation of `opnorm` by taking advantage of the identity $\left\lVert A \right\rVert_{op} = \sqrt{\left\lVert A^\top A \right\rVert_{op}}$ (see https://en.wikipedia.org/wiki/Operator_norm#Operators_on_a_Hilbert_space).
In the following test, we can speed up the calculation of the operator norm for tall matrices $A$ by using $\sqrt{\left\lVert A^\top A \right\rVert_{op}}$ (and presumably for wide matrices $A$ by using $\sqrt{\left\lVert A A^\top \right\rVert_{op}}$). Note that wrapping the Gram matrix with `Symmetric` also reduces the memory needed. This trick also works for square matrices, although possibly with higher memory requirements.
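A minimal sketch of this kind of comparison, assuming BenchmarkTools.jl is available; the matrix size is an arbitrary choice:

```julia
using LinearAlgebra, BenchmarkTools

A = randn(10_000, 100);   # tall matrix

@btime opnorm($A)                          # baseline: largest singular value of A directly
@btime sqrt(opnorm(Symmetric($A' * $A)))   # Gram-matrix shortcut on the 100×100 matrix A'A

# the two results should agree closely
opnorm(A) ≈ sqrt(opnorm(Symmetric(A' * A)))
```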
More testing might be needed to see whether this performance boost still holds for other array types, such as sparse matrices, and for other element types, such as complex. But it may be worth redefining `opnorm` if the performance gain can be achieved reliably.
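If `opnorm` were ever specialized along these lines, the hypothetical helper below sketches one possible shape for the dispatch; the name `gram_opnorm` and the 2× aspect-ratio cutoff are arbitrary choices for illustration, and it is restricted to real element types (a complex version would want `Hermitian` instead of `Symmetric`):

```julia
using LinearAlgebra

# Hypothetical sketch: use the smaller Gram matrix when A is strongly rectangular,
# otherwise fall back to the ordinary computation. Only valid for the 2-norm.
function gram_opnorm(A::AbstractMatrix{<:Real})
    m, n = size(A)
    if m ≥ 2n       # tall: the n×n Gram matrix A'A is much smaller than A
        return sqrt(opnorm(Symmetric(A' * A)))
    elseif n ≥ 2m   # wide: use the m×m Gram matrix AA' instead
        return sqrt(opnorm(Symmetric(A * A')))
    else
        return opnorm(A)
    end
end
```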