eigvecs(A::Hermitian, eigvals) method? #1248

Open
stevengj opened this issue Mar 27, 2025 · 6 comments
Labels
enhancement New feature or request

Comments

@stevengj
Member

stevengj commented Mar 27, 2025

Currently, we implement eigvecs(A, eigvals) only for real SymTridiagonal matrices, but it would be easy to extend this to Hermitian matrices via a Hessenberg factorization, as explained in this Discourse post:

using LinearAlgebra
import LinearAlgebra: eigvecs, RealHermSymComplexHerm

function eigvecs(A::RealHermSymComplexHerm, eigvals::AbstractVector{<:Real})
    F = hessenberg(A) # transform to SymTridiagonal form
    X = eigvecs(F.H, eigvals)
    return F.Q * X    # transform eigvecs of F.H back to eigvecs of A
end

A quick benchmark with LinearAlgebra.BLAS.set_num_threads(1) shows that it is significantly faster than eigvecs(A) at computing a single eigenvector, and slightly faster than eigen(A, k:k) if you already have the eigenvalues:

julia> A = hermitianpart!(randn(1000,1000));

julia> λ = @btime eigvals($A);
  69.372 ms (21 allocations: 7.99 MiB)

julia> Q = @btime eigvecs($A);
  212.195 ms (25 allocations: 23.25 MiB)

julia> Q[:,17] ≈ @btime eigvecs($A, [λ[17]])
  54.133 ms (73 allocations: 7.99 MiB)
true

julia> @btime eigen($A, 17:17); # computing 17th eigvec and eigval by eigen
  58.106 ms (24 allocations: 15.62 MiB)

Since the implementation above is already working, it should be an easy PR for someone to add it, along with a test and updated documentation.
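Such a test might be sketched as follows (a sketch only, assuming the definition from the issue body; note that RealHermSymComplexHerm is an internal union, so final PR code may differ):

```julia
using LinearAlgebra
import LinearAlgebra: eigvecs, RealHermSymComplexHerm

# Proposed method from the issue body: eigvecs for Hermitian matrices
# via the Hessenberg (tridiagonal) factorization.
function eigvecs(A::RealHermSymComplexHerm, eigvals::AbstractVector{<:Real})
    F = hessenberg(A) # transform to SymTridiagonal form
    X = eigvecs(F.H, eigvals)
    return F.Q * X    # transform eigvecs of F.H back to eigvecs of A
end

# Sketch of a correctness test: each returned column should be a
# unit-norm eigenvector for the requested eigenvalue.
A = hermitianpart!(randn(ComplexF64, 50, 50))
λ = eigvals(A)
X = eigvecs(A, λ[2:3])
@assert size(X) == (50, 2)
@assert A * X[:, 1] ≈ λ[2] * X[:, 1]
@assert A * X[:, 2] ≈ λ[3] * X[:, 2]
@assert norm(X[:, 1]) ≈ 1
```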

@stevengj stevengj added enhancement New feature or request good first issue Good for newcomers labels Mar 27, 2025
@stevengj
Member Author

Note, however, that if you are calling eigvals separately anyway, it's even faster to perform the Hessenberg factorization yourself and use it for both eigvals and eigvecs, since this factorization is what takes most of the time in eigen:

julia> F = @btime hessenberg(A);
  54.186 ms (17 allocations: 7.90 MiB)

julia> λ = @btime eigvals($(F.H));
  13.623 ms (9 allocations: 31.45 KiB)

julia> Q[:,17] ≈ @btime $(F.Q) * eigvecs($(F.H), [λ[17]])
  379.409 μs (57 allocations: 95.38 KiB)
true
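The pattern above can be packaged into a small helper, sketched below. The name selected_eigpairs is hypothetical, not an existing LinearAlgebra API: factor once, compute all eigenvalues cheaply from the tridiagonal, and back-transform only the eigenvectors you need.

```julia
using LinearAlgebra

# Sketch: reuse one Hessenberg factorization for both eigenvalues and a
# few selected eigenvectors. `selected_eigpairs` is a hypothetical name.
function selected_eigpairs(A::Union{Symmetric,Hermitian}, ks)
    F = hessenberg(A)              # O(n^3), done once; F.H is SymTridiagonal
    λ = eigvals(F.H)               # cheap once tridiagonal (sorted ascending)
    X = F.Q * eigvecs(F.H, λ[ks])  # back-transform tridiagonal eigenvectors
    return λ[ks], X
end

A = Symmetric(randn(100, 100))
vals, X = selected_eigpairs(A, [1, 100])  # the two extreme eigenpairs
@assert A * X[:, 1] ≈ vals[1] * X[:, 1]
@assert A * X[:, 2] ≈ vals[2] * X[:, 2]
```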

@WalterMadelim

For my application (that Discourse post), the ideal API style would be

val, vec = eigenmin(A) # return the smallest eigenvalue and an associated eigenvector

which resembles the existing API

vals, vecs = eigen(A)
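Such an eigenmin could be sketched on top of the existing partial-spectrum method eigen(A, 1:1) (eigenmin is a hypothetical function name here, not an existing LinearAlgebra API):

```julia
using LinearAlgebra

# Hypothetical `eigenmin`: smallest eigenvalue of a Symmetric/Hermitian
# matrix together with an associated unit-norm eigenvector, built on the
# existing partial-spectrum method eigen(A, 1:1).
function eigenmin(A::Union{Symmetric,Hermitian})
    E = eigen(A, 1:1)  # computes only the smallest eigenpair
    return E.values[1], E.vectors[:, 1]
end

A = Symmetric(randn(6, 6))
val, vec = eigenmin(A)
@assert val ≈ minimum(eigvals(A))        # agrees with the full spectrum
@assert norm(A * vec - val * vec) ≤ 1e-8 # small eigenpair residual
```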

@WalterMadelim

Things are a bit unexpected for me.
https://github.com/JuliaLang/LinearAlgebra.jl/blob/1ce842652c07b33289046236b09bc59bad43dbeb/src/eigen.jl#L431C1-L438C4

function eigmin(A::Union{Number, AbstractMatrix};
                permute::Bool=true, scale::Bool=true)
    v = eigvals(A, permute = permute, scale = scale)
    if eltype(v)<:Complex
        throw(DomainError(A, "`A` cannot have complex eigenvalues."))
    end
    minimum(v)
end

This seems to be LinearAlgebra.jl's implementation of the function.
Shouldn't there be a more specialized algorithm behind eigmin so that it is more efficient?
Otherwise this API is superfluous: users can call minimum themselves.

@WalterMadelim

Because of the current lack of such an API, I could only do

import LinearAlgebra
function trivial_my_eigmin(A::LinearAlgebra.Symmetric)
    vals, vecs = LinearAlgebra.eigen(A)
    valmin, colmin = findmin(vals)
    vecmin = vecs[:, colmin]
    return valmin, vecmin
end
# test code begins
A = LinearAlgebra.SymTridiagonal([1.; 2.; 1.], [2.; 3.])
A = LinearAlgebra.Symmetric(Matrix(A))
valmin, vecmin = trivial_my_eigmin(A) # 🔴 an efficient substitute API is desired here
# Since A is not PSD, the following holds theoretically (and practically up to tolerance)
@assert transpose(vecmin) * A * vecmin ≈ valmin < 0 "maybe off due to tolerance"

If a substitute API is added, please tell me (I guess I will be notified by replies here). Thank you.

@stevengj
Member Author

stevengj commented Mar 28, 2025

For the minimum eigenvalue and eigenvector of a Hermitian matrix, just use eigen(A, 1:1).

Things are a bit unexpected for me.

That method is for general matrices, where eigmin doesn't have a good algorithm. A specialized algorithm is used for Hermitian matrices, and also for eigen(A, 1:1) (which only works for Hermitian matrices).
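Concretely, for the matrix from the earlier comment, eigen(A, 1:1) yields the smallest eigenpair directly, with no full eigendecomposition:

```julia
using LinearAlgebra

A = Symmetric(Matrix(SymTridiagonal([1.0, 2.0, 1.0], [2.0, 3.0])))
E = eigen(A, 1:1)  # smallest eigenvalue and eigenvector only
valmin, vecmin = E.values[1], E.vectors[:, 1]
# A is indefinite, so the minimum Rayleigh quotient is negative:
@assert vecmin' * A * vecmin ≈ valmin < 0
```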

@stevengj stevengj removed the good first issue Good for newcomers label Mar 28, 2025
@WalterMadelim

WalterMadelim commented Mar 29, 2025

Considering what you suggest, it seems that eigmin is indeed superfluous? We can just use eigen(A, 1:1), or use eigen(A) in conjunction with minimum as a fallback, which is no worse than eigmin. (Right?)

jishnub added a commit that referenced this issue Apr 30, 2025
This implements the suggestion in
#1248.

After this,
```julia
julia> S = Symmetric(rand(3,3))
3×3 Symmetric{Float64, Matrix{Float64}}:
 0.376244  0.895193  0.332219
 0.895193  0.563134  0.148036
 0.332219  0.148036  0.711689

julia> vals = eigvals(S)
3-element Vector{Float64}:
 -0.45018177966363415
  0.5911683834592292
  1.5100812026842658

julia> eigvecs(S, vals[1:2])
3×2 Matrix{Float64}:
  0.752343  -0.16258
 -0.645224  -0.37737
 -0.132912   0.911679
```