eigvecs(A::Hermitian, eigvals) method? #1248
Note, however, that if you are simply calling:

```julia
julia> F = @btime hessenberg(A);
  54.186 ms (17 allocations: 7.90 MiB)

julia> λ = @btime eigvals($(F.H));
  13.623 ms (9 allocations: 31.45 KiB)

julia> Q[:,17] ≈ @btime $(F.Q) * eigvecs($(F.H), [λ[17]])
  379.409 μs (57 allocations: 95.38 KiB)
true
```
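The matrix `A` and the comparison matrix `Q` in the benchmark above are not shown in this thread. One plausible setup under which timings of this scale could be reproduced is sketched below; the size `n`, the test matrix, and `Q = eigvecs(A)` are my assumptions:

```julia
using LinearAlgebra, BenchmarkTools    # @btime comes from BenchmarkTools

LinearAlgebra.BLAS.set_num_threads(1)  # single-threaded, as in the issue text

n = 1000                     # assumed size, not stated in the thread
B = randn(n, n)
A = Symmetric(B + B')        # assumed real symmetric test matrix
Q = eigvecs(A)               # full eigenvector matrix, used only for the ≈ check
```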
For my application (that post on the Julia Discourse), the ideal API style is `val, vec = eigenmin(A)` (return the smallest eigenvalue and an associated eigenvector), which resembles the existing API `vals, vecs = eigen(A)`.
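A minimal sketch of what such an API could look like for Hermitian/real-symmetric matrices, built on the existing `eigen(A, 1:1)` method that computes only part of the spectrum; the name `eigenmin` is hypothetical and not part of LinearAlgebra.jl:

```julia
using LinearAlgebra

# Hypothetical helper: smallest eigenvalue and an associated eigenvector.
# For Hermitian/real-symmetric matrices, eigen(A, 1:1) computes only the
# first (smallest) eigenpair rather than the full decomposition.
function eigenmin(A::Union{Symmetric{<:Real}, Hermitian})
    E = eigen(A, 1:1)
    return E.values[1], E.vectors[:, 1]
end

val, vec = eigenmin(Symmetric([2.0 1.0; 1.0 2.0]))  # (1.0, ±[0.707…, -0.707…])
```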
Things are a bit unexpected for me. This seems to be LinearAlgebra.jl's implementation of the function:

```julia
function eigmin(A::Union{Number, AbstractMatrix};
                permute::Bool=true, scale::Bool=true)
    v = eigvals(A, permute = permute, scale = scale)
    if eltype(v) <: Complex
        throw(DomainError(A, "`A` cannot have complex eigenvalues."))
    end
    minimum(v)
end
```
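In other words, the built-in `eigmin` computes all eigenvalues and returns only the smallest one; no eigenvector is available from it:

```julia
using LinearAlgebra

A = Symmetric([2.0 1.0; 1.0 2.0])
eigmin(A)   # 1.0 — just a scalar; the associated eigenvector is not returned
```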
Because of the lack of such an API currently, I can only do:

```julia
import LinearAlgebra

function trivial_my_eigmin(A::LinearAlgebra.Symmetric)
    vals, vecs = LinearAlgebra.eigen(A)
    valmin, colmin = findmin(vals)
    vecmin = vecs[:, colmin]
    return valmin, vecmin
end

# test code begins
A = LinearAlgebra.SymTridiagonal([1.; 2.; 1.], [2.; 3.])
A = LinearAlgebra.Symmetric(Matrix(A))
valmin, vecmin = trivial_my_eigmin(A) # 🔴 an efficient substitute API is desired here
# Since A is not PSD, the following holds theoretically (practically only up to tolerance)
@assert transpose(vecmin) * A * vecmin == valmin < 0 "maybe due to tolerance"
```

If any substitute is invented, please tell me (I guess I will be notified by replies here), thank you.
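For the tridiagonal matrix in the test code above, the existing `SymTridiagonal` methods `eigvals(T, 1:1)` and `eigvecs(T, eigvals)` already avoid the full decomposition, provided `T` is kept in tridiagonal form rather than wrapped in a dense `Symmetric`; a short sketch:

```julia
using LinearAlgebra

T = SymTridiagonal([1.0, 2.0, 1.0], [2.0, 3.0])
λmin = eigvals(T, 1:1)[1]        # smallest eigenvalue only
vmin = eigvecs(T, [λmin])[:, 1]  # its eigenvector, via inverse iteration
```

The feature request further down is essentially to make the same `eigvecs(A, eigvals)` call work for general `Hermitian` matrices.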
For the minimum eigenvalue and eigenvector of a Hermitian matrix, just use …
That method is for general matrices, where …
Considering what you suggest, it seems that …
This implements the suggestion in #1248. After this,

```julia
julia> S = Symmetric(rand(3,3))
3×3 Symmetric{Float64, Matrix{Float64}}:
 0.376244  0.895193  0.332219
 0.895193  0.563134  0.148036
 0.332219  0.148036  0.711689

julia> vals = eigvals(S)
3-element Vector{Float64}:
 -0.45018177966363415
  0.5911683834592292
  1.5100812026842658

julia> eigvecs(S, vals[1:2])
3×2 Matrix{Float64}:
  0.752343  -0.16258
 -0.645224  -0.37737
 -0.132912   0.911679
```
Currently, we implement `eigvecs(A, eigvals)` only for real `SymTridiagonal` matrices, but it would be easy to extend this to `Hermitian` matrices via a Hessenberg factorization, as explained in this discourse post. A quick benchmark with `LinearAlgebra.BLAS.set_num_threads(1)` shows that it is significantly faster than `eigvecs(A)` at computing a single eigenvector, for example (and slightly faster than using `eigen(A, k:k)` if you already have the eigenvalues).

Since the implementation above is already working, it should be an easy PR for someone to add it with a test and updated documentation.
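A minimal sketch of the proposed extension, following the Hessenberg approach described above (reduce `A` to a real `SymTridiagonal` with `hessenberg`, compute the selected eigenvectors of `F.H` with the existing method, and transform back with `F.Q`); the standalone function name is mine, not the signature an eventual PR would add:

```julia
using LinearAlgebra

# Hypothetical standalone version of eigvecs(A::Hermitian, eigvals):
function eigvecs_hermitian(A::Hermitian, vals::AbstractVector{<:Real})
    F = hessenberg(A)                  # A = Q * H * Q', with H a real SymTridiagonal
    V = eigvecs(F.H, vals)             # selected eigenvectors of the tridiagonal part
    return F.Q * convert(Matrix{eltype(A)}, V)  # transform back to the basis of A
end

B = rand(ComplexF64, 100, 100)
S = Hermitian(B + B')
λ = eigvals(S)                      # same eigenvalues as those of hessenberg(S).H
v = eigvecs_hermitian(S, λ[1:1])    # eigenvector for the smallest eigenvalue
S * v ≈ v * Diagonal(λ[1:1])        # should hold up to floating-point tolerance
```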