
Commit c219ebf

Authored by kaihsin (Kai-Hsin Wu) and weinbe58
implemented expm multiply (#585)
* Refactor Expr: 1) move Hamiltonian/StepHamiltonian and ThreadedMatrix to the Operator module; 2) move all linalg associated with 1) to the Operator module.
* BloqadeExpr changes: 1) move linalg for the elementary structs (PermMatrix, Diagonal, SparseMatrixCSR/CSC) to Lowlevel/backends/; 2) rename module Operator -> Lowlevel to avoid confusion with Yao; 3) rename test/linalg.jl to test/linalg_mul.jl; 4) [FIX] the mul! test case for ParallelMergeCSR was not properly dispatched; 5) add trace() and bind it to LinearAlgebra.tr() for PermMatrix, Diagonal, and SparseMatrixCSR/CSC; 6) add test/linalg_tr.jl for testing tr().
* Get type info on Hamiltonian: 1) add precision_type() for Hamiltonian/StepHamiltonian; 2) add highest_type() for Hamiltonian/StepHamiltonian.
* Add overload for SparseMatrixCSR, and remove a redundant print.
* Move ValHamiltonian from BloqadeKrylov to BloqadeExpr.Lowlevel.
* 1) Add more test cases for ValHamiltonian; 2) move linalg associated with Hamiltonian etc. to linalg.jl.
* Fix non-ASCII '-' causing an error on julia-v1.6.
* Simplify the Lowlevel data structure: 1) remove StepHamiltonian; 2) rename ValHamiltonian -> SumOfLinop.
* temp up
* expm_multiply, first version: 1) add expm_multiply; tested, but this version consumes more memory; 2) add get_optimal_sm() to get the optimal s and m_star. Currently the one-norm of the power p is computed exactly, not estimated (need to implement a one-norm estimator!).
* Add onenormest: 1) add onenormest(), using the block algorithm for 1-norm estimation.
* Update the one-norm in expm_multiply: 1) swap out the exact one-norm for onenormest(); 2) reinstate all the test cases; 3) check testing of expm_multiply.
* Fix bug in tA.
* 1) Modify SumOfLinop to take a tuple of static terms instead of a Hamiltonian; 2) change tests, linalg, size, and types to accommodate the change.
* 1) Modify onenormest and expm_multiply to accept a generic type instead of AbstractMatrix; 2) modify parts to accompany the change of SumOfLinop.
* Refactor Expr: 1) additional field types for SumOfLinop: Hermitian, SkewHermitian, and RegularLinop; 2) adjoint, Base.:*, and add_I for lazy evaluation when the input is Hermitian/SkewHermitian.
* Fix the bug in onenormest: 1) this is a dirty version and needs to be cleaned up; 2) fix the bug in calculating h in onenormest.
* Clean up the code.
* Integrate expm_multiply into the integrators: 1) fix a bug in precision_type/highest_type/eltype for SumOfLinop by also considering the dtype of fvals; 2) add a test case for the expm_multiply backend; 3) add expmv_backend as an additional option, and update the docstring; 4) fix a bug in get_optimal_sm when t is negative.
* Add AD for Hamiltonian: 1) add ForwardDiff to BloqadeExpr; 2) add derivative(H, t) for calculating the Hamiltonian derivative H'(t); 3) add a test case.
* Start development of adaptive CFT.
* Bump patch version.
* Refactor Expr (#572): squashed merge; repeats the "Refactor Expr" through "Simplify the Lowlevel data structure" items above, plus: 1) remove BloqadeKrylov from the toml; 2) remove redundant comments.
* Remove redundant code.
* Fix missing end in runtest.jl after merge.
* Add compat for ForwardDiff.
* Add eltype() for ThreadedMatrix.
* Remove the hard constraint on the fixed-time check for CFET clocks.
* Re-remove tests.

---------
Co-authored-by: Kai-Hsin Wu <[email protected]>
Co-authored-by: Kai-Hsin Wu <[email protected]>
Co-authored-by: Phillip Weinberg <[email protected]>
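The expm_multiply added here follows the idea of Al-Mohy and Higham: exp(tA)B is applied in s stages, each stage via an m-term truncated Taylor series, so only products with A are needed and exp(tA) is never formed. Below is a minimal language-neutral sketch in Python (names hypothetical); `s` and `m` are fixed inputs here, whereas the commit's `get_optimal_sm()` chooses them from one-norm estimates:

```python
import numpy as np

def expm_multiply_taylor(A, B, t=1.0, s=8, m=20):
    """Apply exp(t*A) @ B without forming exp(t*A).

    Simplified sketch of the Al-Mohy--Higham scheme: split exp(t*A)
    into s equal stages and apply each stage with an m-term truncated
    Taylor series, using only matrix-vector products with A.
    """
    F = np.array(B, dtype=float)
    for _ in range(s):
        term = F.copy()
        for k in range(1, m + 1):
            # k-th Taylor term of exp((t/s) * A) applied to the running vector
            term = (t / s) * (A @ term) / k
            F = F + term
    return F
```

For a nilpotent A = [[0, 1], [0, 0]], exp(A) is exactly [[1, 1], [0, 1]], which makes the sketch easy to check by hand.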
1 parent 6c0e573 commit c219ebf


49 files changed: +1820, -52 lines

lib/BloqadeExpr/Project.toml (+2)

@@ -6,6 +6,7 @@ version = "0.1.14"
 Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
 BitBasis = "50ba71b6-fa0f-514d-ae9a-0916efc90dcf"
 BloqadeLattices = "bd27d05e-4ce1-5e79-84dd-c5d7d508bbe4"
+ForwardDiff = "f6369f11-7733-5829-9624-2563aa707210"
 InteractiveUtils = "b77e0a4c-d291-57a0-90e8-8db25a27a240"
 LaTeXStrings = "b964fa9f-0449-5b57-a5c2-d3ea65f4040f"
 LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"

@@ -28,6 +29,7 @@ LaTeXStrings = "1"
 LuxurySparse = "0.7"
 MLStyle = "0.4"
 ParallelMergeCSR = "1.0.2"
+ForwardDiff="0.10"
 Polyester = "0.7.3"
 Preferences = "1.3"
 SparseMatricesCSR = "0.6.7"

lib/BloqadeExpr/src/BloqadeExpr.jl (+11, -2)

@@ -19,7 +19,9 @@ using BloqadeLattices: BoundedLattice, rydberg_interaction_matrix


 include("Lowlevel/Lowlevel.jl")
-using .Lowlevel: Hamiltonian, SumOfLinop, ThreadedMatrix, storage_size, to_matrix, precision_type, highest_type
+
+using .Lowlevel: Hamiltonian, SumOfLinop, ThreadedMatrix, storage_size, to_matrix, precision_type, highest_type, add_I, derivative, RegularLinop, isskewhermitian
+

 export rydberg_h,
     rydberg_h_3,

@@ -47,7 +49,14 @@ export rydberg_h,
     emulate!,
     precision_type,
     highest_type,
-    to_matrix
+
+    to_matrix,
+    add_I,
+    derivative,
+    RegularLinop, # abstype
+    SkewHermitian, # abstype
+    isskewhermitian
+

 include("assert.jl")
 include("space.jl")

lib/BloqadeExpr/src/Lowlevel/Lowlevel.jl (+5)

@@ -9,12 +9,17 @@ using Preferences
 using Adapt
 using LaTeXStrings
 using LinearAlgebra
+using ForwardDiff
+using Base.Threads: nthreads
+

 export Hamiltonian, SumOfLinop
 export ThreadedMatrix
 export set_backend
 export storage_size, to_matrix
 export precision_type, highest_type
+export add_I, isskewhermitian
+
 #export ValH, get_f # convert StepHamiltonian to ValHamiltonian

 include("types.jl")

lib/BloqadeExpr/src/Lowlevel/linalg.jl (+140, -10)

@@ -1,6 +1,6 @@
 #**************************************************************
 # Here, one can find the linalg API for
-#    1. Hamiltonian/SumOfLinop
+#    1. Hamiltonian/SumOfLinop
 #    2. Binding of standard LinearAlgebra functions
 #       to the backend of choice for ThreadedMatrix
 #**************************************************************

@@ -10,32 +10,50 @@
 #--------------------------------
 function LinearAlgebra.mul!(C::AbstractVecOrMat, A::SumOfLinop, B::AbstractVecOrMat)
     fill!(C, zero(eltype(C)))
-    for (f, term) in zip(A.fvals, A.h.ts)
+    for (f, term) in zip(A.fvals, A.ts)
         mul!(C, term, B, f, one(f))
     end
     return C
 end

 ## additionals, maybe we don't need this.
-function Base.:*(a::Number, b::SumOfLinop)
-    return SumOfLinop(b.fvals .* a, b.h)
+
+function Base.:*(a::Number, b::SumOfLinop)
+    return SumOfLinop{RegularLinop}(b.fvals .* a, b.ts)
+end
+
+function Base.:*(a::Complex, b::SumOfLinop{T}) where {T}
+    if real(a) ≈ 0
+        return SumOfLinop{anti_type(T)}(b.fvals .* a, b.ts)
+    elseif imag(a) ≈ 0
+        return SumOfLinop{T}(b.fvals .* a, b.ts)
+    else
+        return SumOfLinop{RegularLinop}(b.fvals .* a, b.ts)
+    end
 end
+
+function Base.:*(a::Real, b::SumOfLinop{T}) where {T}
+    return SumOfLinop{T}(b.fvals .* a, b.ts)
+end
+
 Base.:*(n, m::T) where {T <: ThreadedMatrix} = n * m.matrix

+
+#=
 function Base.:+(a::SumOfLinop, b::SumOfLinop)
-    if !(a === b)
+    if !(a.ts === b.ts)
         error("two SumOfLinop must share the same static terms ")
     end
-    return SumOfLinop(a.fvals + b.fvals, a.h)
+    return SumOfLinop(a.fvals + b.fvals, a.ts)
 end

 function Base.:-(a::SumOfLinop, b::SumOfLinop)
-    if !(a === b)
+    if !(a.ts === b.ts)
         error("two SumOfLinop must share the same static terms ")
     end
-    return SumOfLinop(a.fvals - b.fvals, a.h)
+    return SumOfLinop(a.fvals - b.fvals, a.ts)
 end
-
+=#

@@ -47,6 +65,7 @@ LinearAlgebra.mul!(C, A::ThreadedMatrix, B, α, β) = bmul!(C, A.matrix, B, α, β)

 ##-------------------------------- mul!

+
 ## opnorm()
 # --------------------------------
 function LinearAlgebra.opnorm(h::SumOfLinop, p = 2)

@@ -59,7 +78,7 @@ end
 ## tr()
 # --------------------------------
 function LinearAlgebra.tr(A::SumOfLinop)
-    return sum(zip(A.fvals, A.h.ts)) do (f, t)
+    return sum(zip(A.fvals, A.ts)) do (f, t)
         return f * tr(t)
     end
 end

@@ -77,3 +96,114 @@ end
 LinearAlgebra.tr(A::ThreadedMatrix) = tr(A.matrix)

 ##-------------------------------- tr()
+
+
+## check if is Hermitian
+# --------------------------------
+LinearAlgebra.ishermitian(A::SumOfLinop{<:LinearAlgebra.Hermitian}) = true
+LinearAlgebra.ishermitian(A::SumOfLinop) = false
+
+isskewhermitian(A::SumOfLinop{<:SkewHermitian}) = true
+isskewhermitian(A::SumOfLinop) = false
+
+
+## adjoint()
+function LinearAlgebra.adjoint(A::SumOfLinop{<:LinearAlgebra.Hermitian})
+    return A
+end
+function LinearAlgebra.adjoint(A::SumOfLinop{<:SkewHermitian})
+    return SumOfLinop{SkewHermitian}(A.fvals .* (-1), A.ts)
+end
+function LinearAlgebra.adjoint(A::SumOfLinop{OPTYPE}) where {OPTYPE}
+    return SumOfLinop{OPTYPE}(conj.(A.fvals), map(adjoint, A.ts))
+end
+
+## add a constant identity term to a SumOfLinop
+# [NOTE] this does not check the type consistency of c w.r.t. A.fvals.
+function add_I(A, c::Number)
+    Iop = LinearAlgebra.I(size(A, 1))
+    return A + c * Iop
+end
+
+function add_I(A::SumOfLinop, c::Number)
+    Iop = LinearAlgebra.I(size(A, 1))
+    if nthreads() > 1
+        return SumOfLinop{RegularLinop}((A.fvals..., c), (A.ts..., ThreadedMatrix(Iop)))
+    else
+        return SumOfLinop{RegularLinop}((A.fvals..., c), (A.ts..., Iop))
+    end
+end
+
+function add_I(A::SumOfLinop{<:LinearAlgebra.Hermitian}, c::Real)
+    # check backend:
+    Iop = LinearAlgebra.I(size(A, 1))
+    if nthreads() > 1
+        return SumOfLinop{LinearAlgebra.Hermitian}((A.fvals..., c), (A.ts..., ThreadedMatrix(Iop)))
+    else
+        return SumOfLinop{LinearAlgebra.Hermitian}((A.fvals..., c), (A.ts..., Iop))
+    end
+end
+function add_I(A::SumOfLinop{<:LinearAlgebra.Hermitian}, c::Complex)
+    # check backend:
+    Iop = LinearAlgebra.I(size(A, 1))
+    OPTYPE = RegularLinop
+    if imag(c) ≈ 0
+        OPTYPE = LinearAlgebra.Hermitian
+    end
+    if nthreads() > 1
+        return SumOfLinop{OPTYPE}((A.fvals..., c), (A.ts..., ThreadedMatrix(Iop)))
+    else
+        return SumOfLinop{OPTYPE}((A.fvals..., c), (A.ts..., Iop))
+    end
+end
+
+function add_I(A::SumOfLinop{<:SkewHermitian}, c::Real)
+    # check backend:
+    Iop = LinearAlgebra.I(size(A, 1))
+    if nthreads() > 1
+        return SumOfLinop{RegularLinop}((A.fvals..., c), (A.ts..., ThreadedMatrix(Iop)))
+    else
+        return SumOfLinop{RegularLinop}((A.fvals..., c), (A.ts..., Iop))
+    end
+end
+function add_I(A::SumOfLinop{<:SkewHermitian}, c::Complex)
+    # check backend:
+    Iop = LinearAlgebra.I(size(A, 1))
+    OPTYPE = RegularLinop
+    if real(c) ≈ 0
+        OPTYPE = SkewHermitian
+    end
+    if nthreads() > 1
+        return SumOfLinop{OPTYPE}((A.fvals..., c), (A.ts..., ThreadedMatrix(Iop)))
+    else
+        return SumOfLinop{OPTYPE}((A.fvals..., c), (A.ts..., Iop))
+    end
+end
+
+
+## take the derivative of a Hamiltonian and evaluate it at time t:
+## H'(t)
+## this returns a SumOfLinop whose fvals are the derivatives
+#function derivative(h::Hamiltonian, t::Real)
+#    return SumOfLinop{Hermitian}(ForwardDiff.derivative.(h.fs, t), h.ts)
+#end
+
+function derivative(h::Hamiltonian, t::Real)
+    ## drop terms whose derivative is zero
+    fvals = ForwardDiff.derivative.(h.fs, t)
+    mask = collect(fvals .!= 0)
+    return SumOfLinop{Hermitian}(fvals[mask], h.ts[mask])
+end
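The diff above evaluates H(t) = Σᵢ fᵢ(t) Tᵢ by freezing the coefficients into `fvals`, so `derivative(h, t)` only differentiates the scalar coefficient functions and reuses the same static terms, and `add_I` just appends one more (coefficient, term) pair. A rough Python analogue of those semantics (all names hypothetical; the package uses ForwardDiff rather than the finite differences below):

```python
import numpy as np

# Static terms T_i and time-dependent coefficients f_i(t) of
# H(t) = sum_i f_i(t) * T_i  (toy 2x2 example)
terms = (np.array([[0.0, 1.0], [1.0, 0.0]]),   # drive-like term
         np.array([[1.0, 0.0], [0.0, -1.0]]))  # detuning-like term
fs = (lambda t: np.sin(t), lambda t: t ** 2)

def evaluate(fs, terms, t):
    # analogue of (h::Hamiltonian)(t): freeze the coefficients into fvals
    fvals = tuple(f(t) for f in fs)
    return sum(f * T for f, T in zip(fvals, terms))

def derivative(fs, terms, t, eps=1e-6):
    # analogue of derivative(h, t): only the scalar coefficients are
    # differentiated; the static terms are reused unchanged
    dfvals = tuple((f(t + eps) - f(t - eps)) / (2 * eps) for f in fs)
    return sum(df * T for df, T in zip(dfvals, terms))

def add_I(H, c):
    # analogue of add_I: append a constant multiple of the identity
    return H + c * np.eye(H.shape[0])
```

Because only the scalars carry the time dependence, H'(t) is exactly cos(t)·T₁ + 2t·T₂ here, with the matrix terms untouched.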

lib/BloqadeExpr/src/Lowlevel/types.jl (+41, -9)

@@ -24,8 +24,11 @@ end

 Base.size(m::ThreadedMatrix) = size(m.matrix)
 Base.size(m::ThreadedMatrix, i) = size(m.matrix)[i]
+Base.eltype(m::ThreadedMatrix) = eltype(m.matrix)
 Base.pointer(m::T) where {T <: Diagonal} = pointer(m.diag)

+
+precision_type(m::T) where {T <: Number} = real(typeof(m))
 precision_type(m::T) where {T <: Diagonal} = real(eltype(m))
 precision_type(m::T) where {T <: PermMatrix} = real(eltype(m))
 precision_type(m::T) where {T <: SparseMatrixCSR} = real(eltype(m))

@@ -61,27 +64,56 @@ function highest_type(h::Hamiltonian)
     return promote_type(tp...)
 end

+
+Base.eltype(h::Hamiltonian) = highest_type(h)
+
+
 Adapt.@adapt_structure Hamiltonian

+
+
+
+abstract type RegularLinop end
+abstract type SkewHermitian end
+
+anti_type(::Type{LinearAlgebra.Hermitian}) = SkewHermitian
+anti_type(::Type{SkewHermitian}) = LinearAlgebra.Hermitian
+anti_type(::Type{RegularLinop}) = RegularLinop
+
+
 """
     struct SumOfLinop

 A low-level linear-map object that explicitly evaluates the time-dependent
 coefficients of a Hamiltonian at a given time `t`, i.e. fvals = fs(t).

 This object supports the linear map interface `mul!(Y, H, X)`.
 """
-struct SumOfLinop{VS,FS,TS}
+
+struct SumOfLinop{OPTYPE, VS, TS}
     fvals::VS
-    h::Hamiltonian{FS,TS}
+    ts::TS
+    function SumOfLinop{OPTYPE}(fvals::VS, ts::TS) where {OPTYPE, VS, TS}
+        return new{OPTYPE, VS, TS}(fvals, ts)
+    end
 end

-Base.size(h::SumOfLinop, idx::Int) = size(h.h, idx)
-Base.size(h::SumOfLinop) = size(h.h)
-precision_type(h::SumOfLinop) = precision_type(h.h)
-highest_type(h::SumOfLinop) = highest_type(h.h)
+Base.size(h::SumOfLinop, idx::Int) = size(h.ts[1], idx)
+Base.size(h::SumOfLinop) = size(h.ts[1])
+function precision_type(h::SumOfLinop)
+    tp = unique(precision_type.(h.ts))
+    tp2 = unique(precision_type.(h.fvals))
+    tp = unique((tp..., tp2...))
+    return Union{tp...}
+end
+function highest_type(h::SumOfLinop)
+    tp = unique(eltype.(h.ts))
+    tp2 = unique(typeof.(h.fvals))
+    return promote_type(tp..., tp2...)
+end
+Base.eltype(h::SumOfLinop) = highest_type(h)

 function to_matrix(h::SumOfLinop)
-    return sum(zip(h.fvals, h.h.ts)) do (f, t)
+    return sum(zip(h.fvals, h.ts)) do (f, t)
         return f * t
     end
 end

@@ -93,9 +125,9 @@ function _getf(h::Hamiltonian,t)
     )
 end

-(h::Hamiltonian)(t::Real) = SumOfLinop(_getf(h,t), h)
-

+## lowered from a Hamiltonian, so it is of Hermitian type
+(h::Hamiltonian)(t::Real) = SumOfLinop{LinearAlgebra.Hermitian}(_getf(h,t), h.ts)
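The onenormest() mentioned in the commit message follows Higham's block 1-norm estimator, which lower-bounds the 1-norm of A using only A @ x and A.T @ y products, exactly what a lazily represented SumOfLinop can supply without being materialized. A simplified single-vector (non-block) Python sketch of the idea, not the package's actual implementation:

```python
import numpy as np

def onenormest(A, itmax=5):
    """Lower-bound estimate of the matrix 1-norm of A.

    Single-vector simplification of Higham's block estimator: it only
    touches A through A @ x and A.T @ y products.
    """
    n = A.shape[0]
    x = np.ones(n) / n            # ||x||_1 == 1, so ||A x||_1 <= ||A||_1
    est = 0.0
    for _ in range(itmax):
        y = A @ x
        est_new = np.abs(y).sum()
        if est_new <= est:        # no improvement: keep the current bound
            break
        est = est_new
        xi = np.where(y >= 0, 1.0, -1.0)
        z = A.T @ xi
        j = int(np.argmax(np.abs(z)))
        if np.abs(z[j]) <= z @ x: # optimality test from Higham's algorithm
            break
        x = np.zeros(n)
        x[j] = 1.0                # probe the most promising column next
    return est
```

For A = [[1, -2], [3, 4]] the exact 1-norm is 6 (the second column), and the sketch recovers it in two iterations.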
