SDIRK scheme implementation #1598
base: master

Conversation
Can you add the documentation? Furthermore, have you tested whether you recover the 2nd and 3rd order convergence rates for both SDIRK implementations?
First round of comments.
So you added a new bdf2 test case? Is it to have some sort of stored comparison between bdf2, sdirk22 and sdirk33?
In that case, I would suggest you run a case with a larger time step, so that the accuracy of the case is limited by the time step and not by the mesh resolution. Use a time step of 0.2, maybe.
Yes, that was my idea, since BDF2 is broadly used in Lethe; I found it more relevant than BDF1. I will rerun the simulation with the larger time step.
applications_tests/lethe-fluid/taylor-green-vortex_gls_sdirk33.mpirun=2.output
applications_tests/lethe-fluid/taylor-green-vortex_gls_sdirk33.prm
applications_tests/lethe-fluid/taylor-green-vortex_gls_sdirk22.prm
applications_tests/lethe-fluid/taylor-green-vortex_gls_bdf2.prm

include/core/simulation_control.h
unsigned int
get_stage_i_number()
{
  return stage_i;
}
Needs a brief and a description of the return value. Please also be a bit more verbose: what does i mean here?
Actually, since we are calculating the sum of a_ij * k_j in the loop in the iterate() function, I think we don't need these two functions. In the assembler we only need the diagonal term, so we don't need to know which stage we are at; in the loop, we already know the stage. These two functions could maybe be useful for more general implicit schemes, so I don't know if you want me to keep them, but for SDIRK they are not necessary.
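For illustration, the per-stage sum discussed here can be sketched as follows. This is a minimal stand-alone sketch with assumed names, not Lethe's actual API; plain std::vector stands in for the distributed VectorType:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch: accumulating sum_{j<i} (a_ij / a_ii) * k_j for stage i
// of an SDIRK scheme, using a single result vector instead of several
// temporaries. k holds the previous stage solutions k_0 .. k_{i-1}, a_row the
// Butcher coefficients a_i0 .. a_{i,i-1}, and a_ii the diagonal coefficient.
std::vector<double>
sum_over_previous_stages(const std::vector<std::vector<double>> &k,
                         const std::vector<double>              &a_row,
                         const double                            a_ii)
{
  std::vector<double> sum(k.front().size(), 0.0);
  for (std::size_t j = 0; j < k.size(); ++j)
    for (std::size_t d = 0; d < sum.size(); ++d)
      sum[d] += (a_row[j] / a_ii) * k[j][d]; // one fused add per stage
  return sum;
}
```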
If you don't use them, then you can remove them for now, I presume.
include/core/simulation_control.h
void
set_stage_i_number(unsigned int s)
{
  stage_i = s;
}
Same as the comment above.
See the comment just above about the need (or not) to keep these two functions in the code.
include/solvers/navier_stokes_base.h
VectorType present_k_i_solution;
VectorType temp_present_k_i_solution;

VectorType system_rhs;
VectorType tmp;

// Previous solutions vectors
std::vector<VectorType> previous_solutions;
VectorType previous_solutions_0;

std::vector<VectorType> previous_k_j_solutions;
VectorType previous_k_j_solutions_p;

VectorType sum_bi_ki;
VectorType temp_sum_bi_ki;

VectorType sum_over_previous_stages;
VectorType temp_sum_over_previous_stages;
Ok, that's a lot of vectors... You really don't need that many! Please try to clean this up so that you keep as few as possible.
@@ -990,6 +1006,8 @@ class NavierStokesBase : public PhysicsSolver<VectorType>
  /// Dynamic homogeneous constraints used for temperature-dependent solid
  /// domain constraints
  AffineConstraints<double> dynamic_zero_constraints;

  std::unique_ptr<NavierStokesScratchData<dim>> scratch_data;
??? You don't need this
I need the Butcher table in the iterate() function to build the sums. The table is in the scratch data, but I also need it for the matrix assembly. Should I store the table in another location?
You can recalculate the Butcher table when you need it there. Just remake it and don't store it as part of the class; it will be simpler. There's no point in having a scratch data live just for a table. Just re-get the table. Otherwise, if you want to store it, store it in simulation control, where the time-related data is stored.
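As a sketch of that suggestion, a free function can rebuild the tableau on demand instead of storing it in a class member. The struct and function names below are hypothetical, not Lethe's actual code; the coefficients are the standard two-stage, second-order, L-stable SDIRK tableau with γ = 1 − √2/2:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Hypothetical names: a small aggregate holding the SDIRK22 Butcher tableau,
// cheap enough to rebuild whenever it is needed.
struct SDIRK22Tableau
{
  std::array<std::array<double, 2>, 2> a; // stage coefficients a_ij
  std::array<double, 2>                b; // quadrature weights b_i
  std::array<double, 2>                c; // stage times c_i
};

SDIRK22Tableau
make_sdirk22_tableau()
{
  const double gamma = 1.0 - std::sqrt(2.0) / 2.0;
  SDIRK22Tableau t;
  t.a[0][0] = gamma;
  t.a[0][1] = 0.0;
  t.a[1][0] = 1.0 - gamma;
  t.a[1][1] = gamma;
  t.b = {1.0 - gamma, gamma};
  t.c = {gamma, 1.0};
  return t;
}
```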
source/solvers/navier_stokes_base.cc
tmp = previous_k_j_solutions[0];
tmp.equ(stage_data.a_ij[p] / a_ii, previous_k_j_solutions_p);
temp_sum_over_previous_stages.add(1.0, tmp);
Why don't you just do this instead:

- temp_sum_over_previous_stages.add(1.0, tmp);
+ temp_sum_over_previous_stages.add(stage_data.a_ij[p] / a_ii, previous_k_j_solutions[p]);
Yes, I don't need all of these lines; your solution works. But we still need a non-locally-relevant vector in the .add operation, and previous_k_j_solutions[p] is locally relevant.
I understand, but why create a new locally owned vector? You can use the local evaluation point to store your temporary solution; this will remove one more vector.
Try to minimize the number of local vectors you generate.
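To make this concrete, here is a toy stand-in (plain C++, not deal.II; the equ() and add() semantics are assumed to mirror *this = s·V and *this += s·V) showing that one fused add(s, V) call does the work of the equ-then-add pair without any temporary vector:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Toy stand-in for the vector interface used in the snippet above.
struct ToyVector
{
  std::vector<double> data;

  // *this = s * v
  void equ(const double s, const ToyVector &v)
  {
    data.resize(v.data.size());
    for (std::size_t i = 0; i < data.size(); ++i)
      data[i] = s * v.data[i];
  }

  // *this += s * v  (the fused operation; no temporary needed)
  void add(const double s, const ToyVector &v)
  {
    for (std::size_t i = 0; i < data.size(); ++i)
      data[i] += s * v.data[i];
  }
};
```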
source/solvers/navier_stokes_base.cc
previous_k_j_solutions_p = previous_k_j_solutions[p];
tmp = previous_k_j_solutions[0];
tmp.equ(stage_data.a_ij[p] / a_ii, previous_k_j_solutions_p);
- previous_k_j_solutions_p);
+ previous_k_j_solutions[p]);
In the operations (addition, multiplication, etc.), we need a vector without locally relevant DoFs, I think.
@etiennemarin Once you have done the convergence study, can you ask for two other reviewers besides me? You can use the lethe channel on Slack to ask people. Don't forget to go through the checklist and ensure that you can really check all of its tasks; it is a reminder of what you need to do when you make a PR.
…tage_data in simu_control (unresolved)
…still does not work
Compare: 1fe889f to b4108a0
Description
This pull request implements the SDIRK (Singly Diagonally Implicit Runge-Kutta) time integration method for the unsteady Navier–Stokes equations.
The current implementation is quite brute-force and would surely benefit from a proper refactoring to improve efficiency and readability. The idea is to also use the SDIRK scheme for other applications beyond just lethe-fluid.
The code is structured to match the existing BDF structure as closely as possible. Here is how it works: for each stage kᵢ = F(tᵢ, uᵢ), we solve the equivalent system

(uᵢ* - uₙ) / (h aᵢᵢ) - ∑_{j=0}^{i-1} (aᵢⱼ / aᵢᵢ) kⱼ = F(tᵢ, uᵢ)

and the solution is then advanced with

uₙ₊₁ = uₙ + h ∑ᵢ bᵢ kᵢ
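As a sanity check of this stage formulation (not part of the PR), here is a minimal self-contained sketch applying the same two-stage SDIRK scheme to the scalar ODE u′ = λu, where each stage equation is linear and can be solved in closed form; all names and structure are illustrative only:

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch: SDIRK22 applied to u' = lambda*u. The tableau is the
// standard two-stage, second-order, L-stable SDIRK scheme with
// gamma = 1 - sqrt(2)/2.
double
sdirk22_integrate(const double u0,
                  const double lambda,
                  const double h,
                  const int    n_steps)
{
  const double gamma = 1.0 - std::sqrt(2.0) / 2.0;
  const double a21   = 1.0 - gamma;
  const double b1 = 1.0 - gamma, b2 = gamma;

  double u = u0;
  for (int n = 0; n < n_steps; ++n)
    {
      // Stage 1: k1 = lambda * (u + h*gamma*k1), solved for k1
      const double k1 = lambda * u / (1.0 - h * lambda * gamma);
      // Stage 2: k2 = lambda * (u + h*a21*k1 + h*gamma*k2), solved for k2
      const double k2 =
        lambda * (u + h * a21 * k1) / (1.0 - h * lambda * gamma);
      // Update: u_{n+1} = u_n + h * sum_i b_i * k_i
      u += h * (b1 * k1 + b2 * k2);
    }
  return u;
}
```

Halving the time step should divide the error against the exact solution exp(λT) by roughly four, which is a quick way to confirm the second-order rate asked about in the review.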
The main changes are implemented in the solvers/navier_stokes_base.cc file, within the iterate() function. The Butcher table is stored in scratch-data because it's required during matrix assembly.

Testing
The code has been tested on the lethe-fluid/application-test Taylor–Green vortex (GLS formulation) and yields results close to the BDF2 scheme. The test outputs, at each time step, the enstrophy, the kinetic energy, and the L2 error.
The goal is now to test this new scheme on multiple examples and with different Butcher tables.
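Given the L2 errors the test prints, the observed convergence order can be checked with the standard two-resolution formula p = log(e₁/e₂) / log(h₁/h₂). The helper below is hypothetical and not part of the PR:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper: observed order of convergence from errors e1, e2
// obtained with time-step sizes h1, h2.
double
observed_order(const double e1, const double h1, const double e2, const double h2)
{
  return std::log(e1 / e2) / std::log(h1 / h2);
}
```

For a second-order scheme such as sdirk22, halving the time step from 0.2 to 0.1 should reduce the error by roughly a factor of four, giving an observed order near 2.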
Documentation

The Butcher table is stored in scratch-data. sum_over_previous_stages is now required for matrix assembly.

Miscellaneous (will be removed when merged)
Checklist (will be removed when merged)
See this page for more information about the pull request process.
Code related list:
Pull request related list: