Implement an implicit free surface solver in the NonhydrostaticModel #3968
Conversation
So here's how the algorithm changes with an implicit free surface (which is all I'd like to attempt for the time being):
I think the simplest way to implement this is therefore to add the implicit free surface solve as a precursor to the pressure correction.
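Roughly, the step ordering this implies is sketched below (following the implicit free surface used by the hydrostatic model; the discrete details in this PR may differ):

```latex
\[
\begin{aligned}
&\text{(1) predictor:} && \boldsymbol{u}^\star = \boldsymbol{u}^n + \Delta t \, \boldsymbol{G}^n \\
&\text{(2) implicit free surface:} &&
\eta^{n+1} - g \Delta t^2 \, \nabla_h \cdot \big( H \, \nabla_h \eta^{n+1} \big)
 = \eta^n - \Delta t \, \nabla_h \cdot \int_{-H}^{0} \boldsymbol{u}_h^\star \, \mathrm{d}z \\
&\text{(3) barotropic correction:} && \boldsymbol{u}^{\star\star} = \boldsymbol{u}^\star - g \Delta t \, \nabla_h \eta^{n+1} \\
&\text{(4) pressure correction:} && \nabla^2 p = \frac{\nabla \cdot \boldsymbol{u}^{\star\star}}{\Delta t} ,
 \qquad \boldsymbol{u}^{n+1} = \boldsymbol{u}^{\star\star} - \Delta t \, \nabla p
\end{aligned}
\]
```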
Okay, this script:

```julia
using Oceananigans
using Oceananigans.Models.HydrostaticFreeSurfaceModels: ImplicitFreeSurface
using GLMakie
grid = RectilinearGrid(size=(128, 32), halo=(4, 4), x=(-5, 5), z=(0, 1), topology=(Bounded, Flat, Bounded))
mountain(x) = (x - 3) / 2
grid = ImmersedBoundaryGrid(grid, GridFittedBottom(mountain))
Fu(x, z, t) = sin(t)
free_surface = ImplicitFreeSurface(gravitational_acceleration=10)
model = NonhydrostaticModel(; grid, free_surface, advection=WENO(order=5), forcing=(; u=Fu))
simulation = Simulation(model, Δt=0.1, stop_time=20*2π)
conjure_time_step_wizard!(simulation, cfl=0.7)
progress(sim) = @info string(iteration(sim), ": ", time(sim))
add_callback!(simulation, progress, IterationInterval(100))
ow = JLD2OutputWriter(model, merge(model.velocities, (; η=model.free_surface.η)),
filename = "nonhydrostatic_internal_tide.jld2",
schedule = TimeInterval(0.1),
overwrite_existing = true)
simulation.output_writers[:jld2] = ow
run!(simulation)
fig = Figure()
axη = Axis(fig[1, 1], xlabel="x", ylabel="Free surface \n displacement")
axw = Axis(fig[2, 1], xlabel="x", ylabel="Surface vertical velocity")
axu = Axis(fig[3, 1], xlabel="x", ylabel="z")
ut = FieldTimeSeries("nonhydrostatic_internal_tide.jld2", "u")
wt = FieldTimeSeries("nonhydrostatic_internal_tide.jld2", "w")
ηt = FieldTimeSeries("nonhydrostatic_internal_tide.jld2", "η")
Nt = length(wt)
slider = Slider(fig[4, 1], range=1:Nt, startvalue=1)
n = slider.value
Nz = size(ut.grid, 3)
u = @lift ut[$n]
η = @lift interior(ηt[$n], :, 1, 1)
w = @lift interior(wt[$n], :, 1, Nz+1)
x = xnodes(wt)
ulim = maximum(abs, ut) * 3/4
lines!(axη, x, η)
lines!(axw, x, w)
heatmap!(axu, u)
ylims!(axη, -0.1, 0.1)
ylims!(axw, -0.01, 0.01)
record(fig, "nonhydrostatic_internal_tide.mp4", 1:Nt) do nn
@info "Drawing frame $nn of $Nt..."
n[] = nn
end
```

produces this movie:

nonhydrostatic_internal_tide.mp4

Some weird grid artifacts in there, but maybe higher resolution will help with that.
@shriyafruitwala let me know if this code works for the problem you are interested in. The implementation is fairly clean, but there are a few things we could consider before merging, like tests and some validation in the constructor. @simone-silvestri I feel this code exposes some messiness with the performance optimization stuff regarding kernel parameters. Please check it over to make sure what I did will work and add any tests that may be missing...
```julia
for (wpar, ppar, κpar) in zip(w_parameters, p_parameters, κ_parameters)
    if !isnothing(model.free_surface)
        compute_w_from_continuity!(model; parameters = wpar)
```
I am unsure of the algorithm, but wouldn't this replace the w-velocity that should be prognostic?
For sure. That's what is written in the MITgcm docs. @jm-c can you confirm?
It seems that you are missing …
@shriyafruitwala to set η:

```julia
η = model.free_surface.η
set!(η, value)
```
It's been discussed elsewhere (and there are notes lying around -- hopefully they can be shared here) that the implementation in this PR is incorrect. In particular, the pressure satisfies a Robin boundary condition (not a Neumann condition) at the surface; implementing a Robin boundary condition requires changing the pressure solver. It's possible that the current implementation represents some kind of approximation to the true problem, which may be revealed by comparing to an exact solution.

In thinking about a different problem, I realized that --- I think --- a Robin boundary condition can be implemented using … If true, then it should be relatively straightforward to develop a direct solver for the flat bottom case. Hopefully, this would help in developing a CG solver for the case with bathymetry.

I'd also like to point out there seem to be a few issues with the CG solver currently, as documented in #4007 and #3848.
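For concreteness, the Robin condition can be sketched as follows (assuming the linearized dynamic condition p = g η at the surface, the pressure correction w at the new time level equals w* - Δt ∂z p, and the linearized kinematic condition η at the new time level equals η plus Δt times the surface w; the notes referred to above give the precise form):

```latex
\[
p \big|_{z=0} = g \, \eta^{n+1}
 = g \left( \eta^n + \Delta t \, w^{n+1} \big|_{z=0} \right)
 = g \, \eta^n + g \Delta t \left( w^\star - \Delta t \, \partial_z p \right) \big|_{z=0} ,
\]
\[
\Longrightarrow \qquad
\left( p + g \Delta t^2 \, \partial_z p \right) \big|_{z=0}
 = g \, \eta^n + g \Delta t \, w^\star \big|_{z=0} ,
\]
```

a mixed (Robin) condition on p at the surface, rather than the homogeneous Neumann condition used for a rigid lid.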
Hey everyone! I have set up a test of the nonhydrostatic free surface code of a deep water surface gravity wave initialized with a Gaussian bump. Here, G = 0, so per the MITgcm docs, the free surface equation (2.52) essentially becomes a diffusion equation, with K = g H Δt. We see exactly this in the test case, where the velocities are essentially zero, and the evolution of the free surface is diffusive.

nonhydrostatic_deepwater_test.jld2.mp4
hydrostatic_deepwater_test.jld2.mp4
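For reference, the limit being described: with G = 0 the depth-integrated predictor transport vanishes, and the implicit free surface step reduces to a backward-Euler diffusion step for η (a continuum sketch, not the exact discrete equation from the MITgcm docs):

```latex
\[
\frac{\eta^{n+1} - \eta^n}{\Delta t}
 = \nabla_h \cdot \big( g H \Delta t \, \nabla_h \eta^{n+1} \big) ,
\qquad K = g H \Delta t ,
\]
```

which is why the bump spreads instead of propagating as a wave.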
Great work. Can you include the code that you used to run the simulation? (The code that generates the animation may be helpful too if anyone would like to reproduce your work.) The hydrostatic model can also be used with …
Sure! Here is the code:

```julia
using Oceananigans
using Oceananigans.Models.HydrostaticFreeSurfaceModels: ImplicitFreeSurface
using GLMakie
using Oceananigans.Units
Nx, Nz = 50, 50
const H = 50meters
const L = 50meters
const g = 10
k = 2
ω² = g*abs(k)
ω = sqrt(ω²)
coriolis = FPlane(latitude=28)
f=coriolis.f
grid = RectilinearGrid(size = (Nx, Nz),
x = (-25meters, 25meters),
z = (-H, 0),
halo = (4,4),
topology = (Periodic, Flat, Bounded))
free_surface = ImplicitFreeSurface(gravitational_acceleration=10)
model = NonhydrostaticModel(; grid, free_surface, coriolis, advection=WENO(order=5))
#model = HydrostaticFreeSurfaceModel(; grid, free_surface, coriolis, momentum_advection=WENO(order=5))
#initialize with gaussian bump surface
η = model.free_surface.η
bump(x) = exp(-(x^2)/(2 * (5meters)^2))
x_vals = LinRange(-L/2, L/2, Nx)
η₀_vals = bump.(x_vals)
η₀ = reshape(η₀_vals, (Nx,1,1))
set!(η, η₀)
#sets up simulation run
simulation = Simulation(model, Δt=0.1, stop_time=50)
progress(sim) = @info string(iteration(sim), ": ", time(sim))
add_callback!(simulation, progress, IterationInterval(100))
output_file = "hydrostatic_deepwater_test_k2_implicit.jld2"
ow = JLD2OutputWriter(model, merge(model.velocities, (; η=model.free_surface.η)),
filename = output_file,
schedule = TimeInterval(0.1),
overwrite_existing = true)
simulation.output_writers[:jld2] = ow
run!(simulation)
#produces animation
fig = Figure()
axη = Axis(fig[1, 1], xlabel="x", ylabel="η", width=400, height=75, title="sea surface height")
axw = Axis(fig[2, 1], xlabel="x", ylabel="z", width=400, height=75, title = "w velocity")
axu = Axis(fig[3, 1], xlabel="x", ylabel="z", width=400, height=75, title = "u velocity")
ut = FieldTimeSeries(output_file, "u")
wt = FieldTimeSeries(output_file, "w")
ηt = FieldTimeSeries(output_file, "η")
Nt = length(wt)
n = Observable(1)
u = @lift ut[$n]
η = @lift interior(ηt[$n], :, 1, 1)
w = @lift wt[$n]
x = xnodes(wt)
ulim = maximum(abs, ut)
wlim = maximum(abs, wt)
lines!(axη, x, η)
hm_w = heatmap!(axw, w, colorrange=(-wlim,wlim))
Colorbar(fig[2, 2], hm_w, label = "m s⁻¹")
hm_u = heatmap!(axu, u, colorrange=(-ulim,ulim))
Colorbar(fig[3, 2], hm_u, label = "m s⁻¹")
ylims!(axη, -1, 1)
record(fig, output_file * "title.mp4", 1:Nt) do nn
@info "Drawing frame $nn of $Nt..."
n[] = nn
end
```

Using …

hydrostatic_deepwater_test_k2_explicit.mp4
I also wanted to point out that for deep water waves, the evolution of the free surface should look something like this (in contrast to the pure diffusion we see in the Oceananigans nonhydrostatic test I posted earlier):

deepwater_amplitude.mp4
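For anyone who wants to reproduce this kind of ground truth, a standalone sketch (independent of Oceananigans) is to evolve the initial bump as a superposition of deep-water modes, assuming linear waves with zero initial velocity so that each mode evolves as a cosine at its own frequency ω = √(g|k|):

```julia
using FFTW

# Domain and initial condition matching the test above
Lx, Nx = 50.0, 256
g = 10.0
x = range(-Lx/2, Lx/2 - Lx/Nx; length=Nx)
η₀ = @. exp(-x^2 / (2 * 5.0^2))          # Gaussian bump, standard deviation 5 m

# Angular wavenumbers and the deep-water dispersion relation ω = √(g |k|)
k = 2π .* fftfreq(Nx, Nx / Lx)
ω = @. sqrt(g * abs(k))

# Zero initial velocity ⇒ each Fourier mode evolves as a cosine at its own frequency
ηhat₀ = fft(η₀)
η(t) = real.(ifft(ηhat₀ .* cos.(ω .* t)))

η_later = η(10.0)   # e.g. the free surface after 10 time units
```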
It's great to have a ground truth. Do we have a plan to fix the nonhydrostatic solver?
I think that, as we discussed, we want the pressure to satisfy a Robin boundary condition, which from your earlier post seemed to be doable using the …
I am happy to offer guidance but cannot take the lead on implementing it. I agree I would be faster, but this argument breaks down quickly --- with that argument, I should implement everything!

That said, I do not think you will need to change very much. The solvers are in place, and I believe the implementation is a matter of rearranging things and perhaps a few critical lines here and there. Honestly, I am not exactly sure what needs to be changed, and figuring out precisely what code needs to change is one of the major pieces you can take the lead on.

I think we can start by rehashing the algorithm that we would like to implement here. I can't remember the specifics, and we need our plan to be documented in this PR.
I think a good place to start would be to implement the Robin boundary condition in the Fourier pressure solver, so we can simulate deep-water waves over a flat bottom.

I'm attaching notes on the algorithm, which show two versions. The first version solves for the pressure field in one go, whereas the second splits the pressure field into hydrostatic and nonhydrostatic parts, following the MITgcm practice. The first version is much simpler and may be a good place to start. The main argument in favor of the second version is that the 3D solve might converge more quickly in nearly hydrostatic conditions, though that should be tested, I suppose.

So, I would propose we do the Fourier solver first with the first version of the algorithm. That requires a Robin BC but not much else, as far as I can tell. Once we have this working, we can work on the CG solver to allow for topography, which is what Shriya is after in the end. @glwagner, does that sound reasonable?
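Roughly, the distinction between the two versions (a sketch; the attached notes give the precise formulation): the first solves one 3D problem for the full pressure with the Robin surface condition,

```latex
\[
\nabla^2 p = \frac{\nabla \cdot \boldsymbol{u}^\star}{\Delta t} ,
\qquad
\left( p + g \Delta t^2 \, \partial_z p \right) \big|_{z=0}
 = g \, \eta^n + g \Delta t \, w^\star \big|_{z=0} ,
\]
```

whereas the second writes p = g η + p_NH, first solves the 2D implicit free surface problem for η at the new time level (as in the hydrostatic model), corrects the velocities with -g Δt ∇h η, and then solves a 3D Poisson problem for p_NH on the remaining divergence.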
I think that will work. I will just clarify: I am not aware of a pure Fourier algorithm that will work for the Robin BC. However, I believe that the Robin BC can be implemented in the FourierTridiagonalPoissonSolver:

https://github.com/CliMA/Oceananigans.jl/blob/main/src/Solvers/fourier_tridiagonal_poisson_solver.jl

which uses Fourier transforms in the horizontal and a tridiagonal solve in the vertical. I believe the modification requires deducing the changes needed here:

Oceananigans.jl/src/Solvers/fourier_tridiagonal_poisson_solver.jl, lines 45 to 46 in 859c36f
The batched tridiagonal solver is implemented here for reference:

https://github.com/CliMA/Oceananigans.jl/blob/main/src/Solvers/batched_tridiagonal_solver.jl

We'll also have to design an interface for specifying which boundary condition we'd like to use when building …
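To make the needed modification concrete, here is a standalone 1D illustration (not Oceananigans code) of how a Robin condition ϕ + γ ∂ϕ/∂z = b at the surface changes only the last row of the tridiagonal problem, via a ghost-cell closure; for the free surface one would take γ = g Δt² and b = g (ηⁿ + Δt w*), as discussed above:

```julia
using LinearAlgebra

# Standalone 1D illustration (not Oceananigans code): solve d²ϕ/dz² = r on z ∈ (-H, 0)
# at N cell centers, with ∂ϕ/∂z = 0 at the bottom and a Robin condition
# ϕ + γ ∂ϕ/∂z = b at the surface, imposed by eliminating a ghost cell above the top.
function solve_robin_column(r, H, γ, b)
    N = length(r)
    Δz = H / N

    # Robin closure at the surface face, eliminating the ghost value ϕ_{N+1}:
    #   (ϕ_N + ϕ_{N+1}) / 2 + γ (ϕ_{N+1} - ϕ_N) / Δz = b   ⇒   ϕ_{N+1} = β - α ϕ_N
    denom = 1/2 + γ / Δz
    α = (1/2 - γ / Δz) / denom
    β = b / denom

    lower = fill(1 / Δz^2, N - 1)
    upper = fill(1 / Δz^2, N - 1)
    diagonal = fill(-2 / Δz^2, N)
    diagonal[1] = -1 / Δz^2         # bottom Neumann: ghost ϕ₀ = ϕ₁
    diagonal[N] = -(2 + α) / Δz^2   # top Robin: ghost eliminated with the closure above

    rhs = float.(collect(r))
    rhs[N] -= β / Δz^2              # the Robin data b enters the right-hand side

    return Tridiagonal(lower, diagonal, upper) \ rhs
end

# Example with the free-surface coefficients discussed above: γ = g Δt², b = g (ηⁿ + Δt w*)
ϕ = solve_robin_column(zeros(16), 50.0, 10.0 * 0.1^2, 10.0 * 0.05)
```

After Fourier transforming in the horizontal, the same elimination applies column by column; relative to the rigid-lid (Neumann) case only the top diagonal entry and the top right-hand-side entry change.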
I am trying to test the Robin BC in the …

```
MethodError: no method matching build_implicit_step_solver(::Val{:fourier_tridiagonal_poisson_solver}, ::RectilinearGrid{Float64, Periodic, Flat, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Nothing, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, CPU}, ::Base.Pairs{Symbol, Union{}, Tuple{}, @NamedTuple{}}, ::Float64)
```
The first thing is that some of the changes that I made need to be walked back. For example, these four lines need to be commented out or deleted:

The next step is to change this line:

to

```julia
nonhydrostatic_pressure_solver(arch, grid::XYZRegularRG) = FourierTridiagonalPoissonSolver(grid)
```

This way, no matter what grid you use with …

Next, hard-code your boundary condition changes into the existing https://github.com/CliMA/Oceananigans.jl/blob/main/src/Solvers/fourier_tridiagonal_poisson_solver.jl

You can verify that the modifications are working as expected by directly calling …

I think you will also need to implement a free surface update, which could go after these lines:

using the vertical component of the predictor velocity field (there are other choices, but the crucial thing right now is just to validate the algorithm; afterwards we can shuffle code around).

If you make a pull request that is pointed at this branch, we will be able to see the changes, and I can comment on specific lines, which may help speed things up.
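For what it's worth, a minimal sketch of what that free surface update might look like (assuming η lives in model.free_surface.η and the surface value of the predictor w sits on the top face at vertical index Nz + 1, as in the scripts above; the exact field names and the best hook point on this branch may differ):

```julia
using Oceananigans
using Oceananigans.BoundaryConditions: fill_halo_regions!

# ηⁿ⁺¹ = ηⁿ + Δt w*(z = 0), using the predictor (pre-correction) vertical velocity
function update_free_surface!(model, Δt)
    η = model.free_surface.η
    w = model.velocities.w
    Nz = size(model.grid, 3)

    # the surface value of w lives on the top cell face, vertical index Nz + 1
    interior(η) .+= Δt .* interior(w, :, :, Nz + 1)

    # refresh halos so η is consistent for subsequent kernels
    fill_halo_regions!(η)

    return nothing
end
```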
Great, thanks! I still seem to be getting an error when I try to build the model that has to do with the grid (stretched_dimensions; I've pasted the error message below). Is there something that needs to be altered in rectilinear_grid.jl?

```
MethodError: stretched_dimensions(::RectilinearGrid{Float64, Periodic, Flat, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Nothing, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, CPU}) is ambiguous.
```
Can you paste the whole error so I can find the file?
As a stopgap, change the problem you are working on so that the z-direction is an array (which will be interpreted as "stretched"). For example, if you are using z = (-Lz, 0) when you build the grid, then change this to

```julia
dz = Lz / Nz
z = -Lz:dz:0
```
Yes, here it is!

MethodError: stretched_dimensions(::RectilinearGrid{Float64, Periodic, Flat, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Nothing, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, CPU}) is ambiguous.
Candidates:
stretched_dimensions(::RectilinearGrid{<:Any, <:Any, <:Any, <:Any, <:Number, <:Number})
stretched_dimensions(::RectilinearGrid{<:Any, <:Any, <:Any, <:Any, <:Number, <:Any, <:Number})
stretched_dimensions(::RectilinearGrid{<:Any, <:Any, <:Any, <:Any, <:Any, <:Number, <:Number})
Possible fix, define stretched_dimensions(::RectilinearGrid{<:Any, <:Any, <:Any, <:Any, <:Number, <:Number, <:Number}) Stacktrace: [1] Oceananigans.Solvers.FourierTridiagonalPoissonSolver(grid::RectilinearGrid{Float64, Periodic, Flat, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Nothing, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, CPU}, planner_flag::UInt32) @ Oceananigans.Solvers c:\Users\shriy\Documents\Research\Oceananigans.jl\src\Solvers\fourier_tridiagonal_poisson_solver.jl:64 [2] nonhydrostatic_pressure_solver(arch::CPU, grid::RectilinearGrid{Float64, Periodic, Flat, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Nothing, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, CPU}) @ Oceananigans.Models.NonhydrostaticModels c:\Users\shriy\Documents\Research\Oceananigans.jl\src\Models\NonhydrostaticModels\NonhydrostaticModels.jl:38 [3] nonhydrostatic_pressure_solver(grid::RectilinearGrid{Float64, Periodic, Flat, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Nothing, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, CPU}) @ Oceananigans.Models.NonhydrostaticModels c:\Users\shriy\Documents\Research\Oceananigans.jl\src\Models\NonhydrostaticModels\NonhydrostaticModels.jl:66 [4] NonhydrostaticModel(; grid::RectilinearGrid{Float64, Periodic, Flat, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Nothing, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, CPU}, clock::Clock{Float64, Float64}, advection::WENO{3, Float64, Nothing, Nothing, Nothing, Nothing, WENO{2, Float64, Nothing, Nothing, Nothing, Nothing, UpwindBiased{1, Float64, Nothing, Nothing, Nothing, Nothing, Centered{1, Float64, Nothing, Nothing, Nothing, Nothing}}, Centered{1, Float64, Nothing, Nothing, Nothing, Nothing}}, Centered{2, Float64, Nothing, Nothing, Nothing, Centered{1, Float64, Nothing, Nothing, Nothing, Nothing}}}, buoyancy::Nothing, coriolis::FPlane{Float64}, stokes_drift::Nothing, forcing::@NamedTuple{}, closure::Nothing, free_surface::ImplicitFreeSurface{Nothing, Int64, Nothing, Nothing, Symbol, @kwargs{}}, boundary_conditions::@NamedTuple{}, tracers::Tuple{}, timestepper::Symbol, background_fields::@NamedTuple{}, particles::Nothing, biogeochemistry::Nothing, velocities::Nothing, hydrostatic_pressure_anomaly::Oceananigans.Models.NonhydrostaticModels.DefaultHydrostaticPressureAnomaly, nonhydrostatic_pressure::Field{Center, Center, Center, Nothing, RectilinearGrid{Float64, Periodic, Flat, Bounded, Float64, Float64, Float64, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, Nothing, OffsetArrays.OffsetVector{Float64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}, Int64}}, CPU}, Tuple{Colon, Colon, Colon}, OffsetArrays.OffsetArray{Float64, 3, Array{Float64, 
3}}, Float64, FieldBoundaryConditions{BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Periodic, Nothing}, Nothing, Nothing, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}, BoundaryCondition{Oceananigans.BoundaryConditions.Flux, Nothing}}, Nothing, Oceananigans.Fields.FieldBoundaryBuffers{Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing, Nothing}}, diffusivity_fields::Nothing, pressure_solver::Nothing, auxiliary_fields::@NamedTuple{}) @ Oceananigans.Models.NonhydrostaticModels c:\Users\shriy\Documents\Research\Oceananigans.jl\src\Models\NonhydrostaticModels\nonhydrostatic_model.jl:228 [5] top-level scope @ c:\Users\shriy\Documents\Research\Oceananigans.jl\jl_notebook_cell_df34fa98e69747e1a8f8a730347b8e2f_W1sZmlsZQ==.jl:20 |
Thank you! PS: when you paste code in a comment, surround it in triple backticks (```) to make it easy to read.
I think we need to bring this branch up to date with … Let me know if the "stopgap" works or not in the meantime.
I can also try to give some more substantial help the week after next, but next week I will be on vacation. We also need to resolve the merge conflicts on this PR, and we should merge all the changes into a single PR.
@shriyafruitwala can you check if the issue is solved by using a stretched vertical grid?
#4583 may resolve issues here. However, we can't use that until we resolve the conflicts here, so the workaround in the meantime is to use a stretched vertical grid.
Hi Greg, I reverted to the rigid-lid condition to make sure things looked reasonable, and I had a few questions. The Poisson equation does not hold at the top cell, but it does at the bottom. Are the top and bottom boundaries being handled differently in the rigid-lid case? Additionally, similar to before, it looks like a switch happens between some time steps, where the residual (∇²ϕ - ∇ ⋅ u* / Δt) is very large (~1e10) at the top cell and nonzero for a couple of grid points below. This does not happen at the bottom boundary. Do you have a guess for why this happens?
Can you post the script you are working with? I'd like to reproduce your results to get a handle on what you're observing. For a rigid lid case with homogeneous boundary conditions on …
Sure! The divergence looks right (0 to machine precision). Here is the script - I put in an initial buoyancy anomaly to make internal waves.
To calculate the Poisson residual, I put the following before this line in pressure_correction.jl (not sure if that's the easiest way to do it).
You should be able to compute the residual using abstract operations, something like

```julia
∇²p = ∂x(∂x(p)) + ∂y(∂y(p)) + ∂z(∂z(p))
δ = ∂x(u) + ∂y(v) + ∂z(w)
r = Field(∇²p - δ / Δt)
@show r
```

or to store in a precomputed field (I'm not sure this will be more efficient but it could be)

```julia
r = ∇²p - δ / Δt
poisson .= r
```

you may need …
I modified the MWE slightly here:

```julia
using Oceananigans
using Oceananigans.Units
using Statistics
using Printf
H = 50
Nx, Nz = 50, 50
z = -H:(H/Nz):0
grid = RectilinearGrid(size = (Nx, Nz); z,
x = (-25, 25),
halo = (4, 4),
topology = (Periodic, Flat, Bounded))
residual = CenterField(grid)
model = NonhydrostaticModel(; grid,
advection = WENO(),
auxiliary_fields = (; residual),
buoyancy = BuoyancyTracer(),
tracers = :b)
N² = 1e-4
x₀, z₀, σ = 0.0, -H/2, 5
bᵢ(x, z) = N² * z + exp(-((x - x₀)^2 / 2σ^2 + (z - z₀)^2 / 2σ^2))
set!(model, b=bᵢ)
simulation = Simulation(model, Δt=0.01, stop_time=12)
u, v, w = model.velocities
δ = Field(∂x(u) + ∂y(v) + ∂z(w))
function progress(sim)
w_max = maximum(model.velocities.w)
w_min = minimum(model.velocities.w)
w_mean = mean(model.velocities.w)
u_max = maximum(model.velocities.u)
u_min = minimum(model.velocities.u)
u_mean = mean(model.velocities.u)
compute!(δ)
msg = @sprintf("Time: %.2f, max|δ|: %.2e, max(w): %.3e, min(w): %.3e, mean(w): %.3e",
time(sim), maximum(abs, δ), w_max, w_min, w_mean)
msg *= @sprintf(", max(u): %.3e, min(u): %.3e, mean(u): %.3e",
u_max, u_min, u_mean)
@info msg
return nothing
end
add_callback!(simulation, progress, IterationInterval(1))
run!(simulation)
```

I then ran this from this branch: https://github.com/CliMA/Oceananigans.jl/tree/glw/test-pressure-correction which is … This produces the output:

```
julia> include("mwe.jl")
Precompiling Oceananigans finished.
1 dependency successfully precompiled in 11 seconds. 140 already precompiled.
[ Info: Oceananigans will use 16 threads
[ Info: Precompiling OceananigansMakieExt [8b7e02c2-18e1-5ade-af7b-cfb5875075c8]
r = 50×1×50 Field{Center, Center, Center} on RectilinearGrid on CPU
├── grid: 50×1×50 RectilinearGrid{Float64, Periodic, Flat, Bounded} on CPU with 4×0×4 halo
├── boundary conditions: FieldBoundaryConditions
│ └── west: Periodic, east: Periodic, south: Nothing, north: Nothing, bottom: ZeroFlux, top: ZeroFlux, immersed: ZeroFlux
└── data: 58×1×58 OffsetArray(::Array{Float64, 3}, -3:54, 1:1, -3:54) with eltype Float64 with indices -3:54×1:1×-3:54
└── max=0.0, min=0.0, mean=0.0
[ Info: Initializing simulation...
[ Info: Time: 0.00, max|δ|: 0.00e+00, max(w): 0.000e+00, min(w): 0.000e+00, mean(w): 0.000e+00, max(u): 0.000e+00, min(u): 0.000e+00, mean(u): 0.000e+00
[ Info: ... simulation initialization complete (4.924 seconds)
[ Info: Executing initial time step...
r = 50×1×50 Field{Center, Center, Center} on RectilinearGrid on CPU
├── grid: 50×1×50 RectilinearGrid{Float64, Periodic, Flat, Bounded} on CPU with 4×0×4 halo
├── boundary conditions: FieldBoundaryConditions
│ └── west: Periodic, east: Periodic, south: Nothing, north: Nothing, bottom: ZeroFlux, top: ZeroFlux, immersed: ZeroFlux
└── data: 58×1×58 OffsetArray(::Array{Float64, 3}, -3:54, 1:1, -3:54) with eltype Float64 with indices -3:54×1:1×-3:54
└── max=5.42101e-17, min=-4.65737e-17, mean=2.07703e-23
r = 50×1×50 Field{Center, Center, Center} on RectilinearGrid on CPU
├── grid: 50×1×50 RectilinearGrid{Float64, Periodic, Flat, Bounded} on CPU with 4×0×4 halo
├── boundary conditions: FieldBoundaryConditions
│ └── west: Periodic, east: Periodic, south: Nothing, north: Nothing, bottom: ZeroFlux, top: ZeroFlux, immersed: ZeroFlux
└── data: 58×1×58 OffsetArray(::Array{Float64, 3}, -3:54, 1:1, -3:54) with eltype Float64 with indices -3:54×1:1×-3:54
└── max=1.32273e-17, min=-1.69136e-17, mean=-6.1664e-23
r = 50×1×50 Field{Center, Center, Center} on RectilinearGrid on CPU
├── grid: 50×1×50 RectilinearGrid{Float64, Periodic, Flat, Bounded} on CPU with 4×0×4 halo
├── boundary conditions: FieldBoundaryConditions
│ └── west: Periodic, east: Periodic, south: Nothing, north: Nothing, bottom: ZeroFlux, top: ZeroFlux, immersed: ZeroFlux
└── data: 58×1×58 OffsetArray(::Array{Float64, 3}, -3:54, 1:1, -3:54) with eltype Float64 with indices -3:54×1:1×-3:54
└── max=2.92735e-17, min=-2.45436e-17, mean=-8.63974e-23
[ Info: Time: 0.01, max|δ|: 2.94e-17, max(w): 4.606e-03, min(w): -1.460e-03, mean(w): -1.911e-20, max(u): 1.468e-03, min(u): -1.468e-03, mean(u): 1.187e-21
[ Info: ... initial time step complete (12.814 seconds).
r = 50×1×50 Field{Center, Center, Center} on RectilinearGrid on CPU
├── grid: 50×1×50 RectilinearGrid{Float64, Periodic, Flat, Bounded} on CPU with 4×0×4 halo
├── boundary conditions: FieldBoundaryConditions
│ └── west: Periodic, east: Periodic, south: Nothing, north: Nothing, bottom: ZeroFlux, top: ZeroFlux, immersed: ZeroFlux
└── data: 58×1×58 OffsetArray(::Array{Float64, 3}, -3:54, 1:1, -3:54) with eltype Float64 with indices -3:54×1:1×-3:54
└── max=6.5269e-17, min=-4.79217e-17, mean=-5.6243e-23
r = 50×1×50 Field{Center, Center, Center} on RectilinearGrid on CPU
├── grid: 50×1×50 RectilinearGrid{Float64, Periodic, Flat, Bounded} on CPU with 4×0×4 halo
├── boundary conditions: FieldBoundaryConditions
│ └── west: Periodic, east: Periodic, south: Nothing, north: Nothing, bottom: ZeroFlux, top: ZeroFlux, immersed: ZeroFlux
└── data: 58×1×58 OffsetArray(::Array{Float64, 3}, -3:54, 1:1, -3:54) with eltype Float64 with indices -3:54×1:1×-3:54
└── max=1.30646e-17, min=-1.09504e-17, mean=1.47723e-22
r = 50×1×50 Field{Center, Center, Center} on RectilinearGrid on CPU
├── grid: 50×1×50 RectilinearGrid{Float64, Periodic, Flat, Bounded} on CPU with 4×0×4 halo
├── boundary conditions: FieldBoundaryConditions
│ └── west: Periodic, east: Periodic, south: Nothing, north: Nothing, bottom: ZeroFlux, top: ZeroFlux, immersed: ZeroFlux
└── data: 58×1×58 OffsetArray(::Array{Float64, 3}, -3:54, 1:1, -3:54) with eltype Float64 with indices -3:54×1:1×-3:54
└── max=3.92481e-17, min=-2.73219e-17, mean=-2.11419e-22
[ Info: Time: 0.02, max|δ|: 3.92e-17, max(w): 9.213e-03, min(w): -2.920e-03, mean(w): -3.457e-20, max(u): 2.936e-03, min(u): -2.936e-03, mean(u): -1.225e-21
r = 50×1×50 Field{Center, Center, Center} on RectilinearGrid on CPU
├── grid: 50×1×50 RectilinearGrid{Float64, Periodic, Flat, Bounded} on CPU with 4×0×4 halo
├── boundary conditions: FieldBoundaryConditions
│ └── west: Periodic, east: Periodic, south: Nothing, north: Nothing, bottom: ZeroFlux, top: ZeroFlux, immersed: ZeroFlux
└── data: 58×1×58 OffsetArray(::Array{Float64, 3}, -3:54, 1:1, -3:54) with eltype Float64 with indices -3:54×1:1×-3:54
└── max=5.11743e-17, min=-4.85723e-17, mean=-6.05798e-22
r = 50×1×50 Field{Center, Center, Center} on RectilinearGrid on CPU
├── grid: 50×1×50 RectilinearGrid{Float64, Periodic, Flat, Bounded} on CPU with 4×0×4 halo
├── boundary conditions: FieldBoundaryConditions
│ └── west: Periodic, east: Periodic, south: Nothing, north: Nothing, bottom: ZeroFlux, top: ZeroFlux, immersed: ZeroFlux
└── data: 58×1×58 OffsetArray(::Array{Float64, 3}, -3:54, 1:1, -3:54) with eltype Float64 with indices -3:54×1:1×-3:54
└── max=1.64257e-17, min=-1.14925e-17, mean=-1.76183e-23
r = 50×1×50 Field{Center, Center, Center} on RectilinearGrid on CPU
├── grid: 50×1×50 RectilinearGrid{Float64, Periodic, Flat, Bounded} on CPU with 4×0×4 halo
├── boundary conditions: FieldBoundaryConditions
│ └── west: Periodic, east: Periodic, south: Nothing, north: Nothing, bottom: ZeroFlux, top: ZeroFlux, immersed: ZeroFlux
└── data: 58×1×58 OffsetArray(::Array{Float64, 3}, -3:54, 1:1, -3:54) with eltype Float64 with indices -3:54×1:1×-3:54
└── max=3.01408e-17, min=-2.68882e-17, mean=-6.24772e-22
[ Info: Time: 0.03, max|δ|: 3.02e-17, max(w): 1.382e-02, min(w): -4.380e-03, mean(w): -3.598e-20, max(u): 4.404e-03, min(u): -4.404e-03, mean(u): 8.159e-21
```

so at least on that commit, the residual is machine eps. (Note that I had to adjust the residual computation, because on …)
Yes, I see. It looks like solving for p*dt instead of p (in …)
True, but if you don't account for the division by dt, then you aren't computing the residual correctly, right?
Right, but before I was accounting for the division by dt in the calculation of the rhs (e.g. …)
Are you using adaptive time stepping, or do you have a …?
My dt should be constant. Sorry, I wasn't being very clear - my branch now solves for p*dt, mirroring what you did above, so the residual looks good. Solving for p seemed to mess up the top cell.
Reverting to the free surface, there seems to be a problem with computing the diagonal in the …
Ok, this sounds like a core issue. First, I would try using AB2 instead to avoid the substep issue. Is that possible? It's achieved by passing …

To allow variable time steps, we may have to use a different formulation of the tridiagonal coefficients that avoids fully specifying arrays. The …

https://github.com/CliMA/Oceananigans.jl/blob/main/src/Solvers/batched_tridiagonal_solver.jl

for example:

Oceananigans.jl/src/Solvers/batched_tridiagonal_solver.jl, lines 219 to 245 in e1f1080

This interface is invoked by the vertical diffusion solver, so an example of using it is provided here:

Oceananigans.jl/src/TurbulenceClosures/vertically_implicit_diffusion_solver.jl, lines 160 to 162 in e1f1080

here:

Oceananigans.jl/src/TurbulenceClosures/vertically_implicit_diffusion_solver.jl, lines 150 to 153 in e1f1080

and here:

Oceananigans.jl/src/TurbulenceClosures/vertically_implicit_diffusion_solver.jl, lines 202 to 204 in e1f1080

For the …, I am wondering if we need to start from scratch here, given the large number of conflicts on this PR and the need for the fairly involved extension of FourierTridiagonalPoissonSolver I just described (which I can help with). It might make sense just to test AB2 for now to see what that gives us? Let me know what you think, @shriyafruitwala, and also if you see a path forward.
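To illustrate the "coefficients as functions of the index" pattern being pointed to, here is a generic standalone sketch (not Oceananigans' actual interface): the Thomas algorithm only needs to evaluate the coefficients at each k, so quantities like Δt can enter through extra arguments at solve time rather than being stored in arrays.

```julia
# Generic Thomas algorithm where the tridiagonal coefficients are supplied as functions
# of the vertical index k (plus any extra parameters, e.g. Δt), so they can be evaluated
# at solve time instead of being stored in arrays. A standalone sketch of the pattern,
# not Oceananigans' actual interface.
function functional_tridiagonal_solve!(ϕ, rhs, lower, diagonal, upper, N, args...)
    cp = zeros(N)   # scratch: modified upper diagonal
    dp = zeros(N)   # scratch: modified right-hand side

    β = diagonal(1, args...)
    cp[1] = upper(1, args...) / β
    dp[1] = rhs[1] / β

    for k in 2:N
        a, b, c = lower(k, args...), diagonal(k, args...), upper(k, args...)
        β = b - a * cp[k-1]
        cp[k] = c / β
        dp[k] = (rhs[k] - a * dp[k-1]) / β
    end

    ϕ[N] = dp[N]
    for k in N-1:-1:1
        ϕ[k] = dp[k] - cp[k] * ϕ[k+1]
    end

    return ϕ
end

# Example: -ϕ″ = 1 with homogeneous Dirichlet ends; the coefficients close over Δz only,
# but a Δt-dependent Robin row could be computed the same way.
N, Δz = 16, 1 / 16
lower_coef(k, Δz) = -1 / Δz^2
upper_coef(k, Δz) = -1 / Δz^2
diag_coef(k, Δz)  =  2 / Δz^2
ϕ = functional_tridiagonal_solve!(zeros(N), ones(N), lower_coef, diag_coef, upper_coef, N, Δz)
```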
I think it makes sense to try and get this working with AB2 before moving to variable dt. I tested the free surface case with AB2, and it blows up almost immediately. As a sanity check, I went back to the rigid lid case, and that seems to yield the same results with AB2 as RK3, so there is probably another issue with the free surface implementation besides the time stepper.

One question that might indicate what is going wrong is the difference between calculating p and p*dt in the rigid lid case. Calculating p*dt seems to give a reasonable answer (as discussed above), but solving for p yields a nonzero Poisson residual in the top cell for certain time steps. Dividing the equation by dt shouldn’t yield a different answer, right? Also, it seems odd that this only happens at the top boundary and not also at the bottom.

I also tested the free surface case with AB2 and calculating p instead of p*dt. This runs without blowing up (the flickering in w that we were seeing before is no longer there). The divergence is pretty consistently nonzero, which is a problem, but w and the free surface evolution look reasonable. I think that understanding this p vs p*dt discrepancy would give us a clue for what’s going wrong.
Correct. There's no mathematical difference between solving for p*dt or p for fixed, non-small …
Do you know why solving for p vs p*dt could result in different answers?
It should not result in different answers. When we made the change on …
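For the record, the rescaling in question (assuming a fixed Δt):

```latex
\[
\nabla^2 (\Delta t \, p) = \nabla \cdot \boldsymbol{u}^\star
\quad \Longleftrightarrow \quad
\nabla^2 p = \frac{\nabla \cdot \boldsymbol{u}^\star}{\Delta t} ,
\qquad
\boldsymbol{u}^{n+1} = \boldsymbol{u}^\star - \nabla (\Delta t \, p) = \boldsymbol{u}^\star - \Delta t \, \nabla p .
\]
```

The two formulations differ only in which factor of Δt the unknown carries; in the Robin case the boundary coefficient and right-hand side have to be rescaled consistently as well, which is one place a discrepancy could creep in.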
That’s interesting - in the rigid lid case, I get a nonzero Poisson residual in the top grid cell for certain time steps when calculating p. This does not happen for p*dt, and I don’t understand why.
Are you assuming a constant …?
I believe …
No, TimeInterval can incur a change in the time-step so that output occurs exactly at the scheduled time. Because of round-off error this can occasionally result in very small time-steps. You should use …
Oh I see, that makes sense.
Looks like it’s working now with AB2! I have plotted the Oceananigans free surface compared to the Fourier analytical solution and provided the code below.

nonhydro_robin_ab2_pdt.jld2_compare_eta.mp4
In terms of next steps, I think we need to clean up my code a bit (e.g. not have gravity hard-coded, use the right dz, etc.). For my problem, I use an ImmersedBoundary, so I need to apply the Robin boundary condition to the CG solver. Where in the solver does the boundary condition get set? Also, there remains the problem with RK3…
Really good news! Here is my thought: there are a lot of conflicts with …

As for problems in complex domains: I suggest setting up a test problem and running simulations with the "naive" solver first. We will want to have that as a baseline before developing a CG solver anyway. We need to develop a new boundary condition type to express Robin boundary conditions, as well as the associated halo-filling algorithm.
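To make that concrete, a hypothetical sketch of what a Robin boundary condition type and its ghost-point (halo) fill could look like; none of these names exist in Oceananigans today, and the real implementation would need to fit the existing boundary condition machinery:

```julia
# Hypothetical sketch of a Robin boundary condition and its ghost-point fill, mirroring
# the ghost-cell closure used in the tridiagonal sketch above. Illustrative only.
struct RobinCondition{T}
    a   :: T   # coefficient on ϕ
    b   :: T   # coefficient on ∂ϕ/∂z
    rhs :: T   # boundary data: a ϕ + b ∂ϕ/∂z = rhs at the boundary face
end

# Fill the ghost value above the top cell of one column so that the discrete condition
#   a (ϕᴺ + ϕᴳ) / 2 + b (ϕᴳ - ϕᴺ) / Δz = rhs
# holds at the surface face.
function fill_top_ghost(ϕ_top_cell, Δz, bc::RobinCondition)
    denom = bc.a / 2 + bc.b / Δz
    α = (bc.a / 2 - bc.b / Δz) / denom
    β = bc.rhs / denom
    return β - α * ϕ_top_cell
end

# Example with the free-surface coefficients discussed above: a = 1, b = g Δt²
g, Δt, Δz = 10.0, 0.1, 1.0
bc = RobinCondition(1.0, g * Δt^2, g * 0.05)   # rhs = g (ηⁿ + Δt w*) for one column
ghost = fill_top_ghost(0.0, Δz, bc)
```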
Okay, great! I had actually pulled in changes from … Also, it might be useful to use this same test case with the deep water waves to test the CG solver, since we have a ground truth for what that should look like. Where do boundary conditions get implemented in the CG solver?
Okay, that makes sense! What are the next steps?
Closes #3946