Merged
Changes from all commits (22 commits)
3521de7
add output start time and end time test
Oct 14, 2025
659b162
Update tutorial_kernelloop.ipynb
erikvansebille Oct 15, 2025
0f5bdff
First attempt at cleaning up the Kernel Loop
erikvansebille Oct 15, 2025
7c62244
Merge branch 'v4-dev' into pr/2333
erikvansebille Oct 16, 2025
833d62f
Fixing the first adding of dt to time
erikvansebille Oct 16, 2025
c145154
Removing time_nextloop altogether
erikvansebille Oct 16, 2025
8091c1f
Removing write_latest_location
erikvansebille Oct 16, 2025
b2ca55d
reverting change to test_dont_run_particles_outside_starttime
erikvansebille Oct 16, 2025
38b71cf
Fixing particlefile writing for negative dt
erikvansebille Oct 16, 2025
50f9603
Testing for non-negative runtime and outputdt
erikvansebille Oct 16, 2025
930d423
Fixing writing of Kernel loop with delayed starts and sampling
erikvansebille Oct 20, 2025
48da5dd
Adding microsecond dt support
erikvansebille Oct 20, 2025
f504f83
Merge branch 'v4-dev' into pr/2333
erikvansebille Oct 20, 2025
65a7bff
Adding extra particlefile unit test
erikvansebille Oct 20, 2025
c821248
Fixing another particlefile unit test
erikvansebille Oct 21, 2025
2899e37
Add test on writing particle age
erikvansebille Oct 21, 2025
268d294
Removing age-related caveat from Kernel loop tutorial
erikvansebille Oct 21, 2025
0eea8b5
Fixing more particlefile unit tests
erikvansebille Oct 21, 2025
676c188
Merge branch 'v4-dev' into pr/2333
erikvansebille Oct 24, 2025
c8c61d2
More unit tests fixed now that decode_cf works
erikvansebille Oct 24, 2025
83b4989
Merge branch 'v4-dev' into output-time-test
erikvansebille Oct 24, 2025
0af49c2
Merge branch 'v4-dev' into output-time-test
erikvansebille Oct 31, 2025
40 changes: 12 additions & 28 deletions docs/examples_v3/tutorial_kernelloop.ipynb
@@ -26,7 +26,7 @@
"\n",
"When you run a Parcels simulation (i.e. a call to `pset.execute()`), the Kernel loop is the main part of the code that is executed. This part of the code loops through all particles and executes the Kernels that are defined for each particle.\n",
"\n",
"In order to make sure that the displacements of a particle in the different Kernels can be summed, all Kernels add to a _change_ in position (`particles.dlon`, `particles.dlat`, and `particles.ddepth`). This is important, because there are situations where movement kernels would otherwise not commute. Take the example of advecting particles by currents _and_ winds. If the particle would first be moved by the currents and then by the winds, the result could be different from first moving by the winds and then by the currents. Instead, by adding the changes in position, the ordering of the Kernels has no consequence on the particle displacement."
"In order to make sure that the displacements of a particle in the different Kernels can be summed, all Kernels add to a _change_ in position (`particles.dlon`, `particles.dlat`, and `particles.dz`). This is important, because there are situations where movement kernels would otherwise not commute. Take the example of advecting particles by currents _and_ winds. If the particle would first be moved by the currents and then by the winds, the result could be different from first moving by the winds and then by the currents. Instead, by adding the changes in position, the ordering of the Kernels has no consequence on the particle displacement."
]
},
{
@@ -40,29 +40,27 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is a structured overview of the Kernel loop is implemented. Note that this is for longitude only, but the same process is applied for latitude and depth.\n",
"Below is a structured overview of the Kernel loop is implemented. Note that this is for `lon` only, but the same process is applied for `lat` and `z`.\n",
"\n",
"1. Initialise an extra Variable `particles.lon=0` and `particles.time_nextloop = particles.time`\n",
"1. Initialise an extra Variable `particles.dlon=0`\n",
"\n",
"2. Within the Kernel loop, for each particle:<br>\n",
"\n",
" 1. Update `particles.lon += particles.dlon`<br>\n",
"\n",
" 2. Set variable `particles.dlon = 0`<br>\n",
" 2. Update `particles.time += particles.dt` (except for on the first iteration of the Kernel loop)<br>\n",
"\n",
" 3. Update `particles.time = particles.time_nextloop`\n",
" 3. Set variable `particles.dlon = 0`<br>\n",
"\n",
" 4. For each Kernel in the list of Kernels:\n",
" \n",
" 1. Execute the Kernel\n",
" \n",
" 2. Update `particles.dlon` by adding the change in longitude, if needed<br>\n",
"\n",
" 5. Update `particles.time_nextloop += particles.dt`<br>\n",
" 5. If `outputdt` is a multiple of `particle.time`, write `particle.lon` and `particle.time` to zarr output file<br>\n",
"\n",
" 6. If `outputdt` is a multiple of `particle.time`, write `particle.lon` and `particle.time` to zarr output file<br>\n",
"\n",
"Besides having commutable Kernels, the main advantage of this implementation is that, when using Field Sampling with e.g. `particle.temp = fieldset.Temp[particle.time, particle.depth, particle.lat, particle.lon]`, the particle location stays the same throughout the entire Kernel loop. Additionally, this implementation ensures that the particle location is the same as the location of the sampled field in the output file."
"Besides having commutable Kernels, the main advantage of this implementation is that, when using Field Sampling with e.g. `particle.temp = fieldset.Temp[particle.time, particle.z, particle.lat, particle.lon]`, the particle location stays the same throughout the entire Kernel loop. Additionally, this implementation ensures that the particle location is the same as the location of the sampled field in the output file."
]
},
{
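To make the loop order above concrete, here is a minimal runnable sketch of the Kernel loop in plain NumPy, for `lon` only. The function, attribute, and argument names mirror the description in the notebook cell; the whole sketch is an illustration of the documented ordering, not the actual Parcels internals (those live in `src/parcels/_core/kernel.py`, diffed further down this page).

```python
import numpy as np

def kernel_loop_sketch(particles, kernels, endtime, outputdt, write):
    """Sketch of the documented Kernel loop (lon only).

    `particles` is assumed to have NumPy-array attributes lon, dlon, time
    and dt; `kernels` is a list of callables; `write` stands in for the
    zarr output writer. All names here are illustrative assumptions.
    """
    first_iteration = True
    while np.all(particles.time < endtime):
        particles.lon += particles.dlon              # 1. apply the accumulated change
        if not first_iteration:
            particles.time += particles.dt           # 2. advance time (not on the first pass)
        first_iteration = False
        particles.dlon = np.zeros_like(particles.dlon)  # 3. reset the accumulator
        for kernel in kernels:                       # 4. every Kernel adds its displacement,
            kernel(particles)                        #    so Kernel ordering does not matter
        at_output = particles.time % outputdt == 0   # 5. write on output steps
        write(particles.lon[at_output], particles.time[at_output])
```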
@@ -155,10 +153,10 @@
"source": [
"def wind_kernel(particle, fieldset, time):\n",
" particle_dlon += (\n",
" fieldset.UWind[time, particle.depth, particle.lat, particle.lon] * particle.dt\n",
" fieldset.UWind[time, particle.z, particle.lat, particle.lon] * particle.dt\n",
" )\n",
" particle_dlat += (\n",
" fieldset.VWind[time, particle.depth, particle.lat, particle.lon] * particle.dt\n",
" fieldset.VWind[time, particle.z, particle.lat, particle.lon] * particle.dt\n",
" )"
]
},
@@ -242,30 +240,16 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Caveats"
"## Caveat: Avoid updating particle locations directly in Kernels"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"There are a few important considerations to take into account when writing Kernels\n",
"\n",
"### 1. Avoid updating particle locations directly in Kernels\n",
"It is better not to update `particle.lon` directly in a Kernel, as it can interfere with the loop above. Assigning a value to `particle.lon` in a Kernel will throw a warning. \n",
"\n",
"Instead, update the local variable `particle.dlon`.\n",
"\n",
"### 2. Be careful with updating particle variables that do not depend on Fields.\n",
"While assigning the interpolated value of a `Field` to a Particle goes well in the loop above, this is not necessarily so for assigning other attributes. For example, a line like `particle.age += particle.dt` is executed directly so may result in the age being `dt` at `time = 0` in the output file. \n",
"\n",
"A workaround is to either initialise the age to `-dt`, or to increase the `age` only when `particle.time > 0` (using an `np.where` statement).\n",
"\n",
"\n",
"### 3. The last time is not written to file\n",
"Because the location at the start of the loop is written at the end of the Kernel loop, the last `particle.time` of the particle is not written to file. This is similar behaviour to e.g. `np.arange(start, stop)`, which also doesn't include the `stop` value itself. \n",
"\n",
"If you do want to write the last time to file, you can increase the `runtime` or `endtime` by `dt` (although this may cause an OutsideTimeInterval if your run was to the end of the available hydrodynamic data), or you can call `pfile.write_latest_locations(pset, time=pset[0].time_nextloop)`. Note that in the latter case, the particle locations (longitude, latitude and depth) will be updated, but other variables will _not_ be updated as the Kernels are not run again."
"Instead, update the local variable `particle.dlon`."
]
},
{
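To illustrate the caveat, here is a pair of hypothetical kernels in the same style as the tutorial's `wind_kernel`; the westward speed is a made-up value for the sketch.

```python
def MoveWest_wrong(particle, fieldset, time):
    # Don't: assigning to particle.lon bypasses the dlon accumulator,
    # breaks commutativity with other movement kernels, and triggers a warning.
    particle.lon = particle.lon - 0.1 * particle.dt

def MoveWest_right(particle, fieldset, time):
    # Do: add the displacement to the change-in-position variable,
    # so it is summed with the displacements of all other kernels.
    particle_dlon -= 0.1 * particle.dt
```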
@@ -355,7 +339,7 @@
"source": [
"def KeepInOcean(particle, fieldset, time):\n",
" if particle.state == StatusCode.ErrorThroughSurface:\n",
" particle_ddepth = 0.0\n",
" particle_dz = 0.0\n",
" particle.state = StatusCode.Success"
]
},
37 changes: 18 additions & 19 deletions src/parcels/_core/kernel.py
@@ -75,7 +75,7 @@ def __init__(
self._fieldset = fieldset
self._ptype = ptype

self._positionupdate_kernels_added = False
self._positionupdate_kernel_added = False

for f in pyfuncs:
self.check_fieldsets_in_kernels(f)
@@ -111,23 +111,19 @@ def remove_deleted(self, pset):
if len(indices) > 0:
pset.remove_indices(indices)

def add_positionupdate_kernels(self):
def add_positionupdate_kernel(self):
# Adding kernels that set and update the coordinate changes
def Setcoords(particles, fieldset): # pragma: no cover
def PositionUpdate(particles, fieldset): # pragma: no cover
particles.lon += particles.dlon
particles.lat += particles.dlat
particles.z += particles.dz
particles.time += particles.dt

particles.dlon = 0
particles.dlat = 0
particles.dz = 0

particles.time = particles.time_nextloop

def UpdateTime(particles, fieldset): # pragma: no cover
particles.time_nextloop = particles.time + particles.dt

self._pyfuncs = (Setcoords + self + UpdateTime)._pyfuncs
self._pyfuncs = (PositionUpdate + self)._pyfuncs

def check_fieldsets_in_kernels(self, pyfunc): # TODO v4: this can go into another method? assert_is_compatible()?
"""
@@ -234,14 +230,13 @@ def execute(self, pset, endtime, dt):

pset._data["state"][:] = StatusCode.Evaluate

if not self._positionupdate_kernels_added:
self.add_positionupdate_kernels()
self._positionupdate_kernels_added = True

while (len(pset) > 0) and np.any(np.isin(pset.state, [StatusCode.Evaluate, StatusCode.Repeat])):
time_to_endtime = compute_time_direction * (endtime - pset.time_nextloop)
time_to_endtime = compute_time_direction * (endtime - pset.time)

if all(time_to_endtime <= 0):
evaluate_particles = (np.isin(pset.state, [StatusCode.Success, StatusCode.Evaluate])) & (
time_to_endtime >= 0
)
if not np.any(evaluate_particles):
return StatusCode.Success

# adapt dt to end exactly on endtime
@@ -251,7 +246,6 @@
pset.dt = np.minimum(np.maximum(pset.dt, -time_to_endtime), 0)

# run kernels for all particles that need to be evaluated
evaluate_particles = (pset.state == StatusCode.Evaluate) & (pset.dt != 0)
for f in self._pyfuncs:
f(pset[evaluate_particles], self._fieldset)

@@ -265,9 +259,9 @@
if not hasattr(self.fieldset, "RK45_tol"):
pset._data["dt"][:] = dt

# Reset particle state for particles that signalled success and have not reached endtime yet
particles_to_evaluate = (pset.state == StatusCode.Success) & (time_to_endtime > 0)
pset[particles_to_evaluate].state = StatusCode.Evaluate
# Set particle state for particles that reached endtime
particles_endofloop = (pset.state == StatusCode.Evaluate) & (pset.time == endtime)
pset[particles_endofloop].state = StatusCode.EndofLoop

# delete particles that signalled deletion
self.remove_deleted(pset)
Expand All @@ -284,4 +278,9 @@ def execute(self, pset, endtime, dt):
else:
error_func(pset[inds].z, pset[inds].lat, pset[inds].lon)

# Only add PositionUpdate kernel at the end of the first execute call to avoid adding dt to time too early
if not self._positionupdate_kernel_added:
self.add_positionupdate_kernel()
self._positionupdate_kernel_added = True

return pset
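A small illustration of why the PositionUpdate kernel is only appended after the first `execute` call: on that first call the user kernels must see the release time unchanged, and only from the second loop iteration onward is `dt` added. All values below are illustrative.

```python
import numpy as np

t0 = np.datetime64("2025-01-01T00:00")  # release time
dt = np.timedelta64(1, "h")

# time seen by the user kernels on successive loop iterations
times = [t0]                      # first iteration: PositionUpdate not yet in the list
for _ in range(3):
    times.append(times[-1] + dt)  # later iterations: PositionUpdate adds dt first
print(times)  # [t0, t0+1h, t0+2h, t0+3h]: dt is never added "too early"
```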
1 change: 0 additions & 1 deletion src/parcels/_core/particle.py
@@ -180,7 +180,6 @@ def get_default_particle(spatial_dtype: np.float32 | np.float64) -> ParticleClass:
dtype=_SAME_AS_FIELDSET_TIME_INTERVAL.VALUE,
attrs={"standard_name": "time", "units": "seconds", "axis": "T"},
),
Variable("time_nextloop", dtype=_SAME_AS_FIELDSET_TIME_INTERVAL.VALUE, to_write=False),
Variable(
"trajectory",
dtype=np.int64,
35 changes: 9 additions & 26 deletions src/parcels/_core/particlefile.py
@@ -71,6 +71,9 @@ def __init__(self, store, outputdt, chunks=None, create_new_zarrfile=True):
if not isinstance(outputdt, np.timedelta64):
raise ValueError(f"Expected outputdt to be a np.timedelta64 or datetime.timedelta, got {type(outputdt)}")

if outputdt <= np.timedelta64(0, "s"):
raise ValueError(f"outputdt must be a positive non-zero timedelta. Got {outputdt=!r}")

self._outputdt = outputdt

_assert_valid_chunks_tuple(chunks)
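A quick illustration of the new guard; the store path and the construction style here are assumptions for the sketch, not a documented usage.

```python
import numpy as np
from parcels import ParticleFile  # import path assumed

pfile = ParticleFile("out.zarr", outputdt=np.timedelta64(1, "h"))  # accepted
ParticleFile("out.zarr", outputdt=np.timedelta64(0, "s"))   # ValueError: must be positive
ParticleFile("out.zarr", outputdt=np.timedelta64(-6, "h"))  # ValueError: must be positive
```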
@@ -242,23 +245,6 @@ def _write_particle_data(self, *, particle_data, pclass, time_interval, time, in

particle_data["obs_written"][indices_to_write] = obs + 1

def write_latest_locations(self, pset, time):
"""Write the current (latest) particle locations to zarr file.
This can be useful at the end of a pset.execute(), when the last locations are not written yet.
Note that this only updates the locations, not any of the other Variables. Therefore, use with care.

Parameters
----------
pset :
ParticleSet object to write
time :
Time at which to write ParticleSet. Note that typically this would be pset.time_nextloop
"""
for var in ["lon", "lat", "z"]:
pset._data[f"{var}"] += pset._data[f"d{var}"]
pset._data["time"] = pset._data["time_nextloop"]
self.write(pset, time)


def _get_store_from_pathlike(path: Path | str) -> DirectoryStore:
path = str(Path(path)) # Ensure valid path, and convert to string
@@ -301,21 +287,21 @@ def _to_write_particles(particle_data, time):
(
np.less_equal(
time - np.abs(particle_data["dt"] / 2),
particle_data["time_nextloop"],
where=np.isfinite(particle_data["time_nextloop"]),
particle_data["time"],
where=np.isfinite(particle_data["time"]),
)
& np.greater_equal(
time + np.abs(particle_data["dt"] / 2),
particle_data["time_nextloop"],
where=np.isfinite(particle_data["time_nextloop"]),
particle_data["time"],
where=np.isfinite(particle_data["time"]),
) # check time - dt/2 <= particle_data["time"] <= time + dt/2
| (
(np.isnan(particle_data["dt"]))
& np.equal(time, particle_data["time_nextloop"], where=np.isfinite(particle_data["time_nextloop"]))
& np.equal(time, particle_data["time"], where=np.isfinite(particle_data["time"]))
) # or dt is NaN and time matches particle_data["time"]
)
& (np.isfinite(particle_data["trajectory"]))
& (np.isfinite(particle_data["time_nextloop"]))
& (np.isfinite(particle_data["time"]))
)[0]
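The predicate above writes a particle when its `time` lies within half a `dt` of the requested output `time`, or matches it exactly when `dt` is NaN. The same check, reduced to float seconds with hypothetical names:

```python
import numpy as np

def to_write(time, ptime, dt):
    """Indices where time - |dt|/2 <= ptime <= time + |dt|/2,
    or where dt is NaN and ptime == time (frozen particles)."""
    half = np.abs(dt) / 2
    in_window = (time - half <= ptime) & (ptime <= time + half)
    frozen = np.isnan(dt) & (ptime == time)
    return np.where((in_window | frozen) & np.isfinite(ptime))[0]

ptime = np.array([50.0, 10.0, np.nan])
dt = np.array([30.0, 30.0, np.nan])
print(to_write(60.0, ptime, dt))  # -> [0]: only the first particle is written
```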


@@ -324,9 +310,6 @@ def _convert_particle_data_time_to_float_seconds(particle_data, time_interval):
particle_data = particle_data.copy()

particle_data["time"] = ((particle_data["time"] - time_interval.left) / np.timedelta64(1, "s")).astype(np.float64)
particle_data["time_nextloop"] = (
(particle_data["time_nextloop"] - time_interval.left) / np.timedelta64(1, "s")
).astype(np.float64)
particle_data["dt"] = (particle_data["dt"] / np.timedelta64(1, "s")).astype(np.float64)
return particle_data

12 changes: 6 additions & 6 deletions src/parcels/_core/particleset.py
@@ -124,7 +124,6 @@
lat=lat,
z=z,
time=time,
time_nextloop=time,
trajectory=trajectory_ids,
),
)
@@ -421,7 +420,7 @@ def update_dt_dtype(self, dt_dtype: np.dtype):
dt_dtype : np.dtype
New dtype for dt.
"""
if dt_dtype not in [np.timedelta64, "timedelta64[ns]", "timedelta64[ms]", "timedelta64[s]"]:
if dt_dtype not in [np.timedelta64, "timedelta64[ns]", "timedelta64[μs]", "timedelta64[ms]", "timedelta64[s]"]:
raise ValueError(f"dt_dtype must be a numpy timedelta64 dtype. Got {dt_dtype=!r}")

self._data["dt"] = self._data["dt"].astype(dt_dtype)
@@ -516,15 +515,16 @@ def execute(
raise ValueError(
f"The runtime must be a datetime.timedelta or np.timedelta64 object. Got {type(runtime)}"
) from e
if runtime < np.timedelta64(0, "s"):
raise ValueError(f"The runtime must be a non-negative timedelta. Got {runtime=!r}")

start_time, end_time = _get_simulation_start_and_end_times(
self.fieldset.time_interval, self._data["time_nextloop"], runtime, endtime, sign_dt
self.fieldset.time_interval, self._data["time"], runtime, endtime, sign_dt
)

# Set the time of the particles if it hadn't been set on initialisation
if np.isnat(self._data["time"]).any():
self._data["time"][:] = start_time
self._data["time_nextloop"][:] = start_time

outputdt = output_file.outputdt if output_file else None

@@ -536,7 +536,7 @@
pbar = tqdm(total=(end_time - start_time) / np.timedelta64(1, "s"), file=sys.stdout)
pbar.set_description("Integration time: " + str(start_time))

next_output = start_time + sign_dt * outputdt if output_file else None
next_output = start_time if output_file else None

time = start_time
while sign_dt * (time - end_time) < 0:
@@ -553,7 +553,7 @@
if output_file:
output_file.write(self, next_output)
if np.isfinite(outputdt):
next_output += outputdt
next_output += outputdt * sign_dt

if verbose_progress:
pbar.set_description("Integration time: " + str(time))
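Because `next_output` now starts at `start_time` and steps by `outputdt * sign_dt`, the first output frame is written at the simulation start for both forward and backward runs. A standalone sketch of the resulting sequence, with illustrative values:

```python
import numpy as np

start_time = np.datetime64("2025-01-10T00:00")
outputdt = np.timedelta64(6, "h")

for sign_dt in (1, -1):  # forward and backward integration
    next_output, times = start_time, []
    for _ in range(4):
        times.append(next_output)
        next_output += outputdt * sign_dt
    print(sign_dt, times)
# sign_dt=1:  Jan 10 00:00, 06:00, 12:00, 18:00
# sign_dt=-1: Jan 10 00:00, then Jan 9 18:00, 12:00, 06:00
```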
1 change: 1 addition & 0 deletions src/parcels/_core/statuscodes.py
@@ -20,6 +20,7 @@ class StatusCode:
"""Class defining the status codes for particles.state."""

Success = 0
EndofLoop = 1
Evaluate = 10
Repeat = 20
Delete = 30
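`EndofLoop` is the state that `Kernel.execute` now assigns to particles that have integrated exactly to `endtime` (see the kernel.py hunk above). A minimal sketch of inspecting it, assuming the usual top-level import:

```python
from parcels import StatusCode  # import path assumed

def count_finished(pset):
    # particles flagged EndofLoop reached endtime in the last execute call
    return int((pset.state == StatusCode.EndofLoop).sum())
```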