Commit 76212f6

Merge branch 'v4-dev' into implementing_field_interpolation_api
2 parents cb24503 + 698de64 commit 76212f6

File tree: 12 files changed, +149 −169 lines

.github/workflows/cache-pixi-lock.yml

Lines changed: 4 additions & 1 deletion

@@ -2,6 +2,9 @@ name: Generate and cache Pixi lockfile
 
 on:
   workflow_call:
+    inputs:
+      pixi-version:
+        type: string
     outputs:
       cache-id:
         description: "The lock file contents"
@@ -30,7 +33,7 @@ jobs:
       - uses: prefix-dev/[email protected]
         if: ${{ !steps.restore.outputs.cache-hit }}
         with:
-          pixi-version: v0.56.0
+          pixi-version: ${{ inputs.pixi-version }}
           run-install: false
      - name: Run pixi lock
        if: ${{ !steps.restore.outputs.cache-hit }}

docs/examples_v3/tutorial_kernelloop.ipynb

Lines changed: 12 additions & 28 deletions

@@ -26,7 +26,7 @@
     "\n",
     "When you run a Parcels simulation (i.e. a call to `pset.execute()`), the Kernel loop is the main part of the code that is executed. This part of the code loops through all particles and executes the Kernels that are defined for each particle.\n",
     "\n",
-    "In order to make sure that the displacements of a particle in the different Kernels can be summed, all Kernels add to a _change_ in position (`particles.dlon`, `particles.dlat`, and `particles.ddepth`). This is important, because there are situations where movement kernels would otherwise not commute. Take the example of advecting particles by currents _and_ winds. If the particle would first be moved by the currents and then by the winds, the result could be different from first moving by the winds and then by the currents. Instead, by adding the changes in position, the ordering of the Kernels has no consequence on the particle displacement."
+    "In order to make sure that the displacements of a particle in the different Kernels can be summed, all Kernels add to a _change_ in position (`particles.dlon`, `particles.dlat`, and `particles.dz`). This is important, because there are situations where movement kernels would otherwise not commute. Take the example of advecting particles by currents _and_ winds. If the particle would first be moved by the currents and then by the winds, the result could be different from first moving by the winds and then by the currents. Instead, by adding the changes in position, the ordering of the Kernels has no consequence on the particle displacement."
     ]
   },
   {
@@ -40,29 +40,27 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-    "Below is a structured overview of the Kernel loop is implemented. Note that this is for longitude only, but the same process is applied for latitude and depth.\n",
+    "Below is a structured overview of how the Kernel loop is implemented. Note that this is for `lon` only, but the same process is applied for `lat` and `z`.\n",
     "\n",
-    "1. Initialise an extra Variable `particles.lon=0` and `particles.time_nextloop = particles.time`\n",
+    "1. Initialise an extra Variable `particles.dlon=0`\n",
     "\n",
     "2. Within the Kernel loop, for each particle:<br>\n",
     "\n",
     "   1. Update `particles.lon += particles.dlon`<br>\n",
     "\n",
-    "   2. Set variable `particles.dlon = 0`<br>\n",
+    "   2. Update `particles.time += particles.dt` (except on the first iteration of the Kernel loop)<br>\n",
     "\n",
-    "   3. Update `particles.time = particles.time_nextloop`\n",
+    "   3. Set variable `particles.dlon = 0`<br>\n",
     "\n",
     "   4. For each Kernel in the list of Kernels:\n",
     "   \n",
     "      1. Execute the Kernel\n",
     "   \n",
     "      2. Update `particles.dlon` by adding the change in longitude, if needed<br>\n",
     "\n",
-    "   5. Update `particles.time_nextloop += particles.dt`<br>\n",
+    "   5. If `outputdt` is a multiple of `particle.time`, write `particle.lon` and `particle.time` to the zarr output file<br>\n",
     "\n",
-    "   6. If `outputdt` is a multiple of `particle.time`, write `particle.lon` and `particle.time` to zarr output file<br>\n",
-    "\n",
-    "Besides having commutable Kernels, the main advantage of this implementation is that, when using Field Sampling with e.g. `particle.temp = fieldset.Temp[particle.time, particle.depth, particle.lat, particle.lon]`, the particle location stays the same throughout the entire Kernel loop. Additionally, this implementation ensures that the particle location is the same as the location of the sampled field in the output file."
+    "Besides having commutable Kernels, the main advantage of this implementation is that, when using Field Sampling with e.g. `particle.temp = fieldset.Temp[particle.time, particle.z, particle.lat, particle.lon]`, the particle location stays the same throughout the entire Kernel loop. Additionally, this implementation ensures that the particle location is the same as the location of the sampled field in the output file."
     ]
   },
   {
@@ -155,10 +153,10 @@
     "source": [
     "def wind_kernel(particle, fieldset, time):\n",
     "    particle_dlon += (\n",
-    "        fieldset.UWind[time, particle.depth, particle.lat, particle.lon] * particle.dt\n",
+    "        fieldset.UWind[time, particle.z, particle.lat, particle.lon] * particle.dt\n",
     "    )\n",
     "    particle_dlat += (\n",
-    "        fieldset.VWind[time, particle.depth, particle.lat, particle.lon] * particle.dt\n",
+    "        fieldset.VWind[time, particle.z, particle.lat, particle.lon] * particle.dt\n",
     "    )"
     ]
   },
@@ -242,30 +240,16 @@
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-    "## Caveats"
+    "## Caveat: Avoid updating particle locations directly in Kernels"
     ]
   },
   {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
-    "There are a few important considerations to take into account when writing Kernels\n",
-    "\n",
-    "### 1. Avoid updating particle locations directly in Kernels\n",
     "It is better not to update `particle.lon` directly in a Kernel, as it can interfere with the loop above. Assigning a value to `particle.lon` in a Kernel will throw a warning. \n",
     "\n",
-    "Instead, update the local variable `particle.dlon`.\n",
-    "\n",
-    "### 2. Be careful with updating particle variables that do not depend on Fields.\n",
-    "While assigning the interpolated value of a `Field` to a Particle goes well in the loop above, this is not necessarily so for assigning other attributes. For example, a line like `particle.age += particle.dt` is executed directly so may result in the age being `dt` at `time = 0` in the output file. \n",
-    "\n",
-    "A workaround is to either initialise the age to `-dt`, or to increase the `age` only when `particle.time > 0` (using an `np.where` statement).\n",
-    "\n",
-    "\n",
-    "### 3. The last time is not written to file\n",
-    "Because the location at the start of the loop is written at the end of the Kernel loop, the last `particle.time` of the particle is not written to file. This is similar behaviour to e.g. `np.arange(start, stop)`, which also doesn't include the `stop` value itself. \n",
-    "\n",
-    "If you do want to write the last time to file, you can increase the `runtime` or `endtime` by `dt` (although this may cause an OutsideTimeInterval if your run was to the end of the available hydrodynamic data), or you can call `pfile.write_latest_locations(pset, time=pset[0].time_nextloop)`. Note that in the latter case, the particle locations (longitude, latitude and depth) will be updated, but other variables will _not_ be updated as the Kernels are not run again."
+    "Instead, update the local variable `particle.dlon`."
     ]
   },
   {
@@ -355,7 +339,7 @@
     "source": [
     "def KeepInOcean(particle, fieldset, time):\n",
     "    if particle.state == StatusCode.ErrorThroughSurface:\n",
-    "        particle_ddepth = 0.0\n",
+    "        particle_dz = 0.0\n",
     "    particle.state = StatusCode.Success"
     ]
   },
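The notebook section above explains that kernels accumulate a change in position (`dlon`) rather than moving the particle directly, so the ordering of movement kernels has no effect. A minimal toy sketch of that scheme (hypothetical kernels, not the Parcels API):

```python
# Toy stand-in for the Kernel loop described in the tutorial: each kernel adds
# to a *change* in position (dlon); the position itself is only updated once
# per loop iteration, so kernel ordering does not matter.

def currents_kernel(lon, dlon, dt):
    return dlon + 0.5 * dt  # pretend zonal current displacement

def winds_kernel(lon, dlon, dt):
    return dlon + 0.1 * lon * dt  # pretend wind drift that depends on lon

def run_loop(kernels, lon=1.0, dt=1.0, steps=3):
    dlon = 0.0
    for _ in range(steps):
        lon += dlon      # apply the change accumulated in the previous iteration
        dlon = 0.0       # reset before running the kernels
        for kernel in kernels:
            dlon = kernel(lon, dlon, dt)  # kernels only see the frozen lon
    return lon, dlon

# Both orderings give identical trajectories, because each kernel reads the
# same (unchanged) lon and only adds to dlon.
a = run_loop([currents_kernel, winds_kernel])
b = run_loop([winds_kernel, currents_kernel])
print(a == b)  # True
```

If the kernels instead updated `lon` in place, the wind kernel would see a position already shifted by the currents, and the two orderings would diverge.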

src/parcels/_core/kernel.py

Lines changed: 18 additions & 19 deletions

@@ -75,7 +75,7 @@ def __init__(
         self._fieldset = fieldset
         self._ptype = ptype
 
-        self._positionupdate_kernels_added = False
+        self._positionupdate_kernel_added = False
 
         for f in pyfuncs:
             self.check_fieldsets_in_kernels(f)
@@ -111,23 +111,19 @@ def remove_deleted(self, pset):
         if len(indices) > 0:
             pset.remove_indices(indices)
 
-    def add_positionupdate_kernels(self):
+    def add_positionupdate_kernel(self):
         # Adding kernels that set and update the coordinate changes
-        def Setcoords(particles, fieldset):  # pragma: no cover
+        def PositionUpdate(particles, fieldset):  # pragma: no cover
             particles.lon += particles.dlon
             particles.lat += particles.dlat
             particles.z += particles.dz
+            particles.time += particles.dt
 
             particles.dlon = 0
             particles.dlat = 0
             particles.dz = 0
 
-            particles.time = particles.time_nextloop
-
-        def UpdateTime(particles, fieldset):  # pragma: no cover
-            particles.time_nextloop = particles.time + particles.dt
-
-        self._pyfuncs = (Setcoords + self + UpdateTime)._pyfuncs
+        self._pyfuncs = (PositionUpdate + self)._pyfuncs
 
     def check_fieldsets_in_kernels(self, pyfunc):  # TODO v4: this can go into another method? assert_is_compatible()?
         """
@@ -234,14 +230,13 @@ def execute(self, pset, endtime, dt):
 
         pset._data["state"][:] = StatusCode.Evaluate
 
-        if not self._positionupdate_kernels_added:
-            self.add_positionupdate_kernels()
-            self._positionupdate_kernels_added = True
-
         while (len(pset) > 0) and np.any(np.isin(pset.state, [StatusCode.Evaluate, StatusCode.Repeat])):
-            time_to_endtime = compute_time_direction * (endtime - pset.time_nextloop)
+            time_to_endtime = compute_time_direction * (endtime - pset.time)
 
-            if all(time_to_endtime <= 0):
+            evaluate_particles = (np.isin(pset.state, [StatusCode.Success, StatusCode.Evaluate])) & (
+                time_to_endtime >= 0
+            )
+            if not np.any(evaluate_particles):
                 return StatusCode.Success
 
             # adapt dt to end exactly on endtime
@@ -251,7 +246,6 @@ def execute(self, pset, endtime, dt):
                 pset.dt = np.minimum(np.maximum(pset.dt, -time_to_endtime), 0)
 
             # run kernels for all particles that need to be evaluated
-            evaluate_particles = (pset.state == StatusCode.Evaluate) & (pset.dt != 0)
             for f in self._pyfuncs:
                 f(pset[evaluate_particles], self._fieldset)
 
@@ -265,9 +259,9 @@ def execute(self, pset, endtime, dt):
             if not hasattr(self.fieldset, "RK45_tol"):
                 pset._data["dt"][:] = dt
 
-            # Reset particle state for particles that signalled success and have not reached endtime yet
-            particles_to_evaluate = (pset.state == StatusCode.Success) & (time_to_endtime > 0)
-            pset[particles_to_evaluate].state = StatusCode.Evaluate
+            # Set particle state for particles that reached endtime
+            particles_endofloop = (pset.state == StatusCode.Evaluate) & (pset.time == endtime)
+            pset[particles_endofloop].state = StatusCode.EndofLoop
 
             # delete particles that signalled deletion
             self.remove_deleted(pset)
@@ -284,4 +278,9 @@ def execute(self, pset, endtime, dt):
             else:
                 error_func(pset[inds].z, pset[inds].lat, pset[inds].lon)
 
+        # Only add PositionUpdate kernel at the end of the first execute call to avoid adding dt to time too early
+        if not self._positionupdate_kernel_added:
+            self.add_positionupdate_kernel()
+            self._positionupdate_kernel_added = True
+
         return pset
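The kernel.py diff above merges the old `Setcoords`/`UpdateTime` pair into a single `PositionUpdate` kernel that is prepended to the user kernels, and defers adding it until the end of the first `execute` call so that `dt` is not added to `time` on the very first iteration. A schematic toy sketch of that deferral (simplified class and names, not the Parcels implementation):

```python
# Schematic sketch of the deferred PositionUpdate scheme: the update kernel is
# prepended only after the first execute() call, so on the first iteration the
# user kernels run at the particle's initial time/position, and time += dt
# happens at the *start* of every later iteration.

def position_update(state):
    state["lon"] += state["dlon"]
    state["time"] += state["dt"]
    state["dlon"] = 0.0

def advect(state):
    state["dlon"] += 1.0 * state["dt"]  # hypothetical constant velocity

class ToyKernel:
    def __init__(self, user_kernels):
        self.kernels = list(user_kernels)
        self._positionupdate_kernel_added = False

    def execute(self, state):
        # state is a dict with keys: lon, dlon, time, dt
        for k in self.kernels:
            k(state)
        # Only prepend the position update at the end of the first execute call
        if not self._positionupdate_kernel_added:
            self.kernels.insert(0, position_update)
            self._positionupdate_kernel_added = True
        return state

kernel = ToyKernel([advect])
s = {"lon": 0.0, "dlon": 0.0, "time": 0.0, "dt": 1.0}
kernel.execute(s)  # first call: advect runs at time 0, no time update yet
kernel.execute(s)  # later calls: position_update runs first, then advect
print(s["time"], s["lon"])  # 1.0 1.0
```

After the second call the particle has been moved by the first call's displacement and its clock advanced by one `dt`, while the second call's displacement is still pending in `dlon`.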

src/parcels/_core/particle.py

Lines changed: 0 additions & 1 deletion

@@ -180,7 +180,6 @@ def get_default_particle(spatial_dtype: np.float32 | np.float64) -> ParticleClass
             dtype=_SAME_AS_FIELDSET_TIME_INTERVAL.VALUE,
             attrs={"standard_name": "time", "units": "seconds", "axis": "T"},
         ),
-        Variable("time_nextloop", dtype=_SAME_AS_FIELDSET_TIME_INTERVAL.VALUE, to_write=False),
         Variable(
             "trajectory",
             dtype=np.int64,

src/parcels/_core/particlefile.py

Lines changed: 9 additions & 26 deletions

@@ -71,6 +71,9 @@ def __init__(self, store, outputdt, chunks=None, create_new_zarrfile=True):
         if not isinstance(outputdt, np.timedelta64):
             raise ValueError(f"Expected outputdt to be a np.timedelta64 or datetime.timedelta, got {type(outputdt)}")
 
+        if outputdt <= np.timedelta64(0, "s"):
+            raise ValueError(f"outputdt must be a positive non-zero timedelta. Got {outputdt=!r}")
+
         self._outputdt = outputdt
 
         _assert_valid_chunks_tuple(chunks)
@@ -242,23 +245,6 @@ def _write_particle_data(self, *, particle_data, pclass, time_interval, time, in
 
         particle_data["obs_written"][indices_to_write] = obs + 1
 
-    def write_latest_locations(self, pset, time):
-        """Write the current (latest) particle locations to zarr file.
-
-        This can be useful at the end of a pset.execute(), when the last locations are not written yet.
-        Note that this only updates the locations, not any of the other Variables. Therefore, use with care.
-
-        Parameters
-        ----------
-        pset :
-            ParticleSet object to write
-        time :
-            Time at which to write ParticleSet. Note that typically this would be pset.time_nextloop
-        """
-        for var in ["lon", "lat", "z"]:
-            pset._data[f"{var}"] += pset._data[f"d{var}"]
-        pset._data["time"] = pset._data["time_nextloop"]
-        self.write(pset, time)
-
 
 def _get_store_from_pathlike(path: Path | str) -> DirectoryStore:
     path = str(Path(path))  # Ensure valid path, and convert to string
@@ -301,21 +287,21 @@ def _to_write_particles(particle_data, time):
         (
             np.less_equal(
                 time - np.abs(particle_data["dt"] / 2),
-                particle_data["time_nextloop"],
-                where=np.isfinite(particle_data["time_nextloop"]),
+                particle_data["time"],
+                where=np.isfinite(particle_data["time"]),
             )
             & np.greater_equal(
                 time + np.abs(particle_data["dt"] / 2),
-                particle_data["time_nextloop"],
-                where=np.isfinite(particle_data["time_nextloop"]),
+                particle_data["time"],
+                where=np.isfinite(particle_data["time"]),
             )  # check time - dt/2 <= particle_data["time"] <= time + dt/2
             | (
                 (np.isnan(particle_data["dt"]))
-                & np.equal(time, particle_data["time_nextloop"], where=np.isfinite(particle_data["time_nextloop"]))
+                & np.equal(time, particle_data["time"], where=np.isfinite(particle_data["time"]))
             )  # or dt is NaN and time matches particle_data["time"]
         )
         & (np.isfinite(particle_data["trajectory"]))
-        & (np.isfinite(particle_data["time_nextloop"]))
+        & (np.isfinite(particle_data["time"]))
     )[0]
 
 
@@ -324,9 +310,6 @@ def _convert_particle_data_time_to_float_seconds(particle_data, time_interval):
     particle_data = particle_data.copy()
 
     particle_data["time"] = ((particle_data["time"] - time_interval.left) / np.timedelta64(1, "s")).astype(np.float64)
-    particle_data["time_nextloop"] = (
-        (particle_data["time_nextloop"] - time_interval.left) / np.timedelta64(1, "s")
-    ).astype(np.float64)
     particle_data["dt"] = (particle_data["dt"] / np.timedelta64(1, "s")).astype(np.float64)
     return particle_data
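The `_to_write_particles` hunk above selects particles whose own `time` lies within half a `dt` of the requested output time (or, when `dt` is NaN, whose time matches exactly). A simplified NumPy sketch of that window test (hypothetical helper; it omits the `where=` masking of non-finite times and the trajectory check):

```python
import numpy as np

# Simplified sketch of the _to_write_particles selection after this diff:
# write a particle at output time t when  t - |dt|/2 <= time <= t + |dt|/2,
# or when dt is NaN and the times match exactly.

def to_write(time_arr, dt_arr, t):
    half = np.abs(dt_arr / 2)
    in_window = (t - half <= time_arr) & (time_arr <= t + half)
    nan_dt_match = np.isnan(dt_arr) & (time_arr == t)
    return np.where(in_window | nan_dt_match)[0]

times = np.array([0.0, 0.4, 1.0, 2.0])
dts = np.array([1.0, 1.0, np.nan, 1.0])
print(to_write(times, dts, 0.0))  # particles 0 and 1 fall inside [-0.5, 0.5]
print(to_write(times, dts, 1.0))  # only the NaN-dt particle matches exactly
```

The half-`dt` window makes the selection robust to particles whose clocks do not land exactly on the output times.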

src/parcels/_core/particleset.py

Lines changed: 6 additions & 6 deletions

@@ -124,7 +124,6 @@ def __init__(
             lat=lat,
             z=z,
             time=time,
-            time_nextloop=time,
             trajectory=trajectory_ids,
         ),
     )
@@ -421,7 +420,7 @@ def update_dt_dtype(self, dt_dtype: np.dtype):
         dt_dtype : np.dtype
             New dtype for dt.
         """
-        if dt_dtype not in [np.timedelta64, "timedelta64[ns]", "timedelta64[ms]", "timedelta64[s]"]:
+        if dt_dtype not in [np.timedelta64, "timedelta64[ns]", "timedelta64[μs]", "timedelta64[ms]", "timedelta64[s]"]:
             raise ValueError(f"dt_dtype must be a numpy timedelta64 dtype. Got {dt_dtype=!r}")
 
         self._data["dt"] = self._data["dt"].astype(dt_dtype)
@@ -516,15 +515,16 @@ def execute(
                 raise ValueError(
                     f"The runtime must be a datetime.timedelta or np.timedelta64 object. Got {type(runtime)}"
                 ) from e
+            if runtime < np.timedelta64(0, "s"):
+                raise ValueError(f"The runtime must be a non-negative timedelta. Got {runtime=!r}")
 
         start_time, end_time = _get_simulation_start_and_end_times(
-            self.fieldset.time_interval, self._data["time_nextloop"], runtime, endtime, sign_dt
+            self.fieldset.time_interval, self._data["time"], runtime, endtime, sign_dt
         )
 
         # Set the time of the particles if it hadn't been set on initialisation
         if np.isnat(self._data["time"]).any():
             self._data["time"][:] = start_time
-            self._data["time_nextloop"][:] = start_time
 
         outputdt = output_file.outputdt if output_file else None
 
@@ -536,7 +536,7 @@ def execute(
             pbar = tqdm(total=(end_time - start_time) / np.timedelta64(1, "s"), file=sys.stdout)
             pbar.set_description("Integration time: " + str(start_time))
 
-        next_output = start_time + sign_dt * outputdt if output_file else None
+        next_output = start_time if output_file else None
 
         time = start_time
         while sign_dt * (time - end_time) < 0:
@@ -553,7 +553,7 @@ def execute(
             if output_file:
                 output_file.write(self, next_output)
                 if np.isfinite(outputdt):
-                    next_output += outputdt
+                    next_output += outputdt * sign_dt
 
             if verbose_progress:
                 pbar.set_description("Integration time: " + str(time))
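The particleset.py hunks above move the first output to `start_time` itself and sign the output step with `sign_dt`, so backward-in-time runs produce a decreasing output schedule. A toy sketch of the resulting schedule (hypothetical helper, not the Parcels implementation; the loop body stands in for one chunk of the kernel loop):

```python
# Sketch of the output-time schedule implied by this diff: the first write is
# at start_time, and each subsequent output steps by outputdt * sign_dt, so a
# backward run (sign_dt = -1) yields decreasing output times.

def output_times(start_time, end_time, outputdt, sign_dt):
    times = []
    next_output = start_time              # first output at start_time (this diff)
    time = start_time
    while sign_dt * (time - end_time) < 0:
        time += sign_dt * outputdt        # stand-in for one chunk of kernel loop
        times.append(next_output)
        next_output += outputdt * sign_dt  # signed step (this diff)
    return times

print(output_times(0.0, 3.0, 1.0, 1))   # forward run: [0.0, 1.0, 2.0]
print(output_times(3.0, 0.0, 1.0, -1))  # backward run: [3.0, 2.0, 1.0]
```

Before this change, `next_output` started one `outputdt` past `start_time` and was always incremented with a positive step, which is wrong for backward-in-time integration.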

src/parcels/_core/statuscodes.py

Lines changed: 1 addition & 0 deletions

@@ -20,6 +20,7 @@ class StatusCode:
     """Class defining the status codes for particles.state."""
 
     Success = 0
+    EndofLoop = 1
     Evaluate = 10
     Repeat = 20
     Delete = 30
