
Merge pull request #1650 from KOMPALALOKESH/feature/from_a_grid_dataset
added: FieldSet.from_a_grid_dataset
VeckoTheGecko authored Aug 12, 2024
2 parents 5b3a213 + 4d023c8 commit 6e17127
Showing 5 changed files with 49 additions and 30 deletions.
4 changes: 3 additions & 1 deletion docs/contributing.rst
@@ -18,7 +18,7 @@ There are two primary groups that contribute to Parcels; oceanographers who brin

.. note::

-   The first component of this documentation is geared to those new to open source. Already familiar with GitHub and open source? Skip ahead to `"Editing Parcels code" <https://www.notion.so/Parcels-Contributing-md-c8bc5a057b7f47e885f3eee82ea6fdf6?pvs=21>`_. See the panel on the right hand side for the table of contents.
+   The first component of this documentation is geared to those new to open source. Already familiar with GitHub and open source? Skip ahead to the `Editing Parcels code`_ section.

What is open source?
--------------------
@@ -53,6 +53,8 @@ If you're having trouble using Parcels, feel free to create a discussion in our

In the `Projects panel <https://github.com/OceanParcels/parcels/projects?query=is%3Aopen>`_, you'll see the "Parcels development" project. This is used by the core development team for project management, as well as drafting up new ideas for the codebase that aren't mature enough to be issues themselves. Everything in "backlog" is not being actively worked on and is fair game for open source contributions.

+.. _editing-parcels-code:

Editing Parcels code
---------------------

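For context on the contributing.rst change above: the external Notion link is replaced by an internal reStructuredText hyperlink. A minimal sketch of how the explicit label added in this diff and the implicit section-title target interact (the surrounding sentence is illustrative, not part of the commit):

```rst
.. _editing-parcels-code:

Editing Parcels code
---------------------

Skip ahead to the `Editing Parcels code`_ section (an implicit target
created from the section title), or cross-reference the explicit label
from any other document with :ref:`editing-parcels-code`.
```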
2 changes: 1 addition & 1 deletion docs/documentation/index.rst
@@ -4,7 +4,7 @@ Documentation and Tutorials
Parcels has several documentation and tutorial Jupyter notebooks and scripts which go through various aspects of Parcels. Static versions of the notebooks are available below via the gallery in the site, with the interactive notebooks being available either completely online at the following `Binder link <https://mybinder.org/v2/gh/OceanParcels/parcels/master?labpath=docs%2Fexamples%2Fparcels_tutorial.ipynb>`_. Following the gallery of notebooks is a list of scripts which provide additional examples to users. You can work with the example notebooks and scripts locally by downloading :download:`parcels_tutorials.zip </_downloads/parcels_tutorials.zip>` and running with your own Parcels installation.

.. warning::
-   When browsing/downloading the tutorials, it's vital that you are using the documentation corresponding to the version of Parcels that you have installed. You can find which parcels version you have installed by doing ``import parcels`` followed by ``print(parcels.__version__)``. If you don't want to use the latest version of Parcels, you can browse prior versions of the documentation by using the version switcher in the bottom right of this page.
+   When browsing/downloading the tutorials, it's important that you are using the documentation corresponding to the version of Parcels that you have installed. You can find which parcels version you have installed by doing ``import parcels`` followed by ``print(parcels.__version__)``. If you don't want to use the latest version of Parcels, you can browse prior versions of the documentation by using the version switcher in the bottom right of this page.

.. nbgallery::
:caption: Overview
29 changes: 26 additions & 3 deletions parcels/fieldset.py
@@ -533,7 +533,7 @@ def from_nemo(cls, filenames, variables, dimensions, indices=None, mesh='spheric
This flag overrides the allow_time_extrapolation and sets it to False
tracer_interp_method : str
Method for interpolation of tracer fields. It is recommended to use 'cgrid_tracer' (default)
-        Note that in the case of from_nemo() and from_cgrid(), the velocity fields are default to 'cgrid_velocity'
+        Note that in the case of from_nemo() and from_c_grid_dataset(), the velocity fields are default to 'cgrid_velocity'
chunksize :
size of the chunks in dask loading. Default is None (no chunking)
**kwargs :
@@ -645,7 +645,7 @@ def from_c_grid_dataset(cls, filenames, variables, dimensions, indices=None, mes
This flag overrides the allow_time_extrapolation and sets it to False
tracer_interp_method : str
Method for interpolation of tracer fields. It is recommended to use 'cgrid_tracer' (default)
-        Note that in the case of from_nemo() and from_cgrid(), the velocity fields are default to 'cgrid_velocity'
+        Note that in the case of from_nemo() and from_c_grid_dataset(), the velocity fields are default to 'cgrid_velocity'
gridindexingtype : str
The type of gridindexing. Set to 'nemo' in FieldSet.from_nemo()
See also the Grid indexing documentation on oceanparcels.org (Default value = 'nemo')
@@ -745,7 +745,7 @@ def from_pop(cls, filenames, variables, dimensions, indices=None, mesh='spherica
This flag overrides the allow_time_extrapolation and sets it to False
tracer_interp_method : str
Method for interpolation of tracer fields. It is recommended to use 'bgrid_tracer' (default)
-        Note that in the case of from_pop() and from_bgrid(), the velocity fields are default to 'bgrid_velocity'
+        Note that in the case of from_pop() and from_b_grid_dataset(), the velocity fields are default to 'bgrid_velocity'
chunksize :
size of the chunks in dask loading (Default value = None)
depth_units :
@@ -855,6 +855,29 @@ def from_mom5(cls, filenames, variables, dimensions, indices=None, mesh='spheric
fieldset.W.set_scaling_factor(-1)
return fieldset

+    @classmethod
+    def from_a_grid_dataset(cls, filenames, variables, dimensions, **kwargs):
+        """Load a FieldSet from an A-grid dataset, which is the default grid type.
+
+        Parameters
+        ----------
+        filenames :
+            Path(s) to the input files.
+        variables :
+            Dictionary of the variables in the NetCDF file.
+        dimensions :
+            Dictionary of the dimensions in the NetCDF file.
+        **kwargs :
+            Additional keyword arguments for `from_netcdf()`.
+
+        Returns
+        -------
+        FieldSet
+            A FieldSet object.
+        """
+        return cls.from_netcdf(filenames, variables, dimensions, **kwargs)

@classmethod
def from_b_grid_dataset(cls, filenames, variables, dimensions, indices=None, mesh='spherical',
allow_time_extrapolation=None, time_periodic=False,
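The new `FieldSet.from_a_grid_dataset` classmethod above is a thin convenience wrapper that forwards straight to `FieldSet.from_netcdf()`, since A-grid interpolation is Parcels' default. A hedged usage sketch; the file paths and variable names below are hypothetical placeholders, not files shipped with Parcels:

```python
# Hypothetical NetCDF paths and variable names -- substitute your own
# A-grid dataset.
filenames = {'U': 'a_grid_U.nc', 'V': 'a_grid_V.nc'}
variables = {'U': 'uo', 'V': 'vo'}   # names of the variables inside the files
dimensions = {'lon': 'lon', 'lat': 'lat', 'time': 'time'}

# With parcels installed and the files present, this call is equivalent
# to FieldSet.from_netcdf(filenames, variables, dimensions):
# from parcels import FieldSet
# fieldset = FieldSet.from_a_grid_dataset(filenames, variables, dimensions)
```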
7 changes: 2 additions & 5 deletions tests/test_advection.py
@@ -503,12 +503,9 @@ def test_analyticalAgrid(mode):
V = np.ones((lat.size, lon.size), dtype=np.float32)
fieldset = FieldSet.from_data({'U': U, 'V': V}, {'lon': lon, 'lat': lat}, mesh='flat')
pset = ParticleSet(fieldset, pclass=ptype[mode], lon=1, lat=1)
-    failed = False
-    try:
-        pset.execute(AdvectionAnalytical, runtime=1)
-    except NotImplementedError:
-        failed = True
-    assert failed
+    with pytest.raises(NotImplementedError):
+        pset.execute(AdvectionAnalytical, runtime=1)


@pytest.mark.parametrize('mode', ['scipy']) # JIT not implemented
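The test changes in this commit all replace the manual `failed`-flag idiom with `pytest.raises`, which asserts that the expected exception is raised and fails the test otherwise. A minimal standalone illustration of the two patterns; `advect()` here is a stand-in for the real Parcels call, not code from the repository:

```python
import pytest

def advect():
    # Stand-in for e.g. pset.execute(AdvectionAnalytical, runtime=1).
    raise NotImplementedError("analytical advection not supported on this grid")

# Old pattern: track a flag manually around a try/except.
failed = False
try:
    advect()
except NotImplementedError:
    failed = True
assert failed

# New pattern: pytest.raises handles the bookkeeping, and also fails
# the test if no exception (or a different one) is raised.
with pytest.raises(NotImplementedError):
    advect()
```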
37 changes: 17 additions & 20 deletions tests/test_fieldset.py
@@ -68,12 +68,9 @@ def test_fieldset_from_data(xdim, ydim):
def test_fieldset_extra_syntax():
"""Simple test for fieldset initialisation from data."""
data, dimensions = generate_fieldset(10, 10)
-    failed = False
-    try:
-        FieldSet.from_data(data, dimensions, unknown_keyword=5)
-    except SyntaxError:
-        failed = True
-    assert failed
+    with pytest.raises(SyntaxError):
+        FieldSet.from_data(data, dimensions, unknown_keyword=5)


def test_fieldset_vmin_vmax():
@@ -152,13 +149,9 @@ def test_field_from_netcdf_variables():
assert np.allclose(f1.data, f2.data, atol=1e-12)
assert np.allclose(f1.data, f3.data, atol=1e-12)

-    failed = False
-    try:
-        variable = {'U': 'vozocrtx', 'nav_lat': 'nav_lat'}  # multiple variables will fail
-        f3 = Field.from_netcdf(filename, variable, dims)
-    except AssertionError:
-        failed = True
-    assert failed
+    with pytest.raises(AssertionError):
+        variable = {'U': 'vozocrtx', 'nav_lat': 'nav_lat'}  # multiple variables will fail
+        f3 = Field.from_netcdf(filename, variable, dims)


@pytest.mark.parametrize('calendar, cftime_datetime',
@@ -237,6 +230,17 @@ def test_field_from_netcdf_fieldtypes():
fset = FieldSet.from_nemo(filenames, variables, dimensions, fieldtype={'varU': 'U', 'varV': 'V'})
assert isinstance(fset.varU.units, GeographicPolar)

+def test_fieldset_from_agrid_dataset():
+    data_path = os.path.join(os.path.dirname(__file__), 'test_data/')
+
+    filenames = {
+        'lon': data_path + 'mask_nemo_cross_180lon.nc',
+        'lat': data_path + 'mask_nemo_cross_180lon.nc',
+        'data': data_path + 'Uu_eastward_nemo_cross_180lon.nc'
+    }
+    variable = {'U': 'U'}
+    dimensions = {'lon': 'glamf', 'lat': 'gphif'}
+    FieldSet.from_a_grid_dataset(filenames, variable, dimensions)

def test_fieldset_from_cgrid_interpmethod():
data_path = os.path.join(os.path.dirname(__file__), 'test_data/')
@@ -246,13 +250,10 @@ def test_fieldset_from_cgrid_interpmethod():
'data': data_path + 'Uu_eastward_nemo_cross_180lon.nc'}
variable = 'U'
dimensions = {'lon': 'glamf', 'lat': 'gphif'}
-    failed = False
-    try:
-        # should fail because FieldSet.from_c_grid_dataset does not support interp_method
-        FieldSet.from_c_grid_dataset(filenames, variable, dimensions, interp_method='partialslip')
-    except TypeError:
-        failed = True
-    assert failed
+    with pytest.raises(TypeError):
+        # should fail because FieldSet.from_c_grid_dataset does not support interp_method
+        FieldSet.from_c_grid_dataset(filenames, variable, dimensions, interp_method='partialslip')


@pytest.mark.parametrize('cast_data_dtype', ['float32', 'float64'])
@@ -951,12 +952,8 @@ def SampleField(particle, fieldset, time):
pset = ParticleSet(fieldset, pclass=SampleParticle, time=[0, time2], lon=[0.5, 0.5], lat=[0.5, 0.5], depth=[0.5, 0.5])

if time2 > 1:
-        failed = False
-        try:
-            pset.execute(SampleField, runtime=10)
-        except TimeExtrapolationError:
-            failed = True
-        assert failed
+        with pytest.raises(TimeExtrapolationError):
+            pset.execute(SampleField, runtime=10)
else:
pset.execute(SampleField, runtime=1)
assert np.allclose([p.u_kernel for p in pset], [p.u_scipy for p in pset], atol=1e-5)
