
Commit b1bab6b

Merge pull request #1 from jr3cermak/esmf840 (esmf840)
2 parents 34739e4 + 50f7f2a

File tree

6 files changed: +379 −15 lines

setup/CONDA.md

Lines changed: 89 additions & 0 deletions
@@ -0,0 +1,89 @@
# Conda Environment Instructions

## Python 3.7.x

1) `conda create -n xesmf_env python=3.7`
2) `conda activate xesmf_env`
3) `conda install -c conda-forge xesmf=0.3.0 esmpy=8.2.0 bottleneck=1.3.5`
4) `conda install -c conda-forge dask=2021.10.0 netcdf4`

## Python 3.9.x

NOTE: With this method, the resolver takes a long time to find a
compatible combination of packages.

`$ conda create -n xesmf_env_test -c conda-forge xesmf=0.3.0 esmpy=8.2.0 bottleneck=1.3.5 dask=2021.10.0 netcdf4`

Last checked on Jan 26, 2023.

## Python 3.10.x

1) `conda create -n xesmf_env python=3.10.8`
2) `conda activate xesmf_env`
3) `conda install -c conda-forge xesmf=0.3.0 esmpy=8.2.0 bottleneck=1.3.5`
4) `conda install -c conda-forge dask=2021.10.0 netcdf4`

Last checked on Jan 26, 2023.

## Python 3.11.x

Speed improvements have been reported for this version of Python.

## Jupyter

5) `conda install -c conda-forge jupyter jupyterlab numba nodejs`

## Flooding

If flooding is required:

6) Install HCtFlood

```
git clone https://github.com/raphaeldussin/HCtFlood
cd HCtFlood
python -m pip install -e .
```
If HCtFlood requires more iterations to converge on a solution,
here is the needed modification:

```
diff --git a/HCtFlood/kara.py b/HCtFlood/kara.py
index 539050b..00201f0 100644
--- a/HCtFlood/kara.py
+++ b/HCtFlood/kara.py
@@ -106,7 +106,7 @@ def flood_kara_ma(masked_array, spval=1e+15):

 @njit
-def flood_kara_raw(field, mask, nmax=1000):
+def flood_kara_raw(field, mask, nmax=2000):
     """Extrapolate land values onto land using the kara method
     (https://doi.org/10.1175/JPO2984.1)
```
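The role of `nmax` can be sketched with a simplified neighbor-fill loop. This is only a toy stand-in for HCtFlood's `flood_kara_raw`, not its actual implementation; it shows why a grid with large unfilled regions may need more sweeps to converge:

```python
import numpy as np

def flood_simple(field, mask, nmax=2000):
    """Toy flood fill: repeatedly copy the mean of valid neighbors onto
    masked points until everything is filled or nmax sweeps are reached.
    (np.roll wraps around the edges, which is fine for this sketch.)"""
    out = field.astype(float).copy()
    valid = mask.astype(bool).copy()   # True where data is already valid
    for _ in range(nmax):
        if valid.all():
            break
        nsum = np.zeros_like(out)      # sum of valid neighbor values
        ncnt = np.zeros_like(out)      # number of valid neighbors
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            v = np.roll(valid, shift, axis=axis)
            f = np.roll(out, shift, axis=axis)
            nsum += np.where(v, f, 0.0)
            ncnt += v
        fillable = (~valid) & (ncnt > 0)
        out[fillable] = nsum[fillable] / ncnt[fillable]
        valid |= fillable
    return out
```

Each sweep only advances the fill front by one cell, so a too-small `nmax` can leave interior points untouched; raising it (as the patch above does) simply allows more sweeps.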
# Notes

- ESMF packages must have MPI support; see below.
- You must use `xesmf=0.3.0` to be able to create and reuse weight files.
- There is also a problem with later versions of dask; `dask=2021.10.0` is recommended.

After the conda packages are installed, run `conda list | grep mpi`; the
output should look similar to the following:

```
esmf                 8.2.0     mpi_mpich_h4975321_100        conda-forge
esmpy                8.2.0     mpi_mpich_py37h7352969_101    conda-forge
fftw                 3.3.10    nompi_h77c792f_102            conda-forge
hdf5                 1.12.1    mpi_mpich_h9c45103_0          conda-forge
libnetcdf            4.8.1     mpi_mpich_h319fa22_1          conda-forge
mpi                  1.0       mpich                         conda-forge
mpi4py               3.1.3     py37h52370cb_1                conda-forge
mpich                4.0.2     h846660c_100                  conda-forge
netcdf-fortran       4.5.4     mpi_mpich_h1364a43_0          conda-forge
netcdf4              1.5.8     nompi_py37hf784469_101        conda-forge
```

It is very important that esmf, esmpy, libnetcdf, hdf5 and netcdf-fortran have
`mpi_mpich` within the build name (3rd column) of the package listing.
If they show up as `nompi`, then ESMF will not work.
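This check can be automated. A small sketch, assuming `conda list` output in the three-column format shown above; the function name and the set of critical packages are taken from the note itself:

```python
# Critical ESMF-stack packages that must have MPI-enabled builds.
CRITICAL = {'esmf', 'esmpy', 'libnetcdf', 'hdf5', 'netcdf-fortran'}

def nompi_packages(conda_list_lines):
    """Return the critical packages whose build string (3rd column of
    `conda list` output) does not start with 'mpi_'."""
    bad = []
    for line in conda_list_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[0] in CRITICAL:
            if not parts[2].startswith('mpi_'):
                bad.append(parts[0])
    return bad
```

An empty result means the environment passed the check; any names returned indicate `nompi` builds that would break ESMF.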

setup/README.md

Lines changed: 33 additions & 5 deletions
@@ -1,15 +1,43 @@
  # Conda Environment Instructions

- These scripts have been run successfully on Antares using a Python 3.7 conda environment located here - `/home/james/anaconda3/envs/xesmf/bin`. Here is a recipe for creating this.
+ These scripts have been run successfully using a Python 3.7 conda environment.
+ Here is a recipe for creating it.

  1) `conda create -n xesmf_env python=3.7`
  2) `conda activate xesmf_env`
  3) `conda install -c conda-forge xesmf esmpy=8.2.0`
- 4) `conda install -c conda-forge dask netCDF4`
+ 4) `conda install -c conda-forge dask netcdf4`

+ Please refer to the [CONDA](CONDA.md) page for expanded information on installing
+ esmf, esmpy and xesmf for Python.

+ ## Notes

- Notes:
- - Note that `esmpy=8.2.0` must be [installed in the same instance](https://github.com/JiaweiZhuang/xESMF/issues/47#issuecomment-665516640) of `xesmf` installation.
- - If you're running on antares, my environment for this can be found at `/home/james/anaconda3/envs/xesmf`
+ - Note that `esmpy=8.2.0` must be [installed in the same instance](https://github.com/JiaweiZhuang/xESMF/issues/47#issuecomment-665516640) as the `xesmf` installation. The packages must be built with MPI (either `mpich` or `openmpi`).

+ ## MPI
+
+ To ensure the installed conda packages include MPI support, look for `mpich` or `openmpi` in the `Build`
+ column. Here are some examples from `conda search esmpy`:
+
+ ```
+ # Name       Version    Build                          Channel
+ esmpy        8.4.0      mpi_mpich_py311hf216de5_101    conda-forge
+ esmpy        8.4.0      mpi_openmpi_py311hcdf3ef6_1    conda-forge
+ esmpy        8.4.0      nompi_py311h8e2db7d_1          conda-forge
+ ```
+
+ In this example, the first two packages are built with MPI, since `mpich` and `openmpi`
+ appear in the build string. The last package does not have MPI support and will not work
+ for operations that require MPI.
+
+ # Flooding
+
+ If you require flooding of OBCs, then you also need to install:
+ * https://github.com/raphaeldussin/HCtFlood
+
+ # Site specific notes
+
+ ## antares
+
+ - An example installed environment can be found at `/home/james/anaconda3/envs/xesmf`

setup/boundary/boundary.py

Lines changed: 24 additions & 10 deletions
@@ -299,30 +299,43 @@ def __init__(self, num, border, hgrid, in_degrees=True, output_dir='.', regrid_d
      @property
      def coords(self):
+         # Rename nxp and nyp to locations
          if self.border == 'south':
-             return xarray.Dataset({
+             rcoord = xarray.Dataset({
                  'lon': self.hgrid['x'].isel(nyp=0),
                  'lat': self.hgrid['y'].isel(nyp=0),
                  'angle': self.hgrid['angle_dx'].isel(nyp=0)
              })
+             rcoord = rcoord.rename_dims({'nxp': 'locations'})
          elif self.border == 'north':
-             return xarray.Dataset({
+             rcoord = xarray.Dataset({
                  'lon': self.hgrid['x'].isel(nyp=-1),
                  'lat': self.hgrid['y'].isel(nyp=-1),
                  'angle': self.hgrid['angle_dx'].isel(nyp=-1)
              })
+             rcoord = rcoord.rename_dims({'nxp': 'locations'})
          elif self.border == 'west':
-             return xarray.Dataset({
+             rcoord = xarray.Dataset({
                  'lon': self.hgrid['x'].isel(nxp=0),
                  'lat': self.hgrid['y'].isel(nxp=0),
                  'angle': self.hgrid['angle_dx'].isel(nxp=0)
              })
+             rcoord = rcoord.rename_dims({'nyp': 'locations'})
          elif self.border == 'east':
-             return xarray.Dataset({
+             rcoord = xarray.Dataset({
                  'lon': self.hgrid['x'].isel(nxp=-1),
                  'lat': self.hgrid['y'].isel(nxp=-1),
                  'angle': self.hgrid['angle_dx'].isel(nxp=-1)
              })
+             rcoord = rcoord.rename_dims({'nyp': 'locations'})
+
+         # Make lat and lon coordinates
+         rcoord = rcoord.assign_coords(
+             lat=rcoord['lat'],
+             lon=rcoord['lon']
+         )
+
+         return rcoord

      @property
      def nx(self):
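The pattern this hunk introduces (select one edge of the supergrid, rename the remaining horizontal dimension to `locations`, then promote `lat`/`lon` to coordinates) can be sketched on a toy grid. The grid values here are made up for illustration:

```python
import numpy as np
import xarray as xr

# Toy supergrid standing in for ocean_hgrid.nc (values are made up)
hgrid = xr.Dataset({
    'x': (('nyp', 'nxp'), np.arange(12.0).reshape(3, 4)),         # longitudes
    'y': (('nyp', 'nxp'), np.arange(12.0).reshape(3, 4) / 10.0),  # latitudes
})

# Southern edge: take the first row, as in the 'south' branch above
rcoord = xr.Dataset({
    'lon': hgrid['x'].isel(nyp=0),
    'lat': hgrid['y'].isel(nyp=0),
})
rcoord = rcoord.rename_dims({'nxp': 'locations'})                 # nxp -> locations
rcoord = rcoord.assign_coords(lat=rcoord['lat'], lon=rcoord['lon'])
```

After this, `lat` and `lon` live on the `locations` dimension as proper coordinates, which is what lets the caller drop the per-border `rename` logic (see the `regrid_velocity` hunk below).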
@@ -496,18 +509,14 @@ def regrid_velocity(
      udest = uregrid(usource)
      vdest = vregrid(vsource)

-     # if lat and lon are variables in u/vsource, u/vdest will be dataset
      if isinstance(udest, xarray.Dataset):
          udest = udest.to_array().squeeze()
      if isinstance(vdest, xarray.Dataset):
          vdest = vdest.to_array().squeeze()

      # Rotate velocities to be model-relative.
      if rotate:
-         if self.border in ['south', 'north']:
-             angle = self.coords['angle'].rename({'nxp': 'locations'})
-         elif self.border in ['west', 'east']:
-             angle = self.coords['angle'].rename({'nyp': 'locations'})
+         angle = self.coords['angle']
          udest, vdest = rotate_uv(udest, vdest, angle)

      ds_uv = xarray.Dataset({
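`rotate_uv` is defined elsewhere in the repository; this hunk only changes how `angle` is obtained (the `coords` property now handles the dimension rename). As a rough sketch of what such a rotation does, here is a minimal version; the sign convention is an assumption for illustration, not necessarily the repository's:

```python
import numpy as np

def rotate_uv_sketch(u, v, angle):
    """Rotate eastward/northward velocities into grid-relative components.
    Sign convention is assumed for this sketch only."""
    return (u * np.cos(angle) + v * np.sin(angle),
            v * np.cos(angle) - u * np.sin(angle))

# A pure-eastward flow rotated by 90 degrees becomes a pure cross-grid flow
u_rot, v_rot = rotate_uv_sketch(np.array(1.0), np.array(0.0), np.pi / 2)
```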
@@ -582,7 +591,12 @@ def regrid_tracer(
      tdest = regrid(tsource)

      if not isinstance(tdest, xarray.Dataset):
-         tdest = tdest.to_dataset()
+         # If tdest has a name use it,
+         # otherwise use tsource's name
+         if tdest.name:
+             tdest = tdest.to_dataset()
+         else:
+             tdest = tdest.to_dataset(name=tsource.name)

      if 'z' in tsource.coords:
          tdest = fill_missing(tdest, fill=fill)
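The guard above exists because `DataArray.to_dataset()` raises a `ValueError` when the array has no name and none is supplied; passing `name=` covers that case. A minimal illustration:

```python
import xarray as xr

named = xr.DataArray([1.0, 2.0], dims='x', name='thetao')
ds = named.to_dataset()                   # name is taken from the DataArray

unnamed = xr.DataArray([1.0, 2.0], dims='x')
try:
    unnamed.to_dataset()                  # no name anywhere -> ValueError
    raised = False
except ValueError:
    raised = True

ds2 = unnamed.to_dataset(name='thetao')   # explicit name works
```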

setup/boundary/validate_OBC.py

Lines changed: 41 additions & 0 deletions
@@ -0,0 +1,41 @@
#!/usr/bin/env python

import os
import numpy as np
import xarray as xr

obc_dir = '/home/cermak/workdir/configs/nwa25/OBC'
f1 = os.path.join(obc_dir, 'indiv1/uv_001_1996.nc')
f2 = os.path.join(obc_dir, 'indiv.ans/uv_001_1996.nc')

ds1 = xr.open_dataset(f1)
ds2 = xr.open_dataset(f2)

allvars = list(ds1.variables)
coords = list(ds1.coords)
onlyvars = set(allvars) - set(coords)

notEqual = {
    'count': 0,
    'coord': [],
    'variable': [],
}

for i in coords:
    if not np.all(ds1[i] == ds2[i]):
        notEqual['coord'].append(i)
        notEqual['count'] += 1

for i in onlyvars:
    if not np.all(ds1[i] == ds2[i]):
        notEqual['variable'].append(i)
        notEqual['count'] += 1

if notEqual['count'] > 0:
    print("Some items do not match:")
    if len(notEqual['coord']) > 0:
        print(" Coordinates:\n %s" % (", ".join(notEqual['coord'])))
    if len(notEqual['variable']) > 0:
        print(" Variables:\n %s" % (", ".join(notEqual['variable'])))
else:
    print("NetCDF files are equivalent.")
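An alternative to the element-wise `np.all` comparisons above is `xarray.Dataset.equals`, which compares dimensions, coordinates and values in one call. It only returns a single boolean, so it does not report which variable differs, which is what the script above provides:

```python
import numpy as np
import xarray as xr

ds_a = xr.Dataset({'u': ('locations', np.array([1.0, 2.0]))})
ds_b = xr.Dataset({'u': ('locations', np.array([1.0, 2.0]))})
ds_c = xr.Dataset({'u': ('locations', np.array([1.0, 9.0]))})

same = ds_a.equals(ds_b)       # identical dims, coords and values
different = ds_a.equals(ds_c)  # values differ
```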
Lines changed: 96 additions & 0 deletions
@@ -0,0 +1,96 @@
from glob import glob
import os
import warnings

import numpy
import xarray

from boundary import Segment

# xarray gives a lot of unnecessary warnings
warnings.filterwarnings('ignore')


def write_year(year, glorys_dir, segments, is_first_year=False):
    glorys = (
        xarray.open_mfdataset(sorted(glob(os.path.join(glorys_dir, f'{year}/GLORYS_REANALYSIS_{year}-*.nc'))))
        .rename({'latitude': 'lat', 'longitude': 'lon', 'depth': 'z'})
    )

    # Floor the first time down to midnight so that it matches the initial conditions
    if is_first_year:
        glorys.time.data[0] = numpy.datetime64(f'{year}-01-01T00:00:00.000000000')
    for seg in segments:
        seg.regrid_velocity(glorys['uo'], glorys['vo'], suffix=year, flood=False)
        for tr in ['thetao', 'so']:
            seg.regrid_tracer(glorys[tr], suffix=year, flood=False)
        seg.regrid_tracer(glorys['zos'], suffix=year, flood=False)


# This is an xarray-based way to concatenate the yearly OBC files into one file
# (per output variable); the former way to do this used ncrcat from the NCO tools.
def ncrcat_rename(nsegments, ncrcat_outdir, output_dir, delete_old_files=False):
    rename_dict = {'thetao': 'temp', 'so': 'salt', 'zos': 'zeta', 'uv': 'uv'}
    for var in ['thetao', 'so', 'zos', 'uv']:
        for seg in range(1, nsegments + 1):
            comb = xarray.open_mfdataset(f'{output_dir}{var}_00{seg}_*')
            if var != 'uv':
                comb = comb.rename({f'{var}_segment_00{seg}': f'{rename_dict[var]}_segment_00{seg}'})
            if var != 'zos':
                comb = comb.rename({f'dz_{var}_segment_00{seg}': f'dz_{rename_dict[var]}_segment_00{seg}'})
            # Fix output metadata, including removing all _FillValues.
            all_vars = list(comb.data_vars.keys()) + list(comb.coords.keys())
            encodings = {v: {'_FillValue': None} for v in all_vars}
            encodings['time'].update({'dtype': 'float64', 'calendar': 'gregorian'})
            year1 = str(comb.time.values[2])[0:4]
            year_end = str(comb.time.values[-2])[0:4]
            print(year1, year_end)
            comb.to_netcdf(f'{ncrcat_outdir}{rename_dict[var]}_00{seg}_{year1}_{year_end}.nc',
                           encoding=encodings,
                           unlimited_dims='time',
                           format='NETCDF3_64BIT')
            print(f'concatenated and renamed {rename_dict[var]}_00{seg}.nc')
            del comb
            if delete_old_files:
                # os.remove does not expand wildcards; glob the pattern first.
                for old in glob(f'{output_dir}{var}_00{seg}_*'):
                    os.remove(old)


def main():
    first_year = 1996

    # Original
    #glorys_dir = '/Volumes/A1/workdir/james/glorys/'
    #output_dir = '/Volumes/A1/workdir/james/nwa25_input/boundary/indiv_years/'
    #ncrcat_outdir = '/Volumes/A1/workdir/james/nwa25_input/boundary/boundary_final/'
    #hgrid = xarray.open_dataset('/home/james/gridInfo/nwa25/ocean_hgrid.nc')

    # Rob
    glorys_dir = '/Volumes/A1/workdir/james/glorys/'
    output_dir = '/home/cermak/workdir/configs/nwa25/OBC/indiv/'
    ncrcat_outdir = '/home/cermak/workdir/configs/nwa25/OBC/final/'
    hgrid = xarray.open_dataset('/home/cermak/workdir/configs/nwa25/INPUT/ocean_hgrid.nc')

    segments = [
        Segment(1, 'south', hgrid, output_dir=output_dir),
        Segment(2, 'east', hgrid, output_dir=output_dir),
        Segment(3, 'north', hgrid, output_dir=output_dir)
    ]

    for y in range(1996, 2018):
        print(y)
        write_year(y, glorys_dir, segments, is_first_year=y == first_year)

    ncrcat_rename(3, ncrcat_outdir, output_dir)


if __name__ == '__main__':
    main()
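One caveat with the naming scheme above: the hard-coded `_00{seg}` pattern only works for up to nine segments. A sketch of a zero-padded variant (the helper name is hypothetical, not part of the repository):

```python
# Hypothetical helper: build the concatenated OBC filename with a
# zero-padded segment number so it also works for seg >= 10.
RENAME = {'thetao': 'temp', 'so': 'salt', 'zos': 'zeta', 'uv': 'uv'}

def obc_filename(var, seg, year1, year_end):
    return f'{RENAME[var]}_{seg:03d}_{year1}_{year_end}.nc'
```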
