Merge branch 'develop' into main
tsj5 committed Aug 3, 2020
2 parents a5cbcc6 + 83e1b36 commit 1702266
Showing 49 changed files with 1,197 additions and 687 deletions.
29 changes: 29 additions & 0 deletions .readthedocs.yml
@@ -0,0 +1,29 @@
# .readthedocs.yml
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

build:
image: stable

# Build documentation in the docs/ directory with Sphinx
sphinx:
builder: html
configuration: doc/conf.py
fail_on_warning: false

# Optionally build your docs in additional formats such as PDF
formats:
- pdf

# Optionally set the version of Python and requirements required to build your docs
python:
version: 3.7
install:
- requirements: doc/requirements.txt
- method: pip
path: .
system_packages: false

182 changes: 85 additions & 97 deletions README.md

Large diffs are not rendered by default.

3 changes: 2 additions & 1 deletion diagnostics/MJO_suite/doc/MJO_suite.rst
@@ -102,7 +102,8 @@ An extensive explanation of the figures and techniques used to achieve them can
:align: center
:width: 100 %

# https://stackoverflow.com/questions/4550021/working-example-of-floating-image-in-restructured-text
..
# https://stackoverflow.com/questions/4550021/working-example-of-floating-image-in-restructured-text
.. |clearfloats| raw:: html

@@ -177,10 +177,15 @@

if (len(data["tave_list"])==0 or len(data["qsat_int_list"])==0):
data["PREPROCESS_TA"]=1
data["SAVE_TAVE_QSAT_INT"]=1 # default:1 (save pre-processed tave & qsat_int); 0 if no permission
else:
data["PREPROCESS_TA"]=0
data["SAVE_TAVE_QSAT_INT"]=0
# Save pre-processed tave & qsat_int or not; default=0 (don't save)
data["SAVE_TAVE_QSAT_INT"]=int(os.environ["SAVE_TAVE_QSAT_INT"])
if data["PREPROCESS_TA"]!=data["SAVE_TAVE_QSAT_INT"]:
print("Pre-processing of air temperature (ta) required to compute weighted column averages,")
print(" but the pre-processed results will not be saved as intermediate output.")
print("To save the pre-processed results as NetCDF files for re-use (write permission required),")
print("  go to settings.jsonc, and change SAVE_TAVE_QSAT_INT to 1.")

# Taking care of function arguments for binning
data["args1"]=[ \
@@ -1,4 +1,4 @@
# This file is part of the convective_transition_diag module of the MDTF code package (see mdtf/MDTF_v2.0/LICENSE.txt)
# This file is part of the convective_transition_diag module of the MDTF code package (see mdtf/MDTF-diagnostics/LICENSE.txt)

# ======================================================================
# convective_transition_diag_v1r3.py
@@ -37,7 +37,13 @@ Required programming language and libraries
This package is written in Python 2, and requires the following Python packages:
os, glob, json, Dataset, numpy, scipy, matplotlib, networkx, warnings, numba, & netcdf4. These Python packages are already included in the standard Anaconda installation.

The plotting functions in this package depend on an older version of matplotlib, thus an older version of the Anaconda 2 installer (ver. 5.0.1) is recommended.
Known issue with matplotlib
^^^^^^^^^^^^^^^^^^^^^^^^^^^

The plotting scripts of this POD may not produce the desired figures with the latest version of matplotlib (because of the default size-adjustment settings). The matplotlib version that ships with the Anaconda 2 installer, version 5.0.1, has been tested; readers can switch to this older version.

Depending on the platform and Linux distribution/version, a related error may occur with the error message "... ImportError: libcrypto.so.1.0.0: cannot open shared object file: No such file or directory". One can find the missing object file ``libcrypto.so.1.0.0`` in the subdirectory ``~/anaconda2/pkgs/openssl-1.0.2l-h077ae2c_5/lib/``, where ``~/anaconda2/`` is where Anaconda 2 is installed. The precise names of the object file and openssl-folder may vary. Manually copying the object file to ``~/anaconda2/lib/`` should solve the error.
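The copy step described above can be sketched as a shell snippet. The paths below are illustrative only: a throwaway directory stands in for the real `~/anaconda2` tree, and the openssl folder name varies by installation.

```shell
# Stand-in layout for ~/anaconda2; adjust the directory names to match
# your actual installation before running this against a real Anaconda tree.
ANACONDA=$(mktemp -d)
mkdir -p "$ANACONDA/pkgs/openssl-1.0.2l-h077ae2c_5/lib" "$ANACONDA/lib"
touch "$ANACONDA/pkgs/openssl-1.0.2l-h077ae2c_5/lib/libcrypto.so.1.0.0"

# The workaround: copy the missing shared object into the lib/ directory
# that the dynamic loader searches. The glob tolerates differing folder names.
cp "$ANACONDA"/pkgs/openssl-*/lib/libcrypto.so.1.0.0 "$ANACONDA/lib/"
ls "$ANACONDA/lib/libcrypto.so.1.0.0"
```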


Required model output variables
-------------------------------
4 changes: 3 additions & 1 deletion diagnostics/convective_transition_diag/settings.jsonc
@@ -25,7 +25,9 @@
// as the Bulk Tropospheric Temperature Measure.
"BULK_TROPOSPHERIC_TEMPERATURE_MEASURE" : "1",
// RES: set Spatial Resolution (degree) for TMI Data (0.25, 0.50, 1.00).
"RES" : "1.00"
"RES" : "1.00",
// SAVE_TAVE_QSAT_INT: save tave and qsat_int files (0=no, 1=yes).
"SAVE_TAVE_QSAT_INT" : "0"
},
"runtime_requirements": {
"python2": ["numpy", "scipy", "matplotlib", "netCDF4", "numba", "networkx"]
131 changes: 102 additions & 29 deletions diagnostics/example/example_diag.py
@@ -1,10 +1,71 @@
"""Example MDTF diagnostic
This script does a simple diagnostic calculation to illustrate how to adapt code
for use in the MDTF diagnostic framework. The main change is to set input/output
paths, variable names etc. from shell environment variables the framework
provides, instead of hard-coding them.
"""
# MDTF Example Diagnostic POD
# ================================================================================
# This script does a simple diagnostic calculation to illustrate how to adapt code
# for use in the MDTF diagnostic framework. The main change is to set input/output
# paths, variable names etc. from shell environment variables the framework
# provides, instead of hard-coding them.
#
# Below, this script consists of three parts: (1) a template of the comprehensive
# header that POD developers must include in their POD's main driver script,
# (2) the actual code, and (3) extensive in-line comments.
# ================================================================================
#
# This file is part of the Example Diagnostic POD of the MDTF code package (see mdtf/MDTF-diagnostics/LICENSE.txt)
#
# Example Diagnostic POD
#
# Last update: 8/1/2020
#
# This is an example POD that you can use as a template for your diagnostics.
# If this were a real POD, you'd place a one-paragraph synopsis of your
# diagnostic here (like an abstract).
#
# Version & Contact info
#
# Here you should describe who contributed to the diagnostic, and who should be
# contacted for further information:
#
# - Version/revision information: version 1 (5/06/2020)
# - PI (name, affiliation, email)
# - Developer/point of contact (name, affiliation, email)
# - Other contributors
#
# Open source copyright agreement
#
# The MDTF framework is distributed under the LGPLv3 license (see LICENSE.txt).
# Unless you've distributed your script elsewhere, you don't need to change this.
#
# Functionality
#
# In this section you should summarize the stages of the calculations your
# diagnostic performs, and how they translate to the individual source code files
# provided in your submission. This will, e.g., let maintainers fixing a bug or
# people with questions about how your code works know where to look.
#
# Required programming language and libraries
#
# In this section you should summarize the programming languages and third-party
# libraries used by your diagnostic. You also provide this information in the
# ``settings.jsonc`` file, but here you can give helpful comments to human
# maintainers (e.g., "We need at least version 1.5 of this library because we call
# this function.")
#
# Required model output variables
#
# In this section you should describe each variable in the input data your
# diagnostic uses. You also need to provide this in the ``settings.jsonc`` file,
# but here you should go into detail on the assumptions your diagnostic makes
# about the structure of the data.
#
# References
#
# Here you should cite the journal articles providing the scientific basis for
# your diagnostic.
#
# Maloney, E. D., and Co-authors, 2019: Process-oriented evaluation of climate
#    and weather forecasting models. BAMS, 100(9), 1665-1686,
# doi:10.1175/BAMS-D-18-0042.1.
#
from __future__ import print_function
import os
import matplotlib
@@ -31,25 +31,92 @@
model_dataset = xr.open_dataset(input_path)


### 2) Loading observational data files: #######################################
#
# If your diagnostic uses any model-independent supporting data (eg. reference
# or observational data) larger than a few kB of text, it should be provided via
# the observational data distribution instead of being included with the source
# code. This data can be in any format: the framework doesn't process it. The
# environment variable OBS_DATA will be set to a path where the framework has
# copied a directory containing your supplied data.
#
# The following command replaces the substring "{OBS_DATA}" with the value of
# the OBS_DATA environment variable.
input_path = "{OBS_DATA}/example_tas_means.nc".format(**os.environ)

# command to load the netcdf file
obs_dataset = xr.open_dataset(input_path)
obs_mean_tas = obs_dataset['mean_tas']
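The `"{OBS_DATA}".format(**os.environ)` substitution pattern used above can be demonstrated standalone; the `OBS_DATA` value below is hypothetical, set only for this illustration.

```python
import os

# Hypothetical value; in the framework, OBS_DATA is set before the POD runs.
os.environ["OBS_DATA"] = "/tmp/obs_data"

# str.format with **os.environ replaces each "{NAME}" placeholder in the
# template string with the value of the corresponding environment variable.
input_path = "{OBS_DATA}/example_tas_means.nc".format(**os.environ)
print(input_path)  # → /tmp/obs_data/example_tas_means.nc
```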


### 3) Doing computations: #####################################################
### 2) Doing computations: #####################################################
#
# Diagnostics in the framework are intended to work with native output from a
# variety of models. For this reason, variable names should not be hard-coded
@@ -71,7 +114,7 @@
print("Computed time average of {tas_var} for {CASENAME}.".format(**os.environ))


### 4) Saving output data: #####################################################
### 3) Saving output data: #####################################################
#
# Diagnostics should write output data to disk to a) make relevant results
# available to the user for further use or b) to pass large amounts of data
@@ -85,7 +128,7 @@
model_mean_tas.to_netcdf(out_path)


### 5) Saving output plots: ####################################################
### 4) Saving output plots: ####################################################
#
# Plots should be saved in EPS or PS format at <WK_DIR>/<model or obs>/PS
# (created by the framework). Plots can be given any filename, but should have
@@ -113,6 +156,24 @@ def plot_and_save_figure(model_or_obs, title_string, dataset):
# Plot the model data:
plot_and_save_figure("model", title_string, model_mean_tas)


### 5) Loading obs data files & plotting obs figures: ##########################
#
# If your diagnostic uses any model-independent supporting data (eg. reference
# or observational data) larger than a few kB of text, it should be provided via
# the observational data distribution instead of being included with the source
# code. This data can be in any format: the framework doesn't process it. The
# environment variable OBS_DATA will be set to a path where the framework has
# copied a directory containing your supplied data.
#
# The following command replaces the substring "{OBS_DATA}" with the value of
# the OBS_DATA environment variable.
input_path = "{OBS_DATA}/example_tas_means.nc".format(**os.environ)

# command to load the netcdf file
obs_dataset = xr.open_dataset(input_path)
obs_mean_tas = obs_dataset['mean_tas']

# Plot the observational data:
title_string = "Observations: mean {tas_var}".format(**os.environ)
plot_and_save_figure("obs", title_string, obs_mean_tas)
@@ -125,4 +186,16 @@ def plot_and_save_figure(model_or_obs, title_string, dataset):
#
model_dataset.close()
obs_dataset.close()
print("Another log message: finished successfully!")


### 7) Error/Exception-Handling Example ########################################
nonexistent_file_path = "{DATADIR}/mon/nonexistent_file.nc".format(**os.environ)
try:
nonexistent_dataset = xr.open_dataset(nonexistent_file_path)
except IOError as error:
print(error)
print("This message is printed by the example POD because exception-handling is working!")
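The same exception-handling pattern can be sketched with only the standard library (xarray omitted; the path is deliberately nonexistent, as in the POD's example).

```python
# Opening a missing file raises IOError (an alias of OSError in Python 3);
# catching it lets a POD log the problem and continue instead of crashing.
nonexistent_path = "/nonexistent_dir/nonexistent_file.nc"
try:
    with open(nonexistent_path) as f:
        contents = f.read()
except IOError as error:
    print(error)
    print("Exception handled: execution continues past the failed open().")
```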


### 8) Confirm POD executed successfully #######################################
print("Last log message by Example POD: finished successfully!")
14 changes: 11 additions & 3 deletions doc/Makefile
@@ -16,13 +16,18 @@ ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help clean html dirhtml singlehtml latex latexpdf text man changes linkcheck Makefile
.PHONY: help clean clean_all html dirhtml singlehtml latex latexpdf text man changes linkcheck Makefile

clean:
-rm -rf $(BUILDDIR)/*

clean_all:
-rm -rf $(BUILDDIR)
-rm -rf sphinx_pods
-rm sphinx/src.*

html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
$(SPHINXBUILD) -v -T -E -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

@@ -44,10 +49,13 @@ latex:
"(use \`make latexpdf' here to do that automatically)."

latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
$(SPHINXBUILD) -v -T -E -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
make -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
-cp -f _build/latex/MDTF_getting_started.pdf _static/MDTF_getting_started.pdf
-cp -f _build/latex/MDTF_walkthrough.pdf _static/MDTF_walkthrough.pdf
@echo "Copied PDFs to _static."

text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
Binary file modified doc/_static/MDTF_getting_started.pdf
Binary file not shown.
Binary file modified doc/_static/MDTF_walkthrough.pdf
Binary file not shown.
29 changes: 19 additions & 10 deletions doc/conf.py
@@ -25,21 +25,24 @@
from recommonmark.transform import AutoStructify

# mock out imports of non-standard library modules
autodoc_mock_imports = ['yaml', 'subprocess32']
# Modules in this list are mocked out due to an error encountered in running
# autodoc on six.py with python 3.7. None of the modules are used by the
# framework: they're only referenced by six.py.
autodoc_mock_imports = ['subprocess32', '_gdbm', '_dbm']
import mock # do this twice just to be safe
for module in autodoc_mock_imports:
sys.modules[module] = mock.Mock()
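The mock-out pattern above can be demonstrated standalone with the standard library's `unittest.mock` (the module name below is made up for illustration).

```python
import sys
from unittest import mock  # stdlib equivalent of the third-party `mock` package

# Register a Mock object under the module's name so that later `import`
# statements find it in sys.modules and succeed even though no real module
# with that name is installed.
sys.modules["some_missing_module"] = mock.Mock()

import some_missing_module  # resolved from sys.modules, no ImportError
print(type(some_missing_module))  # → <class 'unittest.mock.Mock'>
```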

# -- Project information -----------------------------------------------------

project = u'MDTF-diagnostics'
project = u'MDTF Diagnostics'
copyright = u'2020, Model Diagnostics Task Force'
author = u'Model Diagnostics Task Force'

# The short X.Y version
version = u''
# The full version, including alpha/beta/rc tags
release = u'3.0 beta 1'
release = u'3.0 beta 2'

# only used for resolving relative links in markdown docs
# use develop branch because that's what readthedocs is configured to use
@@ -210,8 +213,8 @@
# build process if it finds multiple .tex files, and doesn't affect sphinx.
'tex_getting_started', 'MDTF_getting_started.tex_',
u"MDTF Getting Started Guide",
r"Thomas Jackson (GFDL), Yi-Hung Kuo (UCLA), Dani Coleman (NCAR)",
'sphinxmdtfhowto'
r"Thomas Jackson (GFDL) \and Yi-Hung Kuo (UCLA) \and Dani Coleman (NCAR)",
'manual'
),(
# another secondary PDF.
'tex_walkthrough', 'MDTF_walkthrough.tex_',
Expand All @@ -223,7 +226,7 @@
r"\and Eric Maloney\textsuperscript{d} \and John Krasting\textsuperscript{c}"
r"\\ {\small (a: UCLA; b: NCAR; c: GFDL; d:CSU)}"
),
'sphinxmdtfhowto'
'manual'
)
]

@@ -304,6 +307,11 @@
'undoc-members': True,
'show-inheritance': True
}
# For simplicity, the six.py library is included directly in the /src module,
# but we don't want to document it.
# https://stackoverflow.com/a/21449475
def autodoc_skip_member(app, what, name, obj, skip, options):
return skip or ('six' in name) or ('_MovedItems' in name)

# generate autodocs by running sphinx-apidoc when evaluated on readthedocs.org.
# source: https://github.com/readthedocs/readthedocs.org/issues/1139#issuecomment-398083449
@@ -352,7 +360,7 @@ def patched_parse(self):
GoogleDocstring._parse = patched_parse

# -- Options for intersphinx extension -----------------------------------------
intersphinx_mapping = {'python': ('https://docs.python.org/2', None)}
intersphinx_mapping = {'python': ('https://docs.python.org/3.7', None)}

# -- Options for todo extension ----------------------------------------------

@@ -362,8 +370,9 @@ def patched_parse(self):
# == Overall Sphinx app setup hook =============================================

def setup(app):
# register autodoc event
# register autodoc events
app.connect('builder-inited', run_apidoc)
app.connect('autodoc-skip-member', autodoc_skip_member)

# AutoStructify for recommonmark
# see eg https://stackoverflow.com/a/52430829
@@ -372,7 +381,7 @@ def setup(app):
'enable_auto_toc_tree': False,
'enable_math': True,
'enable_inline_math': True,
'enable_eval_rst': True,
'enable_auto_doc_ref': True,
'enable_eval_rst': True
# 'enable_auto_doc_ref': True, # deprecated, now default behavior
}, True)
app.add_transform(AutoStructify)
