
CMIP6 Hackathon

Project: Multi-Generational Climate Model Inter-Comparison


See below for the project description (modified from the hackathon Discourse post).

Setting up on the cloud

Go to ocean.pangeo.io and log in with Globus using your ORCID iD (full instructions here).

Next, clone the repository.

  1. Open a JupyterLab session on the system you plan to use.
  2. Open a terminal in the JupyterLab environment.
  3. Clone the project: git clone https://github.com/hdrake/cmip6hack-multigen.git
  4. Navigate into the project folder with: cd cmip6hack-multigen
  5. Activate the cmip6hack-multigen environment by running source spinup_env.sh.
  6. Analyze the data interactively in the Jupyter notebooks in the notebooks/ directory (do not forget to activate the cmip6hack-multigen kernel when you open a notebook!).

Project proposal

Scientific Motivation

While the first coupled ocean-atmosphere general circulation models date back to 1969, transient simulations attempting to reproduce the historical record and project future climate changes were not available until the late 1980s. These first-generation GCMs included only atmospheric and oceanic components, were run at nominal resolutions of 3°–10°, and omitted many important sub-grid-scale processes whose parameterizations are now commonplace. Since then, climate model development has continued, with pushes towards higher resolution, the inclusion of other components of the Earth system, and a flurry of new and/or improved parameterizations. While several studies have documented the improvements in model skill reaped by these developments (e.g. Reichler et al. 2008, Knutti et al. 2013, and Gleckler et al. 2008), there has been no comprehensive study of climate model skill that spans all the way from the first-generation models of the late 1980s to the state-of-the-art CMIP6 ensemble.

Below: change in climate model performance across CMIP1, CMIP2, and CMIP3, from Reichler et al. (2008) (https://journals.ametsoc.org/doi/abs/10.1175/BAMS-89-3-303).

Overall Project Goal

Compute variable-specific and aggregate model performance metrics (e.g. normalized area-weighted root-mean-square error, pattern correlations, area-weighted absolute bias) across several model generations, including CMIP6.
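
For concreteness, here is a minimal sketch of one such metric, the area-weighted RMSE, assuming hypothetical xarray DataArrays named model and obs on a shared lat/lon grid with a latitude coordinate in degrees:

```python
import numpy as np
import xarray as xr

def area_weighted_rmse(model: xr.DataArray, obs: xr.DataArray) -> xr.DataArray:
    # cos(latitude) is proportional to grid-cell area on a regular lat/lon grid
    weights = np.cos(np.deg2rad(model["lat"]))
    squared_error = (model - obs) ** 2
    # xarray's .weighted() normalizes by the sum of the weights
    return np.sqrt(squared_error.weighted(weights).mean(dim=("lat", "lon")))
```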

Proposed Hacking Methods

We will use xskillscore to calculate global skill metrics such as the root-mean-square error (xskillscore.rmse) and mean absolute error (xskillscore.mae) and compare across different variables, different models (within a model generation), and different model generations.
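
A minimal sketch of the intended usage, again assuming model and obs are xarray DataArrays aligned on the same grid (the dimension names here are ours):

```python
import xskillscore as xs

# Reduce over the spatial dimensions to get one score per remaining dimension
rmse = xs.rmse(model, obs, dim=["lat", "lon"], skipna=True)
mae = xs.mae(model, obs, dim=["lat", "lon"], skipna=True)
```

xskillscore's metrics also accept a weights argument, so the area weighting sketched above can be reused here.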

Time permitting, we will also compute regional skill metrics (and perhaps interesting cross-correlations) using regionmask to delineate regions.
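
For example, a sketch using the SREX regions bundled with regionmask (variable names are illustrative and follow the blocks above):

```python
import regionmask
import xskillscore as xs

# Assign each grid cell to a SREX region (NaN outside all regions)
mask = regionmask.defined_regions.srex.mask(model)

# Skill metric restricted to a single region (here, region number 1)
regional_rmse = xs.rmse(
    model.where(mask == 1), obs.where(mask == 1),
    dim=["lat", "lon"], skipna=True,
)
```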

Data Needs

Monthly-mean values of a number of common variables for the CMIP6 historical simulations (1850–2014), ~1000 years of pre-industrial control simulations, and 1%-per-year CO2 runs.

Variables of interest (based on the model skill metric of Reichler et al. 2008):

sea level pressure (psl)
air temperature (ta)
2-m air temperature (tas)
zonal and meridional wind (ua, va, uas, vas)
precipitation (pr)
specific and/or relative humidity (hus, hur)
snow fraction (?)
sea ice fraction (sic or siconc)

We are currently pulling CMIP6 output from Google Cloud Storage, pre-CMIP model output from a restricted Google Cloud Storage bucket (to be made publicly available upon preprint publication), and ERA5 reanalysis data (as our 'observational' reference) from the Copernicus Climate Data Store.
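
As an illustration, the CMIP6 holdings on Google Cloud can be opened with intake-esm via Pangeo's catalog; a minimal sketch (the query values are illustrative):

```python
import intake

# Pangeo's catalog of the CMIP6 Zarr stores on Google Cloud Storage
cat = intake.open_esm_datastore(
    "https://storage.googleapis.com/cmip6/pangeo-cmip6.json"
)

# e.g. monthly-mean 2-m air temperature from the historical experiment
subset = cat.search(
    experiment_id="historical",
    table_id="Amon",
    variable_id="tas",
)
dsets = subset.to_dataset_dict(zarr_kwargs={"consolidated": True})
```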

Software Tools

We will build on (and hopefully contribute to) existing packages such as xESMF and xskillscore, which leverage xarray and already feature tools for handling model ensembles and computing model performance metrics. We will also make use of packages containing useful metadata, such as the regional masks shipped with regionmask.
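
Comparing models and observations on different grids first requires regridding; a minimal xESMF sketch, assuming ds_model and ds_obs are xarray Datasets with lat/lon coordinates:

```python
import xesmf as xe

# Build a bilinear regridder from the model grid to the reference (ERA5) grid
regridder = xe.Regridder(ds_model, ds_obs, method="bilinear", periodic=True)

# Apply it to put the model output on the observational grid
ds_model_on_obs_grid = regridder(ds_model)
```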

How to make our work citable (once there is enough for a public release)

Zenodo is a data archiving tool that can help make your project citable by assigning a DOI to the project's GitHub repository.

Follow the guidelines here.
