Merge branch 'develop' into calibrate-impact-functions
peanutfun committed Sep 20, 2023
2 parents 5fdbf4e + e792768 commit c2ede47
Showing 74 changed files with 418 additions and 181 deletions.
5 changes: 3 additions & 2 deletions .github/scripts/setup_devbranch.py
Original file line number Diff line number Diff line change
@@ -104,8 +104,9 @@ def setup_devbranch():
Just changes files, all `git` commands are in the setup_devbranch.sh file.
"""
main_version = get_last_version().strip('v')

dev_version = f"{main_version}-dev"
semver = main_version.split(".")
semver[-1] = f"{int(semver[-1]) + 1}-dev"
dev_version = ".".join(semver)

update_setup(dev_version)
update_version(dev_version)
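The version-bump logic added above increments the patch component of the last release tag before appending the `-dev` suffix. A minimal self-contained sketch of that logic (the function name and sample tag here are illustrative, not part of the CLIMADA code):

```python
def bump_dev_version(last_tag: str) -> str:
    """Derive the development version from the last release tag.

    The patch component is incremented and a ``-dev`` suffix appended,
    e.g. ``v3.3.2`` becomes ``3.3.3-dev``.
    """
    main_version = last_tag.strip("v")
    semver = main_version.split(".")
    semver[-1] = f"{int(semver[-1]) + 1}-dev"
    return ".".join(semver)

print(bump_dev_version("v3.3.2"))  # → 3.3.3-dev
```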
70 changes: 70 additions & 0 deletions .github/workflows/ci.yml
@@ -0,0 +1,70 @@
name: GitHub CI

# Execute this for every push
on: [push]

# Use bash explicitly for being able to enter the conda environment
defaults:
run:
shell: bash -l {0}

jobs:
build-and-test:
name: Build Env, Install, Unit Tests
runs-on: ubuntu-latest
permissions:
# For publishing results
checks: write

# Run this test for different Python versions
strategy:
# Do not abort other tests if only a single one fails
fail-fast: false
matrix:
python-version: ["3.9", "3.10", "3.11"]

steps:
-
name: Checkout Repo
uses: actions/checkout@v3
-
# Store the current date to use it as cache key for the environment
name: Get current date
id: date
run: echo "date=$(date +%Y-%m-%d)" >> "${GITHUB_OUTPUT}"
-
name: Create Environment with Mamba
uses: mamba-org/setup-micromamba@v1
with:
environment-name: climada_env_${{ matrix.python-version }}
environment-file: requirements/env_climada.yml
create-args: >-
python=${{ matrix.python-version }}
make
init-shell: >-
bash
# Persist environment for branch, Python version, single day
cache-environment-key: env-${{ github.ref }}-${{ matrix.python-version }}-${{ steps.date.outputs.date }}
-
name: Install CLIMADA
run: |
python -m pip install ".[test]"
-
name: Run Unit Tests
run: |
make unit_test
-
name: Publish Test Results
uses: EnricoMi/publish-unit-test-result-action@v2
if: always()
with:
junit_files: tests_xml/tests.xml
check_name: "Unit Test Results Python ${{ matrix.python-version }}"
comment_mode: "off"
-
name: Upload Coverage Reports
if: always()
uses: actions/upload-artifact@v3
with:
name: coverage-report-unittests-py${{ matrix.python-version }}
path: coverage/
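The `Get current date` step above writes a step output by appending to the file named in `$GITHUB_OUTPUT`; later steps read it as `steps.date.outputs.date` to day-scope the environment cache key. A hedged Python sketch of the same output mechanism (the helper and file names here are illustrative):

```python
import os
from datetime import date
from typing import Optional

def set_step_output(name: str, value: str, output_path: Optional[str] = None) -> None:
    """Append ``name=value`` to a GitHub Actions step-output file.

    Mirrors the workflow's `echo "date=$(date +%Y-%m-%d)" >> "${GITHUB_OUTPUT}"`.
    """
    path = output_path or os.environ.get("GITHUB_OUTPUT", "github_output.txt")
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(f"{name}={value}\n")

# Day-scoped value; later steps would read it as steps.<id>.outputs.date
set_step_output("date", date.today().isoformat(), "demo_output.txt")
```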
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -14,6 +14,8 @@ Code freeze date: YYYY-MM-DD

### Changed

- Rearranged file-system structure: `data` directory moved into `climada` package directory. [#781](https://github.com/CLIMADA-project/climada_python/pull/781)

### Fixed

### Deprecated
4 changes: 2 additions & 2 deletions Makefile
@@ -29,11 +29,11 @@ install_test : ## Test installation was successful

.PHONY : data_test
data_test : ## Test data APIs
python test_data_api.py
python script/jenkins/test_data_api.py

.PHONY : notebook_test
notebook_test : ## Test notebooks in doc/tutorial
python test_notebooks.py
python script/jenkins/test_notebooks.py report

.PHONY : integ_test
integ_test : ## Integration tests execution with xml reports
28 changes: 10 additions & 18 deletions README.md
@@ -6,7 +6,8 @@

CLIMADA stands for **CLIM**ate **ADA**ptation and is a probabilistic natural catastrophe impact model, that also calculates averted damage (benefit) thanks to adaptation measures of any kind (from grey to green infrastructure, behavioural, etc.).

As of today, CLIMADA provides global coverage of major climate-related extreme-weather hazards at high resolution via a [data API](https://climada.ethz.ch/data-api/v1/docs), namely (i) tropical cyclones, (ii) river flood, (iii) agro drought and (iv) European winter storms, all at 4km spatial resolution - wildfire to be added soon. For all hazards, historic and probabilistic event sets exist, for some also under select climate forcing scenarios (RCPs) at distinct time horizons (e.g. 2040). See also [papers](https://github.com/CLIMADA-project/climada_papers) for details.
As of today, CLIMADA provides global coverage of major climate-related extreme-weather hazards at high resolution (4x4 km) via a [data API](https://climada.ethz.ch/data-api/v1/docs). For select hazards, historic and probabilistic event sets exist for past, present and future climates at distinct time horizons.
You will find a repository of peer-reviewed scientific articles that explain the software components implemented in CLIMADA [here](https://github.com/CLIMADA-project/climada_papers).

CLIMADA is divided into two parts (two repositories):

@@ -15,30 +16,32 @@ CLIMADA is divided into two parts (two repositories):

It is recommended that new users begin with the core (1) and the [tutorials](https://github.com/CLIMADA-project/climada_python/tree/main/doc/tutorial) therein.

This is the Python (3.8+) version of CLIMADA - please see https://github.com/davidnbresch/climada for backward compatibility (MATLAB).
This is the Python (3.9+) version of CLIMADA - please see [here](https://github.com/davidnbresch/climada) for backward compatibility with the MATLAB version.

## Getting started

CLIMADA runs on Windows, macOS and Linux.
The released versions of the CLIMADA core can be installed directly through Anaconda:

```shell
conda install -c conda-forge climada
```

It is **highly recommended** to install CLIMADA into a **separate** Anaconda environment.
See the [installation guide](https://climada-python.readthedocs.io/en/latest/guide/install.html) for further information.

Follow the [tutorial](https://climada-python.readthedocs.io/en/latest/tutorial/1_main_climada.html) `climada_python-x.y.z/doc/tutorial/1_main_climada.ipynb` in a Jupyter Notebook to see what can be done with CLIMADA and how.
Follow the [tutorials](https://climada-python.readthedocs.io/en/stable/tutorial/1_main_climada.html) in a Jupyter Notebook to see what can be done with CLIMADA and how.

## Documentation

Documentation is available on Read the Docs:
The online documentation is available on [Read the Docs](https://climada-python.readthedocs.io/en/stable/). The documentation of each release version of CLIMADA can be accessed separately through the drop-down menu at the bottom of the left sidebar. The version 'stable' refers to the most recent release (installed via `conda`), and 'latest' refers to the latest unstable development version (the `develop` branch).

Note that all the documentations has two versions,'latest' and 'stable', and explicit version numbers, such as 'v3.1.1', in the url path. 'latest' is created from the 'develop' branch and has the latest changes by developers, 'stable' from the latest release. For more details about documentation versions, please have a look at [here](https://readthedocs.org/projects/climada-python/versions/).

CLIMADA python:

* [online (recommended)](https://climada-python.readthedocs.io/en/latest/)
* [PDF file](https://climada-python.readthedocs.io/_/downloads/en/stable/pdf/)
* [core Tutorials on GitHub](https://github.com/CLIMADA-project/climada_python/tree/main/doc/tutorial)

CLIMADA petals:

@@ -50,23 +53,12 @@ The documentation can also be [built locally](https://climada-python.readthedocs

## Citing CLIMADA

If you use CLIMADA please cite (in general, in particular for academic work) :

The [used version](https://zenodo.org/search?page=1&size=20&q=climada)

and/or the following published articles:
See the [Citation Guide](https://climada-python.readthedocs.io/en/latest/misc/citation.html).

Aznar-Siguan, G. and Bresch, D. N., 2019: CLIMADA v1: a global weather and climate risk assessment platform, Geosci. Model Dev., 12, 3085–3097, https://doi.org/10.5194/gmd-12-3085-2019
Please use the following logo if you are presenting results obtained with or through CLIMADA:

Bresch, D. N. and Aznar-Siguan, G., 2021: CLIMADA v1.4.1: towards a globally consistent adaptation options appraisal tool, Geosci. Model Dev., 14, 351-363, https://doi.org/10.5194/gmd-14-351-2021

Please see all CLIMADA-related scientific publications in our [repository of scientific publications](https://github.com/CLIMADA-project/climada_papers) and cite according to your use of select features, be it hazard set(s), exposure(s) ...

In presentations or other graphical material, as well as in reports etc., where applicable, please add the logo as follows:\
![https://github.com/CLIMADA-project/climada_python/blob/main/doc/guide/img/CLIMADA_logo_QR.png](https://github.com/CLIMADA-project/climada_python/blob/main/doc/guide/img/CLIMADA_logo_QR.png?raw=true)

As key link, please use https://wcr.ethz.ch/research/climada.html, as it will last and provides a bit of an intro, especially for those not familiar with GitHub - plus a nice CLIMADA infographic towards the bottom of the page

## Contributing

See the [Contribution Guide](CONTRIBUTING.md).
6 changes: 3 additions & 3 deletions climada/__init__.py
@@ -28,7 +28,7 @@
GSDP_DIR = SYSTEM_DIR.joinpath('GSDP')

REPO_DATA = {
'data/system': [
'climada/data/system': [
ISIMIP_GPWV3_NATID_150AS,
GLB_CENTROIDS_MAT,
ENT_TEMPLATE_XLS,
@@ -44,12 +44,12 @@
SYSTEM_DIR.joinpath('tc_impf_cal_v01_EDR.csv'),
SYSTEM_DIR.joinpath('tc_impf_cal_v01_RMSF.csv'),
],
'data/system/GSDP': [
'climada/data/system/GSDP': [
GSDP_DIR.joinpath(f'{cc}_GSDP.xls')
for cc in ['AUS', 'BRA', 'CAN', 'CHE', 'CHN', 'DEU', 'FRA', 'IDN', 'IND', 'JPN', 'MEX',
'TUR', 'USA', 'ZAF']
],
'data/demo': [
'climada/data/demo': [
ENT_DEMO_TODAY,
ENT_DEMO_FUTURE,
EXP_DEMO_H5,
File renamed without changes. (× 41 files)
6 changes: 3 additions & 3 deletions climada/engine/impact.py
@@ -21,7 +21,7 @@

__all__ = ['ImpactFreqCurve', 'Impact']

from dataclasses import dataclass
from dataclasses import dataclass, field
import logging
import copy
import csv
@@ -1785,10 +1785,10 @@ class ImpactFreqCurve():
"""Impact exceedence frequency curve.
"""

return_per : np.array = np.array([])
return_per : np.ndarray = field(default_factory=lambda: np.empty(0))
"""return period"""

impact : np.array = np.array([])
impact : np.ndarray = field(default_factory=lambda: np.empty(0))
"""impact exceeding frequency"""

unit : str = ''
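Background on the `ImpactFreqCurve` change above: a class-level `np.array([])` default would be shared by every instance, and since Python 3.11 `dataclasses` rejects such unhashable defaults with a `ValueError`; `field(default_factory=...)` builds a fresh array per instance instead. A small illustration with a stand-in class (not the CLIMADA one):

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Curve:  # illustrative stand-in, not the CLIMADA class
    # default_factory builds a fresh array per instance; a plain
    # `np.array([])` default is unhashable and rejected by dataclasses
    # on Python 3.11+, and would be shared across instances before that.
    return_per: np.ndarray = field(default_factory=lambda: np.empty(0))
    impact: np.ndarray = field(default_factory=lambda: np.empty(0))

a, b = Curve(), Curve()
assert a.return_per is not b.return_per  # each instance gets its own array
```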
64 changes: 40 additions & 24 deletions climada/engine/impact_data.py
@@ -802,30 +802,46 @@ def emdat_impact_yearlysum(emdat_file_csv, countries=None, hazard=None, year_ran
df_data[imp_str + " scaled"] = scale_impact2refyear(df_data[imp_str].values,
df_data.Year.values, df_data.ISO.values,
reference_year=reference_year)
out = pd.DataFrame(columns=['ISO', 'region_id', 'year', 'impact',
'impact_scaled', 'reference_year'])
for country in df_data.ISO.unique():
country = u_coord.country_to_iso(country, "alpha3")
if not df_data.loc[df_data.ISO == country].size:
continue
all_years = np.arange(min(df_data.Year), max(df_data.Year) + 1)
data_out = pd.DataFrame(index=np.arange(0, len(all_years)),
columns=out.columns)
df_country = df_data.loc[df_data.ISO == country]
for cnt, year in enumerate(all_years):
data_out.loc[cnt, 'year'] = year
data_out.loc[cnt, 'reference_year'] = reference_year
data_out.loc[cnt, 'ISO'] = country
data_out.loc[cnt, 'region_id'] = u_coord.country_to_iso(country, "numeric")
data_out.loc[cnt, 'impact'] = \
np.nansum(df_country[df_country.Year.isin([year])][imp_str])
data_out.loc[cnt, 'impact_scaled'] = \
np.nansum(df_country[df_country.Year.isin([year])][imp_str + " scaled"])
if '000 US' in imp_str: # EM-DAT damages provided in '000 USD
data_out.loc[cnt, 'impact'] = data_out.loc[cnt, 'impact'] * 1e3
data_out.loc[cnt, 'impact_scaled'] = data_out.loc[cnt, 'impact_scaled'] * 1e3
out = pd.concat([out, data_out])
out = out.reset_index(drop=True)

def country_df(df_data):
for data_iso in df_data.ISO.unique():
country = u_coord.country_to_iso(data_iso, "alpha3")

df_country = df_data.loc[df_data.ISO == country]
if not df_country.size:
continue

# Retrieve impact data for all years
all_years = np.arange(min(df_data.Year), max(df_data.Year) + 1)
data_out = pd.DataFrame.from_records(
[
(
year,
np.nansum(df_country[df_country.Year.isin([year])][imp_str]),
np.nansum(
df_country[df_country.Year.isin([year])][
imp_str + " scaled"
]
),
)
for year in all_years
],
columns=["year", "impact", "impact_scaled"]
)

# Add static data
data_out["reference_year"] = reference_year
data_out["ISO"] = country
data_out["region_id"] = u_coord.country_to_iso(country, "numeric")

# EMDAT provides damage data in 1000 USD
if "000 US" in imp_str:
data_out["impact"] = data_out["impact"] * 1e3
data_out["impact_scaled"] = data_out["impact_scaled"] * 1e3

yield data_out

out = pd.concat(list(country_df(df_data)), ignore_index=True)
return out


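The refactor above replaces cell-by-cell `loc` assignments and growing a DataFrame inside a loop with a generator of per-country frames concatenated once, avoiding the deprecated incremental-append pattern and repeated copying. A toy sketch of the same pattern (column names and data here are illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ISO": ["CHE", "CHE", "USA"],
    "Year": [2000, 2000, 2001],
    "impact": [1.0, 2.0, 5.0],
})

def country_frames(df):
    """Yield one yearly-sum frame per country, built in a single pass."""
    all_years = np.arange(df.Year.min(), df.Year.max() + 1)
    for iso in df.ISO.unique():
        sub = df[df.ISO == iso]
        out = pd.DataFrame.from_records(
            [(year, sub.loc[sub.Year == year, "impact"].sum())
             for year in all_years],
            columns=["year", "impact"],
        )
        out["ISO"] = iso  # static column assigned once, vectorised
        yield out

# One concatenation at the end instead of concat inside the loop
result = pd.concat(country_frames(df), ignore_index=True)
```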
2 changes: 1 addition & 1 deletion climada/engine/test/test_impact_data.py
@@ -144,7 +144,7 @@ def test_emdat_impact_event_2020(self):
self.assertEqual(2000, df['reference_year'].min())

def test_emdat_impact_yearlysum_no_futurewarning(self):
"""Ensure that no FutureWarning is issued"""
"""Ensure that no FutureWarning about `DataFrame.append` being deprecated is issued"""
with warnings.catch_warnings():
# Make sure that FutureWarning will cause an error
warnings.simplefilter("error", category=FutureWarning)
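The renamed test above guards against the deprecated `pandas.DataFrame.append` by escalating `FutureWarning` to an error. The escalation pattern itself looks like this (the function below is a stand-in, not the CLIMADA API):

```python
import warnings

def code_under_test():
    # Stand-in for the exercised function; a real regression test would
    # call the library code that previously used `DataFrame.append`.
    return 42

with warnings.catch_warnings():
    # Within this block only, any FutureWarning is raised as an error,
    # so the test fails if the deprecated pattern is ever reintroduced.
    warnings.simplefilter("error", category=FutureWarning)
    result = code_under_test()

assert result == 42
```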
2 changes: 1 addition & 1 deletion climada/entity/exposures/test/test_litpop.py
@@ -317,7 +317,7 @@ def test_gridpoints_core_calc_offsets_exp_rescale(self):
self.assertEqual(result_array.shape, results_check.shape)
self.assertAlmostEqual(result_array.sum(), tot)
self.assertEqual(result_array[1,2], results_check[1,2])
np.testing.assert_array_almost_equal_nulp(result_array, results_check)
np.testing.assert_allclose(result_array, results_check)

def test_grp_read_pass(self):
"""test _grp_read() to pass and return either dict with admin1 values or None"""
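Context for the assertion swap above: `assert_array_almost_equal_nulp` compares in units of last-place floating-point spacing (1 ULP by default) and fails on tiny accumulated rounding differences, while `assert_allclose` uses a configurable relative tolerance (`rtol=1e-7` by default). A small demonstration of why the swap loosens the check:

```python
import numpy as np

a = np.array([1.0])
# A value 3 ULPs above 1.0: beyond the default 1-nulp spacing check,
# yet far inside assert_allclose's default rtol of 1e-7.
x = 1.0
for _ in range(3):
    x = np.nextafter(x, 2.0)
b = np.array([x])

np.testing.assert_allclose(a, b)  # passes

try:
    np.testing.assert_array_almost_equal_nulp(a, b)  # nulp=1 by default
    strict_passed = True
except AssertionError:
    strict_passed = False

assert not strict_passed  # the ULP-based check is stricter
```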
52 changes: 0 additions & 52 deletions climada/entity/exposures/test/test_nightlight.py
@@ -56,22 +56,6 @@ def test_required_files(self):
self.assertRaises(ValueError, nightlight.get_required_nl_files,
(-90, 90))

def test_check_files_exist(self):
"""Test check_nightlight_local_file_exists"""
# If invalid directory is supplied it has to fail
try:
nightlight.check_nl_local_file_exists(
np.ones(np.count_nonzero(BM_FILENAMES)), 'Invalid/path')[0]
raise Exception("if the path is not valid, check_nl_local_file_exists should fail")
except ValueError:
pass
files_exist = nightlight.check_nl_local_file_exists(
np.ones(np.count_nonzero(BM_FILENAMES)), SYSTEM_DIR)
self.assertTrue(
files_exist.sum() > 0,
f'{files_exist} {BM_FILENAMES}'
)

def test_download_nightlight_files(self):
"""Test check_nightlight_local_file_exists"""
# Not the same length of arguments
@@ -118,42 +102,6 @@ def test_get_required_nl_files(self):
bool = np.array_equal(np.array([0, 0, 0, 0, 0, 0, 1, 0]), req_files)
self.assertTrue(bool)

def test_check_nl_local_file_exists(self):
""" Test that an array with the correct number of already existing files
is produced, the LOGGER messages logged and the ValueError raised. """

# check logger messages by giving a to short req_file
with self.assertLogs('climada.entity.exposures.litpop.nightlight', level='WARNING') as cm:
nightlight.check_nl_local_file_exists(required_files = np.array([0, 0, 1, 1]))
self.assertIn('The parameter \'required_files\' was too short and is ignored',
cm.output[0])

# check logger message: not all files are available
with self.assertLogs('climada.entity.exposures.litpop.nightlight', level='DEBUG') as cm:
nightlight.check_nl_local_file_exists()
self.assertIn('Not all satellite files available. Found ', cm.output[0])
self.assertIn(f' out of 8 required files in {Path(SYSTEM_DIR)}', cm.output[0])

# check logger message: no files found in checkpath
check_path = Path('climada/entity/exposures')
with self.assertLogs('climada.entity.exposures.litpop.nightlight', level='INFO') as cm:
# using a random path where no files are stored
nightlight.check_nl_local_file_exists(check_path=check_path)
self.assertIn(f'No satellite files found locally in {check_path}',
cm.output[0])

# test raises with wrong path
check_path = Path('/random/wrong/path')
with self.assertRaises(ValueError) as cm:
nightlight.check_nl_local_file_exists(check_path=check_path)
self.assertEqual(f'The given path does not exist: {check_path}',
str(cm.exception))

# test that files_exist is correct
files_exist = nightlight.check_nl_local_file_exists()
self.assertGreaterEqual(int(sum(files_exist)), 3)
self.assertLessEqual(int(sum(files_exist)), 8)

# Execute Tests
if __name__ == "__main__":
TESTS = unittest.TestLoader().loadTestsFromTestCase(TestNightLight)