Hello! I'm Kyle P. Messier, PhD, a Stadtman Tenure-Track Investigator at the National Institute of Environmental Health Sciences (NIEHS).
The Spatiotemporal Exposures and Toxicology ({SET}) group at NIEHS has a broad interest in geospatial exposomics and risk mapping.
Please check out our gh-pages website for details on the people, papers, and software.
Our code and software are hosted on the NIEHS GitHub Enterprise.
Here is a current list of our software in development:
| No. | Package Name | Description | Status |
|-----|--------------|-------------|--------|
| 1 | amadeus | A Machine for Data, Environments, and User Setup for common environmental and climate health datasets is an R package developed to improve and expedite users' access to large, publicly available geospatial datasets. | |
| 2 | beethoven | Building an Extensible, Reproducible, Test-driven, Harmonized, Open-source, Versioned, Ensemble model for air quality is an R package for developing ensemble models of air quality. | |
| 3 | chopin | Computation for Climate and Health research On Parallelized Infrastructure automates parallelization in spatial operations with chopin functions as well as sf/terra functions. | |
| 4 | GeoTox | A source-to-outcome modeling framework with an S3 object-oriented approach that facilitates the calculation and visualization of single- and multiple-chemical risk at individual and group levels. | |
| 5 | RGCA | Implements Reflected Generalized Concentration Addition: a geometric, piecewise inverse function for 3+ parameter sigmoidal models used in chemical mixture concentration-response modeling. | |
| 6 | PrestoGP | Scalable penalized regression on spatio-temporal outcomes using Gaussian processes. Designed for big data, large-scale geospatial exposure assessment, and geophysical modeling. | |
We are focused on developing and promoting software and computational best practices, such as test-driven development (TDD) and open-source code, for the environmental health sciences. To this end, we have protocols in place to ensure that our code is well-documented, tested, and reproducible. Below are some of the key practices we follow:
We use various testing approaches, including unit and integration tests, to ensure the functionality and quality of our code.
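As a sketch, a unit test written with the `testthat` package might look like the following; the function `celsius_to_kelvin()` is a hypothetical example, not part of our packages:

```r
library(testthat)

# Hypothetical helper: convert temperature from Celsius to Kelvin
celsius_to_kelvin <- function(temp_c) {
  stopifnot(is.numeric(temp_c))
  temp_c + 273.15
}

test_that("celsius_to_kelvin returns correct, valid values", {
  expect_equal(celsius_to_kelvin(0), 273.15)
  expect_equal(celsius_to_kelvin(-273.15), 0)
  expect_error(celsius_to_kelvin("zero"))  # non-numeric input is rejected
})
```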
Version control of software is essential for reproducibility and collaboration, so we use Git and the NIEHS Enterprise GitHub for both.
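For R projects, a minimal sketch of initializing version control with the `usethis` package (run from the project directory; the Enterprise host URL below is an assumed example):

```r
library(usethis)

# Initialize a local Git repository for the current project
use_git(message = "Initial commit")

# Create a matching remote repository and push; for GitHub Enterprise,
# the `host` argument points at the Enterprise instance (example URL)
use_github(host = "https://github.example.gov")
```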
Within GitHub, we utilize continuous integration and continuous deployment (CI/CD) workflows to ensure that our code is always functional and up-to-date. Multiple **branch protection rules** are set up and enforced for our GitHub repositories (a sketch of running the required checks locally follows this list):
- Require a pull request and at least one review before merging to `main`
- Test pass: Linting: Code shall adhere to the style/linting rules defined in the repository.
- Test pass: Test coverage: A given push shall not decrease the overall test coverage of the repository.
- Test pass: Build checks: The code shall build without errors or warnings.
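The same checks can be run locally before pushing; a minimal sketch with the `lintr`, `covr`, and `devtools` packages (assuming you are in an R package directory):

```r
# Lint the package against the style rules defined in the repository
lintr::lint_package()

# Measure overall unit test coverage for the package
coverage <- covr::package_coverage()
covr::percent_coverage(coverage)

# Run R CMD check: the build should complete with no errors or warnings
devtools::check()
```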
The CI/CD workflows in GitHub are set up to run on every push to the main branch and on every pull request. The workflows are defined in YAML files in the `.github/workflows` directory of the repository.
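For R packages, standard workflow files can be added with `usethis`; the action names below are example templates from the r-lib/actions repository:

```r
# Add a standard R CMD check workflow (.github/workflows/R-CMD-check.yaml)
usethis::use_github_action("check-standard")

# Add a workflow that computes and reports test coverage
usethis::use_github_action("test-coverage")
```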
Our unit and integration tests check pipeline inputs and outputs, including, for example (see the sketch after this list):

- data type
- data name
- data size
- relative paths
- the output of one module matching the expected input of the next module
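A minimal sketch of such an I/O check with `testthat`; the file path, column names, and object class are hypothetical:

```r
library(testthat)

test_that("module output matches the next module's expected input", {
  out_path <- "output/covariates_2020.rds"  # hypothetical relative path
  expect_true(file.exists(out_path))

  covariates <- readRDS(out_path)
  expect_s3_class(covariates, "data.frame")                       # data type
  expect_true(all(c("site_id", "time") %in% names(covariates)))   # data name
  expect_gt(nrow(covariates), 0)                                  # data size
})
```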
Starting from the end product, we work backwards, articulating the tests needed at each stage. For example, a final modeled exposure product might be checked for the following (a sketch of these checks follows the list):
- NetCDF file format
- Numeric, double precision
- NA values
- Variable names exist
- Naming convention
- Non-negative variance ($\sigma^2$)
- Mean is reasonable ($\mu$)
- SI units
- In the geographic domain (e.g., US + buffer)
- In the time range (e.g., 2018-2022)
- Projections
- Coordinate names (e.g., lat/lon)
- Time in an acceptable format
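A sketch of a few of these checks using the `terra` and `testthat` packages; the file name and the layer names ("mean", "variance") are hypothetical:

```r
library(terra)
library(testthat)

test_that("final exposure product passes end-product checks", {
  # Hypothetical NetCDF output with "mean" and "variance" layers
  exposure <- rast("output/exposure_2018_2022.nc")

  # Variable names exist and follow the naming convention
  expect_true(all(c("mean", "variance") %in% names(exposure)))

  # Numeric values and non-negative variance (sigma^2 >= 0)
  v <- values(exposure[["variance"]])
  expect_true(is.numeric(v))
  expect_true(all(v >= 0, na.rm = TRUE))

  # A projection is defined
  expect_true(crs(exposure) != "")

  # In the geographic domain (rough CONUS bounding box + buffer)
  e <- ext(exposure)
  expect_true(xmin(e) >= -130 && xmax(e) <= -60)
  expect_true(ymin(e) >= 20 && ymax(e) <= 55)
})
```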
1. **Write a Test**: Before you start writing any code, write a test case for the functionality you want to implement. This test should fail initially because you haven't written the code to make it pass yet. The test defines the expected behavior of your code.
2. **Run the Test**: Run the test to ensure it fails. This step confirms that your test is correctly assessing the functionality you want to implement.
3. **Write the Minimum Code**: Write the minimum amount of code required to make the test pass. Don't worry about writing perfect or complete code at this stage; the goal is just to make the test pass.
4. **Run the Test Again**: After writing the code, run the test again. If it passes, your code now meets the specified requirements.
5. **Refactor (if necessary)**: If your code is working and the test passes, you can refactor your code to improve its quality, readability, or performance. The key here is that you should have test coverage to ensure you don't introduce new bugs while refactoring.
6. **Repeat**: Continue this cycle of writing a test, making it fail, writing the code to make it pass, and refactoring as needed. Each cycle should be short and focused on a small piece of functionality.
7. **Complete the Feature**: Keep repeating the process until your code meets all the requirements for the feature you're working on.
TDD helps ensure that your code is reliable and that it remains functional as you make changes and updates. It also encourages a clear understanding of the requirements and promotes better code design.
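A minimal sketch of one red-green-refactor cycle in R; the function `impute_mean()` is a hypothetical example:

```r
library(testthat)

# Steps 1-2: write the test first; with no implementation it fails ("red")
test_impute <- function() {
  test_that("impute_mean replaces NA with the mean of observed values", {
    expect_equal(impute_mean(c(1, NA, 3)), c(1, 2, 3))
  })
}
# Calling test_impute() here would fail: impute_mean() does not exist yet

# Step 3: write the minimum code to make the test pass
impute_mean <- function(x) {
  x[is.na(x)] <- mean(x, na.rm = TRUE)
  x
}

# Step 4: run the test again; it now passes ("green")
test_impute()

# Step 5: refactor (e.g., add input validation), rerunning test_impute()
# after each change to guard against regressions
```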
We use the `targets` and `snakemake` packages, in R and Python respectively, to create reproducible workflows for our data analysis. These packages allow us to define the dependencies between the steps in our analysis and ensure that the analysis is reproducible. Additionally, they keep track of pipeline objects and skip steps that have already been run, saving time and resources.
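A minimal sketch of a `targets` pipeline definition (`_targets.R`); the data file and the helper functions `read_sites()` and `fit_model()` are hypothetical:

```r
# _targets.R: defines the pipeline steps and the dependencies between them
library(targets)

# Hypothetical helper file defining read_sites() and fit_model()
tar_source("R/functions.R")

list(
  # Track the raw data file so downstream targets rerun when it changes
  tar_target(site_file, "data/sites.csv", format = "file"),
  tar_target(sites, read_sites(site_file)),  # depends on site_file
  tar_target(model, fit_model(sites))        # depends on sites
)
```

Running `targets::tar_make()` then executes the pipeline, skipping any target whose code and upstream dependencies are unchanged.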
Key benefits include:

- **Reproducibility**: By defining the dependencies between the steps in our analysis, we ensure that our analysis is reproducible. This is essential for scientific research and data analysis.
- **High-Level Abstraction**: `targets` and `snakemake` allow us to define our analysis at a high level of abstraction, making it easier to understand and maintain.
- **Testing**: Creating pipelines and unit/integration testing go hand in hand. As we write the pipeline, the tests to write become obvious.