CTF for Science Framework

Welcome to the CTF for Science Framework, a modular and extensible platform from the AI Institute in Dynamic Systems for benchmarking modeling methods on chaotic systems. The framework supports the evaluation and comparison of models for systems such as ordinary differential equations (ODEs, e.g., the Lorenz system) and partial differential equations (PDEs, e.g., the Kuramoto-Sivashinsky equation) using standardized datasets and metrics.

Overview

The framework provides:

  • A standardized environment for submitting and evaluating models.
  • Predefined datasets and evaluation metrics.
  • Tools for running models, saving results, and visualizing performance.

Whether you're a researcher benchmarking a new method or a contributor adding a model, this framework streamlines the process.

Team:

Name Email Github Affiliation
Philippe Wyder [email protected] GitWyd University of Washington
Judah Goldfeder [email protected] Jgoldfeder Columbia University
Alexey Yermakov [email protected] yyexela University of Washington
Yue Zhao [email protected] yuezhao6371 SURF (Netherlands)
Stefano Riva [email protected] steriva Politecnico di Milano
Jan Williams [email protected] Jan-Williams University of Washington
David Zoro [email protected] zorodav University of Washington
Amy Sara Rude [email protected] amysrude University of Washington
Matteo Tomasetto [email protected] MatteoTomasetto Politecnico di Milano
Joe Germany [email protected] joeGermany American University of Beirut
Joseph Bakarji [email protected] josephbakarji American University of Beirut
Georg Maierhofer [email protected] GeorgAUT University of Cambridge
Miles Cranmer [email protected] MilesCranmer University of Cambridge
Nathan Kutz [email protected] nathankutz University of Washington

🔧 Quickstart

Run a simple experiment on the Lorenz dataset with naive baselines:

Prerequisites

Clone the Repository

Using SSH (recommended):

git clone --recursive [email protected]:CTF-for-Science/ctf4science.git

Using HTTPS (requires GitHub authentication):

git clone --recursive https://github.com/CTF-for-Science/ctf4science.git

Install the Repository and run an example

git clone --recurse-submodules https://github.com/CTF-for-Science/ctf4science.git
cd ctf4science
pip install -e .
python models/CTF_NaiveBaselines/run.py models/CTF_NaiveBaselines/config/config_Lorenz_average_batch_all.yaml

Note that the --recurse-submodules flag also clones the associated model submodule repositories. If you only want to run CTF_NaiveBaselines and don't want to download every submodule, clone without the flag and then initialize just that submodule with git submodule update --init models/CTF_NaiveBaselines, as shown below.
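For example, to fetch only the naive-baselines submodule after a plain clone:

git clone https://github.com/CTF-for-Science/ctf4science.git
cd ctf4science
git submodule update --init models/CTF_NaiveBaselines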

Note: This runs the 'average' baseline on the Lorenz dataset for sub-datasets 1 through 6. Results, including predictions, evaluation metrics, and visualizations (e.g., trajectory and histogram plots), are automatically saved in results/ODE_Lorenz/CTF_NaiveBaselines_average/<batch_id>/.

Note: To install optional dependencies, run pip install -e .[all] instead.

📁 Results Directory Structure

After a run, results are saved to:

results/<dataset>/<model>/<batch_id>/
  ├── <pair_id>/               # Metrics, predictions, and visualizations for each sub-dataset
  │   ├── config.yaml          # Configuration used
  │   ├── predictions.npy      # Predicted data
  │   ├── evaluation_results.yaml # Evaluation metrics
  │   └── visualizations/      # Auto-generated plots (e.g., trajectories.png)
  └── batch_results.yaml       # Aggregated batch metrics
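As an illustration, the outputs of a run can be inspected with standard tools. The snippet below is a minimal sketch (not part of the framework) that assumes the layout above and uses numpy and PyYAML to load one sub-dataset's predictions and metrics; replace <batch_id> and <pair_id> with the directories created by your run.

# Minimal sketch for inspecting a finished run (not part of the framework).
from pathlib import Path
import numpy as np
import yaml

pair_dir = Path("results/ODE_Lorenz/CTF_NaiveBaselines_average") / "<batch_id>" / "<pair_id>"

predictions = np.load(pair_dir / "predictions.npy")        # predicted data
with open(pair_dir / "evaluation_results.yaml") as f:
    metrics = yaml.safe_load(f)                             # evaluation metrics

print(predictions.shape)
print(metrics)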

Getting Started With Your Own Model

To install and start using the framework, follow the instructions in docs/getting_started.md. This guide covers:

  • Installation steps.
  • Running a quick example with a baseline model.
  • Adding your own model to the framework (see the sketch after this list).
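To give a flavor of the model entry point, the sketch below is purely illustrative: it only mirrors the invocation pattern used by the baselines above (a run.py script that takes a config YAML path). The actual data-loading, prediction-saving, and evaluation utilities your model should call come from the ctf4science package and are described in docs/getting_started.md; every name below is a hypothetical placeholder, not the framework's API.

# Hypothetical models/MyModel/run.py - illustrative sketch only.
# Real models should use the ctf4science utilities described in
# docs/getting_started.md for loading data and saving predictions.
import sys
import yaml
import numpy as np

def main(config_path: str) -> None:
    # Read the experiment configuration (dataset, sub-datasets, model options).
    with open(config_path) as f:
        config = yaml.safe_load(f)

    # Placeholder "model": replace with your method's training and prediction code.
    predictions = np.zeros((100, 3))  # dummy output

    print(f"Loaded config: {config}")
    print(f"Produced predictions with shape {predictions.shape}")

if __name__ == "__main__":
    main(sys.argv[1])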

Directory Structure

  • models/: Contains model implementations (e.g., baselines and user-contributed models).
  • results/: Stores model predictions, evaluation results, and visualizations.
  • docs/: Additional documentation (e.g., contributing, configuration).
  • notebooks/: Jupyter notebooks to analyze or visualize results.
  • tests/: Contains unit tests for the ctf4science package.

Contributing a Model

We welcome contributions! To add a new model or improve the framework, see the detailed steps in docs/getting_started.md#contributing-a-new-model.

Contributing to the ctf4science Package

Refer to docs/developer_instructions.md.

Kaggle Page

Check out the Dynamic AI Institute Kaggle Page for datasets and upcoming contests.

Papers that Inspired This Work

License

This project is licensed under the MIT License. See the LICENSE file for details. This license covers only the ctf4science package; models linked as submodules are subject to their respective licenses.

Questions?

For support or inquiries, open an issue on our GitHub repository.
