Welcome to the CTF for Science Framework from the AI Institute in Dynamic Systems: a modular, extensible platform for benchmarking modeling methods on chaotic systems. The framework supports evaluating and comparing models for systems governed by ordinary differential equations (ODEs, e.g., the Lorenz system) and partial differential equations (PDEs, e.g., the Kuramoto-Sivashinsky equation) using standardized datasets and metrics.
The framework provides:
- A standardized environment for submitting and evaluating models.
- Predefined datasets and evaluation metrics.
- Tools for running models, saving results, and visualizing performance.
Whether you're a researcher benchmarking a new method or a contributor adding a model, this framework streamlines the process.
| Name | GitHub | Affiliation |
|---|---|---|
| Philippe Wyder | GitWyd | University of Washington |
| Judah Goldfeder | Jgoldfeder | Columbia University |
| Alexey Yermakov | yyexela | University of Washington |
| Yue Zhao | yuezhao6371 | SURF (Netherlands) |
| Stefano Riva | steriva | Politecnico di Milano |
| Jan Williams | Jan-Williams | University of Washington |
| David Zoro | zorodav | University of Washington |
| Amy Sara Rude | amysrude | University of Washington |
| Matteo Tomasetto | MatteoTomasetto | Politecnico di Milano |
| Joe Germany | joeGermany | American University of Beirut |
| Joseph Bakarji | josephbakarji | American University of Beirut |
| Georg Maierhofer | GeorgAUT | University of Cambridge |
| Miles Cranmer | MilesCranmer | University of Cambridge |
| Nathan Kutz | nathankutz | University of Washington |
Run a simple experiment on the Lorenz dataset with the naive baselines. Prerequisites:
- Git installed on your system
- A GitHub account
- An SSH key set up with GitHub (see https://docs.github.com/en/authentication/connecting-to-github-with-ssh)
Using SSH (recommended):

```bash
git clone --recurse-submodules git@github.com:CTF-for-Science/ctf4science.git
```

Using HTTPS (requires GitHub authentication):

```bash
git clone --recurse-submodules https://github.com/CTF-for-Science/ctf4science.git
```

Then install the package and run a baseline:

```bash
cd ctf4science
pip install -e .
python models/CTF_NaiveBaselines/run.py models/CTF_NaiveBaselines/config/config_Lorenz_average_batch_all.yaml
```

Note that the `--recurse-submodules` flag clones the associated model submodule repositories as well. If you only want to run `CTF_NaiveBaselines` and don't want to download all submodules, run `git submodule update --init models/CTF_NaiveBaselines` after a plain `git clone https://github.com/CTF-for-Science/ctf4science.git`, thereby circumventing cloning of all the modules.
Note: This runs the 'average' baseline on the Lorenz dataset for sub-datasets 1 through 6. Results, including predictions, evaluation metrics, and visualizations (e.g., trajectory and histogram plots), are automatically saved in `results/ODE_Lorenz/CTF_NaiveBaselines_average/<batch_id>/`.
Note: To install optional dependencies, run `pip install -e ".[all]"` instead (the quotes prevent shells such as zsh from interpreting the brackets).
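The run is driven entirely by the YAML config passed to `run.py`. Before adapting one, you can print the bundled quick-start config to see which fields a batch run defines; a minimal sketch, assuming PyYAML is installed (the framework's configs are plain YAML):

```python
# Print the quick-start config to inspect the fields a batch run expects.
import yaml

path = "models/CTF_NaiveBaselines/config/config_Lorenz_average_batch_all.yaml"
with open(path) as f:
    config = yaml.safe_load(f)
print(yaml.dump(config, sort_keys=False))
```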
After a run, results are saved to:
```
results/<dataset>/<model>/<batch_id>/
├── <pair_id>/                   # Metrics, predictions, and visualizations for each sub-dataset
│   ├── config.yaml              # Configuration used
│   ├── predictions.npy          # Predicted data
│   ├── evaluation_results.yaml  # Evaluation metrics
│   └── visualizations/          # Auto-generated plots (e.g., trajectories.png)
└── batch_results.yaml           # Aggregated batch metrics
```
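These artifacts can be read directly for quick post-hoc analysis. A minimal sketch, assuming `numpy` and PyYAML; the "latest batch" selection and the pair-id directory name `1` are assumptions, so substitute the `batch_id` your run actually produced:

```python
# Load predictions and metrics from the most recent batch of a quick-start run.
from pathlib import Path
import numpy as np
import yaml

model_dir = Path("results/ODE_Lorenz/CTF_NaiveBaselines_average")
batch_dir = sorted(model_dir.iterdir())[-1]   # naive "latest batch" pick (assumption)
pair_dir = batch_dir / "1"                    # sub-dataset (pair_id) 1 (assumption)
predictions = np.load(pair_dir / "predictions.npy")
metrics = yaml.safe_load((pair_dir / "evaluation_results.yaml").read_text())
print(predictions.shape, metrics)
```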
To install and start using the framework, follow the instructions in docs/getting_started.md. This guide covers:
- Installation steps.
- Running a quick example with a baseline model.
- Adding your own model to the framework (see the sketch after this list).
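For orientation, a contributed model typically exposes a `run.py` that takes a config path, mirroring the quick-start command above. The skeleton below is purely hypothetical: the config fields and output handling are placeholders, not the actual `ctf4science` API, which docs/getting_started.md specifies:

```python
# Hypothetical skeleton for a contributed model's run.py.
# All names and fields here are placeholders; follow docs/getting_started.md
# for the framework's real config schema and result-saving helpers.
import sys
import yaml
import numpy as np

def main(config_path: str) -> None:
    with open(config_path) as f:
        config = yaml.safe_load(f)
    horizon = config.get("horizon", 100)     # invented field, for illustration
    predictions = np.zeros((horizon, 3))     # dummy forecast for a 3-state ODE
    np.save("predictions.npy", predictions)

if __name__ == "__main__":
    main(sys.argv[1])
```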
- `models/`: Model implementations (e.g., baselines and user-contributed models).
- `results/`: Model predictions, evaluation results, and visualizations.
- `docs/`: Additional documentation (e.g., contributing, configuration).
- `notebooks/`: Jupyter notebooks for analyzing and visualizing results.
- `tests/`: Unit tests for the `ctf4science` package.
We welcome contributions! To add a new model or improve the framework, see the detailed steps in docs/getting_started.md#contributing-a-new-model.
Refer to docs/developer_instructions.md.
Check out the Dynamic AI Institute Kaggle page for datasets and upcoming contests.
- Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers
- CoDBench: A Critical Evaluation of Data-driven Models for Continuous Dynamical Systems
- Weak baselines and reporting biases lead to over-optimism in machine learning for fluid-related partial differential equations
- The Well: Dynamic System Dataset & Benchmarking
This project is licensed under the MIT License; see the LICENSE file for details. The license covers only the `ctf4science` package. All models linked as submodules are subject to their respective licenses.
For support or inquiries, open an issue on our GitHub repository.