Commit d30e06c

Merge branch 'master' into extend-scimljlbenchmarks-deadline
2 parents f1e9c0f + 00296cf commit d30e06c

File tree

10 files changed: +142 −37 lines


.github/workflows/deploy.yml

Lines changed: 9 additions & 21 deletions

```diff
@@ -9,38 +9,26 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@v4
         with:
           persist-credentials: false
-      # NOTE: Python is necessary for the pre-rendering (minification) step
       - name: Install python
-        uses: actions/setup-python@v2
+        uses: actions/setup-python@v5
         with:
-          python-version: '3.8'
-      # NOTE: Here you can install dependencies such as matplotlib if you use
-      # packages such as PyPlot.
-      # - run: pip install matplotlib
+          python-version: '3.11'
       - name: Install Julia
-        uses: julia-actions/setup-julia@v1
+        uses: julia-actions/setup-julia@v2
         with:
-          version: 1.5
-      # NOTE
-      # The steps below ensure that NodeJS and Franklin are loaded then it
-      # installs highlight.js which is needed for the prerendering step
-      # (code highlighting + katex prerendering).
-      # Then the environment is activated and instantiated to install all
-      # Julia packages which may be required to successfully build your site.
-      # The last line should be `optimize()` though you may want to give it
-      # specific arguments, see the documentation or ?optimize in the REPL.
+          version: '1.10'
       - run: julia -e '
           using Pkg; Pkg.add(["NodeJS", "Franklin"]);
           using NodeJS; run(`$(npm_cmd()) install highlight.js`);
           using Franklin;
           Pkg.activate("."); Pkg.instantiate();
           optimize()'
       - name: Build and Deploy
-        uses: JamesIves/github-pages-deploy-action@releases/v3
+        uses: JamesIves/github-pages-deploy-action@v4
         with:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          BRANCH: gh-pages
-          FOLDER: __site
+          token: ${{ secrets.GITHUB_TOKEN }}
+          branch: gh-pages
+          folder: __site
```

gsoc/gsoc_sciml.md

Lines changed: 14 additions & 0 deletions

```diff
@@ -113,3 +113,17 @@ analysis.
 **Expected Project Size**: 175 hours or 350 hours depending on the chosen subtasks.
 
 **Difficulty**: Easy to Medium depending on the chosen subtasks.
+
+## ODE-Based Reservoir Models in ReservoirComputing.jl
+
+[ReservoirComputing.jl](https://github.com/SciML/ReservoirComputing.jl) currently targets discrete-time reservoir models such as Echo State Networks and Next Generation Reservoir Computing. The aim of this project is to add a ContinuousReservoirComputer model: a general continuous-time approach to reservoir computing in which the hidden state evolves via an ODE. This extension would then make it possible to extend ReservoirComputing.jl with models such as Liquid State Machines.
+
+**Recommended Skills**: Background knowledge in numerical analysis and some basics of reservoir computing.
+
+**Expected Results**: A new ContinuousReservoirComputer model integrated into ReservoirComputing.jl, plus additional time-continuous models that build on the new APIs.
+
+**Mentors**: [Francesco Martinuzzi](https://github.com/MartinuzziFrancesco), [Chris Rackauckas](https://github.com/ChrisRackauckas)
+
+**Expected Project Size**: 175 hours (core model + docs/tests), 350 hours if adding stretch items (additional models).
+
+**Difficulty**: Medium
```
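As an aside on the project description above: the continuous-time reservoir idea, a hidden state evolving via an ODE such as dx/dt = -leak*x + tanh(W*x + W_in*u), can be sketched in a few lines. The Python below is an illustrative sketch only; the eventual implementation would be in Julia, and every name here (`continuous_reservoir_step`, `W`, `W_in`, `leak`) is hypothetical, not ReservoirComputing.jl API.

```python
import math

def continuous_reservoir_step(x, u, W, W_in, leak, dt):
    """One explicit-Euler step of a continuous-time (leaky) reservoir:
        dx/dt = -leak * x + tanh(W @ x + W_in * u)
    x is the hidden state, u a scalar input; W and W_in are fixed random
    weights in a real reservoir (hard-coded here for illustration)."""
    n = len(x)
    pre = [sum(W[i][j] * x[j] for j in range(n)) + W_in[i] * u for i in range(n)]
    return [x[i] + dt * (-leak * x[i] + math.tanh(pre[i])) for i in range(n)]

# Drive a tiny 3-neuron reservoir with a sine input.
n = 3
W = [[0.1 if i != j else -0.2 for j in range(n)] for i in range(n)]
W_in = [0.5, -0.3, 0.8]
x = [0.0] * n
dt = 0.01
for k in range(1000):
    x = continuous_reservoir_step(x, math.sin(k * dt), W, W_in, leak=1.0, dt=dt)
```

Because the drift is -leak*x plus a bounded tanh term, the state stays bounded for any input, which is the basic well-posedness property a ContinuousReservoirComputer would need.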

news/2018/01/24/Parameters.md

Lines changed: 3 additions & 3 deletions

```diff
@@ -173,7 +173,7 @@ use this guide and request new additions as necessary!
 Adjoint sensitivity analysis lets you directly solve for the derivative of some
 functional of the differential equation solution, such as a cost function in
 an optimization problem.
-[DifferentialEquations.jl now has a package-independent adjoint sensitivity analysis implementation](https://diffeq.sciml.ai/latest/analysis/sensitivity) that lets you use any of the common
+[DifferentialEquations.jl now has a package-independent adjoint sensitivity analysis implementation](https://docs.sciml.ai/SciMLSensitivity/stable/) that lets you use any of the common
 interface ODE solvers to perform this analysis. While there are more optimizations
 which still need to be done in this area, this will be a useful feature for those
 looking to perform optimization on the ODE solver.
```
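As a concrete illustration of the idea in the hunk above (not the DifferentialEquations.jl implementation), the adjoint method for the scalar ODE x' = a*x with cost C = x(T) solves the adjoint equation λ' = -a*λ with λ(T) = 1 backwards in time and accumulates dC/da = ∫₀ᵀ λ(t)·x(t) dt. A minimal Euler-discretized sketch in Python:

```python
import math

def adjoint_gradient(a, T, n):
    """Adjoint sensitivity dC/da for x' = a*x, x(0) = 1, cost C = x(T)."""
    h = T / n
    # Forward pass: Euler-integrate the state, storing the trajectory.
    xs = [1.0]
    for _ in range(n):
        xs.append(xs[-1] + h * a * xs[-1])
    # Backward pass: integrate the adjoint lam' = -a*lam, lam(T) = 1,
    # from t = T down to t = 0, accumulating the gradient quadrature.
    lam = 1.0
    grad = 0.0
    for i in range(n, 0, -1):
        grad += h * lam * xs[i]
        lam += h * a * lam  # one Euler step backwards in time
    return grad

a, T = 0.5, 1.0
approx = adjoint_gradient(a, T, 100_000)
exact = T * math.exp(a * T)  # analytic d/da of x(T) = exp(a*T)
```

Here x(T) = exp(a·T), so the exact sensitivity is T·exp(a·T); the sketch reproduces it to several decimal places, and only one backward solve is needed no matter how many parameters the cost depends on, which is the whole appeal of the adjoint approach.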
```diff
@@ -185,7 +185,7 @@ function approach. While L2-error of the solution against data corresponds to
 maximum likelihood estimation under the assumption of a Normal likelihood, this
 is constrained to very specific likelihood functions (Normal). Now our tools
 allow for giving a likelihood distribution associated with each time point.
-[We have some examples in the documentation showing how to use MLE estimation to get fitting distributions](https://diffeq.sciml.ai/latest/analysis/parameter_estimation).
+[We have some examples in the documentation showing how to use MLE estimation to get fitting distributions](https://docs.sciml.ai/Overview/stable/highlevels/inverse_problems/).
 This process is a more precise approach to data fitting and thus should be an
 interesting new tool to use in cases where one wants to fit parameters against
 a lot of data.
@@ -198,7 +198,7 @@ requires that your function is defined via `@ode_def` and will write and run a
 code from [Stan](https://mc-stan.org/) to generate posterior distributions. The
 `turing_inference` function uses [Turing.jl](https://github.com/yebai/Turing.jl)
 and can work with any DifferentialEquations.jl object. These functions simply
-require your `DEProblem`, data, and prior distributions and the [rest of the inference setup is done for you](https://diffeq.sciml.ai/latest/analysis/parameter_estimation). Thus this is a very quick way to make use of
+require your `DEProblem`, data, and prior distributions and the [rest of the inference setup is done for you](https://docs.sciml.ai/Overview/stable/highlevels/inverse_problems/). Thus this is a very quick way to make use of
 the power of Bayesian inference tools!
 
 ## Small Problem Speedups
```

news/2018/02/17/Reactions.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -77,7 +77,7 @@ is a combination that can be orders of magnitude faster than Stan.jl (though
 additional testing which takes into account accuracy differences will be
 needed for a more precise determination). Still, it's as simple to use as the
 other Bayesian functions (see
-[the example](https://diffeq.sciml.ai/latest/analysis/parameter_estimation))
+[the example](https://docs.sciml.ai/Overview/stable/highlevels/inverse_problems/))
 and so give it a try if you're up for it.
 
 ## Livestream Tutorial
```

news/2020/03/29/SciML.md

Lines changed: 5 additions & 5 deletions

```diff
@@ -110,11 +110,11 @@ It is very rare that someone thinks their model is perfect. Thus a large portion
 of the focus of our organization is to help scientific modelers derive equations
 and fit models. This includes tools for:
 
-- [Maximum likelihood and Bayesian parameter estimation](https://diffeq.sciml.ai/dev/analysis/parameter_estimation/)
-- [Forward and adjoint local sensitivity analysis](https://diffeq.sciml.ai/dev/analysis/sensitivity/) for fast gradients
-- [Global sensitivity analysis](https://diffeq.sciml.ai/dev/analysis/global_sensitivity/)
+- [Maximum likelihood and Bayesian parameter estimation](https://docs.sciml.ai/Overview/stable/highlevels/inverse_problems/)
+- [Forward and adjoint local sensitivity analysis](https://docs.sciml.ai/SciMLSensitivity/stable/) for fast gradients
+- [Global sensitivity analysis](https://docs.sciml.ai/GlobalSensitivity/stable/)
 - [Building surrogates of models](https://surrogates.sciml.ai/latest/)
-- [Uncertainty quantification](https://diffeq.sciml.ai/dev/analysis/uncertainty_quantification/)
+- [Uncertainty quantification](https://docs.sciml.ai/Overview/stable/highlevels/uncertainty_quantification/)
 
 Some of our newer tooling like [DataDrivenDiffEq.jl](https://github.com/SciML/DataDrivenDiffEq.jl)
 can even take in timeseries data and generate LaTeX code for the best fitting model
@@ -374,7 +374,7 @@ are the following:
 and its Physics-Informed Neural Networks (PINN) functionality,
 [DataDrivenDiffEq.jl](https://github.com/SciML/DataDrivenDiffEq.jl), etc.
 Because it does not require differential equations, we plan to split out the
-documentation of [Global Sensitivity Analysis](https://diffeq.sciml.ai/latest/analysis/global_sensitivity/)
+documentation of [Global Sensitivity Analysis](https://docs.sciml.ai/GlobalSensitivity/stable/)
 to better facilitate its wider usage.
 - We plan to continue improving the [ModelingToolkit](https://github.com/SciML/ModelingToolkit.jl)
 ecosystem utilizing its symbolic nature for [generic specification of PDEs](https://github.com/SciML/DifferentialEquations.jl/issues/469).
```

news/2020/05/09/ModelDiscovery.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -150,7 +150,7 @@ probabilistic programming libraries ([ModelingToolkit.jl](https://github.com/Sci
 automatically transforms Julia differential equation code to Stan). Together,
 this serves as a very good resource for non-Bayesian-inclined users to utilize
 Bayesian parameter estimation with just one function.
-[See the parameter estimation documentation for more details](https://diffeq.sciml.ai/latest/analysis/parameter_estimation/).
+[See the parameter estimation documentation for more details](https://docs.sciml.ai/Overview/stable/highlevels/inverse_problems/).
 
 As a quick update to the probabilistic programming space, we would like to note
 that the Turing.jl library performs exceptionally well in comparison to the
```

news/2020/06/01/ModellingToolkit.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -278,7 +278,7 @@ there is no modification that is required. When forward-mode automatic
 differentiation libraries are used, type handling will automatically
 promote to ensure the solution is differentiated properly. When reverse-mode
 automatic differentiation is used, the backpropagation will automatically
-be replaced with [adjoint sensitivity methods](https://diffeq.sciml.ai/latest/analysis/sensitivity/#solve-Differentiation-Examples-1)
+be replaced with [adjoint sensitivity methods](https://docs.sciml.ai/SciMLSensitivity/stable/getting_started/)
 which can be controlled through the `sensealg` keyword argument.
 **The result is full performance and flexibility, but no code changes
 required**.
```

news/2026/01/04/SciMLHealthBots.md (new file)

Lines changed: 101 additions & 0 deletions

@def rss_pubdate = Date(2026,1,4)
@def rss = """Introducing SciML Health Bots: Lowering Barriers While Raising Standards"""
@def published = " 4 January 2026 "
@def title = "Introducing SciML Health Bots: Lowering Barriers While Raising Standards"
@def authors = """<a href="https://github.com/ChrisRackauckas">Chris Rackauckas</a>"""

# Introducing SciML Health Bots: Lowering Barriers While Raising Standards

**How do you make contributing easier while demanding more from the code?**

SciML has 200+ packages with stricter standards than most of the Julia ecosystem. We test for static compilability, type stability, and interface consistency. Many upstream packages don't—AD packages like Zygote and Enzyme frequently introduce regressions, and foundational packages like Distributions.jl don't test for `juliac --trim` compatibility or static interfaces.

This means SciML is often the first to discover upstream bugs. We file issues, contribute fixes, and wait. But in the meantime, CI stays red. Maintainers have to remember "oh yeah, that's the Zygote issue." Contributors unfamiliar with the repo see red badges and don't know if their PR broke something or if it's a known issue. Merging requires manually checking logs to verify "the right test" is failing. This process is time-consuming and error-prone.

We want to raise standards even higher:

- Every SciML package compatible with `juliac --trim`
- Well-defined interfaces with tested assumptions (no relying on 1-based indexing, proper number type compatibility, GPU support)
- Explicit imports only—no `using` in package code
- Pull the rest of the Julia ecosystem toward these stricter coding practices

The question is: **how do we do this without sacrificing the ease of contributions SciML has always had for mathematicians and scientists?** How do we make contributing *even easier* while making code requirements *stricter*?

Our answer: **AI agents that enforce the hard stuff, track upstream issues, keep CI green, and let humans focus on the science.**

## The Key Insight

Strictness and accessibility only conflict when *humans* enforce the rules. With bots:

- Contributors don't memorize rules—bots explain violations
- No tribal knowledge about "known failures"—red means your code, green means merge
- Maintainers review architecture, not compliance

## New Standards

- **Always-green CI.** Green means merge. Red means your PR. No ambiguity.
- **Trim compatibility.** Explicit imports, static dispatch, no runtime codegen.
- **Static interfaces.** Type-stable APIs, consistent patterns.
- **Performance is a bug.** Regressions tracked and fixed automatically.

## Bot Types

13 specialized agents run continuously across all SciML repositories:

| Bot | Purpose |
|-----|---------|
| **CI Health Check** | Diagnose and fix CI failures, keep all repos green |
| **Issue Solver** | Investigate bugs, prioritize `bug`-labeled issues |
| **Dependency Update** | Handle upstream version bumps and breaking changes |
| **Explicit Imports** | Add explicit imports for trim compatibility |
| **Static Improvement** | Fix type instabilities and interface inconsistencies |
| **Performance Check** | Detect regressions, track benchmarks |
| **Deprecation Fix** | Update deprecated API usage |
| **Interface Check** | Verify consistent APIs across packages |
| **Docs Improvement** | Fix documentation issues |
| **Version Bump** | Check for needed releases |
| **Min Version Bump** | Update compat bounds |
| **Precompilation** | Improve load times |
| **Benchmark Check** | Monitor SciMLBenchmarks.jl |

Up to 48 agents run concurrently. Control which types are active:

```bash
sciml-ctl tasks show                  # See enabled bots
sciml-ctl tasks only ci_health_check  # Focus on CI health
sciml-agents list                     # See running agents
```

## The Bug Fix Flow

When CI Health Check detects a failure, it follows a resolution hierarchy:

**1. Try to fix it directly:**
- Cap a dependency version in `[compat]`?
- Fix the code?
- Make behavior conditional on Julia version?
- Add a missing explicit import?

**2. If it can't fix it, keep CI green anyway:**
- Mark the failing test as `@test_broken` or skip conditionally
- Open a `bug`-labeled issue with full diagnostics
- PR merges, CI stays green

**3. Issue Solver picks it up:**
- Issue Solver prioritizes `bug`-labeled issues
- It attempts deeper fixes the CI Health Check couldn't do
- If solved, removes the `@test_broken` marker and closes the issue

This creates a cycle: **CI stays green, but regressions are tracked as issues and continuously worked on.** Contributors never see mysterious red badges from problems that predate their PR. Maintainers see a clear issue queue of known problems being actively resolved.
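The known-failure bookkeeping this cycle relies on can be sketched language-neutrally (Python below; in Julia it is the `@test_broken` macro from the Test stdlib, and the function and test names here are illustrative, not the actual bot implementation). The key semantics: a test marked broken must keep failing for the suite to stay green, and an unexpected pass is surfaced so the marker can be removed and the tracking issue closed.

```python
def run_with_known_failures(tests, known_broken):
    """Run each test; tests in known_broken are expected to fail.
    A broken test that fails keeps the suite green ("expected-failure");
    one that suddenly passes is flagged ("unexpected-pass") so the
    marker can be dropped and the upstream issue closed."""
    results = {}
    for name, fn in tests.items():
        try:
            fn()
            passed = True
        except AssertionError:
            passed = False
        if name in known_broken:
            results[name] = "expected-failure" if not passed else "unexpected-pass"
        else:
            results[name] = "pass" if passed else "fail"
    return results

def healthy():
    assert 1 + 1 == 2

def upstream_regression():
    # Stands in for, e.g., a known AD failure tracked in a bug-labeled issue.
    raise AssertionError("tracked upstream")

report = run_with_known_failures(
    {"core": healthy, "zygote": upstream_regression},
    known_broken={"zygote"},
)
```

The suite is "green" as long as no entry reads plain `"fail"`, which is exactly the merge criterion described above.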
## For Contributors

Submit your PR. If something's wrong, a bot explains what and suggests a fix. Green CI? Merge. You never see failures that aren't yours.

## The Result

Standards rise. Barriers fall. Contributors ship with confidence. Maintainers focus on code quality, not rule enforcement.

---

*The bots are running now across SciML repositories. Questions? [GitHub Discussions](https://github.com/orgs/SciML/discussions) or [Julia Discourse](https://discourse.julialang.org/).*

roadmap.md

Lines changed: 5 additions & 5 deletions

```diff
@@ -113,11 +113,11 @@ It is very rare that someone thinks their model is perfect. Thus a large portion
 of the focus of our organization is to help scientific modelers derive equations
 and fit models. This includes tools for:
 
-- [Maximum likelihood and Bayesian parameter estimation](https://diffeq.sciml.ai/dev/analysis/parameter_estimation/)
-- [Forward and adjoint local sensitivity analysis](https://diffeq.sciml.ai/dev/analysis/sensitivity/) for fast gradients
-- [Global sensitivity analysis](https://diffeq.sciml.ai/dev/analysis/global_sensitivity/)
+- [Maximum likelihood and Bayesian parameter estimation](https://docs.sciml.ai/Overview/stable/highlevels/inverse_problems/)
+- [Forward and adjoint local sensitivity analysis](https://docs.sciml.ai/SciMLSensitivity/stable/) for fast gradients
+- [Global sensitivity analysis](https://docs.sciml.ai/GlobalSensitivity/stable/)
 - [Building surrogates of models](https://surrogates.sciml.ai/latest/)
-- [Uncertainty quantification](https://diffeq.sciml.ai/dev/analysis/uncertainty_quantification/)
+- [Uncertainty quantification](https://docs.sciml.ai/Overview/stable/highlevels/uncertainty_quantification/)
 
 Some of our newer tooling like [DataDrivenDiffEq.jl](https://github.com/SciML/DataDrivenDiffEq.jl)
 can even take in timeseries data and generate LaTeX code for the best fitting model
@@ -379,7 +379,7 @@ are the following:
 and its Physics-Informed Neural Networks (PINN) functionality,
 [DataDrivenDiffEq.jl](https://github.com/SciML/DataDrivenDiffEq.jl), etc.
 Because it does not require differential equations, we plan to split out the
-documentation of [Global Sensitivity Analysis](https://diffeq.sciml.ai/latest/analysis/global_sensitivity/)
+documentation of [Global Sensitivity Analysis](https://docs.sciml.ai/GlobalSensitivity/stable/)
 to better facilitate its wider usage.
 - We plan to continue improving the [ModelingToolkit](https://github.com/SciML/ModelingToolkit.jl)
 ecosystem utilizing its symbolic nature for [generic specification of PDEs](https://github.com/SciML/DifferentialEquations.jl/issues/469).
```

small_grants.md

Lines changed: 2 additions & 0 deletions

```diff
@@ -166,6 +166,8 @@ will "go the extra mile" to teach the contributor how the package or mathematics
 
 ## Update LoopVectorization.jl to pass all tests on MacOS ARM Systems (\$200)
 
+**In Progress**: Claimed by Khushmagrawal for the time period of January 02, 2026 - February 02, 2026.
+
 [LoopVectorization.jl](https://github.com/JuliaSIMD/LoopVectorization.jl) is a
 central package for the performance of many Julia packages. Its internals make
 use of many low-level features and manual SIMD that can make it require significant
```
