- Computations are now almost entirely done through `KernelAbstractions.jl`. The objective is to eventually have full support for AMD / ROCm devices in addition to the current NVIDIA / CUDA support.
- Important performance increase, notably for larger max depth. Training time now increases close to linearly with depth.

### Breaking change: improved reproducibility

- Training returns exactly the same fitted model for a given learner (ex: `EvoTreeRegressor`).
- Reproducibility holds on both `cpu` and `gpu`. However, results may differ between `cpu` and `gpu`; i.e., reproducibility is guaranteed only within the same device type.
- The learner / model constructor (ex: `EvoTreeRegressor`) now has a `seed::Int` argument to set the random seed; the legacy `rng` kwarg is now ignored (see the sketch after this list).
- The internal random number generator is now `Xoshiro` (previously a `MersenneTwister` seeded via `rng::Int`).
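
For illustration, a minimal sketch of the new seeding behavior. The toy data and the `fit_evotree` call are assumptions for the example rather than part of this release note; check the current EvoTrees.jl docs for the exact fitting interface:

```julia
using EvoTrees

# Hypothetical toy data, just for the example.
x_train = randn(1_000, 5)
y_train = randn(1_000)

# `seed` is now set on the learner itself; a legacy `rng` kwarg would be ignored.
config = EvoTreeRegressor(max_depth=5, nrounds=100, seed=123)

# Two fits with the same learner and data yield identical models,
# as long as both run on the same device type (cpu/cpu or gpu/gpu).
m1 = fit_evotree(config; x_train, y_train)
m2 = fit_evotree(config; x_train, y_train)
@assert m1(x_train) == m2(x_train)
```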

### Added node weight information in fitted trees

- The train weight reaching each split/leaf node is now stored in the fitted trees, accessible via `model.trees[i].w` for the i-th tree in the fitted model. This is notably intended to support SHAP value computations (see the snippet below).
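
A short sketch of reading these weights, reusing the hypothetical `config` and training data from the example above. The `model.trees[i].w` access path is the one described in this entry; everything else is an assumption:

```julia
# Fit as above, then inspect the per-node train weights of a single tree.
model = fit_evotree(config; x_train, y_train)

# Weight that reached each split/leaf node of the first tree;
# this is the information SHAP-style attributions can build on.
w1 = model.trees[1].w
@show length(w1) sum(w1)
```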