Releases: Evovest/EvoTrees.jl
v0.18.2
EvoTrees v0.18.2
Merged pull requests:
- Fix a warning when constructing UnivariateFinite with CategoricalArrays v1 (#314) (@sostock)
- Jdb/cat compat (#315) (@jeremiedb)
v0.18.1
EvoTrees v0.18.1
Merged pull requests:
- Fix typo in NEWS.md (#310) (@devmotion)
- Update docstrings of learners (#311) (@devmotion)
- Jdb/show (#313) (@jeremiedb)
v0.18.0
EvoTrees v0.18.0
Refactor of GPU training backend
- Computations are now almost entirely done through KernelAbstractions.jl. The objective is to eventually reach full support for AMD / ROCm devices in addition to the current NVIDIA / CUDA ones.
- Important performance increase, notably for larger `max_depth`: training time now scales close to linearly with depth (see the sketch below).
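A minimal sketch of opting into the refactored GPU backend, assuming a CUDA-capable device, a toy `dtrain` with columns `x1`, `x2`, `y` (illustrative assumptions), and `fit` in scope as introduced in v0.17.0:

```julia
using EvoTrees, DataFrames
using CUDA  # assumed requirement: loads the CUDA extension for :gpu support

# toy data; columns x1, x2 and target y are illustrative assumptions
dtrain = DataFrame(x1=randn(1_000), x2=randn(1_000), y=randn(1_000))

# device=:gpu routes training through the KernelAbstractions.jl-based backend
config = EvoTreeRegressor(; loss=:mse, max_depth=8, device=:gpu)
m = fit(config, dtrain; target_name="y", feature_names=["x1", "x2"])
p = m(dtrain)
```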
Breaking change: improved reproducibility
- Training returns exactly the same fitted model for a given learner (ex: `EvoTreeRegressor`).
- Reproducibility is respected for both `cpu` and `gpu`. However, results may differ between `cpu` and `gpu`; i.e., reproducibility is guaranteed only within the same device type.
- The learner / model constructor (ex: `EvoTreeRegressor`) now has a `seed::Int` argument to set the random seed, as sketched below. The legacy `rng` kwarg will now be ignored.
- The internal random generator is now `Xoshiro` (was previously `MersenneTwister` with `rng::Int`).
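A minimal sketch of the new `seed` behavior (the toy data and hyper-parameters are illustrative assumptions):

```julia
using EvoTrees, DataFrames

dtrain = DataFrame(x1=randn(500), x2=randn(500), y=randn(500))

# same `seed` in the constructor => identical fitted models on a given device type
config = EvoTreeRegressor(; nrounds=50, seed=123)
m1 = fit(config, dtrain; target_name="y", feature_names=["x1", "x2"])
m2 = fit(config, dtrain; target_name="y", feature_names=["x1", "x2"])
@assert m1(dtrain) == m2(dtrain)
```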
Added node weight information in fitted trees
- The train weight reaching each of the split/leaf nodes is now stored in the fitted trees. It is accessible via `model.trees[i].w` for the i-th tree in the fitted model (see the sketch below). This is notably intended to support SHAP value computations.
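A minimal sketch of reading those node weights back (toy data is an illustrative assumption; the layout of split vs leaf nodes within `w` is an internal detail):

```julia
using EvoTrees, DataFrames

dtrain = DataFrame(x1=randn(500), x2=randn(500), y=randn(500))
m = fit(EvoTreeRegressor(; nrounds=10), dtrain;
        target_name="y", feature_names=["x1", "x2"])

# per-node train weights stored in each fitted tree
for (i, tree) in enumerate(m.trees)
    @show i sum(tree.w)
end
```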
Merged pull requests:
- Improving training performance on GPU backend (#299) (@AdityaPandeyCN)
- Grant gpu v4 (#301) (@jeremiedb)
- Jdb/alpha (#302) (@jeremiedb)
- Jdb/shap w (#303) (@jeremiedb)
- Fix Mae Handling (#304) (@AdityaPandeyCN)
- Remove unused allocation (#305) (@AdityaPandeyCN)
- Grant gpu v4 b (#306) (@jeremiedb)
v0.17.4
EvoTrees v0.17.4
Merged pull requests:
- Jdb/bagging (#286) (@jeremiedb)
- Juliacon (#289) (@jeremiedb)
- CompatHelper: bump compat for CategoricalArrays to 1, (keep existing compat) (#292) (@github-actions[bot])
- bump version (#293) (@jeremiedb)
v0.17.3
EvoTrees v0.17.3
- Introduces support for bagging through the `bagging_size` kwarg in the model constructor. A random forest behavior can be obtained by combining it with a single iteration (`nrounds=1`) and a learning rate (`eta`) of 1.0:
```julia
config = EvoTreeRegressor(;
    nrounds=1,
    bagging_size=16,
    eta=1.0,
    max_depth=9,
    rowsample=0.5,
)
```
- New experimental credibility-based losses: `cred_var` and `cred_std`.
- Support for non-gradient-based tree building for the mean absolute error loss: `mae` (a brief sketch follows below).
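For illustration, a minimal sketch of selecting these losses, assuming they follow the same `loss=:symbol` convention as the existing losses:

```julia
using EvoTrees

# assumption: new losses are selected via the usual `loss` kwarg
config_cred = EvoTreeRegressor(; loss=:cred_var)  # experimental credibility-based loss
config_mae  = EvoTreeRegressor(; loss=:mae)       # non-gradient-based MAE tree building
```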
Merged pull requests:
- Jdb/bagging (#286) (@jeremiedb)
v0.17.2
v0.17.1
EvoTrees v0.17.1
Merged pull requests:
- Jdb/depth (#281) (@jeremiedb)
- Jdb/depth-3D (#282) (@jeremiedb)
- Jdb/float32 (#284) (@jeremiedb)
Closed issues:
- not using GPU via MLJ interface (#273)
v0.17.0
EvoTrees v0.17.0
Breaking changes:
Model constructors (EvoTreeRegressor, EvoTreeClassifier...) now include the following arguments:
- `metric`: the evaluation metric to be tracked
- `early_stopping_rounds`
- `device`: either `:cpu` or `:gpu`
Example:
```julia
config = EvoTreeRegressor(; loss=:mse, metric=:mae, early_stopping_rounds=10, device=:gpu)
```
Deprecation of `fit_evotree` in favor of importing MLJModelInterface's `fit`:
Note that `fit_evotree` results in a call to `fit`.
The following legacy kwargs of `fit_evotree` will be ignored:
- `metric`
- `return_logger`
- `early_stopping_rounds`
- `device`
```julia
m = fit_evotree(config, dtrain; target_name="y", feature_names=["x1", "x2"]) # old
m = fit(config, dtrain; target_name="y", feature_names=["x1", "x2"])         # new
```
Changes in the naming of kwargs in the Tables / DataFrames based internal API, which were previously kwargs of `fit_evotree`:
- `fnames` => `feature_names`
- `w_name` => `weight_name`

```julia
m = fit_evotree(config, dtrain; target_name="y", feature_names=["x1", "x2"])
```
The logger, which tracks metrics on eval data through the iterations, is now automatically included in a fitted model's `info` field:
```julia
m = fit(config, dtrain; target_name="y", feature_names=["x1", "x2"], deval)
logger = m.info[:logger]
```
Changes related to losses:
- The `L1` / `l1` loss is no longer supported. Use `loss=:mae` in `EvoTreeRegressor` instead.
- Constructors are no longer parametric: `EvoTreeRegressor{L<:ModelType}` => `EvoTreeRegressor`. This one shouldn't affect the user experience.
Fixes and improvements to GPU:
- Models trained through MLJ now support the `:gpu` argument (passed through the constructor like `EvoTreeRegressor`, as shown above).
- Inference is now properly dispatched to `:gpu` when using: `m(dtrain; device=:gpu)`.
- Both `:mae` and `:quantile` losses are now supported on GPU (`device=:gpu`); see the end-to-end sketch below.
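A minimal end-to-end sketch combining these fixes (toy data and column names are illustrative assumptions; a CUDA-capable device is required for the `:gpu` paths):

```julia
using EvoTrees, DataFrames
using CUDA  # assumed requirement: loads the CUDA extension for :gpu support

dtrain = DataFrame(x1=randn(1_000), x2=randn(1_000), y=randn(1_000))

config = EvoTreeRegressor(; loss=:mae, device=:gpu)  # :mae now supported on GPU
m = fit(config, dtrain; target_name="y", feature_names=["x1", "x2"])
p = m(dtrain; device=:gpu)  # inference explicitly dispatched to :gpu
```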
Merged pull requests:
- Jdb/api (#276) (@jeremiedb)
- Jdb/api (#279) (@jeremiedb)
Closed issues:
- Scalar indexing when using GPU (#277)
v0.16.9
EvoTrees v0.16.9
Merged pull requests:
- fix #274 (#275) (@jeremiedb)
Closed issues:
- `isordered` not defined in CUDA extension (#274)
v0.16.8
EvoTrees v0.16.8
Merged pull requests:
- up (#271) (@jeremiedb)
- adhere to MLJ interface (#272) (@OkonSamuel)