
Releases: Evovest/EvoTrees.jl

EvoTrees v0.18.2

09 Dec 16:39 (e65be37)

Diff since v0.18.1


EvoTrees v0.18.1

18 Nov 14:23 (c227b32)

Diff since v0.18.0


Closed issues:

  • A reproducibility issue (#296)
  • Document the alpha parameter. (#297)
  • Is it possible for display of EvoTree models to play nicely with MLJ? (#312)

EvoTrees v0.18.0

27 Oct 01:20

Diff since v0.17.4

Refactor of GPU training backend

  • Computations are now almost entirely done through KernelAbstractions.jl (see the sketch after this list). The objective is to eventually reach full support for AMD / ROCm devices in addition to the currently supported NVIDIA / CUDA ones.
  • Significant performance improvement, notably for larger max_depth: training time now scales close to linearly with depth.
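
The pattern is illustrated by the minimal sketch below (not EvoTrees' actual kernel; names are illustrative): a single KernelAbstractions.jl kernel definition compiles and runs on CPU, CUDA, and ROCm arrays alike.

using KernelAbstractions

# Device-agnostic kernel: one definition serves CPU, CUDA, and ROCm backends.
@kernel function axpy_kernel!(y, a, @Const(x))
    i = @index(Global)
    @inbounds y[i] = a * x[i] + y[i]
end

# Launch on whatever backend owns `y` (Array, CuArray, ROCArray, ...).
function axpy!(y, a, x)
    backend = get_backend(y)
    kernel! = axpy_kernel!(backend)
    kernel!(y, a, x; ndrange = length(y))
    KernelAbstractions.synchronize(backend)
    return y
end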

Breaking change: improved reproducibility

  • Training now returns exactly the same fitted model for a given learner configuration (e.g. EvoTreeRegressor) and data.
  • Reproducibility is respected on both CPU and GPU. However, results may differ between CPU and GPU: reproducibility is guaranteed only within the same device type.
  • The learner / model constructor (e.g. EvoTreeRegressor) now has a seed::Int argument to set the random seed (see the sketch below). The legacy rng kwarg is now ignored.
  • The internal random generator is now Xoshiro (previously MersenneTwister seeded with rng::Int).
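
A minimal sketch of the new seeding behavior (dtrain here stands for a hypothetical Tables-compatible dataset with a y column):

using EvoTrees

config = EvoTreeRegressor(; seed=123)

# Two fits with the same seed on the same device type return identical models.
m1 = fit(config, dtrain; target_name="y")
m2 = fit(config, dtrain; target_name="y")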

Added node weight information in fitted trees

  • The training weight reaching each of the split/leaf nodes is now stored in the fitted trees. It is accessible via model.trees[i].w for the i-th tree of the fitted model, as shown below. This is notably intended to support SHAP value computations.
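
For example (config and dtrain as in the hypothetical sketch above):

m = fit(config, dtrain; target_name="y")
# training weight reaching each node of the first tree
w = m.trees[1].w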


Closed issues:

  • Inefficient speed scaling for larger depth (#280)
  • Allocs (#298)

EvoTrees v0.17.4

01 Aug 03:42 (d81c603)

Diff since v0.17.2


EvoTrees v0.17.3

16 May 19:51

Diff since v0.17.2

  • Introduces support for bagging through the bagging_size kwarg in the model constructor. Random forest behavior can be obtained by combining it with a single boosting iteration (nrounds=1) and a learning rate (eta) of 1.0:
config = EvoTreeRegressor(;
    nrounds=1,
    bagging_size=16,
    eta=1.0,
    max_depth=9,
    rowsample=0.5,
)
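
With nrounds=1 there is no boosting step: the bagging_size trees are each fit on row-subsampled data (rowsample=0.5) and their predictions combined, and eta=1.0 keeps each tree's full contribution, which mirrors the classic random forest recipe.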

New experimental credibility-based losses: cred_var and cred_std.
Support for non-gradient-based tree building for the mean absolute error loss: mae.
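
Assuming the new losses are selected like existing ones via the loss kwarg (the :cred_var symbol below is inferred from the loss name above; only loss=:mae is confirmed elsewhere in these notes):

config = EvoTreeRegressor(; loss=:cred_var)  # experimental credibility-based loss
config = EvoTreeRegressor(; loss=:mae)       # non-gradient-based MAE tree building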


EvoTrees v0.17.2

22 Mar 19:58

Diff since v0.17.1


EvoTrees v0.17.1

21 Mar 16:56

Diff since v0.17.0


Closed issues:

  • not using GPU via MLJ interface (#273)

EvoTrees v0.17.0

26 Feb 17:44 (f4fb946)

Diff since v0.16.9

Breaking changes:

Model constructors (EvoTreeRegressor, EvoTreeClassifier...) now include the following arguments:

  • metric: the evaluation metric to be tracked
  • early_stopping_rounds
  • device: either :cpu or :gpu

Example:

config = EvoTreeRegressor(; loss=:mse, metric=:mae, early_stopping_rounds=10, device=:gpu)

Deprecation of fit_evotree in favor of MLJModelInterface's fit:

Note that fit_evotree results in a call to fit.
The following legacy kwargs of fit_evotree will be ignored:

  • metric
  • return_logger
  • early_stopping_rounds
  • device
m = fit_evotree(config, dtrain; target_name="y", feature_names=["x1", "x2"]) # old
m = fit(config, dtrain; target_name="y", feature_names=["x1", "x2"])         # new

Renaming of the variable-identification kwargs in the Tables / DataFrames based internal API (previously kwargs of fit_evotree):

  • fnames => feature_names
  • w_name => weight_names
m = fit_evotree(config, dtrain; target_name="y", feature_names=["x1", "x2"])

The logger, which tracks metrics on the eval data through the iterations, is now automatically included in the fitted model's info field:

m = fit(config, dtrain; target_name="y", feature_names=["x1", "x2"], deval)
logger = m.info[:logger]

Changes related to losses:

  • L1 / l1 loss is no longer supported. Use loss=:mae in EvoTreeRegressor instead.

Constructors are no longer parametric: EvoTreeRegressor{L<:ModelType} => EvoTreeRegressor

This shouldn't affect the user experience.

Fixes and improvements to GPU:

  • Models trained through MLJ now support the :gpu device (passed through the constructor, e.g. EvoTreeRegressor, as shown above; see also the MLJ sketch after this list).
  • Inference is now properly dispatched to :gpu when using: m(dtrain; device=:gpu)
  • Both :mae and :quantile losses are now supported on GPU (device=:gpu)
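
A sketch of GPU training through MLJ, assuming X and y are an already-prepared feature table and target vector (placeholder names):

using MLJ, EvoTrees

model = EvoTreeRegressor(; device=:gpu)  # device set on the constructor, as above
mach = machine(model, X, y)
fit!(mach)
preds = predict(mach, X)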


Closed issues:

  • Scalar indexing when using GPU (#277)

EvoTrees v0.16.9

23 Jan 05:25 (6a0284e)

Diff since v0.16.8


Closed issues:

  • isordered not defined in CUDA extension (#274)

EvoTrees v0.16.8

05 Dec 19:54

Diff since v0.16.7
