v0.10.0 May 29, 2020

Enhancements

  • Added baseline models for classification and regression, and added functionality to calculate baseline models before searching in AutoML #746
  • Ported over the highly-null guardrail as a data check and defined DefaultDataChecks and DisableDataChecks classes #745
  • Updated Tuner classes to work directly with pipeline parameters dicts instead of flat parameter lists #779
  • Added Elastic Net as a pipeline option #812
  • Added new ExtraTrees pipeline option #790
  • Added precision-recall curve metrics and plot for binary classification problems in evalml.pipeline.graph_utils (see the sketch after this list) #794
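
For context on the new precision-recall utilities, here is a minimal sketch of the underlying metric using scikit-learn's equivalent functions; this is illustration only, not evalml's confirmed API:

```python
# Sketch of what a precision-recall curve computes for a binary classifier,
# using scikit-learn's equivalents (not evalml's graph_utils functions).
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

y_true = np.array([0, 0, 1, 1])             # binary labels
y_scores = np.array([0.1, 0.4, 0.35, 0.8])  # predicted probabilities for class 1

precision, recall, thresholds = precision_recall_curve(y_true, y_scores)
print("Area under the precision-recall curve:", auc(recall, precision))
```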

Fixes

  • Updated pipeline score to return a NaN score for any objective which throws an exception during scoring (pattern sketched after this list) #787
  • Fixed a bug introduced in #787 where binary classification metrics requiring predicted probabilities would error during scoring #798
  • CatBoost and XGBoost classifiers and regressors can no longer have a learning rate of 0 #795
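
The #787 change follows a common degrade-to-NaN pattern; a minimal sketch under an assumed objective.score interface, not evalml's actual implementation:

```python
import numpy as np

def safe_score(objective, y_true, y_predicted):
    """Return the objective's score, or NaN if scoring raises.

    Sketch of the pattern described in #787; `objective` is assumed to be
    any object with a score(y_true, y_predicted) method (hypothetical).
    """
    try:
        return objective.score(y_true, y_predicted)
    except Exception:
        return np.nan
```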

Changes

  • Cleaned up pipeline score code and cleaned up codecov #711
  • Removed pass for abstract methods for codecov #730
  • Added __str__ for the AutoSearch object #675
  • Added util methods to graph ROC and confusion matrix #720
  • Refactored AutoBase to AutoSearchBase #758
  • Updated AutoBase with a data_checks parameter, removed the previous detect_label_leakage parameter, and added functionality to run data checks before search in AutoML #765
  • Updated our logger to use Python's logging utils #763
  • Refactored most of the AutoSearchBase._do_iteration implementation into AutoSearchBase._evaluate #762
  • Ported over all guardrails to use the new DataCheck API #789
  • Expanded import_or_raise to catch all exceptions #759
  • Added RMSE, MSLE, and RMSLE as standard metrics (definitions sketched after this list) #788
  • Disallowed Recall as an objective for AutoML #784
  • Removed feature selection from pipelines #819
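
As background for #788, the standard definitions of these metrics; a minimal NumPy sketch, not evalml's implementation:

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root Mean Squared Error
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def msle(y_true, y_pred):
    # Mean Squared Log Error; log1p guards against log(0) and
    # assumes non-negative targets
    return np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)

def rmsle(y_true, y_pred):
    # Root Mean Squared Log Error
    return np.sqrt(msle(y_true, y_pred))
```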

Documentation Changes

  • Added instructions to freeze master on release.md #726
  • Updated release instructions with more details #727 #733
  • Added objective base classes to API reference #736
  • Fixed components API to match other modules #747

Testing Changes

  • Deleted codecov yml and used codecov.io's default #732
  • Added unit tests for fraud cost, lead scoring, and standard metric objectives #741
  • Updated codecov client #782
  • Updated the AutoBase __str__ test to include the no-parameters case #783
  • Added unit tests for the ExtraTrees pipeline #790
  • Made the build fail if codecov fails to upload #810
  • Updated the Python version of the dependency action #816
  • Updated the dependency update bot to use a suffix when creating branches #817

Breaking Changes

  • The detect_label_leakage parameter for AutoML classes has been removed and replaced by a data_checks parameter #765
  • Moved ROC and confusion matrix methods from evalml.pipeline.plot_utils to evalml.pipeline.graph_utils #720
  • Tuner classes require a pipeline hyperparameter range dict as an init arg instead of a space definition (see the sketch after this list) #779
  • Tuner.propose and Tuner.add work directly with pipeline parameters dicts instead of flat parameter lists #779
  • PipelineBase.hyperparameters and custom_hyperparameters use pipeline parameters dict format instead of being represented as a flat list #779
  • All guardrail functions previously under evalml.guardrails.utils will be removed and replaced by data checks #789
  • Recall is now disallowed as an objective for AutoML #784
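
To make the #779 format change concrete, a hedged sketch of the new shapes; the component name, parameter names, and dict layouts are assumptions based on the descriptions above, not confirmed against evalml's actual API:

```python
# Hypothetical example of the pipeline hyperparameter range dict a Tuner
# now takes at init, keyed by component name, instead of a flat space
# definition (shapes assumed, not confirmed):
hyperparameter_ranges = {
    "Random Forest Classifier": {
        "n_estimators": (10, 500),
        "max_depth": (1, 10),
    },
}

# Tuner.propose now returns (and Tuner.add accepts) nested pipeline
# parameters dicts rather than flat parameter lists, e.g.:
proposed_parameters = {
    "Random Forest Classifier": {
        "n_estimators": 120,
        "max_depth": 4,
    },
}
```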