v0.10.0 May 29, 2020
Enhancements
- Added baseline models for classification and regression, and added functionality to calculate baseline models before searching in AutoML #746
- Ported over the highly-null guardrail as a data check and defined `DefaultDataChecks` and `DisableDataChecks` classes #745
- Updated `Tuner` classes to work directly with pipeline parameters dicts instead of flat parameter lists #779
- Added Elastic Net as a pipeline option #812
- Added new pipeline option `ExtraTrees` #790
- Added precision-recall curve metrics and plot for binary classification problems in `evalml.pipeline.graph_utils` #794
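The precision-recall metrics added in #794 can be illustrated with a minimal sketch. This is not evalml's implementation; the function name and signature are hypothetical, and it simply shows how precision-recall pairs are derived from predicted probabilities for a binary classifier:

```python
def precision_recall_points(y_true, y_scores):
    """Return (precision, recall) pairs, one per sample, computed by
    sweeping a decision threshold down through the predicted scores.

    Hypothetical helper for illustration only; evalml's actual API
    in evalml.pipeline.graph_utils may differ.
    """
    total_pos = sum(y_true)
    # Process samples from highest predicted score to lowest,
    # i.e. progressively lowering the classification threshold.
    order = sorted(range(len(y_scores)), key=lambda i: -y_scores[i])
    points = []
    tp = fp = 0
    for i in order:
        if y_true[i]:
            tp += 1
        else:
            fp += 1
        points.append((tp / (tp + fp), tp / total_pos))
    return points
```

Plotting recall on the x-axis against precision on the y-axis over these points gives the standard precision-recall curve.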
Fixes
- Updated pipeline `score` to return `nan` score for any objective which throws an exception during scoring #787
- Fixed bug introduced in #787 where binary classification metrics requiring predicted probabilities error in scoring #798
- CatBoost and XGBoost classifiers and regressors can no longer have a learning rate of 0 #795
Changes
- Cleaned up pipeline `score` code, and cleaned up codecov #711
- Removed `pass` for abstract methods for codecov #730
- Added `__str__` for AutoSearch object #675
- Added util methods to graph ROC and confusion matrix #720
- Refactored `AutoBase` to `AutoSearchBase` #758
- Updated AutoBase with a `data_checks` parameter, removed the previous `detect_label_leakage` parameter, and added functionality to run data checks before search in AutoML #765
- Updated our logger to use Python's logging utils #763
- Refactored most of the `AutoSearchBase._do_iteration` implementation into `AutoSearchBase._evaluate` #762
- Ported over all guardrails to use the new DataCheck API #789
- Expanded `import_or_raise` to catch all exceptions #759
- Added RMSE, MSLE, and RMSLE as standard metrics #788
- Disallowed `Recall` as an objective for AutoML #784
- Removed feature selection from pipelines #819
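The three new standard metrics from #788 follow their textbook definitions. The sketch below is illustrative only: evalml wraps its metrics in objective classes, so these plain functions and their names are assumptions, not the library's API.

```python
import math

def rmse(y_true, y_pred):
    # Root mean squared error.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def msle(y_true, y_pred):
    # Mean squared log error; assumes non-negative targets and predictions,
    # using log1p so zero values are valid.
    return sum((math.log1p(t) - math.log1p(p)) ** 2
               for t, p in zip(y_true, y_pred)) / len(y_true)

def rmsle(y_true, y_pred):
    # Root mean squared log error: square root of MSLE.
    return math.sqrt(msle(y_true, y_pred))
```

Because MSLE and RMSLE operate on `log1p` of the values, they penalize relative error rather than absolute error, which is why they require non-negative inputs.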
Documentation Changes
- Added instructions to freeze `master` on `release.md` #726
- Updated release instructions with more details #727 #733
- Add objective base classes to API reference #736
- Fix components API to match other modules #747
Testing Changes
- Delete codecov yml, use codecov.io's default #732
- Added unit tests for fraud cost, lead scoring, and standard metric objectives #741
- Update codecov client #782
- Updated AutoBase `__str__` test to include the no-parameters case #783
- Added unit tests for the `ExtraTrees` pipeline #790
- If codecov fails to upload, fail the build #810
- Updated Python version of dependency action #816
- Update the dependency update bot to use a suffix when creating branches #817
Breaking Changes
- The `detect_label_leakage` parameter for AutoML classes has been removed and replaced by a `data_checks` parameter #765
- Moved ROC and confusion matrix methods from `evalml.pipeline.plot_utils` to `evalml.pipeline.graph_utils` #720
- `Tuner` classes require a pipeline hyperparameter range dict as an init arg instead of a space definition #779
- `Tuner.propose` and `Tuner.add` work directly with pipeline parameters dicts instead of flat parameter lists #779
- `PipelineBase.hyperparameters` and `custom_hyperparameters` use the pipeline parameters dict format instead of being represented as a flat list #779
- All guardrail functions previously under `evalml.guardrails.utils` will be removed and replaced by data checks #789
- `Recall` disallowed as an objective for AutoML #784
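The shape of the #779 change can be sketched as follows. The exact key names evalml uses are assumptions here; the point is the move from a flat list of parameter values to a dict nested by component name:

```python
def flat_to_pipeline_params(flat_values):
    """Group a flat list of (component, parameter, value) tuples into
    a nested {component: {parameter: value}} dict.

    Hypothetical helper for illustration; evalml's Tuner classes now
    consume dicts of this nested shape rather than flat lists.
    """
    params = {}
    for component, parameter, value in flat_values:
        params.setdefault(component, {})[parameter] = value
    return params

# Example of the nested format (component and parameter names are
# made up for illustration):
flat = [
    ("Simple Imputer", "impute_strategy", "mean"),
    ("Random Forest Classifier", "n_estimators", 100),
    ("Random Forest Classifier", "max_depth", 5),
]
nested = flat_to_pipeline_params(flat)
```

Keying parameters by component removes the need for callers to track the positional ordering of a flat list, which is why `Tuner.propose`, `Tuner.add`, and `PipelineBase.hyperparameters` all moved to the dict format together.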