Tuning of Hyperparameters
=========================

To tune pipeline hyperparameters you can use GOLEM. There are two ways:

1. Tuning all of the models' hyperparameters simultaneously. Implemented via the ``SimultaneousTuner``, ``OptunaTuner`` and ``IOptTuner`` classes.

2. Tuning the models' hyperparameters sequentially, node by node, optimizing the metric value for the whole pipeline, or tuning the hyperparameters of a single node only. Implemented via the ``SequentialTuner`` class.
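
The two strategies can be contrasted with a small, FEDOT-independent sketch. Everything below (the toy ``pipeline_metric``, the parameter names, the search space) is invented for illustration and is not the FEDOT API: simultaneous tuning searches the joint space of all nodes' parameters at once, while sequential tuning optimizes one node's parameters at a time with the others held fixed.

```python
import itertools

def pipeline_metric(params):
    # Toy metric (lower is better) with its optimum at
    # n_neighbors=5, max_depth=3; stands in for a pipeline evaluation.
    return (params["n_neighbors"] - 5) ** 2 + (params["max_depth"] - 3) ** 2

search_space = {
    "n_neighbors": [1, 3, 5, 7],  # parameter of a hypothetical knn node
    "max_depth": [2, 3, 4],       # parameter of a hypothetical tree node
}

def tune_simultaneously(space):
    # Simultaneous tuning: evaluate joint combinations of all parameters.
    best = None
    for combo in itertools.product(*space.values()):
        params = dict(zip(space.keys(), combo))
        if best is None or pipeline_metric(params) < pipeline_metric(best):
            best = params
    return best

def tune_sequentially(space):
    # Sequential tuning: optimize one parameter (node) at a time,
    # holding the other parameters fixed at their current values.
    params = {name: values[0] for name, values in space.items()}
    for name, values in space.items():
        params[name] = min(values, key=lambda v: pipeline_metric({**params, name: v}))
    return params

print(tune_simultaneously(search_space))  # {'n_neighbors': 5, 'max_depth': 3}
print(tune_sequentially(search_space))    # {'n_neighbors': 5, 'max_depth': 3}
```

On this separable toy metric both strategies reach the same optimum; on real pipelines with interacting nodes, the joint search can find combinations that node-by-node search misses, at the cost of a much larger search space.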

FEDOT uses the tuner implementations from GOLEM; see the `GOLEM documentation`_ for more information.

.. list-table:: Tuners comparison
   :widths: 10 30 30 30 30
   :header-rows: 1

   * -
     - ``SimultaneousTuner``
     - ``SequentialTuner``
     - ``IOptTuner``
     - ``OptunaTuner``
   * - Based on
     - Hyperopt
     - Hyperopt
     - iOpt
     - Optuna
   * - Type of tuning
     - Simultaneous
     - | Sequential or
       | for one node only
     - Simultaneous
     - Simultaneous
   * - | Optimized
       | parameters
     - | categorical
       | discrete
       | continuous
     - | categorical
       | discrete
       | continuous
     - | discrete
       | continuous
     - | categorical
       | discrete
       | continuous
   * - Algorithm type
     - stochastic
     - stochastic
     - deterministic
     - stochastic
   * - | Supported
       | constraints
     - | timeout
       | iterations
       | early_stopping_rounds
       | eval_time_constraint
     - | timeout
       | iterations
       | early_stopping_rounds
       | eval_time_constraint
     - | iterations
       | eval_time_constraint
     - | timeout
       | iterations
       | early_stopping_rounds
       | eval_time_constraint
   * - | Supports initial
       | point
     - Yes
     - No
     - No
     - Yes
   * - | Supports multi-
       | objective tuning
     - No
     - No
     - No
     - Yes

Hyperopt-based tuners usually take less time per iteration, but ``IOptTuner`` is able to obtain much more stable results.
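
The practical difference between a stochastic and a deterministic tuner can be sketched without any tuning library (the toy objective and both search routines below are illustrative, not the Hyperopt or iOpt APIs): a stochastic search returns results that vary with the random seed, while a deterministic scheme returns the same result on every run.

```python
import random

def objective(x):
    # Toy 1-D objective with its minimum at x = 0.3 (illustrative only).
    return (x - 0.3) ** 2

def random_search(seed, n_iter=20):
    # Stochastic search (Hyperopt-like in spirit): samples candidates at
    # random, so the returned "best" point depends on the seed.
    rng = random.Random(seed)
    candidates = [rng.uniform(0, 1) for _ in range(n_iter)]
    return min(candidates, key=objective)

def grid_search(n_iter=20):
    # Deterministic search (iOpt-like in spirit): a fixed evaluation
    # scheme, so repeated runs always return the same result.
    candidates = [i / (n_iter - 1) for i in range(n_iter)]
    return min(candidates, key=objective)

stochastic_results = {random_search(seed) for seed in range(5)}
deterministic_results = {grid_search() for _ in range(5)}

print(stochastic_results)     # typically several different "best" points
print(deterministic_results)  # a single repeated "best" point
```

This is the sense in which a deterministic tuner such as ``IOptTuner`` gives more stable results: rerunning it with the same budget reproduces the same tuned values, whereas a stochastic tuner's output fluctuates from run to run.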

Tuned pipeline structure:

.. code-block:: python

    Pipeline structure:
    {'depth': 2, 'length': 3, 'nodes': [knnreg, knnreg, rfr]}
    knnreg - {'n_neighbors': 51}
    knnreg - {'n_neighbors': 40}
    rfr - {'n_jobs': 1, 'max_features': 0.05324, 'min_samples_split': 12, 'min_samples_leaf': 11}

Example for ``OptunaTuner``:

.. code-block:: python

    from golem.core.tuning.optuna_tuner import OptunaTuner
    from fedot.core.data.data import InputData
    from fedot.core.pipelines.pipeline_builder import PipelineBuilder
    from fedot.core.pipelines.tuning.tuner_builder import TunerBuilder
    from fedot.core.repository.quality_metrics_repository import RegressionMetricsEnum
    from fedot.core.repository.tasks import TaskTypesEnum, Task

    task = Task(TaskTypesEnum.regression)

    tuner = OptunaTuner

    metric = RegressionMetricsEnum.MSE

    iterations = 100

    train_data = InputData.from_csv('train_data.csv', task='regression')

    pipeline = PipelineBuilder().add_node('knnreg', branch_idx=0).add_branch('rfr', branch_idx=1) \
        .join_branches('knnreg').build()

    pipeline_tuner = TunerBuilder(task) \
        .with_tuner(tuner) \
        .with_metric(metric) \
        .with_iterations(iterations) \
        .build(train_data)

    tuned_pipeline = pipeline_tuner.tune(pipeline)

    tuned_pipeline.print_structure()

Tuned pipeline structure:

.. code-block:: python

    Pipeline structure:
    {'depth': 2, 'length': 3, 'nodes': [knnreg, knnreg, rfr]}
    knnreg - {'n_neighbors': 51}
    knnreg - {'n_neighbors': 40}
    rfr - {'n_jobs': 1, 'max_features': 0.05, 'min_samples_split': 12, 'min_samples_leaf': 11}

Multi-objective tuning
^^^^^^^^^^^^^^^^^^^^^^

Multi-objective tuning is available only for ``OptunaTuner``. Pass a list of metrics to ``.with_metric()``
and obtain a list of tuned pipelines representing a Pareto front after tuning.

.. code-block:: python

    from typing import Iterable

    from golem.core.tuning.optuna_tuner import OptunaTuner
    from fedot.core.data.data import InputData
    from fedot.core.pipelines.pipeline import Pipeline
    from fedot.core.pipelines.pipeline_builder import PipelineBuilder
    from fedot.core.pipelines.tuning.tuner_builder import TunerBuilder
    from fedot.core.repository.quality_metrics_repository import RegressionMetricsEnum
    from fedot.core.repository.tasks import TaskTypesEnum, Task

    task = Task(TaskTypesEnum.regression)

    tuner = OptunaTuner

    metric = [RegressionMetricsEnum.MSE, RegressionMetricsEnum.MAE]

    iterations = 100

    train_data = InputData.from_csv('train_data.csv', task='regression')

    pipeline = PipelineBuilder().add_node('knnreg', branch_idx=0).add_branch('rfr', branch_idx=1) \
        .join_branches('knnreg').build()

    pipeline_tuner = TunerBuilder(task) \
        .with_tuner(tuner) \
        .with_metric(metric) \
        .with_iterations(iterations) \
        .build(train_data)

    pareto_front: Iterable[Pipeline] = pipeline_tuner.tune(pipeline)
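
What the returned Pareto front represents can be understood with a small standalone sketch (the candidate names and metric values below are made up): with two minimized metrics such as MSE and MAE, a candidate stays on the front only if no other candidate is at least as good on both metrics and strictly better on one.

```python
# Toy (mse, mae) scores for hypothetical tuned pipeline candidates.
candidates = {
    "pipeline_a": (0.10, 0.25),
    "pipeline_b": (0.15, 0.20),
    "pipeline_c": (0.12, 0.30),  # dominated by pipeline_a (worse on both)
    "pipeline_d": (0.30, 0.15),
}

def dominates(p, q):
    # p dominates q if p is no worse on every metric
    # and strictly better on at least one.
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(scores):
    # Keep every candidate that no other candidate dominates.
    return sorted(
        name for name, s in scores.items()
        if not any(dominates(other, s) for other in scores.values() if other != s)
    )

print(pareto_front(candidates))  # ['pipeline_a', 'pipeline_b', 'pipeline_d']
```

Each pipeline on the front embodies a different trade-off between the metrics; picking one of them (e.g. the lowest-MSE member) is left to the user.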

Sequential tuning
-----------------