
Commit 0b3ab12

Pedro dev (#2)
1 parent 5e0f6b4 commit 0b3ab12

19 files changed: +567, -216 lines

.gitignore

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+*.pyc
+.pytest_cache/
+TPOT2.egg-info
+*.tar.gz
+*.pkl
+*.json
+joblib/
+cache_folder/
+dask-worker-space/

README.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # TPOT2 ALPHA
 
-TPOT2 is a rewrite of TPOT with some additional functionality.
+TPOT2 is a rewrite of TPOT with some additional functionality. Notably, we added support for graph-based pipelines and additional parameters to better specify the desired search space.
 
 TPOT2 is currently in Alpha. This means that there will likely be some backwards incompatible changes to the API as we develop. Some implemented features may be buggy. There is a list of known issues written at the bottom of this README. Some features have placeholder names or are listed as "Experimental" in the doc string. These are features that may not be fully implemented and may or may not work with all other features.
 
 If you are interested in using the current stable release of TPOT, you can do that here: [https://github.com/EpistasisLab/tpot/](https://github.com/EpistasisLab/tpot/).

Tutorial/1_Estimators_Overview.ipynb

Lines changed: 5 additions & 5 deletions
@@ -58,7 +58,7 @@
 " early_stop=5, #how many generations with no improvement to stop after\n",
 " \n",
 " #List of other objective functions. All objective functions take in an untrained GraphPipeline and return a score or a list of scores\n",
-" other_objective_functions= [ tpot2.estimator_objective_functions.average_path_length_objective, tpot2.estimator_objective_functions.complexity_objective],\n",
+" other_objective_functions= [ tpot2.estimator_objective_functions.average_path_length_objective, tpot2.estimator_objective_functions.number_of_nodes_objective],\n",
 " \n",
 " #List of weights for the other objective functions. Must be the same length as other_objective_functions. By default, bigger is better is set to True. \n",
 " other_objective_functions_weights=[-1, -1],\n",
@@ -120,7 +120,7 @@
 " <th></th>\n",
 " <th>roc_auc_score</th>\n",
 " <th>average_path_length_objective</th>\n",
-" <th>complexity_objective</th>\n",
+" <th>number_of_nodes_objective</th>\n",
 " <th>Parents</th>\n",
 " <th>Variation_Function</th>\n",
 " <th>Individual</th>\n",
@@ -268,7 +268,7 @@
 "</div>"
 ],
 "text/plain": [
-" roc_auc_score average_path_length_objective complexity_objective Parents \\\n",
+" roc_auc_score average_path_length_objective number_of_nodes_objective Parents \\\n",
 "0 0.997698 1.0 1.0 NaN \n",
 "1 0.949345 1.0 1.0 NaN \n",
 "2 0.983175 1.0 1.0 NaN \n",
@@ -361,7 +361,7 @@
 " <th></th>\n",
 " <th>roc_auc_score</th>\n",
 " <th>average_path_length_objective</th>\n",
-" <th>complexity_objective</th>\n",
+" <th>number_of_nodes_objective</th>\n",
 " <th>Parents</th>\n",
 " <th>Variation_Function</th>\n",
 " <th>Individual</th>\n",
@@ -760,7 +760,7 @@
 "</div>"
 ],
 "text/plain": [
-" roc_auc_score average_path_length_objective complexity_objective \\\n",
+" roc_auc_score average_path_length_objective number_of_nodes_objective \\\n",
 "0 0.997698 1.0 1.0 \n",
 "25 0.997698 1.0 1.0 \n",
 "30 0.997698 1.0 1.0 \n",

Tutorial/7_dask_parallelization.ipynb

Lines changed: 158 additions & 13 deletions
@@ -5,25 +5,86 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"TODO: Advanced Dask parallelization for HPC"
+"# Parallelization\n",
+"\n",
+"TPOT2 uses the Dask package for parallelization, either locally (dask.distributed.LocalCluster) or multi-node via a job scheduler (dask-jobqueue).\n",
+"\n",
+"## Local Machine Parallelization\n",
+"\n",
+"TPOT2 can be easily parallelized on a local computer by setting the n_jobs and memory_limit parameters.\n",
+"\n",
+"`n_jobs` dictates how many dask workers to launch. In TPOT2 this corresponds to the number of pipelines to evaluate in parallel.\n",
+"\n",
+"`memory_limit` is the amount of RAM to use per worker. "
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"import tpot2\n",
+"import sklearn\n",
+"import sklearn.datasets\n",
+"import numpy as np\n",
+"scorer = sklearn.metrics.get_scorer('roc_auc_ovr')\n",
+"X, y = sklearn.datasets.load_digits(return_X_y=True)\n",
+"X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, train_size=0.75, test_size=0.25)\n",
+"\n",
+"\n",
+"est = tpot2.TPOTClassifier(population_size=8, generations=5, n_jobs=4, memory_limit=\"4GB\", verbose=1)\n",
+"est.fit(X_train, y_train)\n",
+"print(scorer(est, X_test, y_test))"
 ]
 },
 {
 "attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Dask Dashboard\n",
+"## Manual Dask Clients and Dashboard\n",
+"\n",
+"You can also manually initialize a dask client. This can be useful for gaining additional control over the parallelization, for debugging, and for viewing a dashboard of the live performance of TPOT2.\n",
+"\n",
+"You can find more details in the official [documentation here.](https://docs.dask.org/en/stable/)\n",
+"\n",
+"\n",
+"[Dask Python Tutorial](https://docs.dask.org/en/stable/deploying-python.html)\n",
+"[Dask Dashboard](https://docs.dask.org/en/stable/dashboard.html)"
+]
+},
+{
+"attachments": {},
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Initializing a basic dask local cluster"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from dask.distributed import Client, LocalCluster\n",
+"\n",
+"n_jobs = 4\n",
+"memory_limit = \"4GB\"\n",
 "\n",
-"https://docs.dask.org/en/stable/dashboard.html"
+"cluster = LocalCluster(n_workers=n_jobs, # if no client is passed in and no global client exists, create our own\n",
+"                       threads_per_worker=1,\n",
+"                       memory_limit=memory_limit)\n",
+"client = Client(cluster)"
 ]
 },
 {
 "attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Click the link to get to a live dashboard"
+"Get the link to view the dask Dashboard. "
 ]
 },
 {
@@ -32,18 +93,17 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"#TODO\n",
-"from dask.distributed import Client\n",
-"client = Client() # start distributed scheduler locally.\n",
-"client"
+"client.dashboard_link"
 ]
 },
 {
 "attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Dask single node"
+"Pass the client into TPOT to train.\n",
+"Note that if a client is passed in manually, TPOT will ignore n_jobs and memory_limit.\n",
+"If there is no client passed in, TPOT will ignore any global/existing client and create its own."
 ]
 },
 {
@@ -52,15 +112,25 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"#TODO"
+"est = tpot2.TPOTClassifier(population_size=8, generations=5, client=client, verbose=1)\n",
+"# this is equivalent to: \n",
+"# est = tpot2.TPOTClassifier(population_size=8, generations=5, n_jobs=4, memory_limit=\"4GB\", verbose=1)\n",
+"est.fit(X_train, y_train)\n",
+"print(scorer(est, X_test, y_test))\n",
+"\n",
+"# It is good to close the client and cluster when you are done with them\n",
+"client.close()\n",
+"cluster.close()"
 ]
 },
 {
 "attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Dask multiple nodes"
+"Option 2\n",
+"\n",
+"You can initialize the cluster and client with a context manager that will automatically close them. "
 ]
 },
 {
@@ -69,7 +139,82 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"#TODO"
+"from dask.distributed import Client, LocalCluster\n",
+"import tpot2\n",
+"import sklearn\n",
+"import sklearn.datasets\n",
+"import numpy as np\n",
+"\n",
+"scorer = sklearn.metrics.get_scorer('roc_auc_ovr')\n",
+"X, y = sklearn.datasets.load_digits(return_X_y=True)\n",
+"X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, train_size=0.75, test_size=0.25)\n",
+"\n",
+"\n",
+"n_jobs = 4\n",
+"memory_limit = \"4GB\"\n",
+"\n",
+"with LocalCluster(\n",
+"    n_workers=n_jobs,\n",
+"    threads_per_worker=1,\n",
+"    memory_limit=memory_limit,\n",
+") as cluster, Client(cluster) as client:\n",
+"    est = tpot2.TPOTClassifier(population_size=8, generations=5, client=client, verbose=1)\n",
+"    est.fit(X_train, y_train)\n",
+"    print(scorer(est, X_test, y_test))"
+]
+},
+{
+"attachments": {},
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Dask multi node parallelization\n",
+"\n",
+"Dask can parallelize across multiple nodes via job queueing systems. This is done using the dask-jobqueue package. More information can be found in the official [documentation here.](https://jobqueue.dask.org/en/latest/)\n",
+"\n",
+"To parallelize TPOT2 with dask-jobqueue, simply pass a client based on a jobqueue cluster with the desired settings into the client parameter. Each job will evaluate a single pipeline.\n",
+"\n",
+"Note that TPOT will ignore n_jobs and memory_limit as these should be set inside the dask cluster. "
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"from dask.distributed import Client, LocalCluster\n",
+"import sklearn\n",
+"import sklearn.datasets\n",
+"import sklearn.metrics\n",
+"import sklearn.model_selection\n",
+"import tpot2\n",
+"\n",
+"from dask_jobqueue import SGECluster # or SLURMCluster, PBSCluster, etc. Replace SGE with your scheduler.\n",
+"cluster = SGECluster(\n",
+"    queue='all.q',\n",
+"    cores=2,\n",
+"    memory=\"50 GB\"\n",
+"\n",
+")\n",
+"\n",
+"cluster.adapt(minimum_jobs=10, maximum_jobs=100) # auto-scale between 10 and 100 jobs\n",
+"\n",
+"client = Client(cluster)\n",
+"\n",
+"scorer = sklearn.metrics.get_scorer('roc_auc_ovr')\n",
+"X, y = sklearn.datasets.load_digits(return_X_y=True)\n",
+"X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, train_size=0.75, test_size=0.25)\n",
+"\n",
+"est = tpot2.TPOTClassifier(population_size=100, generations=5, client=client, verbose=1)\n",
+"# n_jobs and memory_limit are ignored here; per-job resources are set on the jobqueue cluster\n",
+"\n",
+"est.fit(X_train, y_train)\n",
+"print(scorer(est, X_test, y_test))\n",
+"\n",
+"# It is good to close the client and cluster when you are done with them\n",
+"client.close()\n",
+"cluster.close()"
 ]
 }
 ],
@@ -89,7 +234,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.9"
+"version": "3.10.10"
 },
 "orig_nbformat": 4,
 "vscode": {

setup.py

Lines changed: 1 addition & 0 deletions
@@ -46,6 +46,7 @@ def calculate_version():
     'dask>=2023.3.1',
     'distributed>=2023.3.1',
     'dask-ml>=2022.5.27',
+    'dask-jobqueue>=0.8.1',
     'func_timeout>=4.3.5',
 ],
 extras_require={
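
The new dask-jobqueue pin is what the multi-node tutorial above relies on. As an optional, illustrative sanity check (standard-library and packaging tooling, not part of TPOT2), you can verify that an installed environment satisfies the new requirement:

from importlib.metadata import version   # reads installed package metadata
from packaging.version import Version    # PEP 440-aware version comparison

# Confirm the environment satisfies 'dask-jobqueue>=0.8.1' from setup.py.
assert Version(version("dask-jobqueue")) >= Version("0.8.1")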
