Hi all, I have a question regarding a best practice. The priors in BoTorch are built under the assumption that the input data is normalized to the unit cube (i.e., each dimension scaled to between zero and one). Now assume that one has an input whose bounds can change over the course of a project, so historical data may lie outside the current bounds. Currently, I always take `max(upper_bound, historical_data)` as the upper bound and `min(lower_bound, historical_data)` as the lower bound, so that all historical points end up inside the unit cube. What is your recommended way of doing this? Best, Johannes
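For concreteness, here is a minimal sketch of the bounds-widening approach described above, in plain Python. The function names (`effective_bounds`, `normalize`) are illustrative, not BoTorch API:

```python
def effective_bounds(lower, upper, historical):
    # Widen the user-specified bounds so that every historical
    # observation falls inside them (for a single dimension).
    lo = min(lower, min(historical))
    hi = max(upper, max(historical))
    return lo, hi

def normalize(x, lo, hi):
    # Map x from [lo, hi] onto the unit interval [0, 1].
    return (x - lo) / (hi - lo)

# Example: bounds [5, 10], but one historical point at 4 and one at 12.
lo, hi = effective_bounds(5.0, 10.0, [4.0, 6.0, 12.0])  # -> (4.0, 12.0)
```

With these widened bounds, all historical data maps into [0, 1], at the cost of the "current" search-space bounds no longer mapping exactly to 0 and 1.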
Hi @jduerholt! As for what we currently do in Ax: I'm pretty sure the bounds would dynamically adjust to your current search space. This could leave some data outside the unit cube, but it would still be included in the GP fitting if desired. We haven't specifically ablated how bounds outside the unit cube interact with the lengthscale prior, but I would be very surprised if a 2x change in one dimension substantially impacted performance. With that said, I would keep the bounds at 5 and 10 throughout if you think this is the most relevant range for the aforementioned dimension; otherwise, I would set them to 3 and 15. What's the reason for the changing bounds? Sounds interesting.
I agree with Carl here that I would be surprised if data outside the unit normalization had a material impact on the model fit (unless it's extremely far away, i.e. order-of-magnitude different). The main thing is to be consistent w.r.t. normalization and un-normalization: you don't want to use one scaling for modeling and a different one for scaling the search-space bounds.
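The consistency point above can be sketched as follows: pick one set of bounds and use it for both mapping data into the unit cube (for modeling) and mapping candidates back out (for the search space). The bounds value and function names here are hypothetical, not from the thread's code:

```python
# One fixed set of bounds, chosen once and reused everywhere
# (e.g. the widest range considered relevant for this dimension).
BOUNDS = (3.0, 15.0)

def to_unit(x, bounds=BOUNDS):
    # Normalize a raw input into [0, 1] for model fitting.
    lo, hi = bounds
    return (x - lo) / (hi - lo)

def from_unit(u, bounds=BOUNDS):
    # Un-normalize a candidate from [0, 1] back to the raw scale.
    lo, hi = bounds
    return lo + u * (hi - lo)
```

Because both directions share `BOUNDS`, a point survives a round trip unchanged; the failure mode the reply warns about is normalizing with one range and un-normalizing with another.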