Combining multi-fidelity and risk-averse BO #2484
-
I'm trying to combine multi-fidelity and risk-aware BO, taken from this tutorial and this tutorial, respectively. The acquisition function for multi-fidelity is a two-parter:

```python
def get_mfkg(model):
    # Fix fidelity to highest value in order to evaluate information gain
    curr_val_acqf = FixedFeatureAcquisitionFunction(
        acq_function=PosteriorMean(model),
        d=problem_dim + 1,
        columns=[problem_dim],
        values=[1],
    )
    _, current_value = optimize_acqf(
        acq_function=curr_val_acqf,
        bounds=bounds[:, :-1],
        q=1,
        num_restarts=10 if not SMOKE_TEST else 2,
        raw_samples=1024 if not SMOKE_TEST else 4,
        options={"batch_limit": 10, "maxiter": 200},
    )
    return qMultiFidelityKnowledgeGradient(
        model=model,
        num_fantasies=128 if not SMOKE_TEST else 2,
        current_value=current_value,
        cost_aware_utility=cost_aware_utility,
        project=project,
    )
def optimize_mfkg_and_get_observation(mfkg_acqf):
    """Optimizes MFKG and returns a new candidate, observation, and cost."""
    # generate new candidates
    candidates, _ = optimize_acqf_mixed(
        acq_function=mfkg_acqf,
        bounds=bounds,
        fixed_features_list=[{problem_dim: 0.0}, {problem_dim: 1.0}],
        q=BATCH_SIZE,
        num_restarts=NUM_RESTARTS,
        raw_samples=RAW_SAMPLES,
        # batch_initial_conditions=X_init,
        options={"batch_limit": 5, "maxiter": 200},
    )
    # observe new values
    cost = cost_model(candidates).sum()
    new_x = candidates.detach()
    new_obj = fitness_fun(new_x)
    print(f"candidates:\n{new_x}\n")
    print(f"observations:\n{new_obj}\n\n")
    return new_x, new_obj, cost
```

The acquisition function for risk-averse BO looks like this:

```python
def optimize_acqf_and_get_observation():
r"""Optimizes the acquisition function, and returns a new candidate and observation."""
acqf = qNoisyExpectedImprovement(
model=model,
X_baseline=train_X,
sampler=SobolQMCNormalSampler(sample_shape=torch.Size([128])),
objective=risk_measure,
prune_baseline=True,
)
candidate, _ = optimize_acqf(
acq_function=acqf,
bounds=bounds,
q=BATCH_SIZE,
num_restarts=NUM_RESTARTS,
raw_samples=RAW_SAMPLES,
)
new_observations = fitness_fun(candidate)
return candidate, new_observations My first thought was to replace AssertionError: Expected the output shape to match either the t-batch shape of X, or the `model.batch_shape` in the case of acquisition functions using batch models; but got output with shape torch.Size([128, 5, 16]) for X with shape torch.Size([128, 5, 1, 3]). What would be the correct way to combine these acquisition functions? |
-
Hi @samuelkim16. I haven't worked with risk-averse BO in a while and I never tried to combine it with multi-fidelity. This is a non-standard use case and will likely require some debugging to get working. Here are some thoughts I have:
- A natural replacement for `PosteriorMean` would be `qSimpleRegret`, its MC counterpart. `qSimpleRegret` should work with the `objective=risk_measure` argument to compute the risk measure as the value function.
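
For concreteness, here is a rough, untested sketch of what that replacement could look like in your `get_mfkg`, reusing the `problem_dim`, `bounds`, `risk_measure`, `cost_aware_utility`, `project`, and `SMOKE_TEST` definitions from the two tutorials. Also passing `objective=risk_measure` to `qMultiFidelityKnowledgeGradient` itself is an assumption on my part and is exactly the kind of thing that may need debugging:

```python
from botorch.acquisition import (
    FixedFeatureAcquisitionFunction,
    qMultiFidelityKnowledgeGradient,
    qSimpleRegret,
)
from botorch.optim import optimize_acqf


def get_mfkg_risk_averse(model):
    # Value function: qSimpleRegret (the MC counterpart of PosteriorMean)
    # mapped through the risk measure, with the fidelity feature fixed to
    # its highest value, as in the multi-fidelity tutorial.
    value_function = FixedFeatureAcquisitionFunction(
        acq_function=qSimpleRegret(model, objective=risk_measure),
        d=problem_dim + 1,
        columns=[problem_dim],
        values=[1],
    )
    _, current_value = optimize_acqf(
        acq_function=value_function,
        bounds=bounds[:, :-1],
        q=1,
        num_restarts=10 if not SMOKE_TEST else 2,
        raw_samples=1024 if not SMOKE_TEST else 4,
        options={"batch_limit": 10, "maxiter": 200},
    )
    return qMultiFidelityKnowledgeGradient(
        model=model,
        num_fantasies=128 if not SMOKE_TEST else 2,
        current_value=current_value,
        cost_aware_utility=cost_aware_utility,
        project=project,
        # Assumption: pass the risk measure here as well so the fantasy
        # value function is evaluated under the same objective.
        objective=risk_measure,
    )
```

This is only meant as a starting point; I haven't run it, so expect to iterate on the tensor shapes.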