return multiple trials per suggestion for bayesian optimization #7
base: master
Conversation
Great, and thanks for the contribution @zrcahyx 👍 I will have a look at this pull request and test it soon.
@tobegit3hub The idea is simple: just return the top N `x_tries` with the highest acquisition function values. But since the sample number is set to 100000, which is high, the returned top N trials are quite close to each other and can be indistinguishable. Do you think it's better to mix random search with Bayesian optimization? For example, we return the one best trial via Bayesian optimization and the rest …
@tobegit3hub Hi, I have made a second commit to this pull request that abandons the …
I remember that the problem of returning multiple trials is not about running the GaussianProcess multiple times. If we don't feed new data to BayesianOptimization, the acquisition function may return the same value. In short, if you get multiple trials from BayesianOptimization without new data, the returned values may be similar.
@tobegit3hub Sure, without new data, if you request suggestions from BayesianOptimization multiple times, the returned trials will have quite similar parameters (because the GaussianProcess fits the same completed trials). What I mean is: to return multiple trials per suggestion, we do just one GaussianProcess fit and return the top N trials with the top N highest acquisition values.
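The "fit once, return top N" idea above can be sketched as follows. This is an illustrative reimplementation, not the actual Advisor code: the function name `top_n_suggestions`, the use of scikit-learn's `GaussianProcessRegressor`, and the expected-improvement acquisition are all assumptions for the sake of the example.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def top_n_suggestions(x_completed, y_completed, bounds,
                      n_trials=3, n_samples=100000, seed=0):
    """Fit the Gaussian process ONCE on the completed trials, then return
    the n_trials candidate points with the highest acquisition values.

    Illustrative sketch only; Advisor's bayesian_optimization.py may use
    different helpers and a different acquisition function.
    """
    rng = np.random.RandomState(seed)
    gp = GaussianProcessRegressor(normalize_y=True)
    gp.fit(x_completed, y_completed)

    # Draw random candidate points uniformly within the search bounds
    # (this mirrors the large x_tries sample mentioned in the thread).
    dim = bounds.shape[0]
    x_tries = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_samples, dim))

    # Expected improvement over the best observed value.
    mu, sigma = gp.predict(x_tries, return_std=True)
    sigma = np.maximum(sigma, 1e-9)  # avoid division by zero
    best_y = y_completed.max()
    z = (mu - best_y) / sigma
    ei = (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)

    # Indices of the n_trials highest acquisition values, best first.
    top_idx = np.argsort(ei)[-n_trials:][::-1]
    return x_tries[top_idx]
```

As the thread notes, with 100000 dense samples the top N points tend to cluster around the same acquisition peak, which is exactly the problem discussed next.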
Yes, you know what I mean, @zrcahyx. But the acquisition function is continuous, which means that if we get the highest point, the second highest point is right next to it. I'm not sure we can get the top N values at distinct, reasonable points.
Yes, that's the problem! @tobegit3hub So that's why I asked whether it's better to mix random search with Bayesian optimization, or maybe there are other, better solutions. But it's certain that iterating over …
@zrcahyx, I found your code and planned to test your parallel Bayesian optimization algorithm. However, it seems to rely on suggestion.algorithm.base_algorithm, which was not submitted through this PR. Can you take a look?
Since training a neural network can be time-consuming, it's efficient to train multiple models simultaneously. However, Advisor's Bayesian optimization currently returns only one trial per suggestion, so we cannot run multiple trials at the same time. For this reason, I made a few changes to bayesian_optimization.py so that it can return more than one trial per suggestion.