Feature: implement simulated campaigning for "hyper parameter tuning" #33
@matthewcarbone - sounds great, and I will be happy to help.
@ziatdinovmax Quick update: I have not forgotten about this. I have funding starting in October and I'll be building on this. Btw, unrelated question (we can open a new issue if you want), but can gpax do batch sampling? I.e., instead of sequential experiments ("given the data at hand, find me the next experiment to maximize the acquisition function"), can gpax do "given the data at hand, find me the next N experiments to run in parallel"?
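For reference, one common way to do batch sampling on top of a sequential acquisition function is the "constant liar" heuristic: pick the acquisition maximum, pretend it has already been measured, and repeat. A minimal stand-alone sketch (the toy acquisition function and grid below are purely illustrative, not gpax API):

```python
# "Constant liar" batch selection sketch: after each pick we pretend the
# acquisition collapses near that point, so subsequent picks spread out.
import math

def acquisition(x, lies):
    # Toy acquisition: a bump at x = 0.3, damped near previously "lied" points.
    base = math.exp(-((x - 0.3) ** 2) / 0.01)
    penalty = sum(math.exp(-((x - lie) ** 2) / 0.005) for lie in lies)
    return base - penalty

def select_batch(candidates, n):
    lies, batch = [], []
    for _ in range(n):
        best = max(candidates, key=lambda x: acquisition(x, lies))
        batch.append(best)
        lies.append(best)  # pretend we already measured here
    return batch

grid = [i / 100 for i in range(101)]
batch = select_batch(grid, 3)  # three distinct points, spread around the bump
```

In a real loop the "lie" would be injected as a hallucinated observation before refitting the surrogate, rather than as an explicit penalty term.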
@matthewcarbone - Thanks for the update.
On the "parallelize the campaigning": assuming this is a single program that runs with different input parameters, can this be done with JAX's built-in tools for parallel evaluation?
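If a single campaign evaluation can be expressed as a pure JAX function of its hyperparameters, then `jax.vmap` batches it across a grid of settings in one call (and `jax.pmap` would spread it across devices). A sketch, where the quadratic "campaign score" is a stand-in for actually running a simulated campaign:

```python
# vmap a (toy) campaign-scoring function over many hyperparameter values.
import jax
import jax.numpy as jnp

def campaign_score(beta):
    # Stand-in: pretend this runs a full simulated campaign with UCB(beta)
    # and returns the best value it found.
    return -(beta - 1.5) ** 2

betas = jnp.linspace(0.1, 3.0, 30)
scores = jax.vmap(campaign_score)(betas)   # all settings evaluated in one batch
best_beta = betas[jnp.argmax(scores)]
```

The caveat is that a real campaign with NUTS-based fitting involves Python-side control flow and is not trivially vmap-able, which is where process-level parallelism becomes the simpler option.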
It's funny, I was thinking something similar, but I don't quite know how to do this. The tough part is that it's a combination of continuous and bandit optimization. There's almost a tree of decisions. For example, do you choose EI or UCB? If you choose UCB, you also need to choose beta. How does one go about optimizing over that space? I know it's possible, but I'm not sure how to implement it. Btw, I also have concerns about speed: fitting a GP at every step of every simulated campaign could get expensive.
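One simple way to handle such a conditional ("tree-structured") space is to sample the branch first, then any parameters that only exist on that branch. A hypothetical sketch with random search, where `score` stands in for running one full simulated campaign:

```python
# Random search over a conditional hyperparameter space:
# draw the acquisition function, then branch-specific parameters.
import random

def sample_config(rng):
    acq = rng.choice(["EI", "UCB"])
    if acq == "UCB":
        return {"acq": "UCB", "beta": rng.uniform(0.1, 5.0)}
    return {"acq": "EI"}  # EI has no extra parameter here

def score(cfg):
    # Toy objective: pretend UCB with beta near 2.0 campaigns best.
    if cfg["acq"] == "EI":
        return 0.5
    return 1.0 - abs(cfg["beta"] - 2.0) / 5.0

rng = random.Random(0)
configs = [sample_config(rng) for _ in range(200)]
best = max(configs, key=score)
```

Libraries like Optuna handle exactly this kind of define-by-run conditional space, so it may not be necessary to build the search machinery from scratch.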
One can use stochastic variational inference GP (viGP) or deep kernel learning (viDKL) for large datasets and high dimensions. The MCMC (or, more precisely, HMC with the NUTS sampler) implementation is already dramatically faster than what the pymc or pyro packages offer. I generally recommend it in situations where specific physics-based priors are available or one wants a detailed analysis of posterior distributions.
Edit: whoops, please disregard. 😁
@ziatdinovmax as we discussed, I plan on implementing a simulated campaigning loop for tuning the "hyper parameters" of an optimization loop. I first want to learn this library inside and out, so it might take some time. But anyways, the executive summary of the tasks at hand looks something like this:
- A `Campaign` class for storing the state of and running the campaign
- Parallelization of the campaigning, possibly with `mpi4py` but more likely `multiprocessing` will be enough