In the zero-cost branch, the optimizers Npenas and Bananas query the validation accuracy of architectures from the zero-cost benchmark as follows:
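The snippet referenced above was not reproduced in this thread. As a rough illustration of that kind of lookup, a minimal sketch, with all names (`zc_api`, `arch_hash`, the metric key) as hypothetical placeholders rather than NASLib's actual API:

```python
# Hypothetical sketch: a precomputed zero-cost benchmark table keyed by an
# architecture hash, from which validation accuracy is read back directly.
# None of these names are NASLib's real identifiers.

def query_val_accuracy(zc_api: dict, arch_hash: str) -> float:
    """Look up an architecture's validation accuracy in the benchmark table."""
    return zc_api[arch_hash]["val_accuracy"]

# toy table standing in for the real zero-cost benchmark
zc_api = {"arch-001": {"val_accuracy": 91.2}}
print(query_val_accuracy(zc_api, "arch-001"))  # 91.2
```

A lookup like this only works when the architecture (and hence its dataset and search space) is already present in the benchmark, which is exactly the limitation raised below.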
The question is whether this supports the case where a user wants to use the ZeroCost predictor because their dataset or search space is not covered by the zero-cost benchmark.
If this is a case we want to support, one option would be to introduce a use_zc_api parameter and use it as follows:
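The proposed snippet itself is not shown in this thread; a minimal sketch of what such a switch might look like, assuming hypothetical names (`evaluate`, `zc_api`, `compute_zc_score`) that are illustrative placeholders, not NASLib's real interfaces:

```python
# Hypothetical sketch of a use_zc_api flag: prefer the precomputed benchmark
# when it is enabled and covers the architecture, otherwise fall back to
# computing the zero-cost proxy directly. All names are placeholders.

def evaluate(arch_hash, use_zc_api, zc_api, compute_zc_score):
    """Return a fitness value for an architecture."""
    if use_zc_api and arch_hash in zc_api:
        # benchmark covers this architecture: reuse the stored value
        return zc_api[arch_hash]["val_accuracy"]
    # unsupported dataset/search space (or api disabled): compute the
    # zero-cost score on the fly instead of querying the benchmark
    return compute_zc_score(arch_hash)

zc_api = {"arch-001": {"val_accuracy": 90.5}}
fallback = lambda arch: 0.0  # stand-in for a real zero-cost computation

print(evaluate("arch-001", True, zc_api, fallback))   # 90.5 (from benchmark)
print(evaluate("arch-002", True, zc_api, fallback))   # 0.0  (fallback)
print(evaluate("arch-001", False, zc_api, fallback))  # 0.0  (api disabled)
```

Gating on both the flag and membership in the table keeps the benchmark path as a pure optimization: disabling it never changes which architectures can be evaluated.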
The code was written this way for the Zero-Cost NAS paper, where we consumed only search spaces for which the values were available in the zc_api. It would make more sense to give users the option to choose whether or not to query the zc_api, as you suggest.
Got it. Another sub-issue that came up is when to call query_zc_scores. The question is whether this function should only be called under the following condition:
Or is there a case where zero-cost scores can still be calculated after the self.max_zerocost parameter has been exceeded? We assume this parameter refers to the maximum number of zero-cost evaluations, so presumably the answer is no. What do you think?
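Under the reading suggested above, max_zerocost acts as a hard budget: zero-cost scoring happens only while the evaluation count stays below the cap. A small sketch of that interpretation, with all class and method names hypothetical:

```python
# Hypothetical sketch of max_zerocost as a hard cap on the number of
# zero-cost evaluations: once the budget is spent, no further scores
# are computed. Names here are illustrative, not NASLib's.

class ZeroCostBudget:
    def __init__(self, max_zerocost: int):
        self.max_zerocost = max_zerocost
        self.n_evaluations = 0

    def can_score(self) -> bool:
        # scoring is allowed only while under the cap
        return self.n_evaluations < self.max_zerocost

    def score(self, arch, scorer):
        if not self.can_score():
            raise RuntimeError("max_zerocost budget exhausted")
        self.n_evaluations += 1
        return scorer(arch)

budget = ZeroCostBudget(max_zerocost=2)
dummy_scorer = lambda arch: 1.0  # stand-in for a real zero-cost proxy
budget.score("arch-a", dummy_scorer)
budget.score("arch-b", dummy_scorer)
print(budget.can_score())  # False: further zero-cost evaluations are skipped
```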