Updating with LCBenchTabular original search space #6
Conversation
self._start = sorted_fids[0]
self._end = sorted_fids[-1]
self._step = sorted_fids[1] - sorted_fids[0]
These are set in the benchmark and obtained from self.fidelity_range. I've made sure to drop the 0'th epoch from the lcbench tabular data such that:
bench.start = 1
bench.end = 51
bench.step = 1
bench.fidelity_range = (1, 51, 1)
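
For illustration, a minimal sketch of what dropping the 0'th epoch could look like, assuming the tabular data is a pandas DataFrame with a flat epoch column; the column name and helper function are hypothetical, not the actual mfpbench code:

```python
import pandas as pd

def drop_zeroth_epoch(table: pd.DataFrame, fidelity_col: str = "epoch") -> pd.DataFrame:
    """Drop rows recorded at epoch 0 so the fidelity range starts at 1."""
    table = table[table[fidelity_col] != 0]

    # The benchmark can then derive (start, end, step) from what remains,
    # giving (1, 51, 1) for lcbench tabular.
    sorted_fids = sorted(table[fidelity_col].unique())
    assert (sorted_fids[0], sorted_fids[-1]) == (1, 51)
    return table
```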
@@ -15,7 +18,7 @@
 @dataclass(frozen=True, eq=False, unsafe_hash=True)  # type: ignore[misc]
 class LCBenchTabularConfig(TabularConfig):
     batch_size: int
-    loss: str
+    # loss: str
Good spot
@@ -178,7 +182,7 @@ def __init__(
         super().__init__(
             table=table,
             name=f"lcbench_tabular-{task_id}",
-            config_name="config_id",
+            config_name="id",
This is used to extract the config_id's from the table, so this should be "config_id", as that's what's in the table. This should not affect query(), and the fix was something else.
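
A rough sketch of how the config_name value gets used, assuming the table is a pandas DataFrame; the helper below is illustrative, not the actual implementation:

```python
import pandas as pd

def extract_config_ids(table: pd.DataFrame, config_name: str = "config_id") -> list:
    """Pull the unique config identifiers from the column named by config_name."""
    # If config_name doesn't match the table's actual column (e.g. "id" instead
    # of "config_id"), this lookup raises a KeyError, hence the comment above.
    return sorted(table[config_name].unique().tolist())
```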
@property
def fidelity_range(self) -> tuple[int, int, int]:
    return (1, 51, 1)
This is gotten from the table now. I don't want to indicate that people should override this, as then there is a mismatch between the table contents and what the benchmark advertises. Fixed this for lcbench by dropping the 0'th epoch.
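
A small sketch of that direction, reusing the start/end/step derivation from the snippet earlier in this thread; the class name and fidelity_col parameter are illustrative:

```python
import pandas as pd

class TableBackedBenchmark:
    def __init__(self, table: pd.DataFrame, fidelity_col: str = "epoch") -> None:
        # Derive the fidelity bounds directly from the table contents.
        sorted_fids = sorted(table[fidelity_col].unique())
        self._start = sorted_fids[0]
        self._end = sorted_fids[-1]
        self._step = sorted_fids[1] - sorted_fids[0]

    @property
    def fidelity_range(self) -> tuple[int, int, int]:
        # Computed from the data, so it can't drift from the table, which is
        # the mismatch that overriding this property would reintroduce.
        return (self._start, self._end, self._step)
```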
def fidelity_range(self) -> tuple[int, int, int]:
    return (1, 51, 1)

def get_raw_space(self, name: int | None = None, seed: int | None = None) -> ConfigurationSpace:
Hmmm, I'm not sure how to approach this actually. I would prefer this to be a property of the benchmark, but of course it's going to be specific to each tabular benchmark, and some may not have one.

My suggestion that minimizes type differences is:

- A classic Benchmark will always have a .space: ConfigurationSpace, as it is currently. TabularBenchmark will also always have a .space. They will be required to either pass in an empty ConfigurationSpace if there is none available, or otherwise pass in whatever space they can associate with it (i.e. as you've done here); see the sketch after this list.

Changes:

- This will remove the current version of .space for TabularBenchmark, which currently is just a categorical of ids. These can still be accessed through self.configs: dict[config_id, Config] or through .config_keys.

This design serves two purposes:

- Someone using the GenericTabularBenchmark interface can pass in the raw space they want to incorporate, and it will act like any prebuilt benchmark.
- It keeps behaviour more uniform, i.e. there's less chance people do weird funky shit in their own overridden get_raw_space(). They either pass in a space or not, and we can handle the logic for that in one place.
Closed as these changes were implemented in a series of commits.