[c++] enhance error handling for forced splits file loading #6832
KYash03 wants to merge 7 commits into lightgbm-org:master
Conversation
Thanks for working on this, can you please add some tests that cover these exceptions?
@microsoft-github-policy-service agree
jameslamb left a comment:
Thanks for working on this! The general approach looks good and the error messages are informative. Nice idea thinking about "file exists but cannot be parsed" as a separate case too!
But I think this deserves some more careful consideration to be sure that we don't end up introducing a requirement on the file indicated by `forcedsplits_filename` also existing at scoring (prediction) time.
```cpp
if (!forced_splits_file.good()) {
  Log::Warning("Forced splits file '%s' does not exist. Forced splits will be ignored.",
               config->forcedsplits_filename.c_str());
```
I think this should be a fatal error at training time... if I'm training a model and expecting specific splits to be used, I'd prefer a big loud error to a training run wasting time and compute resources only to produce a model that accidentally does not look like what I'd wanted.
HOWEVER... I think GBDT::Init() and/or GBDT::ResetConfig() will also be called when you load a model at scoring time, and at scoring time we wouldn't want to get a fatal error because of a missing or malformed file which is only supposed to affect training.
I'm not certain how to resolve that. Can you please investigate that and propose something?
It would probably be helpful to add tests for these different conditions. You can do this in Python for this purpose. Or if you don't have time / interest, I can push some tests here and then you could work on making them pass?
So to be clear, the behavior I want to see is:
- training time:
  - `forcedsplits_filename` file does not exist or is not readable --> ERROR
  - `forcedsplits_filename` is not valid JSON --> ERROR
- prediction / scoring time:
  - `forcedsplits_filename` file does not exist or is not readable --> no log output, no errors
  - `forcedsplits_filename` is not valid JSON --> no log output, no errors
We could add a flag to the GBDT class to indicate the current mode.
This is what I was thinking:
```cpp
bool is_training_ = false;

// Turn the flag on at the start of training, and off at the end.
void GBDT::Train() {
  is_training_ = true;
  // ... regular training code ...
  is_training_ = false;
}

// In Init() and ResetConfig(), handle the file as follows:
if (is_training_) {
  // Stop with an error if anything is wrong.
} else {
  // Simply continue if there are issues.
}
```
Regarding the tests, I'd be happy to write them!
Thanks very much. It is not that simple.
For example, there are many workflows where training and prediction are done in the same process, using the same Booster. So a single property `is_training_` is not going to work.
There are also multiple APIs for training.
And we'd also want to be careful to not introduce this type of checking on every boosting round, as that would hurt performance.
Maybe @shiyu1994 could help us figure out where to put a check like this.
Also referencing this related PR to help: #5653
What if we consider forced splits to be forbidden at inference time? I think that also tells the user that forced splitting is impossible once the model has already been trained.
Introducing a flag to check whether the model is being used for inference or training is quite complicated. That's why I think the current solution is acceptable.
@jameslamb What do you think about keeping the current changes in this PR, given the reasons above?
> @jameslamb What do you think about keeping the current changes in this PR, given the reasons above?
Sorry for the delay.
I think just raising a warning is an acceptable compromise... it gives users a hint to follow, and by not being a fatal error it shouldn't cause problems at inference time.
This will mean that if you train a model with forced splits, save it to a file, then load it in another environment where that file referenced by forcedsplits_filename does not exist, you'll now get a warning about this. That might be annoying for people but I think it's worth it for the benefits mentioned above.
So for this PR... I support this, but @KYash03 please add tests for the conditions I mentioned in https://github.com/microsoft/LightGBM/pull/6832/files#r1957536985 (but with the "file does not exist or is not readable" case always resulting in this warning message in the logs).
@shiyu1994 @StrikerRUS in the future, do you think we should move towards forced splits being considered "data" instead of a parameter? That way, it wouldn't get persisted in the model file (just as init_score and weight are not persisted in the model file). That'd be a clean way to achieve behavior like "forced splits are only used at training time", I think. If you agree with that as a better long-term state, I can write up a feature request describing it.
> That way, it wouldn't get persisted in the model file (just as `init_score` and `weight` are not persisted in the model file).

Hey, I think it's a good idea!
Thanks for the contribution. I will review this soon.
shiyu1994 left a comment:
The changes look good to me in general. But let's wait for a conclusion of our discussion above.
/AzurePipelines run

/AzurePipelines run
Kindly pinging @jameslamb about this comment: #6832 (comment).
@KYash03 would you like to continue this? I'd be happy to help with any questions you have. We would love to get this fix into the next release if we could.
Hey, unfortunately I don't have the time right now. Thanks for all the help and guidance though!
Fixes #6830