Conversation

@nikromen
Member

Fix #45

  • a few adjustments, such as reducing model size by penalizing large models during tuning

With a small change in parameters, this should mainly prevent the
XGBoost model from growing enormously large without much improvement in return.
@gemini-code-assist

Summary of Changes

Hello @nikromen, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the RPMeta project by optimizing XGBoost prediction performance through the use of inplace_predict, which resolves a known efficiency issue. It also introduces a new feature to penalize model size during training for both XGBoost and LightGBM, promoting the creation of more compact models. Furthermore, the PR improves the developer experience by providing an interactive shell for model inspection and refactors the core model handling logic for better maintainability.

Highlights

  • XGBoost Prediction Optimization: Implemented inplace_predict for XGBoost to significantly reduce prediction overhead by avoiding repeated DMatrix creation, directly addressing issue #45 ("predicting on xgboost is too inefficient").
  • Model Size Management: Introduced configurable size penalization for both XGBoost and LightGBM models within the Optuna objective, allowing for the training of smaller, more efficient models.
  • Interactive Debugging Shell: Added a new CLI command (rpmeta shell) to launch an interactive Python shell, pre-loaded with the model and configuration, for easier debugging and exploration.
  • Refactored Model Handling: Centralized model loading and prediction logic within the Model base class, improving encapsulation and simplifying the Predictor interface.
  • XGBoost Hyperparameter Tuning Enhancements: Updated XGBoost hyperparameter search space in Optuna, transitioning from max_depth to max_leaves and max_bin for finer control over tree complexity, and adjusting ranges for other parameters.


@gemini-code-assist bot left a comment


Code Review

This pull request introduces two main improvements: it optimizes XGBoost prediction performance by using inplace_predict to avoid DMatrix creation overhead, and it adds a mechanism to penalize large models during hyperparameter tuning to encourage smaller, more efficient models. The refactoring of the Model class to encapsulate the loaded regressor is a good architectural improvement that makes the code cleaner and more object-oriented. The changes are well-tested, including new tests to verify the correctness of the optimized prediction path. I have a few suggestions to improve robustness and fix a potential bug.

def compute_size_penalty(self, regressor: Any, trial: Any = None) -> float:
    if not self.config.model.lightgbm.size_penalty_enabled:
        return 0.0


critical

The method call booster.num_leaves() will likely raise an AttributeError at runtime because the lightgbm.Booster object does not have this method. To get the total number of leaves, you would need to parse the model structure from booster.dump_model().

Additionally, for consistency with the XGBoost implementation and the configuration comments which refer to nodes, you might consider calculating the penalty based on the total number of nodes instead of leaves.

Suggested change
model_dump = booster.dump_model()
n_leaves = sum(tree_info["num_leaves"] for tree_info in model_dump["tree_info"])
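To illustrate the suggested fix, here is a minimal sketch that counts leaves from the nested dict shape returned by lightgbm's Booster.dump_model(); the example_dump literal below is a hand-written stand-in for a real dump, since no trained model is involved:

```python
def total_leaves(model_dump: dict) -> int:
    # Each entry in "tree_info" describes one tree and carries "num_leaves".
    return sum(tree_info["num_leaves"] for tree_info in model_dump["tree_info"])

# Hand-written stand-in for booster.dump_model() output (two trees).
example_dump = {"tree_info": [{"num_leaves": 31}, {"num_leaves": 17}]}
print(total_leaves(example_dump))  # → 48
```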

rpmeta/model.py Outdated
raw_pred = booster.inplace_predict(data)
# lost the inverse transformation here because of the inplace_predict :(
# https://xgboost.readthedocs.io/en/stable/prediction.html#in-place-prediction
return np.expm1(raw_pred)


high

Hardcoding np.expm1 here makes the prediction logic brittle. If the transformation function (func=np.log1p in create_regressor) is ever changed, this will silently produce incorrect predictions. It's more robust to use the inverse_func stored on the TransformedTargetRegressor instance, which is available via self.regressor.

Suggested change
return np.expm1(raw_pred)
return self.regressor.inverse_func(raw_pred)
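A minimal sketch of why reading the stored inverse_func is safer than hardcoding np.expm1, using an invented FakeTransformedTarget class as a stand-in for a fitted TransformedTargetRegressor:

```python
import numpy as np

class FakeTransformedTarget:
    """Stand-in for TransformedTargetRegressor(func=np.log1p, inverse_func=np.expm1)."""
    func = staticmethod(np.log1p)
    inverse_func = staticmethod(np.expm1)

regressor = FakeTransformedTarget()

raw_pred = np.array([0.5, 1.0, 2.0])  # booster output, in log1p space

# Hardcoded inverse: silently wrong if func is ever changed.
hardcoded = np.expm1(raw_pred)

# Reading the inverse off the regressor stays correct by construction.
generic = regressor.inverse_func(raw_pred)

assert np.allclose(hardcoded, generic)
# Round trip: applying func then inverse_func recovers the input.
assert np.allclose(regressor.inverse_func(regressor.func(raw_pred)), raw_pred)
```

If create_regressor later switches to a different transform pair, only the class attributes change and the prediction path keeps working.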

Comment on lines 124 to 125
model_size = sys.getsizeof(best_regressor) / (1024 * 1024)
logger.info("Model size is %.2f MB", model_size)


medium

Using sys.getsizeof() to determine the model size is likely inaccurate. This function doesn't account for the memory used by the objects the model contains, so the logged size will be misleadingly small. A more reliable method is to measure the size of the model once it's serialized to disk, for example by summing the sizes of the files created by save_regressor.
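A sketch of the discrepancy the review describes, using a plain dict as a stand-in for a fitted regressor and pickle as the serializer (the project's own save_regressor may serialize differently):

```python
import os
import pickle
import sys
import tempfile

# Stand-in for a fitted regressor holding substantial nested data.
model = {"weights": list(range(100_000))}

# Shallow: counts only the dict object itself, not what it contains.
shallow_bytes = sys.getsizeof(model)

# Serialize to disk and measure the file, as the review suggests.
with tempfile.NamedTemporaryFile(delete=False) as f:
    pickle.dump(model, f)
    path = f.name
serialized_bytes = os.path.getsize(path)
os.unlink(path)

# The on-disk size reflects the contained data; getsizeof does not.
assert serialized_bytes > shallow_bytes
```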

@nikromen
Member Author

/gemini review


@gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant performance improvement for XGBoost predictions by using inplace_predict, bypassing the overhead of DMatrix creation. It also refactors the model loading logic for better encapsulation, making the Model class stateful. Additionally, it adds a mechanism to penalize large models during hyperparameter tuning to encourage smaller, more efficient models. The changes are well-structured and accompanied by thorough tests, including new tests to validate the optimized prediction path. A new shell command is also added for easier debugging. My review found a minor issue with a duplicated test case, which I've commented on.

Comment on lines 69 to 87
def test_predict_uses_model_predict(example_config):
    category_maps = {"package_name": ["pkg1"], "feature": ["a"]}
    mock_input = MagicMock()
    mock_input.package_name = "pkg1"
    df_dict = {
        "feature": "a",
        "package_name": "pkg1",
        "ram": 1000,
        "swap": 2000,
    }
    mock_input.to_data_frame.return_value = pd.DataFrame([df_dict])
    mock_model = MagicMock()
    mock_model.predict.return_value = np.array([42])

    predictor = Predictor(mock_model, category_maps, example_config)
    result = predictor.predict(mock_input, example_config.model.behavior)

    assert result == 42
    mock_model.predict.assert_called_once()


medium

This test test_predict_uses_model_predict is a duplicate of test_predict_returns_prediction (lines 34-52). Having duplicate tests increases maintenance overhead. Please remove this redundant test.

Bypass sklearn's predict() which creates a new DMatrix on every call,
causing expensive memmove operations. Use Booster.inplace_predict()
directly with manual expm1() inverse transformation.


Development

Successfully merging this pull request may close these issues.

predicting on xgboost is too inefficient

1 participant