Hi,
I'm seeing a small delta between predictions (same model, same inputs) from gbdt-rs and rust-xgboost (https://github.com/davechallis/rust-xgboost), which is based on the C++ implementation, using a gbtree booster with logistic regression.
I'm researching this at the moment and suspect a few possible causes:
- floating-point precision differences between the native C++ and Rust implementations
- differences in the XGBoost implementation itself
- I'm training in Python and loading into Rust via the convert script, so maybe there's a problem reading the dump on the Rust side (I assume the save side is OK because it uses the C++ lib)
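Before digging into any of these, it may help to quantify the delta. A minimal sketch (with hypothetical prediction values, no external dependencies) that measures the largest element-wise difference between the two implementations' outputs, to judge whether it is plausibly float32 rounding noise or something larger:

```python
# Sketch: measure the worst-case delta between two prediction vectors.
# The prediction values below are hypothetical placeholders.

def max_abs_delta(preds_a, preds_b):
    """Largest element-wise absolute difference between two prediction lists."""
    assert len(preds_a) == len(preds_b)
    return max(abs(a - b) for a, b in zip(preds_a, preds_b))

# Hypothetical outputs from gbdt-rs and rust-xgboost on the same inputs.
rust_preds = [0.9134876, 0.0812345, 0.5523410]
cpp_preds  = [0.9134878, 0.0812344, 0.5523407]

delta = max_abs_delta(rust_preds, cpp_preds)
print(f"max abs delta: {delta:.1e}")
```

Deltas on the order of 1e-6 or below are consistent with float32 accumulation-order differences; deltas much larger than that point toward a parameter or logic difference rather than precision.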
From your experience, is this a known issue? Or can you point me in a more specific direction to research, beyond what I listed above?
Thanks
UPDATE:
I have now narrowed it down to how parameters are initialized on the Python side vs. the Rust side. It looks like some of the parameters are either not loaded or are taken into account differently. When the models on both the Python and Rust sides are loaded with no parameters, the results are equal.
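One way to pin down which parameter is handled differently is to dump each side's effective configuration to JSON and diff it. This is a hedged sketch with hypothetical parameter dumps: on the Python side, xgboost's `Booster.save_config()` can produce such a JSON dump; the Rust side would need equivalent logging of whatever it parsed.

```python
# Sketch: diff two flat JSON parameter dumps to surface mismatches.
# The parameter strings below are hypothetical examples, not real dumps.
import json

def diff_params(cfg_a: str, cfg_b: str) -> dict:
    """Return {key: (a_value, b_value)} for every parameter that differs."""
    a, b = json.loads(cfg_a), json.loads(cfg_b)
    keys = set(a) | set(b)
    return {k: (a.get(k), b.get(k)) for k in keys if a.get(k) != b.get(k)}

# Hypothetical effective parameters seen by each side.
python_side = '{"objective": "binary:logistic", "base_score": "0.5", "eta": "0.3"}'
rust_side   = '{"objective": "binary:logistic", "base_score": "0.0", "eta": "0.3"}'

print(diff_params(python_side, rust_side))
```

A single mismatched default such as `base_score` would shift every logistic prediction by a small constant-looking amount, which matches the symptom of a uniform small delta that disappears when both sides are loaded with no parameters.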