feature: rng primitive refactoring #2968
base: main
Conversation
/intelci: run
cpp/daal/src/algorithms/engines/philox4x32x10/philox4x32x10_batch_container.h (review comments outdated, resolved)
cpp/daal/src/algorithms/engines/philox4x32x10/philox4x32x10_impl.i (review comments outdated, resolved)
/intelci: run
/intelci: run
/intelci: run
sklearnex reference PR: uxlfoundation/scikit-learn-intelex#2228
A couple of questions from my side:
For the RF algorithm, the option to choose the engine_method will be added in a follow-up PR (likely here: #3029). I kept MT2203 to preserve result compatibility with previous releases. I am not sure it is the best strategy in terms of performance, but I expect the potential accuracy/MSE with Philox might be worse. Moreover, it could make the review process more complex. I probably didn't get the idea of
Do you mean creating a dispatcher before, for example, uniform, or inside uniform, which would choose the best potential engine method?
No, we preliminarily discussed it with the oneMKL team and they suggested the engines that are best in terms of performance and generator period for GPU (our initial goal was to improve RF performance on GPU). The full list of engines is pretty big: https://oneapi-spec.uxlfoundation.org/specifications/oneapi/v1.3-rev-1/elements/onemkl/source/domains/rng/host_api/engines-basic-random-number-generators so the goal wasn't just to add all engine methods from oneMKL.
I am not against removing this, but I am not sure we can easily change the default behavior. Based on the experiments, the new engines (MRG32k3a, Philox) plus MCG59 are significantly better. It makes sense to discuss, but probably just changing the default engine could be a good solution.
I guess it depends on the implementation on the oneMKL side. I am not sure, but as far as I know, by default oneMKL uses N per-thread sub-engines in MT2203 and MT19937; maybe that is the reason for such behavior. As far as I know, MRG32k3a, Philox, and MCG59 are implemented without sub-engines inside.
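To make the sub-engine point concrete, here is a minimal host-side C++ sketch contrasting the two parallelization strategies. It uses the standard library as a stand-in, not the actual oneMKL internals, and the per-thread seeding is a simplification (real sub-engine families such as MT2203 use statistically independent parameter sets rather than seed offsets):

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Strategy A: one independent sub-engine per thread (roughly what is described
// above for MT2203/MT19937): each thread owns its own generator state.
std::vector<std::mt19937> make_sub_engines(std::uint32_t base_seed, int n_threads) {
    std::vector<std::mt19937> engines;
    engines.reserve(static_cast<std::size_t>(n_threads));
    for (int t = 0; t < n_threads; ++t)
        engines.emplace_back(base_seed + static_cast<std::uint32_t>(t)); // simplified seeding
    return engines;
}

// Strategy B: a single logical stream that each thread enters at a different
// offset via skip-ahead (std::mt19937::discard is the standard-library analogue;
// for counter-based engines such as Philox this jump is cheap).
std::mt19937 engine_at_offset(std::uint32_t seed, std::uint64_t offset) {
    std::mt19937 engine(seed);
    engine.discard(offset); // advance the state without producing output
    return engine;
}
```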
Note that NumPy itself, which is used by scikit-learn, does not have such compatibility guarantees for its random generators. Thus, I don't think it should be a big deal to make breaking changes in the produced random numbers in sklearnex.
I'd expect it should actually be the opposite, since (a) we are seeding MT with a single integer instead of a sequence, which leaves it with issues for the first draws; (b)
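Regarding point (a), a short standard-library sketch of single-integer seeding versus sequence-based seeding of MT19937 (illustrative only; this is not the oneDAL/oneMKL code path):

```cpp
#include <random>

int main() {
    // Single-integer seeding: only 32 bits of entropy to initialize a ~2.5 KB
    // MT19937 state, which is known to bias the first draws.
    std::mt19937 weakly_seeded(42u);

    // Sequence-based seeding: std::seed_seq spreads the input over the whole state.
    // The extra words here are arbitrary entropy values.
    std::seed_seq seq{ 0x9E3779B9u, 0x85EBCA6Bu, 0xC2B2AE35u, 42u };
    std::mt19937 better_seeded(seq);

    (void)weakly_seeded;
    (void)better_seeded;
    return 0;
}
```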
On a deeper look, it seems all generators from MKL are 32-bit only, so please ignore the earlier comment.
Philox is a counter-based generator, so parallelizing it and jumping states should be pretty straightforward, without needing to keep sub-engines.
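A rough sketch of what partitioning a counter-based stream could look like with the oneMKL DPC++ RNG API; the header name and exact signatures should be verified against the oneMKL version in use, and the worker loop is only illustrative:

```cpp
#include <sycl/sycl.hpp>
#include <oneapi/mkl/rng.hpp> // oneMKL DPC++ RNG API (header name may differ by version)

#include <cstdint>
#include <vector>

int main() {
    sycl::queue queue;
    constexpr std::int64_t n_per_worker = 1024;
    constexpr int n_workers = 4;

    std::vector<float> result(static_cast<std::size_t>(n_per_worker) * n_workers);
    oneapi::mkl::rng::uniform<float> distribution(0.0f, 1.0f);

    for (int w = 0; w < n_workers; ++w) {
        // Each worker uses the same seed but a disjoint block of the stream:
        // for a counter-based generator this jump is cheap, no sub-engines needed.
        oneapi::mkl::rng::philox4x32x10 engine(queue, /* seed = */ 777);
        oneapi::mkl::rng::skip_ahead(engine, static_cast<std::uint64_t>(w) * n_per_worker);

        sycl::buffer<float> block(result.data() + w * n_per_worker,
                                  sycl::range<1>(n_per_worker));
        oneapi::mkl::rng::generate(distribution, engine, n_per_worker, block);
    }
    return 0;
}
```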
Let's leave the change in defaults for a different PR, then.
@Alexandr-Solovev Are the issues from the CI meant to be solved by the PR on the sklearnex side?
@david-cortes-intel Thanks for the comments!
I will be glad to change it, but based on the testing I temporarily disabled one test, so this needs to be investigated; overall, though, let's do it.
For sure!
Yes, it should be fixed here: uxlfoundation/scikit-learn-intelex#2228. I am waiting for the combined CI results.
/intelci: run
Looks like something went wrong in the examples:
Thanks for highlighting it, it will be fixed soon.
/intelci: run
/intelci: run
/intelci: run
Description
Feature: RNG primitive refactoring
Summary:
This PR updates the oneDAL RNG primitive and includes various fixes and modifications to it.
Key Changes:
New generators have been added (see the usage sketch below):
Host and DPC engines have been refactored and added:
Related sklearnex PR: uxlfoundation/scikit-learn-intelex#2228
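For reference, a hedged sketch of how the new generator would presumably be constructed through the existing DAAL engine factory pattern; the philox4x32x10 namespace is only inferred from the file paths touched by this PR and should be treated as an assumption until the public headers are available:

```cpp
#include "daal.h"

using namespace daal::algorithms;

int main() {
    // Existing engines already follow this factory pattern:
    engines::EnginePtr mt = engines::mt19937::Batch<>::create(/* seed = */ 777);

    // The new generator added by this PR is expected to follow the same pattern
    // (name inferred from cpp/daal/src/algorithms/engines/philox4x32x10/, so this
    // line is an assumption, kept commented out here):
    // engines::EnginePtr philox = engines::philox4x32x10::Batch<>::create(/* seed = */ 777);

    (void)mt;
    return 0;
}
```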
PR completeness and readability
Testing
Performance