The current tests of the algorithm implementations are pretty ad-hoc: I basically took some datasets from the original papers/notebooks, ran them in R/Python, and copied the expected values into our tests. It would be much better if we had a way to generate the test cases and expected results automatically.
We probably don't need to go as far as running the R/Python algorithms on every test run, but we should at least have a script or notebook that generates the expected test results, so we can regenerate them as required.
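As a rough illustration of what I mean, something along these lines could work: a small script that runs the reference implementations over the datasets and dumps the expected values into a fixture file that the tests read. This is only a sketch; the dataset loader, the reference function, and the fixture path below are hypothetical placeholders standing in for the real R/Python reference code and datasets.

```python
"""Regenerate expected test fixtures from the reference implementations.

A minimal sketch, assuming the reference algorithms can be called (or wrapped)
from Python and that the test suite loads expected values from a JSON file.
All names and paths here are placeholders, not the project's actual layout.
"""
import json
from pathlib import Path

FIXTURE_PATH = Path("test/fixtures/expected_results.json")  # hypothetical location


def load_example_dataset():
    # Placeholder for loading a dataset from the original paper/notebook.
    return [1.0, 2.0, 3.0, 4.0]


def reference_algorithm(data):
    # Placeholder for calling the original R/Python reference implementation.
    return [x * 2.0 for x in data]


def generate_fixtures():
    # Each entry pairs a dataset loader with the reference implementation to run on it.
    cases = {
        "example_dataset": (load_example_dataset, reference_algorithm),
    }
    fixtures = {}
    for name, (load, reference) in cases.items():
        fixtures[name] = reference(load())
    FIXTURE_PATH.parent.mkdir(parents=True, exist_ok=True)
    # JSON keeps the fixtures diff-able and easy to read from the test language.
    FIXTURE_PATH.write_text(json.dumps(fixtures, indent=2))
    print(f"Wrote {len(fixtures)} fixture(s) to {FIXTURE_PATH}")


if __name__ == "__main__":
    generate_fixtures()
```

The tests would then compare against the committed fixture file, and whenever an algorithm or dataset changes we just rerun the script instead of hand-copying values again.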