Faster in_top_k implementation for Jax backend #19814
Conversation
Codecov Report: all modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##           master   #19814      +/-   ##
==========================================
- Coverage   78.73%   78.73%   -0.01%
==========================================
  Files         498      498
  Lines       45797    45799       +2
  Branches     8438     8439       +1
==========================================
  Hits        36059    36059
- Misses       8041     8042       +1
- Partials     1697     1698       +1

☔ View full report in Codecov by Sentry.
Thanks for the PR! The tie case seems to be failing, can you take a look? https://github.com/keras-team/keras/actions/runs/9409035606/job/25918197972?pr=19814
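For reference, a small illustrative check of the tie case, assuming the rank-counting approach this PR uses: counting only *strictly* greater scores means a target that ties the k-th value still counts as "in top k", which matches `tf.math.in_top_k`'s documented tie handling. The values below are hypothetical, not from the failing test.

```python
import jax.numpy as jnp

# Hypothetical tie: the target (class 1) shares the top score with class 0.
predictions = jnp.array([[0.9, 0.9, 0.1]])
targets = jnp.array([1])

preds_at_label = jnp.take_along_axis(predictions, targets[:, None], axis=-1)
# Counting only strictly greater scores gives rank 1 for both tied
# classes, so the target is in the top 1 despite the tie.
rank = 1 + jnp.sum(predictions > preds_at_label, axis=-1)
print(rank <= 1)  # [ True]
```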
Thank you for the contribution!
Commits:
- Fix `LayerNormalization.get_config` (keras-team#19807)
- Propagate kwargs through `keras.ops.isclose` (keras-team#19782): expose explicit `rtol`, `atol`, and `equal_nan` args (instead of `**kwargs`) across all backends, with docs; the TensorFlow version now uses code inspired by `tf.experimental.numpy.isclose`.
- Faster in_top_k implementation for Jax backend (keras-team#19814): faster `in_top_k` implementation; fix bug in rank computation.
- Fix CI
- Fix TypeError in `Lambda.from_config` (keras-team#19827)
- Fix `dmtree.is_nested()` and parameterized tree test (keras-team#19822)
- Fix `keras.ops.repeat` not returning the expected shape when `x` is a `KerasTensor` and `axis` is `None` (keras-team#19826): test that dynamic shapes stay dynamic after repetition; improve error messages.
- `Metric.variables` is now recursive (keras-team#19830): it surfaces variables from metrics nested at any depth. Previously, metrics within metrics within metrics would not have their variables tracked in JAX, causing them to not be updated.
- Fix `get_file` when the HTTP response has no `Content-Length` header (keras-team#19833)
- Add `ops.switch` (keras-team#19834): update tests; fix out-of-bound issue; revert `torch.cond`.
- Use `absl.testing.parameterized` for `tree_test.py` (keras-team#19842): for consistency with all other tests and one less dependency; test names now say `optree` or `dmtree`.
- Make batch norm mask shape error more descriptive (keras-team#19829): added shape info to the mask error message to help with debugging.
- Fix code style
- doc: `ops.slice` (keras-team#19843)
- Corrected the example code in unit_normalization.py (keras-team#19845): added the missing closing bracket and the exact output value after replicating the code; adjusted the code example.
- Add `training` argument to `Model.compute_loss()` (keras-team#19840): lets models compute differently during training and evaluation, e.g. skipping expensive-to-compute metrics during training. Backwards compatibility with overrides that lack the `training` argument is maintained.
- Fix the compatibility issues of `Orthogonal` and `GRU` (keras-team#19844): add legacy `Orthogonal` class name; add legacy `implementation` arg to `GRU`.
- Fix inconsistent behavior of `losses.sparse_categorical_crossentropy` with and without `ignore_class` (keras-team#19838), with tests.
- Fix bugs with `Mean`, `Accuracy` and `BinaryAccuracy` metrics (keras-team#19847): `reduce_to_samplewise_values` did not reduce `sample_weights` correctly because it checked the number of dimensions of `values`, and it needs to explicitly broadcast `sample_weights`. Before, the broadcast happened implicitly in the multiplication with `values`, but the explicit broadcast is needed for `num_samples` to be correct for the averaging; this caused a bug when `sample_weights` has rank 2 or more and a broadcast happens during the multiplication. This logic existed in `tf_keras`: https://github.com/keras-team/tf-keras/blob/master/tf_keras/metrics/base_metric.py#L508. Also, `Accuracy` and `BinaryAccuracy` performed the mean reduction too early, before multiplying by `sample_weights`, which matters when `sample_weights` has the same rank as `y_true` and `y_pred`.
- Introduce `DTypePolicyMap`: add tests; update the logic of `default_policy`; improve serialization, `__repr__`, and `__eq__`; update docstrings.
- Add `custom_gradient` for the numpy backend (keras-team#19849)
- Fix variable name when adding it in the init function (keras-team#19853)
- Address comments
Microbenchmarks are here [1]. The new implementation appears to be faster on CPU, GPU, and TPU. When `in_top_k` is called multiple times with different values of `k`, XLA is also sometimes able to deduplicate all of the computation that precedes the point where `k` is referenced. This yields nice gains for retrieval applications that track multiple top-k metrics.
[1] https://gist.github.com/Hilly12/85460873d9786924159f2377f320df48
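For context, a minimal sketch of the rank-counting idea (an illustration of the approach, not necessarily the exact merged code): rather than materializing the top-k values with `jax.lax.top_k` and searching them, compute each target's rank as one plus the number of strictly greater predictions, and only compare against `k` at the end. Because `k` enters only in that final comparison, XLA can share everything before it across calls with different `k` values.

```python
import jax.numpy as jnp

def in_top_k(targets, predictions, k):
    # Score of the labeled class for each sample, shape (batch, 1).
    preds_at_label = jnp.take_along_axis(
        predictions, targets[:, None], axis=-1
    )
    # Treat NaN scores as -inf so they never rank highly.
    preds_at_label = jnp.where(
        jnp.isnan(preds_at_label), -jnp.inf, preds_at_label
    )
    # Rank = 1 + number of strictly greater scores; ties with the
    # k-th value therefore still count as "in top k".
    rank = 1 + jnp.sum(predictions > preds_at_label, axis=-1)
    # `k` is only referenced here, in the final comparison.
    return rank <= k
```

This needs no sort and no top-k selection, just O(n) comparisons per row, which is consistent with the speedups the microbenchmarks show.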