Porting TF fake_quant_with_min_max functions #20641
base: master
Conversation
* adds fake_quant_with_min_max functions from TF to keras3
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Codecov Report
Attention: Patch coverage is

Additional details and impacted files

@@           Coverage Diff           @@
##           master   #20641   +/-  ##
=======================================
  Coverage   81.95%   81.96%
=======================================
  Files         553      553
  Lines       51458    51524   +66
  Branches     7961     7964    +3
=======================================
+ Hits        42174    42233   +59
- Misses       7346     7352    +6
- Partials     1938     1939    +1

Flags with carried forward coverage won't be shown.
Thanks for the PR!
Hi @doncarlos999

I have left some comments. Additionally, I think we still need fake_quant_with_min_max_vars, as it is used in TFMOT:
https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/python/core/quantization/keras/quant_ops.py#L340
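For context, the practical difference between the two TF ops is that the `_args` variant takes `min`/`max` as fixed op attributes, while the `_vars` variant takes them as tensors, which is why TFMOT (which learns the range during training) needs the `_vars` form. A minimal sketch using the existing TF ops (the sample values are illustrative):

```python
import numpy as np
import tensorflow as tf

x = tf.constant([-0.3, 0.0, 0.4, 1.3])

# fake_quant_with_min_max_args: min/max are fixed op attributes.
y_args = tf.quantization.fake_quant_with_min_max_args(x, min=0.0, max=1.0)

# fake_quant_with_min_max_vars: min/max are tensors, so they can be
# tf.Variables that get updated during quantization-aware training,
# which is how TFMOT's quant_ops.py uses this op.
min_var = tf.Variable(0.0)
max_var = tf.Variable(1.0)
y_vars = tf.quantization.fake_quant_with_min_max_vars(x, min_var, max_var)

# For identical ranges, the two ops produce identical outputs.
```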
@@ -100,3 +100,759 @@ def test_quantize_and_dequantize(self):
        )
        # A loose assertion due to an expected quantization error
        self.assertAllClose(qdq_values, values, atol=5e-1)

    def _TestOp(
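The loose `atol` above reflects the rounding error inherent in fake quantization. As a point of reference (a hedged sketch, not code from this PR), the nudge-and-round behaviour these ops implement can be written in NumPy, assuming TF's documented 8-bit defaults:

```python
import numpy as np


def fake_quant_with_min_max_sketch(x, min_val, max_val, num_bits=8,
                                   narrow_range=False):
    """Simulate quantization to num_bits, then dequantize back to float.

    Mirrors the nudging behaviour of TF's fake_quant ops: the [min, max]
    range is adjusted so that 0.0 lands exactly on a quantization grid point.
    """
    quant_min = 1 if narrow_range else 0
    quant_max = (1 << num_bits) - 1
    scale = (max_val - min_val) / (quant_max - quant_min)
    # Nudge the zero point to an integer so 0.0 is exactly representable.
    zero_point_from_min = quant_min - min_val / scale
    nudged_zero_point = int(
        np.clip(np.round(zero_point_from_min), quant_min, quant_max)
    )
    nudged_min = (quant_min - nudged_zero_point) * scale
    nudged_max = (quant_max - nudged_zero_point) * scale
    # Clamp to the nudged range, then round to the nearest grid point.
    clamped = np.clip(x, nudged_min, nudged_max)
    return np.round((clamped - nudged_min) / scale) * scale + nudged_min
```

The maximum round-trip error per element is half a quantization step (`scale / 2`), which is what the loose test tolerance accounts for.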
We can use @parameterized.named_parameters and named_product to organize similar tests like this one:
https://github.com/keras-team/keras/blob/master/keras/src/ops/nn_test.py#L2355-L2365
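A minimal sketch of that pattern using absl's parameterized helpers — the parameter grid here is illustrative, not this PR's actual test matrix:

```python
import unittest

from absl.testing import parameterized


class FakeQuantOpsTest(parameterized.TestCase):
    # Each tuple's first element names the case, so the parameter
    # combination shows up in the test id when a case fails.
    @parameterized.named_parameters(
        ("8bit_wide", 8, False),
        ("8bit_narrow", 8, True),
        ("4bit_wide", 4, False),
    )
    def test_quant_range(self, num_bits, narrow_range):
        quant_min = 1 if narrow_range else 0
        quant_max = (1 << num_bits) - 1
        self.assertLess(quant_min, quant_max)


# Run the generated test methods (normally the test runner does this).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FakeQuantOpsTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The decorator expands into one test method per named tuple, so a single test body covers the whole grid without duplicated code.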
I have reduced the number of tests by merging the logic into a single function, but if you still think it would help to use named_parameters to organize the tests, I can add it.
@james77777778 thank you for the review. I'm working on revisions now. Regarding the
We can test the gradients of
You can refer to this test for an example:

Using a different function, separate from the user-facing function, for testing purposes seems redundant and fragile to me. However, we should wait for a call from @fchollet.
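For reference, the gradient behaviour under test follows a straight-through estimator: TF's registered gradient for these ops passes the incoming gradient through for inputs inside the (nudged) [min, max] range and zeroes it for clamped inputs. A NumPy sketch of the expected input-gradient mask (a hypothetical helper, not code from this PR):

```python
import numpy as np


def expected_fake_quant_input_grad(x, min_val, max_val):
    """Straight-through estimator mask: 1 inside [min, max], 0 outside."""
    return ((x >= min_val) & (x <= max_val)).astype(np.float32)


x = np.array([-1.5, 0.25, 0.75, 2.0], dtype=np.float32)
mask = expected_fake_quant_input_grad(x, 0.0, 1.0)
# Clamped inputs (-1.5 and 2.0) receive zero gradient.
```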
I agree that having two separate functions is fragile; I simply kept the functions separate because that was how they were tested in the TensorFlow repo.
…nt_with_min_max_vars function
Refactor to use backend specific gradient functions in tests and merges logic into single function
@james77777778 I have addressed your previous comments.
Thanks for the updates! @james77777778 should we merge?
Based on the discussion here: #20319, I started porting the fake_quant_with_min_max functions from TensorFlow to Keras 3. This PR contains those ported functions and the relevant tests from https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/tests/fake_quant_ops_test.py.
I didn't implement tf.quantization.fake_quant_with_min_max_vars as it looks the same as tf.quantization.fake_quant_with_min_max_args, but I can add this one too if required.

For the CLA, I am waiting on our CTO to add me to the Edge Impulse <-> Google CLA, but I figured that I can work on revisions to the PR in the meantime.

CC: @matpalm, @dansitu, @james77777778