Add some basic xnnpack recipes #10035
base: gh/tarun292/5/base
Conversation
Differential Revision: [D72085170](https://our.internmc.facebook.com/intern/diff/D72085170/) [ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/10035.
Note: Links to docs will display an error until the docs builds have been completed.
❌ 62 New Failures, 11 Unrelated Failures as of commit 910ac24 with merge base 1facfa9. The unrelated failures were likely due to flakiness present on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D72085170
This PR needs a `release notes:` label.
```python
    is_per_channel=True, is_dynamic=True
)
quantizer.set_global(operator_config)
DuplicateDynamicQuantChainPass
```
I've actually done some work to remove the need for this. I guess it's OK to have for now, as it should still work.
Let's remove it if not needed?
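For context, a minimal sketch of the dynamic-quant recipe this thread is about; the import paths and `ExportRecipe` fields are assumptions pieced together from the hunks quoted in this review, not the PR's exact code:

```python
# Sketch only: import paths and ExportRecipe fields are assumptions
# based on the hunks quoted in this review.
from executorch.backends.xnnpack.partition.xnnpack_partitioner import (
    XnnpackPartitioner,
)
from executorch.backends.xnnpack.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)
from executorch.exir import ExportRecipe


def get_dynamic_quant_recipe() -> ExportRecipe:
    # Per-channel dynamic quantization, as in the hunk above.
    quantizer = XNNPACKQuantizer()
    operator_config = get_symmetric_quantization_config(
        is_per_channel=True, is_dynamic=True
    )
    quantizer.set_global(operator_config)
    return ExportRecipe(
        name="dynamic_quant",
        quantizer=quantizer,
        partitioners=[XnnpackPartitioner()],
        # DuplicateDynamicQuantChainPass would be wired in here; per the
        # thread above it may no longer be needed and could be dropped.
    )
```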
```python
)


def get_dynamic_quant_recipe() -> ExportRecipe:
```
Organizationally, maybe the quant recipes can live in a separate folder?
Looks good. Stamping, left some comments.
```python
)
from executorch.exir import ExportRecipe


def get_generic_fp32_cpu_recipe() -> ExportRecipe:
```
If these are namespaced to XNNPACK, then "cpu" may not be needed?
Suggested change:

```diff
-def get_generic_fp32_cpu_recipe() -> ExportRecipe:
+def get_fp32_recipe() -> ExportRecipe:
```
"FP32_CPU_ACCELERATED_RECIPE": get_generic_fp32_cpu_recipe, | ||
"DYNAMIC_QUANT_CPU_ACCELERATED_RECIPE": get_dynamic_quant_recipe, |
nit
"FP32_CPU_ACCELERATED_RECIPE": get_generic_fp32_cpu_recipe, | |
"DYNAMIC_QUANT_CPU_ACCELERATED_RECIPE": get_dynamic_quant_recipe, | |
"FP32_RECIPE": get_fp32_recipe, | |
"DYNAMIC_QUANT_RECIPE": get_dynamic_quant_recipe, |
name = "fp32_recipe", | ||
quantizer = None, | ||
partitioners=[XnnpackPartitioner()], | ||
|
```python
quantizer = XNNPACKQuantizer()
operator_config = get_symmetric_quantization_config(is_per_channel=False)
quantizer.set_global(operator_config)
```
Suggested change (remove these lines):

```diff
-quantizer = XNNPACKQuantizer()
-operator_config = get_symmetric_quantization_config(is_per_channel=False)
-quantizer.set_global(operator_config)
```
```python
operator_config = get_symmetric_quantization_config(is_per_channel=False)
quantizer.set_global(operator_config)
return ExportRecipe(
    name = "fp32_recipe",
```
nit
name = "fp32_recipe", | |
name = "fp32", |
```python
def setUp(self) -> None:
    super().setUp()


def tearDown(self) -> None:
    super().tearDown()
```
Do we need these?
```python
    example_inputs=example_inputs,
    export_recipe=get_xnnpack_recipe("FP32_CPU_ACCELERATED_RECIPE")
)
export_session.export()
```
Will this raise? Should we catch it?
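If `export()` can raise, one option is to make the failure mode explicit in the test. A sketch assuming `unittest`; the `export(...)` entry point and its keyword arguments are taken from the hunk above, while the model and inputs here are made up for illustration:

```python
import unittest

import torch


class TestXnnpackRecipes(unittest.TestCase):
    def test_fp32_recipe(self) -> None:
        # Hypothetical toy model/inputs; the PR's test presumably defines
        # its own model and example_inputs.
        model = torch.nn.Linear(4, 4).eval()
        example_inputs = (torch.randn(1, 4),)
        export_session = export(
            model=model,
            example_inputs=example_inputs,
            export_recipe=get_xnnpack_recipe("FP32_CPU_ACCELERATED_RECIPE"),
        )
        try:
            export_session.export()
        except Exception as e:
            # Turn an unhandled traceback into a readable test failure.
            self.fail(f"export() raised unexpectedly: {e}")
```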
Stack from ghstack (oldest at bottom):
Differential Revision: D72085170