Hello,
When using aimet_torch, I noticed that in the sim.export function, the code sets the option qdq_weights=False. Could you explain the reason for this? Specifically, in aimet_torch/_base/quantsim.py, inside the export method of the class _QuantizationSimModelBase, the following line appears:
```python
# Create a version of the model without any quantization ops
model_to_export = self.get_original_model(self.model, qdq_weights=False)
```
This seems to imply that the weights stored in the exported ONNX file might differ from the original model’s weights.
In our case, we would like the exported weights to remain identical to the original model’s weights. Would it be acceptable to simply change this to qdq_weights=True before export?
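For context on why the flag matters: quantize-dequantize (QDQ) of weights generally does not round-trip exactly, so weights exported with qdq_weights=True would carry quantization rounding error relative to the original floating-point values. A minimal NumPy sketch of symmetric per-tensor fake quantization (not AIMET's actual implementation, just an illustration of the effect):

```python
import numpy as np

def qdq(w, num_bits=8):
    # Symmetric per-tensor fake quantization: quantize to signed
    # integers, then dequantize back to float (hypothetical helper,
    # illustrating the general QDQ effect).
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return (q * scale).astype(w.dtype)

w = np.array([0.1234, -0.5678, 0.9012], dtype=np.float32)
w_qdq = qdq(w)

# w_qdq is close to w but not bit-identical: rounding to the integer
# grid introduces a small error in most elements.
print(w, w_qdq)
```

If the goal is exported weights bit-identical to the original model, that would suggest keeping qdq_weights=False rather than switching to True, which is why the reason behind the default matters here.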
Thanks in advance!