I would like to inquire about the possibility of adding bf16 precision support for the TensorRT backend.
As you may know, some ONNX models do not work correctly in fp16, often producing black or heavily artifacted images. In contrast, many of the newer super-resolution ONNX models are designed to run in bf16, which is significantly faster than fp32.
Could you please consider adding support for bf16? I believe this would be a valuable feature and hopefully straightforward to implement.
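For reference, here is a minimal sketch of how bf16 might be enabled when building an engine with the TensorRT Python API. This assumes TensorRT 9.0 or newer, where `trt.BuilderFlag.BF16` is available; the file names `model.onnx` and `model_bf16.engine` are placeholders.

```python
import tensorrt as trt

# Minimal sketch (not this project's actual code path):
# build a TensorRT engine from an ONNX model with bf16 kernels allowed.
# Assumes TensorRT >= 9.0, which provides trt.BuilderFlag.BF16.

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit batch (default in recent TensorRT)
parser = trt.OnnxParser(network, logger)

# "model.onnx" is a placeholder path for the super-resolution model.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.BF16)  # allow bf16 kernels alongside fp32

serialized_engine = builder.build_serialized_network(network, config)
with open("model_bf16.engine", "wb") as f:
    f.write(serialized_engine)
```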
Thank you very much in advance.