Docs: Add precision support reference page #2111
base: develop
Conversation
Nicely done.
.. _precision-support:

********************************
Precision Support
********************************

Suggested change: ``Precision Support`` → ``Precision support``
Tensile supports a rich variety of data types for matrix multiplication operations, enabling optimized performance
across different precision requirements. This document outlines the supported data types and precision formats
used in Tensile's GEMM implementations.

Suggested change: ``This document outlines`` → ``This topic outlines``
Data Types

Suggested change: ``Data Types`` → ``Data types``
   * - ``__hip_fp8_e5m2`` / ``__hip_fp8_e5m2_fnuz``
     - 8-bit
     - | Brain float8 format with 5 exponent bits, 2 mantissa bits, and 1 sign bit. Provides greater dynamic range than
       | F8 at the cost of reduced precision.

Suggested change: drop the stray line-block marker, i.e. ``| F8 at the cost of reduced precision.`` → ``F8 at the cost of reduced precision.``
   * - X
     - N/A
     - 32-bit
     - | Tensorfloat equivalent with custom bit distribution for enhanced precision in specific computation patterns

Suggested change: ``Tensorfloat equivalent with custom bit distribution for enhanced precision in specific computation patterns`` → ``Tensorfloat equivalent to custom bit distribution. Used for enhanced precision in specific computation patterns``
   # SGEMM
   - {M: 5504, N: 5504, K: 5504, transposeA: false, transposeB: true, dataType: S}

**Half-Precision with Single-Precision Accumulation**

Suggested change: ``**Half-Precision with Single-Precision Accumulation**`` → ``**Half-precision with single-precision accumulation**``

   # GEMM_EX (HHS)
   - {M: 5504, N: 5504, K: 5504, transposeA: false, transposeB: true, dataType: H, destDataType: H, computeDataType: S}
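As a rough sketch of what the HHS triple means (``dataType: H`` inputs, ``destDataType: H`` output, ``computeDataType: S`` accumulation), the NumPy snippet below accumulates in float32 and rounds the result back to float16. This is illustrative only, not Tensile's kernel code, and the transpose flags are ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 4, 4, 64
A = rng.standard_normal((m, k)).astype(np.float16)  # half-precision inputs
B = rng.standard_normal((k, n)).astype(np.float16)

# Accumulate each dot product in float32 (computeDataType: S)...
C_acc = A.astype(np.float32) @ B.astype(np.float32)
# ...then round the result back to float16 (destDataType: H).
C = C_acc.astype(np.float16)
```

Accumulating in single precision avoids the error growth that summing many float16 products would cause, which is why HHS is preferred over pure half-precision GEMM for large K.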
**BFloat16 Input with Float32 Output**

Suggested change: ``**BFloat16 Input with Float32 Output**`` → ``**BFloat16 input with Float32 output**``

   # GEMM_EX (BSS)
   - {M: 4096, N: 4096, K: 4096, transposeA: false, transposeB: true, dataType: B, destDataType: S, computeDataType: S}
**8-bit Integer Operations**

Suggested change: ``**8-bit Integer Operations**`` → ``**8-bit integer operations**``

   # GEMM_EX (I8II)
   - {M: 4096, N: 4096, K: 4096, transposeA: false, transposeB: true, dataType: I8, destDataType: I, computeDataType: I}
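Similarly, the I8II triple (int8 inputs, int32 output, int32 accumulation) can be mimicked in NumPy by widening before the multiply; again an illustrative sketch under those assumptions, not Tensile code.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-128, 128, size=(4, 8), dtype=np.int8)  # int8 inputs
B = rng.integers(-128, 128, size=(8, 4), dtype=np.int8)

# Widen to int32 before multiplying so the int8*int8 products and their
# sums cannot overflow (computeDataType: I, destDataType: I).
C = A.astype(np.int32) @ B.astype(np.int32)
```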
**Mixed F8/B8 Input with Half Precision Output**

Suggested change: ``**Mixed F8/B8 Input with Half Precision Output**`` → ``**Mixed F8/B8 input with half precision output**``

   # GEMM_EX
   - {M: 5504, N: 5504, K: 5504, transposeA: false, transposeB: true, dataType: F8B8, destDataType: H, computeDataType: S}
Library Logic File Naming

Suggested change: ``Library Logic File Naming`` → ``Library logic file naming``
     - S
     - Matrix A is bfloat8, Matrix B is float8, with half precision output
Configuration in Tensile

Comment: Is this heading apt? How about "Data types in configuration files"?
resolves #___
Summary:
Add a precision support reference page that details the supported data types in Tensile.
Outcomes:
Only affects the documentation side of this project.
Notable changes:
Addition of one RST file, and modification of two files to add a link.