Congma/ck tile/preshuffle b #3645
base: develop
Conversation
Pull request overview
This PR refactors how K_Warp_Tile is chosen for preshuffle_b to better align per-lane global-memory load sizes with MFMA K requirements, aiming to improve preshuffle_b performance across architectures.
Changes:
- Added `get_k_warp_tile_for_preshuffle_b` and updated multiple configs (tests/examples) to use it for preshuffle-B kernels.
- Simplified B-shuffle host reference layouts and adjusted warp-lane factoring in `tensor_shuffle_utils.hpp`.
- Updated WP pipeline policy KB-per-load computation.
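The sizing idea behind the new helper can be sketched as follows. This is a hypothetical, standalone approximation of the logic described in this PR (warp size, the 16-byte buffer-load limit, and the `mfma_min_k` parameter are illustrative inputs, not the exact ck_tile tables): each lane loads the buffer-load maximum of 16 bytes, and the resulting per-warp K is clamped up to the minimum K an MFMA instruction requires.

```cpp
#include <algorithm>

// Simplified sketch of K_Warp_Tile selection for preshuffle-B (not the real
// ck_tile implementation). warp_size = 64 models a CDNA wave64.
constexpr int kWarpSize = 64;

constexpr int k_warp_tile_sketch(int element_bytes, int n_warp_tile, int mfma_min_k)
{
    const int max_elems_per_load = 16 / element_bytes;      // buffer_load max 16 bytes
    const int k_lanes_per_warp   = kWarpSize / n_warp_tile; // lanes stacked along K
    const int k_per_warp         = max_elems_per_load * k_lanes_per_warp;
    return std::max(k_per_warp, mfma_min_k);                // never below MFMA's K
}
```

For example, fp8 (1 byte) with a 16-wide N warp tile gives 16 elements x 4 K-lanes = 64, while fp16 (2 bytes) with a 32-wide N warp tile gives 8 x 2 = 16 and may be clamped by the MFMA minimum.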
Reviewed changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| test/ck_tile/grouped_gemm_preshuffle/test_grouped_gemm_preshuffle_util.hpp | Switches grouped preshuffle tests to the new get_k_warp_tile_for_preshuffle_b helper. |
| test/ck_tile/gemm_weight_preshuffle/test_gemm_pipeline_util.hpp | Updates test configs to compute K_Warp_Tile via helper logic and adds needed include. |
| test/ck_tile/gemm_multi_abd/test_gemm_multi_abd_util.hpp | Removes local get_k_warp_tile helper and relies on shared header. |
| test/ck_tile/gemm_block_scale/test_gemm_quant_fixtures.hpp | Removes duplicated K_Warp_Tile derivation helpers and centralizes on shared header. |
| test/ck_tile/gemm_block_scale/test_gemm_quant_base.hpp | Derives K_Warp_Tile dynamically based on whether preshuffle-B is enabled. |
| include/ck_tile/ops/gemm/pipeline/wp_pipeline_agmem_bgmem_creg_base_policy.hpp | Alters how KB-per-load is computed for the weight preshuffle pipeline policy. |
| include/ck_tile/ops/gemm/pipeline/tile_gemm_shape.hpp | Introduces get_k_warp_tile_for_preshuffle_b. |
| include/ck_tile/host/tensor_shuffle_utils.hpp | Updates host reference shuffling for B to use warp-lane factoring (k-lane-per-warp). |
| example/ck_tile/38_block_scale_gemm/gemm_utils.hpp | Updates example configs to use get_k_warp_tile_for_preshuffle_b. |
| example/ck_tile/17_grouped_gemm/quant_grouped_gemm_config.hpp | Updates grouped GEMM quant config to use new preshuffle-B K sizing. |
| example/ck_tile/17_grouped_gemm/grouped_gemm.hpp | Updates grouped GEMM preshuffle configs (incl. WMMA variant) to use new helper. |
| example/ck_tile/03_gemm/gemm_weight_preshuffle.cpp | Prints CLI help on argument-parse failure. |
| example/ck_tile/03_gemm/gemm_utils.hpp | Updates GEMM preshuffle configs to use get_k_warp_tile_for_preshuffle_b. |
```cpp
#include "ck_tile/ops/gemm.hpp"
#include "ck_tile/ops/gemm/kernel/gemm_multi_abd_kernel.hpp"
#include "ck_tile/ops/elementwise/unary_element_wise_operation.hpp"
#include "ck_tile/ops/gemm/pipeline/tile_gemm_shape.hpp"
```
Copilot AI, Jan 24, 2026
This file removes the local `get_k_warp_tile()` helper, but later still calls `get_k_warp_tile<ADataType, N_Warp_Tile>()` unqualified under `CK_TILE_USE_WMMA` (around line ~120). That will no longer resolve and will fail to compile. Qualify the call as `ck_tile::get_k_warp_tile<...>()` (or add a `using ck_tile::get_k_warp_tile;`).
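The lookup problem and both fixes the comment proposes can be shown in a minimal sketch. The placeholder body below stands in for the real ck_tile helper; only the name-lookup mechanics are the point.

```cpp
// Stand-in for the shared header's helper (illustrative return value only).
namespace ck_tile {
template <typename T, int NWarpTile>
constexpr int get_k_warp_tile() { return 64; }
} // namespace ck_tile

// With the file-local helper deleted, an unqualified call
//     get_k_warp_tile<ADataType, N_Warp_Tile>()   // error: name not found
// must either be qualified as ck_tile::get_k_warp_tile<...>() or be
// re-introduced into this scope with a using-declaration:
using ck_tile::get_k_warp_tile;

constexpr int k_tile = get_k_warp_tile<float, 16>(); // now resolves
```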
```diff
 static const ck_tile::index_t N_Warp_Tile = 16;
 static const ck_tile::index_t K_Warp_Tile =
-    ck_tile::get_k_warp_tile<BDataType, M_Warp_Tile, true>();
+    ck_tile::get_k_warp_tile_for_preshuffle_b<PrecType, N_Warp_Tile>();
```
Copilot AI, Jan 24, 2026
`K_Warp_Tile` is computed using `ck_tile::get_k_warp_tile_for_preshuffle_b<PrecType, N_Warp_Tile>()`, but `PrecType` is not defined in this class (only `ADataType`, `BDataType`, etc. exist). This will not compile. Use `BDataType` (or `typename Tuple::...` if that's what you intended) as the template argument.
Suggested change:
```diff
-    ck_tile::get_k_warp_tile_for_preshuffle_b<PrecType, N_Warp_Tile>();
+    ck_tile::get_k_warp_tile_for_preshuffle_b<BDataType, N_Warp_Tile>();
```
```cpp
const int kMaxBytesPerLoad    = 16; // buffer load max 16 bytes
const int kMaxElementsPerLoad = kMaxBytesPerLoad / sizeof(PrecType);
const int kKLanePerWarp       = ck_tile::get_warp_size() / N_Warp_Tile;
const int kKPerWarp           = kMaxElementsPerLoad * kKLanePerWarp;

// Minimum K_Warp_Tile required by MFMA instructions
const index_t kMfmaN16Index = 0;
const index_t kMfmaN32Index = 1;
#if defined(CK_GFX950_SUPPORT)
const index_t kF8MfmaMaxK[2]  = {128, 64};
const index_t kF16MfmaMaxK[2] = {32, 16};
#else
const index_t kF8MfmaMaxK[2]  = {32, 16};
const index_t kF16MfmaMaxK[2] = {16, 8};
#endif
const bool kIsF8 = std::is_same_v<PrecType, fp8_t> || std::is_same_v<PrecType, bf8_t>;
const index_t kMfmaIndex = N_Warp_Tile == 16 ? kMfmaN16Index : kMfmaN32Index;
const index_t kMfmaMaxK  = kIsF8 ? kF8MfmaMaxK[kMfmaIndex] : kF16MfmaMaxK[kMfmaIndex];

return max(kKPerWarp, kMfmaMaxK);
#endif
```
Copilot AI, Jan 24, 2026
`get_k_warp_tile_for_preshuffle_b` computes `kKPerWarp` as `int` and then calls `max(kKPerWarp, kMfmaMaxK)`. In this codebase `ck_tile::max(T, T)` requires both arguments to be the same type; mixing `int` and `index_t` will fail to compile (template recursion ends up with no viable `max(int, index_t)` overload). Make the intermediate constants `constexpr index_t` (or cast `kKPerWarp` to `index_t`) so the final `max` call is between identical types.
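The fix the comment asks for amounts to keeping every intermediate in one type so a strict same-type `max` deduces cleanly. A minimal sketch, where `index_t` and `max2` are stand-ins for `ck_tile::index_t` and `ck_tile::max` (assumed, not the real definitions):

```cpp
#include <cstdint>

using index_t = std::int32_t; // stand-in for ck_tile::index_t

// Models a max that requires identical argument types: max2(int, index_t)
// would fail to deduce a single T, just as the review comment describes.
template <typename T>
constexpr T max2(T a, T b) { return a > b ? a : b; }

constexpr index_t pick_k_warp_tile(index_t k_per_warp, index_t mfma_max_k)
{
    // Both arguments are index_t, so deduction succeeds.
    return max2(k_per_warp, mfma_max_k);
}
```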
```cpp
constexpr index_t k_b_per_load =
    TileShape::WarpTile::at(I1) * TileShape::WarpTile::at(I2) / get_warp_size();

return k_b_per_load;
```
Copilot AI, Jan 24, 2026
`GetKBPerLoad()` was simplified to `WarpTile::N * WarpTile::K / get_warp_size()`, but `MakeBFlatDramTileDistribution()` still sets `KRepeatInWave = 2` on `__gfx11__` and asserts `TileShape::flatKPerWarp == KThdPerWave * KBPerLoad`. With the new formula, that `static_assert` will fail on gfx11 (it effectively becomes `N*K == (warp_size/2) * (N*K/warp_size)`). Either fold `KRepeatInWave` (gfx11) into the KB-per-load calculation, or keep the previous scaling logic so the distribution invariants remain valid.
Suggested change:
```diff
-    constexpr index_t k_b_per_load =
-        TileShape::WarpTile::at(I1) * TileShape::WarpTile::at(I2) / get_warp_size();
-    return k_b_per_load;
+    constexpr index_t base_k_b_per_load =
+        TileShape::WarpTile::at(I1) * TileShape::WarpTile::at(I2) / get_warp_size();
+#if defined(__gfx11__)
+    // On gfx11, MakeBFlatDramTileDistribution() uses KRepeatInWave = 2 and asserts
+    // TileShape::flatKPerWarp == KThdPerWave * KBPerLoad. To keep this invariant valid,
+    // fold KRepeatInWave into KBPerLoad here.
+    return base_k_b_per_load * 2;
+#else
+    return base_k_b_per_load;
+#endif
```
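The invariant at issue can be checked numerically. This sketch uses illustrative tile sizes and takes the gfx11 details (wave32, `KRepeatInWave = 2`, the `flatKPerWarp == KThdPerWave * KBPerLoad` assert) from the review comment above, not from the real headers:

```cpp
constexpr int kWaveSize      = 32; // gfx11 wave32
constexpr int kKRepeatInWave = 2;  // per the review comment

// Returns true when flatKPerWarp == KThdPerWave * KBPerLoad holds, with
// KBPerLoad computed via the simplified formula, optionally folding in
// KRepeatInWave as the suggestion above does.
constexpr bool invariant_holds(int n_warp_tile, int k_warp_tile, bool fold_repeat)
{
    const int flat_k_per_warp = n_warp_tile * k_warp_tile;
    const int k_thd_per_wave  = kWaveSize / kKRepeatInWave;
    int kb_per_load           = n_warp_tile * k_warp_tile / kWaveSize;
    if(fold_repeat)
        kb_per_load *= kKRepeatInWave;
    return flat_k_per_warp == k_thd_per_wave * kb_per_load;
}
```

With, say, N = 16 and K = 32, the unfolded formula yields 16 * 16 = 256 against a flatKPerWarp of 512, so the assert fires; folding the repeat factor restores equality.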
This PR will improve the performance of the `preshuffle_b` kernels by using `get_k_warp_tile_for_preshuffle_b` to compute the per-lane load size.