
Conversation

@CongMa13 (Collaborator)

This PR improves the performance of the preshuffle_b kernels:

  • Introduces a constraint on the per-thread load size along the K dimension for global-memory loads.
  • Each thread now loads either:
    • 16 bytes (a single dwordx4 instruction), or
    • exactly the K required by the MFMA instruction, when 16 bytes is insufficient.
  • In the 16-byte mode, the data from one dwordx4 load can be consumed by one or more MFMA instructions.
  • In the MFMA-K mode, multiple dwordx4 loads may be consumed by a single MFMA instruction (e.g., f8_16x16x128 on gfx950).
  • Both modes are tuned to deliver the best performance; a worked example follows this list.
  • Adds a helper function get_k_warp_tile_for_preshuffle_b to compute the per-lane load size.
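To make the two modes concrete, here is a minimal standalone sketch of the selection arithmetic, assuming a 64-lane wavefront and the MFMA K limits quoted in the review comment further down. The names are illustrative, not the actual ck_tile helper:

#include <cstdio>

// Hedged sketch of the mode-selection arithmetic, assuming a 64-lane wavefront.
// The real helper is ck_tile::get_k_warp_tile_for_preshuffle_b in tile_gemm_shape.hpp.
constexpr int kWarpSize = 64;

constexpr int k_warp_tile(int elem_bytes, int n_warp_tile, int mfma_max_k)
{
    const int max_elems_per_load = 16 / elem_bytes;         // one dwordx4 load = 16 bytes
    const int k_lane_per_warp    = kWarpSize / n_warp_tile; // lanes stacked along K
    const int k_per_warp         = max_elems_per_load * k_lane_per_warp;
    return k_per_warp > mfma_max_k ? k_per_warp : mfma_max_k;
}

int main()
{
    // f16, N_Warp_Tile = 16, pre-gfx950 MFMA needs K = 16:
    // 8 elems/load * 4 K-lanes = 32 >= 16 -> 16-byte mode; one load spans two MFMAs.
    std::printf("f16 K_Warp_Tile = %d\n", k_warp_tile(/*bytes*/ 2, 16, 16));  // 32

    // f8, N_Warp_Tile = 16, gfx950 f8_16x16x128 needs K = 128:
    // 16 elems/load * 4 K-lanes = 64 < 128 -> MFMA-K mode; two loads feed one MFMA.
    std::printf("f8  K_Warp_Tile = %d\n", k_warp_tile(/*bytes*/ 1, 16, 128)); // 128
    return 0;
}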

Copilot AI (Contributor) left a comment


Pull request overview

This PR refactors how K_Warp_Tile is chosen for preshuffle_b to better align per-lane global-memory load sizes with MFMA K requirements, aiming to improve preshuffle_b performance across architectures.

Changes:

  • Added get_k_warp_tile_for_preshuffle_b and updated multiple configs (tests/examples) to use it for preshuffle-B kernels.
  • Simplified B-shuffle host reference layouts and adjusted warp-lane factoring in tensor_shuffle_utils.hpp.
  • Updated the weight-preshuffle (WP) pipeline policy's KB-per-load computation.

Reviewed changes

Copilot reviewed 13 out of 13 changed files in this pull request and generated 4 comments.

Summary per file:

  • test/ck_tile/grouped_gemm_preshuffle/test_grouped_gemm_preshuffle_util.hpp: Switches grouped preshuffle tests to the new get_k_warp_tile_for_preshuffle_b helper.
  • test/ck_tile/gemm_weight_preshuffle/test_gemm_pipeline_util.hpp: Updates test configs to compute K_Warp_Tile via the helper logic and adds the needed include.
  • test/ck_tile/gemm_multi_abd/test_gemm_multi_abd_util.hpp: Removes the local get_k_warp_tile helper and relies on the shared header.
  • test/ck_tile/gemm_block_scale/test_gemm_quant_fixtures.hpp: Removes duplicated K_Warp_Tile derivation helpers and centralizes on the shared header.
  • test/ck_tile/gemm_block_scale/test_gemm_quant_base.hpp: Derives K_Warp_Tile dynamically based on whether preshuffle-B is enabled.
  • include/ck_tile/ops/gemm/pipeline/wp_pipeline_agmem_bgmem_creg_base_policy.hpp: Alters how KB-per-load is computed for the weight preshuffle pipeline policy.
  • include/ck_tile/ops/gemm/pipeline/tile_gemm_shape.hpp: Introduces get_k_warp_tile_for_preshuffle_b.
  • include/ck_tile/host/tensor_shuffle_utils.hpp: Updates host reference shuffling for B to use warp-lane factoring (k-lane-per-warp).
  • example/ck_tile/38_block_scale_gemm/gemm_utils.hpp: Updates example configs to use get_k_warp_tile_for_preshuffle_b.
  • example/ck_tile/17_grouped_gemm/quant_grouped_gemm_config.hpp: Updates the grouped GEMM quant config to use the new preshuffle-B K sizing.
  • example/ck_tile/17_grouped_gemm/grouped_gemm.hpp: Updates grouped GEMM preshuffle configs (incl. the WMMA variant) to use the new helper.
  • example/ck_tile/03_gemm/gemm_weight_preshuffle.cpp: Prints CLI help on argument-parse failure.
  • example/ck_tile/03_gemm/gemm_utils.hpp: Updates GEMM preshuffle configs to use get_k_warp_tile_for_preshuffle_b.


Comment on lines 11 to 15
#include "ck_tile/ops/gemm.hpp"
#include "ck_tile/ops/gemm/kernel/gemm_multi_abd_kernel.hpp"
#include "ck_tile/ops/elementwise/unary_element_wise_operation.hpp"
#include "ck_tile/ops/gemm/pipeline/tile_gemm_shape.hpp"

Copilot AI commented on Jan 24, 2026


This file removes the local get_k_warp_tile() helper, but later still calls get_k_warp_tile<ADataType, N_Warp_Tile>() unqualified under CK_TILE_USE_WMMA (around line 120). That call will no longer resolve and will fail to compile. Qualify it as ck_tile::get_k_warp_tile<...>() or add a using ck_tile::get_k_warp_tile; declaration.
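For illustration, either fix might look like the following sketch; the template arguments are assumed from the WMMA call site described above:

// Option 1: fully qualify the call so it resolves against the shared header.
static const ck_tile::index_t K_Warp_Tile =
    ck_tile::get_k_warp_tile<ADataType, N_Warp_Tile>();

// Option 2: pull the shared helper into scope once and keep call sites unqualified.
using ck_tile::get_k_warp_tile;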

  static const ck_tile::index_t N_Warp_Tile = 16;
  static const ck_tile::index_t K_Warp_Tile =
-     ck_tile::get_k_warp_tile<BDataType, M_Warp_Tile, true>();
+     ck_tile::get_k_warp_tile_for_preshuffle_b<PrecType, N_Warp_Tile>();
Copilot AI commented on Jan 24, 2026


K_Warp_Tile is computed using ck_tile::get_k_warp_tile_for_preshuffle_b<PrecType, N_Warp_Tile>(), but PrecType is not defined in this class (only ADataType, BDataType, etc. exist). This will not compile. Use BDataType (or typename Tuple::... if that’s what you intended) as the template argument.

Suggested change:
- ck_tile::get_k_warp_tile_for_preshuffle_b<PrecType, N_Warp_Tile>();
+ ck_tile::get_k_warp_tile_for_preshuffle_b<BDataType, N_Warp_Tile>();

Comment on lines +83 to +103
const int kMaxBytesPerLoad = 16; // buffer load max 16 bytes
const int kMaxElementsPerLoad = kMaxBytesPerLoad / sizeof(PrecType);
const int kKLanePerWarp = ck_tile::get_warp_size() / N_Warp_Tile;
const int kKPerWarp = kMaxElementsPerLoad * kKLanePerWarp;

// Minimum K_Warp_Tile required by MFMA instructions
const index_t kMfmaN16Index = 0;
const index_t kMfmaN32Index = 1;
#if defined(CK_GFX950_SUPPORT)
const index_t kF8MfmaMaxK[2] = {128, 64};
const index_t kF16MfmaMaxK[2] = {32, 16};
#else
const index_t kF8MfmaMaxK[2] = {32, 16};
const index_t kF16MfmaMaxK[2] = {16, 8};
#endif
const bool kIsF8 = std::is_same_v<PrecType, fp8_t> || std::is_same_v<PrecType, bf8_t>;
const index_t kMfmaIndex = N_Warp_Tile == 16 ? kMfmaN16Index : kMfmaN32Index;
const index_t kMfmaMaxK = kIsF8 ? kF8MfmaMaxK[kMfmaIndex] : kF16MfmaMaxK[kMfmaIndex];

return max(kKPerWarp, kMfmaMaxK);
#endif
Copilot AI commented on Jan 24, 2026


get_k_warp_tile_for_preshuffle_b computes kKPerWarp as int and then calls max(kKPerWarp, kMfmaMaxK). In this codebase ck_tile::max(T,T) requires both arguments to be the same type; mixing int and index_t will fail to compile (template recursion ends up with no viable max(int, index_t) overload). Make the intermediate constants constexpr index_t (or cast kKPerWarp to index_t) so the final max call is between identical types.
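A sketch of the type-consistent version this comment asks for, shown as a fragment of the quoted helper body (it assumes get_warp_size() is usable in a constant expression here, as the surrounding code suggests):

// All operands kept as index_t so ck_tile::max(T, T) deduces a single type.
constexpr index_t kMaxBytesPerLoad    = 16; // buffer load max 16 bytes
constexpr index_t kMaxElementsPerLoad =
    kMaxBytesPerLoad / static_cast<index_t>(sizeof(PrecType));
constexpr index_t kKLanePerWarp       = ck_tile::get_warp_size() / N_Warp_Tile;
constexpr index_t kKPerWarp           = kMaxElementsPerLoad * kKLanePerWarp;
// ... MFMA table lookup unchanged ...
return max(kKPerWarp, kMfmaMaxK); // index_t vs index_t: resolves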

Comment on lines +44 to +47
constexpr index_t k_b_per_load =
TileShape::WarpTile::at(I1) * TileShape::WarpTile::at(I2) / get_warp_size();

return k_b_per_load;
Copilot AI commented on Jan 24, 2026


GetKBPerLoad() was simplified to WarpTile::N * WarpTile::K / get_warp_size(), but MakeBFlatDramTileDistribution() still sets KRepeatInWave = 2 on __gfx11__ and asserts TileShape::flatKPerWarp == KThdPerWave * KBPerLoad. With the new formula, that static_assert will fail on gfx11 (it effectively becomes N*K == (warp_size/2) * (N*K/warp_size)). Either fold KRepeatInWave (gfx11) into the KB-per-load calculation, or keep the previous scaling logic so the distribution invariants remain valid.

Suggested change:
- constexpr index_t k_b_per_load =
-     TileShape::WarpTile::at(I1) * TileShape::WarpTile::at(I2) / get_warp_size();
- return k_b_per_load;
+ constexpr index_t base_k_b_per_load =
+     TileShape::WarpTile::at(I1) * TileShape::WarpTile::at(I2) / get_warp_size();
+ #if defined(__gfx11__)
+ // On gfx11, MakeBFlatDramTileDistribution() uses KRepeatInWave = 2 and asserts
+ // TileShape::flatKPerWarp == KThdPerWave * KBPerLoad. To keep this invariant valid,
+ // fold KRepeatInWave into KBPerLoad here.
+ return base_k_b_per_load * 2;
+ #else
+ return base_k_b_per_load;
+ #endif
