[mlir][tensor] Refine the semantics of createPadHighOp (llvm#109667)
Refine `createPadHighOp` so that the output tensor is required to be
statically shaped. This is to prevent the current behaviour, which is
incorrect:

>  // If `type` has dynamic dimensions the padding width is set to zero.

The actual padding width should be set to `%new_dim - %old_dim`, where
`%new_dim` and `%old_dim` are defined via e.g. the `tensor.dim` Op applied to the
output and input tensors, respectively.

This PR is an attempt to clarify the semantics surrounding dynamic
shapes in preparation for adding support for scalable vectors to the
pack/unpack logic in Tensor/Linalg (dynamic shapes are what we use to
model scalable (*) sizes at the Tensor/MemRef level).

(*) Scalable as in Arm's Scalable Vector Extension (SVE)
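For illustration, a minimal C++ builder sketch of the `%new_dim - %old_dim` computation described above. It is hypothetical and not part of this commit: the helper name and the `dest` (output tensor) / `dimIdx` parameters are placeholders, and it assumes the arith and tensor dialects are available.

#include <cstdint>

#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/IR/Builders.h"

// Hypothetical sketch: compute the "high" padding width for dimension `dimIdx`
// as %new_dim - %old_dim, reading both sizes from live tensors via tensor.dim.
static mlir::Value computeHighPadWidth(mlir::OpBuilder &b, mlir::Location loc,
                                       mlir::Value dest, mlir::Value source,
                                       int64_t dimIdx) {
  using namespace mlir;
  Value newDim = b.create<tensor::DimOp>(loc, dest, dimIdx);   // %new_dim
  Value oldDim = b.create<tensor::DimOp>(loc, source, dimIdx); // %old_dim
  return b.create<arith::SubIOp>(loc, newDim, oldDim).getResult();
}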
banach-space authored and augusto2112 committed Sep 26, 2024
1 parent 1eea819 commit 08d1a4a
Showing 2 changed files with 12 additions and 7 deletions.
8 changes: 4 additions & 4 deletions mlir/include/mlir/Dialect/Tensor/Utils/Utils.h
@@ -14,10 +14,10 @@
 namespace mlir {
 namespace tensor {
 
-// Return a PadOp that pads `source` to `type` size where the static
-// sizes are assumed to be greater than the dynamic sizes. If `type` has dynamic
-// dimensions the padding width is set to zero. The op performs "high" padding
-// (i.e. it adds trailing padding values until the desired size is met).
+// Return a PadOp that pads `source` to `type` size. Output sizes (from `type`)
+// are assumed to be static and greater than the potentially dynamic input sizes
+// (from `source`). The op performs "high" padding (i.e. it adds trailing padding
+// values until the desired size is met).
 PadOp createPadHighOp(RankedTensorType type, Value source, Value pad,
                       bool nofold, Location loc, OpBuilder &builder);
 
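For context, here is a hypothetical call site (not from the repository) showing one way the helper might be used to pad a possibly dynamically shaped input up to a statically shaped result. The 8x16 shape, f32 element type and zero pad value are arbitrary choices for the example.

#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/Dialect/Tensor/Utils/Utils.h"

// Hypothetical usage: pad `source` (which may have dynamic sizes) up to a fully
// static 8x16 f32 tensor, filling the trailing ("high") region with 0.0f.
static mlir::tensor::PadOp padTo8x16(mlir::OpBuilder &builder,
                                     mlir::Location loc, mlir::Value source) {
  using namespace mlir;
  auto paddedType = RankedTensorType::get({8, 16}, builder.getF32Type());
  Value zero =
      builder.create<arith::ConstantOp>(loc, builder.getF32FloatAttr(0.0f));
  return tensor::createPadHighOp(paddedType, source, zero,
                                 /*nofold=*/false, loc, builder);
}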
11 changes: 8 additions & 3 deletions mlir/lib/Dialect/Tensor/Utils/Utils.cpp
@@ -24,12 +24,17 @@ using namespace mlir::tensor;
 PadOp mlir::tensor::createPadHighOp(RankedTensorType type, Value source,
                                     Value pad, bool nofold, Location loc,
                                     OpBuilder &b) {
+
+  // TODO: Either relax or turn this into a failure
+  assert(!ShapedType::isDynamicShape(type.getShape()) &&
+         "The output type is dynamic - that's not supported ATM.");
+
+  // Init "low" and "high" padding values ("low" is kept as is, "high" is
+  // computed below).
   SmallVector<OpFoldResult> low(type.getRank(), b.getIndexAttr(0));
   SmallVector<OpFoldResult> high(type.getRank(), b.getIndexAttr(0));
+
   for (const auto &en : enumerate(type.getShape())) {
-    // Pad only the static dimensions of the result tensor type.
-    if (ShapedType::isDynamic(en.value()))
-      continue;
     // Compute the padding width.
     AffineExpr d0;
     bindDims(b.getContext(), d0);
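The hunk is truncated mid-loop here. Conceptually, for each output dimension (now guaranteed static by the assert) the "high" padding is the static output extent minus the possibly dynamic input extent. Below is a standalone sketch of that idea using plain arith/tensor ops; it is an illustration under that assumption, not the literal continuation of the file, which (judging by `AffineExpr d0` above) folds the same subtraction into an affine expression.

#include "llvm/ADT/STLExtras.h"
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/Dialect/Tensor/IR/Tensor.h"

// Illustration only: per-dimension "high" padding as static output size minus
// (possibly dynamic) input size.
static llvm::SmallVector<mlir::OpFoldResult>
computeHighPads(mlir::OpBuilder &b, mlir::Location loc,
                mlir::RankedTensorType type, mlir::Value source) {
  using namespace mlir;
  SmallVector<OpFoldResult> high(type.getRank(), b.getIndexAttr(0));
  for (const auto &en : llvm::enumerate(type.getShape())) {
    // en.value() is the static output extent; tensor.dim yields the input extent.
    Value outSize = b.create<arith::ConstantIndexOp>(loc, en.value());
    Value inSize = b.create<tensor::DimOp>(loc, source, en.index());
    high[en.index()] =
        b.create<arith::SubIOp>(loc, outSize, inSize).getResult();
  }
  return high;
}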
