613 Commits

Author SHA1 Message Date
Charitha Saumya
9617ce4862
[vector][distribution] Bug fix in moveRegionToNewWarpOpAndAppendReturns (#153656) 2025-08-18 13:26:08 -07:00
Sang Ik Lee
baae949f19
[MLIR][GPU][XeVM] Add XeVM target and XeVM dialect integration tests. (#148286)
As part of XeVM dialect upstreaming, this covers the remaining parts required for XeVM dialect integration and testing.
It has two high-level components:
- XeVM target and serialization support
- XeVM dialect integration tests using the Level Zero runtime

Co-Authored-by: Artem Kroviakov <artem.kroviakov@intel.com>
2025-08-13 13:17:10 -07:00
Longsheng Mou
2edee0bc79
[mlir][gpu] Support outlining nested gpu.launch (#152696)
This PR fixes a crash in `GpuKernelOutliningPass` that occurred when
encountering a symbol that was not a `FlatSymbolRefAttr`, enabling
outlining of nested `gpu.launch` operations. Fixes #149318.
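As an illustration, a minimal sketch of the nesting this now handles (constants and value names are hypothetical):
```
%c1 = arith.constant 1 : index
gpu.launch blocks(%bx, %by, %bz) in (%gx = %c1, %gy = %c1, %gz = %c1)
           threads(%tx, %ty, %tz) in (%sx = %c1, %sy = %c1, %sz = %c1) {
  // The nested launch below used to crash GpuKernelOutliningPass; both
  // launches can now be outlined into gpu.module/gpu.func pairs.
  gpu.launch blocks(%bx2, %by2, %bz2) in (%gx2 = %c1, %gy2 = %c1, %gz2 = %c1)
             threads(%tx2, %ty2, %tz2) in (%sx2 = %c1, %sy2 = %c1, %sz2 = %c1) {
    gpu.terminator
  }
  gpu.terminator
}
```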
2025-08-13 11:42:52 +08:00
Longsheng Mou
7d886fab74
[mlir][gpu] Update attribute definitions in gpu::LaunchOp (#152106)
`gpu::LaunchOp` is updated in the following way:
- Change the attribute type of kernel function and module from
`SymbolRefAttr` to `FlatSymbolRefAttr` to avoid nested symbol
references.
- Rename variables from camel case (kernelFunc, kernelModule) to lower
case (function, module) and update the syntax.
- `LaunchOp::build` supports passing `module` and `function` attributes.
2025-08-08 11:43:21 +08:00
Hsiangkai Wang
0d21522c00
[mlir][gpu] Make offset and width in gpu.rotate as attributes (#150901)
`offset` and `width` must be constants and there are constraints on
their values. Update the operation definition to use attributes instead
of operands.
2025-07-29 09:02:42 +01:00
Maksim Levental
c090ed53fb
[mlir][NFC] update mlir/Dialect create APIs (33/n) (#150659)
See https://github.com/llvm/llvm-project/pull/147168 for more info.
2025-07-25 16:13:55 -04:00
Jacques Pienaar
07967d4af8
[mlir] Switch to new LDBG macro (#150616)
Change local variants to use the new central one.
2025-07-25 18:22:46 +02:00
Longsheng Mou
3eb49c482c
[mlir][NFC] Use hasOneBlock instead of llvm::hasSingleElement(region) (#149809) 2025-07-24 10:11:21 +08:00
Kazu Hirata
0925d7572a
[mlir] Remove unused includes (NFC) (#150266)
These are identified by misc-include-cleaner.  I've filtered out those
that break builds.  Also, I'm staying away from llvm-config.h,
config.h, and Compiler.h, which likely cause platform- or
compiler-specific build failures.
2025-07-23 15:18:53 -07:00
Maksim Levental
dce6679cf5
[mlir][NFC] update mlir/Dialect create APIs (16/n) (#149922)
See https://github.com/llvm/llvm-project/pull/147168 for more info.
2025-07-21 19:57:30 -04:00
Sang Ik Lee
61004b7eb5
[MLIR][GPU] Add xevm-attach-target transform pass. (#147372)
Add the xevm-attach-target transform pass and unit tests.

Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
Co-authored-by: Artem Kroviakov <artem.kroviakov@intel.com>
2025-07-10 15:44:26 -05:00
Kazu Hirata
54bd936ec9
[mlir] Remove unused includes (NFC) (#147455)
These are identified by misc-include-cleaner.  I've filtered out those
that break builds.  Also, I'm staying away from llvm-config.h,
config.h, and Compiler.h, which likely cause platform- or
compiler-specific build failures.
2025-07-07 23:40:44 -07:00
Nicolas Vasilache
c30b5b1549
[mlir][GPU][transform] Add gpu_to_rocdl conversion pattern (#146962)
Co-authored-by: Son Tuan Vu <vuson@google.com>
2025-07-07 18:34:09 +02:00
Nicolas Vasilache
2b28d10022
[mlir][SCF][GPU] Add DeviceMaskingAttrInterface (#146943)
This revision adds DeviceMaskingAttrInterface and extends
DeviceMappingArrayAttr to accept a union of DeviceMappingAttrInterface
and DeviceMaskingAttrInterface.

Support is added to GPUTransformOps to take advantage of this
information and lower to block/warpgroup/warp/thread specialization when
mapped to linear ids.

The revision also connects to scf::ForallOp and uses the new attribute
to implement warp specialization.
The implementation is in the form of a GPUMappingMaskAttr, which can be
additionally passed to the scf.forall.mapping attribute to specify a
mask on compute resources that should be active.

In the first implementation the masking is a bitfield that specifies for
each processing unit whether it is active or not.
In the future, we may want to implement this as a symbol to refer to
dynamically defined values.
Extending op semantics with an operand is deemed too intrusive at this
time.

---------

Co-authored-by: Oleksandr "Alex" Zinenko <git@ozinenko.com>
2025-07-07 18:06:41 +02:00
Nicolas Vasilache
ea62de5b1d
[mlir] NFC - refactor id builder and avoid leaking impl details (#146922) 2025-07-07 15:42:48 +02:00
Nicolas Vasilache
0a62836969
[mlir][gpu][transforms] Add support for mapping to lanes (#146912)
This revision adds a new attribute for mapping `scf.forall` to linear
lane ids.

Example:
```
    // %arg2 and %arg3 map to lanes [0, 6) and are turned into expressions
    // involving threadIdx.x/y by the map_nested_forall_to_threads
    // transformation. This results in an if (linear_thread_id < 6) conditional.
    scf.forall (%arg2, %arg3) in (2, 3) {
       ...
    } {mapping = [#gpu.lane<linear_dim_0>, #gpu.lane<linear_dim_1>]}
 ```

---------

Co-authored-by: Oleksandr "Alex" Zinenko <git@ozinenko.com>
2025-07-07 15:14:52 +02:00
Kazu Hirata
ed0ee3a419
[mlir] Use llvm::fill (NFC) (#147100)
We can pass a range to llvm::fill.
2025-07-04 13:30:14 -07:00
Hsiangkai Wang
f581ef5b66
[mlir][gpu] Add gpu.rotate operation (#142796)
Add gpu.rotate operation and a pattern to convert gpu.rotate to SPIR-V
OpGroupNonUniformRotateKHR.
2025-07-01 11:32:25 +01:00
Kazu Hirata
28f6f87061
[mlir] Migrate away from std::nullopt (NFC) (#145523)
ArrayRef has a constructor that accepts std::nullopt.  This
constructor dates back to the days when we still had llvm::Optional.

Since the use of std::nullopt outside the context of std::optional is
a kind of abuse and not intuitive to newcomers, I would like to move
away from the constructor and eventually remove it.

This patch migrates away from std::nullopt in favor of ArrayRef<T>()
where we use perfect forwarding.  Note that {} would be ambiguous for
perfect forwarding to work.
2025-06-25 11:49:22 -07:00
Kazu Hirata
63f30d7d82
[mlir] Migrate away from {TypeRange,ValueRange}(std::nullopt) (NFC) (#145445)
ArrayRef has a constructor that accepts std::nullopt.  This
constructor dates back to the days when we still had llvm::Optional.

Since the use of std::nullopt outside the context of std::optional is
a kind of abuse and not intuitive to newcomers, I would like to move
away from the constructor and eventually remove it.

This patch migrates away from TypeRange(std::nullopt) and
ValueRange(std::nullopt).
2025-06-24 07:03:59 -07:00
Skrai Pardus
a45fda6aeb
switch type and value ordering for arith Constant[XX]Op (#144636)
This change standardizes the order of the parameters for `Constant[XXX]
Ops` to match all other `Op` `build()` constructors.

In all instances of generated code for the MLIR dialects' ops (that is,
the code TableGen emits from the .td files into the .h.inc/.cpp.inc
files), the desired result type is always specified before the value.

Examples: 
```
// ArithOps.h.inc
class ConstantOp : public ::mlir::Op<ConstantOp, ::mlir::OpTrait::ZeroRegions, ::mlir::OpTrait::OneResult, ::mlir::OpTrait::OneTypedResult<::mlir::Type>::Impl, ::mlir::OpTrait::ZeroSuccessors, ::mlir::OpTrait::ZeroOperands, ::mlir::OpTrait::OpInvariants, ::mlir::BytecodeOpInterface::Trait, ::mlir::OpTrait::ConstantLike, ::mlir::ConditionallySpeculatable::Trait, ::mlir::OpTrait::AlwaysSpeculatableImplTrait, ::mlir::MemoryEffectOpInterface::Trait, ::mlir::OpAsmOpInterface::Trait, ::mlir::InferIntRangeInterface::Trait, ::mlir::InferTypeOpInterface::Trait> {
public:
....
static void build(::mlir::OpBuilder &odsBuilder, ::mlir::OperationState &odsState, ::mlir::Type result, ::mlir::TypedAttr value);
  static void build(::mlir::OpBuilder &odsBuilder, ::mlir::OperationState &odsState, ::mlir::TypedAttr value);
  static void build(::mlir::OpBuilder &odsBuilder, ::mlir::OperationState &odsState, ::mlir::TypeRange resultTypes, ::mlir::TypedAttr value);
  static void build(::mlir::OpBuilder &, ::mlir::OperationState &odsState, ::mlir::TypeRange resultTypes, ::mlir::ValueRange operands, ::llvm::ArrayRef<::mlir::NamedAttribute> attributes = {});
  static void build(::mlir::OpBuilder &odsBuilder, ::mlir::OperationState &odsState, ::mlir::ValueRange operands, ::llvm::ArrayRef<::mlir::NamedAttribute> attributes = {});
...
```
```
// ArithOps.h.inc
class SubIOp : public ::mlir::Op<SubIOp, ::mlir::OpTrait::ZeroRegions, ::mlir::OpTrait::OneResult, ::mlir::OpTrait::OneTypedResult<::mlir::Type>::Impl, ::mlir::OpTrait::ZeroSuccessors, ::mlir::OpTrait::NOperands<2>::Impl, ::mlir::OpTrait::OpInvariants, ::mlir::BytecodeOpInterface::Trait, ::mlir::ConditionallySpeculatable::Trait, ::mlir::OpTrait::AlwaysSpeculatableImplTrait, ::mlir::MemoryEffectOpInterface::Trait, ::mlir::InferIntRangeInterface::Trait, ::mlir::arith::ArithIntegerOverflowFlagsInterface::Trait, ::mlir::OpTrait::SameOperandsAndResultType, ::mlir::VectorUnrollOpInterface::Trait, ::mlir::OpTrait::Elementwise, ::mlir::OpTrait::Scalarizable, ::mlir::OpTrait::Vectorizable, ::mlir::OpTrait::Tensorizable, ::mlir::InferTypeOpInterface::Trait> {
public:
...
static void build(::mlir::OpBuilder &odsBuilder, ::mlir::OperationState &odsState, ::mlir::Type result, ::mlir::Value lhs, ::mlir::Value rhs, ::mlir::arith::IntegerOverflowFlagsAttr overflowFlags);
  static void build(::mlir::OpBuilder &odsBuilder, ::mlir::OperationState &odsState, ::mlir::Value lhs, ::mlir::Value rhs, ::mlir::arith::IntegerOverflowFlagsAttr overflowFlags);
  static void build(::mlir::OpBuilder &odsBuilder, ::mlir::OperationState &odsState, ::mlir::TypeRange resultTypes, ::mlir::Value lhs, ::mlir::Value rhs, ::mlir::arith::IntegerOverflowFlagsAttr overflowFlags);
  static void build(::mlir::OpBuilder &odsBuilder, ::mlir::OperationState &odsState, ::mlir::Type result, ::mlir::Value lhs, ::mlir::Value rhs, ::mlir::arith::IntegerOverflowFlags overflowFlags = ::mlir::arith::IntegerOverflowFlags::none);
  static void build(::mlir::OpBuilder &odsBuilder, ::mlir::OperationState &odsState, ::mlir::Value lhs, ::mlir::Value rhs, ::mlir::arith::IntegerOverflowFlags overflowFlags = ::mlir::arith::IntegerOverflowFlags::none);
  static void build(::mlir::OpBuilder &odsBuilder, ::mlir::OperationState &odsState, ::mlir::TypeRange resultTypes, ::mlir::Value lhs, ::mlir::Value rhs, ::mlir::arith::IntegerOverflowFlags overflowFlags = ::mlir::arith::IntegerOverflowFlags::none);
  static void build(::mlir::OpBuilder &, ::mlir::OperationState &odsState, ::mlir::TypeRange resultTypes, ::mlir::ValueRange operands, ::llvm::ArrayRef<::mlir::NamedAttribute> attributes = {});
  static void build(::mlir::OpBuilder &odsBuilder, ::mlir::OperationState &odsState, ::mlir::ValueRange operands, ::llvm::ArrayRef<::mlir::NamedAttribute> attributes = {});
...
```
In comparison, in the distinct case of `ConstantIntOp` and
`ConstantFloatOp`, the ordering of the result type and the value is
switched.

Thus, this PR corrects the ordering of the aforementioned
`Constant[XXX]Ops` to match the other constructors.
2025-06-23 23:35:50 +02:00
Kazu Hirata
887222e352
[mlir] Migrate away from ArrayRef(std::nullopt) (NFC) (#144989)
ArrayRef has a constructor that accepts std::nullopt.  This
constructor dates back to the days when we still had llvm::Optional.

Since the use of std::nullopt outside the context of std::optional is
a kind of abuse and not intuitive to newcomers, I would like to move
away from the constructor and eventually remove it.

This patch takes care of the mlir side of the migration, starting with
straightforward places where I see ArrayRef or ValueRange nearby.
Note that ValueRange has a constructor that forwards arguments to an
ArrayRef constructor.
2025-06-20 08:33:59 -07:00
Muzammil
893ef7ffbd
[mlir][GPU] Fixes subgroup reduce lowering (#141825)
Fixes the final reduction steps, which were taken from an implementation
of scan, not reduction, causing lanes earlier in the wave to have
incorrect results due to masking.

Now aligning more closely with the Triton implementation:
https://github.com/triton-lang/triton/pull/5019

# Hypothetical example
To provide an explanation of the issue with the current implementation,
let's take the simple example of attempting to perform a sum over 64
lanes where the initial values are as follows (first lane has value 1,
and all other lanes have value 0):
```
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```
When performing a sum reduction over these 64 lanes, the current
implementation performs 6 DPP instructions, which in sequential order
do the following:
1) sum over clusters of 2 contiguous lanes
2) sum over clusters of 4 contiguous lanes
3) sum over clusters of 8 contiguous lanes
4) sum over an entire row
5) broadcast the result of the last lane in each row to the next row, and
each lane sums its current value with the incoming value.
6) broadcast the result of the 32nd lane to the last two rows, and each
lane sums its current value with the incoming value.

After step 4) the result for the example above looks like this:

```
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

After step 5) the result looks like this:
```
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

After step 6) the result looks like this:
```
[4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```
Note that the correct value here is always 1, yet after the
`dpp.broadcast` ops some lanes have incorrect values. The reason is that
for these incorrect lanes, like lanes 0-15 in step 5, the
`dpp.broadcast` op doesn't provide them with incoming values from other
lanes. Instead, these lanes are provided either their own values or 0
(depending on whether `bound_ctrl` is true or false) as values to sum
over. Either way, these values are stale, and these lanes shouldn't be
used in general.

So what this means:
- For a subgroup reduce over 32 lanes (like Step 5), the correct result
is stored in lanes 16 to 31
- For a subgroup reduce over 64 lanes (like Step 6), the correct result
is stored in lanes 32 to 63.

However, in the current implementation we do not specifically read the
value from one of the correct lanes when returning a final value. In
some workloads it seems that, without doing so, the stale value from
the first lane is returned instead.

# Actual failing test
For a specific example of how the current implementation causes issues,
take a look at the IR below, which represents an additive reduction over
a dynamic dimension.
```
!matA = tensor<1x?xf16>
!matB = tensor<1xf16>
#map = affine_map<(d0, d1) -> (d0, d1)>
#map1 = affine_map<(d0, d1) -> (d0)>
func.func @only_producer_fusion_multiple_result(%arg0: !matA) -> !matB {
  %cst_1 = arith.constant 0.000000e+00 : f16
  %c2_i64 = arith.constant 2 : i64
  %0 = tensor.empty() : !matB
  %2 = linalg.fill ins(%cst_1 : f16) outs(%0 : !matB) -> !matB
  %4 = linalg.generic {indexing_maps = [#map, #map1], iterator_types = ["parallel", "reduction"]} ins(%arg0 : !matA) outs(%2 : !matB)  {
  ^bb0(%in: f16, %out: f16):
    %7 = arith.addf %in, %out : f16
    linalg.yield %7 : f16
  } -> !matB
  return %4 : !matB
}
```
When provided an input of type `tensor<1x2xf16>` and values `{0, 1}` to
perform the reduction over, the value returned is consistently 4. By the
same analysis done above, this shows that the returned value is coming
from one of these stale lanes and needs to be read instead from one of
the lanes storing the correct result.

Signed-off-by: Muzammiluddin Syed <muzasyed@amd.com>
2025-05-28 17:47:22 -05:00
Srinivasa Ravi
9a553d3766
[MLIR][NVVM] Add NVVMRequiresSM op traits (#126886)
Motivation:
Currently, the NVVMOps are not verified against the supported SM
architectures. This can manifest as an ISel failure in the NVPTX LLVM
backend during CodeGen to PTX ISA. This PR addresses this issue by
adding verifier checks for Target-SM architectures in the NVVM Dialect
itself, thereby catching the errors early on.

Summary:
* Parametric traits named `NVVMRequiresSM` and `NVVMRequiresSMa` are
added to facilitate the version checks for typical and arch-accelerated
versions respectively.
* These traits can be attached to any NVVM Op to enable the checks for
the particular Op. (example shown below)
* An attribute interface named `TargetAttrVerifyInterface` is added to
the GPU dialect, which any target attribute seeking to perform
target verification on the module can implement.
* The checks are performed by the `NVVMTargetAttr` (which implements
`TargetAttrVerifyInterface`) when called from the GPU module verifier,
which walks through the module and performs the checks for ops carrying
the `NVVMRequiresSM` traits.
* A few Ops in `NVVMOps.td` have been updated to serve as examples.

Example Usage:
```
       def NVVM_ReduxOp : NVVM_Op<"redux.sync"> {...} 
 ----> def NVVM_ReduxOp : NVVM_Op<"redux.sync", [NVVMRequiresSM<80>]> {...}

       def NVVM_WgmmaFenceAlignedOp : NVVM_Op<"wgmma.fence.aligned"> {...}
 ----> def NVVM_WgmmaFenceAlignedOp : NVVM_Op<"wgmma.fence.aligned", [NVVMRequiresSMa<[90]>]> {...}
```

---------

Co-authored-by: Guray Ozen <guray.ozen@gmail.com>
2025-05-21 08:53:00 +05:30
Shay Kleiman
ffb9bbfd07
[mlir][MemRef] Changed AssumeAlignment into a Pure ViewLikeOp (#139521)
Made AssumeAlignment a ViewLikeOp that returns a new SSA memref equal
to its memref argument, and gave it the Pure trait. This gives it a
defined memory effect that matches what it does in practice and makes
it behave well with optimizations, which now remove it only when its
result is unused.
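A minimal usage sketch, assuming the result-returning form keeps the original operand syntax (buffer, index, and alignment are illustrative):
```
// The op now yields a new SSA memref aliasing its operand; consumers
// should use the result so the alignment fact is not optimized away.
%aligned = memref.assume_alignment %buf, 64 : memref<4096xf32>
%v = memref.load %aligned[%i] : memref<4096xf32>
```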
2025-05-18 13:50:29 +03:00
Ivan Butygin
91f3cdbd4f
[mlir][gpu] Pattern to promote gpu.shuffle to specialized AMDGPU ops (#137109)
Only swizzle promotion for now, may add DPP ops support later.
2025-05-13 13:26:46 +03:00
Kazu Hirata
15f7c6ed70
[mlir] Remove unused local variables (NFC) (#138481) 2025-05-05 10:08:00 -07:00
Alan Li
0ba1361478
[MLIR][GPU] Use arith instead of index for subgroup_id (#137843)
Simplify the situation by using the `arith` dialect instead of the
`index` dialect when rewriting `gpu.subgroup_id`.
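A hedged sketch of what the arith-based expansion could look like (value names hypothetical; the linearization formula is described in the entry below):
```
%tid_x = gpu.thread_id x
%tid_y = gpu.thread_id y
%tid_z = gpu.thread_id z
%dim_x = gpu.block_dim x
%dim_y = gpu.block_dim y
// linearized_thread_id = tid.x + dim.x * (tid.y + dim.y * tid.z)
%t0 = arith.muli %dim_y, %tid_z : index
%t1 = arith.addi %tid_y, %t0 : index
%t2 = arith.muli %dim_x, %t1 : index
%linear = arith.addi %tid_x, %t2 : index
%sg_size = gpu.subgroup_size : index
%sg_id = arith.divui %linear, %sg_size : index
```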
2025-04-30 09:03:24 -04:00
Alan Li
ac65b2c327
[MLIR][GPU] Add a pattern to rewrite gpu.subgroup_id (#137671)
This patch implements a rewrite pattern for transforming
`gpu.subgroup_id` to:
```
subgroup_id = linearized_thread_id / gpu.subgroup_size
```

where:
```
linearized_thread_id = thread_id.x + block_dim.x * (thread_id.y + block_dim.y * thread_id.z)
```
2025-04-29 10:54:48 -04:00
Kazu Hirata
d1e85a0ea0
[mlir] Use range constructors of *Set (NFC) (#137563) 2025-04-27 17:52:41 -07:00
Muzammil
30fec128e8
[mlir][AMDGPU] Add missing dependency (#137107)
Add missing deps after
905f1d8068

---------

Signed-off-by: Muzammiluddin Syed <muzasyed@amd.com>
2025-04-23 19:08:16 -07:00
Muzammil
905f1d8068
[mlir][AMDGPU] Implement gpu.subgroup_reduce with DPP intrinsics on AMD GPUs (#133204)
When performing cross-lane reductions using subgroup_reduce ops across
contiguous lanes on AMD GPUs, lower to Data Parallel Primitives (DPP)
ops when possible. This reduces latency on applicable devices.
See related [Issue](https://github.com/iree-org/iree/issues/20007)
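A minimal sketch of the kind of op this lowering targets (operand and type are illustrative):
```
// Cross-lane sum over contiguous lanes; on applicable AMD GPUs this can
// now lower to DPP intrinsics rather than generic shuffles.
%sum = gpu.subgroup_reduce add %val : (f32) -> (f32)
```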
To do:
- Improve lowering to subgroup_reduce in compatible matvecs (these get
directly lowered to gpu.shuffles in an earlier pass)

---------

Signed-off-by: Muzammiluddin Syed <muzasyed@amd.com>
2025-04-23 17:37:32 -07:00
Jakub Kuderski
198c5dac37
[mlir][transform] Clean up prints. NFC. (#136401)
Use `llvm::interleaved` from #135517 to simplify printing.
2025-04-19 12:11:06 -04:00
Jakub Kuderski
c62afbfeda
[mlir][linalg][gpu] Clean up printing. NFC. (#136330)
* Use `llvm::interleaved` from #135517 to simplify printing
* Avoid needless vector allocations
2025-04-18 15:05:27 -04:00
Jakub Kuderski
4be84a142e
[mlir][gpu] Clean up prints in GPU dialect. NFC. (#136250)
Clean up printing code by switching to `llvm::interleaved` from
https://github.com/llvm/llvm-project/pull/135517. Also make some minor
readability & performance fixes.
2025-04-18 11:10:17 -04:00
Krzysztof Drewniak
bf3b3d012c
[mlir][GPU] Don't look into neighboring functions for barrier elimination (#135293)
If a `func.func` is nested in some other operation, the barrier
eliminator's recursion into parents will examine the neighbors of each
function. Therefore, don't recurse into the parent of an operation if
that operation is IsolatedFromAbove, like a func.func is.

Furthermore, treat functions as regions that execute only once, since,
within the context of this pass (which runs on functions), that is true.
2025-04-15 07:04:24 -07:00
Ivan Butygin
d893d129e6
[mlir] GPUToROCDL: Fix crashes with unsupported shuffle datatypes (#135504)
Calling `getIntOrFloatBitWidth` on non-int/float types (`gpu.shuffle`
also accepts vectors) will crash.
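A minimal sketch of an input that exercises the fixed path (operand names and shape hypothetical):
```
// The shuffled value is a vector, so the lowering must not call
// getIntOrFloatBitWidth directly on its type.
%shfl, %valid = gpu.shuffle xor %v, %offset, %width : vector<4xf16>
```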
2025-04-13 20:26:19 +02:00
Kazu Hirata
3041fa6c7a
[mlir] Use *Set::insert_range (NFC) (#132326)
DenseSet, SmallPtrSet, SmallSet, SetVector, and StringSet recently
gained C++23-style insert_range.  This patch replaces:

  Dest.insert(Src.begin(), Src.end());

with:

  Dest.insert_range(Src);

This patch does not touch custom begin like succ_begin for now.
2025-03-20 22:24:17 -07:00
lorenzo chelini
556a64507b
[MLIR][NFC] Retire let constructor for GPU (#129849)
`let constructor` is legacy (do not use in tree!) since the TableGen
backend emits most of the glue logic to build a pass.
2025-03-06 11:48:24 +01:00
Matthias Springer
4defac91db
[mlir][GPUToNVVM] Add benefit to populate functions (#128484)
Certain GPU->NVVM patterns compete with Arith->LLVM patterns (the ones
that lower to libdevice). Add an optional `benefit` parameter to all
`populate` functions so that users can give preference to GPU->NVVM
patterns.
2025-02-24 17:27:55 +01:00
Zichen Lu
360630b567
[mlir][GPUDialect] Add cmdOption suffix consumer in GpuModuleToBinary Pass (#127646)
Add a cmdOption suffix consumer function to the GpuModuleToBinary pass,
which can tokenize and remove a specific suffix of cmdOption.
2025-02-18 19:02:23 +01:00
Guray Ozen
837b89fc0f
[MLIR][NVVM] Add ptxas-cmd-options to pass flags to the downstream compiler (#127457)
This PR adds `cmd-options` to the `gpu-lower-to-nvvm-pipeline` pipeline
and the `nvvm-attach-target` pass, allowing users to pass flags to the
downstream compiler, *ptxas*.

Example:
```
mlir-opt -gpu-lower-to-nvvm-pipeline="cubin-chip=sm_80 ptxas-cmd-options='-v --register-usage-level=8'"
```
2025-02-17 12:09:27 +01:00
lorenzo chelini
c1a2292526
[MLIR][NFC] Retire let constructor for passes in Conversion directory (part1) (#127403)
`let constructor` is deprecated since the TableGen backend emits most
of the glue logic to build a pass. This PR retires the td method for
most passes in the Conversion directory (a follow-up will handle the rest).
2025-02-17 10:55:27 +01:00
Guray Ozen
baf27862dd
[MLIR][NVGPU] Move max threads/blocks size to dialect (NFC) (#124454)
This PR moves the maximum number of threads in a block and blocks in a
grid to the nvgpu dialect to avoid replicated code.

The limits are defined here:

https://docs.nvidia.com/cuda/cuda-c-programming-guide/#features-and-technical-specifications-technical-specifications-per-compute-capability
2025-02-05 12:38:37 +01:00
jeanPerier
327d627066
[mlir] share argument attributes interface between calls and callables (#123176)
This patch shares core interface methods dealing with argument and
result attributes from CallableOpInterface with the CallOpInterface and
makes them mandatory to gives more consistent guarantees about concrete
operations using these interfaces.

This allows adding argument attributes on call-like operations, which is
sometimes required for proper ABI, as with llvm.call (and llvm.invoke).


The patch adds optional `arg_attrs` and `res_attrs` attributes to operations using
these interfaces that did not already have them.
They can then re-use the common "rich function signature"
printing/parsing helpers if they want (for the LLVM dialect, this is
done in the next patch).
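A hedged sketch of what the added attributes could look like on a call-like op (attribute contents are illustrative, not the dialect's mandated form):
```
// arg_attrs/res_attrs are arrays of dictionary attributes, one entry
// per argument/result; llvm.zeroext here is purely illustrative.
%r = func.call @callee(%x) {arg_attrs = [{llvm.zeroext}], res_attrs = [{}]} : (i32) -> i32
```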

Part of RFC: https://discourse.llvm.org/t/mlir-rfc-adding-argument-and-result-attributes-to-llvm-call/84107
2025-02-03 11:27:14 +01:00
Matthias Springer
6aaa8f25b6
[mlir][IR][NFC] Move free-standing functions to MemRefType (#123465)
Turn free-standing `MemRefType`-related helper functions in
`BuiltinTypes.h` into member functions.
2025-01-21 08:48:09 +01:00
Krzysztof Drewniak
0aa831e0ed
[mlir][GPU] Implement ValueBoundsOpInterface for GPU ID operations (#122190)
The GPU ID operations already implement InferIntRangeInterface, which
gives constant lower and upper bounds on those IDs when appropriate
metadata is present on the operations or in the surrounding context.

This commit uses that existing code to implement the
ValueBoundsOpInterface, which is used when analyzing affine operations
(unlike the integer range interface, which is used for arithmetic
optimization).

It also implements the interface for gpu.launch, where we can use it to
express the constraint that block/grid sizes are equal to their value
from outside the launch op and that the corresponding IDs are bounded
above by that size.

As a consequence, the test pass for this inference is updated to work on
a FunctionOpInterface and not a func.func, creating minor churn in other
tests.
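A sketch of the facts this makes available, assuming constant launch sizes (names hypothetical):
```
%c64 = arith.constant 64 : index
gpu.launch blocks(%bx, %by, %bz) in (%gx = %c64, %gy = %c64, %gz = %c64)
           threads(%tx, %ty, %tz) in (%sx = %c64, %sy = %c64, %sz = %c64) {
  // Value-bounds analysis can now prove %tx < %sx and that %sx equals
  // the %c64 defined outside the launch.
  gpu.terminator
}
```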
2025-01-09 11:42:22 -08:00
Kazu Hirata
129f1001c3
[Dialect] Migrate away from PointerUnion::{is,get} (NFC) (#120818)
Note that PointerUnion::{is,get} have been soft deprecated in
PointerUnion.h:

  // FIXME: Replace the uses of is(), get() and dyn_cast() with
  //        isa<T>, cast<T> and the llvm::dyn_cast<T>

I'm not touching PointerUnion::dyn_cast for now because it's a bit
complicated; we could blindly migrate it to dyn_cast_if_present, but
we should probably use dyn_cast when the operand is known to be
non-null.
2024-12-21 08:17:51 -08:00
Jacques Pienaar
09dfc5713d
[mlir] Enable decoupling two kinds of greedy behavior. (#104649)
The greedy rewriter is used in many different flows and it has a lot of
convenience (work list management, debugging actions, tracing, etc). But
it combines two kinds of greedy behavior: 1) how ops are matched, and
2) folding wherever it can.

These are independent forms of greediness, and combining them leads to
inefficiency, e.g., cases where one needs to create different phases in
lowering and is required to apply patterns in a specific order split
across different passes. Using the driver, one ends up needlessly
retrying folding and running multiple rounds of folding attempts where
one final run would have sufficed.

Of course, folks can locally avoid this behavior by just building their
own driver, but this is also a commonly requested feature that folks
keep working around locally in suboptimal ways.

For downstream users, there should be no behavioral change. Updating
from the deprecated API should just be a find-and-replace (e.g., of the
`find ./ -type f -exec sed -i
's|applyPatternsAndFoldGreedily|applyPatternsGreedily|g' {} \;` variety),
as the API arguments haven't changed between the two.
2024-12-20 08:15:48 -08:00
Matthias Springer
0745add7f4
[mlir][GPU] Do not strip location info when lowering to NVVM (#120432)
This is needed for a subsequent commit that reads location information
when lowering `gpu.assert`.
2024-12-19 15:05:45 +01:00