As part of XeVM dialect upstreaming, this covers the remaining parts required for XeVM dialect integration and testing.
It has two high-level components:
- XeVM target and serialization support
- XeVM dialect integration tests using the Level Zero runtime
Co-authored-by: Artem Kroviakov <artem.kroviakov@intel.com>
This PR fixes a crash in `GpuKernelOutliningPass` that occurred when
encountering a symbol that was not a `FlatSymbolRefAttr`, enabling
outlining of nested `gpu.launch` operations. Fixes #149318.
`gpu::LaunchOp` is updated as follows:
- Change the attribute type of kernel function and module from
`SymbolRefAttr` to `FlatSymbolRefAttr` to avoid nested symbol
references.
- Rename variables from camel case (kernelFunc, kernelModule) to lower
case (function, module) and update the syntax.
- `LaunchOp::build` now supports passing `module` and `function` attributes.
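As an illustration, here is a minimal sketch of the defensive handling that avoids the crash (the helper name is hypothetical; the actual pass logic differs):
```
#include "mlir/IR/BuiltinAttributes.h"

// Hypothetical helper: instead of assuming every kernel symbol is a
// FlatSymbolRefAttr, dyn_cast and return a null attribute for nested
// references so the caller can skip them rather than crash.
static mlir::FlatSymbolRefAttr getFlatKernelRef(mlir::Attribute attr) {
  return mlir::dyn_cast<mlir::FlatSymbolRefAttr>(attr);
}
```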
`offset` and `width` must be constants and there are constraints on
their values. Update the operation definition to use attributes instead
of operands.
These are identified by misc-include-cleaner. I've filtered out those
that break builds. Also, I'm staying away from llvm-config.h,
config.h, and Compiler.h, which likely cause platform- or
compiler-specific build failures.
This revision adds DeviceMaskingAttrInterface and extends
DeviceMappingArrayAttr to accept a union of DeviceMappingAttrInterface
and DeviceMaskingAttrInterface.
Support is added to GPUTransformOps to take advantage of this
information and lower to block/warpgroup/warp/thread specialization when
mapped to linear ids.
The revision also connects to scf::ForallOp and uses the new attribute
to implement warp specialization.
The implementation is in the form of a GPUMappingMaskAttr, which can be
additionally passed to the scf.forall.mapping attribute to specify a
mask on compute resources that should be active.
In the first implementation the masking is a bitfield that specifies for
each processing unit whether it is active or not.
In the future, we may want to implement this as a symbol to refer to
dynamically defined values.
Extending op semantics with an operand is deemed too intrusive at this
time.
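A minimal sketch of the bitfield semantics in plain C++ (names are hypothetical; the actual encoding lives in GPUMappingMaskAttr):
```
#include <cstdint>

// Hypothetical illustration: given a mask over linearized compute
// resources, processing unit i is active iff bit i of the mask is set.
bool isActiveUnit(uint64_t mask, unsigned linearId) {
  return (mask >> linearId) & 1u;
}
```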
---------
Co-authored-by: Oleksandr "Alex" Zinenko <git@ozinenko.com>
This revision adds a new attribute for mapping `scf.forall` to linear
lane ids.
Example:
```
// %arg2 and %arg3 map to lanes [0, 6) and are turned into expressions
// involving threadIdx.x/y by the map_nested_forall_to_threads
// transformation. This results in an if (linear_thread_id < 6) conditional.
scf.forall (%arg2, %arg3) in (2, 3) {
...
} {mapping = [#gpu.lane<linear_dim_0>, #gpu.lane<linear_dim_1>]}
```
---------
Co-authored-by: Oleksandr "Alex" Zinenko <git@ozinenko.com>
ArrayRef has a constructor that accepts std::nullopt. This
constructor dates back to the days when we still had llvm::Optional.
Since the use of std::nullopt outside the context of std::optional is
kind of an abuse and not intuitive to newcomers, I would like to move
away from the constructor and eventually remove it.
This patch migrates away from std::nullopt in favor of ArrayRef<T>() in
places where we use perfect forwarding. Note that {} would be ambiguous
where perfect forwarding is in play.
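A minimal before/after sketch (the callee is hypothetical):
```
#include "llvm/ADT/ArrayRef.h"

// Hypothetical callee taking a range of values.
static void consume(llvm::ArrayRef<int> xs) { (void)xs; }

void caller() {
  // Before: relied on the ArrayRef(std::nullopt_t) constructor:
  //   consume(std::nullopt);
  // After: an explicit empty ArrayRef. A bare {} cannot be used where
  // the argument is perfect-forwarded, since the element type cannot
  // be deduced from it.
  consume(llvm::ArrayRef<int>());
}
```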
ArrayRef has a constructor that accepts std::nullopt. This
constructor dates back to the days when we still had llvm::Optional.
Since the use of std::nullopt outside the context of std::optional is
kind of an abuse and not intuitive to newcomers, I would like to move
away from the constructor and eventually remove it.
This patch migrates away from TypeRange(std::nullopt) and
ValueRange(std::nullopt).
ArrayRef has a constructor that accepts std::nullopt. This
constructor dates back to the days when we still had llvm::Optional.
Since the use of std::nullopt outside the context of std::optional is
kind of an abuse and not intuitive to newcomers, I would like to move
away from the constructor and eventually remove it.
This patch takes care of the mlir side of the migration, starting with
straightforward places where I see ArrayRef or ValueRange nearby.
Note that ValueRange has a constructor that forwards arguments to an
ArrayRef constructor.
Fixes the final reduction steps, which were taken from an implementation
of scan, not reduction, causing lanes earlier in the wave to have
incorrect results due to masking.
Now aligning more closely with the Triton implementation:
https://github.com/triton-lang/triton/pull/5019
# Hypothetical example
To provide an explanation of the issue with the current implementation,
let's take the simple example of attempting to perform a sum over 64
lanes where the initial values are as follows (first lane has value 1,
and all other lanes have value 0):
```
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```
When performing a sum reduction over these 64 lanes, the current
implementation performs 6 dpp instructions, which in sequential order
do the following:
1) sum over clusters of 2 contiguous lanes
2) sum over clusters of 4 contiguous lanes
3) sum over clusters of 8 contiguous lanes
4) sum over an entire row
5) broadcast the result of the last lane in each row to the next row, and
each lane sums its current value with the incoming value.
6) broadcast the result of the 32nd lane to the last two rows, and each
lane sums its current value with the incoming value.
After step 4) the result for the example above looks like this:
```
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```
After step 5) the result looks like this:
```
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```
After step 6) the result looks like this:
```
[4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```
Note that the correct value here is always 1, yet after the
`dpp.broadcast` ops some lanes have incorrect values. The reason is that
for these incorrect lanes, like lanes 0-15 in step 5, the
`dpp.broadcast` op doesn't provide them incoming values from other
lanes. Instead these lanes are provided either their own values or 0
(depending on whether `bound_ctrl` is true or false) as values to sum
over. Either way these values are stale, and such lanes shouldn't be
used in general.
So what this means:
- For a subgroup reduce over 32 lanes (like step 5), the correct result
is stored in lanes 16 to 31.
- For a subgroup reduce over 64 lanes (like step 6), the correct result
is stored in lanes 32 to 63.
However, in the current implementation we do not specifically read the
value from one of the correct lanes when returning a final value. In
some workloads it appears that, without doing so, the stale value from
the first lane is returned instead.
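To make the effect of the stale lanes concrete, below is a small self-contained simulation of the steps above on this example. It is a sketch of the semantics described here, not the actual lowering, and it models stale lanes as re-adding their own value (the zero-fill `bound_ctrl` variant is stale in the same way):
```
#include <array>
#include <cstdio>

int main() {
  std::array<int, 64> lane{};
  lane[0] = 1;

  // Steps 1-4: after the in-row reductions every lane holds its row sum.
  for (int row = 0; row < 4; ++row) {
    int sum = 0;
    for (int i = 0; i < 16; ++i) sum += lane[row * 16 + i];
    for (int i = 0; i < 16; ++i) lane[row * 16 + i] = sum;
  }

  // Step 5: rows 1 and 3 receive the last lane of the preceding row;
  // rows 0 and 2 receive nothing and re-add their own (stale) value.
  std::array<int, 64> next = lane;
  for (int l = 0; l < 64; ++l) {
    int row = l / 16;
    int incoming = (row == 1 || row == 3) ? lane[row * 16 - 1] : lane[l];
    next[l] = lane[l] + incoming;
  }
  lane = next;

  // Step 6: lanes 32-63 receive lane 31; lanes 0-31 re-add their own
  // (stale) value.
  next = lane;
  for (int l = 0; l < 64; ++l)
    next[l] = lane[l] + (l >= 32 ? lane[31] : lane[l]);
  lane = next;

  // Prints lane0=4 lane63=1: only lanes 32-63 hold the correct sum, so
  // the final value must be read from one of them.
  std::printf("lane0=%d lane63=%d\n", lane[0], lane[63]);
}
```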
# Actual failing test
For a specific example of how the current implementation causes issues,
take a look at the IR below which represents an additive reduction over
a dynamic dimension.
```
!matA = tensor<1x?xf16>
!matB = tensor<1xf16>
#map = affine_map<(d0, d1) -> (d0, d1)>
#map1 = affine_map<(d0, d1) -> (d0)>
func.func @only_producer_fusion_multiple_result(%arg0: !matA) -> !matB {
  %cst_1 = arith.constant 0.000000e+00 : f16
  %c2_i64 = arith.constant 2 : i64
  %0 = tensor.empty() : !matB
  %2 = linalg.fill ins(%cst_1 : f16) outs(%0 : !matB) -> !matB
  %4 = linalg.generic {indexing_maps = [#map, #map1], iterator_types = ["parallel", "reduction"]} ins(%arg0 : !matA) outs(%2 : !matB) {
  ^bb0(%in: f16, %out: f16):
    %7 = arith.addf %in, %out : f16
    linalg.yield %7 : f16
  } -> !matB
  return %4 : !matB
}
```
When provided an input of type `tensor<1x2xf16>` and values `{0, 1}` to
perform the reduction over, the value returned is consistently 4. By the
same analysis done above, this shows that the returned value is coming
from one of these stale lanes and needs to be read instead from one of
the lanes storing the correct result.
Signed-off-by: Muzammiluddin Syed <muzasyed@amd.com>
Motivation:
Currently, the NVVMOps are not verified against the supported SM
architectures. This can manifest as an ISel failure in the NVPTX LLVM
backend during CodeGen to PTX ISA. This PR addresses this issue by
adding verifier checks for Target-SM architectures in the NVVM Dialect
itself, thereby catching the errors early on.
Summary:
* Parametric traits named `NVVMRequiresSM` and `NVVMRequiresSMa` are
added to facilitate the version checks for typical and arch-accelerated
versions respectively.
* These traits can be attached to any NVVM Op to enable the checks for
the particular Op (example shown below).
* An attribute interface named `TargetAttrVerifyInterface` is
added to the GPU dialect, which any target attribute seeking to perform
target verification on the module can implement.
* The checks are performed by the `NVVMTargetAttr` (implementing the
`TargetAttrVerifyInterface` interface) when called from the GPU module
verifier where it walks through the module and performs the checks for
Ops with the `NVVMRequiresSM` traits.
* A few Ops in `NVVMOps.td` have been updated to serve as examples.
Example Usage:
```
def NVVM_ReduxOp : NVVM_Op<"redux.sync"> {...}
----> def NVVM_ReduxOp : NVVM_Op<"redux.sync", [NVVMRequiresSM<80>]> {...}
def NVVM_WgmmaFenceAlignedOp : NVVM_Op<"wgmma.fence.aligned"> {...}
----> def NVVM_WgmmaFenceAlignedOp : NVVM_Op<"wgmma.fence.aligned", [NVVMRequiresSMa<[90]>]> {...}
```
---------
Co-authored-by: Guray Ozen <guray.ozen@gmail.com>
Made AssumeAlignment a ViewLikeOp that returns a new SSA memref equal
to its memref argument, and gave it the Pure trait. This gives it a
defined memory effect that matches what it does in practice, and makes
it behave well with optimizations, which will no longer remove it
unless its result is unused.
When performing cross-lane reductions using subgroup_reduce ops across
contiguous lanes on AMD GPUs, lower to Data Parallel Primitives (DPP)
ops when possible. This reduces latency on applicable devices.
See related [Issue](https://github.com/iree-org/iree/issues/20007)
To do:
- Improve lowering to subgroup_reduce in compatible matvecs (these get
directly lowered to gpu.shuffles in an earlier pass)
---------
Signed-off-by: Muzammiluddin Syed <muzasyed@amd.com>
If a `func.func` is nested in some other operation, the barrier
eliminator's recursion into parents will examine the neighbors of each
function. Therefore, don't recurse into the parent of an operation if
that operation is IsolatedFromAbove, as a func.func is.
Furthermore, define functions as a region that executes only once,
since, within the context of this pass (which runs on functions), that
is true.
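A hedged sketch of the first change (the helper name is hypothetical):
```
#include "mlir/IR/Operation.h"

// When walking from an op to its parents, stop at IsolatedFromAbove ops
// such as func.func: code outside them cannot interact with the
// barriers inside.
static bool shouldRecurseIntoParent(mlir::Operation *op) {
  return !op->hasTrait<mlir::OpTrait::IsIsolatedFromAbove>();
}
```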
DenseSet, SmallPtrSet, SmallSet, SetVector, and StringSet recently
gained C++23-style insert_range. This patch replaces:
Dest.insert(Src.begin(), Src.end());
with:
Dest.insert_range(Src);
This patch does not touch ranges with custom begin functions, like
succ_begin, for now.
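The rewrite in miniature, shown on a DenseSet (the other containers are analogous; function names are illustrative):
```
#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/SmallVector.h"

void copyInto(llvm::DenseSet<int> &Dest, const llvm::SmallVector<int> &Src) {
  // Before: Dest.insert(Src.begin(), Src.end());
  Dest.insert_range(Src);
}
```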
Certain GPU->NVVM patterns compete with Arith->LLVM patterns (the ones
that lower to libdevice). Add an optional `benefit` parameter to all
`populate` functions so that users can give preference to GPU->NVVM
patterns.
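A hedged sketch of the intended usage (exact populate signatures may differ):
```
#include "mlir/Conversion/ArithToLLVM/ArithToLLVM.h"
#include "mlir/Conversion/GPUToNVVM/GPUToNVVMPass.h"

// Raise the benefit of the GPU->NVVM patterns so they win over the
// competing Arith->LLVM patterns when both match.
void buildPatterns(mlir::LLVMTypeConverter &converter,
                   mlir::RewritePatternSet &patterns) {
  mlir::populateGpuToNVVMConversionPatterns(converter, patterns,
                                            /*benefit=*/10);
  mlir::arith::populateArithToLLVMConversionPatterns(converter, patterns);
}
```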
This PR adds `cmd-options` to the `gpu-lower-to-nvvm-pipeline` pipeline
and the `nvvm-attach-target` pass, allowing users to pass flags to the
downstream compiler, *ptxas*.
Example:
```
mlir-opt -gpu-lower-to-nvvm-pipeline="cubin-chip=sm_80 ptxas-cmd-options='-v --register-usage-level=8'"
```
`let constructor` is deprecated, since the TableGen backend emits most
of the glue logic needed to build a pass. This PR retires the TableGen
method for most passes in the Conversion directory (a follow-up sweep
is needed for the rest).
This patch shares core interface methods dealing with argument and
result attributes from CallableOpInterface with the CallOpInterface and
makes them mandatory to give more consistent guarantees about concrete
operations using these interfaces.
This allows adding argument attributes on call-like operations, which is
sometimes required to get a proper ABI, as with llvm.call (and llvm.invoke).
The patch adds optional `arg_attrs` and `res_attrs` attributes to
operations using these interfaces that did not have them already.
They can then reuse the common "rich function signature"
printing/parsing helpers if they want (for the LLVM dialect, this is
done in the next patch).
Part of RFC: https://discourse.llvm.org/t/mlir-rfc-adding-argument-and-result-attributes-to-llvm-call/84107
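A hedged sketch of reading these attributes uniformly through the call interface (accessor names follow the shared interface methods and may differ):
```
#include "llvm/Support/raw_ostream.h"
#include "mlir/Interfaces/CallInterfaces.h"

void dumpArgAttrs(mlir::CallOpInterface call) {
  // arg_attrs, when present, is an array of per-argument dictionaries.
  if (mlir::ArrayAttr argAttrs = call.getArgAttrsAttr())
    for (mlir::Attribute perArg : argAttrs)
      llvm::errs() << perArg << "\n";
}
```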
The GPU ID operations already implement InferIntRangeInterface, which
gives constant lower and upper bounds on those IDs when appropriate
metadata is present on the operations or in the surrounding context.
This commit uses that existing code to implement the
ValueBoundsOpInterface, which is used when analyzing affine operations
(unlike the integer range interface, which is used for arithmetic
optimization).
It also implements the interface for gpu.launch, where we can use it to
express the constraint that block/grid sizes are equal to their value
from outside the launch op and that the corresponding IDs are bounded
above by that size.
As a consequence, the test pass for this inference is updated to work on
a FunctionOpInterface and not a func.func, creating minor churn in other
tests.
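A hedged sketch of querying the new interface (the helper's exact signature has shifted across versions):
```
#include "mlir/Interfaces/ValueBoundsOpInterface.h"

// Ask the shared value-bounds machinery for a constant upper bound on,
// e.g., a gpu.thread_id value used inside a gpu.launch.
mlir::FailureOr<int64_t> constantUpperBound(mlir::Value id) {
  return mlir::ValueBoundsConstraintSet::computeConstantBound(
      mlir::presburger::BoundType::UB, id);
}
```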
Note that PointerUnion::{is,get} have been soft deprecated in
PointerUnion.h:
// FIXME: Replace the uses of is(), get() and dyn_cast() with
// isa<T>, cast<T> and the llvm::dyn_cast<T>
I'm not touching PointerUnion::dyn_cast for now because it's a bit
complicated; we could blindly migrate it to dyn_cast_if_present, but
we should probably use dyn_cast when the operand is known to be
non-null.
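The migration in miniature (the pointer types are illustrative):
```
#include "llvm/ADT/PointerUnion.h"

void example(llvm::PointerUnion<int *, double *> pu) {
  // Before (soft-deprecated):
  //   if (pu.is<int *>()) { int *p = pu.get<int *>(); ... }
  // After:
  if (llvm::isa<int *>(pu)) {
    int *p = llvm::cast<int *>(pu);
    (void)p;
  }
}
```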
The greedy rewriter is used in many different flows and it has a lot of
conveniences (worklist management, debugging actions, tracing, etc.). But
it combines two kinds of greedy behavior: 1) how ops are matched, and 2)
folding wherever it can.
These are independent forms of greediness and they lead to inefficiency,
e.g., in cases where one needs to create different phases in lowering
and is required to apply patterns in a specific order split across
different passes. Using the driver, one ends up needlessly retrying
folding and having multiple rounds of folding attempts, where one final
run would have sufficed.
Of course, folks can locally avoid this behavior by building their own
driver, but this is also a commonly requested feature that folks keep
working around locally in suboptimal ways.
For downstream users, there should be no behavioral change. Updating
from the deprecated API should just be a find-and-replace (e.g., of the
`find ./ -type f -exec sed -i
's|applyPatternsAndFoldGreedily|applyPatternsGreedily|g' {} \;` variety),
as the API arguments haven't changed between the two.
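The find-and-replace in miniature (a sketch; folding is still performed by default under the new name):
```
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

mlir::LogicalResult runPatterns(mlir::Operation *op,
                                mlir::FrozenRewritePatternSet patterns) {
  // Before: return applyPatternsAndFoldGreedily(op, std::move(patterns));
  return mlir::applyPatternsGreedily(op, std::move(patterns));
}
```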