13493 Commits

Author SHA1 Message Date
Martin Erhart
18624ae54b
[mlir][SliceAnalysis] Fix stack overflow in graph regions (#139694)
This analysis currently just crashes when applied to a graph region that
has a use-def cycle. This PR fixes that by keeping track of the
operations the DFS has already visited when following use-def edges and
stopping once we visit an operation again.
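For illustration, a minimal graph region with a use-def cycle (the ops here are assumed placeholders; graph regions permit uses before defs):
```
"test.graph_region"() ({
  %a = "test.op_a"(%b) : (i32) -> i32  // %a uses %b ...
  %b = "test.op_b"(%a) : (i32) -> i32  // ... and %b uses %a: a use-def cycle
}) : () -> ()
```
Before this fix, requesting a slice rooted in such a region made the DFS follow the cycle indefinitely.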
2025-07-16 16:16:59 +02:00
Mohammadreza Ameri Mahabadian
94b15a1ece
[mlir][spirv] Add basic support for SPV_EXT_replicated_composites (#147067)
This patch introduces two new ops to the SPIR-V dialect:
- `spirv.EXT.ConstantCompositeReplicate`
- `spirv.EXT.SpecConstantCompositeReplicate`

These ops represent composite constants and specialization constants,
respectively, constructed by replicating a single splat constant across
all elements. They correspond to `SPV_EXT_replicated_composites`
extension instructions:
- `OpConstantCompositeReplicatedEXT`
- `OpSpecConstantCompositeReplicatedEXT`

No transformation to these new ops has been introduced in this patch.

This approach is chosen as per the discussions on RFC
https://discourse.llvm.org/t/rfc-basic-support-for-spv-ext-replicated-composites-in-mlir-spir-v-compile-time-constant-lowering-only/86987
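As a rough sketch (the assembly format here is assumed, not taken from the patch), a replicated composite constant would look something like:
```
// A 4-element composite constant built by replicating the single
// splat value 1 : i32 across all elements.
%0 = spirv.EXT.ConstantCompositeReplicate [1 : i32] : vector<4xi32>
```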

---------

Signed-off-by: Mohammadreza Ameri Mahabadian <mohammadreza.amerimahabadian@arm.com>
2025-07-15 09:45:13 -04:00
Tom Eccles
a1c61ac756
[mlir][OpenMP] Allow composite SIMD REDUCTION and IF (#147568)
Reduction support: https://github.com/llvm/llvm-project/pull/146671
IF clause support is fixed in this PR.

The problem for the IF clause in composite constructs was that wsloop
and simd both operate on the same CanonicalLoopInfo structure: with the
SIMD processed first, followed by the wsloop. Previously the IF clause
generated code like
```
if (cond) {
  while (...) {
    simd_loop_body;
  }
} else {
  while (...) {
    nonsimd_loop_body;
  }
}
```
The problem with this is that it invalidates the CanonicalLoopInfo
structure that the wsloop needs to process later. To avoid this, in this
patch I preserve the original loop, moving the IF clause inside the
loop:
```
while (...) {
  if (cond) {
    simd_loop_body;
  } else {
    non_simd_loop_body;
  }
}
```
On simple examples I tried, LLVM was able to hoist the if condition
out of the loop at -O3.

The disadvantage of this is that we cannot add the
llvm.loop.vectorize.enable attribute on either the SIMD or non-SIMD
loops because they both share a loop back edge. There's no way of
solving this without keeping the old design of having two different
loops, which cannot be represented using only one CanonicalLoopInfo
structure. I don't think the presence or absence of this attribute makes
much difference. In my testing it is the llvm.loop.parallel_access
metadata which makes the difference to vectorization. LLVM will
vectorize if legal whether or not this attribute is there in the TRUE
branch. In the FALSE branch this means the loop might be vectorized even
when the condition is false, but I think this is still standards
compliant: OpenMP 6.0 says that when the IF clause is false, it should
be treated as if the SIMDLEN clause were one. The SIMDLEN clause is
defined as a "hint". For the same reason, SIMDLEN and SAFELEN clauses
are silently ignored when SIMD IF is used.

I think it is better to implement SIMD IF and ignore SIMDLEN and SAFELEN
and some vectorization encouragement metadata when combined with IF than
to ignore IF, because IF could have correctness consequences whereas the
rest are optimization hints. For example, the user might use the IF
clause to disable SIMD programmatically when it is known to be unsafe to
vectorize the loop. In this case it is not at all safe to add the
parallel access or SAFELEN metadata.
2025-07-15 10:30:02 +01:00
Matthias Springer
cbdc18542c
[mlir][arith] Fix bug in arith.bitcast canonicalizer (#148795)
`bitcast(bitcast(x))` was incorrectly folded to `x`.
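Presumably the fold has to check that the outer result type matches the type of the innermost operand; a hypothetical reproducer (types assumed):
```
func.func @double_bitcast(%arg0: vector<2xf16>) -> f32 {
  // Folding %1 to %arg0 would be a type error: vector<2xf16> != f32.
  // Only a chain that returns to the original type may fold to %arg0.
  %0 = arith.bitcast %arg0 : vector<2xf16> to i32
  %1 = arith.bitcast %0 : i32 to f32
  return %1 : f32
}
```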
2025-07-15 10:14:02 +02:00
Charitha Saumya
244ebef1dd
Reapply [mlir][vector] Refactor WarpOpScfForOp to support unused or swapped forOp results. (#148313)
Reapply attempt for : https://github.com/llvm/llvm-project/pull/148291
Fix for the build failure reported in :
https://lab.llvm.org/buildbot/#/builders/116/builds/15477

-----

This crash is caused by a mismatch between the distributed type returned
by `getDistributedType` and the intended distributed type for the forOp
results.

Solution diff:
20c2cf6766

Example:
```
func.func @warp_scf_for_broadcasted_result(%arg0: index) -> vector<1xf32> {
  %c128 = arith.constant 128 : index
  %c1 = arith.constant 1 : index
  %c0 = arith.constant 0 : index
  %2 = gpu.warp_execute_on_lane_0(%arg0)[32] -> (vector<1xf32>) {
    %ini = "some_def"() : () -> (vector<1xf32>)
    %0 = scf.for %arg3 = %c0 to %c128 step %c1 iter_args(%arg4 = %ini) -> (vector<1xf32>) {
      %1 = "some_op"(%arg4) : (vector<1xf32>) -> (vector<1xf32>)
      scf.yield %1 : vector<1xf32>
    }
    gpu.yield %0 : vector<1xf32>
  }
  return %2 : vector<1xf32>
}
``` 
In this case the distributed type for the forOp result is `vector<1xf32>`
(the result is not distributed, but broadcasted to all lanes instead).
However, in this case `getDistributedType` will return a NULL type.

Therefore, if the distributed type can be recovered from the warpOp, we
should always do that first before using `getDistributedType`.
2025-07-14 15:41:56 -07:00
James Newling
99875733fc
[mlir][vector] Use vector.broadcast in place of vector.splat (#148028)
Part of deprecation of vector.splat

RFC:
https://discourse.llvm.org/t/rfc-mlir-vector-deprecate-then-remove-vector-splat/87143/4
More complete deprecation:
https://github.com/llvm/llvm-project/pull/147818
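The replacement is mechanical; for a scalar `%f : f32`, for example:
```
%0 = vector.splat %f : vector<4xf32>

==>

%0 = vector.broadcast %f : f32 to vector<4xf32>
```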
2025-07-14 15:12:21 -07:00
Nishant Patel
834591e062
[MLIR] [Vector] Linearization patterns for vector.load and vector.store (#145115)
This PR adds linearization patterns for vector.load and vector.store. It
is a follow-up to
https://github.com/llvm/llvm-project/pull/143420#issuecomment-2967406606
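A sketch of the rewrite (the exact output is assumed): an n-D load becomes a 1-D load of the flattened vector, followed by a `vector.shape_cast` back to the original type:
```
%v = vector.load %mem[%c0, %c0] : memref<2x4xf32>, vector<2x4xf32>

==>

%l = vector.load %mem[%c0, %c0] : memref<2x4xf32>, vector<8xf32>
%v = vector.shape_cast %l : vector<8xf32> to vector<2x4xf32>
```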
2025-07-14 14:24:52 -07:00
Abid Qadeer
45fa0b29bc
Revert "[OMPIRBuilder] Don't use invalid debug loc in task proxy function." (#148728)
There is a sanitizer fail in CI after this which I need to investigate.
Reverting for now.
Reverts llvm/llvm-project#148284
2025-07-14 22:23:21 +01:00
Abid Qadeer
9d778089db
[OMPIRBuilder] Don't use invalid debug loc in task proxy function. (#148284)
This is similar to https://github.com/llvm/llvm-project/pull/147950 but
for task proxy function.
2025-07-14 21:04:34 +01:00
Quinn Dawkins
b1ef5a8890
[mlir][MemRef] Add support for emulating narrow floats (#148036)
This enables memref.load/store + vector.load/store support for sub-byte
float types. Since the memref types don't matter for loads/stores, we
still use the same types as integers with equivalent widths, with a few
extra bitcasts needed around certain operations.

There is no direct change needed for vector.load/store support. The
tests added for them are to verify that float types are
supported as well.
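For example (types and the exact emulated form assumed), a load of a 4-bit float reuses the i4 emulation path, with a bitcast recovering the float type:
```
%f = memref.load %m[%i] : memref<8xf4E2M1FN>

==>

// %m_i4 is the equivalent integer-typed memref used by the emulation
%int = memref.load %m_i4[%i] : memref<8xi4>
%f = arith.bitcast %int : i4 to f4E2M1FN
```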
2025-07-14 11:18:51 -04:00
Maksim Levental
2eb733b5a6
[mlir][tblgen] add concrete create methods (#147168)
Currently `builder.create<...>` does not in any meaningful way hint/show
the various builders an op supports (arg names/types) because [`create`
forwards the args to
`build`](887222e352/mlir/include/mlir/IR/Builders.h (L503)).

To improve QoL, this PR adds static create methods to the ops themselves
like

```c++
static arith::ConstantIntOp create(OpBuilder& builder, Location location, int64_t value, unsigned width);
```

Now if one types `arith::ConstantIntOp::create(builder,...` instead of
`builder.create<arith::ConstantIntOp>(...` auto-complete/hints will pop
up.

See
https://discourse.llvm.org/t/rfc-building-mlir-operation-observed-caveats-and-proposed-solution/87204/13
for more info.
2025-07-14 10:41:51 -04:00
Jack Frankland
87e39c399c
[mlir][spirv]: Add OpImageFetch (#145873)
Add the missing `spirv::ImageFetchOp` operation to the SPIR-V MLIR
dialect ODS, with appropriate testing including negative testing of the
verifiers.

Signed-off-by: Jack Frankland <jack.frankland@arm.com>
2025-07-14 12:48:38 +01:00
Christian Ulmann
374d5da214
[MLIR][Interfaces] Remove negative branch weight verifier (#148234)
This commit removes the verifier that checked if branch weights are
negative. This check was too strict because weights are interpreted as
unsigned integers.

This showed up when running the verifier on LLVM dialect modules that
were imported from LLVM IR.
2025-07-14 07:34:29 +02:00
Uday Bondhugula
3de59f79d8
[MLIR][Affine] Rename/update affine fusion test pass options to avoid confusion (#148320)
This test pass is meant to test various affine fusion utilities as
opposed to being a pass to perform valid fusion. Rename an option to
avoid confusion.

Fixes: https://github.com/llvm/llvm-project/issues/132172
2025-07-14 09:23:26 +05:30
Diego Caballero
ace1c838ca
[mlir][Vector] Support scalar vector.extract in VectorLinearize (#147440)
It generates a linearized version of the `vector.extract` for scalar cases.
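A sketch of the linearized form (output assumed): the source is flattened and the multi-dimensional position is folded into a single linear index:
```
%0 = vector.extract %src[1, 2] : f32 from vector<2x4xf32>

==>

%flat = vector.shape_cast %src : vector<2x4xf32> to vector<8xf32>
%0 = vector.extract %flat[6] : f32 from vector<8xf32>  // 1 * 4 + 2 == 6
```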
2025-07-11 16:02:26 -07:00
Charitha Saumya
1d33bbab57
Revert "[mlir][vector] Refactor WarpOpScfForOp to support unused or swapped forOp results." (#148291)
Reverts llvm/llvm-project#147620

Reverting due to build failure:
https://lab.llvm.org/buildbot/#/builders/116/builds/15477
2025-07-11 13:22:54 -07:00
Charitha Saumya
3092b765ba
[mlir][vector] Refactor WarpOpScfForOp to support unused or swapped forOp results. (#147620)
The current implementation generates incorrect code or crashes in the
following valid cases.

1. At least one of the forOp results is not yielded by the warpOp.
Example:
```
%0 = gpu.warp_execute_on_lane_0(%arg0)[32] -> (vector<4xf32>) {
    ....
    %3:2 = scf.for %arg3 = %c0 to %c128 step %c1 iter_args(%arg4 = %ini, %arg5 = %ini1) -> (vector<128xf32>, vector<128xf32>) {
      
      %1  = ...
      %acc = ....
      scf.yield %acc, %1 : vector<128xf32>, vector<128xf32>
    }
    gpu.yield %3#0 : vector<128xf32> // %3#1 is not used but can not be removed as dead code (loop carried).
  }
  "some_use"(%0) : (vector<4xf32>) -> ()
  return
```
2. Enclosing warpOp yields the forOp results in different order compared
to the forOp results.
Example:
```
  %0:3 = gpu.warp_execute_on_lane_0(%arg0)[32] -> (vector<4xf32>, vector<4xf32>, vector<8xf32>) {
    ....
    %3:3 = scf.for %arg3 = %c0 to %c128 step %c1 iter_args(%arg4 = %ini1, %arg5 = %ini2, %arg6 = %ini3) -> (vector<256xf32>, vector<128xf32>, vector<128xf32>) {
      .....
      scf.yield %acc1, %acc2, %acc3 : vector<256xf32>, vector<128xf32>, vector<128xf32>
    }
    gpu.yield %3#2, %3#1, %3#0 : vector<128xf32>, vector<128xf32>, vector<256xf32> // swapped order
  }
  "some_use_1"(%0#0) : (vector<4xf32>) -> ()
  "some_use_2"(%0#1) : (vector<4xf32>) -> ()
  "some_use_3"(%0#2) : (vector<8xf32>) -> ()

```
2025-07-11 13:08:33 -07:00
Sang Ik Lee
0a343098b0
[MLIR][Conversion] Add convert-xevm-to-llvm pass. [Re-attempt] (#148103)
Although XeVM is an LLVM extension dialect, the
SPIR-V backend relies on [function
calls](https://llvm.org/docs/SPIRVUsage.html#instructions-as-function-calls)
instead of defining LLVM intrinsics to represent SPIR-V instructions.
The convert-xevm-to-llvm pass lowers xevm ops to function declarations
and calls using the above naming convention.
In the future, most of the pass should be replaced with llvmBuilder
and handled as part of the translation to LLVM instead.

---------
Co-authored-by: Artem Kroviakov <artem.kroviakov@intel.com>
2025-07-11 13:38:02 -05:00
Kunwar Grover
77914c96df
[mlir][Vector] Do not propagate vector.extract on dynamic position (#148245)
Propagating vector.extract when a dynamic position is present can cause
dominance issues and needs better handling. For now, disable propagation
if there is a dynamic position present.
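For example (producer ops assumed for illustration), sinking the extract past its consumer could place it where `%idx` is not yet defined:
```
%v = "producer"() : () -> vector<8xf32>
%idx = "defined_later"() : () -> index
// dynamic position: propagation is now disabled for this case
%e = vector.extract %v[%idx] : f32 from vector<8xf32>
```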
2025-07-11 15:38:48 +01:00
Darren Wihandi
a89021bc83
[mlir][spirv] Enable dot operation for bfloat16 (#145409)
Allows dot operations to use vectors of bfloat16 type.
2025-07-11 10:16:00 -04:00
arun-thmn
587ba75a49
[mlir][x86vector] AVX2 I8 Dot Op (#147908)
Adds AVX2 i8 dot-product operation and defines lowering to LLVM
intrinsics.

Target assembly instruction: `vpdpbssd.128/256`
2025-07-11 13:19:07 +02:00
Michael Kruse
96bc07d492
[MLIR][OpenMP] Add canonical loop LLVM-IR lowering (#147069)
Support for translating the operations introduced in #144785 to LLVM-IR.

In order to keep the lowering simple,
`OpenMPIRBuilder::unrollLoopHeuristic` is applied when encountering the
`omp.unroll_heuristic` op. As a result, the operation that unrolling is
applied to (`omp.canonical_loop`) must have been emitted before even
though logically there is no such requirement.

Eventually, all transformations on a loop must be applied directly after
emitting `omp.canonical_loop`, i.e. future transformations must be
looked up when encountering `omp.canonical_loop` itself. This is because
many OpenMPIRBuilder methods (e.g. `createParallel`) expect all the
region code to be emitted within a callback. In the case of
`createParallel`, the region code is getting outlined into a new
function. Therefore, making the operation order a formal requirement
would not make the implementation any easier.
2025-07-11 12:54:25 +02:00
Andrei Golubev
4a35214bdd
[mlir][ODS] Fix TableGen for AttrOrTypeDef::hasStorageCustomConstructor (#147957)
There is a `hasStorageCustomConstructor` flag that allows one to provide
a custom attribute/type construction implementation. Unfortunately, it
seems like the flag does not work properly: the generated C++ produces an
*empty body* method instead of only a declaration.
2025-07-11 12:45:21 +02:00
Pradeep Kumar
5cd56c9216
[MLIR][NVVM] Remove Pure trait from clock, clock64, globaltimer Ops (#147608)
This commit removes the Pure trait from the clock, clock64, and globaltimer ops by creating the NVVM_NCSpecialRegisterOp class to represent ops that return non-constant values. This prevents the CSE pass from optimizing away redundant uses of them.
2025-07-11 15:35:46 +05:30
Abid Qadeer
7b91df3868
[OMPIRBuilder] Don't use invalid debug loc in reduction functions. (#147950)
We have this pattern of code in OMPIRBuilder for many functions that are
used in reduction operations:

```
Function *LtGRFunc = Function::Create(...); // arguments elided
BasicBlock *EntryBlock = BasicBlock::Create(Ctx, "entry", LtGRFunc);
Builder.SetInsertPoint(EntryBlock);
```

The insertion point is moved to the new function, but the debug location
is not updated. This means that the reduction function will use a debug
location that points to another function. The problem gets hidden because
these functions get inlined, but the potential for failure exists.

This patch resets the debug location when the insertion point is moved to
a new function. Some `InsertPointGuard`s have been added to make sure we
restore the debug location correctly when we are done with the reduction
function.
2025-07-11 09:50:05 +01:00
Charitha Saumya
da608271ae
Revert "[MLIR][Conversion] Add convert-xevm-to-llvm pass." (#148081)
Reverts llvm/llvm-project#147375
2025-07-10 16:21:11 -07:00
Sang Ik Lee
76eead1bd7
[MLIR][Conversion] Add convert-xevm-to-llvm pass. (#147375)
Although XeVM is an LLVM extension dialect, the
SPIR-V backend relies on [function
calls](https://llvm.org/docs/SPIRVUsage.html#instructions-as-function-calls)
instead of defining LLVM intrinsics to represent SPIR-V instructions.
The convert-xevm-to-llvm pass lowers xevm ops to function declarations
and calls using the above naming convention.
In the future, most of the pass should be replaced with llvmBuilder
and handled as part of the translation to LLVM instead.

---------
Co-authored-by: Artem Kroviakov <artem.kroviakov@intel.com>
2025-07-10 16:04:36 -07:00
Sang Ik Lee
61004b7eb5
[MLIR][GPU] Add xevm-attach-target transform pass. (#147372)
Add xevm-attach-target transform pass and unit-tests.

Co-authored-by: Sang Ik Lee <sang.ik.lee@intel.com>
Co-authored-by: Artem Kroviakov <artem.kroviakov@intel.com>
2025-07-10 15:44:26 -05:00
Ivan Butygin
f60cc63e8c
[mlir][rocdl] Add s.sleep intrinsic (#147936) 2025-07-10 19:27:02 +03:00
Kunwar Grover
f96492221d
[mlir][AMDGPU] Add better load/store lowering for full mask (#146748)
This patch adds a better maskedload/maskedstore lowering on the amdgpu
backend for loads that are either fully masked or fully unmasked. For
these cases, we can either generate an oob buffer load with no if
condition, or we can generate a normal load with an if condition (if no
fat_raw_buffer space).
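For example, with a statically all-true mask the maskedload can lower to a plain load (a sketch; the exact lowering output is assumed):
```
%mask = vector.constant_mask [8] : vector<8xi1>
%v = vector.maskedload %mem[%i], %mask, %pass
       : memref<64xf32>, vector<8xi1>, vector<8xf32> into vector<8xf32>

==>

%v = vector.load %mem[%i] : memref<64xf32>, vector<8xf32>
```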
2025-07-10 16:11:19 +01:00
Kunwar Grover
0227aef688
[mlir][Vector] Add canonicalization for extract_strided_slice(create_mask) (#146745)
extract_strided_slice(create_mask) can be folded into create_mask by
simply subtracting the offsets from the bounds.
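A sketch (the arith ops computing the new bounds are elided): for offsets [1, 2], the new mask bounds are the old bounds minus the offsets:
```
%mask = vector.create_mask %c3, %c4 : vector<4x8xi1>
%s = vector.extract_strided_slice %mask
       {offsets = [1, 2], sizes = [2, 4], strides = [1, 1]}
       : vector<4x8xi1> to vector<2x4xi1>

==>

// %b0 = %c3 - 1, %b1 = %c4 - 2
%s = vector.create_mask %b0, %b1 : vector<2x4xi1>
```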
2025-07-10 15:43:20 +01:00
Niklas Degener
5954e9c1a5
[MLIR][Target/Cpp] Fix variable naming conflict for function declarations (#147927)
This is a fix for https://github.com/llvm/llvm-project/pull/136102, which
missed scoping for `DeclareFuncOps`.
In scenarios with multiple function declarations, the `valueMapper`
wasn't updated, and later uses of values in other functions still used
the names assigned in prior functions.

This is visible in the reproducer here
https://github.com/iree-org/iree/issues/21303: although the counter for
variable enumeration was reset, as is visible for the local vars, the
function arguments were mapped to old names. Due to this mapping, the
counter was never increased, and the local variables conflicted with the
arguments.

This fix adds proper scoping for declarations and a test case covering
the scenario with multiple `DeclareFuncOps`.
2025-07-10 16:09:49 +02:00
Michael Kruse
628c735010
[MLIR][OpenMP] Add canonical loop operations (#147061)
Add the supporting OpenMP Dialect operations, types, and interfaces for
modelling

MLIR Operations:
 * omp.newcli
 * omp.canonical_loop

MLIR Types:
 * !omp.cli

MLIR Interfaces:
 * LoopTransformationInterface

As a first loop transformation to be able to use these new operations in
follow-up PRs (#144785):
 * omp.unroll_heuristic
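A speculative sketch of how the pieces compose (the assembly format is assumed, not taken from the patch):
```
// create a handle for the canonical loop, then transform it by handle
%cli = omp.newcli : !omp.cli
omp.canonical_loop(%cli) %iv : i32 in range(%tc) {
  // loop body
  omp.terminator
}
omp.unroll_heuristic(%cli)
```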
2025-07-10 12:53:07 +02:00
Chao Chen
75524dee18
[mlir][xegpu] Relax rank restriction of TensorDescType (#145916) 2025-07-09 19:40:24 -05:00
Diego Caballero
ddf9b91f9f
[mlir][Vector] Add vector.shuffle tree transformation (#145740)
This PR adds a new transformation that turns sequences of `vector.to_elements` and `vector.from_elements` into a binary tree of `vector.shuffle` operations.

(Related RFC:
https://discourse.llvm.org/t/rfc-adding-vector-to-elements-op-to-the-vector-dialect/86779).

Example:

```
  %0:4 = vector.to_elements %a : vector<4xf32>
  %1:4 = vector.to_elements %b : vector<4xf32>
  %2:4 = vector.to_elements %c : vector<4xf32>
  %3 = vector.from_elements %0#0, %0#1, %0#2, %0#3,
                            %1#0, %1#1, %1#2, %1#3,
                            %2#0, %2#1, %2#2, %2#3 : vector<12xf32>

==>

  %0 = vector.shuffle %a, %b [0, 1, 2, 3, 4, 5, 6, 7] : vector<4xf32>, vector<4xf32>
  %1 = vector.shuffle %c, %c [0, 1, 2, 3, -1, -1, -1, -1] : vector<4xf32>, vector<4xf32>
  %2 = vector.shuffle %0, %1 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] : vector<8xf32>, vector<8xf32>
```

The algorithm leverages the structured extraction/insertion information
of `vector.to_elements` and `vector.from_elements` operations and builds
a set of intervals to determine the vector length that should be used at
each level of the tree to combine the level inputs in pairs.

There are a few improvements that can be implemented in the future, such
as shuffle mask compression to avoid unnecessarily large vector lengths
with poison values, but I decided to keep things "simpler" and spend
more time documenting the different steps of the algorithm so that
people can follow along.
2025-07-09 16:09:53 -07:00
Adam Siemieniuk
06ae0c2a10
[mlir][xegpu] Remove vector contract to dpas size restriction (#147470)
Removes contraction shape check to allow representing large
workgroup-level workloads in preparation for distribution.
2025-07-09 22:37:06 +02:00
Diego Caballero
889ac879ce
[mlir][Vector] Remove usage of vector.insertelement/extractelement from Vector (#144413)
This PR is part of the last step to remove `vector.extractelement` and `vector.insertelement` ops.
RFC: https://discourse.llvm.org/t/rfc-psa-remove-vector-extractelement-and-vector-insertelement-ops-in-favor-of-vector-extract-and-vector-insert-ops

It removes instances of `vector.extractelement` and `vector.insertelement` from the Vector dialect layer.
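The replacement is the 1:1 mapping from the RFC, e.g.:
```
%0 = vector.extractelement %v[%i : index] : vector<4xf32>
%1 = vector.insertelement %s, %v[%i : index] : vector<4xf32>

==>

%0 = vector.extract %v[%i] : f32 from vector<4xf32>
%1 = vector.insert %s, %v[%i] : f32 into vector<4xf32>
```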
2025-07-09 12:09:17 -07:00
Daniel Hernandez-Juarez
668c964282
[AMDGPU] [MLIR] Add 96 and 128 bit GatherToLDS for gfx950 (#147496)
This PR adds 96- and 128-bit gather_to_lds support for gfx950, updating
the lowering, verifier, and tests.
2025-07-09 11:53:26 -04:00
MaheshRavishankar
c22352175e
[mlir][TilingInterface] Allow tile and fuse to work with ReductionTilingStrategy::PartialReductionOuterParallelStrategy. (#147593)
Since `scf::tileUsingSCF` is the core method used for tiling the root
operation within the `scf::tileConsumersAndFuseProducersUsingSCF`, the
latter can fuse into any tiled loop generated using `scf::tileUsingSCF`.
This patch adds a test for tiling a root operation using
`ReductionTilingStrategy::PartialReductionOuterParallelStrategy` and
fusing producers with it.

Since this strategy generates a rank-reducing extract slice,
`tensor::replaceExtractSliceWithTiledProducer`, the core method used for
the fusion, was extended to handle rank-reducing slices.

Also fix a small bug in the computation of the reduction induction
variable (which needs to use `floorDiv` instead of `ceilDiv`).

Signed-off-by: MaheshRavishankar <mahesh.ravishankar@gmail.com>
2025-07-09 08:50:01 -07:00
Maksim Levental
1770e9b5c6
[mlir] remove dangling builders from td (#147619)
These are "dangling" builders (decls are emitted but there are no defns
anywhere).
2025-07-09 09:59:24 -04:00
Momchil Velikov
962c4217bc
[MLIR][AArch64] Change some tests to ensure SVE vector length is the same throughout the function (#147506)
This change only applies to functions that can reasonably be expected to
use SVE registers.

Modifying vector length in the middle of a function might cause
incorrect stack deallocation if there are callee-saved SVE registers or
incorrect access to SVE stack slots.

Addresses (non-issue) https://github.com/llvm/llvm-project/issues/143670
2025-07-09 09:32:25 +01:00
zbenzion
6033544173
[mlir][linalg] Fix memref type verification in CollapseLinalgDimensions (#147245)
When collapsing linalg dimensions, we check whether the memref operands
are guaranteed to be collapsible. However, we currently assume that the
matching indexing map is the identity map.

This commit modifies this behavior and checks if the memref is
collapsible on the transformed dimensions.
2025-07-09 01:04:08 -07:00
Menooker
18b409558a
[mlir] [scf-to-cf] attach the loop annotation to latch block (#147462)
As [required by LLVM](https://llvm.org/docs/LangRef.html#llvm-loop), the
loop annotation (loop metadata) should be attached on the ["latch"
block](https://llvm.org/docs/LoopTerminology.html). Otherwise, the
annotation might be ignored by LLVM. This PR fixes this issue.
2025-07-09 12:07:35 +08:00
Tim Gymnich
6f291cb099
[mlir][amdgpu] Add conversion from arith.scaling_extf / arith.scaling_truncf to amdgpu (#146372)
- add conversion from arith.scaling_extf to amdgpu.scaled_ext_packed
- add conversion from arith.scaling_truncf to amdgpu.packed_scaled_trunc
2025-07-08 21:45:23 +02:00
Darren Wihandi
4a68562e9a
[mlir][spirv] Reject coop matrix operands on unsupported arithmetic ops (#147230)
Cooperative matrix operands are only supported for `add/sub/mul/div`
binary arithmetic ops, but currently all binary arithmetic ops accept
cooperative matrix operands, including `mod/rem`. This change fixes this
behaviour.
2025-07-08 10:44:37 -04:00
lonely eagle
517cda12e5
[mlir][vector] Add foldInsertUseChain folder function to insert op (#147045)
When the result of an insert op is used by another insert op, and the
subsequent insert op inserts at the same position as the previous one,
replace the dest of the subsequent insert op with the dest of the
previous insert op. This is valid because the previous insert op's write
is fully overwritten and does not affect the subsequent insert op.
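For example:
```
%0 = vector.insert %a, %dest[3] : f32 into vector<8xf32>
%1 = vector.insert %b, %0[3] : f32 into vector<8xf32>

==>

// %1 fully overwrites the element written by %0, so %0's dest is used directly
%1 = vector.insert %b, %dest[3] : f32 into vector<8xf32>
```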

---------

Co-authored-by: Mehdi Amini <joker.eph@gmail.com>
Co-authored-by: Andrzej Warzyński <andrzej.warzynski@gmail.com>
2025-07-08 22:39:18 +08:00
Kajetan Puchalski
9006bc8717
[OpenMP] Enable simd in non-reduction composite constructs (#146097)
Despite currently being ignored with a warning, simd as a leaf in
composite constructs behaves as expected when the construct does not
contain a reduction. Enable it for those non-reduction constructs.

---------

Signed-off-by: Kajetan Puchalski <kajetan.puchalski@arm.com>
2025-07-08 14:27:33 +01:00
Rolf Morel
db7888ca9a
[MLIR][Transform] Introduce transform.tune.knob op (#146732)
A new transform op to represent that an attribute is to be chosen from a
set of alternatives and that this choice is made available as a
`!transform.param`. When a `selected` argument is provided, the op's
`apply()` semantics is that of just making this selected attribute
available as the result. When `selected` is not provided, `apply()`
complains that nothing has resolved the non-determinism that the op is
representing.
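A speculative sketch (the assembly format and option names are assumed, not taken from the patch):
```
// choose a tile size from the given alternatives; `selected = 8` resolves
// the non-determinism so apply() can produce the attribute as a result
%tile = transform.tune.knob<"tile_size"> = 8 from options = [2, 4, 8, 16]
          -> !transform.any_param
```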
2025-07-08 11:00:34 +01:00
Niklas Degener
dcc692a42f
[MLIR][Target/Cpp] Natural induction variable naming. (#136102)
Changed naming of loop induction variables to follow natural naming (i,
j, k, ...). This helps readability and makes it easier to locate the
positions referred to.
Created new scopes to represent the different behavior at function and
loop level, while still enabling re-use of value names between different
functions (as before). Removed unused scoping at other levels.
2025-07-08 09:18:00 +02:00
Jakub Kuderski
6512ca7ddb
[mlir] Add isStatic* size check for ShapedTypes. NFCI. (#147085)
The motivation is to avoid having to negate `isDynamic*` checks, avoid
double negations, and allow for `ShapedType::isStaticDim` to be used in
ADT functions without having to wrap it in a lambda performing the
negation.

Also add the new functions to C and Python bindings.
2025-07-07 14:57:27 -04:00