290 Commits

Author SHA1 Message Date
Maksim Levental
c090ed53fb
[mlir][NFC] update mlir/Dialect create APIs (33/n) (#150659)
See https://github.com/llvm/llvm-project/pull/147168 for more info.
2025-07-25 16:13:55 -04:00
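The series above only links to the umbrella PR; as a hedged sketch (the op and operands below are placeholders, and the exact entry point is assumed from #147168), the migration moves call sites from the builder's member template to the op's static `create`:
```
#include "mlir/Dialect/Arith/IR/Arith.h"
#include "mlir/IR/Builders.h"

using namespace mlir;

// Hypothetical helper showing the call-site change; lhs/rhs are placeholders.
Value buildAdd(OpBuilder &b, Location loc, Value lhs, Value rhs) {
  // Old spelling, being migrated away from in this series:
  //   return b.create<arith::AddIOp>(loc, lhs, rhs);
  // New spelling:
  return arith::AddIOp::create(b, loc, lhs, rhs);
}
```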
Maksim Levental
588845defd
[mlir][NFC] update mlir/Dialect create APIs (20/n) (#149927)
See https://github.com/llvm/llvm-project/pull/147168 for more info.
2025-07-24 07:09:58 -05:00
Longsheng Mou
3eb49c482c
[mlir][NFC] Use hasOneBlock instead of llvm::hasSingleElement(region) (#149809) 2025-07-24 10:11:21 +08:00
Kazu Hirata
c06d3a7b72
[mlir] Remove unused includes (NFC) (#148769)
These are identified by misc-include-cleaner.  I've filtered out those
that break builds.  Also, I'm staying away from llvm-config.h,
config.h, and Compiler.h, which likely cause platform- or
compiler-specific build failures.
2025-07-14 22:19:23 -07:00
Kazu Hirata
70dce3d987
[mlir] Migrate away from std::nullopt (NFC) (#145842)
ArrayRef has a constructor that accepts std::nullopt.  This
constructor dates back to the days when we still had llvm::Optional.

Since the use of std::nullopt outside the context of std::optional is
kind of an abuse and not intuitive to newcomers, I would like to move
away from the constructor and eventually remove it.

This patch replaces std::nullopt with {}.
2025-06-26 08:41:02 -07:00
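For illustration, a minimal before/after of the call-site change this migration implies (the `configure` function is a made-up example, not an upstream API):
```
#include "llvm/ADT/ArrayRef.h"
#include <cstdint>

// Hypothetical API taking an ArrayRef.
void configure(llvm::ArrayRef<int64_t> shape);

void caller() {
  // Old: relied on the ArrayRef(std::nullopt) constructor slated for removal.
  //   configure(std::nullopt);
  // New: an empty braced list builds an empty ArrayRef directly.
  configure({});
}
```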
Andrei Golubev
ee070d0816
[mlir][bufferization] Support custom types (1/N) (#142986)
Following the addition of TensorLike and BufferLike type interfaces (see
00eaff3e9c897c263a879416d0f151d7ca7eeaff), introduce minimal changes
required to bufferize a custom tensor operation into a custom buffer
operation.

To achieve this, new interface methods are added to the TensorLike type
interface that abstract away the differences between the existing (tensor ->
memref) and custom conversions.

The scope of the changes is intentionally limited (for example,
BufferizableOpInterface is untouched) in order to first understand the
basics and reach consensus design-wise.

---
Notable changes:
* mlir::bufferization::getBufferType() returns BufferLikeType (instead
of BaseMemRefType)
* ToTensorOp / ToBufferOp operate on TensorLikeType / BufferLikeType.
Operation argument "memref" renamed to "buffer"
* ToTensorOp's tensor type inferring builder is dropped (users now need
to provide the tensor type explicitly)
2025-06-18 16:18:12 +02:00
Michele Scuttari
63cb6af782
[MLIR] Add bufferization state to getBufferType and resolveConflicts interface methods (#141466)
The PR continues the work started in #141019 by adding the `BufferizationState` class also to the `getBufferType` and `resolveConflicts` interface methods, together with the additional support functions that are used throughout the bufferization infrastructure.
2025-05-28 10:35:23 +02:00
Michele Scuttari
61d5fdf50c
[MLIR] Add bufferization state class to OneShotBufferization pass (#141019)
Follow-up on #138143, which was reverted due to a missing update to a method signature (more specifically, the bufferization interface for `tensor::ConcatOp`) that was not caught before merging. The old PR description is reported in the next lines.

This PR is a follow-up on https://github.com/llvm/llvm-project/pull/138125, and adds a bufferization state class providing information about the IR. The information currently consists of a cached list of symbol tables, which aims to solve the quadratic scaling of the bufferization task with respect to the number of symbols. The PR breaks API compatibility: the bufferize method of the BufferizableOpInterface has been enriched with a reference to a BufferizationState object.

The bufferization state must be kept in a valid state by the interface implementations. For example, if an operation with the Symbol trait is inserted or replaced, its parent SymbolTable must be updated accordingly (see, for example, the bufferization of arith::ConstantOp, where the symbol table of the module gets the new global symbol inserted). Similarly, the invalidation of a symbol table must be performed if an operation with the SymbolTable trait is removed (this can be performed using the invalidateSymbolTable method, introduced in https://github.com/llvm/llvm-project/pull/138014).
2025-05-23 09:21:35 +02:00
Michele Scuttari
72a8893689
Revert "[MLIR] Add bufferization state class to OneShotBufferization pass" (#141012)
Reverts llvm/llvm-project#138143

The PR for the BufferizationState is temporarily reverted due to API incompatibilities that were initially missed during the update and were not caught by the PR checks.
2025-05-22 09:25:07 +02:00
Michele Scuttari
67fc1660d9
[MLIR] Add bufferization state class to OneShotBufferization pass (#138143)
This PR is a follow-up on #138125, and adds a bufferization state class providing information about the IR. The information currently consists of a cached list of symbol tables, which aims to solve the quadratic scaling of the bufferization task with respect to the number of symbols. The PR breaks API compatibility: the `bufferize` method of the `BufferizableOpInterface` has been enriched with a reference to a `BufferizationState` object.

The bufferization state must be kept in a valid state by the interface implementations. For example, if an operation with the `Symbol` trait is inserted or replaced, its parent `SymbolTable` must be updated accordingly (see, for example, the bufferization of `arith::ConstantOp`, where the symbol table of the module gets the new global symbol inserted). Similarly, the invalidation of a symbol table must be performed if an operation with the `SymbolTable` trait is removed (this can be performed using the `invalidateSymbolTable` method, introduced in #138014).
2025-05-22 08:53:38 +02:00
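A rough sketch of the signature change described in the bufferization-state commits above (a hedged illustration, not the exact upstream interface declaration; the hook body is elided):
```
#include "mlir/Dialect/Bufferization/IR/BufferizableOpInterface.h"
#include "mlir/IR/PatternMatch.h"

using namespace mlir;
using namespace mlir::bufferization;

// Sketch of a bufferize hook after this change: it now receives a
// BufferizationState carrying cached IR information such as symbol tables,
// which implementations must keep valid when inserting/removing symbols.
LogicalResult bufferizeMyOp(Operation *op, RewriterBase &rewriter,
                            const BufferizationOptions &options,
                            BufferizationState &state) {
  (void)op; (void)rewriter; (void)options; (void)state;
  return success();
}
```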
Mateusz Sokół
df84aa8e06
[MLIR][Shape] Support >2 args in shape.broadcast folder (#126808)
Hi!

As the title says, this PR adds support for >2 arguments in the
`shape.broadcast` folder by sequentially calling `getBroadcastedShape`.
2025-04-14 13:50:52 -04:00
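A hedged sketch of the sequential strategy mentioned above, using the existing `OpTrait::util::getBroadcastedShape` helper to fold N shapes pairwise (the wrapper itself is illustrative, not the upstream folder):
```
#include "mlir/Dialect/Traits.h"
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"
#include <utility>

using namespace mlir;

// Combine N static shapes by broadcasting them one at a time; returns false
// as soon as a pair is not broadcast-compatible.
bool broadcastAll(ArrayRef<SmallVector<int64_t>> shapes,
                  SmallVectorImpl<int64_t> &result) {
  result.clear(); // a rank-0 shape broadcasts with anything
  for (const SmallVector<int64_t> &shape : shapes) {
    SmallVector<int64_t> next;
    if (!OpTrait::util::getBroadcastedShape(result, shape, next))
      return false;
    result = std::move(next);
  }
  return true;
}
```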
Alaa Ali
5812516ae2
[MLIR] Fix canonicalization pattern for 'shape.shape_of' (#134234)
This PR will fix a bug in a canonicalization pattern (operation
shape.shape_of: shape of reshape)

```
// Before
func.func @f(%arg0: tensor<?x1xf32>, %arg1: tensor<3xi32>) -> tensor<3xindex> {
  %reshape = tensor.reshape %arg0(%arg1) : (tensor<?x1xf32>, tensor<3xi32>) -> tensor<?x1x1xf32>
  %0 = shape.shape_of %reshape : tensor<?x1x1xf32> -> tensor<3xindex>
  return %0 : tensor<3xindex>
}
// This will error out as follows:
error: 'tensor.cast' op operand type 'tensor<3xi32>' and result type 'tensor<3xindex>' are cast incompatible
  %0 = shape.shape_of %reshape : tensor<?x1x1xf32> -> tensor<3xindex>
       ^
note: see current operation: %0 = "tensor.cast"(%arg1) : (tensor<3xi32>) -> tensor<3xindex>
```

```
// After
func.func @f(%arg0: tensor<?x1xf32>, %arg1: tensor<3xi32>) -> tensor<3xindex> {
  %0 = arith.index_cast %arg1 : tensor<3xi32> to tensor<3xindex>
  return %0 : tensor<3xindex>
}
```
See file canonicalize.mlir in the change list for an example.

For context, this bug was found while running a test on Keras 3: the
canonicalizer errors out due to an invalid tensor.cast operation when
the batch size is dynamic.
The operand of the op is a tensor<3xi32> being cast to tensor<3xindex>.
This change is related to a previous PR:
https://github.com/llvm/llvm-project/pull/98531

---------

Co-authored-by: Alaa Ali <alaaali@ah-alaaali-l.dhcp.mathworks.com>
Co-authored-by: Mehdi Amini <joker.eph@gmail.com>
2025-04-04 11:46:58 +02:00
AdityaK
340f06a8d4
Fix: bail out when divisor is zero (#133518)
Fixes: #131279
2025-04-02 09:44:18 -07:00
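The commit message above is terse; purely as a hypothetical illustration of the "bail out" pattern it names (not the actual op or code), a constant folder returns an empty `OpFoldResult` instead of dividing by zero:
```
#include "mlir/IR/Builders.h"
#include "mlir/IR/OpDefinition.h"
#include "llvm/ADT/APInt.h"

using namespace mlir;

// Hypothetical folder fragment; assumes both operands have the same bit width.
OpFoldResult foldSignedDiv(const llvm::APInt &lhs, const llvm::APInt &rhs,
                           Builder &builder) {
  if (rhs.isZero())
    return {}; // bail out: leave the op unfolded rather than divide by zero
  return builder.getI64IntegerAttr(lhs.sdiv(rhs).getSExtValue());
}
```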
Matthias Springer
6c867e27a7
[mlir] Use getSingleElement/hasSingleElement in various places (#131460)
This is a code cleanup. Update a few places in MLIR that should use
`hasSingleElement`/`getSingleElement`.

Note: `hasSingleElement` is faster than `.getSize() == 1` when it is
used with linked lists etc.

Depends on #131508.
2025-03-17 07:43:18 +01:00
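For reference, a tiny hedged example of the two helpers (assuming both live in llvm/ADT/STLExtras.h, with `getSingleElement` coming from #131508):
```
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"

// Prefer the range helpers over size comparisons; hasSingleElement only
// advances the iterator once, which matters for non-random-access ranges.
int firstIfSingle(const llvm::SmallVector<int> &values) {
  if (llvm::hasSingleElement(values))
    return llvm::getSingleElement(values); // asserts exactly one element
  return -1;
}
```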
lorenzo chelini
dc74d2f831
[MLIR][NFC] Retire let constructor for Shape and MLProgram (#128869)
`let constructor` is legacy (do not use in tree!) since the TableGen
backend emits most of the glue logic to build a pass. This PR retires
the `let constructor` usage for the Shape and MLProgram dialects.
2025-02-27 08:25:03 +01:00
jeanPerier
327d627066
[mlir] share argument attributes interface between calls and callables (#123176)
This patch shares the core interface methods dealing with argument and
result attributes from CallableOpInterface with the CallOpInterface and
makes them mandatory, to give more consistent guarantees about concrete
operations using these interfaces.

This allows adding argument attributes on call-like operations, which is
sometimes required to get the proper ABI, as with llvm.call (and llvm.invoke).


The patch adds optional `arg_attrs` and `res_attrs` attributes to operations using
these interfaces that did not have that already.
They can then re-use the common "rich function signature"
printing/parsing helpers if they want (for the LLVM dialect, this is
done in the next patch).

Part of RFC: https://discourse.llvm.org/t/mlir-rfc-adding-argument-and-result-attributes-to-llvm-call/84107
2025-02-03 11:27:14 +01:00
Jacques Pienaar
09dfc5713d
[mlir] Enable decoupling two kinds of greedy behavior. (#104649)
The greedy rewriter is used in many different flows and it has a lot of
convenience (work list management, debugging actions, tracing, etc.). But
it combines two kinds of greedy behavior: 1) how ops are matched, and 2)
folding wherever it can.

These are independent forms of greediness, and combining them leads to
inefficiency, e.g., in cases where one needs to create different lowering
phases and apply patterns in a specific order split across different
passes. Using the driver, one ends up needlessly retrying folding/having
multiple rounds of folding attempts, where one final run would have
sufficed.

Of course folks can locally avoid this behavior by just building their
own driver, but this is also a commonly requested feature that folks keep
working around locally in suboptimal ways.

For downstream users, there should be no behavioral change. Updating
from the deprecated API should just be a find-and-replace (e.g., of the `find ./
-type f -exec sed -i
's|applyPatternsAndFoldGreedily|applyPatternsGreedily|g' {} \;` variety),
as the API arguments haven't changed between the two.
2024-12-20 08:15:48 -08:00
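A minimal sketch of the downstream update mentioned above (pattern population elided; same arguments, new entry-point name):
```
#include "mlir/IR/PatternMatch.h"
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"
#include <utility>

using namespace mlir;

// Deprecated spelling:
//   (void)applyPatternsAndFoldGreedily(root, std::move(patterns));
LogicalResult runPatterns(Operation *root, RewritePatternSet &&patterns) {
  return applyPatternsGreedily(root, std::move(patterns));
}
```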
Chenhui Huang
1e18815fdc
[MLIR] fix shape.broadcast canonicalize with all empty shape operands (#118941)
Example: all the operands of `shape.broadcast` are empty tensors.
```
func.func @all_empty(%arg0: tensor<f32>) -> tensor<0xindex> {
  %1 = shape.shape_of %arg0 : tensor<f32> -> tensor<0xindex>
  %2 = shape.const_shape [] : tensor<0xindex>
  %3 = shape.broadcast %1, %2, %1 : tensor<0xindex>, tensor<0xindex>, tensor<0xindex> -> tensor<0xindex>
  return %3 : tensor<0xindex>
}
```

One can reproduce the crash when canonicalizing in bottom-up order, with a
command like this:
`mlir-opt -split-input-file -allow-unregistered-dialect
-canonicalize="test-convergence top-down=0" %s`

The root cause is that when all operands are empty tensors,
`RemoveEmptyShapeOperandsPattern` filters out all of the operands.

Co-authored-by: Kai Sasaki <lewuathe@gmail.com>
2024-12-20 19:32:11 +09:00
Jakub Kuderski
f4d7586343
[mlir] Use llvm::filter_to_vector. NFC. (#117655)
This got recently added to SmallVectorExtras:
https://github.com/llvm/llvm-project/pull/117460.
2024-11-26 09:11:36 -05:00
Matthias Springer
b613a54075
[mlir][IR][NFC] Cleanup insertion point API usage (#115415)
Use `setInsertionPointToStart` / `setInsertionPointToEnd` when possible.
2024-11-08 14:31:27 +09:00
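For illustration, the preferred spellings from this cleanup in a small sketch:
```
#include "mlir/IR/Builders.h"

using namespace mlir;

// Use the dedicated helpers instead of iterator-based insertion points.
void positionBuilder(OpBuilder &builder, Block &block) {
  builder.setInsertionPointToStart(&block); // rather than (&block, block.begin())
  builder.setInsertionPointToEnd(&block);   // rather than (&block, block.end())
}
```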
Kazu Hirata
65a5b18aa0
[Shape] Avoid repeated hash lookups (NFC) (#108926) 2024-09-17 00:18:33 -07:00
Rafael Ubal
38d0b2d174
[mlir] New canonicalization patterns for shape.shape_of and tensor.reshape (#98531)
This PR includes 3 new canonicalization patterns:

- Operation `shape.shape_of`: shape of reshape

```
// Before
func.func @f(%arg0: tensor<*xf32>, %arg1: tensor<?xindex>) -> tensor<?xindex> {
  %reshape = tensor.reshape %arg0(%arg1) : (tensor<*xf32>, tensor<?xindex>) -> tensor<*xf32>
  %0 = shape.shape_of %reshape : tensor<*xf32> -> tensor<?xindex>
  return %0 : tensor<?xindex>
}

// After
func.func @f(%arg0: tensor<*xf32>, %arg1: tensor<?xindex>) -> tensor<?xindex> {
  return %arg1 : tensor<?xindex>
}
```

- Operation `tensor.reshape`: reshape of reshape

```
// Before
func.func @fold_tensor_reshape(%arg0: tensor<*xf32>, %arg1: tensor<?xindex>, %arg2: tensor<?xindex>) -> tensor<*xf32> {
  %0 = tensor.reshape %arg0(%arg1) : (tensor<*xf32>, tensor<?xindex>) -> tensor<*xf32>
  %1 = tensor.reshape %0(%arg2) : (tensor<*xf32>, tensor<?xindex>) -> tensor<*xf32>
  return %1 : tensor<*xf32>
}

// After
func.func @fold_tensor_reshape(%arg0: tensor<*xf32>, %arg1: tensor<?xindex>, %arg2: tensor<?xindex>) -> tensor<*xf32> {
  %reshape = tensor.reshape %arg0(%arg2) : (tensor<*xf32>, tensor<?xindex>) -> tensor<*xf32>
  return %reshape : tensor<*xf32>
}
```

- Operation `tensor.reshape`: reshape 1D to 1D

```
// Before
func.func @fold_reshape_1d(%input: tensor<?xf32>, %shape: tensor<1xindex>) -> tensor<?xf32> {
  %0 = tensor.reshape %input(%shape) : (tensor<?xf32>, tensor<1xindex>) -> tensor<?xf32>
  return %0 : tensor<?xf32>
}

// After
func.func @fold_reshape_1d(%arg0: tensor<?xf32>, %arg1: tensor<1xindex>) -> tensor<?xf32> {
  return %arg0 : tensor<?xf32>
}
```

These three canonicalization patterns cooperate to simplify the IR
structure emerging from the lowering of certain element-wise ops with
unranked tensor inputs. See file `unranked-tensor-lowering.mlir` in the
proposed change list for a detailed example and description.

For context, this PR is meant to enable code optimizations for the code
generated while lowering ops `quant.qcast` and `quant.dcast` with
unranked tensors, as proposed in
https://discourse.llvm.org/t/rfc-improvements-in-the-quant-dialect/79942
(implementation currently in progress).
2024-07-19 10:09:31 -04:00
Kunwar Grover
debdbeda15
[mlir] Remove dialect specific bufferization passes (Reland) (#93535)
These passes have been deprecated for a long time and replaced by
one-shot bufferization. These passes are also unsafe because they do not
check for read-after-write conflicts.

Relands https://github.com/llvm/llvm-project/pull/93488 which failed on
buildbot. Fixes the failure by updating integration tests to use
one-shot-bufferize instead.
2024-05-28 20:04:27 +01:00
Kunwar Grover
39848d0a98
Revert "[mlir] Remove dialect specific bufferization passes" (#93528)
Reverts llvm/llvm-project#93488

Buildbot failure:
https://lab.llvm.org/buildbot/#/builders/220/builds/39911
2024-05-28 11:21:34 +01:00
Kunwar Grover
2fc5106437
[mlir] Remove dialect specific bufferization passes (#93488)
These passes have been deprecated for a long time and replaced by
one-shot bufferization. These passes are also unsafe because they do not
check for read-after-write conflicts.
2024-05-28 11:12:58 +01:00
Jakub Kuderski
971b852546
[mlir][NFC] Simplify type checks with isa predicates (#87183)
For more context on isa predicates, see:
https://github.com/llvm/llvm-project/pull/83753.
2024-04-01 11:40:09 -04:00
Justin Fargnoli
513cdb8222
[mlir] Declare promised interfaces for all dialects (#78368)
This PR adds promised interface declarations for all interfaces declared
in `InitAllDialects.h`.

Promised interfaces allow a dialect to declare that it will have an
implementation of a particular interface, crashing the program if one
isn't provided when the interface is used.
2024-03-15 20:23:20 -07:00
Matthias Springer
91d5653e3a
[mlir] Use OpBuilder::createBlock in op builders and patterns (#82770)
When creating a new block in (conversion) rewrite patterns,
`OpBuilder::createBlock` must be used. Otherwise, no
`notifyBlockInserted` notification is sent to the listener.

Note: The dialect conversion relies on listener notifications to keep
track of IR modifications. Creating blocks without the builder API can
lead to memory leaks during rollback.
2024-02-24 09:10:07 +01:00
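A hedged sketch of the pattern-side idiom this commit standardizes on (the region, argument type, and location are placeholders):
```
#include "mlir/IR/Builders.h"
#include "mlir/IR/PatternMatch.h"
#include "llvm/ADT/SmallVector.h"

using namespace mlir;

// Create the block through the rewriter so listeners (e.g. the dialect
// conversion driver) receive a notifyBlockInserted notification, instead of
// constructing a Block manually and pushing it into the region.
Block *addEntryBlock(RewriterBase &rewriter, Region &region, Type argType,
                     Location loc) {
  SmallVector<Type, 1> argTypes{argType};
  SmallVector<Location, 1> argLocs{loc};
  return rewriter.createBlock(&region, region.end(), argTypes, argLocs);
}
```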
Matthias Springer
dbb782dffd
[mlir][shape] Turn ShapeOfOp folding into canonicalization pattern (#74438)
The `ShapeOfOp` folder used to generate invalid IR.

Input:
```
%0 = shape.shape_of %arg1 : tensor<index> -> tensor<?xindex>
```

Output:
```
%0 = "shape.const_shape"() <{shape = dense<> : tensor<0xindex>}> : () -> tensor<?xindex>
error: 'shape.const_shape' op inferred type(s) 'tensor<0xindex>' are incompatible with return type(s) of operation 'tensor<?xindex>'
```

This rewrite cannot be implemented as a folder because the result type
may have to change. In the above example, the original `shape.shape_of`
op had a return type of `tensor<?xindex>`, but the folded attribute
(materialized as a `shape.const_shape` op) must have a type of
`tensor<0xindex>` to be valid.

This commit fixes tests such as
`mlir/test/Dialect/Shape/canonicalize.mlir` when verifying the IR after
each pattern application (#74270).
2023-12-06 09:41:24 +09:00
Ivan Butygin
5dce74817b [mlir][ub] Add poison support to CommonFolders.h
Return poison from foldBinary/unary if any argument is poison. Add the ub dialect as a dependency to the affected dialects (arith, math, spirv, shape).
Add poison materialization to these dialects. Add tests for some ops from each dialect.
Not all affected ops are covered, as that would involve a huge amount of copy-paste.

Differential Revision: https://reviews.llvm.org/D159013
2023-09-07 12:30:29 +02:00
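A hedged sketch of the materialization side mentioned above (the exact ub.poison builder signature is assumed; this shows the common shape of a dialect's materializeConstant hook, not a verbatim copy of the patch):
```
#include "mlir/Dialect/UB/IR/UBOps.h"
#include "mlir/IR/Builders.h"

using namespace mlir;

// Inside a dialect's materializeConstant hook: recognize the poison attribute
// and materialize it as a ub.poison op; otherwise defer to the usual handling.
Operation *materializeMaybePoison(OpBuilder &builder, Attribute value,
                                  Type type, Location loc) {
  if (auto poison = dyn_cast<ub::PoisonAttr>(value))
    return builder.create<ub::PoisonOp>(loc, type, poison);
  return nullptr;
}
```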
Martin Erhart
34a35a8b24 [mlir] Move FunctionInterfaces to Interfaces directory and inherit from CallableOpInterface
Functions are always callable operations and thus every operation
implementing the `FunctionOpInterface` also implements the
`CallableOpInterface`. The only exception was the FuncOp in the toy
example. To make implementation of the `FunctionOpInterface` easier,
this commit lets `FunctionOpInterface` inherit from
`CallableOpInterface` and merges some of their methods. More precisely,
the `CallableOpInterface` has methods to get the argument and result
attributes and a method to get the result types of the callable region.
These methods are always implemented the same way as their analogues in
`FunctionOpInterface`, and thus this commit moves all the argument and
result attribute handling methods to the callable interface, as well as
the methods to get the argument and result types. The
`FunctionOpInterface` then does not have to declare them itself, but
just inherits them from the `CallableOpInterface`.
Adding the inheritance relation also required moving the
`FunctionOpInterface` from the IR directory to the Interfaces directory,
since IR should not depend on Interfaces.

Reviewed By: jpienaar, springerm

Differential Revision: https://reviews.llvm.org/D157988
2023-08-31 11:28:23 +00:00
Markus Böck
4dd744ac9c Reland "[mlir] Use a type for representing branch points in RegionBranchOpInterface"
This reverts commit b26bb30b467b996c9786e3bd426c07684d84d406.
2023-08-30 09:31:54 +02:00
Markus Böck
b26bb30b46 Revert "[mlir] Use a type for representing branch points in RegionBranchOpInterface"
This reverts commit 024f562da67180b7be1663048c960b26c2cc16f8.

Forgot to update flang
2023-08-29 20:17:50 +02:00
Markus Böck
024f562da6 [mlir] Use a type for representing branch points in RegionBranchOpInterface
The current implementation is not very ergonomic or descriptive: It uses `std::optional<unsigned>` where `std::nullopt` represents the parent op and `unsigned` is the region number.
This doesn't give us any useful methods specific to region control flow and makes the code fragile to changes due to having to take the region number into account.

This patch introduces a new type called `RegionBranchPoint`, replacing all uses of `std::optional<unsigned>` in the interface. It can be implicitly constructed from a region or a `RegionSuccessor`, can be compared with a region to check whether the branch point is branching from the parent, adds `isParent` to check whether we are coming from a parent op and adds `RegionSuccessor::parent` as a descriptive way to indicate branching from the parent.

Differential Revision: https://reviews.llvm.org/D159116
2023-08-29 20:02:23 +02:00
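A hedged sketch of how an implementation reads with the new type (the single-region op, its `body` region, and `results` are hypothetical stand-ins):
```
#include "mlir/Interfaces/ControlFlowInterfaces.h"
#include "llvm/ADT/SmallVector.h"

using namespace mlir;

// Successor computation for a hypothetical single-region op.
void getSuccessorsSketch(RegionBranchPoint point, Region &body,
                         Operation::result_range results,
                         SmallVectorImpl<RegionSuccessor> &successors) {
  if (point.isParent()) {
    // Branching from the parent op itself: enter the body region.
    successors.push_back(RegionSuccessor(&body));
    return;
  }
  // Branching from inside the body: return to the parent op's results.
  successors.push_back(RegionSuccessor(results));
}
```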
Kazu Hirata
69ffd49c43 [mlir] Use {DenseSet,DenseMap,SmallDenseSet}::contains (NFC) 2023-08-27 13:23:36 -07:00
Matthias Springer
a02ad6c177 [mlir][bufferization] Generalize getAliasingOpResults to getAliasingValues
This revision is needed to support bufferization of `cf.br`/`cf.cond_br`. It will also be useful for better analysis of loop ops.

This revision generalizes `getAliasingOpResults` to `getAliasingValues`. An OpOperand can now not only alias with OpResults but also with BlockArguments. In the case of `cf.br` (will be added in a later revision): a `cf.br` operand will alias with the corresponding argument of the destination block.

If an op does not implement the `BufferizableOpInterface`, the analysis is conservative. It previously assumed that an OpOperand may alias with each OpResult. It now assumes that an OpOperand may alias with each OpResult and each BlockArgument of the entry block.

Differential Revision: https://reviews.llvm.org/D157957
2023-08-15 15:02:47 +02:00
Markus Böck
138df29820 [mlir] Revamp RegionBranchOpInterface successor mechanism
The `RegionBranchOpInterface` had a few fundamental issues caused by the API design of `getSuccessorRegions`.

It always required passing values for the `operands` parameter. This is problematic as the operands parameter actually changes meaning depending on which predecessor `index` is referring to. If coming from a region, you'd have to find a `RegionBranchTerminatorOpInterface` in that region, get its operand count, and then create a `SmallVector` of that size.
This is not only inconvenient, but also error-prone, which has led to a bug in the implementation of a previously existing `getSuccessorRegions` overload.

Additionally, this made the method dual-use, trying to serve two different use-cases: 1) Trying to determine possible control flow edges between regions and 2) Trying to determine the region being branched to based on constant operands.

This patch fixes these issues by changing the interface methods and adding new ones:
* The `operands` argument of `getSuccessorRegions` has been removed. The method is now only responsible for returning possible control flow edges between regions.
* An optional `getEntrySuccessorRegions` method has been added. This is used to determine which regions are branched to from the parent op based on constant operands of the parent op. By default, it calls `getSuccessorRegions`. This is analogous to `getSuccessorForOperands` from `BranchOpInterface`.
* Add `getSuccessorRegions` to `RegionBranchTerminatorOpInterface`. This is used to get the possible successors of the terminator based on constant operands. By default, it calls the containing `RegionBranchOpInterface`'s `getSuccessorRegions` method.
* `getSuccessorEntryOperands` was renamed to `getEntrySuccessorOperands` for consistency.

Differential Revision: https://reviews.llvm.org/D157506
2023-08-10 10:27:27 +02:00
Jacques Pienaar
e767ef47a2 [mlir] Enable converting properties during C create
This enables querying properties passed as attributes at construction
time. In particular, this is needed for type inference, where the
Operation has not been created yet at that point. This allows Python
construction of operations whose type inference depends on properties.

Differential Revision: https://reviews.llvm.org/D156070
2023-07-23 21:40:12 -07:00
Amanda Tang
729ffb5cdf [ODS] Quick fix
Quick fix for a failing test in https://reviews.llvm.org/rG5267ed05bc4612e91409d63b4dbc4e01751acb75

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D155637
2023-07-18 20:44:14 +00:00
Amanda Tang
5267ed05bc [ODS] Use Adaptor Traits for Type Inference
Author inferReturnTypes methods with the Op Adaptor by using the InferTypeOpAdaptor.

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D155115
2023-07-18 17:58:31 +00:00
Jacques Pienaar
7b59121be5 [mlir][shape] Remove overzealous Dim verifier
Follow up of D143999 and follow
https://mlir.llvm.org/getting_started/DeveloperGuide/#ir-verifier.

Fixes #60808.
2023-07-05 20:01:32 -07:00
Tres Popp
68f58812e3 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change begins the migration of uses of the method to the
corresponding function call as has been decided as more consistent.

Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This patch updates all remaining uses of the deprecated functionality in
mlir/. This was done with clang-tidy as described below and further
modifications to GPUBase.td and OpenMPOpsInterfaces.td.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
   and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc
```

Differential Revision: https://reviews.llvm.org/D151542
2023-05-26 10:29:55 +02:00
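A small example of the target style for this series (built-in types used just for illustration):
```
#include "mlir/IR/BuiltinTypes.h"

using namespace mlir;

// Free-function casts instead of the member spellings
// (type.isa<IntegerType>() / type.cast<IntegerType>()).
unsigned bitWidthOrZero(Type type) {
  if (auto intType = dyn_cast<IntegerType>(type))
    return intType.getWidth();
  return 0;
}
```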
Mehdi Amini
a58ed7fcf6 Fix ConstShapeOp::inferReturnTypes to be resilient to lack of properties
The Python bindings tests aren't using properties yet; this is a bit
of a hack to support that here, but hopefully it'll be temporary.
2023-05-15 12:12:28 -07:00
Mehdi Amini
bbe5bf1788 Cleanup uses of getAttrDictionary() in MLIR to use getDiscardableAttrDictionary() when possible
This also speeds up some benchmarks compiling a simple Fortran file by 2x!
Fixes #62687

Differential Revision: https://reviews.llvm.org/D150540
2023-05-15 11:35:50 -07:00
Tres Popp
c1fa60b4cd [mlir] Update method cast calls to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change begins the migration of uses of the method to the
corresponding function call as has been decided as more consistent.

Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.

Context:

* https://mlir.llvm.org/deprecation/ at "Use the free function variants for dyn_cast/cast/isa/…"
* Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This follows a previous patch that updated calls
`op.cast<T>()` -> `cast<T>(op)`. However, some cases could not handle an
unprefixed `cast` call, either due to variables named `cast` or because the
call occurred inside a class definition where it would resolve to the method.
All C++ files that did not work automatically with `cast<T>()` are
updated here to `llvm::cast` and similar with the intention that they
can be easily updated after the methods are removed through a
find-replace.

See https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
for the clang-tidy check that was used; printed occurrences of the
function were then updated to include the `llvm::` prefix.

One can then run the following:
```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
                 -export-fixes /tmp/cast/casts.yaml mlir/*\
                 -header-filter=mlir/ -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc
```

Differential Revision: https://reviews.llvm.org/D150348
2023-05-12 11:21:30 +02:00
Tres Popp
5550c82189 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change begins the migration of uses of the method to the
corresponding function call as has been decided as more consistent.

Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.

Caveats include:
- This clang-tidy script probably has more problems.
- This only touches C++ code, so nothing that is being generated.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This first patch was created with the following steps. The intention is
to only do automated changes at first, so I waste less time if it's
reverted, and so the first mass change is more clear as an example to
other teams that will need to follow similar steps.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
   and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.
4. Some changes have been deleted for the following reasons:
   - Some files had a variable also named cast
   - Some files had not included a header file that defines the cast
     functions
   - Some files are definitions of the classes that have the casting
     methods, so the code still refers to the method instead of the
     function without adding a prefix or removing the method declaration
     at the same time.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc

git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp\
            mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp\
            mlir/lib/**/IR/\
            mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp\
            mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp\
            mlir/test/lib/Dialect/Test/TestTypes.cpp\
            mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp\
            mlir/test/lib/Dialect/Test/TestAttributes.cpp\
            mlir/unittests/TableGen/EnumsGenTest.cpp\
            mlir/test/python/lib/PythonTestCAPI.cpp\
            mlir/include/mlir/IR/
```

Differential Revision: https://reviews.llvm.org/D150123
2023-05-12 11:21:25 +02:00
Tres Popp
5c8ce6d576 [mlir] Move casting method calls to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change continues the migration of uses of the method to the
corresponding function call as has been decided as more consistent.

This commit attempts to update all occurrences of the casts in .td
files, although it is likely that a couple were missed.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
Unfortunately, this was not automated; it was handled by mindlessly
going to the next occurrence of each pattern, selecting the piece of code
to be moved into the function call, and running a vim macro, over the span
of around 4 hours.

Differential Revision: https://reviews.llvm.org/D150199
2023-05-12 11:17:27 +02:00
Mehdi Amini
5e118f933b Introduce MLIR Op Properties
This new feature enables dedicating custom storage inline within operations.
This storage can be used as an alternative to attributes to store data that is
specific to an operation. Attributes can also be stored inside the properties
storage if desired, but any other kind of data can be present as well. This offers
a way to store and mutate data without uniquing in the Context like Attributes.
See the OpPropertiesTest.cpp for an example where a struct with a
std::vector<> is attached to an operation and mutated in-place:

struct TestProperties {
  int a = -1;
  float b = -1.;
  std::vector<int64_t> array = {-33};
};

More complex schemes (including reference counting) are also possible.

The only constraint to enable storing a C++ object as "properties" on an
operation is to implement three functions:

- convert from the candidate object to an Attribute
- convert from the Attribute to the candidate object
- hash the object

Optionally, the parsing and printing can also be customized with two extra
functions.

A new option is introduced in ODS to allow dialects to specify:

  let usePropertiesForAttributes = 1;

When set to true, the inherent attributes for all the ops in this dialect
will be using properties instead of being stored alongside discardable
attributes.
The TestDialect showcases this feature.

Another change is that we introduce new APIs on the Operation class
to access the inherent attributes separately from the discardable ones.
We envision deprecating and removing the `getAttr()`, `getAttrsDictionary()`,
and other similar methods which don't make the distinction explicit, leading
to an entirely separate namespace for discardable attributes.

Recommit d572cd1b067f after fixing python bindings build.

Differential Revision: https://reviews.llvm.org/D141742
2023-05-01 23:16:34 -07:00
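A hedged sketch of the three hooks listed above for the TestProperties example (function names and exact signatures are assumptions for illustration, not the upstream API; the Attribute-to-object direction is only outlined in a comment):
```
#include "mlir/IR/Builders.h"
#include "llvm/ADT/Hashing.h"
#include "llvm/ADT/bit.h"
#include <cstdint>
#include <vector>

using namespace mlir;

struct TestProperties {
  int a = -1;
  float b = -1.;
  std::vector<int64_t> array = {-33};
};

// (1) Object -> Attribute, e.g. for generic printing or serialization.
Attribute getPropertiesAsAttribute(MLIRContext *ctx, const TestProperties &p) {
  Builder b(ctx);
  return b.getDictionaryAttr(
      {b.getNamedAttr("a", b.getI32IntegerAttr(p.a)),
       b.getNamedAttr("b", b.getF32FloatAttr(p.b)),
       b.getNamedAttr("array", b.getI64ArrayAttr(p.array))});
}

// (2) Attribute -> object would parse the dictionary back and report failure
//     through a diagnostic callback (elided here).

// (3) Hash the object so operation hashing and equivalence keep working.
llvm::hash_code computeHash(const TestProperties &p) {
  return llvm::hash_combine(
      p.a, llvm::bit_cast<uint32_t>(p.b),
      llvm::hash_combine_range(p.array.begin(), p.array.end()));
}
```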
Mehdi Amini
1e853421a4 Revert "Introduce MLIR Op Properties"
This reverts commit d572cd1b067f1177a981a4711bf2e501eaa8117b.

Some bots are broken and investigation is needed before relanding.
2023-05-01 15:55:58 -07:00
Mehdi Amini
d572cd1b06 Introduce MLIR Op Properties
This new feature enables dedicating custom storage inline within operations.
This storage can be used as an alternative to attributes to store data that is
specific to an operation. Attributes can also be stored inside the properties
storage if desired, but any other kind of data can be present as well. This offers
a way to store and mutate data without uniquing in the Context like Attributes.
See the OpPropertiesTest.cpp for an example where a struct with a
std::vector<> is attached to an operation and mutated in-place:

struct TestProperties {
  int a = -1;
  float b = -1.;
  std::vector<int64_t> array = {-33};
};

More complex schemes (including reference counting) are also possible.

The only constraint to enable storing a C++ object as "properties" on an
operation is to implement three functions:

- convert from the candidate object to an Attribute
- convert from the Attribute to the candidate object
- hash the object

Optionally, the parsing and printing can also be customized with two extra
functions.

A new option is introduced in ODS to allow dialects to specify:

  let usePropertiesForAttributes = 1;

When set to true, the inherent attributes for all the ops in this dialect
will be using properties instead of being stored alongside discardable
attributes.
The TestDialect showcases this feature.

Another change is that we introduce new APIs on the Operation class
to access the inherent attributes separately from the discardable ones.
We envision deprecating and removing the `getAttr()`, `getAttrsDictionary()`,
and other similar methods which don't make the distinction explicit, leading
to an entirely separate namespace for discardable attributes.

Differential Revision: https://reviews.llvm.org/D141742
2023-05-01 15:35:48 -07:00