421 Commits

Author SHA1 Message Date
Ingo Müller
9442b441c1 [mlir][linalg][transform][python] Fix optional args of PadOp mix-in.
The mix-in did not allow *not* setting many of the arguments, even though
they represent optional attributes. Instead, it set default values, which
have different semantics in some cases. In other cases, the C++ layer
already sets the default values, so the Python-side defaults are redundant
and may become wrong after some future change to the TD or C++ files. With
this patch, `None` is preserved until it reaches the generated binding,
which handles it as desired.
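The pattern can be sketched in plain Python (names and attribute handling are illustrative, not the actual binding code):

```python
def build_pad_op(target, pad_to_multiple_of=None, copy_back_op=None):
    """Illustrative builder: forward only the arguments the caller set.

    Attributes left as None are omitted entirely, so the C++ layer
    applies its own defaults instead of a possibly-stale Python copy.
    """
    attributes = {}
    if pad_to_multiple_of is not None:
        attributes["pad_to_multiple_of"] = pad_to_multiple_of
    if copy_back_op is not None:
        attributes["copy_back_op"] = copy_back_op
    return {"op": "transform.structured.pad", "target": target, **attributes}
```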

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D158844
2023-09-02 11:19:06 +00:00
Matthias Springer
a17313794b [mlir][linalg][transform] Return copy_back op from PadOp.
This patch makes the `transform.structured.pad` op also return a handle
to the copy op that it inserts. This allows further transformation of
that op, such as mapping it to a GPU thread.

The patch was mainly authored by @springerm as part of the WIP patch
https://reviews.llvm.org/D156371, which also has an example usage of
this change.

Reviewed By: nicolasvasilache

Differential Revision: https://reviews.llvm.org/D159088
2023-08-29 14:55:33 +00:00
Ingo Müller
a470df3ffc [mlir][linalg][transform][python] Extend mix-in for Vectorize
Extends the existing mix-in for VectorizeOp with support for the missing unit attributes.

Also fixes the unintuitive behavior where
`structured.VectorizeOp(target=target, vectorize_padding=False)` still resulted in the creation of the `vectorize_padding` UnitAttr.
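The underlying rule: a unit attribute signals "on" by its mere presence, so a `False` flag must not materialize it at all. A minimal sketch of the corrected behavior (a plain dict stands in for the op's attribute map):

```python
def set_unit_attrs(flags):
    """Illustrative helper: only create unit attributes for truthy flags."""
    attrs = {}
    for name, value in flags.items():
        if value:  # presence of a UnitAttr means "enabled"
            attrs[name] = True
    return attrs
```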

Reviewed By: ingomueller-net

Differential Revision: https://reviews.llvm.org/D158726
2023-08-28 08:05:55 +00:00
Yijia Gu
935ba834ed add empty line at the end of the td files 2023-08-23 17:59:36 -07:00
Yijia Gu
ad7e6e90c2 update bazel for python binding 2023-08-23 17:50:29 -07:00
max
2b664d678d [mlir][python bindings] turn on openmp
Just as in https://reviews.llvm.org/D157820, dialect registration is independent of any vendor specific libs having been linked/built/etc.

Reviewed By: rkayaith

Differential Revision: https://reviews.llvm.org/D158670
2023-08-23 18:17:04 -05:00
max
92233062c1 [mlir][python bindings] generate all the enums
This PR implements python enum bindings for *all* the enums - this includes `I*Attrs` (including positional/bit) and `Dialect/EnumAttr`.

There are a few parts to this:

1. CMake: a small addition to `declare_mlir_dialect_python_bindings` and `declare_mlir_dialect_extension_python_bindings` to generate the enum, a boolean arg `GEN_ENUM_BINDINGS` to make it opt-in (even though it works for basically all of the dialects), and an optional `GEN_ENUM_BINDINGS_TD_FILE` for handling corner cases.
2. EnumPythonBindingGen.cpp: there are two weedy aspects here that took investigation:
    1. If an enum attribute is not a `Dialect/EnumAttr`, then the `EnumAttrInfo` record is canonical, as far as both the cases of the enum **and the `AttrDefName`** are concerned. On the other hand, if an enum is a `Dialect/EnumAttr`, then the `EnumAttr` record has the correct `AttrDefName` ("load bearing", i.e., it populates `ods.ir.AttributeBuilder('<NAME>')`), but its `enum` field, an instance of `EnumAttrInfo`, contains the cases. The solution is to generate one enum class for both `Dialect/EnumAttr` and "independent" `EnumAttrInfo` but to make that class interoperable with two builder registrations that both do the right thing (see next sub-bullet).
    2. Because we don't have a good connection to the cpp `EnumAttr`, i.e., only the `enum class` getters are exposed (like `DimensionAttr::get(Dimension value)`), we have to resort to parsing, e.g., `Attribute.parse(f'#gpu<dim {x}>')`. This means that the set of supported `assemblyFormat`s (for the enum) is fixed at compile time of MLIR (currently 2, the only 2 I saw). There might be some things that could be done here, but they would require quite a bit more C API work to support generically (e.g., casting ints to enum cases and binding all the getters, or going generically through the `symbolize*` methods, like `symbolizeDimension(uint32_t)` or `symbolizeDimension(StringRef)`).

A few small changes:

1. In addition, since this patch registers default builders for attributes where people might've had their own builders already written, I added a `replace` param to `AttributeBuilder.insert` (`False` by default).
2. `makePythonEnumCaseName` can't handle all the different ways in which people write their enum cases, e.g., `llvm.CConv.Intel_OCL_BI`, which gets turned into `INTEL_O_C_L_B_I` (because `llvm::convertToSnakeFromCamelCase` doesn't look for runs of caps). So I dropped it. On the other hand, regularization does need to be done because some enums have `None` as a case (and others might have other Python keywords).
3. I turned on `llvm` dialect generation here in order to test `nvvm.WGMMAScaleIn`, which is an enum with [[ d7e26b5620/mlir/include/mlir/IR/EnumAttr.td (L22-L25) | no explicit discriminator ]] for the `neg` case.

Note, dialects that didn't get a `GEN_ENUM_BINDINGS` don't have any enums to generate.

Let me know if I should add more tests (the three trivial ones I added exercise both the supported `assemblyFormat`s and `replace=True`).
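The case-name issue in point 2 can be sidestepped by keeping the original spelling and only regularizing names that collide with Python keywords; a sketch (not the actual generator code):

```python
import keyword

def python_enum_case_name(case):
    """Illustrative regularization for generated enum case names.

    Naive snake-casing mangles names like "Intel_OCL_BI" (runs of capitals
    become I_N_T_E_L-style output), so the original spelling is kept.
    Only cases that are Python keywords (e.g. an enum case literally named
    "None") get a trailing underscore.
    """
    if keyword.iskeyword(case):
        return case + "_"
    return case
```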

Reviewed By: stellaraccident

Differential Revision: https://reviews.llvm.org/D157934
2023-08-23 15:03:55 -05:00
Ingo Müller
57c090b2ea [mlir][linalg][transform][python] Improve mix-in for PadOp.
In particular:

* Fix and extend the support for constructing possibly nested ArrayAttrs
  from lists of Python ints. This can probably be generalized further
  and used in many more places.
* Add arguments for `pad_to_multiple_of` and `copy_back_op`.
* Format with black and reorder (keyword-only) arguments to match
  tablegen and (`*_gen.py`) order.
* Extend tests for new features.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D157789
2023-08-21 13:35:49 +00:00
Rahul Kayaith
0bc1430333 [mlir][linalg][transform][python] Fix type hints
Older Python versions (e.g., 3.8) don't accept `tuple[...]` etc. in type hints.
2023-08-16 16:16:47 -04:00
Ingo Müller
d7e26b5620 [mlir][linalg][transform][python] Fix mix-in for MaskedVectorize.
Fix forward a bug in dac19b457e2cfd139e0e5cc29872ba3c65b7510f, which used
the vertical bar operator for type hints; that operator is only supported
by Python 3.10 and later and thus broke the builds on Python 3.8.
2023-08-16 16:27:46 +00:00
Ingo Müller
dac19b457e [mlir][linalg][transform][python] Add mix-in for MaskedVectorize.
Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D157735
2023-08-16 15:07:46 +00:00
Ingo Müller
2d3dcd4aec [mlir][linalg][transform][python] Add mix-in for BufferizeToAllocOp.
Re-apply https://reviews.llvm.org/D157704.

The original patch broke the tests on Python 3.8 and got reverted by
0c4aad050c23254c3c612e860e1278961d161aef. This patch replaces the usage
of the vertical bar operator for type hints with `Union`.
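For context, the portability issue looks like this (the signature below is illustrative, not the actual mix-in code): PEP 604's `X | Y` syntax requires Python 3.10, while `typing.Union` spells the same hint on 3.8.

```python
from typing import Optional, Sequence, Union

# Python >= 3.10 only:  def tile(sizes: int | Sequence[int] | None): ...
# Portable down to 3.8:
def tile(sizes: Union[int, Sequence[int], None] = None) -> Optional[list]:
    """Illustrative helper: normalize a size argument to a list."""
    if sizes is None:
        return None
    return [sizes] if isinstance(sizes, int) else list(sizes)
```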

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D158075
2023-08-16 15:07:43 +00:00
Ingo Müller
a7fdb90bd4 [mlir][linalg][transform][python] Add mix-in for MapCopyToThreadsOp.
Reviewed By: springerm

Re-land 691a2fab88a0f2c763bbd26de517dcde156c5188 which was incorrectly
reverted.

Differential Revision: https://reviews.llvm.org/D157706
2023-08-14 09:15:07 -07:00
Mehdi Amini
0c4aad050c Revert "[mlir][linalg][transform][python] Add mix-in for BufferizeToAllocOp."
This reverts commit 20966fcbfea53f7d660b8b93ce56ea6149bcf9f0.

Bot is broken https://lab.llvm.org/buildbot/#/builders/61/builds/47577
2023-08-14 09:05:32 -07:00
Mehdi Amini
ecc4ef9f2b Revert "[mlir][linalg][transform][python] Add mix-in for MapCopyToThreadsOp."
This reverts commit 691a2fab88a0f2c763bbd26de517dcde156c5188.

The bot is broken: https://lab.llvm.org/buildbot/#/builders/61/builds/47577
2023-08-14 08:56:53 -07:00
Ingo Müller
20966fcbfe [mlir][linalg][transform][python] Add mix-in for BufferizeToAllocOp.
Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D157704
2023-08-14 13:45:25 +00:00
Ingo Müller
691a2fab88 [mlir][linalg][transform][python] Add mix-in for MapCopyToThreadsOp.
Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D157706
2023-08-14 13:39:52 +00:00
max
a7d80c50aa [MLIR][python bindings] add vendor gpu dialects
Differential Revision: https://reviews.llvm.org/D157820
2023-08-13 16:45:20 -05:00
Ingo Müller
0575ab2d46 [mlir][tensor][transform][python] Add mix-in class.
This patch adds a mix-in class for the only transform op of the tensor
dialect that can benefit from one: the MakeLoopIndependentOp. It adds an
overload that makes providing the return type optional.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D156918
2023-08-03 15:45:09 +00:00
Ingo Müller
1b5a3c90cc [mlir][transform][tensor][python] Add .td files for bindings.
Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D156914
2023-08-03 13:07:28 +00:00
Jerry Wu
7f4026e55a [mlir] Add linalg.batch_mmt4d named op
This op is the batched version of linalg.mmt4d. It performs matrix-matrix-transpose multiplication of batched 4-D (5-D with the batch dimension) inputs as follows:

```
C[b, m1, n1, m0, n0] = sum_{k1, k0}(A[b, m1, k1, m0, k0] * B[b, n1, k1, n0, k0])
```

The current use is to provide `linalg.batch_matmul` a lowering path similar to `linalg.matmul -> linalg.mmt4d`.
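As a reference point, the contraction can be modeled with NumPy's einsum (an illustrative sketch of the op's semantics, not the actual lowering):

```python
import numpy as np

def batch_mmt4d(A, B):
    """Reference semantics for linalg.batch_mmt4d.

    A has shape (b, m1, k1, m0, k0), B has shape (b, n1, k1, n0, k0);
    the result C has shape (b, m1, n1, m0, n0), reducing over k1 and k0.
    """
    return np.einsum("bmkpq,bnkrq->bmnpr", A, B)
```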

Differential Revision: https://reviews.llvm.org/D156912
2023-08-03 00:09:58 +00:00
Ingo Müller
f054901753 [mlir][bufferization][transform][python] Add enums to bindings & mixins.
This patch uses the new enum binding generation to add the enums of the
dialect to the Python bindings and uses them in the mix-in class where
they were still missing (namely, the `LayoutMapOption` for the
`function_boundary_type_conversion` of the `OneShotBufferizeOp`).

The patch also piggy-backs a few smaller clean-ups:
* Order the keyword-only arguments alphabetically.
* Add the keyword-only arguments to an overload where they were left out
  by accident.
* Change some of the attribute values used in the tests to non-default
  values such that they show up in the output IR and check for that
  output.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D156664
2023-08-01 13:46:16 +00:00
Ingo Müller
ccd7f0f1c3 [mlir][memref][transform][python] Create mix-in for MemRefMultiBufferOp.
Create a mix-in class with an overloaded constructor that makes the
return type optional.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D156561
2023-08-01 07:56:40 +00:00
Alex Zinenko
d517117303 [mlir] python bindings for vector transform ops
Provide Python bindings for transform ops defined in the vector dialect.
All of these ops are sufficiently simple that no mixins are necessary
for them to be nicely usable.

Reviewed By: ingomueller-net

Differential Revision: https://reviews.llvm.org/D156554
2023-07-31 15:42:59 +00:00
Alex Zinenko
1f8618f88c [mlir] python enum bindings generator
Add an ODS (tablegen) backend to generate Python enum classes and
attribute builders for enum attributes defined in ODS. This will allow
us to keep the enum attribute definitions in sync between C++ and
Python, as opposed to handwritten enum classes in Python that may end up
using mismatching values. This also makes autogenerated bindings more
convenient even in the absence of mixins.

Use this backend for the transform dialect failure propagation mode enum
attribute as demonstration.

Reviewed By: ingomueller-net

Differential Revision: https://reviews.llvm.org/D156553
2023-07-31 15:42:56 +00:00
Ingo Müller
bd17556d55 [mlir][memref][transform][python] Create .td file for bindings.
This patch creates the .td files for the Python bindings of the
transform ops of the MemRef dialect and integrates them into the build
systems (CMake and Bazel).

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D156536
2023-07-31 09:49:28 +00:00
Alex Zinenko
b17acc08a8 [mlir][python] more python gpu transform mixins
Add the Python mix-in for MapNestedForallToThreads. Fix typing
annotations in MapForallToBlocks and drop the attribute wrapping
rendered unnecessary by attribute builders.

Reviewed By: ingomueller-net

Differential Revision: https://reviews.llvm.org/D156528
2023-07-31 08:24:18 +00:00
Alex Zinenko
d3d93772da [mlir] delete yapf config files, NFC
LLVM has converged to using black for Python formatting. Remove the yapf
configs MLIR used to rely on before that (the reformatting has already
happened).
2023-07-27 12:27:29 +00:00
Ingo Müller
a13c715aae [mlir][transform][bufferization][python] Add mix-in classes for two ops.
This patch adds mix-in classes for the Python bindings of
`EmptyTensorToAllocTensorOp` and `OneShotBufferizeOp`. For both classes,
the mix-ins add overloads to the `__init__` functions that allow
constructing the ops without providing the return type, which is
defaulted to the only allowed type and to `AnyOpType`, respectively.

Note that the mix-ins do not expose the
`function_boundary_type_conversion` attribute. The attribute has a
custom type from the bufferization dialect that is currently not exposed
in the Python bindings. Handling of that attribute can be added easily
to the mix-in classes when the need arises.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D155799
2023-07-26 18:00:12 +00:00
Ingo Müller
8fd207fd0d [mlir][transform][structured][python] Allow str arg in match_op_names.
Allow the `names` argument in `MatchOp.match_op_names` to be of type
`str` in addition to `Sequence[str]`. In this case, the argument is
treated as a list with one name, i.e., it is possible to write
`MatchOp.match_op_names(..., "test.dummy")` instead of
`MatchOp.match_op_names(..., ["test.dummy"])`.
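The convenience overload boils down to a small normalization step; a sketch (illustrative helper, not the actual mix-in code):

```python
from typing import Sequence, Union

def normalize_op_names(names: Union[str, Sequence[str]]) -> list:
    """Treat a lone string as a single-element list of op names."""
    return [names] if isinstance(names, str) else list(names)
```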

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D155807
2023-07-21 09:36:55 +00:00
Ingo Müller
4f30746ca0 [mlir][transform][python] Add extended ApplyPatternsOp.
This patch adds a mixin for ApplyPatternsOp to _transform_ops_ext.py
with syntactic sugar for constructing such ops. Curiously, the op did
not have any constructors yet, probably because its tablegen definition
said to skip the default builders. The new constructor is thus quite
straightforward. The commit also adds a refined `region` property that
returns the first block of the single region.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D155435
2023-07-20 14:20:50 +00:00
Ingo Müller
5f4f9220f9 [mlir][transform][gpu][python] Add MapForallToBlocks mix-in.
This patch adds a mix-in class for MapForallToBlocks with overloaded
constructors. These make it optional to provide the return type of the
op, which is defaulted to `AnyOpType`.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D155717
2023-07-20 14:20:40 +00:00
Ingo Müller
b96bd025b3 [mlir][transform][gpu][python] Add .td file for bindings.
Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D155602
2023-07-19 15:34:59 +00:00
Ingo Müller
cca053c1f0 [mlir][transform][linalg][python] Add mix-in for FuseIntoContainingOp.
The class did not have any mix-in until now. The new mix-in has two
overloads for the constructor of the class: one with all arguments and
one without the result types, which are defaulted to `AnyOpType`.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D155695
2023-07-19 14:42:41 +00:00
Ingo Müller
be6e9df11f [mlir][transform][linalg][python] Add extended TileToForallOp.
This patch adds a mixin for TileToForallOp to
_structured_transform_ops_ext.py with syntactic sugar for constructing
such ops. First, the types of the results are made optional and filled
with common default values if omitted. Second, for num_threads and
tile_sizes, the three possible forms (static, dynamic, or packed) can
now all be given through the same respective argument, which gets
dispatched to the correct form-specific argument automatically.

Reviewed By: nicolasvasilache, ftynse

Differential Revision: https://reviews.llvm.org/D155090
2023-07-19 14:02:29 +00:00
Ingo Müller
1dccdf7f49 [mlir][linalg][transform][python] Add type arg to MatchOp extension.
The extension class to MatchOp has a class method called match_op_names.
The previous version of that function did not allow specifying the
result type. This, however, may be useful or necessary if the op
consuming the resulting handle requires a particular type (such as
bufferization.EmptyTensorToAllocTensorOp). This patch adds an overload
to match_op_names that allows specifying the result type.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D155567
2023-07-19 09:15:41 +00:00
Jack Wolfard
9494bd84df [mlir][python] Add install target for MLIR Python sources.
Differential Revision: https://reviews.llvm.org/D155362
2023-07-18 11:05:39 -07:00
Rahul Kayaith
67a910bbff [mlir][python] Remove PythonAttr mapping functionality
This functionality has been replaced by TypeCasters (see D151840)

depends on D154468

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D154469
2023-07-18 12:21:28 -04:00
Ingo Müller
ef240e942a [mlir][transform][bufferization][python] Add .td file for bindings.
Reviewed By: springerm, ftynse

Differential Revision: https://reviews.llvm.org/D155564
2023-07-18 14:16:37 +00:00
Nicolas Vasilache
39427a4fbb [mlir][Linalg] Fold/erase self-copy linalg.copy on buffers
Differential Revision: https://reviews.llvm.org/D155203
2023-07-13 16:38:02 +02:00
Renato Golin
d8dc1c22bf [MLIR][Linalg] Add max named op to linalg
I've been trying to come up with a simple and clean implementation for
ReLU. TOSA uses `clamp`, which is probably the goal, but that requires
table-gen work to make it efficient (attributes, only lower `min` or `max`).

For now, `max` is a reasonable named op despite ReLU, so we can start
using it for tiling and fusion, and upon success, we create a more
complete op `clamp` that doesn't need a whole tensor filled with zeroes
or ones to implement the different activation functions.

As with other named ops, we start by "requiring" that type casts,
broadcasts, and zero-filled constant tensors be handled by a more complex
pattern-matcher, and can slowly simplify with attributes or structured
matchers (e.g., PDL) in the future.

Differential Revision: https://reviews.llvm.org/D154703
2023-07-07 13:39:12 +01:00
Renato Golin
fe129311d3 [MLIR][Linalg] Add unary named ops to linalg
Following binary arithmetic in previous commits, this patch adds unary
maths ops to linalg.

It also fixes a few of the previous tests, and makes the binary ops call
BinaryFn.<op> directly instead of relying on Python to recognise the
operation.

Differential Revision: https://reviews.llvm.org/D154618
2023-07-07 10:38:10 +01:00
Jeremy Furtek
6685fd8239 [mlir] Add support for TF32 as a Builtin FloatType
This diff adds support for TF32 as a Builtin floating point type. This
supplements the recent addition of the TF32 semantic to the LLVM APFloat class
by extending usage to MLIR.

https://reviews.llvm.org/D151923

More information on the TF32 type can be found here:

https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/

Reviewed By: jpienaar

Differential Revision: https://reviews.llvm.org/D153705
2023-07-06 08:56:07 -07:00
Renato Golin
5861b4c6de [MLIR][Linalg] Add more arith named ops to linalg (take 2)
Re-apply eda47fdd258c after implementing __truediv__ for TensorUse.

[MLIR][Linalg] Add more arith named ops to linalg

Following up the 'add' named op, here are the remaining basic arithmetic
and maths, including a 'div_unsigned' for integer unsigned values. In the
same pattern as 'matmul_unsigned', the simply named 'div' assumes signed
values and the '_unsigned' variation handles the unsigned values.

It's a bit odd, but there doesn't seem to be an easy way to restrict to
specific types to make 'div_unsigned' only work with integers in the
structured ops framework.

Same as 'add', these have strict semantics regarding casts.
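The signed/unsigned distinction can be illustrated in plain NumPy (a sketch of the semantics only, not the Linalg implementation; the helper name is made up):

```python
import numpy as np

def div_unsigned_i8(a, b):
    """Reinterpret signed i8 storage as unsigned before dividing.

    This is what distinguishes 'div_unsigned' from the signed 'div':
    the same bit pattern (e.g. 0xFF) reads as -1 signed but 255 unsigned.
    """
    au = np.asarray(a).astype(np.int8).view(np.uint8)
    bu = np.asarray(b).astype(np.int8).view(np.uint8)
    return au // bu
```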

Unary math ops will need some massaging, so I split these ones for now
as I continue working on them.

Differential Revision: https://reviews.llvm.org/D154524
2023-07-06 13:58:37 +01:00
Matthias Springer
c26e398b49 [mlir][linalg][transform] Fix Python build
This should have been part of D154585.
2023-07-06 12:20:35 +02:00
max
4eee9ef976 Add SymbolRefAttr to python bindings
Differential Revision: https://reviews.llvm.org/D154541
2023-07-05 20:51:33 -05:00
Renato Golin
93d038a0ea Revert "[MLIR][Linalg] Add more arith named ops to linalg"
This reverts commit eda47fdd258ca666815122a931b82699a0629b87.

It failed on NVidia, AMD and Windows bots. Investigating.
2023-07-05 22:02:23 +01:00
Renato Golin
eda47fdd25 [MLIR][Linalg] Add more arith named ops to linalg
Following up the 'add' named op, here are the remaining basic arithmetic
and maths, including a 'div_unsigned' for integer unsigned values. In the
same pattern as 'matmul_unsigned', the simply named 'div' assumes signed
values and the '_unsigned' variation handles the unsigned values.

It's a bit odd, but there doesn't seem to be an easy way to restrict to
specific types to make 'div_unsigned' only work with integers in the
structured ops framework.

Same as 'add', these have strict semantics regarding casts.

Unary math ops will need some massaging, so I split these ones for now
as I continue working on them.

Differential Revision: https://reviews.llvm.org/D154524
2023-07-05 19:29:56 +01:00
Renato Golin
7e486d5c2d [MLIR][Linalg] Named op 'add' element-wise
This adds the first strict element-wise named op to Linalg.

The semantics here are to disallow auto-casts and broadcast semantics and
to restrict the operations to identical types only. The remaining
semantics must come in the form of surrounding operations on operands, to
avoid ambiguity.

Examples:
```
  // Cast int-to-fp
  %0 = linalg.copy ins(%in: tensor<32x32xi32>)
                   outs(%out: tensor<32x32xf32>)
  %1 = linalg.add  ins(%arg, %0: tensor<32x32xf32>, tensor<32x32xf32>)
                   outs(%0: tensor<32x32xf32>)

  // This can be lowered to
  %1 = linalg.generic {...}
            ins(%arg, %in: tensor<32x32xf32>, tensor<32x32xi32>)
            outs(%0: tensor<32x32xf32>) {
    ^bb0(%a: f32, %i: i32, %out: f32):
      %f = arith.uitofp %i : i32 to f32
      %0 = arith.addf %a, %f : f32
      linalg.yield %0 : f32
  }

  // Broadcast
  %0 = linalg.broadcast ins(%in: tensor<32xf32>)
                        init(%out: tensor<32x32xf32>)
  %1 = linalg.add  ins(%arg, %0: tensor<32x32xf32>, tensor<32x32xf32>)
                   outs(%0: tensor<32x32xf32>)

  // This can be lowered to
  #bcast_map = affine_map<(d0, d1) -> (d0)>
  %1 = linalg.generic {... #bcast_map] }
            ins(%arg, %in: tensor<32x32xf32>, tensor<32xf32>)
            outs(%0: tensor<32x32xf32>) {
    ^bb0(%a: f32, %b: f32, %out: f32):
      %0 = arith.addf %a, %b : f32
      linalg.yield %0 : f32
  }
```

Once this gets accepted, other arithmetic and maths operations will be
added accordingly, with the same semantics.

Differential Revision: https://reviews.llvm.org/D154500
2023-07-05 16:37:42 +01:00
Andrzej Warzynski
ad7ef1923f [mlir][transform] Allow arbitrary indices to be scalable
This change lifts the limitation that only the trailing dimensions/sizes
in dynamic index lists can be scalable. It allows us to extend
`MaskedVectorizeOp` and `TileOp` from the Transform dialect so that the
following is allowed:

  %1, %loops:3 = transform.structured.tile %0 [4, [4], [4]]

This is also a follow up for https://reviews.llvm.org/D153372
that will enable the following (middle vector dimension is scalable):

  transform.structured.masked_vectorize %0 vector_sizes [2, [4], 8]

To facilitate this change, the hooks for parsing and printing dynamic
index lists are updated accordingly (`printDynamicIndexList` and
`parseDynamicIndexList`, respectively). `MaskedVectorizeOp` and `TileOp`
are updated to include an array attribute of bools that captures whether
the corresponding vector dimension or tile size, respectively, is
scalable or not.
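The `[2, [4], 8]` notation (a bracketed entry marks a scalable size) can be modeled with a small illustrative parser (a hypothetical helper, not the actual `parseDynamicIndexList` hook):

```python
def split_scalable(index_list):
    """Split a mixed index list into sizes plus per-entry scalable flags.

    A nested single-element list such as [4] marks that size as scalable,
    mirroring the transform-dialect syntax sketched above.
    """
    sizes, scalable = [], []
    for entry in index_list:
        if isinstance(entry, list):
            sizes.append(entry[0])
            scalable.append(True)
        else:
            sizes.append(entry)
            scalable.append(False)
    return sizes, scalable
```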

NOTE 1: I am re-landing this after the initial version was reverted. To
fix the regression and in addition to the original patch, this revision
updates the Python bindings for the transform dialect

NOTE 2: This change is a part of a larger effort to enable scalable
vectorisation in Linalg. See this RFC for more context:
  * https://discourse.llvm.org/t/rfc-scalable-vectorisation-in-linalg/

This relands 048764f23a380fd6f8cc562a0008dcc6095fb594 with fixes.

Differential Revision: https://reviews.llvm.org/D154336
2023-07-05 09:53:26 +01:00