421 Commits

Author SHA1 Message Date
Maksim Levental
17ec364b1b
[mlir][python] enable registering dialects with the default Context (#72488) 2023-11-27 19:26:05 -06:00
Maksim Levental
225648e91c
[mlir][python] add type wrappers (#71218) 2023-11-27 15:58:00 -06:00
Maksim Levental
4eaf3a9c12
[mlir][python] reformat transform ext (#71136) 2023-11-16 13:37:52 -06:00
Nicolas Vasilache
1451411e64 [mlir][python] NFC - Expose LoopExtensionOps to SCFLoopTransformOps.td 2023-11-15 16:25:22 +00:00
Maksim Levental
e9453f3c3c
[mlir][python] fix scf.for_ convenience builder (#72170) 2023-11-13 20:25:41 -06:00
Stella Laurenzo
0677e54653
[mlir][python] Allow contexts to be created with a custom thread pool. (#72042)
The existing initialization sequence always enables multi-threading at
MLIRContext construction time, making it impractical to provide a
customized thread pool.

Here, this is changed to always create the context with threading
disabled, process all site-specific init hooks (which can set thread
pools) and ultimately enable multi-threading unless site-configured
not to do so.

This should preserve the existing user-visible initialization behavior
while also letting downstreams ensure that contexts are always created
with a shared thread pool. This was tested with IREE, which has such a
concept. Site-specific thread tuning sped up single compilation jobs by
up to 2x, and customizing batch compilation (i.e. as part of a build
system) let it use half the memory and run the entire test suite ~2x
faster. Given this, I believe the additional configurability can well
pay for itself for implementations that use it.
We may also want to present user-level Python APIs for controlling
threading configuration in the future.
2023-11-11 21:41:56 -08:00
Nicolas Vasilache
5967375fcf [mlir][python] Add support for arg_attrs and other attrs to NamedSequenceOp 2023-11-08 13:42:16 +00:00
Nicolas Vasilache
af3d856944 [mlir][python] Reland - Add sugared builder for transform.named_sequence
Address issues with #71597 post-revert and reland.
2023-11-08 09:34:29 +00:00
Nicolas Vasilache
cb3880515f Revert "[mlir][python] Add sugared builder for transform.named_sequence (#71597)"
This reverts commit 4f51b2bfe3ec11b597272be6caa00efb575bc59f.
2023-11-08 09:34:29 +00:00
Nicolas Vasilache
4f51b2bfe3
[mlir][python] Add sugared builder for transform.named_sequence (#71597) 2023-11-08 09:49:57 +01:00
Maksim Levental
c86d35a5f4
[mlir][python] factor out pure python core sources (#71592)
I'd like to be able to install just the Python core sources (without
building/including the pybind sources).
2023-11-07 19:52:43 -06:00
Maksim Levental
7c850867b9
[mlir][python] value casting (#69644)
This PR adds "value casting", i.e., a mechanism to wrap `ir.Value` in a
proxy class that overloads dunders such as `__add__`, `__sub__`, and
`__mul__` for fun and great profit.

This is thematically similar to
bfb1ba7526
and
9566ee2806.
The example in the test demonstrates the value of the feature (no pun
intended):

```python
    @register_value_caster(F16Type.static_typeid)
    @register_value_caster(F32Type.static_typeid)
    @register_value_caster(F64Type.static_typeid)
    @register_value_caster(IntegerType.static_typeid)
    class ArithValue(Value):
        __add__ = partialmethod(_binary_op, op="add")
        __sub__ = partialmethod(_binary_op, op="sub")
        __mul__ = partialmethod(_binary_op, op="mul")

    a = arith.constant(value=FloatAttr.get(f16_t, 42.42))
    b = a + a
    # CHECK: ArithValue(%0 = arith.addf %cst, %cst : f16)
    print(b)

    a = arith.constant(value=FloatAttr.get(f32_t, 42.42))
    b = a - a
    # CHECK: ArithValue(%1 = arith.subf %cst_0, %cst_0 : f32)
    print(b)

    a = arith.constant(value=FloatAttr.get(f64_t, 42.42))
    b = a * a
    # CHECK: ArithValue(%2 = arith.mulf %cst_1, %cst_1 : f64)
    print(b)
```

**EDIT**: this now goes through the bindings and thus supports automatic
casting of `OpResult` (including as an element of `OpResultList`),
`BlockArgument` (including as an element of `BlockArgumentList`), as
well as `Value`.
2023-11-07 10:49:41 -06:00
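The `_binary_op` helper used in the snippet above is not shown in the commit message; a hypothetical definition, purely for illustration, might look like:

```python
from mlir.dialects import arith
from mlir.ir import F16Type, F32Type, F64Type, Value

# Hypothetical helper (not part of the commit): pick the float or integer
# variant of the arith op based on the element type of the left operand.
_FLOAT_OPS = {"add": arith.AddFOp, "sub": arith.SubFOp, "mul": arith.MulFOp}
_INT_OPS = {"add": arith.AddIOp, "sub": arith.SubIOp, "mul": arith.MulIOp}

def _binary_op(lhs: Value, rhs: Value, op: str) -> Value:
    is_float = any(t.isinstance(lhs.type) for t in (F16Type, F32Type, F64Type))
    table = _FLOAT_OPS if is_float else _INT_OPS
    # The returned OpResult is itself run through the registered value caster.
    return table[op](lhs, rhs).result
```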
Maksim Levental
5192e299cf
[mlir][python] remove various caching mechanisms (#70831)
This PR removes the various caching mechanisms currently in the python
bindings - both positive caching and negative caching.
2023-11-03 13:28:20 -05:00
Maksim Levental
2ab14dff43
[mlir][python] fix python_test dialect and I32/I64ElementsBuilder (#70871)
This PR fixes the `I32ElementsAttr` and `I64ElementsAttr` builders and
tests them through the `python_test` dialect.
2023-10-31 19:55:42 -05:00
Mehdi Amini
b2bdc45580 [mlir][python] Fix possible use of variable before it is set
The _mlirRegisterEverything symbol may not be built by some customers. The
code here was intended to support this, but didn't properly initialize the
init_module variable.
This would break JAX with:

NameError: free variable 'init_module' referenced before assignment in enclosing scope
2023-10-31 10:31:20 -07:00
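A generic illustration of that failure mode (not the actual binding code): a closure reads a free variable that the enclosing function only conditionally assigns.

```python
def site_initialize(have_register_everything: bool):
    # Only one branch assigns init_module, mirroring the optional
    # _mlirRegisterEverything module.
    if have_register_everything:
        init_module = object()

    def finish():
        # Free variable from the enclosing scope; it is unbound if the branch
        # above did not run, which raises a NameError at this point.
        return init_module

    return finish()

site_initialize(True)   # fine
site_initialize(False)  # NameError for the free variable 'init_module'
```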
Matthias Springer
04736c7f7a
[mlir][SCF] Use transform.get_parent_op instead of transform.loop.get_parent_for (#70757)
Add a new attribute to `get_parent_op` to get the n-th parent. Remove
`transform.loop.get_parent_for`, which is no longer needed.
2023-10-31 18:36:40 +09:00
Jungwook Park
6995183e17
[mlir][python] Register LLVM translations in the RegisterEverything for python (#70428)
Added the missing register_translations in Python to replicate what the
C-API does.
Cleaned up the existing calls that register passes individually where
they are already covered by mlirRegisterAllPasses.
Found here:
https://discourse.llvm.org/t/opencl-example/74187
2023-10-30 14:46:21 -07:00
bjacob
8c8336fcad
Add missing linalg.batch_vecmat named op (#70218)
Linalg currently has these named ops:
* `matmul`
* `matvec`
* `vecmat`
* `batch_matmul`
* `batch_matvec`

But it does not have:
* `batch_vecmat`

This PR adds it for consistency, and I have a short-term need for it
(https://github.com/openxla/iree/issues/15158), so not having it would
cause some contortion on my end.
2023-10-25 11:41:24 -04:00
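The commit message does not include the definition itself; a hedged sketch of what the added opdsl entry presumably looks like, modeled on the existing matmul-family definitions (shapes (B, K) x (B, K, N) -> (B, N)):

```python
from mlir.dialects.linalg.opdsl.lang import *

# A sketch modeled on the sibling batch_matvec/vecmat definitions; the exact
# wording in core_named_ops.py may differ.
@linalg_structured_op
def batch_vecmat(
    A=TensorDef(T1, S.Batch, S.K),
    B=TensorDef(T2, S.Batch, S.K, S.N),
    C=TensorDef(U, S.Batch, S.N, output=True),
):
    domain(D.b, D.n, D.k)
    implements(ContractionOpInterface)
    C[D.b, D.n] += TypeFn.cast_signed(U, A[D.b, D.k]) * TypeFn.cast_signed(
        U, B[D.b, D.k, D.n]
    )
```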
Maksim Levental
a9694043c9
[mlir][linalg] regionBuilder for transpose, broadcast (#69742)
Currently, `linalg.transpose` and `linalg.broadcast` can't be emitted
through either the C API or the python bindings (which of course go
through the C API). See
https://discourse.llvm.org/t/how-to-build-linalg-transposeop-in-mlir-pybind/73989/10.

The reason is that even though they're named ops, there is no opdsl
`@linalg_structured_op` for them, and thus while they can be instantiated
they cannot be passed to
[`mlirLinalgFillBuiltinNamedOpRegion`](a7cccb9cbb/mlir/lib/CAPI/Dialect/Linalg.cpp (L18)).
I believe the issue is that they both take an `IndexAttrDef` but
`IndexAttrDef` cannot represent dynamic rank. Note, if I'm mistaken and
there is a way to write the `@linalg_structured_op`, let me know.

The solution here simply implements the `regionBuilder` interface which
is then picked up by
[`LinalgDialect::addNamedOpBuilders`](7557530f42/mlir/lib/Dialect/Linalg/IR/LinalgDialect.cpp (L116)).

Extension classes are added "by hand" that mirror the API of the
`@linalg_structured_op`s. Note, the extension classes are added to
`dialects/linalg/__init__.py` instead of
`dialects/linalg/opdsl/ops/core_named_ops.py` so that they're not
confused with opdsl generators/emitters.
2023-10-20 16:14:46 -05:00
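A hedged usage sketch (the `outs`/`permutation`/`dimensions` keyword names are assumed to mirror the opdsl-style API the commit says the extension classes follow):

```python
from mlir.ir import Context, F32Type, InsertionPoint, Location, Module
from mlir.dialects import linalg, tensor

with Context(), Location.unknown():
    module = Module.create()
    f32 = F32Type.get()
    with InsertionPoint(module.body):
        src = tensor.EmptyOp([2, 3], f32)
        # Transpose 2x3 -> 3x2.
        linalg.transpose(src, outs=[tensor.EmptyOp([3, 2], f32)], permutation=[1, 0])
        # Broadcast 2x3 -> 4x2x3 along a new leading dimension.
        linalg.broadcast(src, outs=[tensor.EmptyOp([4, 2, 3], f32)], dimensions=[0])
    print(module)
```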
Maksim Levental
dd473f1dd1
[mlir][python] simplify extensions (#69642)
https://github.com/llvm/llvm-project/pull/68853 enabled a lot of nice
cleanup. Note, I made sure each of the touched extensions had tests.
2023-10-19 18:07:06 -05:00
Maksim Levental
a2288a8944
[mlir][python] remove mixins (#68853)
This PR replaces the mixin `OpView` extension mechanism with the
standard inheritance mechanism.

Why? Firstly, mixins are not very pythonic (inheritance is usually used
for this), a little convoluted, and too "tight" (can only be used in the
immediately adjacent `_ext.py`). Secondly, mixins are now blocking a
correct implementation of "value builders" (see
[here](https://github.com/llvm/llvm-project/pull/68764)) where the
problem becomes how to choose the correct base class that the value
builder should call.

This PR looks big/complicated but appearances are deceiving; 4 things
were needed to make this work:

1. Drop `skipDefaultBuilders` in
`OpPythonBindingGen::emitDefaultOpBuilders`
2. Former mixin extension classes are converted to inherit from the
generated `OpView` instead of being "mixins"
a. extension classes that simply were calling into an already generated
`super().__init__` continue to do so
b. (almost all) extension classes that were calling `self.build_generic`
because of a lack of default builder being generated can now also just
call `super().__init__`
3. To handle the [lone single
use-case](https://sourcegraph.com/search?q=context%3Aglobal+select_opview_mixin&patternType=standard&sm=1&groupBy=repo)
of `select_opview_mixin`, namely
[linalg](https://github.com/llvm/llvm-project/blob/main/mlir/python/mlir/dialects/_linalg_ops_ext.py#L38),
only a small change was necessary in `opdsl/lang/emitter.py` (thanks to
the emission/generation of default builders/`__init__`s)
4. since the `extend_opview_class` decorator is removed, we need a way
to register extension classes as the desired `OpView` that `op.opview`
conjures into existence; so we do the standard thing and just enable
replacing the existing registered `OpView` i.e.,
`register_operation(_Dialect, replace=True)`.

Note, the upgrade path for the common case is to change an extension to
inherit from the generated builder and decorate it with
`register_operation(_Dialect, replace=True)`. In the slightly more
complicated case where `super().__init__(self.build_generic(...))` is
called in the extension's `__init__`, this needs to be updated to call
`__init__` in `OpView`, i.e., the grandparent (see updated docs). 
Note, also `<DIALECT>_ext.py` files/modules will no longer be automatically loaded.

Note, the PR has 3 base commits that look funny but this was done for
the purpose of tracking the line history of moving the
`<DIALECT>_ops_ext.py` class into `<DIALECT>.py` and updating (commit
labeled "fix").
2023-10-19 16:20:14 -05:00
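A sketch of that upgrade path for a downstream extension, using a hypothetical dialect `foo` and op `foo.bar` (the module and op names are illustrative, not real bindings):

```python
# Hypothetical generated module for a dialect "foo"; _Dialect and BarOp are
# what tablegen would emit into _foo_ops_gen.py.
from mlir.dialects._foo_ops_gen import _Dialect, BarOp
from mlir.dialects._ods_common import _cext

@_cext.register_operation(_Dialect, replace=True)
class BarOp(BarOp):
    """Replaces the generated OpView registered for foo.bar."""

    def __init__(self, value, *, loc=None, ip=None):
        # Extensions that used to call self.build_generic(...) can usually
        # defer to the generated default builder now; otherwise call
        # OpView.__init__ (the grandparent) directly, as the docs describe.
        super().__init__(value, loc=loc, ip=ip)
```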
Tomás Longeri
5a600c23f9
[mlir][python] Expose PyInsertionPoint's reference operation (#69082)
The reason I want this is that I am writing my own Python bindings and
would like to use the insertion point from
`PyThreadContextEntry::getDefaultInsertionPoint()` to call C++ functions
that take an `OpBuilder` (I don't need to expose it in Python but it
also seems appropriate). AFAICT, there is currently no way to translate
a `PyInsertionPoint` into an `OpBuilder` because the operation is
inaccessible.
2023-10-18 16:53:18 +02:00
Amy Wang
de7857ab23
[mlir][python] python binding for the affine.store op (#68816)
This PR creates the necessary files to support bindings for operations
in the affine dialect.

This is the first of many PRs which will progressively introduce
affine.load, affine.for, and other operations. I would like to
acknowledge the work by Nelli's author @makslevental:
https://github.com/makslevental/nelli/blob/main/nelli/mlir/affine/affine.py
which jump-starts the work.
2023-10-11 16:37:11 -04:00
Maksim Levental
27c6d55cae
[mlir][python] generate value builders (#68308)
This PR adds the additional generation of what I'm calling "value
builders" (a term I'm not married to) that look like this:

```python
def empty(sizes, element_type, *, loc=None, ip=None):
    return get_result_or_results(tensor.EmptyOp(sizes=sizes, element_type=element_type, loc=loc, ip=ip))
```

which instantiates a `tensor.EmptyOp`, immediately grabs the result
(`OpResult`), and returns that *instead of a handle to the op*.

What's the point of adding these when `EmptyOp.result` already exists?
My claim/feeling/intuition is that eDSL users are more comfortable with
a value-centric programming model (i.e., passing values as operands) as
opposed to an operator instantiation programming model. Thus this change
enables (or at least goes towards) the bindings supporting such a user
and use case. For example,

```python
i32 = IntegerType.get_signless(32)
...
ten1 = tensor.empty((10, 10), i32)
ten2 = tensor.empty((10, 10), i32)
ten3 = arith.addi(ten1, ten2)
```

Note, in order to present a "pythonic" API and enable "pythonic" eDSLs,
the generated identifiers (op names and operand names) are snake case
instead of camel case and thus `llvm::convertToSnakeFromCamelCase`
needed a small fix. Thus this PR is stacked on top of
https://github.com/llvm/llvm-project/pull/68375.

In addition, as a kind of victory lap, this PR adds a "rangefor" that
looks and acts exactly like python's `range` but emits `scf.for`.
2023-10-09 14:16:28 -07:00
Jack Frankland
e29a253c9e
[mlir][tosa][linalg] Apply direct tosa -> linalg Conv2D lowering (#68304)
TOSA defines the filter channel ordering for 2D convolution operation
`tosa.conv2d` as `[OC, KH, KW, IC]`. The LinAlg dialect supports `[F, H,
W, C]` and `[H, W, C, F]` orderings via the `linalg.conv_2d_nhwc_fhwc`
and `linalg.conv_2d_nhwc_hwcf` operations respectively. Where `F == OC`,
`KH == H`, `KW == W` and `C == IC`.

Currently `tosa.conv2d` is lowered to `linalg.conv_2d_nhwc_hwcf`, meaning
we need to insert a transposition operation to permute the filter
channels before they can be passed as weights to the linalg op, that is
`[F, H, W, C]` -> `[H, W, C, F]`. An analogous transformation needs to
be applied to the quantized operation that lowers to
`linalg.conv_2d_nhwc_hwcf_q`.

This commit updates the TOSA->LinAlg lowering so that `tosa.conv2d` is
lowered to `linalg.conv_2d_nhwc_fhwc`, removing the need for the
introduction of a transposition operation and making the mapping 1-1. It
also adds a `linalg.conv_2d_nhwc_fhwc_q` quantized operation to the
LinAlg dialect so the same direct 1-1 mapping can be applied to the
quantized variant.

This commit does not add any new lit tests but repurposes the current
TosaToLinalgNamed tests by removing the checks for transpositions and
updating the targeted LinAlg operations from `linalg.conv_2d_nhwc_hwcf`
to `linalg.conv_2d_nhwc_fhwc`.

Signed-off-by: Jack Frankland <jack.frankland@arm.com>
2023-10-06 17:10:39 -07:00
Maksim Levental
6f44f87011
[mlir][python] Enable py312. (#68009)
Python 3.12 has been released, so why not support it.
2023-10-04 20:35:24 -05:00
Oleksandr "Alex" Zinenko
bc30b415ca
[mlir] enable python bindings for nvgpu transforms (#68088)
Expose the autogenerated bindings.

Co-authored-by: Martin Lücke <mluecke@google.com>
2023-10-03 14:52:52 +02:00
Andrzej Warzynski
1e70ab5f0d [mlir][transform] Update transform.loop.peel (reland #67482)
This patch updates `transform.loop.peel` so that this Op returns two
handles rather than one:
  * one for the peeled loop, and
  * one for the remainder loop.
Also, following this change this Op will fail if peeling fails. This is
consistent with other similar Ops that also fail if no transformation
takes place.

Relands #67482 with an extra fix for transform_loop_ext.py
2023-09-28 14:35:46 +00:00
Oleksandr "Alex" Zinenko
96ff0255f2
[mlir] cleanup of structured.tile* transform ops (#67320)
Rename and restructure tiling-related transform ops from the structured
extension to be more homogeneous. In particular, all ops now follow a
consistent naming scheme:

 - `transform.structured.tile_using_for`;
 - `transform.structured.tile_using_forall`;
 - `transform.structured.tile_reduction_using_for`;
 - `transform.structured.tile_reduction_using_forall`.

This drops the "_op" naming artifact from `tile_to_forall_op` that
shouldn't have been included in the first place, consistently specifies
the name of the control flow op to be produced for loops (instead of
`tile_reduction_using_scf` since `scf.forall` also belongs to `scf`),
and opts for the `using` connector to avoid ambiguity.

The loops produced by tiling are now systematically placed as *trailing*
results of the transform op. While this required changing 3 out of 4 ops
(except for `tile_using_for`), this is the only choice that makes sense
when producing multiple `scf.for` ops that can be associated with a
variadic number of handles. This choice is also most consistent with
*other* transform ops from the structured extension, in particular with
fusion ops, that produce the structured op as the leading result and the
loop as the trailing result.
2023-09-26 09:14:29 +02:00
Ingo Müller
991cb14715
[mlir][memref][transform] Add new alloca_to_global op. (#66511)
This PR adds a new transform op that replaces `memref.alloca`s with
`memref.get_global`s referring to newly inserted `memref.global`s. This is useful,
for example, for allocations that should reside in the shared memory of
a GPU, which have to be declared as globals.
2023-09-21 18:17:00 +02:00
Ingo Müller
69bc1cbbff
[mlir][linalg][transform] Rename {masked_vectorize => vectorize => vectorize_children_and...}. (#66575)
This PR renames the vectorization transform ops as follows:

* `structured.masked_vectorize` => `structured.vectorize`. This reflects
the fact that since [recently](https://reviews.llvm.org/D157774) the op
can also handle the unmasked case.
* `structured.vectorize` =>
`structured.vectorize_children_and_apply_patterns`. This reflects the
fact that the op does not just vectorize the given payload op but all
vectorizable children contained in it, and applies patterns before and
after for preparation and clean-up.

This rename was discussed first
[here](https://reviews.llvm.org/D157774).

The PR also adapts and cleans up the tablegen description of the
`VectorizeChildrenAndApplyPatternsOp` (formerly `VectorizeOp`).
2023-09-21 15:38:29 +02:00
Oleksandr "Alex" Zinenko
d579471a98
[mlir][python] smaller scope for vector enumgen (#66992)
Don't generate enums from the main VectorOps.td file as that
transitively includes enums from Arith.

---------

Co-authored-by: Nicolas Vasilache <ntv@google.com>
2023-09-21 12:57:41 +02:00
Jevin Jiang
3d51b40c4a Fix induction variable type in scf.for py binding.
- Make sure that the type of the induction variable is determined by the type of the lower bound.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D159534
2023-09-21 08:07:26 +00:00
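A minimal sketch of the fixed behavior: when the bounds are i32 constants, the induction variable should come out as i32 as well (builder names follow the upstream scf/arith Python extensions).

```python
from mlir.ir import Context, InsertionPoint, IntegerType, Location, Module
from mlir.dialects import arith, scf

with Context(), Location.unknown():
    module = Module.create()
    with InsertionPoint(module.body):
        i32 = IntegerType.get_signless(32)
        lb = arith.ConstantOp(i32, 0)
        ub = arith.ConstantOp(i32, 8)
        step = arith.ConstantOp(i32, 1)
        loop = scf.ForOp(lb, ub, step)
        with InsertionPoint(loop.body):
            scf.YieldOp([])
        # The induction variable now follows the lower bound's type.
        assert str(loop.induction_variable.type) == "i32"
```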
Peiming Liu
3d27d1152e
[mlir][sparse] Generates python bindings for SparseTensorTransformOps. (#66937) 2023-09-20 15:35:50 -07:00
Oleksandr "Alex" Zinenko
a590ff589c
[mlir] regenerate linalg named ops yaml (#65475)
The Linalg named ops specification went out of sync with the OpDSL
description, presumably due to direct manual modifications of the yaml
file.

Additionally, the unsigned division has been generating the signed
scalar instruction, which is now fixed.
2023-09-20 18:53:29 +02:00
Nicolas Vasilache
1b8b556443
[mlir][Vector] Add fastmath flags to vector.reduction (#66905)
This revision pipes the fastmath attribute support through the
vector.reduction op. This seemingly simple first step already requires
quite some genuflexions, file and builder reorganization. In the
process, retire the boolean reassoc flag deep in the LLVM dialect
builders and just use the fastmath attribute.

During conversions, templated builders for predicated intrinsics are
partially cleaned up. In the future, to finalize the cleanups, one
should consider adding fastmath to the VPIntrinsic ops.
2023-09-20 16:57:20 +02:00
Ingo Müller
86ddbdd3e7
[mlir][linalg][transform][python] Allow no args in MaskedVectorize. (#66541)
The mix-in of this op did not allow passing no arguments. This special
case is now handled correctly and covered by the tests.
2023-09-19 10:34:47 +02:00
Stella Laurenzo
33df617dfd
[mlir] Quality of life improvements to python API types. (#66723)
* Moves several orphaned methods from Operation/OpView -> _OperationBase
so that both hierarchies share them (whether unknown or known to ODS).
* Adds typing information for missing `MLIRError` exception.
* Adds `DiagnosticInfo` typing.
* Adds `DenseResourceElementsAttr` typing that was missing.
2023-09-18 21:30:41 -07:00
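For the `MLIRError` part, a small sketch of the now-typed error path:

```python
from mlir.ir import Context, MLIRError, Module

with Context():
    try:
        Module.parse("this is not valid MLIR")
    except MLIRError as e:
        # e.error_diagnostics is a list of DiagnosticInfo entries.
        for diag in e.error_diagnostics:
            print(diag.location, diag.message)
```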
Martin Erhart
6bf043e743
[mlir][bufferization] Remove allow-return-allocs and create-deallocs pass options, remove bufferization.escape attribute (#66619)
This commit removes the deallocation capabilities of
one-shot-bufferization. One-shot-bufferization should never deallocate
any memrefs as this should be entirely handled by the
ownership-based-buffer-deallocation pass going forward. This means the
`allow-return-allocs` pass option will default to true now,
`create-deallocs` defaults to false and they, as well as the escape
attribute indicating whether a memref escapes the current region, will
be removed. A new `allow-return-allocs-from-loops` option is added as a
temporary workaround for some bufferization limitations.
2023-09-18 16:44:48 +02:00
Ingo Müller
5d3489e940
[mlir][transform][lingalg][python] Replace pdl.operation => transform.any_op. (#66392)
For some reason, the mix-ins of the Python bindings of this dialect used
the PDL type for "any op". However, PDL isn't involved here, so it makes
more sense to use the corresponding type of the transform dialect. This
PR changes that.
2023-09-15 09:06:07 +02:00
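In code, the switch amounts to constructing the transform dialect's own handle type instead of PDL's (a minimal sketch):

```python
from mlir.ir import Context
from mlir.dialects import pdl, transform

with Context():
    pdl_op_t = pdl.OperationType.get()    # !pdl.operation (previously used)
    any_op_t = transform.AnyOpType.get()  # !transform.any_op (used now)
    print(pdl_op_t, any_op_t)
```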
Ingo Müller
360c629024
[mlir][linalg][transform][python] Drop _get_op_result... from mix-ins. (#65726)
`_get_op_result_or_value` was used in mix-ins to unify the handling of
op results and values. However, that function is now called in the
generated constructors, such that doing so in the mix-ins is not
necessary anymore.
2023-09-14 17:24:16 +02:00
Martin Erhart
c199f7dc62 Revert "[mlir][bufferization] Remove allow-return-allocs and create-deallocs pass options, remove bufferization.escape attribute"
This reverts commit 6a91dfedeb956dfa092a6a3f411e8b02f0d5d289.

This caused problems in downstream projects. We are reverting to give
them more time for integration.
2023-09-13 13:53:48 +00:00
Martin Erhart
6a91dfedeb [mlir][bufferization] Remove allow-return-allocs and create-deallocs pass options, remove bufferization.escape attribute
This is the first commit in a series with the goal to rework the
BufferDeallocation pass. Currently, this pass heavily relies on copies
to perform correct deallocations, which leads to very slow code and
potentially high memory usage. Additionally, there are unsupported cases
such as returning memrefs which this series of commits aims to add
support for as well.

This first commit removes the deallocation capabilities of
one-shot-bufferization. One-shot-bufferization should never deallocate any
memrefs as this should be entirely handled by the buffer-deallocation pass
going forward. This means the allow-return-allocs pass option will
default to true now, create-deallocs defaults to false and they, as well
as the escape attribute indicating whether a memref escapes the current region,
will be removed.

The documentation w.r.t. these pass option changes should also be
updated in this commit.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D156662
2023-09-13 09:30:22 +00:00
Daniil Dudkin
8a6e54c9b3
[mlir][arith] Rename operations: maxf → maximumf, minf → minimumf (#65800)
This patch is part of a larger initiative aimed at fixing floating-point `max` and `min` operations in MLIR: https://discourse.llvm.org/t/rfc-fix-floating-point-max-and-min-operations-in-mlir/72671.

This commit addresses Task 1.2 of the mentioned RFC. By renaming these operations, we align their names with LLVM intrinsics that have corresponding semantics.
2023-09-11 22:02:19 -07:00
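A small sketch with the renamed ops as seen from the Python bindings (the constant builders are the usual arith extensions):

```python
from mlir.ir import Context, F32Type, InsertionPoint, Location, Module
from mlir.dialects import arith

with Context(), Location.unknown():
    module = Module.create()
    f32 = F32Type.get()
    with InsertionPoint(module.body):
        a = arith.ConstantOp(f32, 1.0)
        b = arith.ConstantOp(f32, 2.0)
        arith.MaximumFOp(a, b)  # formerly arith.MaxFOp / arith.maxf
        arith.MinimumFOp(a, b)  # formerly arith.MinFOp / arith.minf
    print(module)
```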
Ingo Müller
ca23c933bd [mlir][python] Create all missing attribute builders.
This patch adds attribute builders for all buildable attributes from the
builtin dialect that did not previously have any. These builders can be
used to construct attributes of a particular type identified by a string
from a Python argument without knowing the details of how to pass that
Python argument to the attribute constructor. This is used, for example,
in the generated code of the Python bindings of ops.

The list of "all" attributes was produced with:

(
  grep -h "ods_ir.AttrBuilder.get" $(find ../build/ -name "*_ops_gen.py") \
    | cut -f2 -d"'"
  git grep -ho "^def [a-zA-Z0-9_]*" -- include/mlir/IR/CommonAttrConstraints.td \
    | cut -f2 -d" "
) | sort -u

Then, I only retained those that had an occurrence in
`mlir/include/mlir/IR`. In particular, this drops many dialect-specific
attributes; registering those builders is something that those dialects
should do. Finally, I removed those attributes that had a match in
`mlir/python/mlir/ir.py` already and implemented the remaining ones. The
only ones that still miss a builder now are the following:

* Represent more than one possible attribute type:
  - `Any.*Attr` (9x)
  - `IntNonNegative`
  - `IntPositive`
  - `IsNullAttr`
  - `ElementsAttr`
* I am not sure what "constant attributes" are:
  - `ConstBoolAttrFalse`
  - `ConstBoolAttrTrue`
  - `ConstUnitAttr`
* `Location` not exposed by Python bindings:
  - `LocationArrayAttr`
  - `LocationAttr`
* `get` function not implemented in Python bindings:
  - `StringElementsAttr`

This patch also fixes a compilation problem with
`I64SmallVectorArrayAttr`.

Reviewed By: makslevental, rkayaith

Differential Revision: https://reviews.llvm.org/D159403
2023-09-06 07:09:25 +00:00
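A small sketch of how these builders are looked up and how a downstream dialect might register its own (the `MyDialectWidthAttr` name is made up for illustration):

```python
from mlir.ir import (
    AttrBuilder,
    Context,
    IntegerAttr,
    IntegerType,
    register_attribute_builder,
)

# A dialect can register a builder for one of its own attribute constraints.
@register_attribute_builder("MyDialectWidthAttr")  # hypothetical name
def _my_width_attr(x, context):
    return IntegerAttr.get(IntegerType.get_signless(16, context=context), x)

with Context() as ctx:
    # Generated op builders resolve builders by constraint name like this.
    i64_attr = AttrBuilder.get("I64Attr")(42, context=ctx)
    width_attr = AttrBuilder.get("MyDialectWidthAttr")(3, context=ctx)
    print(i64_attr, width_attr)
```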
Felix Schneider
2a603deec4 [mlir][Python] Fix conversion of non-zero offset memrefs to np.arrays
Memref descriptors contain an `offset` field that denotes the start of
the content of the memref relative to the `alignedPtr`. This offset is
not considered when converting a memref descriptor to a np.array in the
Python runtime library, essentially treating all memrefs as if they had
an offset of zero. This patch adds the necessary pointer arithmetic to
the memref->numpy conversion functions so that they find the actual
beginning of the memref contents.

There is an ongoing discussion about whether the `offset` field is needed
at all in the memref descriptor.
Until that is decided, the Python runtime and CRunnerUtils should
still correctly implement the offset handling.

Related: https://reviews.llvm.org/D157008

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D158494
2023-09-05 08:02:59 +00:00
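A rough sketch of the case being fixed, driving the conversion helpers from `mlir.runtime` directly: a descriptor whose `offset` field is non-zero should map back to the corresponding slice of the underlying buffer.

```python
import ctypes
import numpy as np
from mlir.runtime import get_ranked_memref_descriptor, ranked_memref_to_numpy

a = np.arange(12, dtype=np.float32)
desc = get_ranked_memref_descriptor(a)
desc.offset = 4     # pretend the memref view starts 4 elements into the buffer
desc.shape[0] = 8
back = ranked_memref_to_numpy(ctypes.pointer(desc))
# Holds only when the offset is taken into account by the conversion.
assert np.array_equal(back, a[4:])
```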
Ingo Müller
33278321e6 [mlir][linalg][transform][python] Make divisor arg to Multitile optional
The mix-in of the `MultiTileSizesOp` set the default value of its
`divisor` argument. This repeats information from the tablegen
definition, is not necessary (since the generic code deals with `None`
and default values), and risks getting out of sync without people
noticing. This patch removes the setting of the value and forwards
`None` to the generic constructor instead.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D159416
2023-09-04 11:32:56 +00:00
Ingo Müller
ea4a5127c4 [mlir][linalg][transform][python] Refactor TileOp mix-in.
This patch simplifies and improves the mix-in of the `TileOp`. In
particular:

* Accept all types of sizes (static, dynamic, scalable) in a single
  argument `sizes`.
* Use the existing convenience function to dispatch different types of
  sizes instead of repeating the implementation in the mix-in.
* Pass `None` values of optional arguments on as-is to the init function
  of the super class.
* Reformat with default indentation width (4 spaces vs 2 spaces).
* Add a test for providing scalable sizes.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D159417
2023-09-04 11:32:14 +00:00
Ingo Müller
d5946fd3ed [mlir][linalg][transform][python] Simplify mix-in of PadOp.
This patch removes some manual conversion of mixed Python/attribute
arguments to `I64ArrayAttr`s, which turned out to be unnecessary.
Interestingly, this change does not depend on the additional attribute
builders added in the (currently pending)
https://reviews.llvm.org/D159403 patch.

Reviewed By: ftynse

Differential Revision: https://reviews.llvm.org/D159419
2023-09-04 11:30:50 +00:00
Ingo Müller
9b91ae13f4 [mlir][python][linalg][transform] Forward missing params in mix-ins. 2023-09-04 07:59:12 +00:00