This reduces the code generated for type inference and instead reuses
facilities on the CAPI side that perform the same role.
Differential Revision: https://reviews.llvm.org/D156041
Update remaining `PyAttribute`-returning APIs to return `MlirAttribute` instead,
so that they go through the downcasting mechanism.
Reviewed By: makslevental
Differential Revision: https://reviews.llvm.org/D154462
The bytecode writer config was heap-allocated, but was never freed, causing ASAN errors.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D153440
depends on D150839
This diff uses `MlirTypeID` to register `TypeCaster`s (i.e., `[](PyType pyType) -> DerivedTy { return pyType; }`) for all concrete types (i.e., `PyConcrete<...>`) that are then queried for (by `MlirTypeID`) and called in `struct type_caster<MlirType>::cast`. The result is that anywhere an `MlirType mlirType` is returned from a python binding, that `mlirType` is automatically cast to the correct concrete type. For example:
```
c0 = arith.ConstantOp(f32, 0.0)
# CHECK: F32Type(f32)
print(repr(c0.result.type))
unranked_tensor_type = UnrankedTensorType.get(f32)
unranked_tensor = tensor.FromElementsOp(unranked_tensor_type, [c0]).result
# CHECK: UnrankedTensorType
print(type(unranked_tensor.type).__name__)
# CHECK: UnrankedTensorType(tensor<*xf32>)
print(repr(unranked_tensor.type))
```
This functionality immediately extends to typed attributes (i.e., `attr.type`).
The diff also implements similar functionality for `mlir_type_subclass`es, but in a slightly different way: for such types (which have no corresponding C++ `class` or `struct`), the user must provide a type caster in Python (similar to how `AttrBuilder` works) or in C++ as a `py::cpp_function`.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D150927
This fixes a -Wunused-member-function warning, at the moment
`PyRegionIterator` is never constructed by anything (the only use was
removed in D111697), and iterating over region lists is just falling
back to a generic python iterator object.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D150244
Currently blocks are always created with UnknownLocs for their arguments. This
adds an `arg_locs` argument to all block creation APIs, which takes an optional
sequence of locations to use, one per block argument. If no locations are
supplied, the current Location context is used.
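A minimal usage sketch (assuming `arg_locs` is accepted as a keyword on `Block.create_at_start`; the `custom.op` op and the named locations are illustrative):
```
from mlir.ir import Block, Context, F32Type, Location, Operation

with Context() as ctx, Location.unknown():
    ctx.allow_unregistered_dialects = True
    op = Operation.create("custom.op", regions=1)
    f32 = F32Type.get()
    # One location per block argument.
    locs = [Location.name("a"), Location.name("b")]
    block = Block.create_at_start(op.regions[0], [f32, f32], arg_locs=locs)
    # With arg_locs omitted, the arguments pick up the current Location
    # context (Location.unknown() here) instead of always using UnknownLoc.
```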
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D150084
This diff adds python bindings for `MlirTypeID`. It paves the way for returning accurately typed `Type`s from python APIs (see D150927) and then further along building type "conscious" `Value` APIs (see D150413).
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D150839
We can't return well-formed IR output while also allowing the version to be
bumped up during emission. Previously it would return the minimum version but
potentially invalid IR, which was confusing; instead, make it return an error
and abort immediately.
Differential Revision: https://reviews.llvm.org/D149569
Add a method to set the desired bytecode file format to generate. Change the
write method so it can return a status that includes the minimum bytecode
version needed by the reader. This enables generating an older version of the
bytecode format (but not of dialect ops, attributes, or types). It does not
guarantee that an older version can always be generated, e.g., if a dialect
uses a new encoding only available at a later bytecode version.
The setting is clamped to at most the current version.
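A hedged sketch of how this surfaces in the Python bindings, assuming `write_bytecode` exposes the desired format as a `desired_version` keyword (the version number below is illustrative):
```
import io

from mlir.ir import Context, Module

with Context():
    module = Module.parse("module {}")
    buf = io.BytesIO()
    # Request an older bytecode format; emission fails with an error if a
    # dialect in the module requires an encoding only available in newer
    # bytecode versions.
    module.operation.write_bytecode(buf, desired_version=1)
```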
Differential Revision: https://reviews.llvm.org/D146555
This resolves some warnings when building with C++20, e.g.:
```
llvm-project/mlir/lib/Bindings/Python/IRAffine.cpp:545:60: warning: ISO C++20 considers use of overloaded operator '==' (with operand types 'mlir::python::PyAffineExpr' and 'mlir::python::PyAffineExpr') to be ambiguous despite there being a unique best viable function [-Wambiguous-reversed-operator]
PyAffineExpr &other) { return self == other; })
~~~~ ^ ~~~~~
llvm-project/mlir/lib/Bindings/Python/IRAffine.cpp:350:20: note: ambiguity is between a regular call to this operator and a call with the argument order reversed
bool PyAffineExpr::operator==(const PyAffineExpr &other) {
^
```
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D147018
This updates most (all?) error-diagnostic-emitting python APIs to
capture error diagnostics and include them in the raised exception's
message:
```
>>> Operation.parse('"arith.addi"() : () -> ()'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
mlir._mlir_libs.MLIRError: Unable to parse operation assembly:
error: "-":1:1: 'arith.addi' op requires one result
note: "-":1:1: see current operation: "arith.addi"() : () -> ()
```
The diagnostic information is available on the exception for users who
may want to customize the error message:
```
>>> try:
... Operation.parse('"arith.addi"() : () -> ()')
... except MLIRError as e:
... print(e.message)
... print(e.error_diagnostics)
... print(e.error_diagnostics[0].message)
...
Unable to parse operation assembly
[<mlir._mlir_libs._mlir.ir.DiagnosticInfo object at 0x7fed32bd6b70>]
'arith.addi' op requires one result
```
Error diagnostics captured in exceptions aren't propagated to diagnostic
handlers, to avoid double-reporting of errors. The context-level
`emit_error_diagnostics` option can be used to revert to the old
behaviour, causing error diagnostics to be reported to handlers instead
of as part of exceptions.
API changes:
- `Operation.verify` now raises an exception on verification failure,
instead of returning `false`
- The exception raised by the following methods has been changed to
`MLIRError`:
- `PassManager.run`
- `{Module,Operation,Type,Attribute}.parse`
- `{RankedTensorType,UnrankedTensorType}.get`
- `{MemRefType,UnrankedMemRefType}.get`
- `VectorType.get`
- `FloatAttr.get`
closes #60595
depends on D144804, D143830
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D143869
The raw `OpView` classes are used to bypass the constructors of `OpView`
subclasses, but having a separate class can create some confusing
behaviour, e.g.:
```
op = MyOp(...)
# fails, lhs is 'MyOp', rhs is '_MyOp'
assert type(op) == type(op.operation.opview)
```
Instead we can use `__new__` to achieve the same thing without a
separate class:
```
my_op = MyOp.__new__(MyOp)
OpView.__init__(my_op, op)
```
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D143830
Currently the bindings only allow for parsing IR with a top-level
`builtin.module` op, since the parse APIs insert an implicit module op.
This change adds `Operation.parse`, which returns whatever top-level op
is actually in the source.
To simplify parsing of specific operations, `OpView.parse` is also
added, which handles the error checking for `OpView` subclasses.
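For example (a sketch using the upstream `func` dialect):
```
from mlir.ir import Context, Operation
from mlir.dialects import func

with Context():
    # The top-level op is returned as-is instead of being wrapped in an
    # implicit builtin.module.
    op = Operation.parse("func.func private @f()")
    print(op.operation.name)  # func.func

    # OpView.parse does the error checking for a specific OpView subclass.
    f = func.FuncOp.parse("func.func private @g()")
```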
Reviewed By: ftynse, stellaraccident
Differential Revision: https://reviews.llvm.org/D143352
The asm printer grew the ability to automatically fall back to the
generic format for invalid ops, so this logic doesn't need to be in the
bindings anymore. The printer already handles suppressing diagnostics
that get emitted while checking if the op is valid.
Reviewed By: mehdi_amini, stellaraccident
Differential Revision: https://reviews.llvm.org/D144805
Fix Python 3.6.9 issue encountered due to type checking here. Will
add back in follow up.
This reverts commit 1f47fee2948ef48781084afe0426171d000d7997.
For cases where we can automatically construct the Attribute, allow for more
user-friendly input. This is consistent with C++ builder generation, as well as
with the choice of which single builder to generate here (the most
specialized/user-friendly one).
Registration of attribute builders for more pythonic input is all on the Python side.
The downsides are that
* there is extra checking to see if the user provided a custom builder in op builders,
* the ODS attribute name is load bearing.
The upsides are that
* it is easy to change these or register dialect-specific ones in downstream projects,
* adding support for, or changing to, different convenience builders happens alongside
the rest of the convenience functions in Python (with no additional changes
to the tablegen file and no recompilation needed).
Allow for both building with Attributes as well as with raw inputs. This change
should therefore be backwards compatible, while also allowing an existing
Attribute to be reused rather than recreated.
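A hedged sketch of the Python-side registration, assuming the decorator is spelled `register_attribute_builder` and builders receive the raw value plus a `context` argument; the ODS attribute name below is hypothetical (and, per the note above, load bearing):
```
from mlir.ir import IntegerAttr, IntegerType, register_attribute_builder

@register_attribute_builder("MyDialect_I32Attr")
def _my_i32_attr(x, context):
    # Accept a plain Python int from the op builder and construct the
    # IntegerAttr on the caller's behalf; an already-built Attribute is
    # still accepted and used as-is.
    return IntegerAttr.get(IntegerType.get_signless(32, context=context), x)
```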
Differential Revision: https://reviews.llvm.org/D139568
This adds a simple PyOpOperand based on MlirOpOperand, which has
properties for the owner op and the operand number.
This also adds a PyOpOperandIterator that defines methods for __iter__
and __next__ so PyOpOperands can be iterated over using the
MlirOpOperand C API.
Finally, a uses pseudo-container is added to PyValue so the uses can
be iterated generically.
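A small usage sketch (assuming the properties are exposed as `owner` and `operand_number`; the `custom.*` ops are unregistered placeholders):
```
from mlir.ir import Context, IndexType, Location, Operation

with Context() as ctx, Location.unknown():
    ctx.allow_unregistered_dialects = True
    index = IndexType.get()
    producer = Operation.create("custom.def", results=[index])
    value = producer.results[0]
    consumer = Operation.create("custom.use", operands=[value, value])
    for use in value.uses:
        # Each OpOperand exposes the owning op and the operand slot it occupies.
        print(use.owner.name, use.operand_number)
```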
Depends on D139596
Reviewed By: stellaraccident, jdd
Differential Revision: https://reviews.llvm.org/D139597
This allows us to hash Blocks and use them in sets or as parts of larger
hashable objects. The implementation is the same as for other core IR
constructs: the C API object's pointer is hashed.
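For example (a minimal sketch):
```
from mlir.ir import Context, Module

with Context():
    m = Module.parse("module {}")
    cache = {m.body: "entry"}
    # Blocks hash by the underlying C API pointer, so a re-fetched handle to
    # the same block finds the existing entry.
    assert m.body in cache
```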
Differential Revision: https://reviews.llvm.org/D139599
This adds an `enable` flag to OpPrintingFlags::enableDebugInfo
that allows for overriding any command line flags for debug printing,
and matches the format that we use for other `enableBlah` APIs.
This adds a `write_bytecode` method to the Operation class.
The method takes a file handle and writes the binary blob to it.
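For example (a minimal sketch):
```
from mlir.ir import Context, Module

with Context():
    module = Module.parse("module {}")
    # write_bytecode takes a binary file handle and writes the blob to it.
    with open("example.mlirbc", "wb") as f:
        module.operation.write_bytecode(f)
```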
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D133210
This reland includes changes to the Python bindings.
Switch variadic operand and result segment size attributes to use the
dense i32 array. Dense integer arrays were introduced primarily to
represent index lists. They are a better fit for segment sizes than
dense elements attrs.
Depends on D131801
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D131803
Previously, calling `Value.owner()` would trigger a C++ assertion in debug
builds if `Value` was a block argument. Additionally, the behavior was just wrong
in release builds. This patch adds support for BlockArg Values.
Previously the elements of the notes tuple would be invalid objects when
accessed from a diagnostic handler, resulting in a segfault when used.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D129943
The type extraction helper function for block argument and op result
list objects was ignoring the slice entirely. So was the slice addition.
Both are caused by a misleading naming convention used to implement slices
via CRTP. Make the convention more explicit and hide the helper
functions so users have a harder time calling them directly.
Closes #56540.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D130271
Since the very first commits, the Python and C MLIR APIs have had mis-placed registration/load functionality for dialects, extensions, etc. This was done pragmatically in order to get bootstrapped and then just grew in. Downstreams largely bypass and do their own thing by providing various APIs to register things they need. Meanwhile, the C++ APIs have stabilized around this and it would make sense to follow suit.
The thing we have observed in canonical usage by downstreams is that each downstream tends to have native entry points that configure its installation to its preferences with one-stop APIs. This patch leans in to this approach with `RegisterEverything.h` and `mlir._mlir_libs._mlirRegisterEverything` being the one-stop entry points for the "upstream packages". The `_mlir_libs.__init__.py` now allows customization of the environment and Context by adding "initialization modules" to the `_mlir_libs` package. If present, `_mlirRegisterEverything` is treated as such a module. Others can be added by downstreams by adding a `_site_initialize_{i}.py` module, where '{i}' is a number starting with zero. The number will be incremented and corresponding module loaded until one is not found. Initialization modules can:
* Perform load time customization to the global environment (i.e. registering passes, hooks, etc).
* Define a `register_dialects(registry: DialectRegistry)` function that can extend the `DialectRegistry` that will be used to bootstrap the `Context`.
* Define a `context_init_hook(context: Context)` function that will be added to a list of callbacks which will be invoked after dialect registration during `Context` initialization.
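For illustration, a downstream `_site_initialize_0.py` might look like the following (a sketch based on the hooks above; the `_myDialectsExtension` native module and its `register_dialects` entry point are hypothetical):
```
# _site_initialize_0.py, placed in the _mlir_libs package by a downstream.
from ._myDialectsExtension import register_dialects as _register_my_dialects


def register_dialects(registry):
    # Extend the DialectRegistry used to bootstrap every new Context.
    _register_my_dialects(registry)


def context_init_hook(context):
    # Invoked after dialect registration during Context initialization.
    context.load_all_available_dialects()
```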
Note that `MLIRPythonExtension.RegisterEverything` is not included by default when building a downstream (its corresponding behavior was included by default prior to this change). For downstreams which need the default MLIR initialization to take place, they must add this back in to their Python CMake build just like they add their own components (i.e. to `add_mlir_python_common_capi_library` and `add_mlir_python_modules`). It is perfectly valid to not do this, in which case, only the things explicitly depended on and initialized by downstreams will be built/packaged. If the downstream has not been set up for this, it is recommended to simply add this back for the time being and pay the build time/package size cost.
CMake changes:
* `MLIRCAPIRegistration` -> `MLIRCAPIRegisterEverything` (renamed to signify what it does and force an evaluation: a number of places were incidentally linking this very expensive target)
* `MLIRPythonSources.Passes` removed (without replacement: just drop)
* `MLIRPythonExtension.AllPassesRegistration` removed (without replacement: just drop)
* `MLIRPythonExtension.Conversions` removed (without replacement: just drop)
* `MLIRPythonExtension.Transforms` removed (without replacement: just drop)
Header changes:
* `mlir-c/Registration.h` is deleted. Dialect registration functionality is now in `IR.h`. Registration of upstream features are in `mlir-c/RegisterEverything.h`. When updating MLIR and a couple of downstreams, I found that proper usage was commingled so required making a choice vs just blind S&R.
Python APIs removed:
* mlir.transforms and mlir.conversions (previously only had an __init__.py which indirectly triggered `mlirRegisterTransformsPasses()` and `mlirRegisterConversionPasses()` respectively). Downstream impact: Remove these imports if present (they now happen as part of default initialization).
* mlir._mlir_libs._all_passes_registration, mlir._mlir_libs._mlirTransforms, mlir._mlir_libs._mlirConversions. Downstream impact: None expected (these were internally used).
C-APIs changed:
* mlirRegisterAllDialects(MlirContext) now takes an MlirDialectRegistry instead. It also used to trigger loading of all dialects, which was already marked with a TODO to remove -- it no longer does, and for direct use, dialects must be explicitly loaded. Downstream impact: Direct C-API users must ensure that needed dialects are loaded or call `mlirContextLoadAllAvailableDialects(MlirContext)` to emulate the prior behavior. Also see the `ir.c` test case (e.g. ` mlirContextGetOrLoadDialect(ctx, mlirStringRefCreateFromCString("func"));`).
* mlirDialectHandle* APIs were moved from Registration.h (which now is restricted to just global/upstream registration) to IR.h, arguably where it should have been. Downstream impact: include correct header (likely already doing so).
C-APIs added:
* mlirContextLoadAllAvailableDialects(MlirContext): Corresponds to C++ API with the same purpose.
Python APIs added:
* mlir.ir.DialectRegistry: Mapping for an MlirDialectRegistry.
* mlir.ir.Context.append_dialect_registry(MlirDialectRegistry)
* mlir.ir.Context.load_all_available_dialects()
* mlir._mlir_libs._mlirAllRegistration: New native extension that exposes a `register_dialects(MlirDialectRegistry)` entry point and performs all upstream pass/conversion/transforms registration on init. In this first step, we eagerly load this as part of the __init__.py and use it to monkey patch the Context to emulate prior behavior.
* Type caster and capsule support for MlirDialectRegistry
This should make it possible to build downstream Python dialects that only depend on a subset of MLIR. See: https://github.com/llvm/llvm-project/issues/56037
Here is an example PR, minimally adapting IREE to these changes: https://github.com/iree-org/iree/pull/9638/files In this situation, IREE is opting to not link everything, since it is already configuring the Context to its liking. For projects that would just like to not think about it and pull in everything, add `MLIRPythonExtension.RegisterEverything` to the list of Python sources getting built, and the old behavior will continue.
Reviewed By: mehdi_amini, ftynse
Differential Revision: https://reviews.llvm.org/D128593
The useLocalScope printing flag has been passed around between pybind methods, but doesn't actually enable the corresponding printing flag.
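For example, after this fix (a small sketch):
```
from mlir.ir import Context, Module

with Context():
    module = Module.parse("module {}")
    # use_local_scope now actually reaches the underlying printing flags.
    print(module.operation.get_asm(use_local_scope=True))
```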
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D127907
Currently, building mlir with the python bindings enabled on Windows in Debug is broken because pybind11, python and cmake don't like to play together. This change normalizes how the three interact, so that the builds can now run and succeed.
The main issue is that python and cmake both make assumptions about which libraries are needed in a Windows build based on the flavor.
- cmake assumes that a debug (or a debug-like) flavor of the build will always require pythonX_d.lib and provides no option/hint to tell it to use a different library. cmake does find both the debug and release versions, but then uses the debug library.
- python (specifically pyconfig.h and by extension python.h) hardcodes the dependency on pythonX_d.lib or pythonX.lib depending on whether `_DEBUG` is defined. This is NOT transparent - it does not show up anywhere in the build logs until the link step fails with `pythonX_d.lib is missing` (or `pythonX.lib is missing`)
- pybind11 tries to "fix" this by implementing a workaround - unless Py_DEBUG is defined, `_DEBUG` is explicitly undefined right before including python headers. This also requires some windows headers to be included differently, so while clever, this is a non-trivial workaround.
mlir itself includes the pybind11 headers (which contain the workaround) AS WELL AS python.h, essentially always requiring both pythonX.lib and pythonX_d.lib for linking. cmake explicitly only adds one or the other, so the build fails.
This change does a couple of things:
- In the cmake files, explicitly add the release version of the python library on Windows builds regardless of flavor. Since Py_DEBUG is not defined, pybind11 will always require the release library, and that requirement will be satisfied.
- To satisfy python as well, this change removes any explicit inclusions of Python.h on Windows, instead relying on the fact that the pybind11 headers will bring in what is needed.
There are a few additional things that we could do but I rejected as unnecessary at this time:
- define Py_DEBUG based on the CMAKE_BUILD_TYPE - this will *mostly* work, we'd have to think through multiconfig generators like VS, but it's possible. There doesn't seem to be a need to link against debug python at the moment, so I chose not to overcomplicate the build and always default to release
- similar to above, but define Py_DEBUG based on the CMAKE_BUILD_TYPE *as well as* the presence of the debug python library (`Python3_LIBRARY_DEBUG`). Similar to above, this seems unnecessary right now. I think it's slightly better than above because most people don't actually have the debug version of python installed, so this would prevent breaks in that case.
- similar to the two above, but add a cmake variable to control the logic
- implement the pybind11 workaround directly in mlir (specifically in Interop.h) so that Python.h can still be included directly. This seems prone to error and a pain to maintain in lock step with pybind11
- reorganize how the pybind11 headers are included and place at least one of them in Interop.h directly, so that the header has all of its dependencies included as was the original intention. I decided against this because it really doesn't need pybind11 logic and it's always included after pybind11 is, so we don't necessarily need the python includes
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D125284
Introduce a method on PyMlirContext (and plumb it through to Python) to
invalidate all of the operations in the live operations map and clear
it. Since Python has no notion of private data, an end-developer could
reach into some 3rd party API which uses the MLIR Python API (that is
behaving correctly with regard to holding references) and grab a
reference to an MLIR Python Operation, preventing it from being
deconstructed out of the live operations map. This allows the API
developer to clear the map when it calls C++ code which could delete
operations, protecting itself from its users.
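A hedged sketch of the intended usage, assuming the method is exposed on `Context` as `_clear_live_operations()` (the Python-level name is an assumption here):
```
from mlir.ir import Context, Module

with Context() as ctx:
    module = Module.parse("module {}")
    op = module.operation  # an entry now exists in the live operations map
    # Before calling into C++ code that may delete operations, invalidate
    # all live Python operation wrappers and clear the map.
    ctx._clear_live_operations()
```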
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D123895
Adds `mlirBlockDetach` to the CAPI to remove a block from its parent
region. Use it in the Python bindings to implement
`Block.append_to(region)`.
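For example (a sketch; the `custom.*` ops are unregistered placeholders):
```
from mlir.ir import Block, Context, Location, Operation

with Context() as ctx, Location.unknown():
    ctx.allow_unregistered_dialects = True
    a = Operation.create("custom.a", regions=1)
    b = Operation.create("custom.b", regions=1)
    block = Block.create_at_start(a.regions[0], [])
    # Detach the block from its current parent region and append it to the
    # other op's region.
    block.append_to(b.regions[0])
```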
Reviewed By: mehdi_amini
Differential Revision: https://reviews.llvm.org/D123165