Why? https://nanobind.readthedocs.io/en/latest/why.html says it better
than I can, but my primary motivation for this change is to improve MLIR
IR construction time from JAX.
For a complicated Google-internal LLM model in JAX, this change improves
the MLIR
lowering time by around 5s (out of around 30s), which is a significant
speedup for simply switching binding frameworks.
To a large extent, this is a mechanical change, for instance changing
`pybind11::`
to `nanobind::`.
Notes:
* this PR requires nanobind 2.4.0, because it depends on a bug fix
(https://github.com/wjakob/nanobind/pull/806) that landed in that
release.
* this PR does not port the in-tree dialect extension modules. They can
be ported in a future PR.
* I removed the `py::sibling()` annotations from `def_static` and `def_class`
in `PybindAdaptors.h`. These ask pybind11 to try to form an overload
with an existing method, but it is not possible to form mixed
pybind11/nanobind overloads this way, and the parent class is now
defined in nanobind. Better solutions may be possible here.
* nanobind does not contain an exact equivalent of pybind11's buffer
protocol support. It was not hard to add a nanobind implementation of a
similar API.
* nanobind is pickier about casting to `std::vector<bool>`: it expects
the input to be a sequence of actual `bool` objects, not merely truthy
values. In a couple of places I added code to support truthy values
during casting (see the sketch after these notes).
* nanobind distinguishes bytes (`nb::bytes`) from strings (e.g.,
`std::string`). This required nb::bytes overloads in a few places.
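To illustrate the `std::vector<bool>` note, here is the kind of call site the added casting support keeps working; `DenseBoolArrayAttr.get` is a plausible binding backed by `std::vector<bool>` (a sketch, not taken from the PR's tests):
```python
from mlir import ir

with ir.Context():
    # A sequence of real bools is accepted by nanobind as-is.
    attr = ir.DenseBoolArrayAttr.get([True, False, True])
    # Merely truthy values would be rejected by a strict nanobind
    # caster; the added compatibility code keeps this form working.
    attr = ir.DenseBoolArrayAttr.get([1, 0, 1])
    print(attr)
```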
In the tablegen-generated Python bindings, we typically see a pattern
like:
```python
class ConstantOp(_ods_ir.OpView):
...
def __init__(self, value, *, loc=None, ip=None):
...
super().__init__(self.build_generic(attributes=attributes, operands=operands, successors=_ods_successors, regions=regions, loc=loc, ip=ip))
```
i.e., the generated code calls `OpView.__init__()` with the output of
`build_generic`. The purpose of `OpView` is to wrap another operation
object, and `OpView.__init__` can accept any `PyOperationBase`
subclass; presumably the intention is that `build_generic` returns a
`PyOperation`, so the user ends up with a `PyOpView` wrapping a
`PyOperation`.
However, `PyOpView::buildGeneric` calls `PyOperation::create`, which
does not just build a PyOperation, but it also calls `createOpView` to
wrap that operation in a subclass of `PyOpView` and returns that view.
But that's rather pointless: we called this code from the constructor of
an `OpView` subclass, so we already have a view object ready to go; we
don't need to build another one!
If we change `PyOperation::create` to return the underlying
`PyOperation`, rather than a view wrapper, we can save allocating a
useless `PyOpView` object for each ODS-generated Python object.
This saves approximately 1.5s of Python time in a JAX LLM benchmark that
generates a mixture of upstream dialects and StableHLO.
The MLIR C and Python bindings expose various methods from
`mlir::OpPrintingFlags`. This PR adds a binding for the `skipRegions`
method, which allows skipping the printing of regions when printing
ops. It also exposes this option as a parameter in the Python `get_asm`
and `print` methods.
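A sketch of the resulting usage, assuming the Python parameter is spelled `skip_regions`:
```python
from mlir import ir

with ir.Context():
    module = ir.Module.parse(
        r"""
        func.func @f(%arg0: i32) -> i32 {
          return %arg0 : i32
        }
        """
    )
    func_op = module.body.operations[0]
    # Print only the function header, eliding its body region.
    print(func_op.get_asm(skip_regions=True))
```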
The PR implements MLIR Python Bindings for a few simple edit operations
on Block arguments, namely, `add_argument`, `erase_argument`, and
`erase_arguments`.
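A sketch of the new operations; the exact signatures are assumptions (`add_argument` presumably takes a type and a location, and `erase_argument` an index):
```python
from mlir import ir

with ir.Context():
    module = ir.Module.parse(
        r"""
        func.func @f(%arg0: i32) {
          return
        }
        """
    )
    block = module.body.operations[0].regions[0].blocks[0]
    i64 = ir.IntegerType.get_signless(64)
    # Append a new block argument (for illustration only; the enclosing
    # function type is not updated here).
    new_arg = block.add_argument(i64, ir.Location.unknown())
    # Erase the argument we just added, by index.
    block.erase_argument(len(block.arguments) - 1)
```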
When an operation is erased in Python, its children may still be in the
"live" list inside Python bindings. After this, if some of the newly
allocated operations happen to reuse the same pointer address, this will
trigger an assertion in the bindings. This assertion would be incorrect
because the operations aren't actually live. Make sure we remove the
children operations from the "live" list when erasing the parent.
This also concentrates responsibility over the removal from the "live"
list and invalidation in a single place.
Note that this requires the IR to be sufficiently structurally valid
that a walk through it can succeed. If this invariant was broken by,
e.g., a C++ pass called from Python, there isn't much we can do.
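A sketch of the scenario this guards against:
```python
from mlir import ir

with ir.Context():
    module = ir.Module.parse(
        r"""
        func.func @f() {
          return
        }
        """
    )
    func_op = module.body.operations[0]
    # Holding a handle to a child puts it in the "live" operations map.
    ret_op = func_op.regions[0].blocks[0].operations[0]
    # Erasing the parent must also drop `ret_op` from the live map;
    # otherwise a new operation later allocated at the same address
    # would trip an (incorrect) liveness assertion in the bindings.
    func_op.operation.erase()
```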
If the Python callback throws an error, the C++ code will throw a
`py::error_already_set` that needs to be caught and handled in the C++
code.
This change is inspired by the similar solution in
PySymbolTable::walkSymbolTables.
This commit adds a `walk` method to `PyOperationBase` that takes a
Python object as a callback, e.g. `op.walk(callback)`. Currently the
callback must return a walk result explicitly.
We (SiFive) have implemented a walk method in Python in our internal
Python tool for a while. However, the overhead of Python is expensive,
and it didn't scale well for large MLIR files. Just replacing that walk
with this version reduced the entire execution time of the tool by
30~40%, and there are a few configs for which the tool takes several
hours to finish, so this commit significantly improves tool performance.
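For example, assuming the result enum is exposed as `ir.WalkResult` with an `ADVANCE` case:
```python
from mlir import ir

with ir.Context():
    module = ir.Module.parse(
        r"""
        func.func @f(%arg0: i32) -> i32 {
          %0 = arith.addi %arg0, %arg0 : i32
          return %0 : i32
        }
        """
    )

    def callback(op):
        print(op.name)
        # The callback must return a walk result explicitly.
        return ir.WalkResult.ADVANCE

    module.operation.walk(callback)
```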
Currently, a method exists to get the count of the operation objects
which are still alive. This helps for sanity checking, but isn't
terribly useful for debugging. This new method returns the actual
operation objects which are still alive.
This allows Python code like the following:
```python
gc.collect()
live_ops = ir.Context.current._get_live_operation_objects()
for op in live_ops:
    print(f"Warning: {op} is still live. Referrers:")
    for referrer in gc.get_referrers(op):
        print(f"  {referrer}")
```
This reverts commit bbc29768683b394b34600347f46be2b8245ddb30.
This change seems to be at odds with the non-owning semantics of
`MlirOperation` in the C API. Since downstream clients can only take and
return `MlirOperation`, it does not sound correct to force all returns
of `MlirOperation` to transfer ownership. Specifically, this makes it
impossible for downstreams to implement IR-traversing functions that,
e.g., look at neighbors of an operation.
The following patch triggers the exception, and there does not seem to
be an alternative way for a downstream binding writer to express this:
```diff
diff --git a/mlir/lib/Bindings/Python/IRCore.cpp b/mlir/lib/Bindings/Python/IRCore.cpp
index 39757dfad5be..2ce640674245 100644
--- a/mlir/lib/Bindings/Python/IRCore.cpp
+++ b/mlir/lib/Bindings/Python/IRCore.cpp
@@ -3071,6 +3071,11 @@ void mlir::python::populateIRCore(py::module &m) {
           py::arg("successors") = py::none(), py::arg("regions") = 0,
           py::arg("loc") = py::none(), py::arg("ip") = py::none(),
           py::arg("infer_type") = false, kOperationCreateDocstring)
+      .def("_get_first_in_block", [](PyOperation &self) -> MlirOperation {
+        MlirBlock block = mlirOperationGetBlock(self.get());
+        MlirOperation first = mlirBlockGetFirstOperation(block);
+        return first;
+      })
       .def_static(
           "parse",
           [](const std::string &sourceStr, const std::string &sourceName,
diff --git a/mlir/test/python/ir/operation.py b/mlir/test/python/ir/operation.py
index f59b1a26ba48..6b12b8da5c24 100644
--- a/mlir/test/python/ir/operation.py
+++ b/mlir/test/python/ir/operation.py
@@ -24,6 +24,25 @@ def expect_index_error(callback):
     except IndexError:
         pass
 
 
+@run
+def testCustomBind():
+    ctx = Context()
+    ctx.allow_unregistered_dialects = True
+    module = Module.parse(
+        r"""
+      func.func @f1(%arg0: i32) -> i32 {
+        %1 = "custom.addi"(%arg0, %arg0) : (i32, i32) -> i32
+        return %1 : i32
+      }
+    """,
+        ctx,
+    )
+    add = module.body.operations[0].regions[0].blocks[0].operations[0]
+    op = add.operation
+    # This will get a reference to itself.
+    f1 = op._get_first_in_block()
+
+
 # Verify iterator based traversal of the op/region/block hierarchy.
 # CHECK-LABEL: TEST: testTraverseOpRegionBlockIterators
```
This fixes a longstanding bug in the `Context._CAPICreate` method
whereby it was not taking ownership of the PyMlirContext wrapper when
casting to a Python object. The result was minimally that all such
contexts transferred in that way would leak. In addition, counter to the
documentation for the `_CAPICreate` helper (see
`mlir-c/Bindings/Python/Interop.h`) and the `forContext` /
`forOperation` methods, we were silently upgrading any unknown
context/operation pointer to steal-ownership semantics. This is
dangerous and was causing some subtle bugs downstream where this
facility is getting the most use.
This patch corrects the semantics so that an ownership transfer happens
only for `_CAPICreate`, and it further requires that it actually is an
ownership transfer (previously, if already transferred, it just silently
succeeded). Removing the misaligned behavior made it clear where the
downstream was doing the wrong thing.
It also adds some `_testing_` functions to create unowned context and
operation capsules so that this can be fully tested upstream, reworking
the tests to verify the behavior.
In some torture testing downstream, I was not able to trigger any memory
corruption with the newly enforced semantics. When getting it wrong, a
regular exception is raised.
This reverts commit 0d109035c29408f06efc148d67aab7e4b2aada5d.
The changes make the Python bindings unbuildable without additional
CMake modifications (or a modified `$PATH`):
```
/llvm-project/mlir/lib/Bindings/Python/IRCore.cpp:33:10: fatal error: 'funcobject.h' file not found
```
This header is provided by CPython, and we are not looking for it in
CMake.
Moreover, the nature of this change is not very clear to me. It seems to
replace one include with two dozen, presumably because the code was only
using transitively included headers, but the value for readability is
dubious. LLVM is also not strictly following IWYU.
Enables reusing the AsmState when printing from Python. Also moves the
fileObject and binary arguments to the end (`pybind11::object` was
resulting in the overload not working unless `state=` was specified).
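A sketch of the reuse this enables; the `AsmState` constructor signature is an assumption:
```python
from mlir import ir

with ir.Context():
    module = ir.Module.parse("module {}")
    # Build the state once and reuse it across prints, instead of
    # re-deriving it from flags on every call.
    state = ir.AsmState(module.operation)
    module.operation.print(state=state)
```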
---------
Co-authored-by: Maksim Levental <maksim.levental@gmail.com>
The existing initialization sequence always enables multi-threading at
MLIRContext construction time, making it impractical to provide a
customized thread pool.
Here, this is changed to always create the context with threading
disabled, process all site-specific init hooks (which can set thread
pools), and ultimately enable multi-threading unless site-configured
not to do so.
This should preserve the existing user-visible initialization behavior
while also letting downstreams ensure that contexts are always created
with a shared thread pool. This was tested with IREE, which has such a
concept. Site-specific thread tuning yielded up to 2x faster single
compilation jobs, and let batch compilation (i.e., as part of a build
system) use half the memory and run the entire test suite ~2x faster.
Given this, I believe that the additional configurability can well pay
for itself for implementations that use it.
We may also want to present user-level Python APIs for controlling
threading configuration in the future.
This PR adds "value casting", i.e., a mechanism to wrap `ir.Value` in a
proxy class that overloads dunders such as `__add__`, `__sub__`, and
`__mul__` for fun and great profit.
This is thematically similar to bfb1ba7526 and 9566ee2806.
The example in the test demonstrates the value of the feature (no pun
intended):
```python
@register_value_caster(F16Type.static_typeid)
@register_value_caster(F32Type.static_typeid)
@register_value_caster(F64Type.static_typeid)
@register_value_caster(IntegerType.static_typeid)
class ArithValue(Value):
    __add__ = partialmethod(_binary_op, op="add")
    __sub__ = partialmethod(_binary_op, op="sub")
    __mul__ = partialmethod(_binary_op, op="mul")


a = arith.constant(value=FloatAttr.get(f16_t, 42.42))
b = a + a
# CHECK: ArithValue(%0 = arith.addf %cst, %cst : f16)
print(b)

a = arith.constant(value=FloatAttr.get(f32_t, 42.42))
b = a - a
# CHECK: ArithValue(%1 = arith.subf %cst_0, %cst_0 : f32)
print(b)

a = arith.constant(value=FloatAttr.get(f64_t, 42.42))
b = a * a
# CHECK: ArithValue(%2 = arith.mulf %cst_1, %cst_1 : f64)
print(b)
```
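The `_binary_op` helper is not shown in the snippet; a minimal, float-only sketch of such a helper (hypothetical, not the helper from the actual test) might look like:
```python
from mlir.dialects import arith

_FLOAT_OPS = {"add": arith.AddFOp, "sub": arith.SubFOp, "mul": arith.MulFOp}


def _binary_op(lhs, rhs, op):
    # Assumes an active Location context, as in the test above. The
    # OpResult returned here is itself auto-cast to ArithValue by the
    # registered value caster.
    return _FLOAT_OPS[op](lhs, rhs).result
```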
**EDIT**: this now goes through the bindings and thus supports automatic
casting of `OpResult` (including as an element of `OpResultList`),
`BlockArgument` (including as an element of `BlockArgumentList`), as
well as `Value`.
`PyOperations` are Python-level handles to `Operation *` instances. When
the latter are modified by C++, the former need to be invalidated.
#69746 implements such an invalidation mechanism by setting all
`PyReferences` to `invalid`. However, that is not enough: they also need
to be removed from the `liveOperations` map, since other parts of the
code (such as `PyOperation::createDetached`) assume that that map only
contains valid refs.
This is required to actually solve the issue in #69730.
Fixes https://github.com/llvm/llvm-project/issues/69730 (also see
https://reviews.llvm.org/D155543).
There are two things outstanding (why I didn't land before):
1. add some C API tests for `mlirOperationWalk`;
2. potentially refactor how the invalidation in `run` works; the first
version of the code looked like this:
```cpp
if (invalidateOps) {
  auto *context = op.getOperation().getContext().get();
  MlirOperationWalkCallback invalidatingCallback = [](MlirOperation op,
                                                      void *userData) {
    PyMlirContext *context = static_cast<PyMlirContext *>(userData);
    context->setOperationInvalid(op);
  };
  auto numRegions = mlirOperationGetNumRegions(op.getOperation().get());
  for (int i = 0; i < numRegions; ++i) {
    MlirRegion region = mlirOperationGetRegion(op.getOperation().get(), i);
    for (MlirBlock block = mlirRegionGetFirstBlock(region);
         !mlirBlockIsNull(block); block = mlirBlockGetNextInRegion(block))
      for (MlirOperation childOp = mlirBlockGetFirstOperation(block);
           !mlirOperationIsNull(childOp);
           childOp = mlirOperationGetNextInBlock(childOp))
        mlirOperationWalk(childOp, invalidatingCallback, context,
                          MlirWalkPostOrder);
  }
}
```
This is verbose and ugly but it has the important benefit of not
executing `mlirOperationEqual(rootOp->get(), op)` for every op
underneath the root op.
Supposing there's no desire for the slightly more efficient but highly
convoluted approach, I can land this "posthaste".
But, since we have eyes on this now, any suggestions or approaches (or
needs/concerns) are welcome.
The reason I want this is that I am writing my own Python bindings and
would like to use the insertion point from
`PyThreadContextEntry::getDefaultInsertionPoint()` to call C++ functions
that take an `OpBuilder` (I don't need to expose it in Python but it
also seems appropriate). AFAICT, there is currently no way to translate
a `PyInsertionPoint` into an `OpBuilder` because the operation is
inaccessible.
This does basic plumbing; ideally we want a context approach to reduce
the need to thread these manually, but the current state is useful even
so.
The `Value.get_name` change was made backwards compatible, so one can
either set a field or create a state to pass in.
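A sketch of the two styles this keeps working (keyword names assumed):
```python
from mlir import ir

with ir.Context():
    module = ir.Module.parse(
        r"""
        func.func @f(%arg0: i32) {
          return
        }
        """
    )
    value = module.body.operations[0].regions[0].blocks[0].arguments[0]
    # Backwards-compatible form: set a field directly.
    print(value.get_name(use_local_scope=True))
    # New form: create a state once and pass it in.
    state = ir.AsmState(value, use_local_scope=True)
    print(value.get_name(state=state))
```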
* Moves several orphaned methods from Operation/OpView -> _OperationBase
so that both hierarchies share them (whether unknown or known to ODS).
* Adds typing information for missing `MLIRError` exception.
* Adds `DiagnosticInfo` typing.
* Adds `DenseResourceElementsAttr` typing that was missing.
Enables usage where capturing AsmState is good (e.g., avoiding creating
AsmState over and over again when walking IR and printing).
This also only changes one C API to verify the plumbing. Using the
AsmState makes the cost more explicit than the flags interface (which
hides the traversals and construction) and also enables more efficient
usage on the C side.
This PR implements python enum bindings for *all* the enums - this includes `I*Attrs` (including positional/bit) and `Dialect/EnumAttr`.
There are a few parts to this:
1. CMake: a small addition to `declare_mlir_dialect_python_bindings` and `declare_mlir_dialect_extension_python_bindings` to generate the enum, a boolean arg `GEN_ENUM_BINDINGS` to make it opt-in (even though it works for basically all of the dialects), and an optional `GEN_ENUM_BINDINGS_TD_FILE` for handling corner cases.
2. EnumPythonBindingGen.cpp: there are two weedy aspects here that took investigation:
1. If an enum attribute is not a `Dialect/EnumAttr`, then the `EnumAttrInfo` record is canonical as far as both the cases of the enum **and the `AttrDefName`** are concerned. On the other hand, if an enum is a `Dialect/EnumAttr`, then the `EnumAttr` record has the correct `AttrDefName` ("load bearing", i.e., it populates `ods.ir.AttributeBuilder('<NAME>')`), but its `enum` field contains the cases, which is an instance of `EnumAttrInfo`. The solution is to generate a single enum class for both `Dialect/EnumAttr` and "independent" `EnumAttrInfo`, but to make that class interoperable with two builder registrations that both do the right thing (see the next sub-bullet).
2. Because we don't have a good connection to the C++ `EnumAttr`, i.e., only the `enum class` getters are exposed (like `DimensionAttr::get(Dimension value)`), we have to resort to parsing, e.g., `Attribute.parse(f'#gpu<dim {x}>')` (see the sketch after this list). This means that the set of supported `assemblyFormat`s (for the enum) is fixed at MLIR compile time (currently 2, the only 2 I saw). There might be some things that could be done here, but they would require quite a bit more C API work to support generically (e.g., casting ints to enum cases and binding all the getters, or going generically through the `symbolize*` methods, like `symbolizeDimension(uint32_t)` or `symbolizeDimension(StringRef)`).
A few small changes:
1. In addition, since this patch registers default builders for attributes where people might've had their own builders already written, I added a `replace` param to `AttributeBuilder.insert` (`False` by default).
2. `makePythonEnumCaseName` can't handle all the different ways in which people write their enum cases, e.g., `llvm.CConv.Intel_OCL_BI`, which gets turned into `INTEL_O_C_L_B_I` (because `llvm::convertToSnakeFromCamelCase` doesn't look for runs of caps). So I dropped it. On the other hand, some regularization does need to be done, because some enums have `None` as a case (and others might have other Python keywords).
3. I turned on `llvm` dialect generation here in order to test `nvvm.WGMMAScaleIn`, which is an enum with no explicit discriminator for the `neg` case (see `mlir/include/mlir/IR/EnumAttr.td`, lines 22-25, at d7e26b5620).
Note, dialects that didn't get a `GEN_ENUM_BINDINGS` don't have any enums to generate.
Let me know if I should add more tests (the three trivial ones I added exercise both the supported `assemblyFormat`s and `replace=True`).
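To make the parsing approach in sub-bullet 2 concrete, a generated enum-attribute builder effectively formats the case into the attribute's assembly format and parses it back; roughly (a sketch, not the exact generated code):
```python
from mlir import ir

with ir.Context():
    # What a generated builder for gpu's Dimension enum boils down to:
    # format the case into the attribute syntax, then parse it.
    attr = ir.Attribute.parse("#gpu<dim x>")
    print(attr)
```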
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D157934
This renaming started with the native ODS support for properties; this
completes it.
A mass automated textual rename seems safe for most codebases.
Also drop the ods prefix to keep the accessors the same as they were
before this change:
properties.odsOperandSegmentSizes
reverts back to:
properties.operandSegmentSizes
The ODS prefix was creating divergence between all the places and made
it harder to be consistent.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D157173
This reduces the code generated for type inference and instead reuses
facilities on the C API side that perform the same role.
Differential Revision: https://reviews.llvm.org/D156041
Update remaining `PyAttribute`-returning APIs to return `MlirAttribute` instead,
so that they go through the downcasting mechanism.
Reviewed By: makslevental
Differential Revision: https://reviews.llvm.org/D154462
The bytecode writer config was heap-allocated, but was never freed, causing ASAN errors.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D153440
depends on D150839
This diff uses `MlirTypeID` to register `TypeCaster`s (i.e., `[](PyType pyType) -> DerivedTy { return pyType; }`) for all concrete types (i.e., `PyConcrete<...>`) that are then queried for (by `MlirTypeID`) and called in `struct type_caster<MlirType>::cast`. The result is that anywhere an `MlirType mlirType` is returned from a python binding, that `mlirType` is automatically cast to the correct concrete type. For example:
```python
c0 = arith.ConstantOp(f32, 0.0)
# CHECK: F32Type(f32)
print(repr(c0.result.type))
unranked_tensor_type = UnrankedTensorType.get(f32)
unranked_tensor = tensor.FromElementsOp(unranked_tensor_type, [c0]).result
# CHECK: UnrankedTensorType
print(type(unranked_tensor.type).__name__)
# CHECK: UnrankedTensorType(tensor<*xf32>)
print(repr(unranked_tensor.type))
```
This functionality immediately extends to typed attributes (i.e., `attr.type`).
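For example:
```python
from mlir import ir

with ir.Context():
    f = ir.FloatAttr.get(ir.F32Type.get(), 1.0)
    # The attribute's .type comes back as the concrete type subclass.
    print(repr(f.type))  # F32Type(f32)
```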
The diff also implements similar functionality for `mlir_type_subclass`es, but in a slightly different way: for such types (which have no corresponding C++ `class` or `struct`) the user must provide a type caster in Python (similar to how `AttrBuilder` works) or in C++ as a `py::cpp_function`.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D150927
This fixes a -Wunused-member-function warning, at the moment
`PyRegionIterator` is never constructed by anything (the only use was
removed in D111697), and iterating over region lists is just falling
back to a generic python iterator object.
Reviewed By: stellaraccident
Differential Revision: https://reviews.llvm.org/D150244
Currently blocks are always created with UnknownLoc's for their arguments. This
adds an `arg_locs` argument to all block creation APIs, which takes an optional
sequence of locations to use, one per block argument. If no locations are
supplied, the current Location context is used.
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D150084
This diff adds python bindings for `MlirTypeID`. It paves the way for returning accurately typed `Type`s from python APIs (see D150927) and then further along building type "conscious" `Value` APIs (see D150413).
Reviewed By: ftynse
Differential Revision: https://reviews.llvm.org/D150839
We can't return well-formed IR output while enabling the version to be
bumped up during emission. Previously it would return the minimum
version but potentially invalid IR, which was confusing; instead, make
it return an error and abort immediately.
Differential Revision: https://reviews.llvm.org/D149569
Add a method to set a desired bytecode file format to generate. Change
the write method to be able to return a status, including the minimum
bytecode version needed by the reader. This enables generating an older
version of the bytecode (not dialect ops, attributes, or types). But
this does not guarantee that an older version can always be generated,
e.g., if a dialect uses a new encoding only available at a later
bytecode version.
This clamps the setting to at most the current version.
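On the Python side, this surfaces as an optional desired version when writing bytecode; a sketch, assuming the keyword is spelled `desired_version`:
```python
import io

from mlir import ir

with ir.Context():
    module = ir.Module.parse("module {}")
    buf = io.BytesIO()
    # Request an older bytecode version; the setting is clamped to at
    # most the current version.
    module.operation.write_bytecode(buf, desired_version=1)
    print(len(buf.getvalue()), "bytes written")
```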
Differential Revision: https://reviews.llvm.org/D146555