* Don't call raw_string_ostream::flush(), which is essentially a no-op
* Strip unneeded calls to raw_string_ostream::str(), to avoid excess indirection.
It was common to see `value.getValue().empty()` or `value.size()`
instead of the idiomatic `value.empty()`.
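For illustration only, a minimal sketch (not taken from the patch) of the pattern being cleaned up, assuming an unbuffered raw_string_ostream as described above:
```
#include "llvm/Support/raw_ostream.h"
#include <string>

bool demo() {
  std::string buffer;
  llvm::raw_string_ostream os(buffer);
  os << "hello";
  os.flush();               // essentially a no-op: the stream writes directly into `buffer`
  return os.str().empty();  // extra indirection; `buffer.empty()` says the same thing
}
```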
Depends on D159455
Differential Revision: https://reviews.llvm.org/D159456
When building MLIR using bazel on windows with MSVC2019, bool splats
were being created incorrectly:
```
dense<[true,true,true,true]> : tensor<4xi1>
-(parse with mlir-opt)-> dense<[true, false, false, false]> : tensor<4xi1>
```
It appears that a Windows bazel build produces a corrupt DenseIntOrFPElementsAttr.
I was unable to reproduce the issue using MSVC and cmake.
Issue first discovered here:
https://github.com/google/jax/issues/16394
Added a test case to reproduce the issue:
```
$ bazel test @llvm-project//mlir/unittests:ir_tests --test_arg=--gtest_filter=DenseSplatTest.BoolSplatSmall
```
Differential Revision: https://reviews.llvm.org/D155745
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
infrastructure, in addition to defining methods with the same names.
This change begins migrating uses of the methods to the corresponding
free function calls, which has been decided on as the more consistent style.
Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.
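For illustration only (not part of the patch), the style change looks roughly like this; `attr` is a hypothetical placeholder value:
```
Attribute attr = ...;

// Old, method-style casting:
if (attr.isa<IntegerAttr>())
  auto intAttr = attr.cast<IntegerAttr>();

// New, preferred free-function style:
if (isa<IntegerAttr>(attr))
  auto intAttr = cast<IntegerAttr>(attr);
```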
Caveats include:
- This clang-tidy script probably has more problems.
- This only touches C++ code, so nothing that is being generated.
Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443
Implementation:
This first patch was created with the following steps. The intention is
to only do automated changes at first, so less time is wasted if it's
reverted, and so the first mass change is a clearer example to
other teams that will need to follow similar steps.
The steps below correspond to the commands listed at the end, one per line,
since git strips comments from commit messages:
0. Retrieve the change from the following to build clang-tidy with an
additional check:
https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
them to a pure state.
4. Some changes have been deleted for the following reasons:
- Some files had a variable also named cast
- Some files had not included a header file that defines the cast
functions
- Some files are definitions of the classes that have the casting
methods, so without adding a namespace prefix or removing the method
declaration at the same time, the call would still resolve to the method
instead of the free function.
```
ninja -C $BUILD_DIR clang-tidy
run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
-header-filter=mlir/ mlir/* -fix
rm -rf $BUILD_DIR/tools/mlir/**/*.inc
git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp\
mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp\
mlir/lib/**/IR/\
mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp\
mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp\
mlir/test/lib/Dialect/Test/TestTypes.cpp\
mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp\
mlir/test/lib/Dialect/Test/TestAttributes.cpp\
mlir/unittests/TableGen/EnumsGenTest.cpp\
mlir/test/python/lib/PythonTestCAPI.cpp\
mlir/include/mlir/IR/
```
Differential Revision: https://reviews.llvm.org/D150123
This commit restructures the sub element infrastructure to be a core part
of attributes and types, instead of being relegated to an interface. This
establishes sub element walking/replacement as something "always there",
which makes it easier to rely on for correctness/etc (which various bits of
infrastructure want, such as Symbols).
Attribute/Type now have `walk` and `replace` methods directly
accessible, which provide a powerful API for interacting with sub elements. As
part of this, a new AttrTypeWalker class is introduced that supports caching
walked attributes/types and offers a friendlier API (see the simplification of
symbol walking in SymbolTable.cpp).
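As a rough sketch of the new surface area (the callback signatures here are an assumption based on the description above, not copied from the patch):
```
Attribute attr = ...;

// Walk all nested sub-attributes/types reachable from `attr`.
attr.walk([&](Type nestedType) {
  llvm::errs() << "visited nested type: " << nestedType << "\n";
});

// Build a new attribute with nested i32 types swapped for i64.
Attribute newAttr = attr.replace([&](Type nestedType) -> Type {
  if (nestedType.isInteger(32))
    return IntegerType::get(nestedType.getContext(), 64);
  return nestedType;
});
```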
Differential Revision: https://reviews.llvm.org/D142272
Boolean splats currently can't roundtrip via the "raw" DenseElementsAttr
API. This is because internally we treat true splats in some cases as "1" (one bit set)
and in other cases as "0xFF" (all bits set). This commit cleans up this handling to
consistently use 0xFF (all bits set) as the value for a splat of true.
Differential Revision: https://reviews.llvm.org/D133743
Splat of bool is encoded as a byte with all bits set [1]. Without this
change, this piece of code:
```
auto xs = builder.getI32TensorAttr({42, 42, 42, 42});
auto xs2 = xs.mapValues(builder.getI1Type(), [](const llvm::APInt &x) {
  return x.isZero() ? llvm::APInt::getZero(1) : llvm::APInt::getAllOnes(1);
});
xs2.dump();
```
prints
```
dense<[true, false, false, false]> : tensor<4xi1>
```
because only the first bit is set. This applies to both
DenseIntElementsAttr::mapValues() and DenseFPElementsAttr::mapValues().
[1]: e877b42e2c/mlir/lib/IR/BuiltinAttributes.cpp (L984)
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D132767
The `IR/AttributeTest.cpp` test fails to compile on Solaris:
```
/vol/llvm/src/llvm-project/local/mlir/unittests/IR/AttributeTest.cpp:223:36: error: no matching function for call to 'allocate'
AttrT::get(type, "resource", UnmanagedAsmResourceBlob::allocate(data));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/vol/llvm/src/llvm-project/local/mlir/unittests/IR/AttributeTest.cpp:237:3: note: in instantiation of function template specialization 'checkNativeAccess<mlir::detail::DenseResourceElementsAttrBase<int8_t>, char>' requested here
checkNativeAccess<AttrT, T>(builder.getContext(), llvm::makeArrayRef(data),
^
/vol/llvm/src/llvm-project/local/mlir/unittests/IR/AttributeTest.cpp:258:3: note: in instantiation of function template specialization 'checkNativeIntAccess<mlir::detail::DenseResourceElementsAttrBase<int8_t>, char>' requested here
checkNativeIntAccess<DenseI8ResourceElementsAttr, int8_t>(builder, 8);
^
/vol/llvm/src/llvm-project/local/mlir/include/mlir/IR/AsmState.h:221:3: note: candidate template ignored: requirement '!std::is_same<char, char>::value' was not satisfied [with T = char]
allocate(ArrayRef<T> data, bool dataIsMutable = false) {
^
/vol/llvm/src/llvm-project/local/mlir/include/mlir/IR/AsmState.h:214:26: note: candidate function not viable: requires at least 2 arguments, but 1 was provided
static AsmResourceBlob allocate(ArrayRef<char> data, size_t align,
^
```
This happens because `char` is `signed` by default on Solaris and `int8_t` is
defined as `char`, so `std::is_same<int8_t, char>` is `true`, unlike on other
platforms, which rejects the one-arg `allocate` overload.
Fixed by renaming the two overloads to avoid the ambiguity.
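For reference, a minimal sketch (not part of the patch) of the type trait driving the failure:
```
#include <cstdint>
#include <type_traits>

// On most targets int8_t is `signed char`, a distinct type from `char`, so
// this assertion holds; on Solaris int8_t is plain `char`, the trait is true,
// and the templated allocate() overload guarded by !std::is_same<T, char>
// drops out of overload resolution.
static_assert(!std::is_same<int8_t, char>::value,
              "int8_t aliases char on this target (e.g. Solaris)");
```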
Tested on `amd64-pc-solaris2.11`, `sparcv9-sun-solaris2.11`, and
`x86_64-pc-linux-gnu`.
Differential Revision: https://reviews.llvm.org/D131148
This reverts commit 07aaa35f74d845a20d48e644671dce150ebf7748.
The reverted change breaks the Windows bot, and while its fix addressed the
`char`/`int8_t` case, it does not make sense for other cases like
`float`.
The `IR/AttributeTest.cpp` test fails to compile on Solaris:
```
/vol/llvm/src/llvm-project/local/mlir/unittests/IR/AttributeTest.cpp:223:36: error: no matching function for call to 'allocate'
AttrT::get(type, "resource", UnmanagedAsmResourceBlob::allocate(data));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/vol/llvm/src/llvm-project/local/mlir/unittests/IR/AttributeTest.cpp:237:3: note: in instantiation of function template specialization 'checkNativeAccess<mlir::detail::DenseResourceElementsAttrBase<int8_t>, char>' requested here
checkNativeAccess<AttrT, T>(builder.getContext(), llvm::makeArrayRef(data),
^
/vol/llvm/src/llvm-project/local/mlir/unittests/IR/AttributeTest.cpp:258:3: note: in instantiation of function template specialization 'checkNativeIntAccess<mlir::detail::DenseResourceElementsAttrBase<int8_t>, char>' requested here
checkNativeIntAccess<DenseI8ResourceElementsAttr, int8_t>(builder, 8);
^
/vol/llvm/src/llvm-project/local/mlir/include/mlir/IR/AsmState.h:221:3: note: candidate template ignored: requirement '!std::is_same<char, char>::value' was not satisfied [with T = char]
allocate(ArrayRef<T> data, bool dataIsMutable = false) {
^
/vol/llvm/src/llvm-project/local/mlir/include/mlir/IR/AsmState.h:214:26: note: candidate function not viable: requires at least 2 arguments, but 1 was provided
static AsmResourceBlob allocate(ArrayRef<char> data, size_t align,
^
```
I suspect this happens because `char` is `signed` by default on Solaris.
Tested on `amd64-pc-solaris2.11` and `sparcv9-sun-solaris2.11`.
Differential Revision: https://reviews.llvm.org/D131148
This attribute is intended to cover the current set of use cases that abuse
DenseElementsAttr, e.g. when the data is large. Using resources for large
data is one of the major reasons why they were added; e.g. they can be
deallocated mid-compilation, they support a wide variety of data origins
(e.g., heap allocated, mmap'd, etc.), they can support mutation, etc.
I considered at length not having a builtin variant of this, and instead
having multiple versions of this attribute for dialects that are interested,
but they all boiled down to the exact same attribute definition. Given the
generality of this attribute, it feels more aligned to keep it next to DenseArrayAttr
(given that DenseArrayAttr covers the "small" case, and DenseResourceElementsAttr
covers the "large" case). The underlying infra used to build this attribute is
general, and having a builtin attribute doesn't preclude users from defining
their own when it makes sense (they can even share a blob manager with the
builtin dialect to avoid data duplication).
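As a rough sketch only (modeled on the AttributeTest.cpp snippet quoted earlier in this log; the exact blob-allocation entry points are an assumption and have changed over time), constructing one of these attributes looks roughly like:
```
std::vector<int8_t> data = {0, 1, 2, 3};
auto type = RankedTensorType::get({4}, IntegerType::get(context, 8));
// Wrap the externally owned buffer in a resource blob and attach it under a
// hypothetical resource key ("my_resource").
auto blob = UnmanagedAsmResourceBlob::allocate(llvm::makeArrayRef(data));
auto attr =
    DenseI8ResourceElementsAttr::get(type, "my_resource", std::move(blob));
```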
Differential Revision: https://reviews.llvm.org/D130022
This patch removes the `type` field from `Attribute` along with the
`Attribute::getType` accessor.
Going forward, this means that attributes in MLIR will no longer have
types as a first-class concept. This patch lays the groundwork to
incrementally remove or refactor code that relies on generic attributes
being typed. The immediate impact will be on attributes that rely on
`Attribute` containing a type, such as `IntegerAttr`,
`DenseElementsAttr`, and `ml_program::ExternAttr`, which will now need
to define a type parameter on their storage classes. This will save
memory as all other attribute kinds will no longer contain a type.
Moreover, it will not be possible to generically query the type of an
attribute directly. This patch provides an attribute interface
`TypedAttr` that implements only one method, `getType`, which can be
used to generically query the types of attributes that implement the
interface. This interface can be used to retain the concept of a "typed
attribute". The ODS-generated accessor for a `type` parameter
automatically implements this method.
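For example (a minimal sketch; `attr` is a placeholder), generic code that needs a type now goes through the interface:
```
Attribute attr = ...;
if (auto typedAttr = dyn_cast<TypedAttr>(attr))
  Type type = typedAttr.getType();
```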
Next steps will be to refactor the assembly formats of certain operations
that rely on `parseAttribute(type)` and `printAttributeWithoutType` to
remove special handling of type elision until `type` can be removed from
the dialect parsing hook entirely; and incrementally remove uses of
`TypedAttr`.
Reviewed By: lattner, rriddle, jpienaar
Differential Revision: https://reviews.llvm.org/D130092
There are several aspects of the API that either aren't easy to use or make it
deceptively easy to do the wrong thing. The main change of this commit
is to remove all of the `getValue<T>`/`getFlatValue<T>` from ElementsAttr
and instead provide operator[] methods on the ranges returned by
`getValues<T>`. This provides a much more convenient API for the value
ranges. It also removes the easy-to-be-inefficient nature of
getValue/getFlatValue, which under the hood would construct a new range for
the type `T`. Constructing a range is not necessarily cheap in all cases, and
could lead to very poor performance if used within a loop; i.e. if you were to
naively write something like:
```
DenseElementsAttr attr = ...;
for (int i = 0; i < size; ++i) {
  // We are internally rebuilding the APFloat value range on each iteration!!
  APFloat it = attr.getFlatValue<APFloat>(i);
}
```
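For comparison, a brief sketch of the preferred pattern after this change: materialize the typed range once, then index or iterate it.
```
DenseElementsAttr attr = ...;
auto floatValues = attr.getValues<APFloat>();
for (int i = 0; i < size; ++i) {
  // operator[] on the range; nothing is rebuilt per iteration.
  APFloat value = floatValues[i];
}
```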
Differential Revision: https://reviews.llvm.org/D113229
This also exposed a bug in Dialect loading where it was not correctly identifying identifiers that had the dialect namespace as a prefix.
Differential Revision: https://reviews.llvm.org/D97431
Update ElementsAttr::isValidIndex to handle an ElementsAttr holding a scalar, which has rank 0.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D95663
This better matches the rest of the infrastructure, is much simpler, and makes it easier to move these types to being declaratively specified.
Differential Revision: https://reviews.llvm.org/D93432
This is part of a larger refactoring that better congregates the builtin structures under the BuiltinDialect. This also removes the problematic "standard" naming that clashes with the "standard" dialect, which is not defined within IR/. A temporary forward is placed in StandardTypes.h to allow time for downstream users to replace references.
Differential Revision: https://reviews.llvm.org/D92435
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead, Dialects are only loaded explicitly
on demand:
- the Parser lazily loads Dialects into the context as it encounters them
during parsing. This is the only purpose of registering dialects without loading
them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager loads Dialects into
the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialects for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example, in the Toy tutorial
the compiler only needs to load the Toy dialect in the Context; all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
virtual void getDependentDialects(DialectRegistry &registry) const {}
and register on the provided registry any dialect that this pass can produce
(see the sketch after this list).
Passes defined in TableGen can provide this list in the dependentDialects list
field.
2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, dialects that can appear in the input file must be explicitly
registered with the context. `MlirOptMain()` takes an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
mlir::DialectRegistry registry;
registry.insert<mlir::standalone::StandaloneDialect>();
registry.insert<mlir::StandardOpsDialect>();
Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded into
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
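The sketch referenced in item 1): a pass declaring its dependent dialects (the pass and dialect names below are hypothetical, not from this patch).
```
struct MyLoweringPass
    : public PassWrapper<MyLoweringPass, OperationPass<ModuleOp>> {
  // Declare every dialect this pass may create entities from.
  void getDependentDialects(DialectRegistry &registry) const override {
    registry.insert<LLVM::LLVMDialect>();
  }
  void runOnOperation() override { /* ... */ }
};
```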
Differential Revision: https://reviews.llvm.org/D85622
This changes the behavior of constructing MLIRContext to no longer load globally
registered dialects on construction. Instead, Dialects are only loaded explicitly
on demand:
- the Parser lazily loads Dialects into the context as it encounters them
during parsing. This is the only purpose of registering dialects without loading
them in the context.
- Passes are expected to declare the dialects they will create entities from
(Operations, Attributes, or Types), and the PassManager loads Dialects into
the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only
needs to load the dialects for the IR it will emit, and the optimizer is
self-contained and loads the required Dialects. For example, in the Toy tutorial
the compiler only needs to load the Toy dialect in the Context; all the others
(linalg, affine, std, LLVM, ...) are automatically loaded depending on the
optimization pipeline enabled.
To adjust to this change, stop using the existing dialect registration: the
global registry will be removed soon.
1) For passes, you need to override the method:
virtual void getDependentDialects(DialectRegistry &registry) const {}
and register on the provided registry any dialect that this pass can produce.
Passes defined in TableGen can provide this list in the dependentDialects list
field.
2) For dialects, on construction you can register dependent dialects using the
provided MLIRContext: `context.getOrLoadDialect<DialectName>()`
This is useful if a dialect may canonicalize or have interfaces involving
another dialect.
3) For loading IR, dialects that can appear in the input file must be explicitly
registered with the context. `MlirOptMain()` takes an explicit registry for
this purpose. See how the standalone-opt.cpp example is set up:
mlir::DialectRegistry registry;
mlir::registerDialect<mlir::standalone::StandaloneDialect>();
mlir::registerDialect<mlir::StandardOpsDialect>();
Only operations from these two dialects can be in the input file. To include all
of the dialects in MLIR Core, you can populate the registry this way:
mlir::registerAllDialects(registry);
4) For `mlir-translate` callbacks, as well as frontends, Dialects can be loaded into
the context before emitting the IR: context.getOrLoadDialect<ToyDialect>()
This changes the behavior of constructing MLIRContext to no longer load globally registered dialects on construction. Instead, Dialects are only loaded explicitly on demand:
- the Parser lazily loads Dialects into the context as it encounters them during parsing. This is the only purpose of registering dialects without loading them in the context.
- Passes are expected to declare the dialects they will create entities from (Operations, Attributes, or Types), and the PassManager loads Dialects into the Context when starting a pipeline.
This change simplifies the configuration of the registration: a compiler only needs to load the dialects for the IR it will emit, and the optimizer is self-contained and loads the required Dialects. For example, in the Toy tutorial the compiler only needs to load the Toy dialect in the Context; all the others (linalg, affine, std, LLVM, ...) are automatically loaded depending on the optimization pipeline enabled.
Differential Revision: https://reviews.llvm.org/D85622
This patch is a follow-up on https://reviews.llvm.org/D81127
BF16 constants were represented as 64-bit floating point values due to the lack
of support for BF16 in APFloat. APFloat was recently extended to support
BF16, so this patch fixes the BF16 constant representation to be 16-bit.
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D81218
This revision allows for creating DenseElementsAttrs and accessing elements using std::complex<APInt>/std::complex<APFloat>. This allows for opaquely accessing and transforming complex values. This is used by the printer/parser to provide pretty printing for complex values. The form for complex values matches that of std::complex, i.e.:
```
// `(` element `,` element `)`
dense<(10,10)> : tensor<complex<i64>>
```
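As a rough sketch (the exact creation and access overloads used here are an assumption, not copied from the patch), working with complex elements looks roughly like:
```
auto elementType = ComplexType::get(IntegerType::get(context, 64));
auto tensorType = RankedTensorType::get({}, elementType);
// A rank-0 tensor holding the single value (10, 10).
std::complex<APInt> value(APInt(64, 10), APInt(64, 10));
auto attr = DenseElementsAttr::get(tensorType, llvm::makeArrayRef(value));
for (std::complex<APInt> element : attr.getValues<std::complex<APInt>>())
  (void)element;
```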
Differential Revision: https://reviews.llvm.org/D79296
This revision adds support for storing ComplexType elements inside of a DenseElementsAttr. We store complex objects as an array of two elements, matching the definition of std::complex. There is no current attribute storage for ComplexType, but DenseElementsAttr provides API for access/creation using std::complex<>. Given that the internal implementation of DenseElementsAttr is already fairly opaque, the only real complexity here is in the printing/parsing. This revision keeps it simple for now and always uses hex when printing complex elements. A followup will add prettier syntax for this.
Differential Revision: https://reviews.llvm.org/D79281
Some data values have a different storage width than the corresponding MLIR type, e.g. bfloat is currently stored as a double.
Reviewed By: nicolasvasilache
Differential Revision: https://reviews.llvm.org/D72478