Since `resultTy` might be nullptr, we should use `dyn_cast` instead of
`cast`. Additionally, `typeConverter->convertType<T>` is more
appropriate in this context.
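A minimal sketch of the resulting pattern, assuming a typical conversion-pattern context (`op`, `rewriter`, and `typeConverter` are hypothetical locals, not the actual code touched by this commit):

```cpp
// convertType<T> returns a null T when conversion fails or the converted
// type is not a T, so the result must be checked rather than asserted via cast.
auto resultTy = typeConverter->convertType<RankedTensorType>(op.getType());
if (!resultTy)
  return rewriter.notifyMatchFailure(op, "could not convert result type");
```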
This commit fixes two occurrences where an erased op was accessed at a
later point in time. That won't work anymore in a One-Shot Dialect
Conversion and triggers a use-after-free sanitizer error.
After the One-Shot Dialect Conversion refactoring, a
`ConversionPatternRewriter` will behave more like a normal
`PatternRewriter`.
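A hedged illustration of the access-order bug being fixed (`op`, `rewriter`, and `newValues` are hypothetical):

```cpp
// Read everything needed from the op *before* it is erased.
Location loc = op->getLoc();
rewriter.replaceOp(op, newValues); // 'op' is scheduled for erasure here.
// Under a One-Shot Dialect Conversion, any access to 'op' past this point
// is a use-after-free.
```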
---------
Co-authored-by: Markus Böck <markus.boeck02@gmail.com>
The motivation is to avoid having to negate `isDynamic*` checks, avoid
double negations, and allow for `ShapedType::isStaticDim` to be used in
ADT functions without having to wrap it in a lambda performing the
negation.
Also add the new functions to C and Python bindings.
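A minimal sketch of the intended usage, assuming `type` is a `ShapedType` (the exact set of helpers added here may differ slightly):

```cpp
// Before: double negation, or a lambda wrapping the negated dynamic check.
bool allStatic = !llvm::any_of(type.getShape(), ShapedType::isDynamic);
// After: the static counterpart can be passed to ADT helpers directly.
bool allStaticNew = llvm::all_of(type.getShape(), ShapedType::isStatic);
```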
These are identified by misc-include-cleaner. I've filtered out those
that break builds. Also, I'm staying away from llvm-config.h,
config.h, and Compiler.h, which likely cause platform- or
compiler-specific build failures.
Changed the indexing used in the extractOp from one that is intended for
0d tensors to one that is intended for 1d tensors.
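A hedged sketch of the difference (`rewriter`, `loc`, and the tensor values are hypothetical): a 1-d extract takes an index operand, while a 0-d extract takes none.

```cpp
Value zero = rewriter.create<arith::ConstantIndexOp>(loc, 0);
// 0-d form: tensor.extract %t[] : tensor<f32>
Value v0 = rewriter.create<tensor::ExtractOp>(loc, scalarTensor, ValueRange{});
// 1-d form: tensor.extract %t[%c0] : tensor<1xf32>
Value v1 = rewriter.create<tensor::ExtractOp>(loc, vecTensor, ValueRange{zero});
```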
---------
Co-authored-by: Shay Kleiman <shay.kleiman@mobileye.com>
Updated the dialect to match TOSA v1.0 specification for ConstOp and
ConstShapeOp (https://www.mlplatform.org/tosa/tosa_spec.html#_const).
Also updated the lit tests.
---------
Signed-off-by: Jerry Ge <jerry.ge@arm.com>
Always generate pad_const and remove the input_zp attr for PadOp.
- Co-authored-by: Udaya Ranga <udaya.ranga@arm.com>
- Co-authored-by: Tai Ly <tai.ly@arm.com>
Signed-off-by: Jerry Ge <jerry.ge@arm.com>
`let constructor` is deprecated since the TableGen backend emits most
of the glue logic to build a pass. This PR retires the `let constructor`
method for most passes in the Conversion directory (another sweep is
needed for the rest).
The shape operand is changed to the shape type, as required since v1.0.
Change-Id: I508cc1d67e9b017048b3f29fecf202cb7d707110
Co-authored-by: Won Jeon <won.jeon@arm.com>
Removed the TOSA quantization attribute used in various MLIR TOSA
dialect operations in favour of using builtin attributes.
Updated lit tests, conversions, and transformations accordingly.
Signed-off-by: Tai Ly <tai.ly@arm.com>
Co-authored-by: Tai Ly <tai.ly@arm.com>
Update to use getConstShapeValue to collect shape information along the
graph.
Change-Id: Ic6fc2341e3bcfbec06a1d08986e26dd08573bd9c
Co-authored-by: TatWai Chong <tatwai.chong@arm.com>
This patch changes PadOp's padding input to type !tosa.shape<2 * rank>
(where rank is the rank of the PadOp's input), instead of a <rank x 2>
tensor.
This patch is also a part of TOSA v1.0 effort:
https://discourse.llvm.org/t/rfc-tosa-dialect-increment-to-v1-0/83708
This patch updates the PadOp to match the TOSA v1.0 form.
Original Authors include:
@Tai78641
@wonjeon
Co-authored-by: Tai Ly <tai.ly@arm.com>
This PR adds a missing verifier for `tosa.pad`, ensuring that the
padding shape matches [2*rank(shape1)] according to the v1.0.0
specification. Fixes #119840.
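A hedged sketch of the shape relationship the verifier enforces (illustrative helper, not the exact upstream code):

```cpp
// The padding operand must hold two values (low/high) per input dimension.
LogicalResult verifyPaddingShape(RankedTensorType inputTy,
                                 int64_t numPaddingValues) {
  if (numPaddingValues != 2 * inputTy.getRank())
    return failure();
  return success();
}
```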
This commit marks the type converter in `populate...` functions as
`const`. This is useful for debugging.
Patterns already take a `const` type converter. However, some
`populate...` functions do not only add new patterns, but also add
additional type conversion rules. That makes it difficult to find the
place where a type conversion was added in the code base. With this
change, all `populate...` functions that only populate patterns now have
a `const` type converter. Programmers can then conclude from the
function signature that these functions do not register any new type
conversion rules.
Also some minor cleanups around the 1:N dialect conversion
infrastructure, which did not always pass the type converter as a
`const` object internally.
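A sketch of the resulting convention (`FooOpConversion` and the function name are hypothetical):

```cpp
// Pattern-only populate functions now take the converter as const, so the
// signature alone shows that no type conversion rules are registered here.
void populateFooToBarConversionPatterns(const TypeConverter &typeConverter,
                                        RewritePatternSet &patterns) {
  patterns.add<FooOpConversion>(typeConverter, patterns.getContext());
}
```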
This adds
- `mlir::tosa::populateTosaToLinalgTypeConversion` which converts
tensors of unsigned integers into tensors of signless integers
- modifies the `tosa.reshape` lowering in TosaToTensor to use the type
converter correctly
I chose to implement the type converter in
`mlir/Conversion/TosaToLinalg/TosaToLinalg.h` instead of
`mlir/Conversion/TosaToTensor/TosaToTensor.h` because I need the same
type converter in the TosaToLinalg lowerings (future PR).
Alternatively, I could duplicate the type converter so it exists both in
TosaToLinalg and TosaToTensor. Let me know if you prefer that.
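A hedged sketch of what such a conversion rule could look like (assumed shape, not necessarily the exact implementation in this PR):

```cpp
typeConverter.addConversion([](RankedTensorType type) -> Type {
  auto intTy = dyn_cast<IntegerType>(type.getElementType());
  if (!intTy || !intTy.isUnsigned())
    return type; // Leave everything else unchanged.
  // Rebuild the tensor with a signless integer of the same width.
  return type.clone(IntegerType::get(type.getContext(), intTy.getWidth()));
});
```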
GCC 12 and 13 generate incorrect code for a pattern in the
tosa-to-tensor pass responsible for lowering tosa.reshape. This results
in the tosa.reshape lowering producing IR which fails to verify. I've
narrowed down the set of cmake flags needed to reproduce the issue to
this:
cmake -G Ninja ../llvm \
-DLLVM_ENABLE_PROJECTS="mlir" \
-DLLVM_TARGETS_TO_BUILD=host \
-DLLVM_ENABLE_PROJECTS=mlir \
-DCMAKE_BUILD_TYPE="Release" \
-DCMAKE_CXX_FLAGS_RELEASE="-O2" \
-DCMAKE_CXX_FLAGS="-O2" \
-DCMAKE_CXX_COMPILER=g++ \
-DCMAKE_C_COMPILER=gcc
This is the failing test case:
func.func @fails_in_gcc_12(%arg0: tensor<?xf32>) -> tensor<1x1x1x?xf32> {
  %0 = tosa.reshape %arg0 {new_shape = array<i64: 1, 1, 1, -1>} : (tensor<?xf32>) -> tensor<1x1x1x?xf32>
  return %0 : tensor<1x1x1x?xf32>
}
This should lower to a tensor.expand_shape operation like so:
func.func @foo(%arg0: tensor<?xf32>) -> tensor<1x1x1x?xf32> {
  %c0 = arith.constant 0 : index
  %dim = tensor.dim %arg0, %c0 : tensor<?xf32>
  %c1 = arith.constant 1 : index
  %expanded = tensor.expand_shape %arg0 [[0, 1, 2, 3]] output_shape [1, 1, 1, %dim] : tensor<?xf32> into tensor<1x1x1x?xf32>
  return %expanded : tensor<1x1x1x?xf32>
}
Under GCC 12/13 with the above cmake configuration, the
tensor.expand_shape looks like this:
%2 = "tensor.expand_shape"(%arg0) <{reassociation = [[0, 1, 2, 3]], static_output_shape = array<i64>}> : (tensor<?xf32>) -> tensor<?x1x1x?xf32>
The key difference is the computed output type of `tensor<?x1x1x?xf32>`
rather than the expected `tensor<1x1x1x?xf32>`. This expand_shape fails
to verify with this error message:
error: 'tensor.expand_shape' op expected number of static shape dims to be equal to the output rank (4) but found 0 inputs instead
The problematic code is calculating the intermediate shape of the
generated tensor.expand_shape operation in the
expand_shape/collapse_shape sequence that implements tosa.reshape.
// Compute result shape
bool resultIsStatic = true;
auto resultShape = llvm::map_to_vector(newShape, [&](int64_t size) {
  // Omitted

  // If we do not know the total size of the tensor, keep this dimension
  // dynamic in the result shape.
  if (!inputIsStatic) {
    resultIsStatic = false;
    return ShapedType::kDynamic;
  }
});

if (resultIsStatic) {
  // do something
  return;
}

// do something else
return;
The failure point seems to be the update of the resultIsStatic variable
in the lambda body. The assignment of false is not propagated to the use
in the if-statement, resulting in the branch being taken when it should
not be.
I've found several modifications to the code that get around the bug.
The version I settled on makes the logic a little more obvious.
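One possible shape of such a workaround (hedged; illustrative rather than the exact code that landed) is to compute the flag up front instead of mutating a captured variable inside the lambda:

```cpp
// Assumes -1 is the placeholder for a dynamic extent in newShape.
bool resultIsStatic =
    inputIsStatic &&
    llvm::all_of(newShape, [](int64_t size) { return size >= 0; });
```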
This patch fixes:
mlir/lib/Conversion/TosaToTensor/TosaToTensor.cpp:76:46: error: 'multiplies' may not intend to support class template argument deduction [-Werror,-Wctad-maybe-unsupported]
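The typical shape of the fix for this warning (hedged; shown on a representative fragment, not the exact line in question) is to spell out the template argument so class template argument deduction is not needed:

```cpp
// std::multiplies<int64_t> instead of std::multiplies{} or std::multiplies<>.
int64_t numElements = std::accumulate(
    shape.begin(), shape.end(), int64_t{1}, std::multiplies<int64_t>());
```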
- Revamped lowering conversion pattern for `tosa.reshape` to handle previously unsupported combinations of dynamic dimensions in input and output tensors. The lowering strategy continues to rely on pairs `tensor.collapse_shape` + `tensor.expand_shape`, which allow for downstream fusion with surrounding `linalg.generic` ops.
- Fixed bug in canonicalization pattern `ReshapeOp::fold()` in `TosaCanonicalizations.cpp`. The input and result types being equal is not a sufficient condition for folding. If there is more than 1 dynamic dimension in the input and result types, a productive reshape could still occur.
- This work exposed the fact that bufferization does not properly handle a `tensor.collapse_shape` op producing a 0D tensor from a dynamically shaped one due to a limitation in `memref.collapse_shape`. While the proper way to address this would involve releasing the `memref.collapse_shape` restriction and verifying correct bufferization, this is left as possible future work. For now, this scenario is avoided by casting the `tosa.reshape` input tensor to a static shape if necessary (see `inferReshapeInputType()`).
- An extended set of tests are intended to cover relevant conversion paths. Tests are named using pattern `test_reshape_<rank>_{up|down|same}_{s2s|s2d|d2s|d2d}_{explicit|auto}[_empty][_identity]`, where:
- `<rank>` is the input rank (e.g., 3d, 6d)
- `{up|down|same}` indicates whether the reshape increases, decreases, or retains the input rank.
- `{s2s|s2d|d2s|d2d}` indicates whether reshape converts a statically shaped input to a statically shaped result (`s2s`), a statically shaped input to a dynamically shaped result (`s2d`), etc.
- `{explicit|auto}` is used to indicate that all values in the `new_shape` attribute are >=0 (`explicit`) or that a -1 placeholder value is used (`auto`).
- `empty` is used to indicate that `new_shape` includes a component set to 0.
- `identity` is used when the input and result shapes are the same.
Fixes #68481. The conversion fails in the following scenario (see the sketch below):
1. The resultType of tosa.slice is an UnrankedTensorType.
2. tosa.slice.getSize().size() < resultType.getRank()
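A hedged sketch of the kind of guard the fix implies (not the exact upstream code):

```cpp
// Bail out gracefully instead of assuming a ranked result type.
auto resultTy = dyn_cast<RankedTensorType>(sliceOp.getType());
if (!resultTy)
  return rewriter.notifyMatchFailure(sliceOp, "expected ranked result type");
```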
reshape(reshape(x)) -> reshape(x) can be directly written as a fold instead of a canonicalization,
to help other passes clean up while they work.
This initially broke ReshapeConverterExpand/Collapse, which relies on creating foldable reshapes and a carefully crafted
benefit priority of patterns.
I turned this into a single pattern on reshapes, which does expand and/or collapse as needed in one go.
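A minimal sketch of the fold (hedged; accessor names follow the usual TOSA conventions but may differ from the committed code):

```cpp
OpFoldResult tosa::ReshapeOp::fold(FoldAdaptor adaptor) {
  // reshape(reshape(x)) -> reshape(x): rewire this op to the inner input.
  if (auto innerReshape = getInput1().getDefiningOp<tosa::ReshapeOp>()) {
    getInput1Mutable().assign(innerReshape.getInput1());
    return getResult();
  }
  return {};
}
```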
Differential Revision: https://reviews.llvm.org/D155266
The pattern would create reshape ops without a newShape attr.
This fails the verifier (which can be seen in the debug output,
though it curiously doesn't abort compilation) and can cause crashes
in other code that expects to see valid reshape ops, like
ReshapeOp::fold.
Differential Revision: https://reviews.llvm.org/D154651
* Remove duplicate functions. `tensor::getMixedSize` and `tensor::getMixedSizes` should be used.
* Use `tensor::getMixedSize` instead of `createOrFold<tensor::DimOp>`. This is more efficient: `createOrFold` will create an op and immediately try to fold it. In the case of a static dimension size, an attribute can be used directly (see the sketch below).
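A hedged sketch of the replacement (`rewriter`, `loc`, and `input` are hypothetical locals):

```cpp
// Before: always materializes a tensor.dim, then tries to fold it away.
Value size = rewriter.createOrFold<tensor::DimOp>(loc, input, /*index=*/0);
// After: static sizes come back as Attributes, dynamic ones as Values.
SmallVector<OpFoldResult> sizes = tensor::getMixedSizes(rewriter, loc, input);
```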
Differential Revision: https://reviews.llvm.org/D153332
The existing lowering for tosa.concat fails in some instances when the
output shape contains more information than the input shapes. The result is
an illegal tensor.empty operation.
This change bases the output shape on the original tosa.concat
operation, while querying the input tensor shapes to build the slicing
operations.
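A hedged sketch of the idea (illustrative, not the exact upstream code; `dynDims` stands for the dynamic sizes gathered from the inputs):

```cpp
// Size the destination from the concat result type, which may carry more
// static information than any single input type.
auto resultTy = cast<RankedTensorType>(concatOp.getType());
Value dest = rewriter.create<tensor::EmptyOp>(
    loc, resultTy.getShape(), resultTy.getElementType(), dynDims);
```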
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D151707
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change begins the migration of uses of the method to the
corresponding function call, which has been decided to be more consistent.
Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.
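A representative before/after of the migration (hedged example, not taken from the diff):

```cpp
// Before: member form.
auto tensorTy = value.getType().cast<RankedTensorType>();
// After: free-function form.
auto tensorTy2 = llvm::cast<RankedTensorType>(value.getType());
```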
Caveats include:
- This clang-tidy script probably has more problems.
- This only touches C++ code, so nothing that is being generated.
Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443
Implementation:
This first patch was created with the following steps. The intention is
to only do automated changes at first, so I waste less time if it's
reverted, and so the first mass change is more clear as an example to
other teams that will need to follow similar steps.
Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
additional check:
https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
them to a pure state.
4. Some changes have been deleted for the following reasons:
- Some files had a variable also named cast
- Some files had not included a header file that defines the cast
functions
- Some files are definitions of the classes that have the casting
methods, so the code still refers to the method instead of the
function without adding a prefix or removing the method declaration
at the same time.
```
ninja -C $BUILD_DIR clang-tidy
run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
-header-filter=mlir/ mlir/* -fix
rm -rf $BUILD_DIR/tools/mlir/**/*.inc
git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp\
mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp\
mlir/lib/**/IR/\
mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp\
mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp\
mlir/test/lib/Dialect/Test/TestTypes.cpp\
mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp\
mlir/test/lib/Dialect/Test/TestAttributes.cpp\
mlir/unittests/TableGen/EnumsGenTest.cpp\
mlir/test/python/lib/PythonTestCAPI.cpp\
mlir/include/mlir/IR/
```
Differential Revision: https://reviews.llvm.org/D150123
Currently conversions to interfaces may happen implicitly (e.g.
`Attribute -> TypedAttr`), failing a runtime assert if the interface
isn't actually implemented. This change marks the `Interface(ValueT)`
constructor as explicit so that a cast is required.
Where it was straightforward to do so, I adjusted code to not require
casts; otherwise I just made the casts explicit.
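An illustration of the effect (hedged; `attr` is a hypothetical value):

```cpp
// TypedAttr typed = attr;              // Previously compiled implicitly;
                                        // could assert at runtime.
auto typed = dyn_cast<TypedAttr>(attr); // Now a cast is required.
if (!typed)
  return failure(); // The missing interface is handled explicitly.
```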
Depends on D148491, D148492
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D148493
tosa.reshape and tosa.concat were moved from TosaToLinalg to TosaToTensor
(D145119 & D145952). So now they are legal after applying TosaToLinalg patterns,
and illegal after applying TosaToTensor patterns.
This includes D146174 (thanks @ramiro050!)
Reviewed By: krzysz00
Differential Revision: https://reviews.llvm.org/D146213
tosa.concat is lowered to tensor.insert_slice thus it should be in
TosaToTensor rather than in TosaToLinalg.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D145952
Converting tosa.reshape to tensor.expand_shape and
tensor.collapse_shape logically belongs in the tosa-to-tensor
conversion process. In addition, we (rocMLIR downstream) want to use
the reshape -> expand/collapse_shape logic to simplify parts of our
Tosa integration without using the full tosa-to-linalg flow, further
motivating moving these patterns.
The downside to this change is that it means you need to run
tosa-to-tensor after tosa-to-linalg, which is probably a breaking
change.
Reviewed By: rsuderman
Differential Revision: https://reviews.llvm.org/D145119
Since tosa.pad is lowered strictly to arith and tensor ops, move
ConvertPad from TosaToLinalg to TosaToTensor, benefitting non-Linalg
Tosa targets. TensorToLinalg exists, and is trivial, so nothing is lost.
Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>
Differential Revision: https://reviews.llvm.org/D139091
The patch introduces the required changes to update the pass declarations and definitions to use the new autogenerated files and allow dropping the old infrastructure.
Reviewed By: mehdi_amini, rriddle
Differential Review: https://reviews.llvm.org/D132838
Apply scale should be optionally disabled when lowering via TosaToStandard.
In most cases it should persist until the lowering to a specific backend.
Reviewed By: jpienaar
Differential Revision: https://reviews.llvm.org/D122948