Fix a problem in convert op rewriting where it used the original index for
ToIndicesOp.
Extend the concatenate op rewriting to handle dense destinations and
destinations with dynamic shapes.
Make the concatenate op integration test run on the codegen path.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D138057
Permutation wasn't handled correctly. Add a test for the rewriting.
Extend an integration test to run with enable_runtime_library=false to
also test the rewriting.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D137845
(1) also fixes a memory leak in the sparse2dense rewriting
(2) the dense2sparse path still needs a fix to skip zeros
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D137736
This patch re-commits D137468 and D137463, which were reverted by mistake.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D137579
This reverts commit 53d5d3401120f2aa741a73a5a9ba0ce012ca532c.
This is causing a build failure on the Windows mlir bot that was previously hidden by failures from another sparse tensor change:
https://lab.llvm.org/buildbot/#/builders/13/builds/28006
This reverts commit 70508b614e6478ba2c3fc79e935e2c68e2d79b71.
This change depends on a reverted change that broke the Windows mlir buildbot; reverting to bring the remaining mlir bots back to green.
Also fix the rewrite rule for sparse_tensor.new to reflect the recent change of
the runtime C interface and to use utilities for memref.alloca.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D135891
Previously, it used DimLevelType::SingletonNo to represent an unordered COO
tensor of rank 1, while it should use DimLevelType::CompressedNuNo.
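For illustration only (this sketch is not part of the patch), the distinction can be summarized as follows; the helper name, include path, and namespace are assumptions:

```cpp
#include "mlir/Dialect/SparseTensor/IR/Enums.h" // assumed location of DimLevelType

using mlir::sparse_tensor::DimLevelType;

// Hypothetical helper: the single level of a rank-1 unordered COO tensor can
// hold duplicate coordinates in arbitrary order, so it must be a compressed
// level that is neither unique ("Nu") nor ordered ("No"), rather than SingletonNo.
static DimLevelType rank1UnorderedCOOLevelType() {
  return DimLevelType::CompressedNuNo;
}
```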
Reviewed By: Peiming, wrengr
Differential Revision: https://reviews.llvm.org/D136387
This differential replaces all uses of SparseTensorEncodingAttr::DimLevelType with DimLevelType. The next differential will break out a separate library for the DimLevelType enum, so that the Dialect code doesn't need to depend on the rest of the runtime.
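A minimal before/after sketch of the mechanical change (the surrounding code and include path are hypothetical):

```cpp
#include "mlir/Dialect/SparseTensor/IR/Enums.h" // assumed location of DimLevelType

using namespace mlir::sparse_tensor;

// Before: the enum was spelled through the attribute class.
//   SparseTensorEncodingAttr::DimLevelType lt = levelTypes[d];
// After: the standalone enum is used directly.
//   DimLevelType lt = levelTypes[d];
static bool isCompressed(DimLevelType lt) {
  return lt == DimLevelType::Compressed;
}
```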
Depends On D135995
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D135996
`getIteratorTypesArray` should be used instead. It's a better substitute for all the current usages of the interface.
The current `ArrayAttr iterator_types()` has a few problems:
* It creates the assumption that an operation has iterator types as an attribute, but that's not always the case. Sometimes iterator types can be inferred from other attributes, or they're simply static.
* ArrayAttr is an obscure container and requires extracting values in the client code.
* Makes it hard to migrate iterator types from strings to enums ([RFC](https://discourse.llvm.org/t/rfc-enumattr-for-iterator-types-in-linalg/64535/9)).
Concrete ops, like `linalg.generic`, will still have iterator types as an attribute if needed.
As a side effect, this change helps a bit with migration to prefixed accessors.
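As a rough illustration of the kind of client-side cleanup this enables (the helper name is hypothetical, and at this revision the array elements are assumed to still be strings rather than enum values):

```cpp
#include "mlir/Dialect/Linalg/IR/Linalg.h"
#include "llvm/ADT/STLExtras.h"

// Hypothetical helper: check that a structured op has only parallel loops.
// Before, clients had to unpack the ArrayAttr returned by iterator_types(),
// e.g. attr.cast<StringAttr>().getValue() == "parallel"; the typed accessor
// hides that attribute plumbing.
static bool hasOnlyParallelIterators(mlir::linalg::LinalgOp op) {
  return llvm::all_of(op.getIteratorTypesArray(),
                      [](auto iteratorType) { return iteratorType == "parallel"; });
}
```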
Differential Revision: https://reviews.llvm.org/D135765
This patch fixes:
mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp:587:27:
error: comparison of integers of different signs: 'int64_t' (aka
'long') and 'uint64_t' (aka 'unsigned long')
[-Werror,-Wsign-compare]
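The usual remedy, shown here only as a generic sketch (not the actual code at SparseTensorRewriting.cpp:587), is to bring both operands to a common signedness before comparing:

```cpp
#include <cstdint>

// Generic illustration of silencing -Wsign-compare: guard against negative
// values, then cast the signed operand explicitly.
static bool sameSize(int64_t dimSize, uint64_t expectedSize) {
  return dimSize >= 0 && static_cast<uint64_t>(dimSize) == expectedSize;
}
```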
This extension to the sparse tensor type system in MLIR
opens up a whole new set of sparse storage schemes, such as
block sparse storage (e.g. BCSR) and ELL (aka jagged diagonals).
This revision merely introduces the type extension and
initial documentation. The actual interpretation of the type
(reading in tensors, lowering to code, etc.) will follow.
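As a purely hypothetical illustration of the storage schemes this enables (the encoding syntax and its lowering are not part of this revision), a 2x3 BCSR-style scheme reinterprets a matrix coordinate (i, j) as block coordinates plus in-block offsets:

```cpp
#include <array>
#include <cstdint>

// Hypothetical illustration: remap a dense coordinate (i, j) to the
// four-dimensional coordinate (i / BM, j / BN, i % BM, j % BN) used by a
// BM x BN block-sparse (BCSR-style) storage scheme.
template <int64_t BM, int64_t BN>
std::array<int64_t, 4> toBlockCoords(int64_t i, int64_t j) {
  return {i / BM, j / BN, i % BM, j % BN};
}
```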
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D135206
Add a new option (enable-runtime-library) to the sparse compiler pipeline; it allows us to decide whether we need to rewrite operations (e.g., concatenate, reshape) within sparsification (when using codegen) or convert them after sparsification (when using the runtime library).
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D133597
The operations that fill zeros into a newly allocated sparse tensor are redundant;
in addition, lowering failed on the test cases provided in this patch.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D132500
This prepares patterns that are sometimes generated by the front-end
and would prohibit fusion of SDDMM-flavored kernels.
Reviewed By: springerm
Differential Revision: https://reviews.llvm.org/D131126
This rewriting was no longer functional after the recent migration to one-shot
bufferization. However, this revision makes it work again, with a CHECK test
to ensure fusion happens. Note that functionality is tested by several
integration tests.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D130996