89 Commits

Matthias Springer
42c31d8302 [mlir][IR] Clean up mergeBlockBefore and mergeBlocks
* `RewriterBase::mergeBlocks` is simplified: it is implemented in terms of `mergeBlockBefore`.
* The signature of `mergeBlockBefore` is made consistent with other APIs (such as `inlineRegionBefore`): an overload taking a `Block::iterator` is added.
* Additional safety checks are added to `mergeBlockBefore`: detect cases where the resulting IR could be invalid (no more `dropAllUses`) or partly unreachable (likely a case of incorrect API usage).
* Rename `mergeBlockBefore` to `inlineBlockBefore`.
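A rough usage sketch of the renamed entry point (signatures assumed from the post-rename `RewriterBase`, not checked against a particular revision):

```
#include "mlir/IR/PatternMatch.h"
using namespace mlir;

// Sketch: inline the single block of `srcRegion` directly before `op`,
// remapping its block arguments to `operands`.
static void inlineSingleBlock(RewriterBase &rewriter, Region &srcRegion,
                              Operation *op, ValueRange operands) {
  // New name after this commit (previously `mergeBlockBefore`); an overload
  // taking a destination block plus a `Block::iterator` also exists.
  rewriter.inlineBlockBefore(&srcRegion.front(), op, operands);
}
```

`mergeBlocks` stays available as a thin wrapper that appends the source block's operations at the end of the destination block.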

Differential Revision: https://reviews.llvm.org/D144969
2023-03-06 13:46:08 +01:00
Matthias Springer
ae9e1d1df4 [mlir][SparseTensor] Fix incorrect API usage in RewritePatterns
Incorrect API usage was detected by D144552.

Differential Revision: https://reviews.llvm.org/D145166
2023-03-02 17:59:57 +01:00
bixia1
2c81d43241 [mlir][sparse] Improve the implementation of sparse_tensor.new for the codegen path.
Rewrite a NewOp into a NewOp of a sorted COO tensor and a ConvertOp for
converting the sorted COO tensor to the destination tensor type.

Codegen a NewOp of a sorted COO tensor to use the new bulk reader API and sort
the elements only when the input is not sorted.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D144504
2023-03-01 07:29:49 -08:00
Peiming Liu
85dbb3fc4b [mlir][sparse] support sparse tensor element type conversion in codegen path
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D144578
2023-02-23 17:49:50 +00:00
Peiming Liu
b7d86f3f1c [mlir][sparse] revert optimization for dense->csc conversion.
Eliminating the sort seems to make the whole conversion slower (probably because the loop rotation leads to bad locality).

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D144517
2023-02-21 21:34:01 +00:00
Peiming Liu
9e8d9316ce [mlir][sparse] allow foreach operation to generate out-of-order loop on non-annotated tensor.
There is no need for a temporary COO tensor and sort even when converting dense -> CSC; we can instead rotate the loop to yield ordered coordinates from the beginning.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D144213
2023-02-16 23:23:20 +00:00
bixia1
c2e248c6ae [mlir][sparse] Remove the expansion of symmetric MTX in the sparse tensor storage.
We will support symmetric MTX without expanding the data in the sparse tensor
storage.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D144059
2023-02-16 13:02:17 -08:00
wren romano
d950bdc73e [mlir][sparse] misc code cleanup
* Flattening/simplifying some nested conditionals
* const-ifying some local variables

Depends On D143800

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D143949
2023-02-15 13:29:00 -08:00
wren romano
bb4fc6b6d6 [mlir][sparse] Adding SparseTensorType::{operator==, hasSameDimToLvlMap}
Depends On D143800

Reviewed By: aartbik, Peiming

Differential Revision: https://reviews.llvm.org/D144052
2023-02-15 12:05:29 -08:00
Jie Fu
71251e8d4f [mlir] Fix -Wsign-compare in SparseTensorRewriting.cpp and Sparsification.cpp (NFC)
```
/home/jiefu/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/Sparsification.cpp:279:33: error: comparison of integers of different signs: 'int64_t' (aka 'long') and 'const mlir::sparse_tensor::Level' (aka 'const unsigned long') [-Werror,-Wsign-compare]
    assert(env.op().getRank(&t) == lvlRank);
           ~~~~~~~~~~~~~~~~~~~~ ^  ~~~~~~~
/usr/include/assert.h:93:27: note: expanded from macro 'assert'
     (static_cast <bool> (expr)                                         \
                          ^~~~
1 error generated.

/home/jiefu/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp:788:29: error: comparison of integers of different signs: 'int64_t' (aka 'long') and 'const mlir::sparse_tensor::Dimension' (aka 'const unsigned long') [-Werror,-Wsign-compare]
    assert(srcRTT.getRank() == dimRank);
           ~~~~~~~~~~~~~~~~ ^  ~~~~~~~
/usr/include/assert.h:93:27: note: expanded from macro 'assert'
     (static_cast <bool> (expr)                                         \
                          ^~~~
/home/jiefu/llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp:810:31: error: comparison of integers of different signs: 'int64_t' (aka 'long') and 'const mlir::sparse_tensor::Dimension' (aka 'const unsigned long') [-Werror,-Wsign-compare]
      assert(srcRTT.getRank() == dimRank);
             ~~~~~~~~~~~~~~~~ ^  ~~~~~~~
/usr/include/assert.h:93:27: note: expanded from macro 'assert'
     (static_cast <bool> (expr)                                         \
                          ^~~~
2 errors generated.
```
2023-02-15 11:58:08 +08:00
wren romano
f708a549b8 [mlir][sparse] Factoring out SparseTensorType class
This change adds a new `SparseTensorType` class for making the "dim" vs "lvl" distinction more overt, and for abstracting over the differences between sparse-tensors and dense-tensors.  In addition, this change also adds new type aliases `Dimension`, `Level`, and `FieldIndex` to make code more self-documenting.

Although the diff is very large, the majority of the changes are mechanical in nature (e.g., changing types to use the new aliases, updating variable names to match, etc).  Along the way I also made many variables `const` when they could be; the majority of which required only adding the keyword.  A few places had conditional definitions of these variables, requiring actual code changes; however, that was only done when the overall change was extremely local and easy to extract.  All these changes are included in the current patch only because it would be too onerous to split them off into a separate patch.
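A tiny sketch of the kind of self-documentation the aliases buy (the alias definitions below are assumed to be plain integer typedefs, for illustration only):

```
#include <cassert>
#include <cstdint>

// Assumed alias definitions; the commit adds similarly named aliases to the
// SparseTensor dialect so code states which index space a value lives in.
using Dimension = uint64_t; // position in the tensor's dimension space
using Level = uint64_t;     // position in the sparse storage-level space

// With an identity dim-to-lvl map the two spaces coincide; a hypothetical
// helper like this reads more clearly than one taking two bare `uint64_t`s.
Level identityDimToLvl(Dimension dim, uint64_t lvlRank) {
  assert(dim < lvlRank && "dimension out of range");
  return static_cast<Level>(dim);
}
```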

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D143800
2023-02-14 19:17:19 -08:00
Aart Bik
3bd82f30dc [mlir][sparse] compute allocation size_hint
This adds the hint to a number of tensor allocations in codegen,
shaving off quite some time from e.g. reading in sparse matrices
due to the zero-reallocation scheme. Note that we can probably provide
hints on all allocations, and refine the heuristics that use them
for general tensors.

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D143309
2023-02-06 14:08:53 -08:00
Peiming Liu
1f07853f2b [mlir][sparse] introduce sparse_tensor.pack operation
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D143224
2023-02-03 22:30:52 +00:00
Rahul Kayaith
0ce25b1235 [mlir] Require explicit casts when using TypedValue
Currently `TypedValue` can be constructed directly from `Value`, hiding
errors that could be caught at compile time. For example the following
will compile, but crash/assert at runtime:
```
void foo(TypedValue<IntegerType>);
void bar(TypedValue<FloatType> v) {
  foo(v);
}
```

This change removes the constructors and replaces them with explicit
llvm casts.
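A minimal sketch of the resulting call-site pattern, assuming the usual `llvm::cast`/`dyn_cast` support for `TypedValue`:

```
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Value.h"
using namespace mlir;

void foo(TypedValue<IntegerType>);

void baz(Value v) {
  // Implicit construction no longer compiles; the intent must be spelled out.
  foo(cast<TypedValue<IntegerType>>(v)); // asserts if v is not integer-typed
  // Or handle the failure case explicitly:
  if (auto iv = dyn_cast<TypedValue<IntegerType>>(v))
    foo(iv);
}
```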

Depends on D142852

Reviewed By: rriddle

Differential Revision: https://reviews.llvm.org/D142855
2023-02-01 21:54:53 -05:00
bixia1
0c7f1c1520 [mlir][sparse] Extend sparse_tensor.sort with a enum attribute to specify a sorting implementation.
Currently, all the non-stable sorting algorithms are implemented via the
straightforward quick sort. This will be fixed in the following PR.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D142678
2023-01-29 18:34:08 -08:00
wren romano
255c3f1159 [mlir][sparse] factoring out getRankedTensorType helper function
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D142074
2023-01-20 19:36:01 -08:00
bixia1
6646664154 [mlir][sparse] Improve ConcatenateOp rewriting for annotated all dense result.
Previously, we relied on InsertOp to add values to the result, in the same way we
add values to a sparse tensor with compressed dimensions. We now store values
directly to the values buffer.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D141517
2023-01-13 08:36:54 -08:00
Jeff Niu
4d67b27817 [mlir] Add operations to BlockAndValueMapping and rename it to IRMapping
The patch adds support for mapping operations to `BlockAndValueMapping` and renames it to `IRMapping`. When operations are cloned, the original operations are mapped to the cloned operations, so the clone of an operation can be looked up afterwards. Example:

```
Operation *opWithRegion = ...
Operation *opInsideRegion = &opWithRegion->front().front();

IRMapping map;
Operation *newOpWithRegion = opWithRegion->clone(map);
Operation *newOpInsideRegion = map.lookupOrNull(opInsideRegion);
```

Migration instructions:
All includes of `mlir/IR/BlockAndValueMapping.h` should be replaced with `mlir/IR/IRMapping.h`. All uses of `BlockAndValueMapping` need to be renamed to `IRMapping`.

Reviewed By: rriddle, mehdi_amini

Differential Revision: https://reviews.llvm.org/D139665
2023-01-12 13:16:05 -08:00
bixia1
14aba2084d [mlir][sparse] Improve the rewriting for dense-to-sparse conversion.
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D141335
2023-01-11 08:27:04 -08:00
bixia1
f3fd739d39 [mlir][sparse] Improve the rewriting for NewOp with dimension ordering.
Previously, we used a temporary tensor with identity ordering. We now use a
temporary tensor with the destination dimension ordering, to enable the use of
sort_coo for sorting the tensor.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D141295
2023-01-09 14:40:33 -08:00
Jie Fu
a021db346e [mlir] Fix build error due to -Wsign-compare after revision D140871
This patch fixes a build failure due to -Wsign-compare in sparse2SparseRewrite(...) after https://reviews.llvm.org/D140871.

```
llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp:842:32: error: comparison of integers of different signs: 'uint64_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Werror,-Wsign-compare]
        for (uint64_t i = 0; i < rank; i++) {
                             ~ ^ ~~~~
1 error generated.
```
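A typical way to silence such a warning is to give both operands the same signedness, e.g. (a sketch of the general technique, not necessarily the exact fix that landed):

```
#include <cstdint>

void perDimensionWork(int64_t signedRank /* e.g. from getRank() */) {
  // Convert the signed rank once so the loop comparison is sign-correct.
  const uint64_t rank = static_cast<uint64_t>(signedRank);
  for (uint64_t i = 0; i < rank; i++) {
    // ... per-dimension work ...
  }
}
```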

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D141104
2023-01-05 19:36:13 -08:00
bixia1
81e3079d0f [mlir][sparse] Replace sparse_tensor.sort with sparse_tensor.sort_coo for sorting COO tensors.
Add codegen pattern for sparse_tensor.indices_buffer.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140871
2023-01-05 15:42:57 -08:00
bixia1
90aa436291 [mlir][sparse] Add layout to the memref for the indices buffers to prepare for the AOS storage optimization for COO regions.
Fix relevant FileCheck tests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140742
2023-01-04 07:36:11 -08:00
Peiming Liu
781eabeb40 [mlir][sparse] refactoring loop emitter into its own files.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140701
2022-12-27 19:12:05 +00:00
Peiming Liu
a3672add76 [mlir][sparse] avoid unnecessary tmp COO buffer and convert when lowering ConcatenateOp.
When concatenating along dimension 0, and all inputs/outputs are ordered with the identity dimension ordering,
the concatenated coordinates are yielded in lexicographic order, thus there is no need to sort.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D140228
2022-12-16 18:26:39 +00:00
Peiming Liu
b4e2b7f90c [mlir][sparse] avoid unnecessary sorting when converting sparse tensors.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D139744
2022-12-10 00:24:42 +00:00
Aart Bik
65074179f2 [mlir][sparse] make fusion for SDDMM more robust
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D139456
2022-12-06 14:32:19 -08:00
Kazu Hirata
1a36588ec6 [mlir] Use std::nullopt instead of None (NFC)
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated.  The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
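
The mechanical replacement looks like this (a trivial sketch):

```
#include <optional>

// Before: llvm::Optional<int> getWidth() { return llvm::None; }
// After (same semantics, standard-library types):
std::optional<int> getWidth() { return std::nullopt; }
```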

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-03 18:50:27 -08:00
wren romano
2af2e4dbb7 [mlir][sparse] Breaking up openSparseTensor to better support non-permutations
This commit updates how the `SparseTensorConversion` pass handles `NewOp`.  It breaks up the underlying `openSparseTensor` function into two parts (`SparseTensorReader::create` and `SparseTensorReader::readSparseTensor`) so that the pass can inject code for constructing `lvlSizes` between those two parts.  Migrating the construction of `lvlSizes` out of the runtime and into the pass is a necessary first step toward fully supporting non-permutations.  (The alternative would be for the pass to generate a `FuncOp` for performing the construction and then passing that to the runtime; which doesn't seem to have any benefits over the design of this commit.)  And since the pass now generates the code to call these two functions, this change also removes the `Action::kFromFile` value from the enum used by `_mlir_ciface_newSparseTensor`.
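Conceptually, the conversion pass now emits two runtime calls instead of one, with room to build `lvlSizes` in between (the declarations below are simplified placeholders for illustration, not the actual runtime API):

```
// Simplified placeholder declarations; the real SparseTensorReader API in the
// sparse tensor runtime differs in detail.
struct SparseTensorReader {
  // Phase 1: open the file and read the header (dimension sizes, etc.).
  static SparseTensorReader *create(const char *filename);
  // Phase 2: read the elements into the final storage.
  void *readSparseTensor(/* lvlSizes, lvlTypes, dim-to-lvl map, ... */);
};

// The pass generates, roughly:
//   %reader = <create reader from %filename>
//   ... code that computes %lvlSizes from the reader's dimension sizes ...
//   %tensor = <read sparse tensor using %reader and %lvlSizes>
```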

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138363
2022-12-02 11:10:57 -08:00
bixia1
101a0c84f7 [mlir][sparse] Improve concatenate operator rewrite for annotated all dense dimensions results.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138823
2022-11-29 12:25:52 -08:00
bixia1
aedf5d5831 [mlir][sparse] Improve concatenate operator rewriting for dense tensor results.
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D138465
2022-11-28 07:56:01 -08:00
bixia1
974b4bf9fd [mlir][sparse] Add expand_symmetry attribute to the new operator.
The attribute tells the operator to handle symmetric structures for 2D tensors.
By default, the operator assumes the input tensor is not symmetric.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138230
2022-11-23 16:32:15 -08:00
bixia1
6c01b5cdad [mlir][sparse] Fix a bug in concatenate operator rewriting.
When calculating the dynamic dimensions for the concatenate result, we
shouldn't accumulate the sizes for the non-concatenating dimensions.
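That is, only the concatenation dimension sums the input sizes; every other dimension keeps the (matching) input size. A small sketch of the intended computation (hypothetical helper, not the dialect's actual code):

```
#include <cstdint>
#include <vector>

// Result shape of concatenating tensors along `concatDim`: sizes accumulate
// only along concatDim; all other dimensions keep the inputs' common size.
std::vector<int64_t> concatShape(const std::vector<std::vector<int64_t>> &inputs,
                                 unsigned concatDim) {
  std::vector<int64_t> result = inputs.front();
  for (size_t i = 1; i < inputs.size(); ++i)
    result[concatDim] += inputs[i][concatDim];
  return result;
}
```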

Reviewed By: aartbik, Peiming

Differential Revision: https://reviews.llvm.org/D138436
2022-11-22 08:17:35 -08:00
Aliia Khasanova
399638f98c Merge kDynamicSize and kDynamicSentinel into one constant.

Differential Revision: https://reviews.llvm.org/D138282
2022-11-21 13:01:26 +00:00
bixia1
96b3bf4292 [mlir][sparse] Fix a problem in the new operator rewriter.
The getSparseTensorReaderNextX functions should return void.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138226
2022-11-17 13:21:10 -08:00
Peiming Liu
8d615a23ef [mlir][sparse] fix crash on sparse_tensor.foreach operation on tensors with complex<T> elements.
Reviewed By: aartbik, bixia

Differential Revision: https://reviews.llvm.org/D138223
2022-11-17 19:36:15 +00:00
bixia1
c374ef2eb7 [mlir][sparse] Extend the operator new rewriter to handle isSymmetric flag.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138214
2022-11-17 10:48:24 -08:00
bixia1
f81f0cb75a [mlir][sparse] Split SparseTensorRewrite into PreSparsificationRewrite and PostSparsificationRewrite.
Reviewed By: aartbik, wrengr

Differential Revision: https://reviews.llvm.org/D138153
2022-11-17 07:13:55 -08:00
Peiming Liu
91e7b9e525 [mlir][sparse] annotate loops that are generated by loop emitter.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138155
2022-11-17 00:09:33 +00:00
Peiming Liu
61e5c14fa8 [mlir][sparse] fix memory leakage in concatenate rewriter.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D138074
2022-11-16 00:10:33 +00:00
Aart Bik
0e1708ff64 [mlir][sparse] cleanup small vector constant hints
Following advice from

https://llvm.org/docs/ProgrammersManual.html#llvm-adt-smallvector-h

This revision removes the size hints from SmallVector (unless we are
certain of the resulting number of elements). Also, this replaces
SmallVector references with SmallVectorImpl references.
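
The guideline in a nutshell (a small sketch):

```
#include "llvm/ADT/SmallVector.h"
using namespace llvm;

// Interfaces take SmallVectorImpl<T>& so callers can choose any inline size.
static void collectIndices(SmallVectorImpl<unsigned> &indices) {
  indices.push_back(0);
}

void demo() {
  // No inline-size hint unless the element count is actually known up front;
  // LLVM picks a sensible default per element type.
  SmallVector<unsigned> indices;
  collectIndices(indices);
}
```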

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D138063
2022-11-15 15:09:06 -08:00
bixia1
555e7835f4 [mlir][sparse] Fix rewriting for convert op and concatenate op.
Fix a problem in convert op rewriting where it used the original index for
ToIndicesOp.

Extend the concatenate op rewriting to handle dense destinations and dynamic-shape
destinations.

Make the concatenate op integration test run on the codegen path.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D138057
2022-11-15 14:45:21 -08:00
Peiming Liu
c3821e1684 [mlir][sparse] fix bugs in concatenate rewriter.
Reviewed By: aartbik, bixia

Differential Revision: https://reviews.llvm.org/D138053
2022-11-15 19:49:59 +00:00
Peiming Liu
8ffdcc594e [mlir][sparse] fix memory leak in sparse2sparse reshape
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137994
2022-11-15 00:19:51 +00:00
bixia1
fa46de16db [mlir][sparse][NFC] Add comments to tests that are run both with and without runtime libraries.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137869
2022-11-14 08:21:33 -08:00
Peiming Liu
725e0849b7 [mlir][sparse] fix incorrect coordinate ordering computed by the foreach operation.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D137877
2022-11-12 04:08:50 +00:00
bixia1
57416d872a [mlir][sparse] Fix a bug in rewriting dense2dense convert op.
Permutation wasn't handled correctly. Add a test for the rewriting.

Extend an integration test to run with enable_runtime_library=false to
also test the rewriting.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D137845
2022-11-11 09:11:54 -08:00
bixia1
ccd1a5b9cb [mlir][sparse] Fix a bug in rewriting for the convert op.
The code to retrieve the number of entries isn't correct.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D137795
2022-11-10 14:30:51 -08:00
Aart Bik
e6cbb91483 [mlir][sparse] skip zeros during dense2sparse
This enables the full matmul integration test with runtime_lib=true/false!

Background:
https://github.com/llvm/llvm-project/issues/51657

Reviewed By: bixia

Differential Revision: https://reviews.llvm.org/D137750
2022-11-09 20:54:27 -08:00
Aart Bik
a61a9a700a [mlir][sparse] first end-to-end matmul with codegen
(1) also fixes a memory leak in sparse2dense rewriting
(2) still needs a fix in dense2sparse by skipping zeros

Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D137736
2022-11-09 13:40:05 -08:00