* `RewriterBase::mergeBlocks` is simplified: it is implemented in terms of `mergeBlockBefore`.
* The signature of `mergeBlockBefore` is consistent with other APIs (such as `inlineRegionBefore`): an overload taking a `Block::iterator` is added.
* Additional safety checks are added to `mergeBlockBefore`: it now detects cases where the resulting IR could be invalid (no more `dropAllUses`) or partly unreachable (likely a case of incorrect API usage).
* Rename `mergeBlockBefore` to `inlineBlockBefore` (a usage sketch follows below).
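A minimal usage sketch of the renamed API, assuming the post-rename names; the helper functions, their arguments, and the surrounding setup are illustrative and not taken from the patch:
```
// Illustrative helpers; `rewriter`, `source`, `dest`, and `argValues` are
// assumed to be supplied by a surrounding rewrite pattern.
#include "mlir/IR/PatternMatch.h"

using namespace mlir;

static void spliceAtEnd(RewriterBase &rewriter, Block *source, Block *dest,
                        ValueRange argValues) {
  // New overload, consistent with inlineRegionBefore: inline `source` at an
  // iterator position inside `dest`.
  rewriter.inlineBlockBefore(source, dest, dest->end(), argValues);
}

static void mergeInto(RewriterBase &rewriter, Block *source, Block *dest,
                      ValueRange argValues) {
  // mergeBlocks is now a thin wrapper over the iterator-based overload.
  rewriter.mergeBlocks(source, dest, argValues);
}
```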
Differential Revision: https://reviews.llvm.org/D144969
Rewrite a NewOp into a NewOp of a sorted COO tensor and a ConvertOp for
converting the sorted COO tensor to the destination tensor type.
Codegen a NewOp of a sorted COO tensor to use the new bulk reader API and sort
the elements only when the input is not sorted.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D144504
Eliminating the sort seems to make the whole conversion slower (probably because loop rotation leads to bad locality).
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D144517
There is no need for a temporary COO and a sort even when converting dense -> CSC; we can instead rotate the loop to yield ordered coordinates from the start, as sketched below.
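A conceptual sketch of the rotated loop in plain C++ (not the generated IR), assuming a row-major dense input; with columns in the outer loop, nonzeros are visited already in CSC order, so they can be appended directly without a temporary COO buffer and a sort:
```
#include <cstdint>
#include <vector>

struct CSC {
  std::vector<int64_t> colPtr;  // size cols + 1
  std::vector<int64_t> rowIdx;  // row coordinate of each stored entry
  std::vector<double> values;   // stored nonzero values
};

CSC denseToCSC(const std::vector<double> &dense, int64_t rows, int64_t cols) {
  CSC csc;
  csc.colPtr.push_back(0);
  for (int64_t j = 0; j < cols; ++j) {    // rotated loop: columns outermost
    for (int64_t i = 0; i < rows; ++i) {  // rows innermost
      double v = dense[i * cols + j];
      if (v == 0.0)
        continue;
      csc.rowIdx.push_back(i);            // coordinates arrive in CSC order
      csc.values.push_back(v);
    }
    csc.colPtr.push_back(static_cast<int64_t>(csc.values.size()));
  }
  return csc;
}
```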
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D144213
We will support symmetric MTX without expanding the data in the sparse tensor
storage.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D144059
* Flattening/simplifying some nested conditionals
* const-ifying some local variables
Depends On D143800
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D143949
This change adds a new `SparseTensorType` class for making the "dim" vs "lvl" distinction more overt, and for abstracting over the differences between sparse-tensors and dense-tensors. In addition, this change also adds new type aliases `Dimension`, `Level`, and `FieldIndex` to make code more self-documenting.
Although the diff is very large, the majority of the changes are mechanical in nature (e.g., changing types to use the new aliases, updating variable names to match, etc.). Along the way I also made many variables `const` when they could be, the majority of which required only adding the keyword. A few places had conditional definitions of these variables, requiring actual code changes; however, that was only done when the overall change was extremely local and easy to extract. All these changes are included in the current patch only because it would be too onerous to split them off into a separate patch.
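As a toy illustration of the dim/lvl distinction, here is a simplified stand-in (the alias definitions mirror the intent of the patch, but the class below is not the actual `SparseTensorType`):
```
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

using Dimension = uint64_t;  // index into the tensor's semantic dimensions
using Level = uint64_t;      // index into the storage levels
using FieldIndex = unsigned; // index into the storage-scheme fields

// A dense tensor is the degenerate case where dims and lvls coincide.
class ToySparseTensorType {
public:
  ToySparseTensorType(std::vector<int64_t> dimShape,
                      std::vector<Level> dimToLvl)
      : dimShape(std::move(dimShape)), dimToLvl(std::move(dimToLvl)) {}

  Dimension getDimRank() const { return dimShape.size(); }
  Level getLvlRank() const { return dimToLvl.size(); }

  // Asking for a size by Dimension (rather than by Level) is now an overt,
  // type-documented choice at every call site.
  int64_t getDimSize(Dimension d) const {
    assert(d < getDimRank());
    return dimShape[d];
  }

private:
  std::vector<int64_t> dimShape;
  std::vector<Level> dimToLvl; // a permutation in the simple (non-COO) case
};
```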
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D143800
This adds the hint to a number of tensor allocations in the codegen paths,
shaving off quite some time from, e.g., reading in sparse matrices
thanks to the zero-reallocation scheme. Note that we can probably provide
hints on all allocations, and refine the heuristics that use them
for general tensors.
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D143309
Currently `TypedValue` can be constructed directly from `Value`, hiding
errors that could be caught at compile time. For example the following
will compile, but crash/assert at runtime:
```
void foo(TypedValue<IntegerType>);
void bar(TypedValue<FloatType> v) {
  foo(v);
}
```
This change removes the constructors and replaces them with explicit
llvm casts.
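A short sketch of usage after the change, assuming the explicit llvm-cast path; `takesInt` and `example` are made-up names:
```
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Value.h"
#include "llvm/Support/Casting.h"

using namespace mlir;

void takesInt(TypedValue<IntegerType> v);

void example(Value v) {
  // Checked downcast: asserts if `v` does not have IntegerType.
  takesInt(llvm::cast<TypedValue<IntegerType>>(v));
  // Or probe first when the type is not statically known.
  if (auto iv = llvm::dyn_cast<TypedValue<IntegerType>>(v))
    takesInt(iv);
}
```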
Depends on D142852
Reviewed By: rriddle
Differential Revision: https://reviews.llvm.org/D142855
Currently, all the non-stable sorting algorithms are implemented via a
straightforward quicksort. This will be fixed in a follow-up patch.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D142678
Previously, we relied on InsertOp to add values to the result, in the same way we
add values to a sparse tensor with compressed dimensions. We now store
values directly to the values buffer.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D141517
The patch adds operation mapping to `BlockAndValueMapping` and renames it to `IRMapping`. When operations are cloned, the old operations are mapped to their clones, which makes it possible to look up a cloned operation from the original one. Example:
```
Operation *opWithRegion = ...
Operation *opInsideRegion = &opWithRegion->front().front();
IRMapping map;
Operation *newOpWithRegion = opWithRegion->clone(map);
Operation *newOpInsideRegion = map.lookupOrNull(opInsideRegion);
```
Migration instructions:
All includes of `mlir/IR/BlockAndValueMapping.h` should be replaced with `mlir/IR/IRMapping.h`, and all uses of `BlockAndValueMapping` need to be renamed to `IRMapping`.
Reviewed By: rriddle, mehdi_amini
Differential Revision: https://reviews.llvm.org/D139665
Previously, we used a temporary tensor with identity ordering. We now use a
temporary tensor with the destination dimension ordering, to enable the use of
sort_coo for sorting the tensor.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D141295
This patch fixes a build failure due to -Wsign-compare in sparse2SparseRewrite(...) after https://reviews.llvm.org/D140871.
```
llvm-project/mlir/lib/Dialect/SparseTensor/Transforms/SparseTensorRewriting.cpp:842:32: error: comparison of integers of different signs: 'uint64_t' (aka 'unsigned long') and 'int64_t' (aka 'long') [-Werror,-Wsign-compare]
        for (uint64_t i = 0; i < rank; i++) {
                             ~ ^ ~~~~
1 error generated.
```
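A hypothetical illustration of the usual remedy for this class of warning, i.e. comparing values of the same signedness; this is a sketch of the pattern, not necessarily the exact change made in the patch:
```
#include <cassert>
#include <cstdint>

void visitDimensions(int64_t rank) {
  assert(rank >= 0 && "rank must be non-negative");
  for (uint64_t i = 0, e = static_cast<uint64_t>(rank); i < e; i++) {
    // ... per-dimension work ...
  }
}
```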
Reviewed By: MaskRay
Differential Revision: https://reviews.llvm.org/D141104
When concatenating along dimension 0, and all inputs/outputs are ordered with the identity dimension ordering,
the concatenated coordinates are yielded in lexicographic order, so there is no need to sort.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D140228
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
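A before/after illustration of the mechanical rewrite (the variable name is made up):
```
#include "llvm/ADT/Optional.h"

#include <optional>

// Before:
//   llvm::Optional<unsigned> width = llvm::None;
// After: the Optional type stays for now; only the sentinel changes.
llvm::Optional<unsigned> width = std::nullopt;
```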
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
This commit updates how the `SparseTensorConversion` pass handles `NewOp`. It breaks up the underlying `openSparseTensor` function into two parts (`SparseTensorReader::create` and `SparseTensorReader::readSparseTensor`) so that the pass can inject code for constructing `lvlSizes` between those two parts. Migrating the construction of `lvlSizes` out of the runtime and into the pass is a necessary first step toward fully supporting non-permutations. (The alternative would be for the pass to generate a `FuncOp` for performing the construction and then passing that to the runtime; which doesn't seem to have any benefits over the design of this commit.) And since the pass now generates the code to call these two functions, this change also removes the `Action::kFromFile` value from the enum used by `_mlir_ciface_newSparseTensor`.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138363
The attribute tells the operator to handle symmetric structures for 2D tensors.
By default, the operator assumes the input tensor is not symmetric.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138230
When calculating the dynamic dimensions of the concatenate result, we
shouldn't accumulate the sizes for the non-concatenating dimensions. For
example, when concatenating 4x2 and 4x3 tensors along dimension 1,
dimension 1 of the result is 5, while dimension 0 stays 4.
Reviewed By: aartbik, Peiming
Differential Revision: https://reviews.llvm.org/D138436
Fix a problem in convert op rewriting where it used the original index for
ToIndicesOp.
Extend the concatenate op rewriting to handle dense destination and dynamic
shape destination.
Make the concatenate op integration test run on the codegen path.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D138057
Permutation wasn't handled correctly. Add a test for the rewriting.
Extend an integration test to run with enable_runtime_library=false to
also test the rewriting.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D137845
(1) also fixes a memory leak in the sparse2dense rewriting
(2) dense2sparse still needs a fix to skip zeros
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D137736