147 Commits

Author SHA1 Message Date
Mehdi Amini
61f64d1c23 Apply clang-tidy fixes for llvm-qualified-auto in SparseTensorRewriting.cpp (NFC) 2024-02-12 13:27:49 -08:00
Peiming Liu
298412b578
[mlir][sparse] set up SparseIterator to help generate code to traverse a sparse tensor level. (#78345) 2024-01-24 11:33:06 -08:00
Matthias Springer
5fcf907b34
[mlir][IR] Rename "update root" to "modify op" in rewriter API (#78260)
This commit renames 4 pattern rewriter API functions:
* `updateRootInPlace` -> `modifyOpInPlace`
* `startRootUpdate` -> `startOpModification`
* `finalizeRootUpdate` -> `finalizeOpModification`
* `cancelRootUpdate` -> `cancelOpModification`

The term "root" is a misnomer. The root is the op that a rewrite pattern
matches against
(https://mlir.llvm.org/docs/PatternRewriter/#root-operation-name-optional).
A rewriter must be notified of all in-place op modifications, not just
in-place modifications of the root
(https://mlir.llvm.org/docs/PatternRewriter/#pattern-rewriter). The old
function names were confusing and have contributed to various broken
rewrite patterns.

Note: The new function names use the term "modify" instead of "update"
for consistency with the `RewriterBase::Listener` terminology
(`notifyOperationModified`).
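
A minimal sketch of the renamed API in use; the pattern itself is hypothetical and written purely for illustration:

```
#include "mlir/IR/PatternMatch.h"

// Hypothetical pattern (not from this commit) showing the new spelling.
struct MarkVisited : public mlir::RewritePattern {
  using mlir::RewritePattern::RewritePattern;

  mlir::LogicalResult
  matchAndRewrite(mlir::Operation *op,
                  mlir::PatternRewriter &rewriter) const override {
    if (op->hasAttr("visited"))
      return mlir::failure();
    // Old spelling: rewriter.updateRootInPlace(op, ...);
    rewriter.modifyOpInPlace(op, [&] {
      op->setAttr("visited", rewriter.getUnitAttr());
    });
    return mlir::success();
  }
};
```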
2024-01-17 11:08:59 +01:00
Matthias Springer
0a8e3dd432
[mlir][Interfaces] DestinationStyleOpInterface: Rename hasTensor/BufferSemantics (#77574)
Rename interface functions as follows:
* `hasTensorSemantics` -> `hasPureTensorSemantics`
* `hasBufferSemantics` -> `hasPureBufferSemantics`

These two functions return "true" if the op has tensor (respectively buffer)
operands but no buffer (respectively tensor) operands.

Also drop the "ranked" part from the interface, i.e., do not distinguish
between ranked/unranked types.

The new function names describe the functions more accurately. They also
align their semantics with the notion of "tensor semantics" in the
bufferization framework. (An op is supposed to be bufferized if it has
tensor operands, and we don't care if it also has memref operands.)

This change is in preparation of #75273, which adds
`BufferizableOpInterface::hasTensorSemantics`. By renaming the functions
in the `DestinationStyleOpInterface`, we can avoid name clashes between
the two interfaces.
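
A hedged usage sketch of the renamed interface function; the helper and its use are invented for illustration:

```
#include "mlir/Interfaces/DestinationStyleOpInterface.h"

// Returns true if `op` is a DPS op whose operands are all tensors.
bool isPureTensorDps(mlir::Operation *op) {
  auto dpsOp = mlir::dyn_cast<mlir::DestinationStyleOpInterface>(op);
  if (!dpsOp)
    return false;
  // Formerly hasTensorSemantics(): true only if the op has tensor operands
  // and no buffer (memref) operands.
  return dpsOp.hasPureTensorSemantics();
}
```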
2024-01-12 10:02:54 +01:00
Aart Bik
365777ecbe
[mlir][sparse] refactor utilities into transform/utils dir (#75250)
Separates actual transformation files from supporting utility files in
the transforms directory. Includes a bazel overlay fix for the build (as
well as a bit of cleanup of that file to be less verbose and more
flexible).
2023-12-12 15:34:31 -08:00
Aart Bik
5b72950394
[mlir][sparse] move all COO related methods into SparseTensorType (#73881)
This centralizes all COO methods and provides a cleaner API. Note that
the "enc"-only constructor is a temporary workaround for the need for COO
methods inside the "enc"-only storage specifier.
2023-11-30 09:40:39 -08:00
Aart Bik
45288085b5
[mlir][sparse] move toCOOType into SparseTensorType class (#73708)
Migrates a dangling convenience method into the proper SparseTensorType class.
Also cleans up some details (picking the right dim2lvl/lvl2dim). Removes
more dead code.
2023-11-28 16:04:01 -08:00
Peiming Liu
4e2f1521ec
[mlir][sparse] code cleanup, remove FIXMEs (#73575) 2023-11-27 14:57:08 -08:00
Aart Bik
1944c4f76b
[mlir][sparse] rename DimLevelType to LevelType (#73561)
The "Dim" prefix is a legacy left-over that no longer makes sense, since
we have a very strict "Dimension" vs. "Level" definition for sparse
tensor types and their storage.
2023-11-27 14:27:52 -08:00
Aart Bik
1dd387e106
[mlir][sparse] change dim level type -> level type (#73058)
The "dimension" before "level" does not really make sense Note that
renaming the actual type DimLevelType to LevelType is still TBD, since
this is an externally visible change (e.g. visible to Python API).
2023-11-22 09:06:22 -08:00
Aart Bik
e8fc282ff2
[mlir][sparse] avoid non-perm on sparse tensor convert for new (#72459)
This avoids non-permutation maps on the conversion from COO to non-COO for
higher-dimensional `new` operators (viz. reading in BSR).

This is step 1 out of 3 to make sparse_tensor.new work for BSR.
2023-11-15 20:47:37 -08:00
Tim Harvey
c43e627457
Changed the phrase sparse-compiler to sparsifier in comments (#71578)
When the Powers That Be decided that the name "sparse compiler" should
be changed to "sparsifier", we negected to change some of the comments
in the code; this pull request completes the name change.
2023-11-07 20:55:00 +00:00
Aart Bik
2b67942139
[mlir][sparse] remove (some) deprecated dim/lvl methods (#71125)
This removes the most obvious ones. The others are still TBD.
2023-11-02 16:29:27 -07:00
Peiming Liu
53ffafb24d
[mlir][sparse] support sparse constant to BSR conversion. (#71114)
Support direct conversion from a constant tensor defined by
SparseArrayElements to BSR.
2023-11-02 14:45:39 -07:00
Aart Bik
22212ca745
[mlir][sparse] simplify some header code (#70989)
This is a first revision in a small series of changes that removes
duplications between direct encoding methods and sparse tensor type
wrapper methods (in favor of the latter abstraction, since it provides
more safety). The goal is to end up with "just" SparseTensorType.
2023-11-02 09:31:11 -07:00
Peiming Liu
3426d330a7
[mlir][sparse] Implement rewriters to reinterpret maps on foreach (#70868) 2023-11-01 12:11:47 -07:00
Peiming Liu
ef100c228a
[mlir][sparse] implements tensor.insert on sparse tensors. (#70737) 2023-10-30 16:04:41 -07:00
Peiming Liu
f82bee1367
[mlir][sparse] split post-sparsification-rewriting into two passes. (#70727) 2023-10-30 15:22:21 -07:00
Peiming Liu
7d608ee2bb
[mlir][sparse] unify sparse_tensor.out rewriting rules (#70518) 2023-10-27 16:46:58 -07:00
Peiming Liu
c780352de9
[mlir][sparse] implement sparse_tensor.lvl operation. (#69993) 2023-10-24 13:23:28 -07:00
Peiming Liu
e9fa1fdec9
[mlir][sparse] support CSR/BSR conversion (#69800) 2023-10-20 17:24:34 -07:00
Peiming Liu
6456e0bbbb
[mlir][sparse] implement sparse_tensor.crd_translate operation (#69653) 2023-10-20 16:18:02 -07:00
Peiming Liu
71c97c735c
[mlir][sparse] avoid tensor to memref conversion in sparse tensor rewriting rules. (#69362)
2023-10-17 11:34:06 -07:00
Peiming Liu
761c9dd927
[mlir][sparse] implementing stageSparseOpPass as an interface (#69022) 2023-10-17 10:54:44 -07:00
Peiming Liu
eb14f47bf1
[mlir][sparse][NFC] fix variable naming convention (#69232) 2023-10-16 10:51:28 -07:00
Peiming Liu
f248d0b28d
[mlir][sparse] implement sparse_tensor.reorder_coo (#68916)
As a side effect of the change, it also unifies the convertOp
implementation between the lib and codegen paths.
2023-10-12 13:22:45 -07:00
Peiming Liu
dda3dc5e38
[mlir][sparse] simplify ConvertOp rewriting rules (#68350)
Canonicalize a complex convertOp into multiple stages, such that each stage
can be done either by a direct conversion or by sorting.
2023-10-11 09:34:11 -07:00
Peiming Liu
0083f8338c
[mlir][sparse] renaming sparse_tensor.sort_coo to sparse_tensor.sort (#68161)
Rationale: the operation does not always sort COO tensors (it is also used
for sparse_tensor.compress, for example).
2023-10-03 16:28:25 -07:00
Peiming Liu
c3b01b4679
[mlir][sparse] unify lib/codegen rewriting rules for sparse tensor concatenation. (#68057) 2023-10-03 08:46:25 -07:00
Peiming Liu
bc878f70fc
[mlir][sparse] unify lib/codegen rewriting rules for sparse tensor reshaping operations. (#68049)
2023-10-02 17:06:22 -07:00
Peiming Liu
bfa3bc4378
[mlir][sparse] unifies sparse_tensor.sort_coo/sort into one operation. (#66722)
The use cases of the two operations largely overlap; simplify by keeping
only one of them.
2023-09-19 17:02:32 -07:00
Aart Bik
6a45339bac
[mlir][sparse] refine sparse fusion with empty tensors materialization (#66563)
This is a minor step towards deprecating bufferization.alloc_tensor().
It replaces the examples with tensor.empty() and adjusts the underlying
rewriting logic to prepare for this upcoming change.
2023-09-18 09:01:11 -07:00
Martin Erhart
6bf043e743
[mlir][bufferization] Remove allow-return-allocs and create-deallocs pass options, remove bufferization.escape attribute (#66619)
This commit removes the deallocation capabilities of
one-shot-bufferization. One-shot-bufferization should never deallocate
any memrefs, as this should be entirely handled by the
ownership-based-buffer-deallocation pass going forward. This means the
`allow-return-allocs` pass option now defaults to true and
`create-deallocs` defaults to false; both options, as well as the escape
attribute indicating whether a memref escapes the current region, are
removed. A new `allow-return-allocs-from-loops` option is added as a
temporary workaround for some bufferization limitations.
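
A hedged C++ sketch of configuring one-shot bufferization after this change; the option field name reflects my reading of the new pass option and is an assumption, not taken verbatim from this commit:

```
#include "mlir/Dialect/Bufferization/Transforms/OneShotAnalysis.h"

void configure(mlir::bufferization::OneShotBufferizationOptions &options) {
  // Deallocation is no longer performed by one-shot-bufferization; it is
  // left to the ownership-based buffer deallocation pass.
  // Assumed field name mirroring the `allow-return-allocs-from-loops`
  // pass option named above.
  options.allowReturnAllocsFromLoops = true; // temporary workaround option
}
```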
2023-09-18 16:44:48 +02:00
Martin Erhart
c199f7dc62 Revert "[mlir][bufferization] Remove allow-return-allocs and create-deallocs pass options, remove bufferization.escape attribute"
This reverts commit 6a91dfedeb956dfa092a6a3f411e8b02f0d5d289.

This caused problems in downstream projects. We are reverting to give
them more time for integration.
2023-09-13 13:53:48 +00:00
Martin Erhart
6a91dfedeb [mlir][bufferization] Remove allow-return-allocs and create-deallocs pass options, remove bufferization.escape attribute
This is the first commit in a series whose goal is to rework the
BufferDeallocation pass. Currently, this pass relies heavily on copies
to perform correct deallocations, which leads to very slow code and
potentially high memory usage. Additionally, there are unsupported cases,
such as returning memrefs, which this series of commits also aims to
support.

This first commit removes the deallocation capabilities of
one-shot-bufferization. One-shot-bufferization should never deallocate any
memrefs, as this should be entirely handled by the buffer-deallocation pass
going forward. This means the allow-return-allocs pass option now defaults
to true and create-deallocs defaults to false; both options, as well
as the escape attribute indicating whether a memref escapes the current region,
are removed.

The documentation is also updated in this commit to reflect these
pass-option changes.

Reviewed By: springerm

Differential Revision: https://reviews.llvm.org/D156662
2023-09-13 09:30:22 +00:00
Daniil Dudkin
8a6e54c9b3
[mlir][arith] Rename operations: maxf → maximumf, minf → minimumf (#65800)
This patch is part of a larger initiative aimed at fixing floating-point `max` and `min` operations in MLIR: https://discourse.llvm.org/t/rfc-fix-floating-point-max-and-min-operations-in-mlir/72671.

This commit addresses Task 1.2 of the mentioned RFC. By renaming these operations, we align their names with LLVM intrinsics that have corresponding semantics.
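
A hedged builder-level sketch of the rename; the wrapper function is invented for illustration:

```
#include "mlir/Dialect/Arith/IR/Arith.h"

// Emits the NaN-propagating float maximum using the renamed op.
mlir::Value emitMax(mlir::OpBuilder &b, mlir::Location loc,
                    mlir::Value x, mlir::Value y) {
  // Before this rename: b.create<mlir::arith::MaxFOp>(loc, x, y);
  return b.create<mlir::arith::MaximumFOp>(loc, x, y);
}
```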
2023-09-11 22:02:19 -07:00
Peiming Liu
372d88b051 [mlir][sparse] code cleanup.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D156941
2023-08-02 23:19:55 +00:00
Peiming Liu
e7df82816b [mlir][sparse] rewrite arith::SelectOp to semiring operations to sparsify it.
Reviewed By: aartbik, K-Wu

Differential Revision: https://reviews.llvm.org/D153397
2023-06-21 21:22:18 +00:00
Peiming Liu
fd68d36109 [mlir][sparse] unifying enterLoopOverTensorAtLvl and enterCoIterationOverTensorsAtLvls
The tensor levels are now explicitly categorized into different `LoopCondKind`s to instruct LoopEmitter to generate different code for different kinds of conditions (e.g., `SparseCond`, `SparseSliceCond`, `SparseAffineIdxCond`, etc.).

The process of generating a while loop is now decomposed into three steps, which are dispatched to the appropriate LoopCondKind handler (see the sketch after the list):
1. Generate LoopCondition (e.g., `pos <= posHi` for `SparseCond`, `slice.isNonEmpty` for `SparseAffineIdxCond`)
2. Generate LoopBody (e.g., compute the coordinates)
3. Generate ExtraChecks (e.g., `if (onSlice(crd))` for `SparseSliceCond`)
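
A hypothetical sketch of the three-step dispatch; all names below are invented for illustration and are not the actual LoopEmitter API:

```
enum class LoopCondKind { SparseCond, SparseSliceCond, SparseAffineIdxCond };

void emitLoopCondition(LoopCondKind kind); // step 1: per-kind condition
void emitLoopBody(LoopCondKind kind);      // step 2: coordinate computation
void emitExtraChecks(LoopCondKind kind);   // step 3: extra guards

void emitWhileLoop(LoopCondKind kind) {
  emitLoopCondition(kind); // e.g. `pos <= posHi` for SparseCond
  emitLoopBody(kind);      // e.g. compute the coordinates
  emitExtraChecks(kind);   // e.g. `if (onSlice(crd))` for SparseSliceCond
}
```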

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D152464
2023-06-14 20:03:10 +00:00
Aart Bik
80fe3168b5 [mlir][sparse] add support for direct prod/and/min/max reductions
We recently fixed a bug in "sparsifying" such reductions, since
it incorrectly changed this into reductions over stored elements
only, which works only for add/sub/or/xor. However, we still want
to be able to "sparsify" the reductions even in the general case,
and this is a first step by rewriting them into a custom reduction
that feeds in the implicit zeros. Note, however, that in the long run
we want to do this better and feed in any implicit zero only once,
for efficiency.

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D152580
2023-06-12 09:27:47 -07:00
wren romano
76647fce13 [mlir][sparse] Combining dimOrdering+higherOrdering fields into dimToLvl
This is a major step along the way towards the new STEA design.  While a great deal of this patch is simple renaming, there are several significant changes as well.  I've done my best to ensure that this patch retains the previous behavior and error-conditions, even though those are at odds with the eventual intended semantics of the `dimToLvl` mapping.  Since the majority of the compiler does not yet support non-permutations, I've also added explicit assertions in places that previously had implicitly assumed it was dealing with permutations.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D151505
2023-05-30 15:19:50 -07:00
Matthias Springer
3d90c8126f [mlir][sparse] Fix incorrect API usage in RewritePatterns
Incorrect API usage was detected by
`MLIR_ENABLE_EXPENSIVE_PATTERN_API_CHECKS`.

Differential Revision: https://reviews.llvm.org/D151302
2023-05-25 08:52:47 +02:00
wren romano
a0615d020a [mlir][sparse] Renaming the STEA field dimLevelType to lvlTypes
This commit is part of the migration towards the new STEA syntax/design.  In particular, this commit includes the following changes:
* Renaming compiler-internal functions/methods:
  * `SparseTensorEncodingAttr::{getDimLevelType => getLvlTypes}`
  * `Merger::{getDimLevelType => getLvlType}` (for consistency)
  * `sparse_tensor::{getDimLevelType => buildLevelType}` (to help reduce confusion vs actual getter methods)
* Renaming external facets to match:
  * the STEA parser and printer
  * the C and Python bindings
  * PyTACO

However, the actual renaming of the `DimLevelType` itself (along with all the "dlt" names) will be handled in a separate commit.
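
A hedged sketch of the renamed getter in use; the inspection helper is invented for illustration:

```
#include "mlir/Dialect/SparseTensor/IR/SparseTensor.h"

void inspectLvlTypes(mlir::RankedTensorType tensorType) {
  auto enc = mlir::sparse_tensor::getSparseTensorEncoding(tensorType);
  if (!enc)
    return; // not a sparse tensor type
  // Before this commit: enc.getDimLevelType();
  auto lvlTypes = enc.getLvlTypes();
  (void)lvlTypes; // one level type per storage level
}
```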

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D150330
2023-05-17 14:24:09 -07:00
Anlun Xu
6116ca67ab [mlir][sparse] Add sparse rewriting rules for tensor::ReshapeOp
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D149564
2023-05-16 14:56:33 -07:00
Tres Popp
5550c82189 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null through llvm's doCast mechanism,
in addition to defining methods with the same name.
This change begins the migration from the method form to the
corresponding free-function call, which has been decided on as the more
consistent style.
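
A short before/after sketch of the migration; the helper function is invented for illustration:

```
#include "mlir/IR/BuiltinTypes.h"
#include "mlir/IR/Value.h"

// Returns the value's type as a MemRefType, or null if it is not one.
mlir::MemRefType getMemRefType(mlir::Value v) {
  // Before (method form): return v.getType().dyn_cast<mlir::MemRefType>();
  // After (free-function form):
  return mlir::dyn_cast<mlir::MemRefType>(v.getType());
}
```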

Note that there still exist classes that only define methods directly,
such as AffineExpr, and this does not include work currently to support
a functional cast/isa call.

Caveats include:
- This clang-tidy script probably has more problems.
- This only touches C++ code, so nothing that is being generated.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This first patch was created with the following steps. The intention is
to only do automated changes at first, so I waste less time if it's
reverted, and so the first mass change is more clear as an example to
other teams that will need to follow similar steps.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   https://github.com/llvm/llvm-project/compare/main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
   and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.
4. Some changes have been deleted for the following reasons:
   - Some files had a variable also named cast
   - Some files had not included a header file that defines the cast
     functions
   - Some files are definitions of the classes that have the casting
     methods, so the code still refers to the method instead of the
     function without adding a prefix or removing the method declaration
     at the same time.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc

git restore mlir/lib/IR mlir/lib/Dialect/DLTI/DLTI.cpp\
            mlir/lib/Dialect/Complex/IR/ComplexDialect.cpp\
            mlir/lib/**/IR/\
            mlir/lib/Dialect/SparseTensor/Transforms/SparseVectorization.cpp\
            mlir/lib/Dialect/Vector/Transforms/LowerVectorMultiReduction.cpp\
            mlir/test/lib/Dialect/Test/TestTypes.cpp\
            mlir/test/lib/Dialect/Transform/TestTransformDialectExtension.cpp\
            mlir/test/lib/Dialect/Test/TestAttributes.cpp\
            mlir/unittests/TableGen/EnumsGenTest.cpp\
            mlir/test/python/lib/PythonTestCAPI.cpp\
            mlir/include/mlir/IR/
```

Differential Revision: https://reviews.llvm.org/D150123
2023-05-12 11:21:25 +02:00
Aart Bik
1e0966cd6c [mlir][sparse] add util for ToCoordinatesBuffer for COO AoS
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D150382
2023-05-11 10:43:31 -07:00
Peiming Liu
36c95ee739 [mlir][sparse] group tensor id and levels into pairs in loop emitter
This addresses some unresolved comments in https://reviews.llvm.org/D142930

Reviewed By: aartbik, wrengr

Differential Revision: https://reviews.llvm.org/D148565
2023-05-04 16:15:42 +00:00
Aart Bik
9a018a7b48 [mlir][sparse] relax constraints on tensor.cast with pre-rewriting
Reviewed By: wrengr

Differential Revision: https://reviews.llvm.org/D149489
2023-05-01 16:03:44 -07:00
Peiming Liu
5fd9d80135 [mlir][sparse] extend loop emitter to emit slice driven loops
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D142930
2023-04-13 03:29:40 +00:00
wren romano
9d4df97ff0 [mlir][sparse] Canonicalizing arguments to genReshapeDstShape and foreachInSparseConstant
These functions don't need a `PatternRewriter`; they only need an `OpBuilder`.  Also, the builder should be the first argument, before the `Location`, to match the style used everywhere else in MLIR.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D148059
2023-04-11 19:11:59 -07:00