150 Commits

Author SHA1 Message Date
Matthias Springer
204234a69c
[mlir][SparseTensor][NFC] Pass tensor type to descriptor helper (#116468)
`getDescriptorFromTensorTuple` and `getMutDescriptorFromTensorTuple`
extract the tensor type from an `unrealized_conversion_cast` op that
serves as a workaround for missing 1:N dialect conversion support.

This commit changes these functions so that they explicitly receive the
tensor type as a function argument. This is in preparation for merging
the 1:1 and 1:N conversion drivers. The conversion patterns in this file
will soon start receiving multiple SSA values (`ValueRange`) from their
adaptors (instead of a single value that is the result of
`unrealized_conversion_cast`). It will no longer be possible to take the
tensor type from the `unrealized_conversion_cast` op. The
`unrealized_conversion_cast` workaround will disappear entirely.
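
For illustration, a hedged sketch of a call site after the change (the
accessor names and exact signature here are assumptions):

```
// The tensor type can no longer be recovered from the defining
// unrealized_conversion_cast op, so the caller now passes it in
// explicitly (sketch; exact signature is an assumption):
auto desc = getDescriptorFromTensorTuple(
    adaptor.getTensor(),
    cast<RankedTensorType>(op.getTensor().getType()));
```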
2024-11-19 09:27:51 +09:00
Matthias Springer
aed4356252
[mlir][Transforms] Dialect Conversion: Add replaceOpWithMultiple (#115816)
This commit adds a new function
`ConversionPatternRewriter::replaceOpWithMultiple`. This function is
similar to `replaceOp`, but it accepts multiple `ValueRange`
replacements, one per op result.

Note: This function is not an overload of `replaceOp` because of
ambiguous overload resolution that would make the API difficult to use.

This commit aligns "block signature conversions" with "op replacements":
both support 1:N replacements now. Due to incomplete 1:N support in the
dialect conversion driver, an argument materialization is inserted when
an SSA value is replaced with multiple values, the same way block signature
conversions already work around the problem. These argument
materializations are going to be removed in a subsequent commit that
adds full 1:N support. The purpose of this PR is to add missing features
gradually in small increments.

This commit also updates two MLIR transformations that have their own custom
workarounds for missing 1:N support. These can already start using
`replaceOpWithMultiple`.
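
A minimal usage sketch (the op, adaptor, and replacement values are
hypothetical; the two-value split is just for illustration):

```
LogicalResult
matchAndRewrite(MyOp op, OpAdaptor adaptor,
                ConversionPatternRewriter &rewriter) const override {
  // Replace the single op result with two SSA values (1:N replacement);
  // lowBits/highBits stand in for values computed by the pattern.
  SmallVector<Value> repl = {lowBits, highBits};
  rewriter.replaceOpWithMultiple(op, {repl});
  return success();
}
```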

Co-authored-by: Markus Böck <markus.boeck02@gmail.com>
2024-11-14 10:27:58 +09:00
Matthias Springer
206fad0e21
[mlir][NFC] Mark type converter in populate... functions as const (#111250)
This commit marks the type converter in `populate...` functions as
`const`. This is useful for debugging.

Patterns already take a `const` type converter. However, some
`populate...` functions not only add new patterns, but also add
additional type conversion rules. That makes it difficult to find the
place where a type conversion was added in the code base. With this
change, all `populate...` functions that only populate patterns now have
a `const` type converter. Programmers can then conclude from the
function signature that these functions do not register any new type
conversion rules.

Also some minor cleanups around the 1:N dialect conversion
infrastructure, which did not always pass the type converter as a
`const` object internally.
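
A sketch of the resulting signature shape (the function name is made up):

```
// Pattern-only populate function: the const qualifier now documents that
// no new type conversion rules are registered by this call.
void populateFooToBarConversionPatterns(const TypeConverter &typeConverter,
                                        RewritePatternSet &patterns);
```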
2024-10-05 21:32:40 +02:00
Aart Bik
c4e5a8a4d3
[mlir][sparse] support 'batch' dimensions in sparse_tensor.print (#91411) 2024-05-07 19:01:36 -07:00
Aart Bik
5c5116556f
[mlir][sparse] force a properly sized view on pos/crd/val under codegen (#91288)
Codegen "vectors" for pos/crd/val use the capacity as memref size, not
the actual used size. Although the sparsifier itself always uses just
the defined pos/crd/val parts, printing these and passing them back to a
runtime environment could benefit from wrapping the basic pos/crd/val
getters into a proper memref view that sets the right size.
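
A hedged sketch of the wrapping idea using memref.subview (variable
names are illustrative; the actual helper may differ):

```
// Build a rank-1 view of only the used portion of a pos/crd/val buffer,
// rather than exposing the buffer's full capacity.
SmallVector<OpFoldResult> offsets = {builder.getIndexAttr(0)};
SmallVector<OpFoldResult> sizes = {usedSize}; // dynamic used size (a Value)
SmallVector<OpFoldResult> strides = {builder.getIndexAttr(1)};
Value view =
    builder.create<memref::SubViewOp>(loc, buffer, offsets, sizes, strides);
```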
2024-05-07 09:20:56 -07:00
Matthias Springer
e8e8df4c1b
[mlir][sparse] Add has_runtime_library test op (#85355)
This commit adds a new test-only op:
`sparse_tensor.has_runtime_library`. The op returns "1" if the sparse
compiler runs in runtime library mode.

This op is useful for writing test cases that require different IR
depending on whether the sparse compiler runs in runtime library or
codegen mode.

This commit fixes a memory leak in `sparse_pack_d.mlir`. This test case
uses `sparse_tensor.assemble` to create a sparse tensor SSA value from
existing buffers. The runtime library path reallocates and copies the existing
buffers; the codegen path does not. Therefore, the test requires
additional deallocations when running in runtime library mode.

Alternatives considered:
- Make the codegen path allocate. "Codegen" is the "default" compilation
mode and it handles `sparse_tensor.assemble` correctly. The issue is
with the runtime library path, which should not allocate. Therefore, it
is better to put a workaround in the runtime library path than to work
around the issue with a new flag in the codegen path.
- Add a `sparse_tensor.runtime_only` attribute to
`bufferization.dealloc_tensor`. Verifying that the attribute can only be
attached to `bufferization.dealloc_tensor` may introduce an unwanted
dependency of `MLIRSparseTensorDialect` on `MLIRBufferizationDialect`.
2024-03-15 13:35:48 +09:00
Peiming Liu
94e27c265a
[mlir][sparse] reuse tensor.insert operation to insert elements into a sparse tensor (#84987)
2024-03-12 16:59:17 -07:00
Peiming Liu
fc9f1d49aa
[mlir][sparse] use a consistent order between [dis]assembleOp and storage layout (#84079)
2024-03-06 09:57:41 -08:00
Peiming Liu
52b69aa32f
[mlir][sparse] support sparsifying batch levels (#83898) 2024-03-04 14:39:06 -08:00
Peiming Liu
6bc7c9df7f
[mlir][sparse] infer returned type for sparse_tensor.to_[buffer] ops (#83343)
In the presence of batch levels, the sparse structure buffers might not
always be memrefs with rank == 1.
2024-02-28 16:10:20 -08:00
Peiming Liu
0d1f95760b
[mlir][sparse] support type conversion from batched sparse tensors to memrefs (#83163)
2024-02-27 12:05:28 -08:00
Peiming Liu
5248a98724
[mlir][sparse] support SoA COO in codegen path. (#82439)
*NOTE*: the `SoA` property only makes a difference on the codegen path, and
is ignored on the libgen path at the moment (only SoA COO is supported).
2024-02-20 17:06:21 -08:00
Yinying Li
e5924d6499
[mlir][sparse] Implement parsing n out of m (#79935)
1. Add parsing methods for block[n, m].
2. Encode n and m with the newly extended 64-bit LevelType enum.
3. Update 2:4 method names/comments to n:m.
2024-02-08 14:38:42 -05:00
Aart Bik
365777ecbe
[mlir][sparse] refactor utilities into transform/utils dir (#75250)
Separates actual transformation files from supporting utility files in
the transforms directory. Includes a bazel overlay fix for the build (as
well as a bit of cleanup of that file to be less verbose and more
flexible).
2023-12-12 15:34:31 -08:00
Aart Bik
5b72950394
[mlir][sparse] move all COO related methods into SparseTensorType (#73881)
This centralizes all COO methods and provides a cleaner API. Note that
the "enc"-only constructor is a temporary workaround for the need for
COO methods inside the "enc"-only storage specifier.
2023-11-30 09:40:39 -08:00
Aart Bik
1944c4f76b
[mlir][sparse] rename DimLevelType to LevelType (#73561)
The "Dim" prefix is a legacy left-over that no longer makes sense, since
we have a very strict "Dimension" vs. "Level" definition for sparse
tensor types and their storage.
2023-11-27 14:27:52 -08:00
Aart Bik
1dd387e106
[mlir][sparse] change dim level type -> level type (#73058)
The "dimension" before "level" does not really make sense Note that
renaming the actual type DimLevelType to LevelType is still TBD, since
this is an externally visible change (e.g. visible to Python API).
2023-11-22 09:06:22 -08:00
Aart Bik
d213220a9a
[mlir][sparse] fixed naming consistency (#73053)
All DLT-related methods now have DLT at the end; also removed a stale TODO.
2023-11-21 16:26:09 -08:00
Peiming Liu
2cc4b3d07c
[mlir][sparse] code cleanup using the assumption that dim2lvl maps are simplified (#72894)
2023-11-20 10:25:42 -08:00
Aart Bik
83cf0dc982
[mlir][sparse] implement direct IR alloc/empty/new for non-permutations (#72585)
This change implements the correct setup of *level* sizes for the direct
IR codegen fields in the sparse storage scheme. This brings libgen and
codegen together again.

This is step 3 out of 3 to make sparse_tensor.new work for BSR.
2023-11-16 17:17:41 -08:00
Aart Bik
2323f48e0d
[mlir][sparse] refactor dim2lvl/lvl2dim lvlsizes setup (#72474)
This change provides access to the individual components of dim sizes
and lvl sizes after each codegen utility call.

This is step 2 out of 3 to make sparse_tensor.new work for BSR.
2023-11-15 21:41:43 -08:00
Aart Bik
af8428c0d9
[mlir][sparse] unify support of (dis)assemble between direct IR/lib path (#71880)
Note that the (dis)assemble operations still make some simplifying
assumptions (e.g. trailing 2-D COO in AoS format), but now at least both
the direct IR and support library paths behave exactly the same.

Generalizing the ops is still TBD.
2023-11-13 10:05:00 -08:00
Aart Bik
160d483b1f
[mlir][sparse] implement loose-compressed/2:4 on direct IR codegen path (#71461)
Fills in the missing cases for direct IR codegen.
Note that non-permutation handling is still TBD.
2023-11-06 17:30:56 -08:00
Aart Bik
bc61122a71
[mlir][sparse] reformat SparseTensorCodegen file (#71231) 2023-11-03 14:35:43 -07:00
Aart Bik
2b67942139
[mlir][sparse] remove (some) deprecated dim/lvl methods (#71125)
This removes the most obvious ones. The others are still TBD.
2023-11-02 16:29:27 -07:00
Aart Bik
22212ca745
[mlir][sparse] simplify some header code (#70989)
This is a first revision in a small series of changes that removes
duplications between direct encoding methods and sparse tensor type
wrapper methods (in favor of the latter abstraction, since it provides
more safety). The goal is to simply end up with "just" SparseTensorType.
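
A hedged sketch of the wrapper-first style (method names are meant to be
representative, not exact):

```
// Query through the SparseTensorType wrapper rather than poking at the
// raw encoding attribute directly.
SparseTensorType stt(cast<RankedTensorType>(tensor.getType()));
Level lvlRank = stt.getLvlRank();
bool allDense = stt.isAllDense();
```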
2023-11-02 09:31:11 -07:00
Peiming Liu
ef222988b4
[mlir][sparse] implements sparse_tensor.reinterpret_map (#70388) 2023-10-26 16:00:32 -07:00
Peiming Liu
c780352de9
[mlir][sparse] implement sparse_tensor.lvl operation. (#69993) 2023-10-24 13:23:28 -07:00
Kazu Hirata
6461a824e4 [Transforms] Use llvm::erase_if (NFC) 2023-10-20 00:08:56 -07:00
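(For reference, a minimal self-contained sketch of the idiom this NFC adopts:)

```
#include "llvm/ADT/STLExtras.h"
#include "llvm/ADT/SmallVector.h"

void dropNegatives(llvm::SmallVector<int> &values) {
  // llvm::erase_if replaces the manual erase(remove_if(...)) pattern.
  llvm::erase_if(values, [](int v) { return v < 0; });
}
```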
Peiming Liu
f248d0b28d
[mlir][sparse] implement sparse_tensor.reorder_coo (#68916)
As a side effect of the change, it also unifies the convertOp
implementation between the lib and codegen paths.
2023-10-12 13:22:45 -07:00
Peiming Liu
837a26f237
[mlir][sparse] fix unused variable warning on release build (#68824) 2023-10-11 10:47:58 -07:00
Peiming Liu
dda3dc5e38
[mlir][sparse] simplify ConvertOp rewriting rules (#68350)
Canonicalize a complex convertOp into multiple stages, such that each
stage can be done either by a direct conversion or by sorting.
2023-10-11 09:34:11 -07:00
Aart Bik
d5622decf1
[mlir][sparse] rename map utility (#68611)
Rename util genReaderBuffers -> genMapBuffers since it is no longer
specific to the reader, but handles all MapRef data in general.
2023-10-09 14:41:45 -07:00
Aart Bik
d3af65358d
[mlir][sparse] introduce MapRef, unify conversion/codegen for reader (#68360)
This revision introduces a MapRef, which will support a future
generalization beyond permutations (e.g. block sparsity). This revision
also unifies the conversion/codegen paths for the sparse_tensor.new
operation from file (e.g. the readers). Note that more unification is
planned, as well as general affine dim2lvl and lvl2dim support (all
marked with TODOs).
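
A hedged, self-contained sketch of the underlying idea (the real MapRef
lives in the sparse runtime support library; names and members here are
illustrative):

```
#include <cstdint>
#include <utility>
#include <vector>

// Illustrative MapRef-like helper: translate dimension coordinates into
// level coordinates. This sketch handles permutations only; the point of
// MapRef is to generalize beyond that later (e.g. block sparsity).
class MapRefSketch {
  std::vector<uint64_t> perm; // level l reads dimension perm[l]
public:
  explicit MapRefSketch(std::vector<uint64_t> p) : perm(std::move(p)) {}
  void pushforward(const uint64_t *dimCoords, uint64_t *lvlCoords) const {
    for (uint64_t l = 0, e = perm.size(); l < e; ++l)
      lvlCoords[l] = dimCoords[perm[l]];
  }
};
```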
2023-10-06 13:42:01 -07:00
Yinying Li
6280e23124
[mlir][sparse] Print new syntax (#68130)
Printing changes from `#sparse_tensor.encoding<{ lvlTypes = [
"compressed" ] }>` to `map = (d0) -> (d0 : compressed)`. Level
properties, ELL and slice are also supported.
2023-10-04 16:36:05 -04:00
Peiming Liu
0083f8338c
[mlir][sparse] renaming sparse_tensor.sort_coo to sparse_tensor.sort (#68161)
Rationale: the operation does not always sort COO tensors (it is also
used for sparse_tensor.compress, for example).
2023-10-03 16:28:25 -07:00
Yinying Li
d2e8517912
[mlir][sparse] Update Enum name for CompressedWithHigh (#67845)
Change CompressedWithHigh to LooseCompressed.
2023-10-02 11:06:40 -04:00
Peiming Liu
6ca47eb49d
[mlir][sparse] rename sparse_tensor.(un)pack to sparse_tensor.(dis)assemble (#67717)

Pack/Unpack are overloaded terms in many other places; rename the
operations to avoid confusion.
2023-09-28 11:01:10 -07:00
Aart Bik
3e4a8c2c7d
[mlir][sparse] remove most bufferization.alloc_tensor ops from sparse (#66847)
The only ones left need actual deprecation in the bufferization module.
2023-09-20 09:51:08 -07:00
Peiming Liu
bfa3bc4378
[mlir][sparse] unifies sparse_tensor.sort_coo/sort into one operation. (#66722)
The use cases of the two operations largely overlap; simplify by using
only one of them.
2023-09-19 17:02:32 -07:00
Peiming Liu
098f46dce3
[sparse] allow unpack op to return 0-ranked tensor type. (#66269)
Many frontends canonicalize scalars into 0-ranked tensors; this change
will hopefully make the operation easier to use in those cases.
2023-09-13 11:33:01 -07:00
Peiming Liu
64df1c08d0
[sparse] allow unpack op to return any integer type. (#66161) 2023-09-12 17:27:51 -07:00
Aart Bik
b86d3cbc12 [mlir][sparse] complete various FIXMEs in sparse support lib
Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D159245
2023-08-30 21:30:25 -07:00
Yinying Li
be556ee157 [mlir][sparse] Fix typos in comments
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D158667
2023-08-24 18:13:36 +00:00
Yinying Li
51ebecf309 [mlir][sparse] Changed sparsity properties to use _ instead of -
Example: compressed-no -> compressed_no

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D158567
2023-08-23 17:00:27 +00:00
Peiming Liu
fa6726e27b [mlir][sparse] supports sparse_tensor.pack on libgen path
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D158012
2023-08-15 20:20:54 +00:00
Peiming Liu
a63d6a0014 [mlir][sparse] make UnpackOp return the actual filled length of unpacked memory
This might simplify frontend implementations by avoiding recomputation of the same value.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D154244
2023-06-30 21:35:15 +00:00
wren romano
af2bec7c4a [mlir][sparse] Adding new STEA::{with,without}DimSlices factories
(These factories are used in downstream code, despite not being used within the MLIR codebase.)

Depends On D151513

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D151518
2023-05-30 15:53:30 -07:00
wren romano
76647fce13 [mlir][sparse] Combining dimOrdering+higherOrdering fields into dimToLvl
This is a major step along the way towards the new STEA design.  While a great deal of this patch is simple renaming, there are several significant changes as well.  I've done my best to ensure that this patch retains the previous behavior and error-conditions, even though those are at odds with the eventual intended semantics of the `dimToLvl` mapping.  Since the majority of the compiler does not yet support non-permutations, I've also added explicit assertions in places that previously had implicitly assumed it was dealing with permutations.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D151505
2023-05-30 15:19:50 -07:00
Tres Popp
68f58812e3 [mlir] Move casting calls from methods to function calls
The MLIR classes Type/Attribute/Operation/Op/Value support
cast/dyn_cast/isa/dyn_cast_or_null functionality through llvm's doCast
functionality in addition to defining methods with the same name.
This change begins the migration from uses of the methods to the
corresponding free function calls, which has been decided to be more consistent.
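
A minimal sketch of the migration direction (illustrative types):

```
#include "mlir/IR/BuiltinTypes.h"

void example(mlir::Type type) {
  // Deprecated method style:
  //   auto intTy = type.dyn_cast<mlir::IntegerType>();
  // Preferred free-function style:
  if (auto intTy = llvm::dyn_cast<mlir::IntegerType>(type))
    (void)intTy.getWidth(); // use the result as usual
}
```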

Note that there still exist classes that only define methods directly,
such as AffineExpr; this change does not currently include work to
support functional cast/isa calls for those.

Context:
- https://mlir.llvm.org/deprecation/ at "Use the free function variants
  for dyn_cast/cast/isa/…"
- Original discussion at https://discourse.llvm.org/t/preferred-casting-style-going-forward/68443

Implementation:
This patch updates all remaining uses of the deprecated functionality in
mlir/. This was done with clang-tidy as described below, along with
further modifications to GPUBase.td and OpenMPOpsInterfaces.td.

Steps are described per line, as comments are removed by git:
0. Retrieve the change from the following to build clang-tidy with an
   additional check:
   main...tpopp:llvm-project:tidy-cast-check
1. Build clang-tidy
2. Run clang-tidy over your entire codebase while disabling all checks
   and enabling the one relevant one. Run on all header files also.
3. Delete .inc files that were also modified, so the next build rebuilds
   them to a pure state.

```
ninja -C $BUILD_DIR clang-tidy

run-clang-tidy -clang-tidy-binary=$BUILD_DIR/bin/clang-tidy -checks='-*,misc-cast-functions'\
               -header-filter=mlir/ mlir/* -fix

rm -rf $BUILD_DIR/tools/mlir/**/*.inc
```

Differential Revision: https://reviews.llvm.org/D151542
2023-05-26 10:29:55 +02:00