185 Commits

Author SHA1 Message Date
Aart Bik
3d3e46cc4d
[mlir][sparse] make test for block sparsity more robust (#74798)
For BSR and convolutions, we encounter

(d0, d1, d2, d3) -> ((d0 + d2) floordiv 2, (d1 + d3) floordiv 2, (d0 +
d2) mod 2, (d1 + d3) mod 2)

which crashed the current test. Note that an actual test and working code are still to follow (since we need to fix a few other things first).
2023-12-08 11:50:10 -08:00
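For reference, a hedged sketch of the kind of block-sparse (BSR) encoding whose dimToLvl map produces this floordiv/mod structure; the 2x2 block size and the #BSR name are illustrative only, and the convolution case above composes the same pattern with (d0 + d2) / (d1 + d3) index expressions:

  // Illustrative 2x2 BSR encoding: dimensions (i, j) map to levels
  // (i floordiv 2, j floordiv 2, i mod 2, j mod 2).
  #BSR = #sparse_tensor.encoding<{
    map = (i, j) -> (i floordiv 2 : dense,
                     j floordiv 2 : compressed,
                     i mod 2      : dense,
                     j mod 2      : dense)
  }>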
Aart Bik
5b72950394
[mlir][sparse] move all COO related methods into SparseTensorType (#73881)
This centralizes all COO methods and provides a cleaner API. Note that the "enc"-only constructor is a temporary workaround for needing COO methods inside the "enc"-only storage specifier.
2023-11-30 09:40:39 -08:00
Aart Bik
98f8b1afb4
[mlir][sparse] remove COO test from trait and encoding (#73733)
This is a minor step towards moving ALL COO related tests into the SparseTensorType class rather than having them scattered all over the place (with the risk of becoming inconsistent). The next revision will move ALL COO related methods into this class.
2023-11-28 17:46:02 -08:00
Aart Bik
45288085b5
[mlir][sparse] move toCOOType into SparseTensorType class (#73708)
Migrates dangling convenience method into proper SparseTensorType class.
Also cleans up some details (picking right dim2lvl/lvl2dim). Removes
more dead code.
2023-11-28 16:04:01 -08:00
Aart Bik
79cb594fdf
[mlir][sparse] remove unused COO method (#73595)
A step closer towards moving all type related methods into the encoding and/or sparse tensor type class.
2023-11-27 17:26:45 -08:00
Peiming Liu
4e2f1521ec
[mlir][sparse] code cleanup, remove FIXMEs (#73575) 2023-11-27 14:57:08 -08:00
Aart Bik
1944c4f76b
[mlir][sparse] rename DimLevelType to LevelType (#73561)
The "Dim" prefix is a legacy left-over that no longer makes sense, since
we have a very strict "Dimension" vs. "Level" definition for sparse
tensor types and their storage.
2023-11-27 14:27:52 -08:00
Aart Bik
1dd387e106
[mlir][sparse] change dim level type -> level type (#73058)
The "dimension" before "level" does not really make sense Note that
renaming the actual type DimLevelType to LevelType is still TBD, since
this is an externally visible change (e.g. visible to Python API).
2023-11-22 09:06:22 -08:00
Aart Bik
d213220a9a
[mlir][sparse] fixed naming consistency (#73053)
All DLT related methods now have DLT at the end; removed a stale TODO.
2023-11-21 16:26:09 -08:00
Aart Bik
d2d29288bd
[mlir][sparse] code cleanup (#73047)
Removed two unused methods and an obsolete FIXME.
2023-11-21 14:57:38 -08:00
Yinying Li
c5a67e16b6
[mlir][sparse] Use variable instead of inlining sparse encoding (#72561)
Example:

#CSR = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : dense, d1 : compressed),
}>

// CHECK: #[[$CSR.*]] = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>
// CHECK-LABEL: func private @sparse_csr(
// CHECK-SAME: tensor<?x?xf32, #[[$CSR]]>)
func.func private @sparse_csr(tensor<?x?xf32, #CSR>)
2023-11-16 19:30:21 -05:00
Peiming Liu
06a65ce500
[mlir][sparse] schedule sparse kernels in a separate pass from sparsification. (#72423) 2023-11-15 12:16:05 -08:00
long.chen
1609f1c2a5
[mlir][affine][nfc] cleanup deprecated T.cast style functions (#71269)
For details, see the document: https://mlir.llvm.org/deprecation/

Not all changes were made manually; most of them were made with a clang tool I wrote: https://github.com/lipracer/cpp-refactor.
2023-11-14 13:01:19 +08:00
Aart Bik
2b67942139
[mlir][sparse] remove (some) deprecated dim/lvl methods (#71125)
This removes the most obvious ones. The others are still TBD.
2023-11-02 16:29:27 -07:00
Peiming Liu
53ffafb24d
[mlir][sparse] support sparse constant to BSR conversion. (#71114)
Support direct conversion from a constant tensor defined by SparseArrayElements to BSR.
2023-11-02 14:45:39 -07:00
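A hedged sketch of what such a conversion could look like in IR; the shape, the values, and the #BSR encoding (a 2x2 block-sparse encoding as sketched earlier on this page) are illustrative assumptions:

  // Constant sparse tensor given as coordinates + values.
  %cst = arith.constant sparse<[[0, 0], [1, 2]], [1.0, 2.0]> : tensor<4x4xf64>
  // Direct conversion of the constant into the block-sparse format.
  %bsr = sparse_tensor.convert %cst : tensor<4x4xf64> to tensor<4x4xf64, #BSR>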
Aart Bik
ee3ee1315a
[mlir][sparse] cleanup of enums header (#71090)
Some DLT related methods leaked into sparse_tensor.h, and this moves them back to the right header. Also, the asserts were incomplete and some DLT methods were duplicated.
2023-11-02 13:00:27 -07:00
Peiming Liu
deedf554fb
[mlir][sparse] Cleanup sparse_tensor::LvlOp's folder (#71085)
Reuse the util function instead.
2023-11-02 11:31:16 -07:00
Aart Bik
22212ca745
[mlir][sparse] simplify some header code (#70989)
This is the first revision in a small series of changes that removes duplication between direct encoding methods and sparse tensor type wrapper methods (in favor of the latter abstraction, since it provides more safety). The goal is to simply end up with "just" SparseTensorType.
2023-11-02 09:31:11 -07:00
Peiming Liu
ef222988b4
[mlir][sparse] implements sparse_tensor.reinterpret_map (#70388) 2023-10-26 16:00:32 -07:00
Yinying Li
b165650aee
[mlir][sparse] Return actual identity map instead of null map (#70365)
Changes:

1. Both dimToLvl and lvlToDim now always return the actual map instead of AffineMap() for the identity map.
2. Updated the custom builder for the encoding to have default values.
3. A non-inferable lvlToDim will still return AffineMap() during inference, so it will be caught by the verifier.
2023-10-26 18:30:34 -04:00
Peiming Liu
d808d922b4
[mlir][sparse] introduce sparse_tensor.reinterpret_map operation. (#70378) 2023-10-26 15:04:09 -07:00
Aart Bik
2e2011da38
[mlir][sparse] avoid excessive macro magic (#70276)
The shorthands are not even always shorter and the code is less clear
than when simply written out.
2023-10-25 20:48:37 -07:00
Aart Bik
7e83a1af5d
[mlir][sparse] add verification of absent value in sparse_tensor.unary (#70248)
This value should always be a plain constant or something invariant computed outside the surrounding linalg operation, since there is no co-iteration defined on anything done in this branch.

Fixes:
https://github.com/llvm/llvm-project/issues/69395
2023-10-25 13:56:43 -07:00
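A hedged sketch of the op shape being verified here; the value names and types are illustrative, and the exact assembly format should be taken from the sparse_tensor.unary documentation:

  // The absent branch must yield a plain constant (or a value that is
  // invariant to the surrounding linalg operation); it cannot depend on
  // any co-iterated value.
  %r = sparse_tensor.unary %a : f64 to f64
    present={
      ^bb0(%x: f64):
        %neg = arith.negf %x : f64
        sparse_tensor.yield %neg : f64
    }
    absent={
      %c1 = arith.constant 1.0 : f64
      sparse_tensor.yield %c1 : f64
    }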
Peiming Liu
c780352de9
[mlir][sparse] implement sparse_tensor.lvl operation. (#69993) 2023-10-24 13:23:28 -07:00
Peiming Liu
f0f5fdf73d
[mlir][sparse] introduce sparse_tensor.lvl operation. (#69978) 2023-10-23 15:49:39 -07:00
Peiming Liu
e9fa1fdec9
[mlir][sparse] support CSR/BSR conversion (#69800) 2023-10-20 17:24:34 -07:00
Peiming Liu
6456e0bbbb
[mlir][sparse] implement sparse_tensor.crd_translate operation (#69653) 2023-10-20 16:18:02 -07:00
Peiming Liu
ff21a90e51
[mlir][sparse] introduce sparse_tensor.crd_translate operation (#69630) 2023-10-19 15:42:09 -07:00
Yinying Li
fb5047f524
[mlir][sparse] Remove old syntax (#69624) 2023-10-19 17:33:28 -04:00
Yinying Li
7b9fb1c228
[mlir][sparse] Update verifier for block sparsity and singleton (#69389)
Updates:
1. Verification of block sparsity.
2. Verification that a singleton level type can only follow compressed or
loose_compressed levels, and that all level types after singleton are
singleton. (See the encoding sketch after this entry.)
3. Added getBlockSize function.
4. Added an invalid encoding test for an incorrect lvlToDim map that
user provides.
2023-10-19 12:34:18 -04:00
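For illustration, a hedged sketch of a sorted-COO-style encoding that satisfies the new ordering rule (the alias name is arbitrary): the singleton level directly follows a compressed(nonunique) level, and any further levels would also have to be singleton.

  #SortedCOO = #sparse_tensor.encoding<{
    map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)
  }>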
Yinying Li
d4088e7d5f
[mlir][sparse] Populate lvlToDim (#68937)
Updates:
1. Infer lvlToDim from dimToLvl (see the worked example after this entry)
2. Add more tests for block sparsity
3. Finish TODOs related to lvlToDim, including adding lvlToDim to python
binding

Verification of the lvlToDim that the user provides will be implemented in the next PR.
2023-10-17 16:09:39 -04:00
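As a worked illustration of the inference (a sketch with an arbitrary 2x2 block size), the populated lvlToDim is simply the inverse of the given dimToLvl:

  // dimToLvl attached to a 2x2 block-sparse encoding:
  #dimToLvl = affine_map<(i, j) -> (i floordiv 2, j floordiv 2, i mod 2, j mod 2)>
  // lvlToDim inferred as its inverse (i = 2 * l0 + l2, j = 2 * l1 + l3):
  #lvlToDim = affine_map<(l0, l1, l2, l3) -> (2 * l0 + l2, 2 * l1 + l3)>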
Peiming Liu
761c9dd927
[mlir][sparse] implementing stageSparseOpPass as an interface (#69022) 2023-10-17 10:54:44 -07:00
Peiming Liu
f248d0b28d
[mlir][sparse] implement sparse_tensor.reorder_coo (#68916)
As a side effect of the change, it also unifies the convertOp implementation between the lib and codegen paths.
2023-10-12 13:22:45 -07:00
Peiming Liu
0aacc2137a
[mlir][sparse] introduce sparse_tensor.reorder_coo operation (#68827) 2023-10-12 09:42:12 -07:00
Peiming Liu
dda3dc5e38
[mlir][sparse] simplify ConvertOp rewriting rules (#68350)
Canonicalize a complex convertOp into multiple stages, such that each stage can be done either by a direct conversion or by sorting.
2023-10-11 09:34:11 -07:00
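For example (a hedged sketch; the encodings, names, and shapes are illustrative), a single convert from an unsorted COO-style format to CSR can now be staged into a sort/reorder of the coordinates followed by a direct conversion, instead of one monolithic rewrite:

  #COO = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton) }>
  #CSR = #sparse_tensor.encoding<{ map = (d0, d1) -> (d0 : dense, d1 : compressed) }>
  // One convert op; the staged rewriting decides whether a sorting stage
  // is needed before the direct conversion.
  %csr = sparse_tensor.convert %coo : tensor<8x8xf64, #COO> to tensor<8x8xf64, #CSR>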
Yinying Li
6280e23124
[mlir][sparse] Print new syntax (#68130)
Printing changes from `#sparse_tensor.encoding<{ lvlTypes = [
"compressed" ] }>` to `map = (d0) -> (d0 : compressed)`. Level
properties, ELL and slice are also supported.
2023-10-04 16:36:05 -04:00
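Side by side (a sketch; the #CV alias name is arbitrary), the same 1-D compressed encoding before and after this change:

  // Old printed form:
  #CV = #sparse_tensor.encoding<{ lvlTypes = [ "compressed" ] }>
  // New printed form:
  #CV = #sparse_tensor.encoding<{ map = (d0) -> (d0 : compressed) }>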
Peiming Liu
0083f8338c
[mlir][sparse] renaming sparse_tensor.sort_coo to sparse_tensor.sort (#68161)
Rationale: the operation does not always sort COO tensors (it is also used for sparse_tensor.compress, for example).
2023-10-03 16:28:25 -07:00
Yinying Li
d2e8517912
[mlir][sparse] Update Enum name for CompressedWithHigh (#67845)
Change CompressedWithHigh to LooseCompressed.
2023-10-02 11:06:40 -04:00
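In the surface syntax the renamed level type shows up as loose_compressed; a minimal hedged sketch (the alias name is arbitrary):

  #LCV = #sparse_tensor.encoding<{
    map = (d0) -> (d0 : loose_compressed)
  }>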
Peiming Liu
6ca47eb49d
[mlir][sparse] rename sparse_tensor.(un)pack to sparse_tensor.(dis)assemble (#67717)
Pack/Unpack are overridden in many other places; rename the operations to avoid confusion.
2023-09-28 11:01:10 -07:00
Aart Bik
836411b99f
[mlir][sparse] add lvlToDim field to sparse tensor encoding (#67194)
Note the new surface syntax allows for defining a dimToLvl and lvlToDim
map at once (where usually the latter can be inferred from the former,
but not always). This revision adds storage for the latter, together
with some initial boilerplate. The actual support (inference, validation,
printing, etc.) is still TBD of course.
2023-09-22 15:51:25 -07:00
Peiming Liu
bfa3bc4378
[mlir][sparse] unifies sparse_tensor.sort_coo/sort into one operation. (#66722)
The use cases of the two operations largely overlap; let's simplify and use only one of them.
2023-09-19 17:02:32 -07:00
Yinying Li
51ebecf309 [mlir][sparse] Changed sparsity properties to use _ instead of -
Example: compressed-no -> compressed_no

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D158567
2023-08-23 17:00:27 +00:00
Aart Bik
bb44a6b7bb [mlir][sparse] migrate more to new surface syntax
Replaced "NEW_SYNTAX" with the more readable "map" (which we may or may not keep). Minor improvement in keyword parsing; migrated a few more examples over.

Reviewed By: Peiming, yinying-lisa-li

Differential Revision: https://reviews.llvm.org/D158325
2023-08-21 12:49:21 -07:00
wren romano
cad4646733 [mlir][sparse] Improve handling of NEW_SYNTAX
Improves the conversion from `DimLvlMap` to STEA, in order to correct rank-mismatch issues in the roundtrip tests.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D157162
2023-08-04 17:53:34 -07:00
wren romano
fdbe9312b1 [mlir][sparse] Adding getters/setters to DimLvlMap
Depends On D156768

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D156770
2023-08-01 12:55:45 -07:00
Peiming Liu
269c82d389 [mlir][sparse] introduce new 2:4 block sparsity level type.
Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D155128
2023-07-12 23:33:53 +00:00
Matthias Springer
cb7bda2ace [mlir][NFC] Use getConstantIntValue instead of casting to ConstantIndexOp
`getConstantIntValue` extracts constant values from all constant-like ops, not just `arith::ConstantIndexOp`.

Differential Revision: https://reviews.llvm.org/D154356
2023-07-04 14:08:37 +02:00
Aart Bik
6b88c852b6 [mlir][sparse] Start migration to new surface syntax for STEA
We are in the process of migrating to a much improved surface syntax for the Sparse Tensor Encoding Attribute (STEA).

You can see a preview of this in the StableHLO RFC at

 https://github.com/openxla/stablehlo/blob/main/rfcs/20230210-sparsity.md

This design is courtesy of Wren Romano.

This initial revision:
(1) introduces the first version of a new parser written by Wren Romano;
(2) introduces a simple "migration plan" using NEW_SYNTAX on the STEA, which will allow us to test the new parser with new examples, as well as migrate existing examples over without the need to rewrite them all.

This first "drop" merely provides the entry points to parse the new syntax. The parser is still under active development. For example, we need to address the "lookahead" issue when parsing the lvl spec (viz. do we see l0 = d0 or a direct d0). Another larger task is to actually implement "affine" parsing (since the MLIR affine parser is not accessible in other parts of the tree).

EXAMPLE:

Currently, CSR looks like

  #CSR = #sparse_tensor.encoding<{
    lvlTypes = ["dense","compressed"],
    dimToLvl = affine_map<(i,j) -> (i,j)>
  }>

but you can "force" the new parser with

  #CSR = #sparse_tensor.encoding<{
    NEW_SYNTAX =
    (d0, d1) -> (l0 = d0 : dense, l1 = d1 : compressed)
  }>

Reviewed By: Peiming

Differential Revision: https://reviews.llvm.org/D153997
2023-06-29 11:32:07 -07:00
Aart Bik
11a4f5bdfb [mlir][sparse] minor code changes
Submitting for Wren

Reviewed By: K-Wu

Differential Revision: https://reviews.llvm.org/D153804
2023-06-26 12:57:52 -07:00
wren romano
7a1077baa0 [mlir][sparse] Improving SparseTensorDimSliceAttr methods
This patch makes the following changes to `SparseTensorDimSliceAttr` methods:
* Mark `isDynamic` constexpr.
* Add new helpers `getStatic` and `getStaticString` to avoid repetition.
* Move the definitions for `getStatic{Offset,Stride,Size}` and `isCompletelyDynamic` out of the class declaration, because there is no benefit to inlining them.
* Change `parse` to use `kDynamic` rather than literals.
* Change `verify` to use the `isDynamic` helper.

Reviewed By: aartbik

Differential Revision: https://reviews.llvm.org/D150919
2023-05-30 17:30:55 -07:00