For BSR and convolutions, we encounter the mapping

  (d0, d1, d2, d3) -> ((d0 + d2) floordiv 2, (d1 + d3) floordiv 2,
                       (d0 + d2) mod 2, (d1 + d3) mod 2)

which crashed the current test. Note that an actual test and working code are still to follow (since we need to fix a few other things first).
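For context, such a map arises from composing the dimToLvl map of a 2-by-2 BSR encoding, like the one sketched below, with the (d0 + d2, d1 + d3) index expressions of a convolution (a hypothetical sketch; the #BSR name and block size are assumptions, not the actual test):

#BSR = #sparse_tensor.encoding<{
  map = (i, j) -> (i floordiv 2 : dense,
                   j floordiv 2 : compressed,
                   i mod 2 : dense,
                   j mod 2 : dense)
}>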
This centralizes all COO methods, and provides a cleaner API. Note that the "enc"-only constructor is a temporary workaround for the need for COO methods inside the "enc"-only storage specifier.
This is a minor step towards moving ALL COO related tests into the SparseTensorType class rather than having them spread all over the place (with the risk of becoming inconsistent). The next revision will move ALL COO related methods into this class.
Migrates a dangling convenience method into the proper SparseTensorType class. Also cleans up some details (picking the right dim2lvl/lvl2dim) and removes more dead code.
The "Dim" prefix is a legacy left-over that no longer makes sense, since
we have a very strict "Dimension" vs. "Level" definition for sparse
tensor types and their storage.
The "dimension" before "level" does not really make sense Note that
renaming the actual type DimLevelType to LevelType is still TBD, since
this is an externally visible change (e.g. visible to Python API).
Some DLT related methods leaked into sparse_tensor.h; this moves them back to the right header. Also, the asserts were incomplete and some DLT methods were duplicated.
This is a first revision in a small series of changes that removes duplication between direct encoding methods and sparse tensor type wrapper methods (in favor of the latter abstraction, since it provides more safety). The goal is to simply end up with "just" SparseTensorType.
Changes:
1. For both dimToLvl and lvlToDim, always return the actual map instead of AffineMap() for the identity map (see the CSR sketch after this list).
2. Updated the custom builder for the encoding to have default values.
3. A non-inferable lvlToDim will still return AffineMap() during inference, so it will be caught by the verifier.
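To illustrate change 1 with CSR (a sketch, using the new "map" syntax):

#CSR = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : dense, d1 : compressed)
}>

Both getDimToLvl() and getLvlToDim() now return the explicit identity map (d0, d1) -> (d0, d1) for this encoding, rather than a default AffineMap().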
This value should always be a plain constant or something invariant computed outside the surrounding linalg operation, since there is no co-iteration defined on anything done in this branch.
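For example (a minimal sketch; the #SV encoding and all names are illustrative assumptions), the constant %c below is defined outside the linalg.generic op, so no co-iteration is needed for it:

#SV = #sparse_tensor.encoding<{
  map = (d0) -> (d0 : compressed)
}>

#trait = {
  indexing_maps = [ affine_map<(i) -> (i)>, affine_map<(i) -> (i)> ],
  iterator_types = ["parallel"]
}

// A plain constant, invariant to the linalg op below.
%c = arith.constant 2.0 : f64
%0 = linalg.generic #trait
    ins(%a : tensor<8xf64, #SV>)
    outs(%b : tensor<8xf64>) {
  ^bb0(%x: f64, %y: f64):
    %m = arith.mulf %x, %c : f64
    linalg.yield %m : f64
} -> tensor<8xf64>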
Fixes:
https://github.com/llvm/llvm-project/issues/69395
Updates:
1. Verification of block sparsity.
2. Verification that a singleton level type can only follow a compressed or loose_compressed level, and that all level types after a singleton must be singleton (see the COO sketch after this list).
3. Added a getBlockSize function.
4. Added an invalid encoding test for an incorrect lvlToDim map provided by the user.
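For instance, sorted COO remains valid under update 2, since its singleton level directly follows a compressed level (a sketch):

#SortedCOO = #sparse_tensor.encoding<{
  map = (d0, d1) -> (d0 : compressed(nonunique), d1 : singleton)
}>

whereas a singleton level directly following a dense level is now rejected by the verifier.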
Updates:
1. Infer lvlToDim from dimToLvl (a worked example follows below).
2. Add more tests for block sparsity.
3. Finish TODOs related to lvlToDim, including adding lvlToDim to the Python bindings.
Verification of the lvlToDim map that the user provides will be implemented in the next PR.
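As a worked example, for 2-by-2 block sparsity the inference inverts the floordiv/mod expressions of the dimToLvl map:

  dimToLvl = (d0, d1) -> (d0 floordiv 2, d1 floordiv 2, d0 mod 2, d1 mod 2)
  lvlToDim = (l0, l1, l2, l3) -> (l0 * 2 + l2, l1 * 2 + l3)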
Note the new surface syntax allows for defining a dimToLvl and a lvlToDim map at once (where usually the latter can be inferred from the former, but not always). This revision adds storage for the latter, together with some initial boilerplate. The actual support (inference, validation, printing, etc.) is still TBD, of course.
Replaced the "NEW_SYNTAX" with the more readable "map"
(which we may, or may not keep). Minor improvement in
keyword parsing, migrated a few more examples over.
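For instance, the NEW_SYNTAX form of the CSR example shown below now reads (assuming the level-variable form carries over unchanged):

#CSR = #sparse_tensor.encoding<{
  map = (d0, d1) -> (l0 = d0 : dense, l1 = d1 : compressed)
}>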
Reviewed By: Peiming, yinying-lisa-li
Differential Revision: https://reviews.llvm.org/D158325
Improves the conversion from `DimLvlMap` to STEA, in order to correct rank-mismatch issues in the roundtrip tests.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D157162
`getConstantIntValue` extracts constant values from all constant-like ops, not just `arith::ConstantIndexOp`.
Differential Revision: https://reviews.llvm.org/D154356
We are in the progress of migrating to a much improved surface syntax for the Sparse Tensor Encoding Attribute (STEA).
You can see a preview of this in the StableHLO RFC at
https://github.com/openxla/stablehlo/blob/main/rfcs/20230210-sparsity.md
This design is courtesy of Wren Romano.
This initial revision:
(1) Introduces the first version of a new parser written by Wren Romano.
(2) Introduces a simple "migration plan" using NEW_SYNTAX on the STEA, which will allow us to test the new parser with new examples, as well as migrate existing examples over without the need to rewrite them all.
This first "drop" merely provides the entry points to parse the new syntax. The parser is still under active development. For example, we need to address the "lookahead" issue when parsing the lvl spec (viz. do we see l0 = d0 or a direct d0). Another larger task is to actually implement "affine" parsing (since the MLIR affine parser is not accessible in other parts of the tree).
EXAMPLE:
Currently, CSR looks like
#CSR = #sparse_tensor.encoding<{
lvlTypes = ["dense","compressed"],
dimToLvl = affine_map<(i,j) -> (i,j)>
}>
but you can "force" the new parser with
#CSR = #sparse_tensor.encoding<{
NEW_SYNTAX =
(d0, d1) -> (l0 = d0 : dense, l1 = d1 : compressed)
}>
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D153997
This patch makes the following changes to `SparseTensorDimSliceAttr` methods:
* Mark `isDynamic` constexpr.
* Add new helpers `getStatic` and `getStaticString` to avoid repetition.
* Move the definitions for `getStatic{Offset,Stride,Size}` and `isCompletelyDynamic` out of the class declaration, since there's no benefit to inlining them.
* Change `parse` to use `kDynamic` rather than literals.
* Change `verify` to use the `isDynamic` helper.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D150919