Add new interfaces to SparseTensorEncodingAttr to construct the pointer/index types based on the pointer/index bitwidth.
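For illustration, a hedged sketch (the encoding and memref types below are illustrative, not generated output): with explicit bitwidths in the encoding, the new interfaces yield correspondingly narrow integer types for the overhead storage instead of the default index type.

  #CSR = #sparse_tensor.encoding<{
    dimLevelType = [ "dense", "compressed" ],
    pointerBitWidth = 32,
    indexBitWidth = 64
  }>
  // Overhead storage then uses (sketch):
  //   pointers : memref<?xi32>
  //   indices  : memref<?xi64>
  //   values   : memref<?xf64>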
Reviewed By: aartbik, wrengr
Differential Revision: https://reviews.llvm.org/D139141
Previously, we generated an inlined implementation for the insert operation and
observed an MLIR compile-time increase due to the size of the main routine. We now
put the insert operation implementation in subroutines and leave the inlining
decision to the MLIR compiler.
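As a hedged, much-simplified sketch (the function @_insert_f64 and its signature are hypothetical, not the actual generated symbols), the insertion body is outlined into a private function and each insertion site becomes a call, so the MLIR inliner can decide later whether to inline it:

  func.func private @_insert_f64(%values: memref<?xf64>, %count: index,
                                 %v: f64) -> index {
    // Hypothetical, simplified insertion body: append the value and
    // return the new count (real routines also update pointers/indices).
    memref.store %v, %values[%count] : memref<?xf64>
    %c1 = arith.constant 1 : index
    %n = arith.addi %count, %c1 : index
    return %n : index
  }
  // Each insertion site then lowers to something like:
  //   %newcount = func.call @_insert_f64(%values, %count, %v)
  //       : (memref<?xf64>, index, f64) -> index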
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138957
This revision generalizes lowering the sparse_tensor.insert op into actual code that directly operates on the memrefs of a sparse storage scheme. The current insertion strategy does *not* rely on a cursor anymore, which introduces some testing overhead for each insertion (but still proportional to the rank, as before). Over time, we can optimize the code generation, but this version enables us to finish the effort to migrate from library to actual codegen; a small sketch of the op being lowered follows the list below.
Things to do:
(1) carefully deal with (un)ordered and (non)unique
(2) omit overhead when not needed
(3) optimize and specialize
(4) try to avoid the pointer "cleanup" (at HasInserts), and make sure the storage scheme is consistent at every insertion point (so that it can "escape" without concerns).
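A minimal sketch of the op being lowered (the encoding is illustrative, and %t0, %v, %i come from surrounding code): the insert yields the updated tensor as an SSA value, and codegen now expands it directly into loads, stores, and appends on the pointer/index/value memrefs of the storage scheme, without maintaining a cursor.

  #SV = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>

  %t1 = sparse_tensor.insert %v into %t0[%i] : tensor<1024xf64, #SV>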
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D137457
The alloc->insert/compress->load chain now needs to be
represented with a proper SSA chain inside loops
and if-statements to reflect the modifying
behavior (the runtime support library is forgiving about breaking
this, but the new codegen is not).
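A hedged sketch of the required SSA threading (function name, encoding, and sizes are illustrative): the tensor value produced by each insert is passed through the loop as an iter_arg and finally materialized with sparse_tensor.load.

  #SV = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>

  func.func @fill(%buf: memref<1024xf64>) -> tensor<1024xf64, #SV> {
    %c0 = arith.constant 0 : index
    %c1 = arith.constant 1 : index
    %cn = arith.constant 1024 : index
    %t0 = bufferization.alloc_tensor() : tensor<1024xf64, #SV>
    // Thread the tensor through the loop; every insert yields a new value.
    %t1 = scf.for %i = %c0 to %cn step %c1
        iter_args(%vt = %t0) -> (tensor<1024xf64, #SV>) {
      %v = memref.load %buf[%i] : memref<1024xf64>
      %vt2 = sparse_tensor.insert %v into %vt[%i] : tensor<1024xf64, #SV>
      scf.yield %vt2 : tensor<1024xf64, #SV>
    }
    // Finalize pending insertions.
    %r = sparse_tensor.load %t1 hasInserts : tensor<1024xf64, #SV>
    return %r : tensor<1024xf64, #SV>
  }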
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D136966
This prepares a subsequent revision that will generalize
the insertion code generation. Similar to the support lib,
insertions become much easier to perform with some "cursor"
bookkeeping. Note that we, in the long run, could perhaps
avoid storing the "cursor" permanently and use some
restricted-scope solution (alloca?) instead. However,
that puts harder restrictions on insertion-chain operations,
so for now we follow the more straightforward approach.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D136800
This allows the use of a nop convert to express that a sparse tensor
allocated through bufferization::AllocTensorOp will be expanded to sparse
tensor storage by sparse tensor codegen.
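A minimal sketch of the pattern this enables (encoding and shape are illustrative): the convert has identical source and destination types, a nop, but marks the allocation for expansion into sparse tensor storage by codegen.

  #SV = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>

  %t = bufferization.alloc_tensor() : tensor<64xf64, #SV>
  %s = sparse_tensor.convert %t : tensor<64xf64, #SV> to tensor<64xf64, #SV>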
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D136214
Builds an SSA cycle for the compress insertion loop.
Adds casting on index mismatches during push_back.
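A hedged sketch of the cast involved (the helper @append_coord and the 32-bit index buffer are hypothetical): when the index storage uses a narrower integer type than the machine index, the coordinate is cast before it is appended.

  func.func @append_coord(%indices: memref<?xi32>, %pos: index, %i: index) {
    // Cast the machine-width index to the narrower index storage type.
    %i32 = arith.index_cast %i : index to i32
    memref.store %i32, %indices[%pos] : memref<?xi32>
    return
  }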
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D136186
This is a proof-of-concept insertion implementation that sets up
the basic framework and implements it with push_back operations for just
sparse vectors. It adds insertion/compression through SSA values,
so that we properly update the memref after each push_back operation.
Note that properly using SSA values in sparsification is still TBD
but I will wait until Peiming's loop emitter is in to avoid conflicts.
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D136008
This revision also adds convenience methods to test the
dim level type/property (with the codegen being the first client).
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D134776
Rationale:
For every dynamic memref (memref<?xtype>), the stored size really
indicates the capacity and the entry in the memSizes indicates
the actual size. This allows us to use memrefs as "vectors".
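A hedged sketch of this convention (function name, capacity, and the number of memSizes entries are illustrative): the dynamic size of the memref acts as the capacity, while the corresponding memSizes entry tracks how many elements are actually in use.

  func.func @alloc_vector() -> (memref<?xf64>, memref<3xindex>) {
    %c0  = arith.constant 0 : index
    %c16 = arith.constant 16 : index
    // Capacity of 16 values ...
    %values = memref.alloc(%c16) : memref<?xf64>
    // ... but zero entries in use so far; memSizes tracks the actual size.
    %memSizes = memref.alloc() : memref<3xindex>
    memref.store %c0, %memSizes[%c0] : memref<3xindex>
    return %values, %memSizes : memref<?xf64>, memref<3xindex>
  }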
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D133724
The "sparsification" pass does not need the ability to use runtime values for
the dimension, so the only source of variability would have been user code.
Restricting the dimension to constants simplifies code generation.
Reviewed By: Peiming, wrengr
Differential Revision: https://reviews.llvm.org/D133458
Demonstrates how the sparse tensor type -> tuple -> getter chain
will eventually yield actual code on the memrefs directly.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D133143
Also includes a first codegen example (although full support needs tuple access).
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D133080
This builds a compound type for the buffers required for the sparse storage scheme defined by the MLIR sparse tensor types. The use of a tuple allows for a simple 1:1 type conversion. A subsequent pass can expand this tuple into its components with an isolated 1:N type conversion.
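A hedged sketch of the 1:1 conversion for a compressed vector (field order and count are illustrative; the exact layout depends on the storage scheme):

  #SV = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>

  // tensor<1024xf64, #SV> converts to a single compound value such as:
  //   tuple<memref<1xindex>,   // dimension sizes
  //         memref<3xindex>,   // memSizes (actual sizes of buffers below)
  //         memref<?xindex>,   // pointers
  //         memref<?xindex>,   // indices
  //         memref<?xf64>>     // values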
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D133050
This new pass provides an alternative to the current conversion pass
that converts sparse tensor types and sparse primitives to opaque pointers
and calls into a runtime support library. This pass will map sparse tensor
types to actual data structures and primitives to actual code. In the long
run, this new pass will remove our dependence on the support library, avoid
the need to link in fully templated and expanded code, and provide much better
opportunities for optimizing the generated code.
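As a hedged illustration of the two paths (types and signatures simplified), the existing conversion pass rewrites a sparse tensor argument into an opaque runtime handle, whereas the new codegen pass rewrites it into the actual storage buffers:

  #SV = #sparse_tensor.encoding<{ dimLevelType = [ "compressed" ] }>

  // Original declaration operating on a sparse tensor type.
  func.func private @use(tensor<1024xf64, #SV>)

  // After the existing conversion pass (sketch): an opaque pointer handled
  // by the runtime support library.
  //   func.func private @use(!llvm.ptr<i8>)

  // After the new codegen pass (sketch): the actual buffers of the storage
  // scheme, operated on by generated code.
  //   func.func private @use(memref<3xindex>, memref<?xindex>,
  //                          memref<?xindex>, memref<?xf64>)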
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D132766