Use an array of structures to represent the indices for the trailing COO region
of a sparse tensor.
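For illustration, a minimal C++ sketch (struct names are hypothetical,
not the actual MLIR storage code) contrasting a structure-of-arrays
layout with the array-of-structures layout for a rank-2 COO region:

  #include <cstdint>
  #include <vector>

  // Structure-of-arrays: one index array per dimension, so the
  // coordinates of a single stored element are scattered across arrays.
  struct CooSoA {
    std::vector<uint64_t> dim0Indices;
    std::vector<uint64_t> dim1Indices;
  };

  // Array-of-structures: one array whose elements carry all coordinates
  // of an entry, keeping the indices of each element adjacent in memory.
  struct CooEntry {
    uint64_t dim0;
    uint64_t dim1;
  };
  using CooAoS = std::vector<CooEntry>;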
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D140870
This is part of an effort to migrate from llvm::Optional to
std::optional. This patch changes the way mlir-tblgen generates .inc
files, and modifies tests and documentation appropriately. It is a "no
compromises" patch, and doesn't leave the user with an unpleasant mix of
llvm::Optional and std::optional.
A non-trivial change has been made to ControlFlowInterfaces, splitting
one constructor into two in order to address a build failure on Windows.
See also: https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
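As an example of the kind of mechanical change involved (illustrative
code, not taken from the patch):

  #include <optional>

  int use(int);

  // Before, with the deprecated llvm::Optional accessors:
  //   llvm::Optional<int> v = compute();
  //   if (v.hasValue())
  //     use(v.getValue());

  // After, with std::optional:
  int example(std::optional<int> v) {
    if (v.has_value())
      return use(v.value()); // or *v
    return 0;
  }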
Signed-off-by: Ramkumar Ramachandra <r@artagnon.com>
Differential Revision: https://reviews.llvm.org/D138934
This reverts commit 10033a179f0c73f28f051ac70b058a0c61882e3a. In addition,
it fixes Windows warnings and GCC errors.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D139384
This patch abstracts the sparse tensor memory scheme into a
SparseTensorDescriptor class. Previously, field accesses were performed
in a relatively error-prone way; this patch hides the hairy details
behind a SparseTensorDescriptor class to allow users to access sparse
tensor fields in a more cohesive way.
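A hedged sketch of the idea (class shape and names are illustrative,
not the actual MLIR implementation): named accessors replace hard-coded
field offsets at every use site.

  #include <cassert>
  #include <utility>
  #include <vector>

  class SparseTensorDescriptor {
  public:
    explicit SparseTensorDescriptor(std::vector<int> fields)
        : fields(std::move(fields)) {}

    // Error-prone before: fields[2 * dim + 1] with magic offsets at
    // each use site. Cohesive after: the descriptor computes the field
    // index internally behind a named accessor.
    int getPtrField(unsigned dim) const { return fieldAt(2 * dim); }
    int getIdxField(unsigned dim) const { return fieldAt(2 * dim + 1); }

  private:
    int fieldAt(unsigned i) const {
      assert(i < fields.size() && "field index out of bounds");
      return fields[i];
    }
    std::vector<int> fields;
  };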
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138627
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
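A before/after illustration of the mechanical replacement (the function
is hypothetical):

  #include <optional>

  // Before:
  //   Optional<unsigned> getWidth(bool known) {
  //     return known ? Optional<unsigned>(64) : None;
  //   }

  // After:
  std::optional<unsigned> getWidth(bool known) {
    return known ? std::optional<unsigned>(64) : std::nullopt;
  }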
Add new interfaces to SparseTensorEncodingAttr to construct the
pointer/index types based on the pointer/index bitwidth.
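A hedged sketch of the mapping (assuming the usual convention that a
bitwidth of 0 selects the native index type; the real interfaces live
on SparseTensorEncodingAttr):

  #include "mlir/IR/Builders.h"

  // Build the overhead storage type for a given pointer/index bitwidth.
  static mlir::Type getOverheadType(mlir::Builder &builder,
                                    unsigned bitWidth) {
    if (bitWidth == 0)
      return builder.getIndexType(); // 0 means native index width
    return builder.getIntegerType(bitWidth);
  }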
Reviewed By: aartbik, wrengr
Differential Revision: https://reviews.llvm.org/D139141
Previously, we generated an inlined implementation for the insert
operation and observed an MLIR compile-time increase due to the size of
the main routine. We now put the insert operation implementation in
subroutines and leave the inlining decision to the MLIR compiler.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D138957
This revision generalizes lowering the sparse_tensor.insert op into actual
code that directly operates on the memrefs of a sparse storage scheme. The
current insertion strategy does *not* rely on a cursor anymore, which
introduces some testing overhead for each insertion (but still proportional
to the rank, as before). Over time, we can optimize the code generation,
but this version enables us to finish the effort to migrate from library
to actual codegen. An illustrative sketch of the storage updates follows
the list below.
Things to do:
(1) carefully deal with (un)ordered and (not)unique
(2) omit overhead when not needed
(3) optimize and specialize
(4) try to avoid the pointer "cleanup" (at HasInserts), and make sure the storage scheme is consistent at every insertion point (so that it can "escape" without concerns).
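An illustrative C++ analogue of what the generated code does for an
ordered, unique insert into a 1-D compressed (sparse vector) storage
scheme (names are hypothetical; the real lowering operates directly on
memrefs):

  #include <cstdint>
  #include <vector>

  struct SparseVectorStorage {
    std::vector<uint64_t> pointers{0, 0}; // start/end of stored region
    std::vector<uint64_t> indices;        // coordinates of stored entries
    std::vector<double> values;           // stored nonzero values
  };

  // Ordered, unique insertion at increasing coordinates reduces to
  // appending to indices/values and keeping the end pointer consistent.
  void insert(SparseVectorStorage &s, uint64_t i, double v) {
    s.indices.push_back(i);
    s.values.push_back(v);
    s.pointers[1] = s.indices.size();
  }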
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D137457
The alloc->insert/compress->load chain now needs to be
properly represented with an SSA chain in loops
and if statements to reflect the modifying
behavior (the runtime support library is forgiving about
breaking this, but the new codegen is not).
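A C++ analogy of the SSA discipline (purely illustrative; the real
chain is built from sparse_tensor ops on tensor values):

  struct Tensor {}; // stand-in for a sparse tensor value

  Tensor alloc();
  Tensor insert(Tensor t, unsigned i, double v);
  Tensor load(Tensor t);

  // Each insert conceptually yields a *new* tensor value, and loops and
  // if statements must carry that value forward rather than reusing the
  // original (which the support library tolerated, but codegen does not).
  Tensor buildChain(unsigned n) {
    Tensor t = alloc();
    for (unsigned i = 0; i < n; ++i)
      t = insert(t, i, 1.0); // rebind: updated value feeds the next use
    return load(t);          // load consumes the final value of the chain
  }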
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D136966
This prepares a subsequent revision that will generalize
the insertion code generation. Similar to the support library,
insertions become much easier to perform with some "cursor"
bookkeeping. Note that, in the long run, we could perhaps
avoid storing the "cursor" permanently and use some
restricted-scope solution (alloca?) instead. However,
that puts harder restrictions on insertion-chain operations,
so for now we follow the more straightforward approach.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D136800
This is to allow the use of a nop convert to express that the sparse tensor
allocated through bufferization::AllocTensorOp will be expanded to sparse
tensor storage by sparse tensor codegen.
Reviewed By: aartbik
Differential Revision: https://reviews.llvm.org/D136214
This builds an SSA cycle for the compress insertion loop and
adds casting on index mismatch during push_back.
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D136186
This is a proof-of-concept insertion implementation that sets up
the basic framework and implements it with push_backs for just
sparse vectors. It adds insertion/compression through SSA values,
so that we properly update the memref after each push_back operation.
Note that properly using SSA values in sparsification is still TBD,
but I will wait until Peiming's loop emitter is in to avoid conflicts.
Reviewed By: wrengr
Differential Revision: https://reviews.llvm.org/D136008
This revision also adds convenience methods to test the
dim level type/property (with the codegen being the first client).
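A hedged sketch of such predicates (enum values and helper names are
illustrative, not the exact MLIR definitions):

  enum class DimLevelType { Dense, Compressed, Singleton };

  // Predicates make intent explicit at the call site instead of
  // open-coded enum comparisons scattered through the codegen.
  constexpr bool isDenseDim(DimLevelType t) {
    return t == DimLevelType::Dense;
  }
  constexpr bool isCompressedDim(DimLevelType t) {
    return t == DimLevelType::Compressed;
  }
  constexpr bool isSingletonDim(DimLevelType t) {
    return t == DimLevelType::Singleton;
  }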
Reviewed By: bixia
Differential Revision: https://reviews.llvm.org/D134776
Rationale:
For every dynamic memref (memref<?xtype>), the stored size really
indicates the capacity, and the corresponding entry in memSizes
indicates the actual size. This allows us to use memrefs as "vectors".
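An illustrative C++ analogue (names hypothetical): the dynamic memref
plays the role of a vector's heap buffer, whose allocated length is the
capacity, while the memSizes entry tracks how many elements are in use.

  #include <cstdint>
  #include <vector>

  struct PushBackBuffer {
    std::vector<double> storage; // "memref<?xf64>": size is the capacity
    uint64_t used = 0;           // "memSizes" entry: actual element count

    void pushBack(double v) {
      if (used == storage.size()) // out of capacity: grow the buffer
        storage.resize(storage.empty() ? 4 : 2 * storage.size());
      storage[used++] = v;        // store value and bump the actual size
    }
  };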
Reviewed By: Peiming
Differential Revision: https://reviews.llvm.org/D133724
The "sparsification" pass does not need the ability to use runtime values for
the dimension, so the only source for variability would have been user code.
Restricting the dimension to constants simplifies code generation.
Reviewed By: Peiming, wrengr
Differential Revision: https://reviews.llvm.org/D133458