**TL;DR:** This PR modifies a comparator that is used in a subsequent call to llvm::stable_sort. Sorting comparators must follow strict weak ordering; in particular, (x < x) should return false. This PR adds a fix to avoid an infinite loop when the inputs to the comparator are equal.
**Details**:
Sometimes, when two equivalent values are passed into the comparator, we encounter infinite looping (at aae2eaae2c/llvm/lib/Transforms/Vectorize/SLPVectorizer.cpp (L4049)).
Although it seems like this comparator will never be called with two equivalent pointers, some sanitizers, e.g. https://chromium.googlesource.com/chromiumos/third_party/gcc/+/refs/heads/stabilize-zako-5712.88.B/libstdc++-v3/include/bits/stl_algo.h#360, add checks for (x < x). When such a sanitizer is used with the current implementation, it triggers a comparator check for (x < x), which runs into the infinite loop.
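For illustration only, a minimal standalone sketch of the kind of guard this patch adds (the Node type, Rank field, and compareNodes name are hypothetical, not from the patch; std::stable_sort stands in for llvm::stable_sort):

```cpp
#include <algorithm>
#include <vector>

struct Node { int Rank; };

// Strict weak ordering requires irreflexivity: (x < x) must be false.
static bool compareNodes(const Node *LHS, const Node *RHS) {
  // Bail out early for equal inputs so a sanitizer's (x < x) self-check
  // cannot walk into the non-terminating comparison loop.
  if (LHS == RHS)
    return false;
  return LHS->Rank < RHS->Rank;
}

int main() {
  Node A{2}, B{1};
  std::vector<Node *> Nodes{&A, &B};
  std::stable_sort(Nodes.begin(), Nodes.end(), compareNodes);
}
```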
Reviewed By: ABataev
Differential Revision: https://reviews.llvm.org/D155874
Need to check for FixedVectorType, not a generic vector type, since later the compiler performs an unconditional cast to FixedVectorType and queries the number of elements in this type.
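For illustration, a hedged sketch of the distinction (the helper name is hypothetical):

```cpp
#include "llvm/IR/DerivedTypes.h"
using namespace llvm;

// isa<VectorType> would also match scalable vectors, which have no fixed
// element count; guarding with FixedVectorType keeps the unconditional
// cast and the getNumElements() call safe.
static unsigned getFixedNumElementsOrZero(Type *Ty) {
  if (auto *FVTy = dyn_cast<FixedVectorType>(Ty))
    return FVTy->getNumElements();
  return 0; // scalable vector or scalar type
}
```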
in getLastInstructionInBundle(), NFC.
Instead of building EntryToLastInstruction before vectorization, build it lazily during the calls to the getLastInstructionInBundle() function.
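For illustration, a hedged sketch of the lazy-build pattern (EntryToLastInstruction is named in the text above; TreeEntryLike and computeLastInstruction() are hypothetical stand-ins):

```cpp
#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

struct TreeEntryLike {};                                           // stand-in
static Instruction *computeLastInstruction(const TreeEntryLike *); // assumed

static DenseMap<const TreeEntryLike *, Instruction *> EntryToLastInstruction;

static Instruction *getLastInstructionInBundle(const TreeEntryLike *E) {
  // Compute the last instruction on first request and cache it, instead
  // of prebuilding the whole map before vectorization starts.
  Instruction *&Res = EntryToLastInstruction[E];
  if (!Res)
    Res = computeLastInstruction(E);
  return Res;
}
```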
Need to account for the reshuffling required for the reused elements in buildvector nodes that are copies (perfect matches) of other nodes but include reused elements.
Differential Revision: https://reviews.llvm.org/D149966
match the size of base node (PR63668).
Need to adjust the assertion check and take into account the case where the original scalars are reused and were extended to match the vector factor of the reused SLP node.
This patch adds support for vectorized reduction of maximum/minimum intrinsics that fall under the appropriate reduction kind.
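For illustration, a hedged sketch of the kind of mapping this enables (RecurKind::FMaximum/FMinimum and the llvm.maximum/llvm.minimum intrinsics exist upstream; this exact helper is illustrative):

```cpp
#include "llvm/Analysis/IVDescriptors.h" // RecurKind
#include "llvm/IR/Intrinsics.h"
#include "llvm/Support/ErrorHandling.h"
using namespace llvm;

// Map a float min/max reduction kind to the intrinsic implementing the
// reduction operation.
static Intrinsic::ID getMinMaxReductionIntrinsic(RecurKind Kind) {
  switch (Kind) {
  case RecurKind::FMaximum:
    return Intrinsic::maximum; // NaN-propagating maximum
  case RecurKind::FMinimum:
    return Intrinsic::minimum; // NaN-propagating minimum
  case RecurKind::FMax:
    return Intrinsic::maxnum;
  case RecurKind::FMin:
    return Intrinsic::minnum;
  default:
    llvm_unreachable("not a supported min/max reduction kind");
  }
}
```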
Differential Revision: https://reviews.llvm.org/D154463
This changes the cost modelling of the vecreduce.min/max nodes to use the costs of the relevant min/max intrinsics instead of expanding them to compares and selects. getMinMaxReductionCost has been changed to take an opcode for the relevant intrinsic, dropping the IsUnsigned and CondTy parameters as they are no longer needed.
A follow-up patch will add some basic fminimum/fmaximum cost modelling.
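For illustration, a hedged sketch of a call under the new interface (the parameter order and the FMF/CostKind arguments are assumed from the description above):

```cpp
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Intrinsics.h"
using namespace llvm;

static InstructionCost costSMinReduction(const TargetTransformInfo &TTI,
                                         FixedVectorType *VecTy) {
  // Post-patch shape: pass the intrinsic ID for the reduction operation
  // instead of the old IsUnsigned flag and CondTy parameter.
  return TTI.getMinMaxReductionCost(Intrinsic::smin, VecTy, FastMathFlags(),
                                    TargetTransformInfo::TCK_RecipThroughput);
}
```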
Differential Revision: https://reviews.llvm.org/D153547
The patch fixes a corner case where none of the scalar instructions required scheduling for the vectorized node.
Differential Revision: https://reviews.llvm.org/D154175
Building on D149889, this patch updates SLP to pass the vector type as
the AccessTy to getGEPCost.
This should have the effect of GEPs being costed for more often instead
of being treated as foldable into the address mode and thus free, as
some architectures, notably RISC-V, do not have offset+reg addressing
modes for vector memory accesses.
Note that in SLP, GEPs are costed in two places: getPointersChainCost
and GetGEPCostDiff.
Reviewed By: ABataev
Differential Revision: https://reviews.llvm.org/D153570
Currently getGEPCost uses the target type of the GEP as a heuristic for
the type that will be accessed, to pass onto isLegalAddressingMode.
Targets use this to work out if a GEP can then be folded into the
load/store instruction that uses the GEP.
For example, on RISC-V loads and stores can have an offset added to a
base register folded into a single instruction, so the following GEP is
free:
%p = getelementptr i32, ptr %base, i32 42 ; getInstructionCost = 0
%x = load i32, ptr %p ; getInstructionCost = 1
------------------------------------------------------------------------
lw t0, 168(a0)
However vector loads and stores cannot have an offset folded into them,
so the following GEP is costed:
%p = getelementptr <2 x i32>, ptr %base, i32 42 ; getInstructionCost = 1
%x = load <2 x i32>, ptr %p ; getInstructionCost = 1
------------------------------------------------------------------------
addi a0, a0, 168
vle32.v v8, (a0)
The issue arises whenever there is a mismatch between the target type of
the GEP and the type that is actually accessed:
%p = getelementptr i32, ptr %base, i32 42 ; getInstructionCost = 0
%x = load <2 x i32>, ptr %p ; getInstructionCost = 1
------------------------------------------------------------------------
addi a0, a0, 168
vle32.v v8, (a0)
Even though this GEP will result in an add instruction, because TTI
thinks it's loading an i32, it will think it can be folded and not
charge for it.
The target type can become mismatched with the memory access during
transformations, noticeably during SLP where a scalar base pointer will
be reused to perform a vector load or store.
This patch adds an optional AccessType argument to getGEPCost which
allows the type of memory accessed by users to be passed in as a hint,
so that we can more accurately determine if the GEP can be folded into
its users.
If AccessType is not provided, getGEPCost falls back to the old
behaviour of using the PointeeType to guess the memory access type. This
can be revisited in a later patch.
Also for now, only GEPs with exactly one user use the access type hint.
Whilst we could look through all users and use all access types to
determine if we can fold the GEP, this patch avoids doing so to prevent
O(N) behaviour.
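For illustration, a hedged sketch of passing the hint (the signature shape follows the description above; the cost kind and defaults are assumed):

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

static InstructionCost costGEPForVectorUser(const TargetTransformInfo &TTI,
                                            GetElementPtrInst *GEP,
                                            Type *VecAccessTy) {
  // Pass the type that the single user actually loads/stores as the
  // AccessType hint, so a scalar-typed GEP feeding a vector load is no
  // longer assumed to fold into the addressing mode for free.
  return TTI.getGEPCost(GEP->getSourceElementType(), GEP->getPointerOperand(),
                        SmallVector<const Value *>(GEP->indices()),
                        /*AccessType=*/VecAccessTy,
                        TargetTransformInfo::TCK_RecipThroughput);
}
```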
Differential Revision: https://reviews.llvm.org/D149889
If the buildvector node is a full match of another node, we need to correctly build the mask for the original vector value and build a common mask for the emitted node.
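For illustration, a hedged sketch of the mask composition involved (mask conventions as in ShuffleVectorInst; the names and the exact composition in the patch may differ):

```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instructions.h" // PoisonMaskElem
using namespace llvm;

// Compose the mask for the original vector value with the reuse mask of
// the buildvector node, yielding one common mask for the emitted shuffle.
// Non-poison entries of ReuseMask are assumed to index into OrigMask.
static SmallVector<int> composeMasks(ArrayRef<int> OrigMask,
                                     ArrayRef<int> ReuseMask) {
  SmallVector<int> CommonMask(ReuseMask.begin(), ReuseMask.end());
  for (int &Idx : CommonMask)
    if (Idx != PoisonMaskElem)
      Idx = OrigMask[Idx];
  return CommonMask;
}
```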
Similar to the other code that costs main/alt instructions, the cmp should be using the VecTy for the costs, not the ScalarTy.
One of the tests looks like it gets worse just because it is not simplified to 0.
Differential Revision: https://reviews.llvm.org/D153507
Partial progress towards removing in-tree uses of `Type::getPointerTo`,
before we can deprecate the API.
If the API is used solely to support an unnecessary bitcast, get rid of
the bitcast as well.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D153933
Added some extra checks to the compareCmp function when IsCompatibility is false, to make it meet the strict weak ordering requirements and be used correctly in sort functions.
ec146cb7c0b4 added two new RecurKinds for float maximum and minimum (along with support in the loop vectorizer).
Enumerate these kinds in SLPVectorizer to fix MLIR Windows build bot errors:
C:\buildbot\mlir-x64-windows-ninja\llvm-project\llvm\lib\Transforms\Vectorize\SLPVectorizer.cpp(13883): error C2220: the following warning is treated as an error
C:\buildbot\mlir-x64-windows-ninja\llvm-project\llvm\lib\Transforms\Vectorize\SLPVectorizer.cpp(13883): warning C4062: enumerator 'llvm::RecurKind::FMinimum' in switch of enum 'llvm::RecurKind' is not handled
We don't want the existence of debug instructions to affect codegen, so we now ignore debug instructions and other assume-like intrinsics (isAssumeLikeIntrinsic()) in the "extend scheduling region" search loop in BoUpSLP::BlockScheduling::extendSchedulingRegion.
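For illustration, a hedged sketch of the skip condition (the helper is hypothetical; the real change lives inside the search loop):

```cpp
#include "llvm/IR/IntrinsicInst.h"
using namespace llvm;

// Debug instructions and assume-like intrinsics have no codegen effect,
// so they must not count towards the scheduling-region size or bounds.
static bool shouldIgnoreInScheduleRegion(const Instruction *I) {
  if (isa<DbgInfoIntrinsic>(I))
    return true;
  if (const auto *II = dyn_cast<IntrinsicInst>(I))
    return II->isAssumeLikeIntrinsic();
  return false;
}
```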
Differential Revision: https://reviews.llvm.org/D152441
There are several issues in the current implementation. The instructions are not properly ordered: if they are placed in different basic blocks, the order of the blocks needs to be reversed. Also, non-vectorizable nodes need to be excluded, and the check should be for CallBase, not CallInst, otherwise invoke calls are not handled correctly.
For a GEP in a pointer chain, if:
1) a pointer chain is unit-strided
2) the base pointer wasn't folded and is sitting in a register somewhere
3) the distance between the GEP and the base pointer is small enough and
can be folded into the addressing mode of the using load/store
Then we can exclude that GEP from the total cost of the pointer chain,
as it will likely be folded away.
In order to check if 3) holds, we need to know the type of memory access
being made by the users of the pointer chain. For that, we need to pass
along a new argument to getPointersChainCost. (Using the source pointer
type of the GEP isn't accurate, see https://reviews.llvm.org/D149889 for
more details).
Also note that 2) is currently an assumption, and could be modelled more
accurately.
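For illustration, a hedged sketch of the updated call (the AccessTy parameter follows the description above; the PointersChainInfo construction and cost kind are assumed):

```cpp
#include "llvm/Analysis/TargetTransformInfo.h"
using namespace llvm;

static InstructionCost costPointerChain(const TargetTransformInfo &TTI,
                                        ArrayRef<const Value *> Ptrs,
                                        const Value *Base, Type *AccessTy) {
  // AccessTy tells the target what the users of the chain load/store, so
  // it can judge which GEPs fold into the addressing mode (condition 3).
  auto Info = TargetTransformInfo::PointersChainInfo::getUnitStride();
  return TTI.getPointersChainCost(Ptrs, Base, Info, AccessTy,
                                  TargetTransformInfo::TCK_RecipThroughput);
}
```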
This prevents some unprofitable cases from being SLP vectorized on
RISC-V by making the scalar costs cheaper and closer to the actual
codegen.
For now the getPointersChainCost hook is duplicated for RISC-V to prevent
disturbing other targets, but could be merged back in and shared with
other targets in a following patch.
Reviewed By: ABataev
Differential Revision: https://reviews.llvm.org/D149654
`tryToVectorizePair()` adds a level of indirection over `tryToVectorizeList()`.
I am not really sure why it is needed; it looks redundant.
I replaced all calls to `tryToVectorizePair()` with calls to
`tryToVectorizeList()` and I am not seeing any failures.
Differential Revision: https://reviews.llvm.org/D151004
IsUniformStride is only used when the stride is a unit stride, i.e. in a plain wide vector load. This patch tightens the condition and renames it to isUnitStride. It also removes the old unused getUniformStrided() variant, as isUnitStride should now imply that the stride is known.
Reviewed By: vdmitrie, ABataev
Differential Revision: https://reviews.llvm.org/D150662
This deprecates `vectorizeSimpleInstructions()` and replaces it with separate
functions that vectorize CmpInsts and Inserts.
Differential Revision: https://reviews.llvm.org/D149993
- Rename `LimitForRegisterSize` to `MaxVFOnly` to make the meaning of the limit less ambiguous.
- Rename `OpsWidth` to `ActualVF`, which makes it clear that this is the VF we are using for vectorization.
- Replace the if-else code for the initialization of `OpsWidth` with a `std::min`, as sketched below.
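For illustration, a minimal sketch of the third point (surrounding names assumed):

```cpp
#include <algorithm>

// The actual VF is simply the candidate count clamped to the maximum VF,
// replacing the old if-else initialization.
static unsigned computeActualVF(unsigned MaxVF, unsigned NumCandidates) {
  return std::min(MaxVF, NumCandidates);
}
```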
Differential Revision: https://reviews.llvm.org/D150241
Introduce processBuildVector as a next step to generalize code for cost
estimation and code emission for gather/buildvector nodes.
Differential Revision: https://reviews.llvm.org/D149973
The pass should not try to revectorize instructions with constant operands that were not folded by the IRBuilder. This prevents a non-terminating loop in the SLP vectorizer for non-foldable constant operations.
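For illustration, a hedged sketch of why the check matters (IRBuilder folds constant operands when it can; the helper is hypothetical):

```cpp
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// If IRBuilder folded the operation, the result is a Constant, not a new
// Instruction; re-queueing a non-folded constant operation for
// vectorization would make no progress and loop forever.
static bool producedNewInstruction(IRBuilder<> &Builder, Value *A, Value *B) {
  Value *V = Builder.CreateAdd(A, B);
  return isa<Instruction>(V);
}
```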
If a vectorizable GEP node is built that should not be scheduled, and at least one node is a non-GEP instruction, the vectorized instructions need to be inserted before the last instruction in the list, not before the first one; otherwise the instructions may be emitted in the wrong order.
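For illustration, a hedged sketch of the insertion-point choice (the bundle walk and names are illustrative; same-block ordering is assumed):

```cpp
#include <iterator>
#include "llvm/ADT/ArrayRef.h"
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// For an unscheduled GEP bundle containing a non-GEP instruction, emit
// the vectorized code after the last bundle member, not the first, so
// every scalar operand dominates the emitted instructions.
static void setInsertPointAfterLast(IRBuilder<> &Builder,
                                    ArrayRef<Instruction *> Bundle) {
  Instruction *LastInst = Bundle.front();
  for (Instruction *I : Bundle.drop_front())
    if (LastInst->comesBefore(I))
      LastInst = I;
  Builder.SetInsertPoint(LastInst->getParent(),
                         std::next(LastInst->getIterator()));
}
```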
This includes a couple of changes:
1. Moves the code that changes the root node out of the `TryToReduce` lambda and out of the traversal loop.
2. Since that code moved, there isn't much left in `TryToReduce` so the code was inlined.
3. The phi node variable `P` was also being used as a flag that turns on/off the exploration of operands as new seeds. This patch uses a new variable `TryOperandsAsNewSeeds` for this.
4. Simplifies the code executed when vectorization fails.
The logic of the code should be identical to the original, but I may be missing something not caught by tests.
Differential Revision: https://reviews.llvm.org/D149627
Added a basic implementation of the ShuffleCostBuilder class in ShuffleCostEstimator and generalized the BaseShuffleAnalysis::createShuffle function to support emission of Value*/InstructionCost for vectorization/cost estimation.
Differential Revision: https://reviews.llvm.org/D149171
This makes it explicit that these functions work with instructions, and avoids
calling them if the operand is not an instruction.
Differential Revision: https://reviews.llvm.org/D149465