When reusing a load in a way that requires coercion (i.e., casts or
bit extraction), we currently fail to adjust metadata. Unfortunately,
none of our existing tooling for this is really suitable, because
combineMetadataForCSE() expects both loads to have the same type.
In this case we may be working with loads of different types and a
possibly offset memory location.
As such, what this patch does is to simply drop all metadata, with
the following exceptions:
* Metadata for which violation is known to always cause UB.
* If the load is !noundef, keep all metadata, as this will turn
poison-generating metadata into UB as well.
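A minimal sketch of this policy (the helper name and the exact set of
UB-implying metadata kinds are assumptions for illustration, not the
patch's actual list):

  #include "llvm/IR/Instructions.h"
  #include "llvm/IR/LLVMContext.h"
  using namespace llvm;

  static void adjustMetadataForCoercedLoad(LoadInst &L) {
    // With !noundef, violating poison-generating metadata is
    // immediate UB as well, so all metadata can be kept.
    if (L.hasMetadata(LLVMContext::MD_noundef))
      return;
    // Otherwise keep only kinds whose violation is always UB and
    // drop the rest.
    L.dropUnknownNonDebugMetadata(
        {LLVMContext::MD_invariant_load, LLVMContext::MD_dereferenceable});
  }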
This fixes the miscompile that was exposed by D146629.
Differential Revision: https://reviews.llvm.org/D148129
GVN load widening was disabled in D24096. This removes various
support code that is no longer relevant.
The way this works nowadays is that we return PartialAlias with
an offset from BasicAA and this gets passed on as a clobber by
MDA. However, PartialAlias will only be returned if the load is
properly nested inside the other load.
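For illustration (an assumed example, not from the patch):

  %v0 = load i64, ptr %p   ; bytes [0, 8)
  %q = getelementptr i8, ptr %p, i64 4
  %v1 = load i32, ptr %q   ; bytes [4, 8), fully nested inside %v0

BasicAA returns PartialAlias with offset 4 for the second load, and
MDA passes %v0 on as a clobber; no offset is attached when the
accesses merely overlap without one being contained in the other.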
This just removes the bulk of the code, but some additional
cleanup can be done here now that we don't need to distinguish
between load and store cases.
Handle the case where a phi's incoming value in a predecessor block is
a select instruction whose own address is unavailable, but addresses
translated from both arms of the select are available.
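For illustration (an assumed example, not taken from the patch):

PredBB:
  %sel = select i1 %cond, ptr %a, ptr %b
  br label %LoadBB
LoadBB:
  %p = phi ptr [ %sel, %PredBB ], [ %q, %OtherBB ]
  %v = load i32, ptr %p

A load from %sel itself is unavailable in PredBB, but if loads from %a
and %b are available (say %va and %vb), the incoming value can still
be materialized as select i1 %cond, i32 %va, i32 %vb.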
Differential Revision: https://reviews.llvm.org/D142705
Don't count debug instructions when limiting the number of checked
instructions. Otherwise, debug information may affect optimization, as
the test case shows.
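A sketch of the counting change (all names here are assumptions for
illustration):

  unsigned NumVisited = 0;
  for (Instruction &I : Insts) {
    // Debug intrinsics no longer count toward the scan limit, so -g
    // builds and non-debug builds make the same decisions.
    if (isa<DbgInfoIntrinsic>(&I))
      continue;
    if (++NumVisited > MaxNumVisitedInsts)
      return false; // scan limit reached
    // ... actual checks on I ...
  }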
Differential Revision: https://reviews.llvm.org/D142787
This patch implements the enhancement proposed by
https://github.com/llvm/llvm-project/issues/59312.
Suppose we have the following code:
v0 = load %addr
br %LoadBB
LoadBB:
v1 = load %addr
...
PredBB:
...
br %cond, label %LoadBB, label %SuccBB
SuccBB:
v2 = load %addr
...
Instruction v1 in LoadBB is partially redundant, and edge (PredBB, LoadBB) is
a critical edge. SuccBB is another successor of PredBB; it contains another
load v2 which is identical to v1. Currently GVN splits the critical edge
(PredBB, LoadBB) and inserts a new load in it. A better method is to move the
load of v2 into PredBB; then v1 can be changed to a PHI instruction.
If there are two or more similar predecessors, as in the test case in the bug
entry, current GVN simply gives up, because otherwise it would need to split
multiple critical edges. But we can move all such loads in the successor
blocks into the predecessors.
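After the transformation (a sketch reusing the names above):

PredBB:
...
v2 = load %addr
br %cond, label %LoadBB, label %SuccBB
LoadBB:
v1 = phi [v0, ...], [v2, %PredBB]
...
SuccBB:
...

No critical edge needs to be split, and the uses in SuccBB simply
refer to the v2 now computed in PredBB.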
Differential Revision: https://reviews.llvm.org/D141712
Improve findDominatingLoad implementation:
1. Result is saved into gvn::AvailableValue struct
2. Search is done in the extended BB (walking up while there is a
   single predecessor, until a limit is reached)
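A rough sketch of the extended-BB walk in item 2 (names assumed for
illustration):

  BasicBlock *BB = Load->getParent();
  for (unsigned Steps = 0; BB && Steps < MaxSearchDepth; ++Steps) {
    // ... scan BB backwards for a dominating load of the pointer ...
    BB = BB->getSinglePredecessor(); // nullptr once control flow merges
  }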
Differential Revision: https://reviews.llvm.org/D141680
This patch extends the Def memory dependency with support for select
instructions, to handle pointer-select conversion consistently.
Differential Revision: https://reviews.llvm.org/D141619
This patch introduces a new type of memory dependency, Select, which is
handled consistently like the Def/Clobber dependencies.
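For illustration (an assumed example): a load through a select of two
pointers now yields a Select dependency rather than being forced into
the Def/Clobber buckets:

  %sel = select i1 %c, ptr %a, ptr %b
  %v = load i32, ptr %sel

GVN can then resolve the dependency by combining the values known at
%a and %b under the select condition.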
Differential Revision: https://reviews.llvm.org/D141619
Simplify AnalyzeLoadAvailability code:
1. Use std::optional for return value
2. Use range-based loop for non-local dependencies
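Roughly, the reworked interface and loop look like this (a sketch;
exact details assumed):

  std::optional<gvn::AvailableValue>
  AnalyzeLoadAvailability(LoadInst *Load, MemDepResult DepInfo,
                          Value *Address);

  for (const NonLocalDepResult &Dep : Deps) {
    // ... analyze one non-local dependency ...
  }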
Differential Revision: https://reviews.llvm.org/D141664
This patch implements the enhancement proposed by
https://github.com/llvm/llvm-project/issues/59312.
Suppose we have the following code:
v0 = load %addr
br %LoadBB
LoadBB:
v1 = load %addr
...
PredBB:
...
br %cond, label %LoadBB, label %SuccBB
SuccBB:
v2 = load %addr
...
Instruction v1 in LoadBB is partially redundant, and edge (PredBB, LoadBB) is
a critical edge. SuccBB is another successor of PredBB; it contains another
load v2 which is identical to v1. Currently GVN splits the critical edge
(PredBB, LoadBB) and inserts a new load in it. A better method is to move the
load of v2 into PredBB; then v1 can be changed to a PHI instruction.
If there are two or more similar predecessors, as in the test case in the bug
entry, current GVN simply gives up, because otherwise it would need to split
multiple critical edges. But we can move all such loads in the successor
blocks into the predecessors.
Differential Revision: https://reviews.llvm.org/D139582
value() has undesired exception checking semantics and calls
__throw_bad_optional_access in libc++. Moreover, the API is unavailable without
_LIBCPP_NO_EXCEPTIONS on older Mach-O platforms (see
_LIBCPP_AVAILABILITY_BAD_OPTIONAL_ACCESS).
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
If PRE is performed as part of the main GVN pass (to PRE GEP
operands before processing loads), and it is performed across a
backedge, we will end up adding the new instruction to the leader
table of a block that has not yet been processed. When it is
processed, GVN will incorrectly assume that the value is already
available, even though it is only available at the end of the
block.
Avoid this by not performing PRE across backedges.
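One way to detect the backedge, using GVN's RPO numbering (a sketch;
the exact check is assumed):

  // An edge P -> CurrentBlock is a backedge if P does not come
  // earlier in the reverse post-order numbering.
  if (BlockRPONumber[P] >= BlockRPONumber[CurrentBlock])
    return false; // don't perform PRE across the backedge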
Fixes https://github.com/llvm/llvm-project/issues/58418.
Differential Revision: https://reviews.llvm.org/D136095
Another alternative to fix the thread identification problem in
coroutines.
We plan to fix this problem by unifying memory effect attributes. See
https://discourse.llvm.org/t/rfc-unify-memory-effect-attributes/65579.
But that may be a long-term project, and it is a pity that coroutines
haven't been able to resume on different threads for years. So this is
a temporary fix. It may cause unnecessary performance regressions for
coroutines, but correctness is more important. This fix is planned to
be reverted once we are actually able to unify the memory effect
attributes.
Reviewed By: jdoerfert, rjmccall
Differential Revision: https://reviews.llvm.org/D135550
After D129205, we support SplitBlockPredecessors() for predecessors
with callbr terminators. This means that it is now also safe to
invoke critical edge splitting for an edge coming from a callbr
terminator. Remove checks in various passes that were protecting
against that.
Differential Revision: https://reviews.llvm.org/D129256
Drop the requirement that getInitialValueOfAllocation() must be
passed an allocator function, shifting the responsibility for
checking that into the function (which it does anyway). The
motivation is to avoid some calls to isAllocationFn(), which has
somewhat ill-defined semantics (given the number of
allocator-related attributes we have floating around...)
(For this function, all we eventually need is an allockind of
zeroed or uninitialized.)
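Roughly, callers can now do the following without a prior
isAllocationFn() check (a usage sketch; details assumed):

  // Returns the initial value -- e.g. zeroinitializer for a zeroed
  // allockind, undef for an uninitialized one -- or nullptr when V is
  // not a recognized allocation call.
  if (Constant *Init = getInitialValueOfAllocation(V, TLI, LoadTy))
    ReplacementValue = Init;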
Differential Revision: https://reviews.llvm.org/D127274
Clang-format InstructionSimplify and convert all "FunctionName"s to
"functionName". This patch touches a lot of files but completes the
cleanup of InstructionSimplify in one commit.
This is the alternative to the less invasive clang-format-only patch,
D126783.
Reviewed By: spatel, rengolin
Differential Revision: https://reviews.llvm.org/D126889
Some cl::ZeroOrMore were added to avoid the `may only occur zero or one
times!` error. More were added due to cargo-culting. Since the error
has been removed, cl::ZeroOrMore is unneeded.
Also remove cl::init(false) while touching the lines.
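A before/after illustration on a hypothetical option:

  static cl::opt<bool> MyFlag("my-flag", cl::init(false), cl::ZeroOrMore);
  static cl::opt<bool> MyFlag("my-flag"); // after this cleanup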
This option was added in D89854. It prevents GVN from performing
load PRE in a loop, if doing so would require critical edge
splitting on the backedge. From the review:
> I know that GVN Load PRE negatively impacts peeling,
> loop predication, so the passes expecting that latch has
> a conditional branch.
In the PhaseOrdering test in this patch, splitting the backedge
negatively affects vectorization: After critical edge splitting,
the loop gets rotated, effectively peeling off the first loop
iteration. The effect is that the first element is handled
separately, then the bulk of the elements use a vectorized
reduction (but using unaligned, off-by-one memory accesses) and
then a tail of 15 elements is handled separately again.
It's probably worth noting that the loop load PRE from D99926 is
not affected by this change (as it does not need backedge
splitting). This is about normal load PRE that happens to occur
inside a loop.
Differential Revision: https://reviews.llvm.org/D126382
The purpose of the custom linked list was to optimize for the case
of a single-element list. It turns out that TinyPtrVector handles
the same basic scenario even better, reducing the size of
LeaderTableEntry by 33%, and requiring only log2(N) allocations
as the size of the list grows. The only downside is that we have
to store the Values and BasicBlocks in separate vectors, which
is slightly awkward in a few cases. Fortunately that ends up being
entirely encapsulated inside helper functions.
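A sketch of the reworked entry (field names assumed for illustration):

  struct LeaderTableEntry {
    TinyPtrVector<Value *> Vals;     // leader values for one value number
    TinyPtrVector<BasicBlock *> BBs; // parallel list: defining blocks
  };

TinyPtrVector stores a single element inline and only allocates once a
second element is added, which is what keeps the common one-leader
case cheap.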
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D125205
When using opaque pointers, convert GEPs into offset representation
of the form P + V1 * Scale1 + V2 * Scale2 + ... + ConstantOffset.
This allows us to recognize equivalent address calculations even if
the GEPs don't use the same source element type.
This fixes an opaque pointer codegen regression seen in rustc.
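For example (an assumed illustration): with opaque pointers,

  %a = getelementptr i32, ptr %p, i64 %i
  %b = getelementptr [2 x i16], ptr %p, i64 %i

both decompose to %p + %i * 4 + 0, so loads from %a and %b are now
recognized as loads from the same address despite the different source
element types.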
Differential Revision: https://reviews.llvm.org/D124527
Add void casts to mark the variables used, next to the places where
they are used in assert or `LLVM_DEBUG()` expressions.
Differential Revision: https://reviews.llvm.org/D123117