DSE uses BatchAA, which caches queries using pairs of MemoryLocations.
At the moment, DSE may remove instructions that are used as pointers in
cached MemoryLocations. If a new instruction used by a new MemoryLocation
gets allocated at the same address as a previously cached and since-removed
instruction, we may access an incorrect entry in the cache.
To avoid this, delay removing all instructions except MemoryDefs until
the end of DSE. This should avoid removing any values used in BatchAA's
cache.
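A minimal sketch of the deferred-removal pattern, with hypothetical names
and state (not the literal patch):
```
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

struct DeferredRemovalSketch {
  // Filled during the DSE walk instead of erasing eagerly.
  SmallVector<Instruction *, 32> ToRemove;

  void markForRemoval(Instruction *DeadInst) {
    ToRemove.push_back(DeadInst); // defer the erase; the address stays live
  }

  void eraseQueued() {
    // Called once at the end of DSE, after the last BatchAA query, so no
    // cached MemoryLocation can point at freed (and possibly reused) memory.
    for (Instruction *I : ToRemove)
      I->eraseFromParent();
    ToRemove.clear();
  }
};
```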
Test case by @vporpo from
https://github.com/llvm/llvm-project/pull/83181.
(Test not precommitted because the results are non-deterministic - memset
only sometimes gets removed)
PR: https://github.com/llvm/llvm-project/pull/83411
If a store is dominated by a condition that ensures that the value being
stored in a memory location is already present at that memory location,
consider the store a noop.
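A rough sketch of the idea under simplifying assumptions; it omits checks a
real implementation needs (e.g. that nothing clobbers the location between
the load and the store), and the helper name is hypothetical:
```
#include "llvm/IR/CFG.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// 'store %v, %p' is a noop if it is only reached via the edge taken when a
// prior 'load %p' compared equal to %v.
static bool isGuardedNoopStore(StoreInst *SI, DominatorTree &DT) {
  for (BasicBlock *Pred : predecessors(SI->getParent())) {
    auto *Br = dyn_cast<BranchInst>(Pred->getTerminator());
    if (!Br || !Br->isConditional())
      continue;
    auto *Cmp = dyn_cast<ICmpInst>(Br->getCondition());
    if (!Cmp || Cmp->getPredicate() != ICmpInst::ICMP_EQ)
      continue;
    auto *Ld = dyn_cast<LoadInst>(Cmp->getOperand(0));
    if (!Ld || Ld->getPointerOperand() != SI->getPointerOperand() ||
        Cmp->getOperand(1) != SI->getValueOperand())
      continue;
    // The store must only execute on the "equal" edge.
    if (DT.dominates(BasicBlockEdge(Pred, Br->getSuccessor(0)),
                     SI->getParent()))
      return true;
  }
  return false;
}
```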
Fixes #63419
The use of SmallSetVector saves 0.58% of heap allocations during the
compilation of a large preprocessed file, namely X86ISelLowering.cpp,
for the X86 target. During the experiment, the final size of ToCheck
was 8 elements or fewer 88% of the time.
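For illustration, a hedged sketch of the data-structure choice (the element
type is assumed):
```
#include "llvm/ADT/SetVector.h"
#include "llvm/IR/Value.h"
using namespace llvm;

// SmallSetVector keeps up to N elements inline before touching the heap,
// while preserving set semantics and deterministic iteration order. With
// the final size at 8 or fewer 88% of the time, N = 8 makes the common
// case allocation-free.
SmallSetVector<Value *, 8> ToCheck;
```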
This patch trivially updates various opt passes to handle DPVAssigns. In
all cases, this means some combination of generifying existing code to
handle DPValues and DbgAssignIntrinsics, iterating over DPValues where
previously we did not, or duplicating code for DbgAssignIntrinsics to
the equivalent DPValue function (in inlining and salvageDebugInfo).
> We are re-using tryToMergePartialOverlappingStores, which requires
> DeadSI to dominate DeadSI.
Should be "DeadSI to dominate KillingSI" because that's what the check
is for.
It seems TypeSize is currently broken in the sense that:
TypeSize::Fixed(4) + TypeSize::Scalable(4) => TypeSize::Fixed(8)
without failing its assert that explicitly tests for this case:
assert(LHS.Scalable == RHS.Scalable && ...);
The reason this fails is that `Scalable` is a static method of class
TypeSize, and LHS and RHS are both objects of class TypeSize. So the
assert is evaluating whether the pointer to the function Scalable ==
the pointer to the function Scalable, which is always true because LHS
and RHS have the same class.
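A standalone repro of the pitfall using a toy class (not the actual
TypeSize):
```
#include <cassert>

struct Size {
  static Size Scalable(unsigned MinVal) { return Size(); } // static factory
};

int main() {
  Size LHS, RHS;
  // Intended to compare per-object state, but 'Scalable' names the static
  // method: both sides decay to the same function pointer, so the
  // comparison is always true and the assert can never fire.
  assert(LHS.Scalable == RHS.Scalable);
}
```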
This patch fixes the issue by renaming `TypeSize::Scalable` ->
`TypeSize::getScalable` and `TypeSize::Fixed` -> `TypeSize::getFixed`,
so that they no longer clash with the variable in
FixedOrScalableQuantity.
The new methods now also better match the coding standard, which
specifies that:
* Variable names should be nouns (as they represent state)
* Function names should be verb phrases (as they represent actions)
Unfortunately the commit (D123162) introduced a mis-compile
(https://github.com/llvm/llvm-project/issues/70547), which wasn't fixed
by the alternative fix (c0de28b92e98acbeb73).
I think that as long as the call considered ephemeral is not removed, we
need to be conservative. To address the correctness issue quickly, I
think we should revert the patch (as this patch does; it doesn't revert
cleanly).
This reverts commit 17fdaccccfad9b143e4aadbcdda7f645de127153.
Fixes https://github.com/llvm/llvm-project/issues/70547
This is the first of a series of patches to improve Alias Analysis on
scalable quantities.
Keep the Scalable information from TypeSize, which will be used in
Alias Analysis.
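A small illustration of the direction, assuming the LocationSize API after
this series:
```
#include "llvm/Analysis/MemoryLocation.h"
#include "llvm/Support/TypeSize.h"
using namespace llvm;

// Instead of dropping the scalable flag, a MemoryLocation's size can carry
// it: "precisely vscale x 16 bytes" for a scalable vector access.
LocationSize ScalableSize = LocationSize::precise(TypeSize::getScalable(16));
```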
This is supposed to be a fast heuristic primarily interested in
allocas. We should not call it for non-root objects where object
size needs to be determined recursively.
Given the transition to opaque pointers we no longer need to emit
some pointer casts. Int8PtrTy was set up to be a ptr in the same address
space as OrigDest, making the first CreatePointerCast dead. And then
NewDestGEP will end up having the same type as OrigDest, making the
second CreatePointerCast dead.
New memory accesses are usually inserted by using one of the
createMemoryAccessXYZ() methods followed by insertUse() or
insertDef(). createMemoryAccessXYZ() accepts a defining access,
however this defining access will always be overwritten by
insertUse() / insertDef().
Update the documentation to clarify this, and stop passing
Definition to createMemoryAccessXYZ() if it's followed by
insertUse/insertDef.
Alternatively, we could also make insertUse / insertDef keep the
defining access if it is specified, and only recompute it if it's
missing.
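A hedged sketch of the clarified contract, assuming an existing updater
MSSAU, a new instruction NewSI, and an insertion point InsertPt:
```
#include "llvm/Analysis/MemorySSAUpdater.h"
using namespace llvm;

void insertNewDef(MemorySSAUpdater &MSSAU, Instruction *NewSI,
                  MemoryUseOrDef *InsertPt) {
  // Whatever defining access is passed here would be overwritten anyway,
  // so pass nullptr and let insertDef() compute it.
  MemoryUseOrDef *NewAccess =
      MSSAU.createMemoryAccessBefore(NewSI, /*Definition=*/nullptr, InsertPt);
  MSSAU.insertDef(cast<MemoryDef>(NewAccess), /*RenameUses=*/true);
}
```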
Differential Revision: https://reviews.llvm.org/D157979
`shortenAssignment` inserts dbg.assigns with fragments describing the dead part
of a shortened store after each dbg.assign linked to the store.
Without this patch it doesn't take into account that the dead part of a
shortened store may be outside the bounds of a variable of a linked
dbg.assign. It also doesn't correctly account for a non-zero offset in the
address modifying `DIExpression` of the dbg.assign (which is possible for
fragments now even though whole variables currently cannot have a non-zero
offset in their alloca).
Fix this by moving the dead slice into variable-space and performing an
intersect of that adjusted slice with the existing fragment.
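In rough terms, a hedged sketch of that adjustment (the names and the sign
convention of the offset are assumptions):
```
#include <algorithm>
#include <cstdint>
#include <optional>
#include <utility>

// The dead slice [DeadStart, DeadEnd), in bits relative to the store's base
// address, is shifted by the dbg.assign's address offset into variable
// space and then clamped to the linked fragment [FragStart, FragEnd).
std::optional<std::pair<int64_t, int64_t>>
deadPartOfFragment(int64_t DeadStart, int64_t DeadEnd, int64_t AddrOffsetBits,
                   int64_t FragStart, int64_t FragEnd) {
  int64_t VarStart = DeadStart - AddrOffsetBits;
  int64_t VarEnd = DeadEnd - AddrOffsetBits;
  int64_t NewStart = std::max(VarStart, FragStart);
  int64_t NewEnd = std::min(VarEnd, FragEnd);
  if (NewStart >= NewEnd)
    return std::nullopt; // the dead part lies entirely outside the fragment
  return std::make_pair(NewStart, NewEnd);
}
```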
This fixes a verifier error reported when building fuchsia with assignment
tracking enabled:
https://ci.chromium.org/ui/p/fuchsia/builders/ci/clang_toolchain.ci.core.x64-release/b8784000953022145169/overview
Reviewed By: jmorse
Differential Revision: https://reviews.llvm.org/D148536
DFAJumpThreading
JumpThreading
LibCallsShrink
LoopVectorize
SLPVectorizer
DeadStoreElimination
AggressiveDCE
CorrelatedValuePropagation
IndVarSimplify
These passes are part of the optimization pipeline, whose legacy version
is deprecated and being removed.
Unlike D140903, this patch folds in treating an empty metadata address component
of a dbg.assign the same as undef because it was already being treated that way
in the AssignmentTrackingAnalysis pass.
Reviewed By: scott.linder
Differential Revision: https://reviews.llvm.org/D141125
This is not NFC because the DSE BatchAA is more powerful than the
default one due to EarliestEscape CaptureInfo, so this might
improve results in some cases.
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
memset_chk may not write the number of bytes specified by the third
argument if it is larger than the destination size (specified as the
4th argument).
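For illustration, a hedged sketch of the fortified-call semantics
(glibc-style prototype):
```
#include <cstddef>

// glibc-style prototype: the 4th argument is the known destination size.
extern "C" void *__memset_chk(void *Dest, int Ch, size_t Len, size_t DestLen);

void fill(char *Buf, size_t N) {
  // Behaves like memset when Len <= DestLen; aborts without writing when
  // Len > DestLen. So DSE must not assume all Len bytes are written.
  __memset_chk(Buf, 0, N, 16);
}
```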
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D115167
The Assignment Tracking debug-info feature is outlined in this RFC:
https://discourse.llvm.org/t/rfc-assignment-tracking-a-better-way-of-specifying-variable-locations-in-ir
DeadStoreElimination shortens stores that are shadowed by later stores such
that the overlapping part of the earlier store is omitted. Insert an unlinked
dbg.assign intrinsic with a variable fragment that describes the omitted part
to signal that that fragment of the variable has a stale value in memory.
Reviewed By: jmorse
Differential Revision: https://reviews.llvm.org/D133315
If the location ptr to be killed is not in any loop and the Function does
not have irreducible loops, then we can regard it as loop invariant.
Differential Revision: https://reviews.llvm.org/D135369
Stop assuming that an 'int' is 32 bits in helpers that emit libcalls
to lib functions that had 'int' in the signature. For most targets
this is NFC. For a target with a 16-bit 'int' type this could help
detect attempts to emit a libcall with an incorrect signature.
Similarly we now derive the type mapping to 'size_t' by asking TLI
about the size of 'size_t'. This should be NFC (at least for in-tree
targets) since getSizeTSize(), in TLI, is deriving the size in the
same way as DataLayout::getIntPtrType().
Differential Revision: https://reviews.llvm.org/D135065
For a noop store of the form of LoadI and StoreI,
an invariant that should be maintained is that the memory state of the
related MemoryLoc before LoadI is the same as before StoreI.
For this example:
```
define void @pr49927(i32* %q, i32* %p) {
%v = load i32, i32* %p, align 4
store i32 %v, i32* %q, align 4
store i32 %v, i32* %p, align 4
ret void
}
```
Here the definition of the store's destination is different from the
definition of the load's destination, which makes it seem that the
invariant mentioned above is broken. But the definition of the store's
destination writes a value that is exactly LoadI, so the invariant is
actually still kept and we can safely ignore it.
Fixes https://github.com/llvm/llvm-project/issues/49271
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D132657
The type information of the store values can diverge when checking for valid
mask store candidates to eliminate via DSE. This patch checks for
equivalence with respect to size and element count.
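A hedged sketch of such an equivalence check (not the literal patch code):
```
#include "llvm/IR/DerivedTypes.h"
using namespace llvm;

// The two stored values may have diverging vector types; treat them as
// compatible only when both the total size and the element count match.
static bool haveCompatibleStoreTypes(Value *KillingVal, Value *DeadVal) {
  auto *KillingTy = cast<VectorType>(KillingVal->getType());
  auto *DeadTy = cast<VectorType>(DeadVal->getType());
  return KillingTy->getPrimitiveSizeInBits() ==
             DeadTy->getPrimitiveSizeInBits() &&
         KillingTy->getElementCount() == DeadTy->getElementCount();
}
```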
Reviewed By: fhahn, rui.zhang
Differential Revision: https://reviews.llvm.org/D132700
This reverts commit cd8f3e75813995c1d2da35370ffcf5af3aff9c2f.
As pointed out by Eli on the review, this is missing an alignment
check. The value might be written at an offset.
For a noop store of the form of LoadI and StoreI,
an invariant that should be maintained is that the memory state of the
related MemoryLoc before LoadI is the same as before StoreI.
For this example:
```
define void @pr49927(i32* %q, i32* %p) {
%v = load i32, i32* %p, align 4
store i32 %v, i32* %q, align 4
store i32 %v, i32* %p, align 4
ret void
}
```
Here the definition of the store's destination is different from the
definition of the load's destination, which makes it seem that the
invariant mentioned above is broken. But the definition of the store's
destination writes a value that is exactly LoadI, so the invariant is
actually still kept and we can safely ignore it.
Differential Revision: https://reviews.llvm.org/D132657
Remove isFreeCall() in favor of getFreedOperand(). Replace the
two remaining uses with a getFreedOperand() != nullptr check, as
they only care that something is getting freed. (The usage in DSE
is correct as such. The allocator-related checks in CFLGraph look
rather questionable in general.)
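A hedged sketch of the replacement pattern:
```
#include "llvm/Analysis/MemoryBuiltins.h"
using namespace llvm;

void handleCall(const CallBase *CB, const TargetLibraryInfo *TLI) {
  // Old: if (isFreeCall(CB, TLI)) ...
  // New: we only care that *something* is freed:
  if (getFreedOperand(CB, TLI) != nullptr) {
    // CB frees one of its arguments.
  }
}
```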
We currently assume in a number of places that free-like functions
free their first argument. This is true for all hardcoded free-like
functions, but with the new attribute-based design, the freed
argument is supposed to be indicated by the allocptr attribute.
To make sure we handle this correctly once allockind(free) is
respected, add a getFreedOperand() helper which returns the freed
argument, rather than just indicating whether the call frees *some*
argument.
This migrates most but not all users of isFreeCall() to the new
API. The remaining users are a bit more tricky.
Drop the requirement that getInitialValueOfAllocation() must be
passed an allocator function, shifting the responsibility for
checking that into the function (which it does anyway). The
motivation is to avoid some calls to isAllocationFn(), which has
somewhat ill-defined semantics (given the number of
allocator-related attributes we have floating around...)
(For this function, all we eventually need is an allockind of
zeroed or uninitialized.)
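A hedged sketch of a call site after this change:
```
#include "llvm/Analysis/MemoryBuiltins.h"
using namespace llvm;

void inspectAllocation(const CallBase *CB, const TargetLibraryInfo *TLI,
                       Type *Ty) {
  // No isAllocationFn() pre-check needed; the helper returns nullptr itself
  // when CB is not a recognized allocator function.
  if (Constant *InitVal = getInitialValueOfAllocation(CB, TLI, Ty)) {
    (void)InitVal; // the fresh allocation starts out as InitVal (e.g. zero)
  }
}
```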
Differential Revision: https://reviews.llvm.org/D127274
For non-mem-intrinsic and non-lifetime `CallBase`s, the current
`isRemovable` function only checks if the `CallBase` 1. has no uses 2.
will return 3. does not throw:
80fb782336/llvm/lib/Transforms/Scalar/DeadStoreElimination.cpp (L1017)
But we should also exclude invokes even if they don't throw, because
they are terminators and thus cannot be removed. While it doesn't seem
to make much sense for `invoke`s to have a `nounwind` target, this kind
of code can be generated and is also valid bitcode.
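A hedged sketch of the extra condition (not the literal diff):
```
#include "llvm/IR/Instructions.h"
using namespace llvm;

static bool isRemovableSketch(const CallBase &CB) {
  // An invoke is a block terminator: even when it cannot throw (nounwind),
  // erasing it would leave its block without a terminator and break the CFG.
  if (isa<InvokeInst>(&CB))
    return false;
  // The existing checks: no uses, must return, does not throw.
  return CB.use_empty() && CB.willReturn() && CB.doesNotThrow();
}
```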
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D128224
And thread DSE's ephemeral values to EarliestEscapeInfo.
This allows more precise analysis in DSEState::isReadClobber() via BatchAA.
Followup to D123162.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D123342
This changes MemorySSA to be constructed in unoptimized form.
MemorySSA::ensureOptimizedUses() can be called to optimize all
uses (once). This should be done by passes where having optimized
uses is beneficial, either because we're going to query all uses
anyway, or because we're doing def-use walks.
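A hedged sketch of the new usage, assuming a pass that already has a
MemorySSA result:
```
#include "llvm/Analysis/MemorySSA.h"
using namespace llvm;

void prepare(MemorySSA &MSSA) {
  // Construction no longer optimizes uses; a pass that benefits from them
  // (e.g. one doing def-use walks) requests the optimization once up front.
  MSSA.ensureOptimizedUses(); // subsequent calls are no-ops
}
```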
This should help reduce the compile-time impact of MemorySSA for
some use cases (the reason why I started looking into this is
D117926), which can avoid optimizing all uses upfront, and instead
only optimize those that are actually queried.
Actually, we have an existing use-case for this, which is EarlyCSE.
Disabling eager use optimization there gives a significant
compile-time improvement, because EarlyCSE will generally only query
clobbers for a subset of all uses (this change is not included in
this patch).
Differential Revision: https://reviews.llvm.org/D121381