Follow-up to #79476 - that patch added a call to hoistLockstepIdenticalDPValues
which hoists identical DPValues in lockstep, matching dbg intrinsic hoisting
behaviour. The code deleted in this patch, which unconditionally hoists
DPValues, should have been deleted in that patch.
Update test with --try-experimental-debuginfo-iterators to check the behaviour.
Follow-up to #79476 - that change introduced a call to
hoistLockstepIdenticalDPValues.
Hoist the DPValues attached to each instruction being considered for hoisting
if they are identical in lock-step. This includes the final instructions that
are considered but not hoisted, because the corresponding dbg.values would
appear before those instructions and would thus be hoisted if identical.
Identical debug records hoisted:
llvm/test/Transforms/SimplifyCFG/hoist-dbgvalue.ll
Non-identical debug records not hoisted:
llvm/test/Transforms/SimplifyCFG/X86/pr39187-g.ll
Debug records attached to first not-hoisted instructions are hoisted:
llvm/test/Transforms/SimplifyCFG/hoist-dbgvalue-inlined.ll
This relands commit f890f010f6a70addbd885acd0c8d1b9578b6246f.
The result value of `getelementptr inbounds (TY, null, not zero)` is a poison value.
We can think of it as undefined behavior.
This patch trivially updates various opt passes to handle DPVAssigns. In
all cases, this means some combination of generifying existing code to
handle DPValues and DbgAssignIntrinsics, iterating over DPValues where
previously we did not, or duplicating code for DbgAssignIntrinsics to
the equivalent DPValue function (in inlining and salvageDebugInfo).
https://github.com/llvm/llvm-project/pull/76669 taught SimplifyCFG to
handle switches when `default` has only one case. When the `switch`'s
condition is wider than 64 bits, the current implementation can calculate
the wrong default value. This PR skips cases where the condition is too
wide.
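A minimal sketch of the kind of guard this implies (the helper name and shape are assumptions for illustration, not code taken from the patch):
```
// Hypothetical helper: the default-value computation works on 64-bit
// arithmetic, so bail out when the switch condition is wider than 64 bits.
#include "llvm/IR/Instructions.h"

static bool condFitsDefaultValueComputation(const llvm::SwitchInst &SI) {
  llvm::Type *CondTy = SI.getCondition()->getType();
  return CondTy->getScalarSizeInBits() <= 64;
}
```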
The specialisation will not be valid when ConstantInt gains native
support for vector types.
This is largely a mechanical change but with extra attention paid to constant
folding, InstCombineVectorOps.cpp, LoopFlatten.cpp and Verifier.cpp to
remove the need to call `getIntegerType()`.
Co-authored-by: Nikita Popov <github@npopov.com>
This patch adds support for CloneBasicBlock duplicating the DPValues
attached to instructions, and adds facilities to remap them into their new
context. The plumbing to achieve this is fairly straightforward and
mechanical.
I've also added illustrative uses to LoopUnrollRuntime, SimpleLoopUnswitch
and SimplifyCFG. The former only updates for the epilogue right now so I've
added CHECK lines just for the end of an unrolled loop (further updates
coming later). SimpleLoopUnswitch had no debug-info tests so I've added a
new one. The two modified parts of SimplifyCFG are covered by the two
modified SimplifyCFG tests.
These are scenarios where we have to do extra cloning to copy the DPValues,
because they're no longer instructions, and to remap them too.
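For a rough picture of such a call site, here is a hedged sketch (the function and names are illustrative, not code from the patch). With this change, CloneBasicBlock also duplicates the DPValues attached to the cloned instructions, and the new remapping facilities (not shown here) update them alongside the instruction operands:
```
#include "llvm/IR/BasicBlock.h"
#include "llvm/Transforms/Utils/Cloning.h"
#include "llvm/Transforms/Utils/ValueMapper.h"
using namespace llvm;

// Clone BB into F and remap the clone's operands into the new context.
static BasicBlock *cloneAndRemap(BasicBlock *BB, Function *F) {
  ValueToValueMapTy VMap;
  BasicBlock *NewBB = CloneBasicBlock(BB, VMap, ".clone", F);
  for (Instruction &I : *NewBB)
    RemapInstruction(&I, VMap,
                     RF_NoModuleLevelChanges | RF_IgnoreMissingLocals);
  return NewBB;
}
```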
The code in the CloneInstructionsIntoPredec... function modified by this
patch has a long history that dates back to 2011, see d715ec82b4ad12c59.
There, when folding branches, all dbg.value intrinsics seen when folding
would be saved and then re-inserted at the end of whatever was folded. Over
the last 12 years this behaviour has been preserved.
However, IMO it's bad behaviour. If we have:
  inst1
  dbg.value1
  inst2
  dbg.value2
And we fold that sequence into a different block, then we would want the
instructions and variable assignments to appear in the same order. However,
because of this old behaviour, the dbg.values are sunk, and we get:
  inst1
  inst2
  dbg.value1
  dbg.value2
This clustering of dbg.values can make assignments to the same variable
invisible, as well as reducing the coverage of other assignments.
This patch relaxes the CloneInstructions... function and allows it to clone
and update dbg.values in-place, causing them to appear in the original
order in the destination block. I've added some extra dbg.values to the
updated test: without the changes to the pass, the dbg.values sink into a
blob ahead of the select. The RemoveDIs code can't cope with this right now,
so I've removed the "--try..." flag; it will be restored in a commit to land
in a couple of hours.
(Metadata changes were needed so that the LLVM-IR parser does not drop the
debug-info for being out of date. The RemoveDIs-related RUN line has been
removed because it was spuriously passing due to the debug-info being dropped.)
In present-day debug-info, when you delete all instructions, you delete
all their debug-info with it because debug-info is stored in
instructions. With debug-info stored in DPValue objects however,
deleting instructions causes DPValue objects to clump together into a
large blob of debug-info that hangs around in the block, as nothing has
explicitly deleted it.
To restore this behaviour, scatter calls to dropDbgValues around in
places that used to delete chunks of dbg.values, for example during
stripDebugInfo and in the code that deletes everything after an
Unreachable instruction. DCE is another example.
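For illustration, here is a hedged sketch of what one of those deletion sites looks like afterwards (the surrounding function is assumed; dropDbgValues is the helper named above, and treating it as an Instruction method here is an assumption):
```
#include "llvm/ADT/STLExtras.h"
#include "llvm/IR/BasicBlock.h"
using namespace llvm;

// Erase everything from From to the end of the block, explicitly dropping the
// DPValues attached to each deleted instruction so that they don't accumulate
// on whatever instruction remains.
static void eraseTail(BasicBlock &BB, BasicBlock::iterator From) {
  for (Instruction &I : make_early_inc_range(make_range(From, BB.end()))) {
    I.dropDbgValues(); // assumed placement of the helper named above
    I.eraseFromParent();
  }
}
```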
The tests with --try... added to them are new scenarios where we can now
correctly replicate the "normal" debug-info behaviour. Alas, there's no
explicit test for the opt -strip-debug option though (in dbg.value mode
or DPValue mode).
This optimization reduces the case range for switches whose cases are positive
powers of two by replacing each case value with count_trailing_zero(case).
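A self-contained C++ illustration of the idea at the source level (this is not the pass itself, just the intuition behind the transform):
```
#include <bit>

// Before: the cases span a wide range of power-of-two values.
int before(unsigned x) {
  switch (x) {
  case 1:   return 10;
  case 2:   return 20;
  case 4:   return 30;
  case 128: return 40;
  default:  return -1;
  }
}

// After: switch on the trailing-zero count, shrinking the case range to 0..7.
int after(unsigned x) {
  if (!std::has_single_bit(x)) // values that match no case keep hitting default
    return -1;
  switch (std::countr_zero(x)) {
  case 0: return 10;
  case 1: return 20;
  case 2: return 30;
  case 7: return 40;
  default: return -1;
  }
}
```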
Resolves #70756
The AMDGPU target has run into a situation that can be illustrated with the
following testcase:
define void @dont_merge_cbranches(i32 %V) {
  %divergent_cond = icmp ne i32 %V, 0
  %uniform_cond = call i1 @uniform_result(i1 %divergent_cond)
  br i1 %uniform_cond, label %bb2, label %exit, !prof !0
bb2:
  br i1 %divergent_cond, label %bb3, label %exit
bb3:
  call void @bar()
  br label %exit
exit:
  ret void
}
!0 = !{!"branch_weights", i32 1, i32 100000}
SimplifyCFG merges the branches on %uniform_cond and %divergent_cond, which is undesirable because the first branch to bb2 is taken extremely rarely while the second branch is expensive; the merged branch becomes as expensive as the second.
This patch prevents such merging if the branch leading to the second branch is unlikely to be taken.
Fix the crash from the previously landed PR70542.
Note:
For '%add = add nuw i32 %x, 1', we can only infer that the LowerBound is 1,
but the UpperBound wraps around to 0 in computeConstantRange,
so we can't assume the UpperBound is a valid bound when its value is 0.
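The note can be reproduced with llvm::ConstantRange (an illustrative sketch, not code from the patch):
```
#include "llvm/ADT/APInt.h"
#include "llvm/IR/ConstantRange.h"
#include "llvm/IR/Operator.h"
#include "llvm/Support/raw_ostream.h"
using namespace llvm;

int main() {
  // %x is unconstrained: the full 32-bit range.
  ConstantRange X = ConstantRange::getFull(32);
  // %add = add nuw i32 %x, 1 -> [1,0): the lower bound is 1, but the
  // exclusive upper bound has wrapped around to 0.
  ConstantRange Add = X.addWithNoWrap(
      ConstantRange(APInt(32, 1)), OverflowingBinaryOperator::NoUnsignedWrap);
  Add.print(outs()); // prints [1,0)
  outs() << "\n";
  return 0;
}
```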
Fix https://github.com/llvm/llvm-project/issues/71329.
Reviewed By: zmodem, nikic
When the mask value is smaller than 64, we can eliminate the check against the
upper limit of the range by enlarging the lookup table to cover the maximum
index value (the final table size then grows to the next power-of-2 value).
```
bool f(unsigned x) {
  switch (x % 8) {
  case 0: return 1;
  case 1: return 0;
  case 2: return 0;
  case 3: return 1;
  case 4: return 1;
  case 5: return 0;
  case 6: return 1;
  // This would remove the range check: case 7: return 0;
  }
  return 0;
}
```
Use WouldFitInRegister instead of fitsInLegalInteger to support
more result types besides bool.
Fixes https://github.com/llvm/llvm-project/issues/65120
Reviewed By: zmodem, nikic, RKSimon
C++20 comes with std::erase to erase a value from std::vector. This
patch renames llvm::erase_value to llvm::erase for consistency with
C++20.
We could make llvm::erase more similar to std::erase by having it
return the number of elements removed, but I'm not doing that for now
because nobody seems to care about that in our code base.
Since there are only 50 occurrences of erase_value in our code base,
this patch replaces all of them with llvm::erase and deprecates
llvm::erase_value.
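A minimal usage sketch of the rename (the container and value are only for illustration):
```
#include "llvm/ADT/STLExtras.h"
#include <vector>

void removeZeros(std::vector<int> &V) {
  // Before this patch: llvm::erase_value(V, 0);
  llvm::erase(V, 0); // removes every element equal to 0
}
```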
This reverts commit 96ea48ff5dcba46af350f5300eafd7f7394ba606.
The change may cause a Verifier.cpp error:
"musttail call must precede a ret with an optional bitcast"
As per the stack of patches this is attached to, allow users of
BasicBlock::splitBasicBlock to provide an iterator for a position, instead
of just an instruction pointer. This is to fit with my proposal for how to
get rid of debug intrinsics [0]. There are other call-sites that would need
to change, but this is sufficient for a stage2 clang self-host and some
other C++ projects to build identical binaries, in the context of the whole
remove-DIs project.
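For illustration, a hedged sketch of a converted call site (the surrounding function and names are assumptions, not code from the patch):
```
#include "llvm/ADT/Twine.h"
#include "llvm/IR/BasicBlock.h"
using namespace llvm;

// Split BB at the position given by the iterator SplitPt: everything from
// SplitPt to the end moves into the returned block, and BB ends with an
// unconditional branch to it.
static BasicBlock *splitAt(BasicBlock *BB, BasicBlock::iterator SplitPt) {
  // Previously callers passed an Instruction*: BB->splitBasicBlock(&*SplitPt);
  return BB->splitBasicBlock(SplitPt, "split.tail");
}
```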
[0] https://discourse.llvm.org/t/rfc-instruction-api-changes-needed-to-eliminate-debug-intrinsics-from-ir/68939
Differential Revision: https://reviews.llvm.org/D152545
Continuing the patch series to get rid of debug intrinsics [0], instruction
insertion needs to be done with iterators rather than instruction pointers,
so that we can communicate information in the iterator class. This patch
adds an iterator-taking insertBefore method and converts various call sites
to take iterators. These are all sites where such debug-info needs to be
preserved so that a stage2 clang can be built identically; it's likely that
many more will need to be changed in the future.
At this stage, this is just changing the spelling of a few operations,
which will eventually become significant once the debug-info-bearing
iterator is used.
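A hedged sketch of the change in spelling at a typical call site (the call site itself is hypothetical):
```
#include "llvm/IR/BasicBlock.h"
using namespace llvm;

// Insert New at the start of BB's non-PHI, non-EH-pad instructions.
static void insertAtFront(Instruction *New, BasicBlock &BB) {
  // Old spelling: New->insertBefore(&*BB.getFirstInsertionPt());
  // New spelling: pass the iterator, so position information can travel in it.
  New->insertBefore(BB, BB.getFirstInsertionPt());
}
```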
[0] https://discourse.llvm.org/t/rfc-instruction-api-changes-needed-to-eliminate-debug-intrinsics-from-ir/68939
Differential Revision: https://reviews.llvm.org/D152537
As outlined in my proposal of how to get rid of debug intrinsics, this
patch adds a moveBefore method that signals that the caller /intends/ the order
of the moved instructions to stay the same. This semantic difference has an
effect on debug-info, as it signals whether debug-info needs to move with
instructions or not.
The patch just replaces a few calls to moveBefore with calls to
moveBeforePreserving -- and the latter just calls the former, so it's all
NFC right now. A future patch will add an implementation of
moveBeforePreserving that takes action to correctly preserve debug-info,
but that's tightly coupled with our non-instruction debug-info
representation that's still being reviewed.
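A minimal sketch of the substitution at a call site (the call site itself is hypothetical):
```
#include "llvm/IR/Instruction.h"
using namespace llvm;

static void hoistTo(Instruction *I, Instruction *InsertPt) {
  // Old: I->moveBefore(InsertPt);
  // New: signal that the caller intends the relative order (and, eventually,
  // the attached debug-info) to be preserved; for now this just forwards to
  // moveBefore, so the change is NFC.
  I->moveBeforePreserving(InsertPt);
}
```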
[0] https://discourse.llvm.org/t/rfc-instruction-api-changes-needed-to-eliminate-debug-intrinsics-from-ir/68939
Differential Revision: https://reviews.llvm.org/D156369
isLegalToHoistInto() currently returns true for callbr instructions.
That means that a callbr with one successor will be considered a
proper loop preheader, which may result in instructions that use
the callbr return value being hoisted past it.
Fix this by adding callbr to isExceptionalTerminator (with a rename
to isSpecialTerminator), which also fixes similar assumptions in
other places.
Fixes https://github.com/llvm/llvm-project/issues/64215.
Differential Revision: https://reviews.llvm.org/D158609
This is the next preparation patch to support widening of widenable conditions
instead of widening of branches.
In D157276 we added the parseWidenableGuard utility, which parses a guard
condition and collects all the checks present in the expression tree.
Here we add a utility that walks the expression tree in a similar way but looks
for the widenable condition without collecting the checks. As a result,
llvm::extractWidenableCondition can parse widenable branches with the widenable
condition at an arbitrary position in the expression tree.
llvm::parseWidenableBranch, which we are going to get rid of, is replaced by
llvm::extractWidenableCondition where possible.
Reviewed By: anna
Differential Revision: https://reviews.llvm.org/D157529
swifterror pointers can only be used as pointer operands of load & store
instructions (and as swifterror argument of a call). Sinking loads or
stores with swifterror pointer operands would require introducing a
select of the pointer operands, which isn't allowed.
Check for this condition in canSinkInstructions.
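A hedged sketch of the check (not the exact code added to canSinkInstructions):
```
#include "llvm/IR/Instructions.h"
using namespace llvm;

// True if I is a load or store whose pointer operand is a swifterror value;
// such instructions must not be sunk and merged.
static bool usesSwiftErrorPointer(const Instruction &I) {
  if (const auto *LI = dyn_cast<LoadInst>(&I))
    return LI->getPointerOperand()->isSwiftError();
  if (const auto *SI = dyn_cast<StoreInst>(&I))
    return SI->getPointerOperand()->isSwiftError();
  return false;
}
```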
Reviewed By: aschwaighofer
Differential Revision: https://reviews.llvm.org/D158083
Add an API that allows removing multiple incoming phi values based
on a predicate callback, as suggested on D157621.
This makes sure that the removal is linear time rather than quadratic,
and avoids subtleties around iterator invalidation.
I have replaced some of the more straightforward users with the new
API, though there's a couple more places that should be able to use it.
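A usage sketch of the new API (removing edges from a hypothetical set of dead predecessors; the surrounding names are illustrative):
```
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Remove every incoming value whose predecessor block is in DeadPreds, in one
// linear pass and without any iterator-invalidation concerns.
static void removeDeadIncoming(PHINode &PN,
                               const SmallPtrSetImpl<BasicBlock *> &DeadPreds) {
  PN.removeIncomingValueIf([&](unsigned Idx) {
    return DeadPreds.contains(PN.getIncomingBlock(Idx));
  });
}
```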
Differential Revision: https://reviews.llvm.org/D158064
Guard FoldBranchToCommonDest in SimplifyCFG with the SpeculateBlocks
flag as it can also speculate instructions.
This was split out of D155997.
Differential Revision: https://reviews.llvm.org/D156194
When a new assumption is created, it should be registered in the assumption
cache, or the cache should be invalidated.
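A hedged sketch of the rule at a call site (the shape of the call site is assumed, not taken from the patch):
```
#include "llvm/Analysis/AssumptionCache.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/IntrinsicInst.h"
using namespace llvm;

// Materialise a new llvm.assume and make sure the assumption cache knows about
// it, so that later queries don't miss the new fact.
static void emitAssumption(IRBuilder<> &Builder, Value *Cond,
                           AssumptionCache *AC) {
  CallInst *Assume = Builder.CreateAssumption(Cond);
  if (AC)
    AC->registerAssumption(cast<AssumeInst>(Assume));
}
```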
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D154601
This reverts commit 20f0c68fd83a0147a8ec1722bd2e848180610288.
https://reviews.llvm.org/D153966#4464594 reports an optimization
regression in Rust.
Additionally this change has caused an unexpected 0.3% compile-time
regression.
This reverts commit 0c03f48480f69b854f86d31235425b5cb71ac921.
Going to fix the size regression forward instead, since otherwise more dependent patches would need to be reverted.