Allow a duplicate basic block with multiple predecessors to the
jump table to be simplified, by taking into account that the same basic
block may appear in more than one switch case.
This is a follow-up to #96878 to support hoisting loads/stores from BBs
that have the same predecessor, when the loads/stores are the only
instructions in the blocks and the branch is unpredictable, e.g.:
```
void test(int a, int *c, int *d) {
  if (a)
    *c = a;
  else
    *d = a;
}
```
Currently when we merge invokes as part of SimplifyCFG we apply a merge
of the invoke DILocations to the merged invoke. We also insert an
unconditional branch to the merged invoke at the positions previously
occupied by the original invokes; as this branch is part of the
substitution for the invoke it has replaced, we should propagate the
original invoke DebugLoc to it.
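As a rough sketch (hedged; the helper name below is made up and this is not the actual SimplifyCFG code), the propagation amounts to copying the DebugLoc onto the newly created branch:
```
#include "llvm/IR/Instructions.h"
using namespace llvm;

// Sketch: the branch takes over the position previously occupied by the
// invoke, so it also inherits the invoke's source location.
static void branchToMergedInvoke(InvokeInst &OrigInvoke, BasicBlock &MergedInvokeBB) {
  BranchInst *BI = BranchInst::Create(&MergedInvokeBB, &OrigInvoke);
  BI->setDebugLoc(OrigInvoke.getDebugLoc());
  // (The original invoke is subsequently removed/rewired by the caller.)
}
```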
I noticed that the two C functions emitted different IR:
```
int switch_duplicate_arms(int switch_val, int v, int w) {
  switch (switch_val) {
  default:
    break;
  case 0:
    w = v;
    break;
  case 1:
    w = v;
    break;
  }
  return w;
}

int if_duplicate_arms(int switch_val, int v, int w) {
  if (switch_val == 0)
    w = v;
  else if (switch_val == 1)
    w = v;
  return w;
}
```
We generate IR that looks like this:
```
define i32 @switch_duplicate_arms(i32 %0, i32 %1, i32 %2, i32 %3) {
  switch i32 %1, label %7 [
    i32 0, label %5
    i32 1, label %6
  ]
5:
  br label %7
6:
  br label %7
7:
  %8 = phi i32 [ %3, %4 ], [ %2, %6 ], [ %2, %5 ]
  ret i32 %8
}

define i32 @if_duplicate_arms(i32 %0, i32 %1, i32 %2, i32 %3) {
  %5 = icmp ult i32 %1, 2
  %6 = select i1 %5, i32 %2, i32 %3
  ret i32 %6
}
```
For `switch_duplicate_arms`, taking case 0 or case 1 is the same, since %5
and %6 branch to the same location and the incoming values for %8 from those
blocks are the same. We could remove one of the duplicate switch targets and
update the switch to use the single remaining target.
On RISC-V, prior to this patch, we generate the following code:
```
switch_duplicate_arms:
        li      a4, 1
        beq     a1, a4, .LBB0_2
        mv      a0, a3
        bnez    a1, .LBB0_3
.LBB0_2:
        mv      a0, a2
.LBB0_3:
        ret

if_duplicate_arms:
        li      a4, 2
        mv      a0, a2
        bltu    a1, a4, .LBB1_2
        mv      a0, a3
.LBB1_2:
        ret
```
After this patch, the O3 code is optimized to the icmp + select pair, which
gives us the same code gen as `if_duplicate_arms`, as desired. This results
in one less branch instruction in the final assembly.
This may help with both code size and further switch simplification. I found
that this patch causes no significant impact on spec2006/int/ref and
spec2017/intrate/ref.
---------
Co-authored-by: Min Hsu <min@myhsu.dev>
This fixes all the places that hit the new assertion added in
https://github.com/llvm/llvm-project/pull/106524 when running tests. That is,
cases where the value passed to the APInt constructor is not an N-bit
signed/unsigned integer, where N is the bit width and signedness is
determined by the isSigned flag.
The fixes either set the correct value for isSigned, set the
implicitTrunc flag, or perform more calculations inside APInt.
Note that the assertion is currently still disabled by default, so this
patch is mostly NFC.
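For illustration, a hedged sketch of what such a fix can look like at a call site, going by the description above (the exact parameter list of the post-#106524 constructor is an assumption here):
```
#include "llvm/ADT/APInt.h"
using namespace llvm;

void apintFixSketch() {
  int64_t V = -1;

  // Would trip the new assertion once enabled: the bit pattern of -1 is not
  // a valid 8-bit *unsigned* value.
  //   APInt Bad(8, V);

  // Fix 1: state that the value is signed; -1 is a valid 8-bit signed value.
  APInt A(8, V, /*isSigned=*/true);

  // Fix 2 (assumes the constructor gained an implicitTrunc flag in #106524):
  // explicitly allow truncation to the requested bit width.
  APInt B(8, 0x1ffULL, /*isSigned=*/false, /*implicitTrunc=*/true);

  (void)A;
  (void)B;
}
```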
Some (many) attributes can safely be dropped to enable sinking. For
example, removing `nonnull` on a return/param can't affect correctness.
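A hedged illustration of the kind of adjustment this involves (not the actual patch code): if two otherwise-identical calls differ only in a droppable return attribute, the attribute can simply be stripped from the candidate that is kept.
```
#include "llvm/IR/InstrTypes.h"
using namespace llvm;

// Sketch: nonnull on a return value only adds information, so removing it
// cannot make the program less defined; stripping it lets two sink
// candidates become identical.
static void dropNonNullRetAttr(CallBase &CB) {
  if (CB.hasRetAttr(Attribute::NonNull))
    CB.removeRetAttr(Attribute::NonNull);
}
```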
Closes #109472
SimplifyCFG store speculation currently has some homegrown code to check
for a writable object, handling the alloca special case only.
Switch it to use the generic isWritableObject() API, which means that we
also support byval arguments, allocator return values, and writable
arguments.
I've adjusted isWritableObject() to also check for the noalias attribute
when handling the writable attribute. Otherwise, I don't think that we can
generalize from at-entry writability. This was not relevant for previous
uses of the function, because they already required noalias for other
reasons anyway.
This is supposed to test multiplication of the linear multiplier
with the largest value it can be multiplied with. However, if
we truncate TableSize-1 here, it might not actually be the largest
value. I think in practice this still works out, because in cases
where we'd truncate the value here we'd also fail the NonMonotonic
check. But to match the intent of the code, we should treat the
truncating case as overflowing.
If we can sink a load/store, but not the gep producing its pointer
operand, don't sink the load/store either. Sinking it without the gep may
prevent the gep from being folded into an addressing mode, and may also
negatively affect further analysis.
Fixes https://github.com/llvm/llvm-project/issues/96838.
Pass speculation target and assumption cache to
isSafeToSpeculativelyExecute() calls.
This allows speculating based on dereferenceable/align assumptions, but
the primary motivation here is to avoid regressions from planned changes
to fix https://github.com/llvm/llvm-project/issues/108854.
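Roughly, the call-site shape looks like this (a hedged sketch assuming the usual ValueTracking signature with optional context instruction and assumption cache):
```
#include "llvm/Analysis/AssumptionCache.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

// Sketch: passing the speculation target as the context instruction and the
// assumption cache lets the query take dereferenceable/align assumptions
// that hold at that point into account.
static bool canSpeculate(const Instruction &I, const Instruction &SpeculationTarget,
                         AssumptionCache &AC) {
  return isSafeToSpeculativelyExecute(&I, &SpeculationTarget, &AC);
}
```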
In the same way that we mark a path unreachable if it may cause a nullptr
dereference, we can do so for div/rem by zero or signed div/rem of INT_MIN
by -1, since these cause immediate UB.
Closes #109008
This is the SimplifyCFG part of
https://github.com/llvm/llvm-project/pull/95515
In this PR, we support hoisting load/store with conditional faulting in
`SimplifyCFGOpt::speculativelyExecuteBB` to eliminate conditional
branches.
This is for cases like
```
void test(int a, int *b) {
  if (a)
    *b = a;
}
```
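To sketch the idea (hedged; this is not the exact code the pass emits), the store guarded by the branch can be expressed as a single-element masked store whose mask is the branch condition, so the access only happens when the condition is true:
```
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Sketch under the assumption that conditional faulting is modelled with
// llvm.masked.store on a <1 x i32> value, with the branch condition as mask.
static void emitConditionalStore(IRBuilder<> &B, Value *StoredVal, Value *Ptr,
                                 Value *Cond, Align A) {
  Value *VecVal = B.CreateBitCast(StoredVal, FixedVectorType::get(B.getInt32Ty(), 1));
  Value *Mask = B.CreateBitCast(Cond, FixedVectorType::get(B.getInt1Ty(), 1));
  B.CreateMaskedStore(VecVal, Ptr, A, Mask);
}
```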
In the following patches, we will support the hoist in
`SimplifyCFGOpt::hoistCommonCodeFromSuccessors`.
That is for cases like
```
void test(int a, int *c, int *d) {
  if (a)
    *c = a;
  else
    *d = a;
}
```
This is a followup to https://github.com/llvm/llvm-project/pull/104579
to remove the limitation on sinking loads/stores of allocas entirely,
even if this would introduce a phi node.
Nowadays, SROA supports speculating load/store over select/phi.
Additionally, SimplifyCFG with sinking only runs at the end of the
function simplification pipeline, after SROA. I checked that the two
tests modified here still successfully SROA after the SimplifyCFG
transform.
We should, however, keep the limitation on lifetime intrinsics. SROA
does not have speculation support for these, and I've also found that
the way these are handled in the backend is very problematic
(https://github.com/llvm/llvm-project/issues/104776), so I think we
should leave them alone.
SimplifyCFG sinking currently does not sink loads/stores of allocas,
because historically SROA was unable to handle the resulting IR. Since
then, SROA both learned to speculate loads/stores over selects and phis,
*and* SimplifyCFG sinking has been deferred to the end of the function
simplification pipeline, which means that SROA happens before it.
As such, I believe that this workaround should no longer be necessary.
Given how sensitive SimplifyCFG sinking seems to be, this patch takes a
very conservative step towards removing this, by allowing sinking if we
don't actually need to form a phi over the pointer argument.
This fixes https://github.com/llvm/llvm-project/issues/104567, where
sinking a store to an escaped alloca allows converting a switch into
arithmetic.
- This patch skips threading on known values if the target has divergent
branches.
- So far, threading on known values has been skipped when the basic block has
convergent calls. However, even without convergent calls, if the branch
condition is divergent, threading duplicates the execution of the threaded
block and hence results in lower performance. E.g.,
```
BB1:
  if (cond) BB3, BB2
BB2:
  // work2
  br BB3
BB3:
  // work3
  if (cond) BB5, BB4
BB4:
  // work4
  br BB5
BB5:
```
after threading,
```
BB1:
  if (cond) BB3', BB2'
BB2':
  // work2
  // work3
  // work4
  br BB5
BB3':
  // work3
  br BB5
BB5:
```
After threading, work3 is executed twice if 'cond' is divergent.
Reviewers: yxsamliu, nikic
Pull Request: https://github.com/llvm/llvm-project/pull/100185
The `!unpredictable` metadata has been present for a long time, but
its use in optimizations is still limited. This patch teaches
`FoldTwoEntryPHINode()` to be more aggressive with an unpredictable
branch to reduce mispredictions.
A TTI interface `getBranchMispredictPenalty()` is added to distinguish
between different hardware to ensure we don't go too far for simpler
cores. For simplicity, only a naive x86 implementation is included for
the time being.
This patch folds the following pattern (I don't know what to call this):
```
bb0:
  br i1 %cond1, label %bb1, label %bb2
bb1:
  br i1 %cond2, label %bb3, label %bb4
bb2:
  br i1 %cond2, label %bb4, label %bb3
bb3:
  ...
bb4:
  ...
```
into
```
bb0:
  %cond = xor i1 %cond1, %cond2
  br i1 %cond, label %bb4, label %bb3
bb3:
  ...
bb4:
  ...
```
Alive2: https://alive2.llvm.org/ce/z/5iOJEL
Closes https://github.com/llvm/llvm-project/issues/97022.
Closes https://github.com/llvm/llvm-project/issues/83417.
I found this pattern in some Verilator-generated code, which is widely
used in RTL simulation. This fold reduces branches and improves the
performance of the CPU front end. To my surprise, this pattern is also
common in C/C++ codebases.
Affected libraries/applications:
cmake/cvc5/freetype/git/gromacs/jq/linux/openblas/openmpi/openssl/php/postgres/ruby/sqlite/wireshark/z3/...
This is a helper to avoid writing `getModule()->getDataLayout()`. I
regularly try to use this method only to remember it doesn't exist...
`getModule()->getDataLayout()` is also a common (the most common?)
reason why code has to include the Module.h header.
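A hedged before/after sketch, assuming the helper lands as a `getDataLayout()` member on `Instruction` (and similar classes):
```
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Module.h"
using namespace llvm;

static unsigned pointerSizeAt(const Instruction &I) {
  // Before: go through the module, which also forces Module.h to be included.
  const DataLayout &DLOld = I.getModule()->getDataLayout();
  // After (assumption: the new convenience accessor): ask the instruction directly.
  const DataLayout &DLNew = I.getDataLayout();
  (void)DLOld;
  return DLNew.getPointerSizeInBits();
}
```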
Uses the new InsertPosition class (added in #94226) to simplify some of
the IRBuilder interface, and removes the need to pass a BasicBlock
alongside a BasicBlock::iterator, using the fact that we can now get the
parent basic block from the iterator even if it points to the sentinel.
This patch removes the BasicBlock argument from each constructor or call
to setInsertPoint.
This has no functional effect, but later on as we look to remove the
`Instruction *InsertBefore` argument from instruction-creation
(discussed
[here](https://discourse.llvm.org/t/psa-instruction-constructors-changing-to-iterator-only-insertion/77845)),
this will simplify the process by allowing us to deprecate the
InsertPosition constructor directly and catch all the cases where we use
instructions rather than iterators.
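A hedged sketch of the call-site simplification this enables (illustrative only; exact overloads aside):
```
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

static void insertAtFront(BasicBlock &BB) {
  IRBuilder<> Builder(BB.getContext());
  // Before: the iterator had to be accompanied by its basic block:
  //   Builder.SetInsertPoint(&BB, BB.getFirstInsertionPt());
  // After: the iterator alone suffices, since its parent block can be
  // recovered even when it points at the sentinel.
  Builder.SetInsertPoint(BB.getFirstInsertionPt());
  Builder.CreateAlloca(Builder.getInt32Ty());
}
```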
Sinking currently only supports instructions that have zero or one uses.
Extend this to handle instructions with any number of uses, as long as
all uses are consistent (i.e. the "same" for all sinking candidates).
After #94462 this is basically just a matter of looping over all uses
instead of checking the first one only.
When sinking instructions, we have to make sure that the uses of that
instruction are consistent: If used in a phi node in the sink target,
then the phi operands have to match the sink candidates. This allows the
phi to be removed when the instruction is sunk. This case is already
handled accurately.
However, what the current code doesn't handle are uses in the same
block. These are just unconditionally accepted, even though this needs
the same consistency check for the phi node that sinking the using
instruction would introduce.
Instead, the code has another check when actually performing the
sinking, which repeats the phi check (just at a later time, where all
the later instructions have already been sunk and any new phis
introduced).
This is problematic, because it messes up the profitability heuristic.
The code will think that certain instructions will get sunk, but they
actually won't. This may result in more phi nodes being created than is
considered profitable. See the changed test for a case where we no
longer do this after this patch.
The new approach makes sure that the uses are consistent during the
initial legality check. This is based on PhiOperands, which we already
collect.
The primary motivation for this is to generalize sinking to support more
than one use, and doing that generalization is hard with the current
split checking approach.
…f weights" #95136
Reverts #95060, and relands #86609, with the unintended code generation
changes addressed.
This patch implements the changes to LLVM IR discussed in
https://discourse.llvm.org/t/rfc-update-branch-weights-metadata-to-allow-tracking-branch-weight-origins/75032
In this patch, we add an optional field to MD_prof metadata nodes for
branch weights, which can be used to distinguish weights added from
llvm.expect* intrinsics from those added via other methods, e.g. from
profiles or inserted by the compiler.
One of the major motivations is for use with MisExpect diagnostics,
which need to know if branch_weight metadata originates from an
llvm.expect intrinsic. Without that information, we end up checking
branch weights multiple times in the case of ThinLTO + SampleProfiling,
leading to some inaccuracy in how we report MisExpect related
diagnostics to users.
Since we change the format of MD_prof metadata in a fundamental way, we
need to update code handling branch weights in a number of places.
We also update the lang ref for branch weights to reflect the change.
When transforming a switch with holes into a lookup table, we currently
use a mask to check if the current index is handled by the switch or if
it is a hole. If it is a hole, we skip loading from the lookup table.
Normally, if the switch's default case is unreachable this has no
impact, as the mask test gets optimized away by subsequent passes.
However, if the switch is large enough that the number of lookup table
entries exceeds the target's register width, we won't be able to fit all
the cases into a mask and the switch won't get transformed into a lookup
table. If we know that the switch's default case is unreachable, we know
that the mask is unnecessary and can skip constructing it entirely,
which allows us to transform the switch into a lookup table.
[Example](https://godbolt.org/z/7x7qfx8M1)
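For instance, a switch along these lines (a simplified, hypothetical example; the real threshold depends on the target's register width) previously could not become a lookup table once its case range exceeded 64 entries, even though the default is unreachable:
```
// With holes in the case range and a range wider than 64 entries, the mask
// no longer fits in one register; an unreachable default makes it unnecessary.
int decode(unsigned op) {
  switch (op) {
  case 0:  return 1;
  case 2:  return 7;
  case 5:  return 3;
  // ... imagine further sparse cases reaching past index 64 ...
  case 70: return 9;
  default: __builtin_unreachable(); // caller guarantees op is a handled value
  }
}
```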
In the future, it might be interesting to consider allowing lookup table
masks to be more than one register large (e.g. using a constant array of
bit flags, similar to `std::bitset`).
https://github.com/llvm/llvm-project/pull/76669 taught SimplifyCFG to
handle switches when `default` has only one case. When the `switch`'s
condition is wider than 64 bits, the current implementation can calculate
the wrong default value. This PR skips cases where the condition is too
wide.
(cherry picked from commit 39bb790b906f4921a5d9fc09e856abe53ae7a320)
Use ICmpInst::compare() where possible, ConstantFoldCompareInstOperands
in other places. This only changes places where either the fold is
guaranteed to succeed, or the code doesn't use the resulting compare if
we fail to fold.
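For reference, a hedged sketch of the two call-site shapes (signatures as I understand them):
```
#include "llvm/Analysis/ConstantFolding.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instructions.h"
using namespace llvm;

// When both operands are known APInts, the comparison always folds:
static bool foldKnownCompare(const APInt &L, const APInt &R) {
  return ICmpInst::compare(L, R, ICmpInst::ICMP_ULT);
}

// With arbitrary constants, folding may fail and return null, so the caller
// must be prepared to ignore the result:
static Constant *tryFoldCompare(Constant *L, Constant *R, const DataLayout &DL) {
  return ConstantFoldCompareInstOperands(ICmpInst::ICMP_EQ, L, R, DL);
}
```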