Demanded bits simplification for lshr/ashr currently demands the low
bits if the exact flag is set, because these bits must be zero to
satisfy the flag.
However, this means that our demanded bits simplification is worse for
lshr/ashr exact than it is for plain lshr/ashr, which is generally not
desirable.
Instead, drop the exact flag if a demanded bits simplification of the
operand succeeds, as the simplified operand may no longer satisfy the
exact flag.
This matches what we do for the exact flag on udiv, as well as the
nuw/nsw flags on add/sub/mul.
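A standalone illustration of the flag semantics (plain C++ model of
the LangRef rule, not the patch itself):

  #include <cstdint>
  #include <cstdio>

  // 'lshr exact %x, C' is poison unless the C bits shifted out of
  // %x are all zero.
  static bool lshrIsExact(uint8_t x, unsigned sh) {
    return (x & ((1u << sh) - 1)) == 0;
  }

  int main() {
    printf("%d\n", lshrIsExact(0x5c, 2)); // 1: low bits are 00
    // A demanded-bits rewrite of the operand that flips a low bit
    // preserves all demanded result bits but breaks the flag, so
    // we drop the flag rather than demand the low bits.
    printf("%d\n", lshrIsExact(0x5d, 2)); // 0: flag violated
  }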
This reverts commit b5743d4798b250506965e07ebab806a3c2d767cc.
This causes some minor compile-time impact. Revert for now; it's
better to make the change more gradually.
This is the floating-point analog of SimplifyDemandedBits. If we know
that uses assume certain edge cases are impossible, we can prune the
upstream handling of those edge cases.
Start by only using this on returns in functions with nofpclass
returns (where I'm surprised there are no other combines), but this
can be extended to include any other nofpclass use or FPMathOperator
with flags.
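As a rough standalone analogy in plain C++ (not the new combine; the
attribute and the fold are only mirrored in the comments):

  #include <cmath>
  #include <cstdio>

  // The select arm only matters when x is NaN, and a nofpclass(nan)
  // guarantee on the returned value makes that case poison anyway,
  // so the canonicalizing select can fold to plain 'x'.
  static float callee(float x) {
    float y = std::isnan(x) ? NAN : x; // prunable NaN handling
    return y; // imagine: ret nofpclass(nan) float %y
  }

  int main() { printf("%f\n", callee(2.0f)); }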
Partially addresses issue #64870
Differential Revision: https://reviews.llvm.org/D158648
As per my proposal for how to eliminate debug intrinsics [0], in various
places in InstCombine prefer to insert using an instruction iterator rather
than an instruction pointer. This is so that we can eventually pass more
information in the iterator class. The call sites where I've changed the
spelling are those necessary to build a stage2 clang that produces an
identical binary in the coming no-debug-intrinsics mode.
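A hedged sketch of the new spelling (assumes current LLVM headers and
is not a literal call site from this patch):

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  static void setPointAt(IRBuilderBase &Builder, Instruction &I) {
    // Before: Builder.SetInsertPoint(&I);
    // After: pass an iterator, which can carry more information.
    Builder.SetInsertPoint(I.getParent(), I.getIterator());
  }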
[0] https://discourse.llvm.org/t/rfc-instruction-api-changes-needed-to-eliminate-debug-intrinsics-from-ir/68939
Differential Revision: https://reviews.llvm.org/D152543
Bug #63699 shows a hang on ARM in InstCombine because we do not
propagate known bits for fshl/fshr rotates. We perform the propagation
and add a regression test.
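A standalone model of the propagation (plain C++, not the InstCombine
code):

  #include <cstdint>
  #include <cstdio>

  // For a rotate-left by a constant, known bits of the input rotate
  // by the same amount, so the knowledge survives the instruction
  // instead of being discarded.
  static uint8_t rotl8(uint8_t x, unsigned r) {
    r &= 7;
    return (uint8_t)((x << r) | (x >> ((8 - r) & 7)));
  }

  int main() {
    for (unsigned hi = 0; hi < 16; ++hi) {
      uint8_t x = (uint8_t)((hi << 4) | 0x5); // low nibble known 0101
      if ((rotl8(x, 4) >> 4) != 0x5) return 1;
    }
    printf("low-nibble knowledge rotated into the high nibble\n");
  }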
Differential Revision: https://reviews.llvm.org/D155307
This fixes the largest remaining discrepancy between results of
computeKnownBits() and SimplifyDemandedBits(). We only care about
the multi-use case here, because the assume necessarily introduces
an extra use.
Disable conversion of funnel shifts (fshl/fshr) into rotates
unless one of the operands is known to be a constant value.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D150670
I suspect that this case is not actually reachable in practice
(because it gets folded away by other code before), but if we do
reach it, we should return poison, not undef.
Otherwise the result will always be unknown anyway. This fixes one
of the last inconsistencies in results between computeKnownBits()
and SimplifyDemandedBits().
We were failing to set the known bits for add/sub in the multi-use
case, resulting in odd behavioral differences depending on the
number of uses. Noticed while adding a consistency assertion.
The test changes are essentially a revert to the state before
d6498ab. These changes are not really desirable, but if we don't
want them, that needs to be handled as part of the heuristic for
demanded constant shrinking, not by artificially suppressing the
known bits in one specific case.
This provides more precise results than the ad-hoc implementation.
Noticed while trying to add a consistency assertion.
To be honest I'm not sure why this code exists at all -- the
recursive calls are done with all bits demanded, so this should
be equivalent to just using the default case.
For the case of a non-constant operand, fall back to
computeKnownBits() rather than trying to reimplement logic.
Found while testing a consistency assertion for both functions.
Define intersectWith and unionWith as two complementary ways of
combining KnownBits. The names are chosen for consistency with
ConstantRange.
Deprecate commonBits as a synonym for intersectWith.
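A minimal sketch of the intended semantics on a toy 8-bit struct (not
LLVM's KnownBits):

  #include <cstdint>
  #include <cstdio>

  struct Known { uint8_t Zero, One; }; // disjoint known-bit masks

  // Keep only facts that hold for either of two possible values,
  // e.g. when merging two control-flow paths.
  static Known intersectWith(Known a, Known b) {
    return {(uint8_t)(a.Zero & b.Zero), (uint8_t)(a.One & b.One)};
  }

  // Combine facts that hold simultaneously for one value, e.g. when
  // an assume contributes extra information.
  static Known unionWith(Known a, Known b) {
    return {(uint8_t)(a.Zero | b.Zero), (uint8_t)(a.One | b.One)};
  }

  int main() {
    Known a{0x0f, 0xf0}, b{0x03, 0x40};
    Known i = intersectWith(a, b), u = unionWith(a, b);
    printf("intersect %02x/%02x union %02x/%02x\n",
           i.Zero, i.One, u.Zero, u.One);
  }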
Differential Revision: https://reviews.llvm.org/D150443
Following the change in shufflevector semantics,
poison will be used to represent undefined elements in shufflevector masks.
Differential Revision: https://reviews.llvm.org/D149256
In issue #60632, we have vector math ops that differ because an
operand is shuffled, but the math has limited demanded elements,
so it can be replaced by another instruction:
https://alive2.llvm.org/ce/z/TKqq7H
I don't think we have anything like this yet - it's like a
CSE/GVN fold, but driven by demanded elements of a vector op.
This is limited to splat-0 as a first step to keep it simple.
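A scalar model of the fold in plain C++ (the alive2 link above is the
authoritative proof):

  #include <cstdio>

  int main() {
    int x[2] = {7, 9}, y[2] = {3, 4};
    int a[2] = {x[0] + y[0], x[1] + y[1]}; // %a = add %x, %y
    int b[2] = {y[0], y[0]};               // %b = splat-0 shuffle of %y
    int c0 = x[0] + b[0];                  // %c = add %x, %b
    // With only lane 0 of %c demanded, %c equals %a there, so %c
    // can be replaced by the earlier %a.
    printf("a[0]=%d c0=%d\n", a[0], c0);
  }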
Differential Revision: https://reviews.llvm.org/D144760
There are extra patterns for these three logic operations that
aren't covered in `SimplifyDemandedUseBits`. To avoid duplicating
the code, just use `analyzeKnownBitsFromAndXorOr` in
`SimplifyDemandedUseBits` to get full coverage.
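For reference, the shared rules sketched on a toy 8-bit struct (not
LLVM's KnownBits):

  #include <cstdint>
  #include <cstdio>

  struct Known { uint8_t Zero, One; }; // disjoint known-bit masks

  static Known knownAnd(Known a, Known b) {
    return {(uint8_t)(a.Zero | b.Zero), (uint8_t)(a.One & b.One)};
  }
  static Known knownOr(Known a, Known b) {
    return {(uint8_t)(a.Zero & b.Zero), (uint8_t)(a.One | b.One)};
  }
  static Known knownXor(Known a, Known b) {
    return {(uint8_t)((a.Zero & b.Zero) | (a.One & b.One)),
            (uint8_t)((a.Zero & b.One) | (a.One & b.Zero))};
  }

  int main() {
    Known a{0x0f, 0xf0}, b{0x33, 0x44};
    Known r = knownXor(a, b);
    printf("xor Zero=%02x One=%02x\n", r.Zero, r.One);
  }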
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D142429
We shouldn't penalize instructions that have extra flags.
Drop the poison-generating flags if needed instead of bailing out.
This makes canonicalization/optimization more uniform.
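A standalone illustration of why a flag can need dropping (plain C++,
hypothetical i8 example, not taken from the patch):

  #include <cstdint>
  #include <cstdio>

  // With only the low 7 bits of the result demanded, the operand
  // (x & 0x7f) simplifies to plain x, but the original 'add nuw'
  // guarantee no longer holds for every x, so the flag is dropped
  // instead of abandoning the simplification.
  int main() {
    uint8_t x = 0x90;
    unsigned before = (unsigned)(x & 0x7f) + 0x80; // no unsigned wrap
    unsigned after = (unsigned)x + 0x80;           // 0x110 wraps in i8
    printf("low 7 bits agree: %d; i8 wrap after rewrite: %d\n",
           (before & 0x7f) == (after & 0x7f), after > 0xff);
  }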
There is a chance that dropping flags will cause some
other transform to not fire, but we added a preliminary
patch to avoid that with:
f0faea571403
See D140665 for more details.
value() has undesired exception checking semantics and calls
__throw_bad_optional_access in libc++. Moreover, the API is unavailable without
_LIBCPP_NO_EXCEPTIONS on older Mach-O platforms (see
_LIBCPP_AVAILABILITY_BAD_OPTIONAL_ACCESS).
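For reference, the preferred spelling (standard C++; empty access via
operator* is UB rather than a throw, so there is no exception path):

  #include <optional>

  static int deref(const std::optional<int> &o) {
    return *o; // instead of o.value()
  }

  int main() { return deref(std::optional<int>(0)); }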
There's no reason to shrink a constant or simplify
an operand in 2 steps.
This matches what we currently do for 'add' (although that
seems like it should be altered to handle the commutative
case).
This is copying the code that was added for 'add' with D130075.
(That patch removed a fallthrough in the cases; we can probably
still share at least some code again as a follow-up cleanup, but
I didn't want to risk it here.)
The reasoning is similar to the carry propagation for 'add':
if we don't demand low bits of the subtraction and the
subtrahend (aka RHS or operand 1) is known zero in those low
bits, then there can't be any borrowing required from the
higher bits of operand 0, so the low bits don't matter.
Also, the no-wrap flags can be propagated (and I think that
should be true for add too).
Here's an attempt to prove that in Alive2:
https://alive2.llvm.org/ce/z/xqh7Pa
(can add nsw or nuw to src and tgt, and it should still pass)
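A brute-force standalone check of the same claim on an i8 model in
plain C++ (complementing the Alive2 proof above):

  #include <cstdint>
  #include <cstdio>

  int main() {
    const unsigned k = 3;
    const uint8_t rhs = 0xa8; // low 3 bits known zero
    for (unsigned lhs = 0; lhs < 256; ++lhs) {
      // Clearing the undemanded low bits of the LHS never changes
      // the demanded high bits: no borrow crosses bit k when the
      // subtrahend's low k bits are zero.
      uint8_t full = (uint8_t)((uint8_t)lhs - rhs) >> k;
      uint8_t part = (uint8_t)((uint8_t)(lhs & ~7u) - rhs) >> k;
      if (full != part) { printf("counterexample: %u\n", lhs); return 1; }
    }
    printf("high bits agree for all 256 LHS values\n");
  }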
Differential Revision: https://reviews.llvm.org/D136788
This was originally part of D133788. There are no visible
regressions. All of the diffs show a large unsigned constant
becoming a small negative constant. This should be better
for analysis (and slightly less compile-time) and codegen.
This patch enables a multi-use demanded bits fold (motivated by issue #57576):
https://alive2.llvm.org/ce/z/DsZakh
This mimics transforms that we already do on the single-use path.
Originally, this patch did not include the last part to form a constant, but
that can be removed independently to reduce risk. It's not clear what the
effect of either change will be when viewed end-to-end.
This is expected to be neutral or a slight win for compile-time.
See the "add-demand2" series for experimental timing results:
https://llvm-compile-time-tracker.com/?config=NewPM-O3&stat=instructions&remote=rotateright
Differential Revision: https://reviews.llvm.org/D133788
Don't demand low order bits from the LHS of an Add if:
- they are not demanded in the result, and
- they are known to be zero in the RHS, so they can't possibly
overflow and affect higher bit positions
This is intended to avoid a regression from a future patch to change
the order of canonicalization of ADD and AND.
Differential Revision: https://reviews.llvm.org/D130075
If we don't demand low bits and it is valid to pre-shift a constant:
(C2 >> X) << C1 --> (C2 << C1) >> X
https://alive2.llvm.org/ce/z/_UzTMP
This is the reverse-order shift sibling to 82040d414b3c (D127122).
It seems likely that we would want to add this to the SDAG version of
the code as well, to keep it on par with IR.
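A standalone spot check on a sample constant in plain C++ (i8 model;
the alive2 link above is the general proof):

  #include <cstdint>
  #include <cstdio>

  int main() {
    const unsigned C1 = 1;
    const uint8_t C2 = 0x3b; // enough leading zeros: C2 << C1 is lossless
    const uint8_t mask = (uint8_t)~((1u << C1) - 1); // low bits undemanded
    for (unsigned x = 0; x < 8; ++x) {
      uint8_t before = (uint8_t)((uint8_t)(C2 >> x) << C1);
      uint8_t after = (uint8_t)((uint8_t)(C2 << C1) >> x);
      if ((before & mask) != (after & mask)) return 1;
    }
    printf("fold holds on all demanded bits\n");
    return 0;
  }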