Introduce llvm::CmpPredicate, an abstraction over either a floating-point
predicate or an integer predicate packed with samesign information,
in order to ease extending large portions of the codebase that take a
CmpInst::Predicate to respect the samesign flag.
We have chosen to demonstrate the utility of this new abstraction by
migrating parts of ValueTracking, InstructionSimplify, and InstCombine
from CmpInst::Predicate to llvm::CmpPredicate. There should be no
functional changes, as we don't perform any extra optimizations with
samesign in this patch, or use CmpPredicate::getMatching.
The design approach taken by this patch allows unaudited callers of
APIs that take an llvm::CmpPredicate to silently drop the samesign
information; this does not pose a correctness issue, and allows us to
migrate the codebase piecewise.
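For reference, samesign is the icmp flag this abstraction carries alongside the predicate; a minimal IR sketch of what it asserts (function and value names are illustrative):
```
define i1 @cmp(i32 %a, i32 %b) {
  ; samesign states that %a and %b have the same sign, so an unsigned
  ; predicate may later be treated as its signed counterpart
  %c = icmp samesign ult i32 %a, %b
  ret i1 %c
}
```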
This change is part of this proposal:
https://discourse.llvm.org/t/rfc-all-the-math-intrinsics/78294
- Return true for atan2 from isTriviallyVectorizable
- Add atan2 to VecFuncs.def for massv and accelerate libraries.
- Add atan2 to hasOptimizedCodeGen
- Add atan2 support to llvm::getIntrinsicForCallSite in
llvm/lib/Analysis/ValueTracking.cpp and update vectorization tests
- Add atan2 name check to isLoweredToCall in
llvm/include/llvm/Analysis/TargetTransformInfoImpl.h
- Note: there's no test coverage for these names in isLoweredToCall, except that Transforms/TailCallElim/inf-recursion.ll is impacted by the "fabs" case
Thanks to @jroelofs for the atan2 accelerate veclib and associated test
additions, plus the hasOptimizedCodeGen addition.
Part of: Implement the atan2 HLSL Function #70096.
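As an illustrative sketch (not taken from the added tests), the isTriviallyVectorizable change is what lets the vectorizers widen a scalar atan2 intrinsic call into its vector form:
```
; scalar loop body before vectorization
  %r = call float @llvm.atan2.f32(float %x, float %y)

; after vectorization with VF=4, the call is widened to the vector intrinsic
  %r.vec = call <4 x float> @llvm.atan2.v4f32(<4 x float> %x.vec, <4 x float> %y.vec)
```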
A urem recurrence has the property that the result can never exceed the
start value. A udiv recurrence has the property that the result can
never exceed either the start value or the numerator, whichever is
greater. Implement a simplification based on these properties.
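An illustrative sketch of the urem case (names are made up): urem never increases its first operand, so by induction the recurrence value is bounded by the start value, and a compare against that bound folds away:
```
define i1 @urem_rec(i32 %start, i32 %d, i1 %c) {
entry:
  br label %loop
loop:
  %iv = phi i32 [ %start, %entry ], [ %iv.next, %loop ]
  %iv.next = urem i32 %iv, %d
  br i1 %c, label %loop, label %exit
exit:
  ; %iv can never exceed %start, so this compare folds to false
  %cmp = icmp ugt i32 %iv, %start
  ret i1 %cmp
}
```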
This patch is inspired by
https://github.com/llvm/llvm-project/pull/113686. I found that it
removes a lot of unnecessary "and X, 1" in some applications that
represent boolean values with int.
As a follow-on to 113686, this breaks the recursion between phi nodes
that have p1 = phi(x, p2) and p2 = phi(y, p1). The knownFPClass of both
phis can be calculated from the classes of the other incoming values,
x and y.
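A minimal IR sketch of that shape (names are illustrative):
```
define float @mutual_phis(float %x, float %y, i1 %c) {
entry:
  br i1 %c, label %a, label %b
a:                                ; preds = %entry, %b
  %p1 = phi float [ %x, %entry ], [ %p2, %b ]
  br label %b
b:                                ; preds = %entry, %a
  %p2 = phi float [ %y, %entry ], [ %p1, %a ]
  br i1 %c, label %a, label %exit
exit:
  ; the classes of %p1 and %p2 can only come from %x and %y, so the
  ; recursion between the two phis does not have to give up as unknown
  ret float %p2
}
```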
Given a recursive phi with select:
%p = phi [ 0, entry ], [ %sel, loop]
%sel = select %c, %other, %p
The fp state can be calculated using the knowledge that the select/phi
pair can only be the initial state (0 here) or the value from %other.
This adds a shortcut into computeKnownFPClass for PHI to detect that the
select is recursive back to the phi, and if so, use the state from the
other operand.
This helps to address a regression from #83200.
AMD has its own implementation of vector calls.
New vector calls are introduced in the library for exp10, log10, sincos, and finite asin/acos.
Please refer to https://github.com/amd/aocl-libm-ose
---------
Co-authored-by: Rohit Aggarwal <Rohit.Aggarwal@amd.com>
Today, InstCombine can fold fcmp+select patterns to minnum/maxnum
intrinsics when the nnan and nsz flags are set. The ordering of the
operands in both the fcmp and select instructions is important for the
folding to occur.
maxnum patterns:
1. (a op b) ? a : b -> maxnum(a, b), where op is one of {ogt, oge}
2. (a op b) ? b : a -> maxnum(a, b), where op is one of {ule, ult}
The second pattern is supposed to make the order of the operands in the
select instruction irrelevant. However, the pattern-matching code uses
the CmpInst::getInversePredicate method to invert the comparison
predicate. This method doesn't take the fast-math flags into account,
which can lead to the folding opportunity being missed.
The patch extends the pattern matching code to handle unordered fcmp
instructions. This allows the folding to occur even when the select
instruction has the operands in the inverse order.
New maxnum patterns:
1. (a op b) ? a : b -> maxnum(a, b), where op is one of {ugt, uge}
2. (a op b) ? b : a -> maxnum(a, b), where op is one of {ole, olt}
The same changes are applied to the minnum intrinsic.
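A sketch of new pattern 1, assuming the required nnan/nsz flags are present on both instructions (names are illustrative):
```
define float @max_ugt(float %a, float %b) {
  %cmp = fcmp nnan nsz ugt float %a, %b
  %sel = select nnan nsz i1 %cmp, float %a, float %b
  ; now folds to: call nnan nsz float @llvm.maxnum.f32(float %a, float %b)
  ret float %sel
}
```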
foldSelectValueEquivalence currently doesn't support GVN-like replacements
on vector types. Add checks for potentially lane-crossing operations, and
lift the limitation.
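For example (a hand-written sketch, not one of the changed tests), the per-lane equivalence from the select condition can now be used on vectors:
```
define <2 x i32> @vec_equiv(<2 x i32> %x, <2 x i32> %y) {
  %cmp = icmp eq <2 x i32> %x, zeroinitializer
  %add = add <2 x i32> %x, %y
  ; in lanes where %cmp is true, %x is 0, so %add equals %y there;
  ; add is not a cross-lane operation, so the select folds to %y
  %sel = select <2 x i1> %cmp, <2 x i32> %add, <2 x i32> %y
  ret <2 x i32> %sel
}
```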
Reuse llvm::isTriviallyVectorizable in llvm::isNotCrossLaneOperation, in
order to get it to handle more intrinsics.
Alive2 proofs for changed tests: https://alive2.llvm.org/ce/z/XSV_GT
Factor out and unify common code from InstSimplify and InstCombine that
partially guards against cross-lane vector operations, into
llvm::isNotCrossLaneOperation in ValueTracking.
Alive2 proofs for changed tests: https://alive2.llvm.org/ce/z/68H4ka
Set AllowEphemerals in the isValidAssumeForContext call, as the CxtI might
be the producer of the pointer in the bundle. At the moment, align
assumptions aren't optimized away.
This allows using the assumption in the computeKnownBits call in
getConstantMultipleImpl.
We could extend the computeKnownBits API to allow callers to specify if
ephemerals are allowed, if the info from computeKnownBitsFromContext is
used to remove alignment assumptions.
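For context, the assumptions in question are align operand bundles on llvm.assume; a minimal sketch of the shape involved (names are made up):
```
declare void @llvm.assume(i1)

define i64 @aligned_multiple(ptr %base) {
  %p = load ptr, ptr %base
  ; the bundle asserts that %p, produced by the load just above, is
  ; 16-byte aligned, i.e. its low four bits are zero
  call void @llvm.assume(i1 true) [ "align"(ptr %p, i64 16) ]
  %addr = ptrtoint ptr %p to i64
  %low = and i64 %addr, 15
  ret i64 %low        ; known to be 0 given the assumption
}
```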
PR: https://github.com/llvm/llvm-project/pull/108632
There is a stray break statement in the recurrence-handling code in
computeKnownBitsFromOperator that seems to be unintended. Remove it so
that we get the opportunity to go through the rest of the phi-handling
code and refine KnownBits further.
isValidAssumeForContext() handles a couple of trivial cases even if no
dominator tree is available. This adds one more for the case where there
is an assume in the entry block, and a use in some other block. The
entry block always dominates all blocks.
As having a context instruction but no dominator tree is fairly rare,
there is not much impact. The only test change is in assume-builder.ll,
where fewer redundant assumes are generated. I've found this special case
useful for an upcoming change, though.
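A small illustration of the new trivial case (hand-written, not from the tests):
```
declare void @llvm.assume(i1)

define i32 @entry_assume(i32 %x, i1 %c) {
entry:
  %nonneg = icmp sge i32 %x, 0
  call void @llvm.assume(i1 %nonneg)
  br i1 %c, label %then, label %else
then:
  ; even without a DominatorTree, the entry-block assume is known to be
  ; valid here, so the sign bit of %x is known zero and this folds to 0
  %sign = ashr i32 %x, 31
  ret i32 %sign
else:
  ret i32 0
}
```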
This handles the edge case where BitWidth is 1 and the increment produces
a value that is not valid at that width, when we just want wrap-around.
Split out of https://github.com/llvm/llvm-project/pull/80309.
When we encounter a bitcast from an integer type we can use the
information from `KnownBits` to glean some information about the
fpclass:
- If the sign bit is known, we can transfer this information over.
- If the float is IEEE format and enough of the bits are known, we may
be able to prove or rule out some fpclasses such as NaN, Zero, or Inf.
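A sketch of the second point, assuming IEEE single precision: if bit 30 is known to be zero, the exponent field cannot be all-ones, so the value can be neither NaN nor Inf (names are illustrative):
```
define i1 @not_nan(i32 %x) {
  ; clear bit 30, so the exponent field (bits 30..23) is not all-ones
  %masked = and i32 %x, -1073741825
  %f = bitcast i32 %masked to float
  ; %f can be neither NaN nor Inf, so this unordered compare folds to false
  %isnan = fcmp uno float %f, 0.000000e+00
  ret i1 %isnan
}
```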
We were missing the signed flag on the negative value, so the
range was incorrectly interpreted for integers larger than 64-bit.
Split out from https://github.com/llvm/llvm-project/pull/80309.
This addresses an optimization regression in Rust we have observed after
https://github.com/llvm/llvm-project/pull/82458. We now only perform
pointer replacement if both pointers have the same underlying object.
However, getUnderlyingObject() by default only looks through linear
chains, not selects/phis. In particular, this means that we miss cases
involving pointer induction variables.
This patch fixes this by introducing a new helper
getUnderlyingObjectAggressive() which basically does what
getUnderlyingObjects() does, just specialized to the case where we must
arrive at a single underlying object in the end, and with a limit on the
number of inspected values.
Doing this more expensive underlying object check has no measurable
compile-time impact on CTMark.
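A typical shape the new helper is meant to handle (an illustrative sketch): a pointer induction variable whose single underlying object is only reachable through the phi:
```
define void @ptr_iv(ptr %base, i64 %n) {
entry:
  br label %loop
loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  ; getUnderlyingObject stops at this phi; the aggressive variant walks
  ; both incoming values and still arrives at the single object %base
  %p = phi ptr [ %base, %entry ], [ %p.next, %loop ]
  store i8 0, ptr %p
  %p.next = getelementptr inbounds i8, ptr %p, i64 1
  %i.next = add i64 %i, 1
  %done = icmp eq i64 %i.next, %n
  br i1 %done, label %exit, label %loop
exit:
  ret void
}
```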
Consider the following case:
```
%cmp = icmp eq ptr %p, null
%load = load i32, ptr %p, align 4
%sel = select i1 %cmp, i32 %load, i32 0
```
`foldSelectValueEquivalence` converts `load i32, ptr %p, align 4` into
`load i32, ptr null, align 4`, which causes immediate UB. `%load` is
speculatable, but that no longer holds after operand substitution.
This patch introduces a new helper
`isSafeToSpeculativelyExecuteWithVariableReplaced`. It ignores operand
info in these instructions since their operands will be replaced later.
Fixes #99436.
---------
Co-authored-by: Nikita Popov <github@npopov.com>
Add simple support for looking through a zext when doing
ComputeNumSignBits for shl. This is valid for the case when
all extended bits are shifted out, because then the number of sign
bits can be found by analysing the zext operand.
The solution here is simple as it only handles a single zext (not
passing a remaining left-shift amount during recursion). It could be
possible to generalize this in the future, for example by passing an
'OffsetFromMSB' parameter to ComputeNumSignBitsImpl, telling it to
calculate the number of sign bits starting at some offset from the most
significant bit.
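An illustrative example of the supported case, where all of the zero-extension bits are shifted out (names are made up):
```
define i32 @shl_of_zext(i8 %y) {
  %x = ashr i8 %y, 5            ; %x has at least 6 sign bits as an i8
  %z = zext i8 %x to i32
  ; the shift moves %x into bits 31..24 and discards all 24 extension
  ; bits, so the result has at least as many sign bits as %x itself
  %s = shl i32 %z, 24
  ret i32 %s
}
```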
`llvm.vector.reverse` only reorders the elements of its operand, so it
preserves each of the elements and thus any property common to all of
them.
Alive2 doesn't support the intrinsic yet, but the logic seems pretty
self-evident.
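A hand-written sketch (not from the tests) of the kind of fact this lets us propagate:
```
declare <4 x i32> @llvm.vector.reverse.v4i32(<4 x i32>)

define <4 x i1> @rev_nonzero(<4 x i32> %v) {
  ; every element of %odd has its low bit set, so it is non-zero
  %odd = or <4 x i32> %v, <i32 1, i32 1, i32 1, i32 1>
  %rev = call <4 x i32> @llvm.vector.reverse.v4i32(<4 x i32> %odd)
  ; reversing only permutes the lanes, so every element of %rev is
  ; still known non-zero and this compare folds to all-true
  %cmp = icmp ne <4 x i32> %rev, zeroinitializer
  ret <4 x i1> %cmp
}
```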
Closes #99013