KnownBits passed to computeKnownBitsFromRangeMetadata must have the same
bit width as the range metadata. Otherwise, the calculated results will
be incorrect.
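For reference, range metadata is expressed in the annotated value's own type, so the KnownBits receiving the result must be constructed with that same width. A minimal hypothetical snippet (names are mine, not from the patch):
```
declare i64 @get_index()

define i64 @use_range() {
  ; The !range constants are i64, so KnownBits computed from this metadata
  ; must be 64 bits wide as well.
  %v = call i64 @get_index(), !range !0
  ret i64 %v
}

!0 = !{i64 0, i64 1024}
```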
---------
Signed-off-by: John Lu <John.Lu@amd.com>
In https://github.com/llvm/llvm-project/pull/97762, we assumed that if
the minimum possible value of X is NaN, then X must be NaN. That doesn't
hold for the x86_fp80 format. If the known bits of X are
`?'011111111111110'????????????????????????????????????????????????????????????????`,
the minimum possible value of X is a NaN/unnormal, yet X can still be a
normal value.
Closes https://github.com/llvm/llvm-project/issues/130408.
Fixes (keep it open) #130110.
If an incoming value is the PHI itself, we can skip it. If we can
guarantee that the other incoming values are neither undef nor poison,
then we can also guarantee that the PHI's value isn't either. If we
cannot guarantee that, there is no point in computing it.
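As a rough illustration (a hypothetical loop, not a test from this patch), a PHI may list itself as an incoming value, and that entry contributes no new information:
```
define i32 @self_phi(i32 %init, i1 %c) {
entry:
  br label %loop

loop:
  ; The second incoming value is the phi itself; only %init needs to be
  ; inspected to decide whether %p is guaranteed not to be undef or poison.
  %p = phi i32 [ %init, %entry ], [ %p, %loop ]
  br i1 %c, label %loop, label %exit

exit:
  ret i32 %p
}
```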
When a wider scalar/vector type containing all sign bits is bitcast to a
narrower vector type, we can deduce that the resulting narrow elements
will also be all sign bits. This matches existing behavior in
SelectionDAG and helps optimize cases involving SSE intrinsics where
sign-extended values are bitcast between different vector types.
The current implementation fails to recognize that an arithmetic right
shift is redundant when applied to elements that are already known to be
all sign bits. This PR improves ComputeNumSignBitsImpl to track this
information through bitcasts, enabling the optimization of such cases.
```
%ext = sext <1 x i1> %cmp to <1 x i8>
%sub = bitcast <1 x i8> %ext to <4 x i2>
%sra = ashr <4 x i2> %sub, <i2 1, i2 1, i2 1, i2 1>
; Can be simplified to just:
%sub = bitcast <1 x i8> %ext to <4 x i2>
```
Closes #87624
The VLMUL and policy enums originally lived in RISCVBaseInfo.h in the
backend, which is where everything else in the RISCVII namespace is
defined.
RISCVTargetParser.h is used by much more of the compiler, and it
doesn't really make sense to have two different namespaces exposed.
These enums are both associated with VTYPE, so the RISCVVType
namespace seems like a good home for them.
The last use was removed in:
commit ac9e67756e0157793d565c2cceaf82e4403f58ba
Author: Yingwei Zheng <dtcxzyw2333@gmail.com>
Date: Mon Feb 26 01:53:16 2024 +0800
For GEPs, we have three bit widths involved: The pointer bit width, the
index bit width, and the bit width of the GEP operands.
The correct behavior here is:
* We need to sextOrTrunc the GEP operand to the index width *before*
multiplying by the scale.
* If the index width and pointer width differ, GEP only ever modifies
the low bits. Adds should not overflow into the high bits.
I'm testing this via unit tests because it's a bit tricky to test in IR
with InstCombine canonicalization getting in the way.
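As a sketch of the index-width rule (a hypothetical module, not one of the unit tests): with 64-bit pointers but a 32-bit index width, the offset is truncated to 32 bits before scaling, and the resulting add cannot carry into the high 32 bits of the pointer:
```
target datalayout = "p:64:64:64:32"   ; 64-bit pointers with a 32-bit index width

define ptr @offset(ptr %base, i64 %i) {
  ; %i is truncated to the 32-bit index width before being multiplied by the
  ; element size; only the low 32 bits of the pointer value change.
  %g = getelementptr i32, ptr %base, i64 %i
  ret ptr %g
}
```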
- **[ValueTracking] Add test for issue 124275**
- **[ValueTracking] Fix bug of using wrong condition for deducing
KnownBits**
Fixes https://github.com/llvm/llvm-project/issues/124275
Bug was introduced by https://github.com/llvm/llvm-project/pull/114689
Now that computeKnownBits supports breaking out of recursive Phi
nodes, `IncValue` can be an operand of a different Phi than `P`. This
breaks the assumption we previously relied on when using the condition
at `CxtI` to constrain `IncValue`.
As part of the "RemoveDIs" project, BasicBlock::iterator now carries a
debug-info bit that's needed when getFirstNonPHI and similar feed into
instruction insertion positions. Call-sites where that's necessary were
updated a year ago; however, to ensure some type safety, we'd like to
have all calls to getFirstNonPHI use the iterator-returning version.
This patch changes a bunch of call-sites calling getFirstNonPHI to use
getFirstNonPHIIt, which returns an iterator. All these call sites are
where it's obviously safe to fetch the iterator then dereference it. A
follow-up patch will contain less-obviously-safe changes.
We'll eventually deprecate and remove the instruction-pointer
getFirstNonPHI, but not before adding concise documentation of what
considerations are needed (very few).
---------
Co-authored-by: Stephen Tozer <Melamoto@gmail.com>
This callsite assumes `getUnderlyingObjectAggressive` returns a non-null
pointer:
273a94b3d5/llvm/lib/Transforms/IPO/FunctionAttrs.cpp (L124)
But it can return null when there are cycles in the value chain, so that
there are no more `Worklist` items left to explore, in which case it just
returns `Object` at the end of the function without ever setting it:
9b5857a683/llvm/lib/Analysis/ValueTracking.cpp (L6866-L6867)
9b5857a683/llvm/lib/Analysis/ValueTracking.cpp (L6889)
`getUnderlyingObject` does not seem to return null either, judging by
its code and its callsites, so I don't think it was the author's
intention for `getUnderlyingObjectAggressive` to return null.
So this checks whether `Object` is null at the end, and if so, falls
back to the original first value.
---
The test case here was reduced by bugpoint and further reduced manually,
but I find it hard to reduce it further.
To trigger this bug, the memory operation should not be reachable from
the entry BB, because the `phi`s should form a cycle without introducing
another value from the entry. I tried a minimal `phi` cycle with three
BBs (entry BB + two BBs in a cycle), but it was skipped here:
273a94b3d5/llvm/lib/Transforms/IPO/FunctionAttrs.cpp (L121-L122)
To get a result other than `ModRefInfo::NoModRef`, the length of the
`phi` chain needs to be greater than the `MaxLookup` value set in this
function:
02403f4e45/llvm/lib/Analysis/BasicAliasAnalysis.cpp (L744)
But just lengthening the `phi` chain to 8 didn't trigger the same error
in `getUnderlyingObjectAggressive`, because `getUnderlyingObject` here
passes through a single chain of `phi`s, so not all `phi`s end up in
`Visited`:
9b5857a683/llvm/lib/Analysis/ValueTracking.cpp (L6863)
So I just submit here the smallest test case I managed to create.
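For reference, the shape that makes `getUnderlyingObjectAggressive` run out of worklist items is a `phi` cycle with no value coming from outside the cycle; a minimal hypothetical example (which, as noted above, is by itself not enough to reproduce the FunctionAttrs crash):
```
define void @phi_cycle() {
entry:
  ret void

bb1:                                   ; not reachable from entry
  %a = phi ptr [ %b, %bb2 ]
  br label %bb2

bb2:
  %b = phi ptr [ %a, %bb1 ]
  store i32 0, ptr %b
  br label %bb1
}
```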
---
Fixes #117308 and fixes #122166.
66badf2 (VT: teach a special-case optz about samesign) introduced a
compile-time regression due to the use of CmpPredicate::getMatching,
which is unnecessarily inefficient. Introduce
CmpPredicate::getPreferredSignedPredicate, which alleviates the
inefficiency problem and squashes the compile-time regression.
Create an abstraction over isImplied{True,False}ByMatchingCmp to
faithfully communicate the result of both functions, cleaning up code in
callsites. While at it, fix a bug in the implied-false version of the
function, which was inadvertently dropping samesign information.
There is a narrow special-case in isImpliedCondICmps that can benefit
from being taught about samesign. Since it costs us nothing to implement
it, teach it about samesign, for completeness. This patch marks the
completion of the effort to teach ValueTracking about samesign.
Move isImplied{True,False}ByMatchingCmp from CmpInst to ICmpInst, so
that it can operate on CmpPredicate instead of CmpInst::Predicate, and
teach it about samesign. There are two callers of this function, and we
choose to migrate the one in ValueTracking, namely
isImpliedCondMatchingOperands, to CmpPredicate, thereby teaching it
about samesign, with visible test impact.
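For instance (a hypothetical example of the kind of implication this enables, not a test from the patch), with matching operands a samesign unsigned predicate implies the corresponding signed one:
```
define i1 @implied(i32 %a, i32 %b) {
entry:
  %c1 = icmp samesign ult i32 %a, %b
  br i1 %c1, label %then, label %else

then:
  ; On this path %a and %b have the same sign and %a <u %b, so %a <s %b
  ; holds as well; this compare can be folded to true.
  %c2 = icmp slt i32 %a, %b
  ret i1 %c2

else:
  ret i1 false
}
```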
An occasional idiom for rotation is "(A << B) + (A >> (BitWidth - B))".
Currently this is not well handled on targets with native
funnel-shift/rotate support. Add a special case to haveNoCommonBitsSet
to ensure that the addition is converted to a disjoint or in InstCombine,
so that during instruction selection the idiom can be converted into an
efficient rotation implementation.
Proof: https://alive2.llvm.org/ce/z/WdCZsN
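A hypothetical function showing the idiom (names are mine):
```
define i32 @rotl(i32 %a, i32 %b) {
  %shl = shl i32 %a, %b
  %amt = sub i32 32, %b
  %shr = lshr i32 %a, %amt
  ; %shl and %shr have no set bits in common, so InstCombine can turn this
  ; add into "or disjoint", which the backend can then match to a native
  ; rotate / funnel shift.
  %rot = add i32 %shl, %shr
  ret i32 %rot
}
```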
isImpliedCondICmps() and its callers in ValueTracking can greatly
benefit from being taught about samesign. As a first step, teach one
caller, namely isImpliedCondOperands(). Very minimal changes are
required for this, as CmpPredicate::getMatching() does most of the work.
A signed min-max clamp is a sequence of smin and smax intrinsics that
constrains a signed value to the range: smin <= value <= smax.
The patch improves the calculation of KnownBits for a value subjected to
such signed clamping.
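For example (a hypothetical i16 clamp, not a test from the patch), clamping to [0, 255] makes the top eight bits known zero:
```
declare i16 @llvm.smax.i16(i16, i16)
declare i16 @llvm.smin.i16(i16, i16)

define i16 @clamp(i16 %x) {
  %lo = call i16 @llvm.smax.i16(i16 %x, i16 0)
  %v  = call i16 @llvm.smin.i16(i16 %lo, i16 255)
  ; %v is known to lie in [0, 255], so bits 8..15 are known zero and the
  ; mask below is a no-op.
  %r  = and i16 %v, 255
  ret i16 %r
}
```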
When Sel and Cmp are in different integer types,
From: (K and N are bit widths, K < N; a and b are source operands.)
bN = Ext(bK)
cond = Cmp(aN, bN)
aK = Trunc aN
retK = Sel(cond, aK, bK)
To:
bN = Ext(bK)
cond = Cmp(aN, bN)
retN = Sel(cond, aN, bN)
retK = Trunc retN
Though Sel's operand width becomes larger, the benefit of making the
type width in Sel the same as in Cmp is that it enables combining
to max/min intrinsics, and also gives better performance for SIMD
instructions.
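A concrete instance of the transform (hypothetical functions with K = i8 and N = i32):
```
define i8 @before(i32 %a, i8 %b) {
  %b32  = zext i8 %b to i32
  %cond = icmp ult i32 %a, %b32
  %a8   = trunc i32 %a to i8
  %ret  = select i1 %cond, i8 %a8, i8 %b
  ret i8 %ret
}

define i8 @after(i32 %a, i8 %b) {
  ; The select now has the same width as the compare, so it can be combined
  ; into @llvm.umin.i32; only the final result is truncated back to i8.
  %b32  = zext i8 %b to i32
  %cond = icmp ult i32 %a, %b32
  %sel  = select i1 %cond, i32 %a, i32 %b32
  %ret  = trunc i32 %sel to i8
  ret i8 %ret
}
```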
References of correctness:
https://alive2.llvm.org/ce/z/Y4Kegm
https://alive2.llvm.org/ce/z/qFtjtR
Reference of generated code comparison:
https://gcc.godbolt.org/z/o97svGvYM
https://gcc.godbolt.org/z/59Ynj91ov
With the introduction of CmpPredicate in 51a895a (IR: introduce struct
with CmpInst::Predicate and samesign), PatternMatch is one of the first
key pieces of infrastructure that must be updated to match a CmpInst
respecting samesign information. Implement this change to Cmp-matchers.
This is a preparatory step in migrating the codebase over to
CmpPredicate. Since no functional changes are desired at this stage,
we have chosen not to migrate CmpPredicate::operator==(CmpPredicate)
calls to use CmpPredicate::getMatching(), as that would have visible
impact on tests that are not yet written: instead, we call
CmpPredicate::operator==(Predicate), preserving the old behavior, while
also inserting a few FIXME comments for follow-ups.
Introduce llvm::CmpPredicate, an abstraction over either a floating-point
predicate or an integer predicate packed with samesign information,
in order to ease extending large portions of the codebase that take a
CmpInst::Predicate to respect the samesign flag.
We have chosen to demonstrate the utility of this new abstraction by
migrating parts of ValueTracking, InstructionSimplify, and InstCombine
from CmpInst::Predicate to llvm::CmpPredicate. There should be no
functional changes, as we don't perform any extra optimizations with
samesign in this patch, or use CmpPredicate::getMatching.
The design approach taken by this patch allows unaudited callers of
APIs that take an llvm::CmpPredicate to silently drop the samesign
information; it does not pose a correctness issue, and allows us to
migrate the codebase piece-wise.
This change is part of this proposal:
https://discourse.llvm.org/t/rfc-all-the-math-intrinsics/78294
- Return true for atan2 from isTriviallyVectorizable
- Add atan2 to VecFuncs.def for the MASSV and Accelerate libraries.
- Add atan2 to hasOptimizedCodeGen
- Add atan2 support in llvm/lib/Analysis/ValueTracking.cpp
llvm::getIntrinsicForCallSite and update vectorization tests
- Add atan2 name check to isLoweredToCall in
llvm/include/llvm/Analysis/TargetTransformInfoImpl.h
- Note: there's no test coverage for these names in isLoweredToCall, except that Transforms/TailCallElim/inf-recursion.ll is impacted by the "fabs" case
Thanks to @jroelofs for the atan2 accelerate veclib and associated test
additions, plus the hasOptimizedCodeGen addition.
Part of: Implement the atan2 HLSL Function #70096.
A urem recurrence has the property that the result can never exceed the
start value. A udiv recurrence has the property that the result can
never exceed either the start value or the numerator, whichever is
greater. Implement a simplification based on these properties.
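For example (a hypothetical loop, not a test from the patch), a `urem` recurrence never exceeds its start value, so the final compare below is known to be false:
```
define i1 @urem_rec(i32 %start, i32 %d, i32 %n) {
entry:
  br label %loop

loop:
  %i = phi i32 [ 0, %entry ], [ %i.next, %loop ]
  %r = phi i32 [ %start, %entry ], [ %r.next, %loop ]
  ; Taking a remainder can only shrink the value, so %r and %r.next never
  ; exceed %start.
  %r.next = urem i32 %r, %d
  %i.next = add i32 %i, 1
  %done = icmp eq i32 %i.next, %n
  br i1 %done, label %exit, label %loop

exit:
  %cmp = icmp ugt i32 %r.next, %start   ; known false
  ret i1 %cmp
}
```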