Seeing how we can't generate any debug intrinsics any more, delete a
variety of codepaths where they're handled. For the most part these are
plain deletions; in others I've tweaked comments to remain coherent, or
added an explicit type to (what were) type-generic lambdas.
This doesn't cover all the DbgInfoIntrinsic call sites, but it handles
most of the simple scenarios.
Co-authored-by: Nikita Popov <github@npopov.com>
When the original predicate is ordered and both operands are non-NaN,
`Ordered` should be set to true. This variable still matters even when
both operands are non-NaN, because FMF only applies to the select, not
to the fcmp.
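As a minimal sketch (value names are mine, not from the original
report), the pattern looks like:
```
define float @ordered_select(float %x, float %y) {
  ; An ordered predicate: the compare itself already excludes NaNs,
  ; independent of any fast-math flags.
  %cmp = fcmp olt float %x, %y
  ; The nnan flag lives on the select, not on the fcmp, so `Ordered`
  ; still has to be tracked even when both operands are non-NaN.
  %sel = select nnan i1 %cmp, float %x, float %y
  ret float %sel
}
```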
Closes https://github.com/llvm/llvm-project/issues/143123.
Having a finite Depth (i.e., a recursion limit) for computeKnownBits is
very limiting, but it is currently a load-bearing necessity, as all
KnownBits are recomputed on each call and there is no caching. As a
prerequisite for an effort to remove the recursion limit altogether,
either using a clever caching technique or writing an
easily-invalidable KnownBits analysis, make the Depth argument in the
ValueTracking APIs uniformly the last argument, with a default value.
This will make it easier to remove the argument when the time comes:
many callers that previously passed 0 explicitly are now updated to
omit the argument altogether.
Add `GenericFloatingPointPredicateUtils` in order to generalize the
effects of floating-point comparisons on `KnownFPClass` for both IR and
MIR.
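For instance (a hand-written illustration, not a test from the patch),
the shared utilities let both IR and MIR derive facts like this from
the predicate alone:
```
define double @ord_refines(double %x) {
  ; On the true edge, %x is known not to be a NaN; that KnownFPClass
  ; refinement follows purely from the ord predicate.
  %ord = fcmp ord double %x, 0.000000e+00
  br i1 %ord, label %not_nan, label %maybe_nan

not_nan:
  ret double %x

maybe_nan:
  ret double 0.000000e+00
}
```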
---------
Co-authored-by: Matt Arsenault <arsenm2@gmail.com>
For now, use the same treatment as minnum/maxnum, but these should
diverge. Alive2 seems happy with this, except for some preexisting bugs
with weird denormal modes.
Closes https://github.com/llvm/llvm-project/issues/137582.
In the original case, LVI uses the edge information on `%entry ->
%if.end` to get a more precise result. However, since the call to
`smin` has a `noundef` return attribute, immediate UB is triggered
after the optimization.
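A hypothetical reduction of the shape of the problem (block and value
names mirror the description above):
```
declare i32 @llvm.smin.i32(i32, i32)

define i32 @src(i32 %x) {
entry:
  %cmp = icmp sgt i32 %x, 0
  br i1 %cmp, label %if.end, label %else

else:
  ret i32 0

if.end:
  ; On the edge %entry -> %if.end, LVI knows %x > 0, so it can
  ; conclude %min == 1. But speculating the call up into %entry,
  ; where %x may still be undef, is immediate UB because of the
  ; noundef return attribute, even though smin itself is
  ; speculatable.
  %min = call noundef i32 @llvm.smin.i32(i32 %x, i32 1)
  ret i32 %min
}
```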
Currently, `isSafeToSpeculativelyExecuteWithOpcode(%min)` returns true
because 6a288c1e32 only checks whether the function is speculatable.
However, that is not enough in this case.
This patch takes UB-implying attributes into account if
`IgnoreUBImplyingAttrs` is set to false. If it is set to true, the
caller is responsible for correctly propagating UB-implying attributes.
Reapply after d51b2785abf77978d9218a7b6fb5b8ec6c770c31, which should
fix optimization regressions.
After #135642, we have a range attribute on the intrinsic declaration,
so we should not need the special handling here.
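For illustration, with a range attribute the generic code can pick the
bounds up straight from the declaration (scmp here is my example
choice, not necessarily the intrinsic from #135642):
```
; Return-value range [-1, 1] is encoded on the declaration itself, so
; computeKnownBits no longer needs intrinsic-specific handling.
declare range(i8 -1, 2) i8 @llvm.scmp.i8.i32(i32, i32)
```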
The KnownBits passed to computeKnownBitsFromRangeMetadata must have the
same bit width as the range metadata; otherwise the computed results
will be incorrect.
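For example, for the load below the metadata is expressed in i32, so
the caller must hand in a 32-bit-wide KnownBits:
```
define i32 @load_with_range(ptr %p) {
  ; !range uses the loaded type's width (i32 here); a KnownBits of any
  ; other width would misinterpret the bounds.
  %v = load i32, ptr %p, !range !0
  ret i32 %v
}

!0 = !{i32 0, i32 1024}
```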
---------
Signed-off-by: John Lu <John.Lu@amd.com>
In https://github.com/llvm/llvm-project/pull/97762, we assumed that the
minimum possible value of X being NaN implies that X is NaN. But this
doesn't hold for the x86_fp80 format. If the knownbits of X are
`?'011111111111110'????????????????????????????????????????????????????????????????`,
the minimum possible value of X is NaN/unnormal, yet X can still be a
normal value.
Closes https://github.com/llvm/llvm-project/issues/130408.
Fixes (keep it open) #130110.
If an incoming value is the PHI itself, we can skip it. If we can
guarantee that the other incoming values are neither undef nor poison,
then we can also guarantee that the PHI isn't either. If we cannot
guarantee that, there is no point in computing it.
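A minimal sketch of the self-referential case (names are illustrative):
```
define i32 @self_phi(i32 %x, i1 %c) {
entry:
  br label %loop

loop:
  ; The [ %p, %loop ] entry is the PHI itself and can be skipped:
  ; %p is guaranteed not undef/poison exactly when the remaining
  ; incoming value %x is.
  %p = phi i32 [ %x, %entry ], [ %p, %loop ]
  br i1 %c, label %loop, label %exit

exit:
  ret i32 %p
}
```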
When a wider scalar/vector type containing all sign bits is bitcast to a
narrower vector type, we can deduce that the resulting narrow elements
will also be all sign bits. This matches existing behavior in
SelectionDAG and helps optimize cases involving SSE intrinsics where
sign-extended values are bitcast between different vector types.
The current implementation fails to recognize that an arithmetic right
shift is redundant when applied to elements that are already known to be
all sign bits. This PR improves ComputeNumSignBitsImpl to track this
information through bitcasts, enabling the optimization of such cases.
```
%ext = sext <1 x i1> %cmp to <1 x i8>
%sub = bitcast <1 x i8> %ext to <4 x i2>
%sra = ashr <4 x i2> %sub, <i2 1, i2 1, i2 1, i2 1>
; Can be simplified to just:
%sub = bitcast <1 x i8> %ext to <4 x i2>
```
Closes #87624
The VLMUL and policy enums originally lived in RISCVBaseInfo.h in the
backend, which is where everything else in the RISCVII namespace is
defined.
RISCVTargetParser.h is used by much more of the compiler, and it
doesn't really make sense to have two different namespaces exposed.
These enums are both associated with VTYPE so using the RISCVVType
namespace seems like a good home for them.
The last use was removed in:
commit ac9e67756e0157793d565c2cceaf82e4403f58ba
Author: Yingwei Zheng <dtcxzyw2333@gmail.com>
Date: Mon Feb 26 01:53:16 2024 +0800
For GEPs, we have three bit widths involved: the pointer bit width, the
index bit width, and the bit width of the GEP operands.
The correct behavior here is:
* We need to sextOrTrunc the GEP operand to the index width *before*
multiplying by the scale.
* If the index width and pointer width differ, the GEP only ever
modifies the low bits; adds should not overflow into the high bits
(see the sketch below).
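A sketch of the second point, using a hypothetical datalayout where the
index width (the fourth component of the `p` spec) is narrower than the
pointer width:
```
target datalayout = "p:64:64:64:32"

define ptr @gep_low_bits(ptr %base, i64 %off) {
  ; %off is first sext/trunc'd to the 32-bit index width, then
  ; multiplied by 4 (the element size); the resulting add only
  ; touches the low 32 bits of %base, so the high 32 bits pass
  ; through unchanged.
  %gep = getelementptr i32, ptr %base, i64 %off
  ret ptr %gep
}
```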
I'm testing this via unit tests because it's a bit tricky to test in IR
with InstCombine canonicalization getting in the way.