The patch enables detection of minnum/maxnum patterns for floating-point
instructions represented as select/cmp. It also enables better cost
estimation for integer min/max patterns, since the compiler now estimates
the scalars separately.
Reviewers: nikic, RKSimon
Reviewed By: RKSimon
Pull Request: https://github.com/llvm/llvm-project/pull/98570
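As an illustration (not taken from the patch itself; the function name is made up), this is the kind of fcmp+select pattern that can now be recognized as a floating-point min when costing; the equivalence to `llvm.minnum` relies on the fast-math flags:
```llvm
define float @fcmp_select_min(float %a, float %b) {
  ; With nnan/nsz (implied by 'fast'), this pair behaves like
  ; @llvm.minnum.f32(%a, %b), so it can be costed as a min rather than as an
  ; independent compare plus select.
  %cmp = fcmp fast olt float %a, %b
  %min = select fast i1 %cmp, float %a, float %b
  ret float %min
}
```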
Add KnownBits computations to ValueTracking and X86 DAG lowering.
These instructions add/subtract adjacent vector elements in their operands. Example: phadd [X1, X2] [Y1, Y2] = [X1 + X2, Y1 + Y2]. This means that, in this example, we can compute the KnownBits of the operation by computing the KnownBits of [X1, X2] + [X1, X2] and [Y1, Y2] + [Y1, Y2] and intersecting the results. This approach also generalizes to all x86 vector types.
There are also the operations phadd.sw and phsub.sw, which perform saturating addition/subtraction. Use sadd_sat and ssub_sat to compute the KnownBits of these operations.
Also adjust the existing test case pr53247.ll because it can be transformed to a constant using the new KnownBits computation.
Fixes #82516.
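A minimal sketch (not from the patch; the function name is illustrative) of the kind of fact the new KnownBits computation provides:
```llvm
declare <8 x i16> @llvm.x86.ssse3.phadd.w.128(<8 x i16>, <8 x i16>)

define <8 x i16> @phadd_known_high_zeros(<8 x i16> %x, <8 x i16> %y) {
  ; Clamp every element of both operands to the range 0..7.
  %a = and <8 x i16> %x, <i16 7, i16 7, i16 7, i16 7, i16 7, i16 7, i16 7, i16 7>
  %b = and <8 x i16> %y, <i16 7, i16 7, i16 7, i16 7, i16 7, i16 7, i16 7, i16 7>
  ; Each horizontal sum is at most 14, so bits 4..15 of every result element
  ; are known zero and the final mask below is a no-op.
  %h = call <8 x i16> @llvm.x86.ssse3.phadd.w.128(<8 x i16> %a, <8 x i16> %b)
  %r = and <8 x i16> %h, <i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15, i16 15>
  ret <8 x i16> %r
}
```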
The logic in llvm::getVectorConstantRange() can be a bit
inconvenient to use in some cases because of the need to handle
the scalar case separately. Generalize it to handle all constants,
and move it to live directly on Constant.
When using stripPointerCasts() and getUnderlyingObject(), don't
strip through a bitcast from ptr to <1 x ptr>, which is not a
no-op pointer cast. Calling code is generally not prepared to
handle that situation, resulting in incorrect alias analysis
results for example.
Fixes https://github.com/llvm/llvm-project/issues/97600.
Add a common helper used for computeConstantRange() and LVI. The
implementation is a mix of both, with the efficient handling for
ConstantDataVector taken from computeConstantRange(), and the
general handling (including non-splat poison) from LVI.
Previously we only handled the `L0 == R0` case if both `L1` and `R1`
were constant.
We can get more out of the analysis using general constant ranges
instead.
For example, `X u> Y` implies `X != 0`.
In general, any strict comparison on `X` implies that `X` is not equal
to the boundary value for that signedness, and constant ranges with/without
sign bits can be useful in deducing implications.
Closes #85557
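A hedged sketch of the kind of implication this enables (the function name is made up):
```llvm
define i1 @strict_ugt_implies_nonzero(i8 %x, i8 %y) {
entry:
  %cond = icmp ugt i8 %x, %y
  br i1 %cond, label %taken, label %exit
taken:
  ; On this path %x u> %y holds, so %x cannot be 0 and this compare is
  ; known to be false.
  %is_zero = icmp eq i8 %x, 0
  ret i1 %is_zero
exit:
  ret i1 false
}
```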
Extract an adjustKnownBitsForSelectArm() helper for the
ValueTracking logic and make use of it in SimplifyDemandedBits().
This fixes a consistency violation under instcombine-verify-known-bits.
Simplify the arms of a select based on the KnownBits implied by its condition.
For now this only handles the case where the select arm folds to a constant,
but this can be generalized to handle other patterns by using
SimplifyDemandedBits instead (in that case we would also have to limit to
non-undef conditions).
This is implemented by adding a new member to SimplifyQuery that can be used
to inject an additional condition. The affected values are pre-computed and
we don't call computeKnownBits() if the select arms don't contain affected
values. This reduces the cost in some pathological cases.
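A hedged example of the constant-arm case (the function name is made up):
```llvm
define i8 @select_arm_from_cond_bits(i8 %x) {
  %lo = and i8 %x, 3
  %cond = icmp eq i8 %lo, 3
  ; In the true arm the condition implies the low two bits of %x are set,
  ; so this value is known to be the constant 2 there and the arm can fold.
  %arm = and i8 %x, 2
  %sel = select i1 %cond, i8 %arm, i8 0
  ret i8 %sel
}
```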
This PR is intended to address the limited SLPVectorizer support for tan
raised in the comments of this PR:
https://github.com/llvm/llvm-project/pull/94559.
Right now, emitting the tan intrinsic allows you to vectorize tan, but
emitting the libfunc does not. To address this, the libcall needs to be
mapped to the intrinsic, and the libcall and function name need to be
marked appropriately so they can be optimized or defined as a call
lowering.
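As a sketch of the motivating pattern (the function name is made up, and actual vectorization assumes the target provides a usable vector tan), adjacent scalar `tanf` libcalls like these are what should become vectorizable once the libcall is mapped to the `llvm.tan` intrinsic:
```llvm
define void @tan_pair(ptr %p) {
  %q = getelementptr inbounds float, ptr %p, i64 1
  %a = load float, ptr %p, align 4
  %b = load float, ptr %q, align 4
  ; Two adjacent scalar libcalls that SLP could treat as one <2 x float> tan.
  %ta = call float @tanf(float %a)
  %tb = call float @tanf(float %b)
  store float %ta, ptr %p, align 4
  store float %tb, ptr %q, align 4
  ret void
}

declare float @tanf(float)
```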
This is a helper to avoid writing `getModule()->getDataLayout()`. I
regularly try to use this method only to remember it doesn't exist...
`getModule()->getDataLayout()` is also a common (the most common?)
reason why code has to include the Module.h header.
gep nuw can be null if and only if both the base pointer and offset
are null. Unlike the inbounds case this does not depend on whether
the null pointer is valid.
Proofs: https://alive2.llvm.org/ce/z/PLoqK5
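A hedged sketch of what this allows (the function name is made up):
```llvm
define i1 @gep_nuw_nonnull(ptr %base) {
  ; With nuw, the offset addition cannot wrap, so a non-zero offset guarantees
  ; a non-null result regardless of %base, and regardless of whether the null
  ; pointer is a valid address in this address space.
  %p = getelementptr nuw i8, ptr %base, i64 4
  %is_null = icmp eq ptr %p, null   ; known false
  ret i1 %is_null
}
```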
I am using `isKnownInversion` in the following PR:
https://github.com/llvm/llvm-project/pull/94915
It is useful to have the method in a shared class so I can reuse it. I am not sure if `ValueTracking` is the correct place, but it looks like most of the methods with the `isKnownX` pattern belong there.
This defines a new kind of IR Constant that represents a ptrauth signed
pointer, as used in AArch64 PAuth.
It allows representing most kinds of signed pointer constants used thus
far in the LLVM ptrauth implementations, notably those used in the
Darwin and ELF ABIs being implemented for C/C++. These signed pointer
constants are then lowered to ELF/MachO relocations.
These can be simply thought of as a constant `llvm.ptrauth.sign`, with
the interesting addition of discriminator computation: the `ptrauth`
constant can also represent a combined blend, when both address and
integer discriminator operands are used. Both operands are otherwise
optional, with default values 0/null.
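A hedged sketch of the textual form (the symbol names are made up): a global holding @callee signed with key 0 and a discriminator that blends the integer 1234 with the storage address itself:
```llvm
declare void @callee()

; Key 0, integer discriminator 1234, address-discriminated on @signed_slot.
@signed_slot = global ptr ptrauth (ptr @callee, i32 0, i64 1234, ptr @signed_slot)
```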
The ops supported are: `add`, `sub`, `xor`, `or`, `umax`, `uadd.sat`
Proofs: https://alive2.llvm.org/ce/z/8ZMSRg
The `add` case actually comes up in SPECInt, the rest are here mostly
for completeness.
Closes #88579
According to IEEE Std 754-2019, `sqrt` returns NaN when the input is
negative (except for -0). In this case, we cannot make assumptions about
the sign bit of the result.
Fixes https://github.com/llvm/llvm-project/issues/92217
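A hedged example of a fold this rules out (the function name is made up): if `%x` may be negative, `%s` may be NaN with an unspecified sign bit, so the `fabs` cannot be dropped.
```llvm
declare double @llvm.sqrt.f64(double)
declare double @llvm.fabs.f64(double)

define double @sqrt_then_fabs(double %x) {
  %s = call double @llvm.sqrt.f64(double %x)
  ; Not a no-op: for negative %x, %s is NaN and its sign bit is unknown.
  %r = call double @llvm.fabs.f64(double %s)
  ret double %r
}
```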
`(icmp ule/ult (add nuw X, Y), C)` implies both `(icmp ule/ult X, C)` and
`(icmp ule/ult Y, C)`. We can use this to deduce leading zeros in `X`/`Y`.
`(icmp uge/ugt (sub nuw X, Y), C)` implies `(icmp uge/ugt X, C)`. We
can use this to deduce leading ones in `X`.
Proofs: https://alive2.llvm.org/ce/z/sc5k22
Closes #87180
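A hedged sketch of the `add nuw` case (the function name is made up):
```llvm
define i8 @add_nuw_implies_leading_zeros(i8 %x, i8 %y) {
entry:
  %s = add nuw i8 %x, %y
  %cond = icmp ult i8 %s, 16
  br i1 %cond, label %taken, label %exit
taken:
  ; %x u<= %s u< 16 here, so the top four bits of %x are known zero and the
  ; mask is a no-op.
  %m = and i8 %x, 15
  ret i8 %m
exit:
  ret i8 0
}
```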
This patch relands https://github.com/llvm/llvm-project/pull/86409.
I mistakenly thought that `Known.makeNegative()` clears the sign bit of
`Known.Zero`. This patch fixes the assertion failure by explicitly
clearing the sign bit.
There is a missed optimization in
``` llvm
define i8 @known_power_of_two_rust_next_power_of_two(i8 %x, i8 %y) {
  %1 = add i8 %x, -1
  %2 = tail call i8 @llvm.ctlz.i8(i8 %1, i1 true)
  %3 = lshr i8 -1, %2
  %4 = add i8 %3, 1
  %5 = icmp ugt i8 %x, 1
  %p = select i1 %5, i8 %4, i8 1
  %r = urem i8 %y, %p
  ret i8 %r
}

declare i8 @llvm.ctlz.i8(i8, i1)
```
which is extracted from the Rust code
``` rust
fn func(x: usize, y: usize) -> usize {
    let z = x.next_power_of_two();
    y % z
}
```
Here `%p` (a.k.a. `z`) is semantically a power-of-two, so `y urem p` can
be optimized to `y & (p - 1)`. (Alive2 proof:
https://alive2.llvm.org/ce/z/H3zooY)
---
It could be generalized to recognizing `LShr(UINT_MAX, Y) + 1` as a
power-of-two, which is what this PR does.
Alive2 proof: https://alive2.llvm.org/ce/z/zUPTbc
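The generalized pattern in isolation, as a hedged sketch (the function name is made up):
```llvm
define i8 @urem_by_lshr_mask_plus_one(i8 %x, i8 %y) {
  %mask = lshr i8 -1, %y
  ; %p is a power of two (or zero when %y is 0, in which case the urem is UB
  ; anyway), so the remainder can be rewritten as: and i8 %x, %mask
  %p = add i8 %mask, 1
  %r = urem i8 %x, %p
  ret i8 %r
}
```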
Inline calls to strcmp(s1, s2) and strncmp(s1, s2, N), where N and
exactly one of s1 and s2 are constant.
For example:
```c
int res = strcmp(s, "ab");
```
is converted to
```c
int res = (int)s[0] - (int)'a';
if (res != 0)
goto END;
res = (int)s[1] - (int)'b';
if (res != 0)
goto END;
res = (int)s[2] - (int)'\0';
END:
```
Ported from a similar gcc feature [Inline strcmp with small constant
strings](https://gcc.gnu.org/bugzilla/show_bug.cgi?id=78809).
If the value is not boolean and we are checking for `Undef` or
`UndefOrPoison`, we can avoid the potentially expensive IDom walk.
This should improve compile time for isGuaranteedNotToBeUndefOrPoison
and isGuaranteedNotToBeUndef.
Swapping the operands of a select is not valid if one hand is more
poisonous than the other, for example when the zero used in the negation
contains poison elements.
Fix this by adding an extra parameter to isKnownNegation() to forbid
poison elements.
I've implemented this using manual checks to avoid needing four variants
for the NeedsNSW/AllowPoison combinations. Maybe there is a better way
to do this...
Fixes https://github.com/llvm/llvm-project/issues/89669.
This improves handling of the `threadlocal.address` intrinsic in analyses:
the thread ID cannot change within a function, with the exception of
suspend points of pre-split coroutines. This changes
`llvm::getUnderlyingObject` to look through `threadlocal.address` in
these cases.
`GlobalsAAResult::AnalyzeUsesOfPointer` checks whether an address can be
traced to simple loads/stores or escapes to other places. When starting the
analysis from a thread-local `GlobalValue`, the `threadlocal.address`
intrinsic is safe to skip here.
This makes progress on issue #87437.
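A hedged sketch of the pattern (names made up): outside of pre-split coroutines, the underlying object of `%p` is the thread-local global itself.
```llvm
@tls = thread_local global i32 0

define i32 @read_tls() {
  ; getUnderlyingObject(%p) can now report @tls here.
  %p = call ptr @llvm.threadlocal.address.p0(ptr @tls)
  %v = load i32, ptr %p, align 4
  ret i32 %v
}

declare ptr @llvm.threadlocal.address.p0(ptr)
```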
In #88217 a large set of matchers was changed to only accept poison
values in splats, but not undef values. This is because we now use
poison for non-demanded vector elements, and allowing undef can cause
correctness issues.
This patch covers the remaining matchers by changing the AllowUndef
parameter of getSplatValue() to AllowPoison instead. We also carry out
corresponding renames in matchers.
As a followup, we may want to change the default for things like m_APInt
to m_APIntAllowPoison (as this is much less risky when only allowing
poison), but this change doesn't do that.
There is one caveat here: We have a single place
(X86FixupVectorConstants) which does require handling of vector splats
with undefs. This is because this works on backend constant pool
entries, which currently still use undef instead of poison for
non-demanded elements (because SDAG as a whole does not have an explicit
poison representation). As it's just the single use, I've open-coded a
getSplatValueAllowUndef() helper there, to discourage use in any other
places.
Rename has/dropPoisonGeneratingFlagsOrMetadata to
has/dropPoisonGeneratingAnnotations and make it also handle
nonnull, align and range return attributes on calls, similar
to the existing handling for !nonnull, !align and !range metadata.
Prior to #85863, the required parameters of llvm::isKnownNonZero were
Value and DataLayout. After, they are Value, Depth, and SimplifyQuery,
where SimplifyQuery is implicitly constructible from DataLayout. The
change to move Depth before SimplifyQuery required callers to be updated
unnecessarily, and as commented in #85863, we actually want Depth to be
after SimplifyQuery anyway so that it can be defaulted and the caller
does not need to specify it.
As the undef can be replaced with a zero value, this is not legal
in the general case. We can only allow poison values. This matches
what the other ValueTracking helpers like computeKnownBits() do.
`(icmp uge/ugt (and X, Y), C)` implies both `(icmp uge/ugt X, C)` and
`(icmp uge/ugt Y, C)`. We can use this to deduce leading ones in `X`.
`(icmp ule/ult (or X, Y), C)` implies both `(icmp ule/ult X, C)` and
`(icmp ule/ult Y, C)`. We can use this to deduce leading zeros in `X`.
Closes #86059
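A hedged sketch of the `and` case (the function name is made up):
```llvm
define i8 @and_implies_leading_ones(i8 %x, i8 %y) {
entry:
  %a = and i8 %x, %y
  %cond = icmp ugt i8 %a, 239
  br i1 %cond, label %taken, label %exit
taken:
  ; %x u>= %a u> 239 here, so the top four bits of %x are known one and the
  ; 'or' below is a no-op.
  %m = or i8 %x, -16
  ret i8 %m
exit:
  ret i8 0
}
```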