The ORE argument threaded through ValueTracking is used in only a
single, untested place. It is also essentially never passed: the
only places that do pass it were added very recently as part of the
KnownFPClass migration, which is vanishingly unlikely to hit this
code path. Remove this effectively dead argument.
Differential Revision: https://reviews.llvm.org/D151562
This patch uses the helper function implemented in D149699 to fold the following cases:
```
bitreverse(logic_op(x, bitreverse(y))) -> logic_op(bitreverse(x), y)
bitreverse(logic_op(bitreverse(x), y)) -> logic_op(x, bitreverse(y))
bitreverse(logic_op(bitreverse(x), bitreverse(y))) -> logic_op(x, y) in multiuse case
```
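A quick sanity check of the underlying identity, as a standalone sketch (this assumes Clang's `__builtin_bitreverse32` builtin and is not the InstCombine code itself): bitreverse is a bitwise permutation and an involution, so it distributes over the logic op and cancels with the inner bitreverse.
```
#include <cassert>
#include <cstdint>

int main() {
  // bitreverse(x & bitreverse(y)) == bitreverse(x) & y, because
  // bitreverse distributes over & and bitreverse(bitreverse(y)) == y.
  uint32_t x = 0xDEADBEEF, y = 0x12345678;
  assert(__builtin_bitreverse32(x & __builtin_bitreverse32(y)) ==
         (__builtin_bitreverse32(x) & y));
  return 0;
}
```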
Reviewed By: goldstein.w.n, RKSimon
Differential Revision: https://reviews.llvm.org/D151246
The issue was an assertion failure due to an unchecked `cast`. The fix is to
check that the operator is a `BinaryOperator` before the cast so that we won't
match a `ConstantExpr`.
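A minimal sketch of the shape of the fix (illustrative only, not the exact in-tree diff):
```
#include "llvm/IR/InstrTypes.h"

using namespace llvm;

// cast<BinaryOperator>(V) asserts when V is not a BinaryOperator
// (e.g. a ConstantExpr); dyn_cast returns null instead, so the
// fold can simply bail out on such values.
static BinaryOperator *asBinaryOperator(Value *V) {
  return dyn_cast<BinaryOperator>(V);
}
```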
Reviewed By: goldstein.w.n, RKSimon
Differential Revision: https://reviews.llvm.org/D149699
The generic cast to `BinaryOperator` can break if `V` is not a
`BinaryOperator` (e.g. a `ConstantExpr`). This occurs in builds such as
the PPC Linux build.
This reverts commit fe733f54da6faca95070b36b1640dbca3e43d396.
The patch implements a helper function that matches and folds the following cases in the InstCombine pass:
```
bswap(logic_op(x, bswap(y))) -> logic_op(bswap(x), y)
bswap(logic_op(bswap(x), y)) -> logic_op(x, bswap(y))
bswap(logic_op(bswap(x), bswap(y))) -> logic_op(x, y) in multiuse case
```
The multiuse fold still reduces the number of instructions.
The helper function accepts the bswap and bitreverse intrinsics. This patch folds the bswap cases and leaves the bitreverse optimization for the future.
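A hedged sketch of what such a matcher can look like with LLVM's PatternMatch (the function name is made up, and the real helper also handles the other operand orders and bitreverse):
```
#include "llvm/IR/PatternMatch.h"

using namespace llvm;
using namespace llvm::PatternMatch;

// Illustrative only: recognize bswap(or(x, bswap(y))), one of the
// shapes listed above. m_c_Or matches both operand orders.
static bool matchBSwapOfOrBSwap(Value *V, Value *&X, Value *&Y) {
  return match(V, m_BSwap(m_c_Or(m_Value(X), m_BSwap(m_Value(Y)))));
}
```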
Differential Revision: https://reviews.llvm.org/D149699
Following the change in shufflevector semantics,
poison will be used to represent undefined elements in shufflevector masks.
Differential Revision: https://reviews.llvm.org/D149256
The various isKnownNever* calls can be merged into one. This also introduces
the new ability to remove zero/subnormal/normal checks. Also start passing the
AssumptionCache arguments.
Move this handling to a centralized place and extend it to handle
saturating add/sub intrinsics.
I originally wanted to make this fully generic rather than
whitelist-based, because this is legal and likely profitable for all
speculatable intrinsics. The caveat is that for vector selects,
the intrinsic can't perform cross-lane operations like a shuffle
or reduction, since each result lane would then depend on lanes from
both select arms, and we don't really expose "no cross-lane effects"
as a generic property right now. So for now I'm just extending the list.
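The commit message doesn't spell out the exact fold shape, but the scalar legality is easy to sanity-check: for any pure (speculatable) operation, a select of two calls that differ in one operand equals one call on the selected operand. A standalone sketch with a hand-rolled stand-in for `llvm.sadd.sat`:
```
#include <cassert>
#include <climits>
#include <cstdint>

// Stand-in for llvm.sadd.sat.i32 (illustrative only).
static int32_t sadd_sat(int32_t a, int32_t b) {
  int64_t s = (int64_t)a + b;
  if (s > INT32_MAX) return INT32_MAX;
  if (s < INT32_MIN) return INT32_MIN;
  return (int32_t)s;
}

int main() {
  // c ? sadd_sat(x, y) : sadd_sat(x, z) == sadd_sat(x, c ? y : z)
  int32_t x = INT32_MAX - 1, y = 5, z = -3;
  for (int ci = 0; ci < 2; ++ci) {
    bool c = (ci != 0);
    assert((c ? sadd_sat(x, y) : sadd_sat(x, z)) ==
           sadd_sat(x, c ? y : z));
  }
  return 0;
}
```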
This reverts commit 27c4e233104ba765cd986b3f8b0dcd3a6c3a9f89.
I think I made a mistake with the use in RemoveConditionFromAssume(),
because the instruction being changed is not the current one, but
the next assume. Revert the change for now.
As with other replacement methods, it's generally necessary
to report a change on the instruction itself, e.g. by returning
it from the visit method (or by explicitly adding it to the
worklist).
Return Instruction * from replaceUse() to encourage the usual
"return replaceXYZ" pattern.
These idioms already appear in a number of places in the code, and upcoming changes to the various sanitizers continue to need more instances of the same patterns.
Differential Revision: https://reviews.llvm.org/D145945
Before this change, we call getUnderlyingObject on each separate_storage
operand on every alias() call (potentially requiring lots of pointer
chasing). Instead, we rewrite the assumptions in instcombine to do this
pointer-chasing once.
We still leave the getUnderlyingObject calls in alias(), just expecting
them to be no-ops much of the time. This is relatively fast (just a
couple dyn_casts with no pointer chasing) and avoids making alias
analysis results depend on whether or not instcombine has been run.
Differential Revision: https://reviews.llvm.org/D144933
```
is.fpclass(x, qnan|snan) -> fcmp uno x, 0.0
is.fpclass(nnan x, qnan|snan|other) -> is.fpclass(x, other)
```
Start porting the existing combines from llvm.amdgcn.class to the
generic intrinsic. Start with the ones which aren't dependent on the
FP mode.
This makes folding to 0/1 later on easier, and regardless, `icmp ne` is
'probably' faster on most targets (especially for vectors).
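For the first fold listed at the top of this patch, a quick host-level sanity check (a standalone sketch assuming IEEE doubles; `v != v` is the source-level spelling of an unordered comparison, and `fcmp uno v, 0.0` is true iff `v` is NaN, since 0.0 never is):
```
#include <cassert>
#include <cmath>

int main() {
  const double vals[] = {0.0, -1.5, INFINITY, NAN};
  for (double v : vals) {
    bool fpclass_nan = std::isnan(v); // is.fpclass(v, qnan|snan)
    bool uno = (v != v);              // true iff v is NaN: fcmp uno v, 0.0
    assert(fpclass_nan == uno);
  }
  return 0;
}
```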
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D142253
Use deduction guides instead of helper functions.
The only non-automatic changes have been:
1. ArrayRef(some_uint8_pointer, 0) needs to be changed into ArrayRef(some_uint8_pointer, (size_t)0) to avoid an ambiguous call with ArrayRef((uint8_t*), (uint8_t*)); see the sketch after this list.
2. CVSymbol sym(makeArrayRef(symStorage)); needed to be rewritten as CVSymbol sym{ArrayRef(symStorage)}; otherwise the compiler is confused and thinks we have a (bad) function prototype. There were a few similar situations across the codebase.
3. ADL doesn't seem to work the same for deduction guides as for functions, so at some points the llvm namespace must be explicitly stated.
4. The "reference mode" of makeArrayRef(ArrayRef<T> &) that acts as a no-op is not supported (a constructor cannot achieve that).
Per reviewers' comments, some useless makeArrayRef calls have been removed in the process.
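A minimal sketch of the item-1 disambiguation with the deduction guides (assuming LLVM's ADT headers; the function is made up):
```
#include "llvm/ADT/ArrayRef.h"
#include <cstddef>
#include <cstdint>

void demo(const uint8_t *Data) {
  // ArrayRef(Data, 0) would be ambiguous: the literal 0 also converts
  // to a pointer, matching the (begin, end) constructor. The cast
  // selects the (pointer, length) constructor; CTAD then deduces
  // llvm::ArrayRef<uint8_t>.
  llvm::ArrayRef Empty(Data, (size_t)0);
  (void)Empty;
}
```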
This is a follow-up to https://reviews.llvm.org/D140896 that introduced
the deduction guides.
Differential Revision: https://reviews.llvm.org/D140955
value() has undesired exception-checking semantics and calls
__throw_bad_optional_access in libc++. Moreover, the API is unavailable without
_LIBCPP_NO_EXCEPTIONS on older Mach-O platforms (see
_LIBCPP_AVAILABILITY_BAD_OPTIONAL_ACCESS).
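A minimal illustration of the preferred replacement (the function is made up):
```
#include <optional>

int demo(std::optional<int> O) {
  // O.value() may throw (__throw_bad_optional_access in libc++);
  // *O skips that check, so guard with a boolean test instead.
  if (O)
    return *O; // preferred over O.value()
  return 0;
}
```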