667 Commits

Author SHA1 Message Date
Simon Pilgrim
55991b44b7 [InstCombine] foldAndOrOfICmpsOfAndWithPow2 - add vector support
Support vector cases for folding:

 (iszero(A & K1) | iszero(A & K2)) -> (A & (K1 | K2)) != (K1 | K2)
 (!iszero(A & K1) & !iszero(A & K2)) -> (A & (K1 | K2)) == (K1 | K2)
2020-10-16 10:41:40 +01:00
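
Not the InstCombine implementation, just a brute-force check over i8 that the identity in the commit above holds for power-of-two masks K1 and K2 (the function name restricts the fold to powers of two):

```
#include <cassert>
#include <cstdint>

int main() {
  for (int i = 0; i < 8; ++i)
    for (int j = 0; j < 8; ++j) {
      uint8_t K1 = uint8_t(1u << i), K2 = uint8_t(1u << j), M = K1 | K2;
      for (int v = 0; v < 256; ++v) {
        uint8_t A = uint8_t(v);
        // (iszero(A & K1) | iszero(A & K2))  ==  (A & (K1 | K2)) != (K1 | K2)
        assert((((A & K1) == 0) || ((A & K2) == 0)) == ((A & M) != M));
        // (!iszero(A & K1) & !iszero(A & K2))  ==  (A & (K1 | K2)) == (K1 | K2)
        assert((((A & K1) != 0) && ((A & K2) != 0)) == ((A & M) == M));
      }
    }
  return 0;
}
```
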
Simon Pilgrim
23f1616626 [InstCombine] Use m_SpecificInt instead of m_APInt + comparison. NFCI. 2020-10-15 16:06:27 +01:00
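
For context, the two matching styles differ roughly as below. This is only a sketch of the PatternMatch idiom (the helper name and the constant 42 are invented for illustration), not the committed change:

```
#include "llvm/IR/PatternMatch.h"

using namespace llvm;
using namespace llvm::PatternMatch;

// Hypothetical helper: does V match the integer constant 42, including splat
// vector constants?
static bool isFortyTwo(Value *V) {
  // Old style: bind the value with m_APInt, then compare by hand.
  const APInt *C;
  if (match(V, m_APInt(C)) && *C == 42)
    return true;
  // New style: m_SpecificInt folds the comparison into the matcher itself.
  return match(V, m_SpecificInt(42));
}
```
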
Simon Pilgrim
2b45639ea0 [InstCombine] InstCombineAndOrXor - refactor cast<ConstantInt> usages to PatternMatch. NFCI.
First step towards replacing these so that we can add full vector support.
2020-10-15 16:06:17 +01:00
Simon Pilgrim
09be7623e4 [InstCombine] visitXor - refactor ((X^C1)>>C2)^C3 -> (X>>C2)^((C1>>C2)^C3) fold. NFCI.
This is still ConstantInt-only (scalar) but is refactored to use PatternMatch to make adding vector support in the future relatively trivial.
2020-10-15 14:38:15 +01:00
Simon Pilgrim
89a2a47870 [InstCombine] Add m_SpecificIntAllowUndef pattern matcher
m_SpecificInt doesn't accept undef elements in a vector splat value - tweak specific_intval to optionally allow undefs and add the m_SpecificIntAllowUndef variants.

Allows us to remove the m_APIntAllowUndef + comparison hack inside matchFunnelShift
2020-10-14 16:15:53 +01:00
Simon Pilgrim
1e4d882f9a [InstCombine] matchFunnelShift - add support for non-uniform vectors containing undefs.
Replace m_SpecificInt with m_APIntAllowUndef to match splats containing undefs, then use ConstantExpr::mergeUndefsWith to merge the undefs together in the result.

The undef funnel shift amounts are getting replaced with zero later on - I'll address this in a later patch, otherwise we lose potential shift by splat value patterns.
2020-10-14 10:42:27 +01:00
Simon Pilgrim
bbf3925879 [InstCombine] matchFunnelShift - fold or(shl(a,x),lshr(b,sub(bw,x))) -> fshl(a,b,x) iff x < bw (REAPPLIED)
If value tracking can confirm that a shift value is less than the type bitwidth then we can more confidently fold general or(shl(a,x),lshr(b,sub(bw,x))) patterns to a funnel/rotate intrinsic pattern without causing bad codegen regressions in the backend (see D89139).

Reapplied after the shift canonicalization in rG02295e6d1a15 which removed the need to flip the shift values.

Differential Revision: https://reviews.llvm.org/D88783
2020-10-12 16:06:41 +01:00
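
A brute-force sanity check of the identity behind this fold, over i8 and shift amounts 0 < x < 8 (illustration only, not the InstCombine code; fshl is re-derived here from its concatenate-and-shift definition so the check is not circular):

```
#include <cassert>
#include <cstdint>

// Reference fshl for i8: concatenate a (high half) and b (low half), shift the
// 16-bit value left by x modulo 8, and keep the top 8 bits.
static uint8_t fshl8(uint8_t a, uint8_t b, unsigned x) {
  uint16_t concat = uint16_t((unsigned(a) << 8) | b);
  uint16_t shifted = uint16_t(unsigned(concat) << (x % 8));
  return uint8_t(shifted >> 8);
}

int main() {
  for (unsigned x = 1; x < 8; ++x)            // value tracking guarantees x < bw
    for (unsigned a = 0; a < 256; ++a)
      for (unsigned b = 0; b < 256; ++b) {
        // or(shl(a, x), lshr(b, bw - x))
        uint8_t pattern = uint8_t((a << x) | (b >> (8 - x)));
        assert(pattern == fshl8(uint8_t(a), uint8_t(b), x));
      }
  return 0;
}
```
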
Simon Pilgrim
fa56623370 [InstCombine] matchFunnelShift - remove shift value commutation. NFCI.
After rG02295e6d1a15 we no longer need to invert the shift values for fshr - this is just hidden at the moment, as funnel shifts only ever match for constant values and so never use the fshr "Sub on SHL" path.
2020-10-12 15:55:18 +01:00
Simon Pilgrim
02295e6d1a [InstCombine] matchFunnelShift - canonicalize to OR(SHL,LSHR). NFCI.
Simplify the shift amount matching code by canonicalizing the shift ops first.
2020-10-12 15:10:59 +01:00
Simon Pilgrim
45d785e22b Revert rGb97093e520036f8 - "[InstCombine] matchFunnelShift - fold or(shl(a,x),lshr(b,sub(bw,x))) -> fshl(a,b,x) iff x < bw"
This reverts commit b97093e520036f88c5b39e572966f1c8c387661e.

Funnel shift argument commutation isn't working correctly
2020-10-12 11:38:52 +01:00
Simon Pilgrim
b97093e520 [InstCombine] matchFunnelShift - fold or(shl(a,x),lshr(b,sub(bw,x))) -> fshl(a,b,x) iff x < bw
If value tracking can confirm that a shift value is less than the type bitwidth then we can more confidently fold general or(shl(a,x),lshr(b,sub(bw,x))) patterns to a funnel/rotate intrinsic pattern without causing bad codegen regressions in the backend (see D89139).

Differential Revision: https://reviews.llvm.org/D88783
2020-10-11 10:37:20 +01:00
Simon Pilgrim
5415fef3ab [InstCombine] matchFunnelShift - support non-uniform constant vector shift amounts (PR46895)
Complete basic PR46895 fixes by refactoring D87452/D88402 to allow us to match non-uniform constant values.

We still don't handle non-uniform vectors that contain undef elements, but that can wait until we have a decent generic mechanism for this.

Differential Revision: https://reviews.llvm.org/D88420
2020-10-08 12:56:27 +01:00
Simon Pilgrim
e1d4ca0009 [InstCombine] matchRotate - add support for matching general funnel shifts with constant shift amounts (PR46896)
First step towards extending the existing rotation support to full funnel shift handling now that the backend legalization support has improved.

This enables us to match the shift by constant cases, which are pretty trivial to expand again if necessary.

D88420 will add non-uniform support for funnel shifts as well once it's been finalized.

Differential Revision: https://reviews.llvm.org/D88834
2020-10-08 11:05:14 +01:00
Simon Pilgrim
aa47962cc9 [InstCombine] canNarrowShiftAmt - replace custom Constant matching with m_SpecificInt_ICMP
The existing code ignores undef values, which matches m_SpecificInt_ICMP's behaviour. Although m_SpecificInt_ICMP returns false for an all-undef constant, I've added test coverage at rGfe0197e194a64f9 to show that undef folding should already have dealt with that case.
2020-10-08 10:53:32 +01:00
Simon Pilgrim
3aa93f690b [InstCombine] recognizeBSwapOrBitReverseIdiom - support for 'partial' bswap patterns (PR47191) (Reapplied)
If we're bswap'ing some bytes and zero'ing the remainder, we can perform this as a bswap+mask, which helps us match 'partial' bswaps as a first step towards folding them into a more complex bswap pattern.

Reapplied with early-out if recognizeBSwapOrBitReverseIdiom collects a source wider than the result type.

Differential Revision: https://reviews.llvm.org/D88578
2020-10-03 14:52:42 +01:00
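
For illustration (an assumed example, not taken from the patch): a 'partial' bswap that moves the two low bytes of an i32 into the high half and zeroes the rest is exactly a full bswap followed by a mask, which is the shape the matcher can now recognise:

```
#include <cassert>
#include <cstdint>

static uint32_t bswap32(uint32_t x) {
  return (x >> 24) | ((x >> 8) & 0x0000FF00u) |
         ((x << 8) & 0x00FF0000u) | (x << 24);
}

int main() {
  // Only the two low bytes of x matter to either side, so cover them exhaustively.
  for (uint32_t lo = 0; lo <= 0xFFFF; ++lo) {
    uint32_t x = 0xA5C30000u | lo;
    // Swap bytes 0..1 into the high half, zero the low half.
    uint32_t partial = ((x & 0xFFu) << 24) | ((x & 0xFF00u) << 8);
    assert(partial == (bswap32(x) & 0xFFFF0000u));   // == bswap + mask
  }
  return 0;
}
```
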
Simon Pilgrim
0364721e3e Revert rG3d14a1e982ad27 - "[InstCombine] recognizeBSwapOrBitReverseIdiom - support for 'partial' bswap patterns (PR47191)"
This reverts commit 3d14a1e982ad27111346471564d575ad5efc6419.

This is breaking on some 2-stage clang buildbots
2020-10-02 18:17:14 +01:00
Simon Pilgrim
3d14a1e982 [InstCombine] recognizeBSwapOrBitReverseIdiom - support for 'partial' bswap patterns (PR47191)
If we're bswap'ing some bytes and zero'ing the remainder, we can perform this as a bswap+mask, which helps us match 'partial' bswaps as a first step towards folding them into a more complex bswap pattern.

Differential Revision: https://reviews.llvm.org/D88578
2020-10-02 17:25:12 +01:00
Simon Pilgrim
63ee42a06b [InstCombine] matchRotate - force splat of uniform constant rotation amounts (PR46895)
Fixes a minor bug in D88402 where we were using the original shift constant (with undefs) instead of one with the splat values (re)splatted to all elements.
2020-09-28 15:12:41 +01:00
Simon Pilgrim
dabb14cadd [InstCombine] matchRotate - allow undef in uniform constant rotation amounts (PR46895)
As an extension to D87452, we can safely permit undefs in the uniform/splat detection.

https://alive2.llvm.org/ce/z/nT-ptN

Differential Revision: https://reviews.llvm.org/D88402
2020-09-28 13:36:13 +01:00
Simon Pilgrim
9ff9c1d8ee [InstCombine] matchRotate - support (uniform) constant rotation amounts (PR46895)
This patch adds handling of rotation patterns with constant shift amounts - the next bit will be how we want to support non-uniform constant vectors.

Differential Revision: https://reviews.llvm.org/D87452
2020-09-25 22:03:10 +01:00
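
The rotation pattern this matches is easy to sanity-check by brute force over i8 (illustration only; the rotate reference is written bit-by-bit so the check is not circular):

```
#include <cassert>
#include <cstdint>

// Bit-by-bit rotate-left reference: bit i of x moves to bit (i + c) mod 8.
static uint8_t rotl8(uint8_t x, unsigned c) {
  uint8_t r = 0;
  for (unsigned i = 0; i < 8; ++i)
    if (x & (1u << i))
      r |= uint8_t(1u << ((i + c) % 8));
  return r;
}

int main() {
  for (unsigned c = 1; c < 8; ++c)            // constant rotation amount, 0 < c < 8
    for (unsigned v = 0; v < 256; ++v) {
      uint8_t x = uint8_t(v);
      // or(shl(x, c), lshr(x, bw - c))
      uint8_t pattern = uint8_t((x << c) | (x >> (8 - c)));
      assert(pattern == rotl8(x, c));
    }
  return 0;
}
```
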
Christopher Tetreault
640f20b0c7 [SVE] Remove calls to VectorType::getNumElements from InstCombine
Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D82237
2020-08-31 12:59:10 -07:00
Sanjay Patel
ec06b38130 [InstCombine] canonicalize 'not' ops before logical shifts
This reverses the existing transform that would uniformly canonicalize any 'xor' after any shift. In the case of logical shifts, that turns a 'not' into an arbitrary 'xor' with constant, and that's probably not as good for analysis, SCEV, or codegen.

The SCEV motivating case is discussed in:
http://bugs.llvm.org/PR47136

There's an analysis motivating case at:
http://bugs.llvm.org/PR38781

I did draft a patch that would do the same for 'ashr' but that's questionable because it's just swapping the position of a 'not' and uncovers at least 2 missing folds that we would probably need to deal with as preliminary steps.

Alive proofs:
https://rise4fun.com/Alive/BBV

  Name: shift right of 'not'
  Pre: C2 == (-1 u>> C1)
  %a = lshr i8 %x, C1
  %r = xor i8 %a, C2
  =>
  %n = xor i8 %x, -1
  %r = lshr i8 %n, C1

  Name: shift left of 'not'
  Pre: C2 == (-1 << C1)
  %a = shl i8 %x, C1
  %r = xor i8 %a, C2
  =>
  %n = xor i8 %x, -1
  %r = shl i8 %n, C1

  Name: ashr of 'not'
  %a = ashr i8 %x, C1
  %r = xor i8 %a, -1
  =>
  %n = xor i8 %x, -1
  %r = ashr i8 %n, C1

Differential Revision: https://reviews.llvm.org/D86243
2020-08-22 09:38:13 -04:00
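
As a quick cross-check of the first rule above ("shift right of 'not'"), here is a brute-force verification over i8; the other two rules check out the same way. Illustration only, independent of the Alive proofs:

```
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned c1 = 0; c1 < 8; ++c1) {
    uint8_t c2 = uint8_t(0xFFu >> c1);                 // Pre: C2 == (-1 u>> C1)
    for (unsigned v = 0; v < 256; ++v) {
      uint8_t x = uint8_t(v);
      uint8_t before = uint8_t((x >> c1) ^ c2);        // lshr, then xor with C2
      uint8_t after  = uint8_t(uint8_t(~x) >> c1);     // 'not', then lshr
      assert(before == after);
    }
  }
  return 0;
}
```
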
Sanjay Patel
c8d711adae [InstCombine] reduce code duplication; NFC 2020-08-19 12:05:12 -04:00
Dávid Bolvanský
c2f0101310 [InstCombine] ~(~X + Y) -> X - Y
Proof:
https://alive2.llvm.org/ce/z/4xharr

Solves PR47051

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D85593
2020-08-11 11:05:42 +02:00
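
A brute-force check of the identity over i8 with wrapping arithmetic (illustration only, not the InstCombine code): ~(~x + y) really is x - y.

```
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned xi = 0; xi < 256; ++xi)
    for (unsigned yi = 0; yi < 256; ++yi) {
      uint8_t x = uint8_t(xi), y = uint8_t(yi);
      uint8_t lhs = uint8_t(~uint8_t(uint8_t(~x) + y));   // ~(~X + Y), mod 2^8
      uint8_t rhs = uint8_t(x - y);                       // X - Y, mod 2^8
      assert(lhs == rhs);
    }
  return 0;
}
```
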
Sanjay Patel
23693ffc3b [InstCombine] reduce xor-of-or's bitwise logic (PR46955); 2nd try
The 1st try at this (rG2265d01f2a5b) exposed what looks like
unspecified behavior in C/C++ resulting in test variations.

The arguments to BinaryOperator::CreateAnd() were both IRBuilder
function calls, and the order in which they execute determines
the order of the new instructions in the IR. But the order of
function arg evaluation is not fixed by the rules of C/C++, so
depending on compiler config, the test would fail because the
test expected a single fixed ordering of instructions.

Original commit message:
I tried to use m_Deferred() on this, but didn't find
a clean way to do that.

http://bugs.llvm.org/PR46955

https://alive2.llvm.org/ce/z/2h6QTq
2020-08-03 10:21:56 -04:00
Sanjay Patel
f19a9be385 Revert "[InstCombine] reduce xor-of-or's bitwise logic (PR46955)"
This reverts commit 2265d01f2a5bd153959701e22f5be2a40e1674a3.
Seeing bot failures after this change like:
http://lab.llvm.org:8011/builders/clang-cmake-x86_64-sde-avx512-linux/builds/42586
2020-08-03 08:58:41 -04:00
Sanjay Patel
2265d01f2a [InstCombine] reduce xor-of-or's bitwise logic (PR46955)
I tried to use m_Deferred() on this, but didn't find
a clean way to do that.

http://bugs.llvm.org/PR46955

https://alive2.llvm.org/ce/z/2h6QTq
2020-08-03 08:31:43 -04:00
Sebastian Neubauer
2a6c871596 [InstCombine] Move target-specific inst combining
For a long time, the InstCombine pass handled target specific
intrinsics. Having target specific code in general passes has long been
noted as an area for improvement.

D81728 moves most target specific code out of the InstCombine pass.
Applying the target specific combinations in an extra pass would
probably result in inferior optimizations compared to the current
fixed-point iteration, therefore the InstCombine pass resorts to newly
introduced functions in the TargetTransformInfo when it encounters
unknown intrinsics.
The patch should not have any effect on generated code (under the
assumption that code never uses intrinsics from a foreign target).

This introduces three new functions:
TargetTransformInfo::instCombineIntrinsic
TargetTransformInfo::simplifyDemandedUseBitsIntrinsic
TargetTransformInfo::simplifyDemandedVectorEltsIntrinsic

A few target specific parts are left in the InstCombine folder, where
it makes sense to share code. The largest left-over part in
InstCombineCalls.cpp is the code shared between arm and aarch64.

This allows moving about 3000 lines out of InstCombine into the targets.

Differential Revision: https://reviews.llvm.org/D81728
2020-07-22 15:59:49 +02:00
Sanjay Patel
d8b268680d [InstCombine] prevent infinite looping in or-icmp fold (PR46712)
I'm not sure if the test is truly minimal, but we need to
induce a situation where a value becomes a constant but is
not immediately folded before getting to the 'or' transform.
2020-07-15 14:12:12 -04:00
Sanjay Patel
2552f65183 [InstCombine] fold mask op into casted shift (PR46013)
https://rise4fun.com/Alive/Qply8

  Pre: C2 == (-1 u>> zext(C1))
  %a = ashr i8 %x, C1
  %s = sext %a to i16
  %r = and i16 %s, C2
    =>
  %s2 = sext %x to i16
  %r = lshr i16 %s2, zext(C1)

https://bugs.llvm.org/show_bug.cgi?id=46013
2020-06-07 09:33:18 -04:00
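
The proof above is also easy to confirm by brute force (i8 source, i16 destination; illustration only, assuming the usual arithmetic right shift for signed values):

```
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned c1 = 0; c1 < 8; ++c1) {
    uint16_t c2 = uint16_t(0xFFFFu >> c1);                   // Pre: C2 == (-1 u>> zext(C1))
    for (int v = -128; v < 128; ++v) {
      int8_t x = int8_t(v);
      int8_t a = int8_t(x >> c1);                            // %a = ashr i8 %x, C1
      uint16_t before = uint16_t(uint16_t(int16_t(a)) & c2); // and (sext %a to i16), C2
      uint16_t after  = uint16_t(uint16_t(int16_t(x)) >> c1);// lshr (sext %x to i16), C1
      assert(before == after);
    }
  }
  return 0;
}
```
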
Roman Lebedev
fde8eb00e1
[InstCombine] visitMaskedMerge(): when unfolding, sanitize undef constants (PR45955)
We can't leave undef vector element constants as-is;
that would be a miscompile, so we need to sanitize them.

We have two vectors (C and ~C):
* We can't replace undef with 0 in both of them
* We can't replace undef with 0 in only one of them
* We could replace undef with -1 in both of them
* We could replace undef with -1 in only one(!) of them
* We could replace undef with -1 in one of them and 0 in the other.

Therefore, it seems best to go with the last option, since otherwise
we'd lose knowledge that C and ~C have no common bits set,
which seems more important than preserving partial undef knowledge.

Fixes https://bugs.llvm.org/show_bug.cgi?id=45955
2020-05-17 22:53:03 +03:00
Simon Pilgrim
bab44a698e [InstCombine] matchOrConcat - match BITREVERSE
Fold or(zext(bitreverse(x)),shl(zext(bitreverse(y)),bw/2)) -> bitreverse(or(zext(x),shl(zext(y),bw/2)))

Practically this is the same as the BSWAP pattern so we might as well handle it.
2020-05-10 16:00:29 +01:00
Simon Pilgrim
5c91aa6603 [InstCombine] Fold or(zext(bswap(x)),shl(zext(bswap(y)),bw/2)) -> bswap(or(zext(x),shl(zext(y), bw/2)))
This adds a general combine that can be used to fold:

  or(zext(OP(x)), shl(zext(OP(y)),bw/2))
-->
  OP(or(zext(x), shl(zext(y),bw/2)))

This allows us to widen 'concat-able' style or+zext patterns - I've just set this up for BSWAP, but we could use this for other similar ops (BITREVERSE, for instance).

We already do something similar for bitop(bswap(x),bswap(y)) --> bswap(bitop(x,y))

Fixes PR45715

Reviewed By: @lebedev.ri

Differential Revision: https://reviews.llvm.org/D79041
2020-05-05 12:30:10 +01:00
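
As a concrete illustration of the concat idea (an assumed example with i16 halves widened into an i32, not code from the patch): OR'ing together two zero-extended byte-swapped halves is itself one wide bswap of the concatenation; note that the two halves trade places under the outer bswap.

```
#include <cassert>
#include <cstdint>

static uint16_t bswap16(uint16_t v) { return uint16_t((v >> 8) | (v << 8)); }
static uint32_t bswap32(uint32_t v) {
  return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
         ((v << 8) & 0x00FF0000u) | (v << 24);
}

int main() {
  for (uint32_t x = 0; x <= 0xFFFF; x += 0x0101)      // sampled i16 values
    for (uint32_t y = 0; y <= 0xFFFF; y += 0x0103) {
      // or(zext(bswap(x)), shl(zext(bswap(y)), 16))
      uint32_t concatOfBswaps =
          uint32_t(bswap16(uint16_t(x))) | (uint32_t(bswap16(uint16_t(y))) << 16);
      // One i32 bswap of the concatenation, with the halves swapped.
      uint32_t bswapOfConcat = bswap32(uint32_t(y) | (uint32_t(x) << 16));
      assert(concatOfBswaps == bswapOfConcat);
    }
  return 0;
}
```
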
Michael Liao
495bb8feb9 Fix -Wparentheses warnings. NFC. 2020-04-24 15:04:01 -04:00
Sanjay Patel
62da6ecea2 [InstCombine] substitute equivalent constant to reduce logic-of-icmps
(X == C) && (Y Pred1 X) --> (X == C) && (Y Pred1 C)
(X != C) || (Y Pred1 X) --> (X != C) || (Y Pred1 C)

This cooperates/overlaps with D78430, but it is a more general transform
that gets us most of the expected simplifications and several other
improvements.
http://volta.cs.utah.edu:8080/z/5gxjjc

PR45618:
https://bugs.llvm.org/show_bug.cgi?id=45618

Differential Revision: https://reviews.llvm.org/D78582
2020-04-23 10:19:16 -04:00
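
A tiny brute-force illustration of the substitution over i8, with Pred1 taken to be signed less-than and C an arbitrary constant (both choices are just for the demo):

```
#include <cassert>
#include <cstdint>

int main() {
  const int8_t C = 42;                        // arbitrary constant for the demo
  for (int xi = -128; xi < 128; ++xi)
    for (int yi = -128; yi < 128; ++yi) {
      int8_t X = int8_t(xi), Y = int8_t(yi);
      bool before = (X == C) && (Y < X);      // (X == C) && (Y slt X)
      bool after  = (X == C) && (Y < C);      // (X == C) && (Y slt C)
      assert(before == after);
    }
  return 0;
}
```
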
Michael Liao
21529355e1 Fix -Wparentheses warnings. NFC. 2020-04-21 15:02:59 -04:00
Sanjay Patel
978166f209 [InstCombine] improve types/names for logic-of-icmp helper function; NFC 2020-04-21 10:16:45 -04:00
Sanjay Patel
ba72389269 [InstCombine] improve types/names for logic-of-icmp helper functions; NFC 2020-04-21 09:18:22 -04:00
Sanjay Patel
812970edda [InstCombine] replace undef in vector constant for safe shift transform (PR45447)
As noted in PR45447, we have a vector-constant-with-undef-element transform bug:
https://bugs.llvm.org/show_bug.cgi?id=45447

We replace undefs with a safe constant (0 or -1) based on the (non-)negative
predicate constraint.

So this is correct:
http://volta.cs.utah.edu:8080/z/WZE36H
...but this is not:
http://volta.cs.utah.edu:8080/z/boj8gJ

Previously, we were relying on getSafeVectorConstantForBinop() in the related fold (D76800).
But that's making an assumption about what qualifies as "safe", and that assumption may
not always hold.

Differential Revision: https://reviews.llvm.org/D77739
2020-04-09 08:00:46 -04:00
Christopher Tetreault
155740cc33 Clean up usages of asserting vector getters in Type
Summary:
Remove usages of asserting vector getters in Type in preparation for the
VectorType refactor. The existence of these functions complicates the
refactor while adding little value.

Reviewers: sdesmalen, rriddle, efriedma

Reviewed By: sdesmalen

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77263
2020-04-08 15:15:41 -07:00
Nikita Popov
672e8bfbfc [InstCombine] Fix worklist management in foldXorOfICmps()
Because this code does not use the IC-aware replaceInstUsesWith()
helper, we need to manually push users to the worklist.

This is NFC-ish, in that it may only change worklist order.
2020-03-28 18:25:21 +01:00
Jonathan Roelofs
7a89a5d81b [InstCombine] Fix Incorrect fold of ashr+xor -> lshr w/ vectors
Fixes https://bugs.llvm.org/show_bug.cgi?id=43665
2020-03-26 12:09:36 -06:00
Florian Hahn
9063022573 [InstCombine] Avoid nested Create calls, to guarantee order.
The original code allowed creating the != checks in unpredictable order,
causing http://lab.llvm.org:8011/builders/clang-cmake-x86_64-sde-avx512-linux/builds/34014
to fail.
2020-02-18 09:44:11 +01:00
Florian Hahn
6c85e92bcf [InstCombine] Simplify a umul overflow check to a != 0 && b != 0.
This patch adds a simplification for the case where an OR weakens the overflow
condition for umul.with.overflow by treating any non-zero result as overflow.
In that case, the combined condition holds iff both umul.with.overflow operands
are != 0, since with two non-zero operands the result can only be 0 if the
multiplication overflows.

Code like this is generated by code using __builtin_mul_overflow with
negative integer constants, e.g.
   bool test(unsigned long long v, unsigned long long *res) {
     return __builtin_mul_overflow(v, -4775807LL, res);
   }

```
----------------------------------------
Name: D74141
  %res = umul_overflow {i8, i1} %a, %b
  %mul = extractvalue {i8, i1} %res, 0
  %overflow = extractvalue {i8, i1} %res, 1
  %cmp = icmp ne %mul, 0
  %ret = or i1 %overflow, %cmp
  ret i1 %ret
=>
  %t0 = icmp ne i8 %a, 0
  %t1 = icmp ne i8 %b, 0
  %ret = and i1 %t0, %t1
  ret i1 %ret
  %res = umul_overflow {i8, i1} %a, %b
  %mul = extractvalue {i8, i1} %res, 0
  %cmp = icmp ne %mul, 0
  %overflow = extractvalue {i8, i1} %res, 1

Done: 1
Optimization is correct!

```

Reviewers: nikic, lebedev.ri, spatel, Bigcheese, dexonsmith, aemerson

Reviewed By: lebedev.ri

Differential Revision: https://reviews.llvm.org/D74141
2020-02-18 09:11:55 +01:00
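
The same equivalence can be confirmed by brute force over i8, using __builtin_mul_overflow (already used in the C example above) as a stand-in for umul.with.overflow; this is just an illustration, not the transform itself:

```
#include <cassert>
#include <cstdint>

int main() {
  for (unsigned ai = 0; ai < 256; ++ai)
    for (unsigned bi = 0; bi < 256; ++bi) {
      uint8_t a = uint8_t(ai), b = uint8_t(bi), mul;
      // Truncated product in mul, overflow flag as the return value.
      bool overflow = __builtin_mul_overflow(a, b, &mul);
      bool before = overflow || (mul != 0);   // or i1 %overflow, %cmp
      bool after  = (a != 0) && (b != 0);     // and i1 %t0, %t1
      assert(before == after);
    }
  return 0;
}
```
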
Nikita Popov
5b2b67be8e [InstCombine] Remove unnecessary worklist push; NFCI
This is no longer needed after d4627b90a0462c90a834c2f7b9c9228b3ec7a45b;
I should have dropped it there...
2020-02-08 17:09:28 +01:00
Nikita Popov
d4627b90a0 [InstCombine] Avoid modifying instructions in-place
As discussed on D73919, this replaces a few cases where we were
modifying multiple operands of instructions in-place with the
creation of a new instruction, which we generally prefer nowadays.

This tends to be more readable and less prone to worklist management
bugs.

Test changes are only superficial (instruction naming and order).
2020-02-08 17:05:56 +01:00
Nikita Popov
878cb38a5c [InstCombine] Add replaceOperand() helper
Adds a replaceOperand() helper, which is like Instruction.setOperand()
but adds the old operand to the worklist. This reduces the amount of
missing or incorrect worklist management.

This only applies the helper to a relatively small subset of
setOperand() calls in InstCombine, namely those of the pattern
`I.setOperand(); return &I;`, where it is most obviously applicable.

Differential Revision: https://reviews.llvm.org/D73803
2020-02-03 19:00:17 +01:00
Nikita Popov
e6c9ab4fb7 [InstCombine] Rename worklist methods; NFC
This renames Worklist.AddDeferred() to Worklist.add() and
Worklist.Add() to Worklist.push(). The intention here is that
Worklist.add() should be the go-to method for explicit worklist
management, while the raw Worklist.push() is mostly for
InstCombine internals. I will then migrate uses of Worklist.push()
to Worklist.add() in followup changes.

As suggested by spatel on D73411 I'm also changing the remaining
method names to lowercase first character, in line with current
coding standards.

Differential Revision: https://reviews.llvm.org/D73745
2020-02-03 18:56:51 +01:00
Nikita Popov
efba7ed05e [PatternMatch] Make m_c_ICmp swap the predicate (PR42801)
This addresses https://bugs.llvm.org/show_bug.cgi?id=42801.
The m_c_ICmp() matcher is changed to provide the swapped predicate
if the operands are swapped.

Existing uses of m_c_ICmp() fall into one of two categories: they either work
on equality predicates only, where swapping is irrelevant, or they perform a
manual swap, in which case this patch removes it.

The only exception is the foldICmpWithLowBitMaskedVal() fold, which
does not swap the predicate, and instead reasons about whether
a swap occurred or not for each predicate. Getting the swapped
predicate allows us to merge the logic for pairs of predicates,
instead of duplicating it.

Differential Revision: https://reviews.llvm.org/D72976
2020-01-22 22:56:26 +01:00
Sanjay Patel
f8962571f7 [InstCombine] try to pull 'not' of select into compare operands
not (select ?, (cmp TPred, ?, ?), (cmp FPred, ?, ?)) -->
     select ?, (cmp TPred', ?, ?), (cmp FPred', ?, ?)

If both sides of the select are cmps, we can remove an instruction.
The case where only one side is a cmp is deferred to a possible
follow-on patch.

We have a more general 'isFreeToInvert' analysis, but I'm not seeing
a way to use that more widely without inducing infinite looping
(opposing transforms).
Here, we flip the compare predicates directly, so there is no danger
from creating extra intermediate 'not' ops.

Alive proofs:
https://rise4fun.com/Alive/jKa

Name: both select values are compares - invert predicates
  %tcmp = icmp sle i32 %x, %y
  %fcmp = icmp ugt i32 %z, %w
  %sel = select i1 %cond, i1 %tcmp, i1 %fcmp
  %not = xor i1 %sel, true
=>
  %tcmp_not = icmp sgt i32 %x, %y
  %fcmp_not = icmp ule i32 %z, %w
  %not = select i1 %cond, i1 %tcmp_not, i1 %fcmp_not

Name: false val is compare - invert/not
  %fcmp = icmp ugt i32 %z, %w
  %sel = select i1 %cond, i1 %tcmp, i1 %fcmp
  %not = xor i1 %sel, true
=>
  %tcmp_not = xor i1 %tcmp, -1
  %fcmp_not = icmp ule i32 %z, %w
  %not = select i1 %cond, i1 %tcmp_not, i1 %fcmp_not

Differential Revision: https://reviews.llvm.org/D72007
2020-01-07 10:44:23 -05:00