802 Commits

Author SHA1 Message Date
zGoldthorpe
82caa251d4
[InstCombine] Fold integer unpack/repack patterns through ZExt (#153583)
This patch explicitly enables the InstCombiner to fold integer
unpack/repack patterns such as

```llvm
define i64 @src_combine(i32 %lower, i32 %upper) {
  %base = zext i32 %lower to i64

  %u.0 = and i32 %upper, u0xff
  %z.0 = zext i32 %u.0 to i64
  %s.0 = shl i64 %z.0, 32
  %o.0 = or i64 %base, %s.0

  %r.1 = lshr i32 %upper, 8
  %u.1 = and i32 %r.1, u0xff
  %z.1 = zext i32 %u.1 to i64
  %s.1 = shl i64 %z.1, 40
  %o.1 = or i64 %o.0, %s.1

  %r.2 = lshr i32 %upper, 16
  %u.2 = and i32 %r.2, u0xff
  %z.2 = zext i32 %u.2 to i64
  %s.2 = shl i64 %z.2, 48
  %o.2 = or i64 %o.1, %s.2

  %r.3 = lshr i32 %upper, 24
  %u.3 = and i32 %r.3, u0xff
  %z.3 = zext i32 %u.3 to i64
  %s.3 = shl i64 %z.3, 56
  %o.3 = or i64 %o.2, %s.3

  ret i64 %o.3
}
; =>
define i64 @tgt_combine(i32 %lower, i32 %upper) {
  %base = zext i32 %lower to i64
  %upper.zext = zext i32 %upper to i64
  %s.0 = shl nuw i64 %upper.zext, 32
  %o.3 = or disjoint i64 %s.0, %base
  ret i64 %o.3
}
```

Alive2 proofs: [YAy7ny](https://alive2.llvm.org/ce/z/YAy7ny)
2025-08-15 12:48:32 -06:00
Yingwei Zheng
84b31581f8
Revert "[PatternMatch] Add m_[Shift]OrSelf matchers." (#152953)
Reverts llvm/llvm-project#152924
According to
f67668b586,
it is not an NFC change.
2025-08-11 09:35:16 +02:00
Yingwei Zheng
1c499351d6
[PatternMatch] Add m_[Shift]OrSelf matchers. (#152924)
Address the comment
https://github.com/llvm/llvm-project/pull/147414/files#r2228612726.
As these matchers are usually used to match integer packing patterns, it
is enough to handle constant shift amounts.
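
For illustration, a sketch of why the "OrSelf" variants help (my example, not from the patch): in a packing chain the lowest fragment typically has no explicit shift, so a matcher like `m_ShlOrSelf` also accepts the unshifted value with an implied shift amount of 0:

```llvm
%lo = zext i8 %a to i32   ; fragment at bit 0: no shl instruction at all
%hi.z = zext i8 %b to i32
%hi = shl i32 %hi.z, 8    ; fragment at bit 8: shl by a constant
%pack = or disjoint i32 %lo, %hi
```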
2025-08-11 09:58:16 +08:00
zGoldthorpe
71d6762309
[InstCombine] Added pattern for recognising the construction of packed integers. (#147414)
This patch extends the instruction combiner to simplify the construction
of a packed scalar integer from a vector type, such as:
```llvm
target datalayout = "e"

define i32 @src(<4 x i8> %v) {
  %v.0 = extractelement <4 x i8> %v, i32 0
  %z.0 = zext i8 %v.0 to i32

  %v.1 = extractelement <4 x i8> %v, i32 1
  %z.1 = zext i8 %v.1 to i32
  %s.1 = shl i32 %z.1, 8
  %x.1 = or i32 %z.0, %s.1

  %v.2 = extractelement <4 x i8> %v, i32 2
  %z.2 = zext i8 %v.2 to i32
  %s.2 = shl i32 %z.2, 16
  %x.2 = or i32 %x.1, %s.2

  %v.3 = extractelement <4 x i8> %v, i32 3
  %z.3 = zext i8 %v.3 to i32
  %s.3 = shl i32 %z.3, 24
  %x.3 = or i32 %x.2, %s.3

  ret i32 %x.3
}

; ===============

define i32 @tgt(<4 x i8> %v) {
  %x.3 = bitcast <4 x i8> %v to i32
  ret i32 %x.3
}
```

Alive2 proofs (little-endian):
[YKdMeg](https://alive2.llvm.org/ce/z/YKdMeg)
Alive2 proofs (big-endian):
[vU6iKc](https://alive2.llvm.org/ce/z/vU6iKc)
2025-07-30 10:58:49 -06:00
Nikita Popov
7e878aaf23
[PatternMatch] Add support for capture-and-match (NFC) (#149825)
When using PatternMatch, there is a common problem where we want to both
match something against a pattern, but also capture the
value/instruction for various reasons (e.g. to access flags).

Currently, the two ways to do that are to either capture using
m_Value/m_Instruction and do a separate match on the result, or to use
the somewhat awkward `m_CombineAnd(m_XYZ, m_Value(V))` pattern.

This PR adds a variant of `m_Value`/`m_Instruction` which
does both a capture and a match. `m_Value(V, m_XYZ)` is basically
equivalent to `m_CombineAnd(m_XYZ, m_Value(V))`.

I've ported two InstCombine files to this pattern as a sample.
2025-07-23 10:05:09 +02:00
Marius Kamp
9544bb5c29
[InstCombine] Fold umul.overflow(x, c1) | (x*c1 > c2) to x > c2/c1 (#147327)
The motivation of this pattern is to check whether the product of a
variable and a constant would be mathematically (i.e., as integer
numbers instead of bit vectors) greater than a given constant bound. The
pattern appears to occur when compiling several Rust projects (it seems
to originate from the `smallvec` crate but I have not checked this
further).

Unless `c1` is `0`, we can transform this pattern into `x > c2/c1` with
all operations working on unsigned integers. Due to undefined behavior
when an element of a non-splat vector is `0`, the transform is only
implemented for scalars and splat vectors.

Alive proof: https://alive2.llvm.org/ce/z/LawTkm

Closes #142674
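
A sketch of the fold's shape with concrete constants (c1 = 3, c2 = 100; my example, not taken from the patch):

```llvm
declare { i64, i1 } @llvm.umul.with.overflow.i64(i64, i64)

define i1 @src(i64 %x) {
  %res = call { i64, i1 } @llvm.umul.with.overflow.i64(i64 %x, i64 3)
  %mul = extractvalue { i64, i1 } %res, 0
  %ov = extractvalue { i64, i1 } %res, 1
  %cmp = icmp ugt i64 %mul, 100
  %r = or i1 %ov, %cmp
  ret i1 %r
}
; =>
define i1 @tgt(i64 %x) {
  %r = icmp ugt i64 %x, 33  ; 33 = 100 udiv 3
  ret i1 %r
}
```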
2025-07-11 10:52:13 +02:00
Jeffrey Byrnes
0da9aacf48
[InstCombine] Extend bitmask mul combine to handle independent operands (#142503)
This extends https://github.com/llvm/llvm-project/pull/136013 to capture
cases where the combinable bitmask muls are nested under multiple
or-disjoints.

This PR is meant for commits starting at
8c403c912046505ffc10378560c2fc48f214af6a

op1 = or-disjoint mul(and (X, C1), D) , reg1
op2 = or-disjoint mul(and (X, C2), D) , reg2
out = or-disjoint op1, op2

->

temp1 = or-disjoint reg1, reg2
out = or-disjoint mul(and (X, (C1 + C2)), D), temp1


Case1: https://alive2.llvm.org/ce/z/dHApyV
Case2: https://alive2.llvm.org/ce/z/Jz-Nag
Case3: https://alive2.llvm.org/ce/z/3xBnEV
2025-07-07 13:50:42 -07:00
Andreas Jonson
f21f2b483c
[InstCombine] Create Icmp in canonical form (NFC) (#146266) 2025-06-29 15:08:36 +02:00
Jeffrey Byrnes
7034014d08
[InstCombine] Combine or-disjoint (and->mul), (and->mul) to and->mul (#136013)
The canonical pattern for bitmasked mul is currently

```
%val = and %x, %bitMask // where %bitMask is some constant
%cmp = icmp eq %val, 0
%sel = select %cmp, 0, %C // where %C is some constant = C' * %bitMask
```

In certain cases, where we are combining multiple of these bitmasked
muls with common factors, we are able to optimize into and->mul (see
https://github.com/llvm/llvm-project/pull/135274 )

This optimization lends itself to further optimizations. This PR
addresses one such optimization.

In cases where we have

`or-disjoint ( mul(and (X, C1), D) , mul (and (X, C2), D))`

we can combine into

`mul( and (X, (C1 + C2)), D) `

provided C1 and C2 are disjoint.

Generalized proof: https://alive2.llvm.org/ce/z/MQYMui
2025-06-11 18:07:00 -07:00
Yingwei Zheng
e2c698c7e8
[InstCombine] Fix miscompilation in sinkNotIntoLogicalOp (#142727)
Consider the following case:
```
define i1 @src(i8 %x) {
  %cmp = icmp slt i8 %x, -1
  %not1 = xor i1 %cmp, true
  %or = or i1 %cmp, %not1
  %not2 = xor i1 %or, true
  ret i1 %not2
}
```
`sinkNotIntoLogicalOp(%or)` calls `freelyInvert(%cmp,
/*IgnoredUser=*/%or)` first. However, as `%cmp` is also used by `Op1 =
%not1`, the RHS of `%or` is set to `%cmp.not = xor i1 %cmp, true`. Thus
`Op1` is out of date in the second call to `freelyInvert`. Similarly,
the second call may change `Op0`. Based on the analysis above, I
decided to avoid this fold when one of the operands is also a user of
the other.

Closes https://github.com/llvm/llvm-project/issues/142518.
2025-06-04 17:48:01 +08:00
Ramkumar Ramachandra
b40e4ceaa6
[ValueTracking] Make Depth last default arg (NFC) (#142384)
Having a finite Depth (or recursion limit) for computeKnownBits is very
limiting, but is currently a load-bearing necessity, as all KnownBits
are recomputed on each call and there is no caching. As a prerequisite
for an effort to remove the recursion limit altogether, either using a
clever caching technique, or writing an easily-invalidable KnownBits
analysis, make the Depth argument in APIs in ValueTracking uniformly the
last argument with a default value. This would aid in removing the
argument when the time comes, as many callers that currently pass 0
explicitly are now updated to omit the argument altogether.
2025-06-03 17:12:24 +01:00
Jeffrey Byrnes
21123792b8
Reland: [InstCombine] Combine and->cmp->sel->or-disjoint into and->mul (#142035)
Reland of https://github.com/llvm/llvm-project/pull/135274

The commit to land the original PR was blamelisted for two types of
failures:

https://lab.llvm.org/buildbot/#/builders/24/builds/8932
https://lab.llvm.org/buildbot/#/builders/198/builds/4844

The second of these seems to be unrelated to the PR and was seemingly
fixed by
6ee2453360

I've fixed the other issue with the latest commit in this PR,
b24f4731aaeb753c9269dbd9926cc83c7456f98e. This is the only
difference between this PR and the previously accepted PR.

---------

Co-authored-by: Matt Arsenault <arsenm2@gmail.com>
Co-authored-by: Yingwei Zheng <dtcxzyw@qq.com>
2025-05-30 10:21:21 -07:00
Jeffrey Byrnes
46828d2830 Revert "[InstCombine] Combine and->cmp->sel->or-disjoint into and->mul (#135274)"
This reverts commit c49c7ddb0b335708778d2bfd88119c439bd0973e.
2025-05-28 18:01:19 -07:00
Jeffrey Byrnes
c49c7ddb0b
[InstCombine] Combine and->cmp->sel->or-disjoint into and->mul (#135274)
While combining and->cmp->sel into and->mul may result in worse code on
some targets, this combine should be uniformly beneficial.

Proof: https://alive2.llvm.org/ce/z/MibAcN
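
The underlying equivalence, sketched with concrete constants (bitmask 4, C' = 3; my example): the patch applies it when such selects are combined under or-disjoint with common factors:

```llvm
%m = and i32 %x, 4
%c = icmp eq i32 %m, 0
%s = select i1 %c, i32 0, i32 12  ; 12 = 3 * 4
; =>
%m = and i32 %x, 4
%s = mul i32 %m, 3                ; 0 stays 0, 4 becomes 12
```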

---------

Co-authored-by: Matt Arsenault <arsenm2@gmail.com>
Co-authored-by: Yingwei Zheng <dtcxzyw@qq.com>
2025-05-28 17:16:56 -07:00
Tim Gymnich
571a24c314
Reland [llvm] add GenericFloatingPointPredicateUtils #140254 (#141065)
#140254 was previously missing 2 files in the bazel build config.
2025-05-22 17:17:02 +02:00
Kewen12
c47a5fbb22
Revert "[llvm] add GenericFloatingPointPredicateUtils (#140254)" (#140968)
This reverts commit d00d74bb2564103ae3cb5ac6b6ffecf7e1cc2238. 

The PR breaks our buildbots and blocks downstream merge.
2025-05-21 19:31:14 -04:00
Tim Gymnich
d00d74bb25
[llvm] add GenericFloatingPointPredicateUtils (#140254)
add `GenericFloatingPointPredicateUtils` in order to generalize the
effects of floating-point comparisons on `KnownFPClass` for both IR and
MIR.

---------

Co-authored-by: Matt Arsenault <arsenm2@gmail.com>
2025-05-21 23:45:31 +02:00
fengfeng
e7bf750437
[InstCombine] Pass disjoint in or combine (#138800)
Proof: https://alive2.llvm.org/ce/z/wtTm5V
https://alive2.llvm.org/ce/z/WC7Ai2

---------

Signed-off-by: feng.feng <feng.feng@iluvatar.com>
2025-05-08 14:29:13 +08:00
Yingwei Zheng
a10f6c1e68
[InstCombine] Handle isnormal idiom (#125454)
This patch improves the codegen of Rust's `is_normal` implementation:
https://godbolt.org/z/1MPzcrrYG
Alive2: https://alive2.llvm.org/ce/z/hF9RWQ
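
For context, a sketch of the kind of idiom involved (my reconstruction, assuming the usual fabs-based formulation; see the links above for the exact patterns handled):

```llvm
declare float @llvm.fabs.f32(float)

define i1 @isnormal(float %x) {
  %abs = call float @llvm.fabs.f32(float %x)
  ; normal iff smallest_normal <= |x| < inf
  %ge = fcmp oge float %abs, 0x3810000000000000 ; 0x1.0p-126
  %lt = fcmp olt float %abs, 0x7FF0000000000000 ; +inf
  %r = and i1 %ge, %lt
  ret i1 %r
}
; can now become:
;   %r = call i1 @llvm.is.fpclass.f32(float %x, i32 264) ; 264 = fcNormal
```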
2025-05-06 23:55:04 +08:00
Jim Lin
12d1cb1347
[InstCombine] Preserve disjoint or after folding casted bitwise logic (#136815)
Optimize
`(or disjoint (zext/sext a), (zext/sext b))`
to
`(zext/sext (or disjoint a, b))`
without losing the disjoint flag.

Confirmed here: https://alive2.llvm.org/ce/z/kQ5fJv.
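
A minimal example of the preserved flag (zext case):

```llvm
%a.ext = zext i8 %a to i32
%b.ext = zext i8 %b to i32
%o = or disjoint i32 %a.ext, %b.ext
; =>
%n = or disjoint i8 %a, %b
%o = zext i8 %n to i32
```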
2025-04-26 12:35:04 +08:00
Jim Lin
462bf4746f
[InstCombine] Refactor the code for folding logicop and sext/zext. NFC. (#137132)
This refactoring makes it easier to add the code that preserves disjoint
or in PR https://github.com/llvm/llvm-project/pull/136815.

When the src types differ, both casts must have one use for folding a
logicop with sext/zext, to avoid creating an extra instruction. If the
src types of the casts are the same, only one of the casts needs to have
one use. This PR also adds more tests for the same-src-type case.
2025-04-25 10:59:01 +08:00
Jeffrey Byrnes
1636f4af7b
[CmpInstAnalysis] Decompose icmp eq (and x, C) C2 (#136367)
This type of decomposition is used in multiple places already. Adding it
to `CmpInstAnalysis` reduces code duplication.
2025-04-24 12:40:26 -07:00
Yingwei Zheng
8abc917fe0
[InstCombine] Do not fold logical is_finite test (#136851)
This patch disables the fold for the logical is_finite test (i.e., `and
(fcmp ord x, 0), (fcmp u* x, inf) -> fcmp o* x, inf`).
It would still be possible to allow this fold for several logical cases
(e.g., when `stripSignOnlyFPOps(RHS0)` does not strip any operations).
Since this patch has no real-world impact, I decided to disable the fold
for all logical cases.

Alive2: https://alive2.llvm.org/ce/z/aH4LC7
Closes https://github.com/llvm/llvm-project/issues/136650.
2025-04-24 00:12:30 +08:00
Yingwei Zheng
e710a5a9f2
[InstCombine] Fold fneg/fabs patterns with ppc_f128 (#130557)
This patch is needed by
https://github.com/llvm/llvm-project/pull/130496.
2025-04-14 14:30:00 +08:00
Andreas Jonson
94f6f0334d
[InstCombine] handle trunc to i1 in foldLogOpOfMaskedICmps. (#128861)
proof: https://alive2.llvm.org/ce/z/pu8WmX

fixes #128778
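
A sketch of why this helps (my example): a `trunc` to `i1` is a low-bit test, so it can join the masked-icmp logic as if it were `icmp ne (and %x, 1), 0`:

```llvm
%t = trunc i8 %x to i1   ; tests bit 0 of %x
%m = and i8 %x, 2
%c = icmp ne i8 %m, 0    ; tests bit 1 of %x
%r = and i1 %t, %c
; =>
%m2 = and i8 %x, 3
%r = icmp eq i8 %m2, 3   ; both bits set
```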
2025-04-09 18:07:34 +02:00
Simon Pilgrim
d84dc8ff93
[InstCombine] Add handling for (or (zext x), (shl (zext (ashr x, bw/2-1))), bw/2) -> (sext x) fold (#130316)
Minor tweak to #129363, which handled all the cases where there was a sext for the original source value, but not the cases where the source is already half the size of the destination type.

Another regression noticed in #76524
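
Concretely, for i32 -> i64 (a sketch of the pattern named in the title):

```llvm
define i64 @src(i32 %x) {
  %lo = zext i32 %x to i64
  %sign = ashr i32 %x, 31  ; bw/2-1 = 31
  %hi = zext i32 %sign to i64
  %shl = shl i64 %hi, 32   ; bw/2 = 32
  %r = or i64 %lo, %shl
  ret i64 %r
}
; =>
define i64 @tgt(i32 %x) {
  %r = sext i32 %x to i64
  ret i64 %r
}
```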
2025-03-09 10:34:30 +00:00
Muhammad Bassiouni
c662a9d303
[InstCombine] recognize missed i128 split optimization (#129363)
This PR fixes #126056 by recognising a split i128 extension optimization.

Proof of the optimization:

```llvm
define i128 @src(i32 noundef %x) {
entry:
  %coerce.sroa.0.0.extract.trunc = sext i32 %x to i64
  %0 = ashr i32 %x, 31
  %coerce.sroa.2.0.extract.trunc = sext i32 %0 to i64
  %x.sroa.2.0.insert.ext.i = zext i64 %coerce.sroa.2.0.extract.trunc to i128
  %x.sroa.2.0.insert.shift.i = shl nuw i128 %x.sroa.2.0.insert.ext.i, 64
  %x.sroa.0.0.insert.ext.i = zext i64 %coerce.sroa.0.0.extract.trunc to i128
  %x.sroa.0.0.insert.insert.i = or disjoint i128 %x.sroa.2.0.insert.shift.i, %x.sroa.0.0.insert.ext.i
  ret i128 %x.sroa.0.0.insert.insert.i
}

define i128 @tgt(i32 noundef %x)  {
  %x.sroa.0.0.insert.insert.i = sext i32 %x to i128
  ret i128 %x.sroa.0.0.insert.insert.i
}
```
2025-03-06 09:42:52 +00:00
Simon Pilgrim
78aa61d8b6
[InstCombine] matchOrConcat - return Value* not Instruction* (#128921)
NFC to make it easier to use builders that might constant fold etc. in the future.
2025-02-27 08:31:43 +00:00
Yingwei Zheng
9cd83d6ea2
[InstCombine] Drop samesign in foldLogOpOfMaskedICmps (#125829)
Alive2: https://alive2.llvm.org/ce/z/6zLAYp

Note: We can also apply this fix to the logic below (`if (Mask &
AMask_NotAllOnes)`), but it seems unreachable.
2025-02-07 11:56:52 +08:00
Yingwei Zheng
6410bddc27
[InstCombine] Extend #125676 to handle variable power of 2 (#125855)
Alive2: https://alive2.llvm.org/ce/z/dJehZ8
2025-02-06 10:57:49 +08:00
Yingwei Zheng
fad6375428
[InstCombine] Fold xor of bittests into bittest of xor'd value (#125676)
Motivating case:
64927af52a/llvm/lib/Analysis/ValueTracking.cpp (L8600-L8602)

It is translated into `xor (X & 2) != 0, (Y & 2) != 0`.
Alive2: https://alive2.llvm.org/ce/z/dJehZ8
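
In IR form (a sketch of the motivating case):

```llvm
%xm = and i32 %x, 2
%xc = icmp ne i32 %xm, 0
%ym = and i32 %y, 2
%yc = icmp ne i32 %ym, 0
%r = xor i1 %xc, %yc
; =>
%xy = xor i32 %x, %y
%m = and i32 %xy, 2
%r = icmp ne i32 %m, 0
```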
2025-02-05 05:38:04 +08:00
Yingwei Zheng
94fee13d42
[InstCombine] Simplify FMF propagation. NFC. (#121899)
This patch uses new FMF interfaces introduced by
https://github.com/llvm/llvm-project/pull/121657 to simplify existing
code with `andIRFlags` and `copyFastMathFlags`.
2025-01-17 01:31:06 +08:00
Andreas Jonson
06499f3672
[InstCombine] Prepare foldLogOpOfMaskedICmps to handle trunc to i1. (NFC) (#122179) 2025-01-15 18:08:53 +01:00
Yingwei Zheng
d80bdf7261
[IRBuilder] Add a helper function to intersect FMFs from two instructions (#122059)
Address review comment in
https://github.com/llvm/llvm-project/pull/121899#discussion_r1905765776
2025-01-09 14:36:42 +08:00
Andreas Jonson
d4182f1b56
[InstCombine] move foldAndOrOfICmpsOfAndWithPow2 into foldLogOpOfMaskedICmps (#121970) 2025-01-08 18:04:38 +01:00
Yingwei Zheng
882df05435
[InstCombine] Fold (A | B) ^ (A & C) --> A ? ~C : B (#121906)
Closes https://github.com/llvm/llvm-project/issues/121773.
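
A minimal sketch for `i1` operands (my example; the `A ? ~C : B` result is the select form):

```llvm
define i1 @src(i1 %a, i1 %b, i1 %c) {
  %or = or i1 %a, %b
  %and = and i1 %a, %c
  %r = xor i1 %or, %and
  ret i1 %r
}
; =>
define i1 @tgt(i1 %a, i1 %b, i1 %c) {
  %not.c = xor i1 %c, true
  %r = select i1 %a, i1 %not.c, i1 %b
  ret i1 %r
}
```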
2025-01-07 20:50:35 +08:00
Yingwei Zheng
4ebfd43cf0
[InstCombine] Always treat inner and/or as bitwise (#121766)
In https://github.com/llvm/llvm-project/pull/116065, we pass `IsLogical`
into `foldBooleanAndOr` when folding inner and/or ops. But it is always
safe to treat them as bitwise if the outer ops are bitwise.

Alive2: https://alive2.llvm.org/ce/z/hULrgH
Closes https://github.com/llvm/llvm-project/issues/121701.
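
For reference, "logical" here means the short-circuiting select form, as opposed to the plain bitwise instruction:

```llvm
%bitwise = and i1 %a, %b                  ; evaluates both operands
%logical = select i1 %a, i1 %b, i1 false  ; blocks %b's poison when %a is false
```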
2025-01-07 00:03:13 +08:00
Yingwei Zheng
a77346bad0
[IRBuilder] Refactor FMF interface (#121657)
Up to now, the only way to set specified FMF flags in IRBuilder is to
use `FastMathFlagGuard`. It makes the code ugly and hard to maintain.

This patch introduces a helper class `FMFSource` to replace the original
parameter `Instruction *FMFSource` in IRBuilder. To maximize
compatibility, it accepts either an instruction or a specified FMF.
This patch also removes the use of `FastMathFlagGuard` in some simple
cases.

Compile-time impact:
https://llvm-compile-time-tracker.com/compare.php?from=f87a9db8322643ccbc324e317a75b55903129b55&to=9397e712f6010be15ccf62f12740e9b4a67de2f4&stat=instructions%3Au
2025-01-06 14:37:04 +08:00
Yingwei Zheng
6f68010f91
[InstCombine] Drop samesign flags in foldLogOpOfMaskedICmps_NotAllZeros_BMask_Mixed (#120373)
Counterexamples: https://alive2.llvm.org/ce/z/6Ks8Qz
Closes https://github.com/llvm/llvm-project/issues/120361.
2024-12-18 20:40:33 +08:00
Ramkumar Ramachandra
4a0d53a0b0
PatternMatch: migrate to CmpPredicate (#118534)
With the introduction of CmpPredicate in 51a895a (IR: introduce struct
with CmpInst::Predicate and samesign), PatternMatch is one of the first
key pieces of infrastructure that must be updated to match a CmpInst
respecting samesign information. Implement this change to Cmp-matchers.

This is a preparatory step in migrating the codebase over to
CmpPredicate. Since no functional changes are desired at this stage,
we have chosen not to migrate CmpPredicate::operator==(CmpPredicate)
calls to use CmpPredicate::getMatching(), as that would have visible
impact on tests that are not yet written: instead, we call
CmpPredicate::operator==(Predicate), preserving the old behavior, while
also inserting a few FIXME comments for follow-ups.
2024-12-13 14:18:33 +00:00
Matthias Braun
e9c68c6d8c
[InstCombine] Match range check pattern with SExt (#118910)
= Background

We optimize range check patterns like the following:
```
  %n_not_negative = icmp sge i32 %n, 0
  call void @llvm.assume(i1 %n_not_negative)
  %a = icmp sge i32 %x, 0
  %b = icmp slt i32 %x, %n
  %c = and i1 %a, %b
```
to a single unsigned comparison:
```
  %n_not_negative = icmp sge i32 %n, 0
  call void @llvm.assume(i1 %n_not_negative)
  %c = icmp ult i32 %x, %n
```

= Extended Pattern

This adds support for a variant of this pattern where the upper range is
compared with a sign extended value:

```
  %n_not_negative = icmp sge i64 %n, 0
  call void @llvm.assume(i1 %n_not_negative)
  %x_sext = sext i32 %x to i64
  %a = icmp sge i32 %x, 0
  %b = icmp slt i64 %x_sext, %n
  %c = and i1 %a, %b
```
is now optimized to:
```
  %n_not_negative = icmp sge i64 %n, 0
  call void @llvm.assume(i1 %n_not_negative)
  %x_sext = sext i32 %x to i64
  %c = icmp ult i64 %x_sext, %n
```

Alive2: https://alive2.llvm.org/ce/z/XVuz9L
2024-12-09 16:36:16 -08:00
Nikita Popov
e477989a05
[InstCombine] Handle trunc i1 pattern in eq-of-parts fold (#112704)
Equality/inequality of the low bit can be represented by `(trunc (xor x,
y) to i1)`, possibly with an extra not. We have to handle this in the
eq-of-parts fold now that we no longer canonicalize this to a masked
icmp.

Proofs: https://alive2.llvm.org/ce/z/qidkzq

Fixes https://github.com/llvm/llvm-project/issues/110919.
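
A sketch of the representation (my example): equality of the low bits of `%x` and `%y` written as a trunc-of-xor plus a not:

```llvm
%xor = xor i32 %x, %y
%ne.low = trunc i32 %xor to i1  ; true iff the low bits differ
%eq.low = xor i1 %ne.low, true  ; true iff the low bits are equal
```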
2024-11-25 11:49:00 +01:00
Nikita Popov
e5faeb69fb
[InstCombine] Support reassoc for foldLogicOfFCmps (#116065)
We currently support simple reassociation for foldAndOrOfICmps().
Support the same for foldLogicOfFCmps() by going through the common
foldBooleanAndOr() helper.

This will also resolve the regression on #112704, which is also due to
missing reassoc support.

I had to adjust one fold to add support for FMF flag preservation,
otherwise there would be test regressions. There is a separate fold
(reassociateFCmps) handling reassociation for *just* that specific case
and it preserves FMF. Unfortunately it's not rendered entirely redundant
by this patch, because it handles one more level of reassociation as
well.
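
A sketch of a reassociated case this now catches (my example, built on the classic merge `(fcmp ord x, 0) & (fcmp ord y, 0) -> fcmp ord x, y`):

```llvm
%a = fcmp ord float %x, 0.0
%b = fcmp ord float %y, 0.0
%inner = and i1 %b, %other
%r = and i1 %a, %inner
; =>
%ab = fcmp ord float %x, %y
%r = and i1 %ab, %other
```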
2024-11-25 10:21:38 +01:00
Stephen Tozer
d686e5cdaf
[DebugInfo][InstCombine] When replacing bswap idiom, add DebugLoc to new insts (#114231)
Currently, when InstCombineAndOrXor recognizes a bswap idiom and replaces
it with an intrinsic and other instructions, only the last instruction
receives the DebugLoc of the replaced instruction. This patch
applies the DebugLoc to all the generated instructions, to maintain some
degree of attribution.
2024-11-14 10:06:29 +00:00
Andreas Jonson
00b47b98d4 [NFC] Fix misplaced comment 2024-10-22 20:51:46 +02:00
XChy
a2ba438f3e
[InstCombine] Preserve the flag from RHS only if the and is bitwise (#113164)
Fixes #113123
Alive proof: https://alive2.llvm.org/ce/z/hnqeLC
2024-10-21 22:30:31 +08:00
Jay Foad
85c17e4092
[LLVM] Make more use of IRBuilder::CreateIntrinsic. NFC. (#112706)
Convert many instances of:
  Fn = Intrinsic::getOrInsertDeclaration(...);
  CreateCall(Fn, ...)
to the equivalent CreateIntrinsic call.
2024-10-17 16:20:43 +01:00
Nikita Popov
0f7d148db4 [InstCombine] Add shared helper for logical and bitwise and/or (NFC)
Add a helper for shared folds between logical and bitwise and/or
and move the and/or of icmp and fcmp folds in there. This makes
it easier to extend to more folds.

A possible extension would be to base the current and/or of icmp
reassociation logic on this helper, so that it for example also
applies to fcmp.
2024-10-17 14:25:44 +02:00
Yingwei Zheng
3bf2295ee0
[InstCombine] Drop samesign flag in foldAndOrOfICmpsWithConstEq (#112489)
In
5dbfca30c1
we assumed that RHS being poison implies LHS is also poison. This no
longer holds after introducing the samesign flag.

This patch drops the `samesign` flag on RHS if the original expression
is a logical and/or.

Closes #112467.
2024-10-16 16:24:44 +08:00
Yingwei Zheng
9edc454ee6
[InstCombine] Drop range attributes in foldIsPowerOf2OrZero (#112178)
Closes https://github.com/llvm/llvm-project/issues/112078.
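
For context, a sketch of the power-of-two-or-zero idiom this fold targets (my reconstruction; a range attribute inferred for the `ctpop` call must be dropped when the result is reused, see the linked issue):

```llvm
declare i32 @llvm.ctpop.i32(i32)

define i1 @src(i32 %x) {
  %zero = icmp eq i32 %x, 0
  %pop = call i32 @llvm.ctpop.i32(i32 %x)  ; any range attribute here must be dropped
  %one = icmp eq i32 %pop, 1
  %r = or i1 %zero, %one
  ret i1 %r
}
; =>  %r = icmp ult i32 %pop, 2
```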
2024-10-14 20:52:55 +08:00