With the introduction of CmpPredicate in 51a895a (IR: introduce struct
with CmpInst::Predicate and samesign), PatternMatch is one of the first
key pieces of infrastructure that must be updated to match a CmpInst
respecting samesign information. Implement this change to Cmp-matchers.
This is a preparatory step in migrating the codebase over to
CmpPredicate. Since no functional changes are desired at this stage,
we have chosen not to migrate CmpPredicate::operator==(CmpPredicate)
calls to use CmpPredicate::getMatching(), as that would have a visible
impact on tests that are not yet written: instead, we call
CmpPredicate::operator==(Predicate), preserving the old behavior, while
also inserting a few FIXME comments for follow-ups.
The (extended) bit width might not fit into the (non-extended)
type, resulting in an incorrect truncation of the compared value.
Fix this by using m_SpecificInt(), which is both simpler and
handles this correctly.
Fixes the assertion failure reported in:
https://github.com/llvm/llvm-project/pull/114539#issuecomment-2485799395
On a 32-bit target, if pointer arithmetic with `addrspace` is used in
i64 computation, the missed fold in InstCombine results in suboptimal
performance, unlike the same code compiled for a 64-bit target.
Rename the function to reflect its correct behavior and to be consistent
with `Module::getOrInsertFunction`. This is also in preparation for
adding a new `Intrinsic::getDeclaration` that will have behavior similar
to `Module::getFunction` (i.e., just lookup, no creation).
Added folds:
- `(add (sub X, Y), (sub Z, X))` -> `(sub Z, Y)`
- `(sub (add X, Y), (add X, Z))` -> `(sub Y, Z)`
The fold is typically handled in the `Reassociate` pass, but it fails
if the inner `sub`/`add` are multi-use. Less importantly, Reassociate
doesn't propagate flags correctly.
This patch adds the fold explicitly to InstCombine.
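A minimal IR sketch of the first fold (names and the `@use` helper are
illustrative; the extra use of the inner `sub` is what defeats
Reassociate):
```
declare void @use(i8)

; Before: (add (sub X, Y), (sub Z, X)) with a multi-use inner sub.
define i8 @src(i8 %x, i8 %y, i8 %z) {
  %s1 = sub i8 %x, %y
  call void @use(i8 %s1)
  %s2 = sub i8 %z, %x
  %r = add i8 %s1, %s2
  ret i8 %r
}

; After: the add folds to (sub Z, Y); %s2 becomes dead.
define i8 @tgt(i8 %x, i8 %y, i8 %z) {
  %s1 = sub i8 %x, %y
  call void @use(i8 %s1)
  %r = sub i8 %z, %y
  ret i8 %r
}
```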
Proofs: https://alive2.llvm.org/ce/z/p6JyRP
Closes #105866
This change also covers the fold of `zext(A > B) - zext(A < B)` since it
is already being canonicalized into the aforementioned pattern.
Proof: https://alive2.llvm.org/ce/z/AgnfMn
The idea behind this canonicalization is that it allows us to handle
fewer patterns, because we know that some will be canonicalized away. This is
indeed very useful to e.g. know that constants are always on the right.
However, this is only useful if the canonicalization is actually
reliable. This is the case for constants, but not for arguments: Moving
these to the right makes it look like the "more complex" expression is
guaranteed to be on the left, but this is not actually the case in
practice. It fails as soon as you replace the argument with another
instruction.
The end result is that things appear to work correctly in tests,
while they actually don't. We use the "thwart complexity-based
canonicalization" trick to handle this in tests, but it's often a
challenge for new contributors to get this right, and based on the
regressions this PR originally exposed, we clearly don't get this right
in many cases.
For this reason, I think that it's better to remove this complexity
canonicalization. It will make it much easier to write tests for
commuted cases and make sure that they are handled.
We have a general fold for `(zext (X +nuw C2)) + C1 --> zext (X + (C2 + trunc(C1)))`,
but this fold is disabled if the zext has an additional use.
If the two constants cancel, we can fold the whole expression to
zext(X) without increasing the number of instructions.
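For instance (illustrative constants, with C2 == 5 and C1 == -5 so that
the constants cancel):
```
declare void @use(i64)

define i64 @src(i32 %x) {
  %a = add nuw i32 %x, 5
  %z = zext i32 %a to i64   ; the extra use normally blocks the fold
  call void @use(i64 %z)
  %r = add i64 %z, -5
  ret i64 %r
}

; The +5 and -5 cancel, so %r is just zext(%x).
define i64 @tgt(i32 %x) {
  %a = add nuw i32 %x, 5
  %z = zext i32 %a to i64
  call void @use(i64 %z)
  %r = zext i32 %x to i64
  ret i64 %r
}
```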
This patch enables simplifying `sdiv` to `udiv` by preserving the `nsw`
flag for `(X | Op01C) + Op1C --> X + (Op01C + Op1C)` when the sum of `Op01C`
and `Op1C` does not overflow, and by preserving the `nuw` flag unconditionally.
Alive2 Proofs (provided by @nikic): https://alive2.llvm.org/ce/z/nrdCZT,
https://alive2.llvm.org/ce/z/YnJHnH
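A sketch of the `nsw` case (illustrative constants; the `shl` stands in
for whatever proves that `X` and `Op01C` share no bits, which the base
fold requires, and 3 + 16 == 19 does not overflow i8):
```
define i8 @src(i8 %x) {
  %v = shl i8 %x, 4        ; low 4 bits of %v are known zero
  %or = or i8 %v, 3
  %r = add nsw i8 %or, 16
  ret i8 %r
}

define i8 @tgt(i8 %x) {
  %v = shl i8 %x, 4
  %r = add nsw i8 %v, 19   ; nsw kept: the constant sum does not overflow
  ret i8 %r
}
```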
We can convert this to a select based on the `(icmp eq X, C)`, then
constant fold the addition, with the true arm being `(add C, (sext/zext 1))`
and the false arm being `(add X, 0)`, e.g.
- `(select (icmp eq X, C), (add C, (sext/zext 1)), (add X, 0))`.
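A small instance with `zext` and C == 5 (illustrative values); after
constant folding both arms, the whole thing is a select between 6 and X:
```
define i8 @src(i8 %x) {
  %c = icmp eq i8 %x, 5
  %z = zext i1 %c to i8
  %r = add i8 %x, %z
  ret i8 %r
}

define i8 @tgt(i8 %x) {
  %c = icmp eq i8 %x, 5
  %r = select i1 %c, i8 6, i8 %x   ; true arm: 5 + 1; false arm: %x + 0
  ret i8 %r
}
```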
This is essentially a specialization of the only case that seems to
actually show up from #89020.
Closes #93840
Currently, the OptimizePointerDifference fold does not trigger when
working on the sub of two geps where one of the geps has multiple uses,
to avoid duplicating the offset arithmetic too much.
However, there are cases where performing it would still be
clearly profitable, e.g. test_sub_ptradd_multiuse.
This patch drops the one-use restriction using the same strategy we use
in GEP comparison folds: If there are multiple uses, we rewrite the GEP
to use the expanded offset arithmetic instead (effectively
canonicalizing it into ptradd representation).
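A sketch of the multi-use case (names illustrative):
```
declare void @use(ptr)

define i64 @src(ptr %p, i64 %a, i64 %b) {
  %g1 = getelementptr inbounds i8, ptr %p, i64 %a
  call void @use(ptr %g1)              ; %g1 is multi-use
  %g2 = getelementptr inbounds i8, ptr %p, i64 %b
  %i1 = ptrtoint ptr %g1 to i64
  %i2 = ptrtoint ptr %g2 to i64
  %d = sub i64 %i1, %i2
  ret i64 %d
}

; The pointer difference reduces to the offset difference.
define i64 @tgt(ptr %p, i64 %a, i64 %b) {
  %g1 = getelementptr inbounds i8, ptr %p, i64 %a
  call void @use(ptr %g1)
  %d = sub i64 %a, %b
  ret i64 %d
}
```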
Fixes https://github.com/llvm/llvm-project/issues/88231.
Since `DivRemPairPass` runs after `ReassociatePass` in the optimization
pipeline, I decided to do this simplification in `InstCombine`.
Alive2: https://alive2.llvm.org/ce/z/Jgsiqf
Fixes #76128.
In #88217 a large set of matchers was changed to only accept poison
values in splats, but not undef values. This is because we now use
poison for non-demanded vector elements, and allowing undef can cause
correctness issues.
This patch covers the remaining matchers by changing the AllowUndef
parameter of getSplatValue() to AllowPoison instead. We also carry out
corresponding renames in matchers.
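For example (illustrative snippet), after this change a splat constant
with poison lanes is still recognized, while undef lanes are rejected:
```
define <4 x i8> @f(<4 x i8> %x, <4 x i8> %y) {
  ; still matched by the AllowPoison matchers as a splat of 1:
  %a = add <4 x i8> %x, <i8 1, i8 1, i8 poison, i8 1>
  ; no longer treated as a splat of 1 (undef lane):
  %b = add <4 x i8> %y, <i8 1, i8 1, i8 undef, i8 1>
  %r = mul <4 x i8> %a, %b
  ret <4 x i8> %r
}
```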
As a followup, we may want to change the default for things like m_APInt
to m_APIntAllowPoison (as this is much less risky when only allowing
poison), but this change doesn't do that.
There is one caveat here: We have a single place
(X86FixupVectorConstants) which does require handling of vector splats
with undefs. This is because this works on backend constant pool
entries, which currently still use undef instead of poison for
non-demanded elements (because SDAG as a whole does not have an explicit
poison representation). As it's just the single use, I've open-coded a
getSplatValueAllowUndef() helper there, to discourage use in any other
places.
Prior to #85863, the required parameters of llvm::isKnownNonZero were
Value and DataLayout. After, they are Value, Depth, and SimplifyQuery,
where SimplifyQuery is implicitly constructible from DataLayout. The
change to move Depth before SimplifyQuery required callers to be updated
unnecessarily, and as commented in #85863, we actually want Depth to be
after SimplifyQuery anyway so that it can be defaulted and the caller
does not need to specify it.
The full fold is one of the following:
1) `(fp_binop ({s|u}itofp x), ({s|u}itofp y))`
-> `({s|u}itofp (int_binop x, y))`
2) `(fp_binop ({s|u}itofp x), FpC)`
-> `({s|u}itofp (int_binop x, (fpto{s|u}i FpC)))`
It supports the following binops:
`fmul` -> `mul`
`fadd` -> `add`
`fsub` -> `sub`
Proofs: https://alive2.llvm.org/ce/z/zuacA8
The proofs time out, so they must be reproduced locally.
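A minimal instance of fold 1) with `fmul`/`mul` (the masks are an
illustrative way of making the integer multiply provably exact, which
the fold requires):
```
define float @src(i16 %a, i16 %b) {
  %x = and i16 %a, 15
  %y = and i16 %b, 15
  %fx = uitofp i16 %x to float
  %fy = uitofp i16 %y to float
  %r = fmul float %fx, %fy
  ret float %r
}

; x * y <= 225 fits in i16 and is exactly representable in float.
define float @tgt(i16 %a, i16 %b) {
  %x = and i16 %a, 15
  %y = and i16 %b, 15
  %m = mul i16 %x, %y
  %r = uitofp i16 %m to float
  ret float %r
}
```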
Closes #82555
This patch refactors the interface of the `computeKnownFPClass` family
to pass `SimplifyQuery` directly.
The motivation of this patch is to compute known fpclass with
`DomConditionCache`, which was introduced by
https://github.com/llvm/llvm-project/pull/73662. With
`DomConditionCache`, we can do more optimization with context-sensitive
information.
Example (extracted from
[fmt/format.h](e17bc67547/include/fmt/format.h (L3555-L3566))):
```
define float @test(float %x, i1 %cond) {
%i32 = bitcast float %x to i32
%cmp = icmp slt i32 %i32, 0
br i1 %cmp, label %if.then1, label %if.else
if.then1:
%fneg = fneg float %x
br label %if.end
if.else:
br i1 %cond, label %if.then2, label %if.end
if.then2:
br label %if.end
if.end:
%value = phi float [ %fneg, %if.then1 ], [ %x, %if.then2 ], [ %x, %if.else ]
%ret = call float @llvm.fabs.f32(float %value)
ret float %ret
}
```
We can prove the signbit of `%value` is always zero. Then the fabs can
be eliminated.
`(ctpop (not x))` <-> `(sub nuw nsw BitWidth(x), (ctpop x))`. The
`sub` expression can sometimes be constant folded depending on the use
case of `(ctpop (not x))`.
This patch adds folds for the following cases:
`(add/sub/disjoint_or C, (ctpop (not x)))`
-> `(add/sub/disjoint_or C', (ctpop x))`
`(cmp pred C, (ctpop (not x)))`
-> `(cmp swapped_pred C', (ctpop x))`
Where `C'` depends on how we constant fold `C` with `BitWidth(x)` for
the given opcode.
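For example, with i8 we have `ctpop(~x) == 8 - ctpop(x)` (illustrative
constants; note the opcode itself can change during the constant fold):
```
declare i8 @llvm.ctpop.i8(i8)

define i8 @src(i8 %x) {
  %not = xor i8 %x, -1
  %p = call i8 @llvm.ctpop.i8(i8 %not)
  %r = sub i8 10, %p                 ; 10 - (8 - ctpop(x))
  ret i8 %r
}

define i8 @tgt(i8 %x) {
  %p = call i8 @llvm.ctpop.i8(i8 %x)
  %r = add i8 %p, 2                  ; == 2 + ctpop(x)
  ret i8 %r
}
```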
Proofs: https://alive2.llvm.org/ce/z/qUgfF3
Closes #77859
This patch cleans up the duplicate code for folding commutative binops
over `select/phi/minmax`.
Related commits:
+ select support:
88cc35b27e
+ phi support:
8674a023bc
+ minmax support:
624973806c
Use disjoint instead of haveNoCommonBitsSet(), which is slightly
stronger in case the information used to infer disjoint has been
lost.
Introduce the m_DisjointOr() matcher to make handling cases like
this cleaner.
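For reference, the `disjoint` flag is recorded on the `or` itself, so
the fact survives even if the known-bits reasoning that established it
(the masks in this illustrative snippet) is later lost:
```
define i8 @f(i8 %a, i8 %b) {
  %lo = and i8 %a, 15
  %hi = and i8 %b, -16
  ; the masks prove no common bits; `disjoint` records that fact:
  %o = or disjoint i8 %lo, %hi
  ret i8 %o
}
```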
Check whether the operands of I are used in more than one place; if so,
they cannot be deleted. Since a mul instruction has far more weight than
add and shl instructions in IR, this method then cannot achieve the goal
of simplifying instructions, so just return null.
This is nearly NFC; the only potential change is to the order in which
values are created/named.
Otherwise it is a slight speed boost/simplification to avoid having to
go through the `getFreelyInverted` recursive logic twice to simplify
the extra `not` op.
Some instructions (select/min/max) are inverted by just inverting the
operands, so whether they are free to invert is really just whether
their operands are free to invert.
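For example, a `not` of a select sinks into the arms:
```
define i8 @src(i1 %c, i8 %a, i8 %b) {
  %s = select i1 %c, i8 %a, i8 %b
  %r = xor i8 %s, -1
  ret i8 %r
}

define i8 @tgt(i1 %c, i8 %a, i8 %b) {
  %na = xor i8 %a, -1
  %nb = xor i8 %b, -1
  %r = select i1 %c, i8 %na, i8 %nb
  ret i8 %r
}
```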
Differential Revision: https://reviews.llvm.org/D159056
If `Y` and `Z` are constants, then we can simplify to `(X - W) + (Y -
Z)`. If `Y == Z`, we can fold to `X - W`.
Note these transforms exist outside of InstCombine. The purpose of this
commit is primarily to make it so that folds can generate these
simplifiable patterns without having to worry about creating an infinite
loop.