This reverts commit c45f939e34dafaf0f57fd1d93df7df5cc89f1dec.
This refactoring turned out not to be useful for the case I originally
had in mind, so revert it for now.
We can transfer a nuw flag from the gep to the add. Additionally,
the inbounds + nneg case can be relaxed to nusw + nneg. Finally,
don't forget to pass the correct context instruction to
SimplifyQuery.
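For illustration, a minimal Alive2-style sketch of the nuw transfer
(function names are placeholders):
``` llvm
define i32 @src(i32 %x, i32 %y) {
  %base = inttoptr i32 %x to ptr
  %ptr = getelementptr nuw i8, ptr %base, i32 %y
  %r = ptrtoint ptr %ptr to i32
  ret i32 %r
}

define i32 @tgt(i32 %x, i32 %y) {
  %r = add nuw i32 %x, %y
  ret i32 %r
}
```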
We can't preserve the context across a non-speculatable instruction,
as this might introduce a trap. Alternatively, we could insert all
the replacement instructions at the use site, but that would be a
more intrusive change for the sake of this edge case.
Fixes https://github.com/llvm/llvm-project/issues/95547.
Fold
``` llvm
define i32 @src(i32 %x, i32 %y) {
%base = inttoptr i32 %x to ptr
%ptr = getelementptr inbounds i8, ptr %base, i32 %y
%r = ptrtoint ptr %ptr to i32
ret i32 %r
}
```
where both `%base` and `%ptr` have only one use, to
``` llvm
define i32 @tgt(i32 %x, i32 %y) {
%r = add i32 %x, %y
ret i32 %r
}
```
The `add` can be `nuw` if the GEP is `inbounds` and the offset is
non-negative. The relevant Alive2 proof is
https://alive2.llvm.org/ce/z/nP3RWy.
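The non-negative case might look like this minimal sketch (names
hypothetical; the zext is what makes the offset provably non-negative):
``` llvm
define i32 @src_nneg(i32 %x, i16 %y) {
  %off = zext i16 %y to i32   ; offset is provably non-negative
  %base = inttoptr i32 %x to ptr
  %ptr = getelementptr inbounds i8, ptr %base, i32 %off
  %r = ptrtoint ptr %ptr to i32
  ret i32 %r
}

define i32 @tgt_nneg(i32 %x, i16 %y) {
  %off = zext i16 %y to i32
  %r = add nuw i32 %x, %off
  ret i32 %r
}
```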
### Motivation
It seems unnecessary to convert `int` to `ptr` just to get its offset.
In most cases both forms generate the same assembly, but the pointer
form sometimes misses optimizations, since the analysis of a `GEP` is
not as thorough as that of arithmetic operations. One example is
e3c822bf41/bench/protobuf/optimized/generated_message_reflection.cc.ll (L39860-L39873)
``` llvm
%conv.i188 = zext i32 %145 to i64
%add.i189 = add i64 %conv.i188, %125
%146 = load i16, ptr %num_aux_entries10.i, align 2
%conv2.i191 = zext i16 %146 to i64
%mul.i192 = shl nuw nsw i64 %conv2.i191, 3
%add3.i193 = add i64 %add.i189, %mul.i192
%147 = inttoptr i64 %add3.i193 to ptr
%sub.ptr.lhs.cast.i195 = ptrtoint ptr %144 to i64
%sub.ptr.rhs.cast.i196 = ptrtoint ptr %143 to i64
%sub.ptr.sub.i197 = sub i64 %sub.ptr.lhs.cast.i195, %sub.ptr.rhs.cast.i196
%add.ptr = getelementptr inbounds i8, ptr %147, i64 %sub.ptr.sub.i197
%sub.ptr.lhs.cast = ptrtoint ptr %add.ptr to i64
%sub.ptr.sub = sub i64 %sub.ptr.lhs.cast, %125
```
where `%125` is first added (producing `%add.i189`) and later
subtracted again (producing `%sub.ptr.sub`), so the two operations can
be cancelled.
In #88217 a large set of matchers was changed to only accept poison
values in splats, but not undef values. This is because we now use
poison for non-demanded vector elements, and allowing undef can cause
correctness issues.
This patch covers the remaining matchers by changing the AllowUndef
parameter of getSplatValue() to AllowPoison instead. We also carry out
corresponding renames in matchers.
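As a minimal illustration, an AllowPoison splat matcher still treats
the following constant as a splat of 2, while a vector with undef
lanes would no longer be accepted:
``` llvm
define <4 x i32> @splat_with_poison(<4 x i32> %x) {
  ; still a splat of 2 under AllowPoison, so this can fold to a shift
  %r = mul <4 x i32> %x, <i32 2, i32 poison, i32 2, i32 2>
  ret <4 x i32> %r
}
```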
As a followup, we may want to change the default for things like m_APInt
to m_APIntAllowPoison (as this is much less risky when only allowing
poison), but this change doesn't do that.
There is one caveat here: We have a single place
(X86FixupVectorConstants) which does require handling of vector splats
with undefs. This is because this works on backend constant pool
entries, which currently still use undef instead of poison for
non-demanded elements (because SDAG as a whole does not have an explicit
poison representation). As it's just the single use, I've open-coded a
getSplatValueAllowUndef() helper there, to discourage use in any other
places.
This reverts commit d80d5b923c6f611590a12543bdb33e0c16044d44.
It wasn't a particularly important transform to begin with, and it
caused some codegen regressions on targets that prefer `sitofp`, so
drop it. We might revisit this along with adding the `nneg` flag to
`uitofp`, so that it's easily reversible for the backend.
This patch enables more optimization after canonicalizing `fmul X, 0.0`
into a copysign.
I decided to implement this fold in InstCombine because
`computeKnownFPClass` may be expensive.
Alive2: https://alive2.llvm.org/ce/z/ASM8tQ
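For context, a hedged sketch of the canonicalization this builds on:
with nnan and ninf, `X * 0.0` can only produce a zero carrying X's
sign, i.e. a copysign (function names are placeholders):
``` llvm
declare float @llvm.copysign.f32(float, float)

define float @src(float %x) {
  %r = fmul nnan ninf float %x, 0.0
  ret float %r
}

define float @tgt(float %x) {
  ; same sign behavior as the fmul above, but easier to analyze
  %r = call float @llvm.copysign.f32(float 0.0, float %x)
  ret float %r
}
```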
This fixes the case where we would shrink an frem to half and then
bitcast to bfloat, producing invalid results. The transformation was
written under the assumption that there is only one type with a given
bit width.
Also add a strategic assert to CastInst::CreateFPCast to turn this
miscompilation into a crash.
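A hedged reconstruction of the kind of input that went wrong (names
hypothetical): half and bfloat merely share a bit width, so performing
the frem in half and bitcasting the result to bfloat is invalid:
``` llvm
define bfloat @shrink_frem(float %x, float %y) {
  %f = frem float %x, %y
  ; must not become "frem in half + bitcast to bfloat"
  %r = fptrunc float %f to bfloat
  ret bfloat %r
}
```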
Use KnownBits to infer the nneg flag on zext instructions.
Currently we only set nneg when converting sext -> zext, but don't set
it when we have a zext in the first place. If we want to use it in
optimizations, we should make sure the flag inference is consistent.
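A minimal sketch of the inference (names hypothetical):
``` llvm
define i64 @zext_known_nonneg(i32 %x) {
  %a = lshr i32 %x, 1       ; KnownBits: sign bit of %a is zero
  %z = zext i32 %a to i64   ; can therefore be marked: zext nneg
  ret i64 %z
}
```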
An issue arose in the handling of shift amounts when simplifying
narrowed funnel shifts: shift amounts were incorrectly truncated when
their type was narrower than the target bit width. This has been
addressed by zero-extending `ShAmt` in such cases.
Fixes: https://github.com/llvm/llvm-project/issues/71463.
Proof: https://alive2.llvm.org/ce/z/5draKz.
`and` is generally better supported, so if we have a `ptrmask` anyway,
we might as well use an `and`.
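One flavor of this, sketched under the assumption that the masked
pointer is only consumed as an integer (names hypothetical):
``` llvm
declare ptr @llvm.ptrmask.p0.i64(ptr, i64)

define i64 @src(ptr %p) {
  %m = call ptr @llvm.ptrmask.p0.i64(ptr %p, i64 -8)
  %r = ptrtoint ptr %m to i64
  ret i64 %r
}

define i64 @tgt(ptr %p) {
  %i = ptrtoint ptr %p to i64
  %r = and i64 %i, -8
  ret i64 %r
}
```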
Differential Revision: https://reviews.llvm.org/D156640
Closes #67166
The m_ZExtOrSExt / m_Trunc in the following code can match constant
expressions, which we don't want here. Make sure we bail out early
for non-immediate constants.
Builds on #67982 which recently introduced the nneg flag on a zext
instruction. InstCombine is one of our largest canonicalizers of zext
from non-negative sext instructions, so set the flag there.
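A minimal sketch of the canonicalization (names hypothetical):
``` llvm
define i64 @src(i32 %x) {
  %p = and i32 %x, 1000       ; known non-negative
  %s = sext i32 %p to i64
  ret i64 %s
}

define i64 @tgt(i32 %x) {
  %p = and i32 %x, 1000
  %s = zext nneg i32 %p to i64
  ret i64 %s
}
```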
This regression started triggering after commit f400daa, which fixed
an infinite loop issue.
In this case we know the shift count is 0, so the pattern
(iN (~X) u>> (N - 1)) from commit 21d3871, where N is the bit width
of X, will not fire.
Fixes https://github.com/llvm/llvm-project/issues/68465.
As per my proposal for how to eliminate debug intrinsics [0], for various
places in InstCombine prefer to insert using an instruction iterator rather
than an instruction pointer. This is so that we can eventually pass more
information in the iterator class. The call sites where I've changed
the spelling are those necessary to build a stage2 clang that produces
an identical binary in the coming no-debug-intrinsics mode.
[0] https://discourse.llvm.org/t/rfc-instruction-api-changes-needed-to-eliminate-debug-intrinsics-from-ir/68939
Differential Revision: https://reviews.llvm.org/D152543
Replace these with IRBuilder uses, as we don't (from a type
perspective) care about Constant results.
Switch the predicate to m_ImmConstant() instead of isa<Constant>
to guarantee that these do get folded away and our assumptions
about simplifications hold true.
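An illustrative sketch of the distinction (names hypothetical): both
operands below are Constants, but only the immediate one is guaranteed
to fold:
``` llvm
@g = external global i8

define i64 @imm(i64 %x) {
  %r = add i64 %x, 42   ; matched by m_ImmConstant(), folds reliably
  ret i64 %r
}

define i64 @expr(i64 %x) {
  ; isa<Constant> accepts this constant expression, but it may never
  ; simplify to a plain integer
  %r = add i64 %x, ptrtoint (ptr @g to i64)
  ret i64 %r
}
```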
/data/llvm-project/llvm/lib/Transforms/InstCombine/InstCombineCasts.cpp:32:15: error: function 'decomposeSimpleLinearExpr' is not needed and will not be emitted [-Werror,-Wunneeded-internal-declaration]
static Value *decomposeSimpleLinearExpr(Value *Val, unsigned &Scale,
^
1 error generated.
This is part of select constant expression removal. As there is
only a single place where this is used, just expand it to explicit
constant folding calls.
(Normally we'd just use the IRBuilder here, but this isn't possible
due to mergeUndefsWith use).
The reported compile-time regression has been addressed in
47f9109dff80a1abbe2705ee71dc0882b1d62274.
Additionally, this contains a change to immediately fold zext
with constant operand, even if it's used in a trunc. I'm not sure
if this is relevant for anything, but I noticed it as a behavioral
discrepancy when investigating this issue.
-----
InstCombine currently performs a constant folding attempt as part
of the main InstCombine loop, before visiting the instruction.
However, each visit method will also attempt to simplify the
instruction, which will in turn constant fold it. (Additionally,
we also constant fold instructions before the main InstCombine loop
and use a constant folding IR builder, so this is doubly redundant.)
There is one place where InstCombine visit methods currently don't
call into simplification, and that's casts. To be conservative,
I've added an explicit constant folding call there (though it has
no impact on tests).
This makes for a mild compile-time improvement and in particular
mitigates the compile-time regression from enabling load
simplification in be88b5814d9efce131dbc0c8e288907e2e6c89be.
Differential Revision: https://reviews.llvm.org/D144369
This increases compile time with ubsan on ARM from 3 to 14 minutes for
a single file. I uploaded a reproducer to D144369.
We also see random timeouts on internal x86_64 builds.
Both issues bisected to this commit.
This reverts commit 45a0b812fa13ec255cae91f974540a4d805a8d79.
The m_VScale() matcher is unusual in that it requires a DataLayout,
which it currently uses to determine the size of the GEP type. However,
I believe it is sufficient to check for the canonical
<vscale x 1 x i8> form here -- I don't think there's a need to
recognize exotic variations like <vscale x 1 x i4> as a vscale
constant representation as well.
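For reference, a hedged sketch of the canonical constant-expression
form being matched (function name hypothetical):
``` llvm
define i64 @vscale_constexpr() {
  ; sizeof(<vscale x 1 x i8>) == vscale, expressed as a gep from null
  %vs = ptrtoint ptr getelementptr (<vscale x 1 x i8>, ptr null, i64 1) to i64
  ret i64 %vs
}
```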
Differential Revision: https://reviews.llvm.org/D144566
InstCombine currently performs a constant folding attempt as part
of the main InstCombine loop, before visiting the instruction.
However, each visit method will also attempt to simplify the
instruction, which will in turn constant fold it. (Additionally,
we also constant fold instructions before the main InstCombine loop
and use a constant folding IR builder, so this is doubly redundant.)
There is one place where InstCombine visit methods currently don't
call into simplification, and that's casts. To be conservative,
I've added an explicit constant folding call there (though it has
no impact on tests).
This makes for a mild compile-time improvement and in particular
mitigates the compile-time regression from enabling load
simplification in be88b5814d9efce131dbc0c8e288907e2e6c89be.
Differential Revision: https://reviews.llvm.org/D144369