This mostly manifested as broken constant folding. It was mishandling the
dynamic denormal mode, and it was also mishandling literal signaling NaNs,
such that they would be treated as poison.
https://reviews.llvm.org/D155437
This patch is separated from D154953 to see which tests are affected by this
change alone, as requested in a review comment.
Depends on the related LangRef update in D155193.
Reviewed By: paulwalker-arm, nikic, david-arm
Differential Revision: https://reviews.llvm.org/D155350
Improve computeKnownFPClass select handling to cover the case where
the condition performs a class test. This allows us to recognize
no-NaNs in cases like:
%not.nan = fcmp ord float %x, 0.0
%select = select i1 %not.nan, float %x, float 0.0
Math library code has similar edge case filtering on the inputs and
final results.
https://reviews.llvm.org/D153089
The implementations of a number of math functions on AMDGPU involve
pre- and post-scaling the inputs out of the denormal range. If these
are chained together we can possibly fold them out.
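For illustration, a rough sketch of the shape of such code, assuming
llvm.ldexp as the scaling primitive (the actual library code uses its own
helpers and scale factors):

define float @scaled_op_sketch(float %x) {
  %scaled = call float @llvm.ldexp.f32.i32(float %x, i32 32)  ; pre-scale out of the denormal range
  %op = fmul float %scaled, %scaled                           ; stand-in for the core operation
  %res = call float @llvm.ldexp.f32.i32(float %op, i32 -64)   ; post-scale the result back down
  ret float %res
}
declare float @llvm.ldexp.f32.i32(float, i32)

When two such operations are chained, the post-scale of one and the pre-scale
of the next may cancel.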
computeConstantRange seems weaker than computeKnownBits, so this
regresses some of the older vector tests.
Support the canonical range check pattern for KnownBits assumptions.
This is the same as the generic ConstantRange handling, just shifted
by an offset.
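For illustration, a sketch of the pattern (the offset and width are made up):

define i8 @range_check_assume(i8 %x) {
  ; Canonical range check: (%x - 32) u< 16 asserts that %x is in [32, 48),
  ; so the top four bits of %x are known to be 0010.
  %off = add i8 %x, -32
  %in.range = icmp ult i8 %off, 16
  call void @llvm.assume(i1 %in.range)
  ret i8 %x
}
declare void @llvm.assume(i1 noundef)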
For non-equality icmps, we don't do any KnownBits-specific
reasoning, and just use the known bits as a constraint on the range.
We can generalize this for all predicates by round-tripping through
ConstantRange and using makeAllowedICmpRegion().
The minor improvement in zext-or-icmp is because we assume that
a value is in the range [0,1), which means it must be zero.
When processing assumes, we also handle assumes on ptrtoint of the
value. In canonical IR, these will have the same size as the value.
However, in non-canonical IR there may be an implicit zext or
trunc, which results in a bit width mismatch. We currently handle
this by adjusting bitwidth everywhere, but this is fragile and I'm
pretty sure that the way we do this is incorrect for some predicates,
because we effectively end up commuting an ext/trunc and an icmp.
Instead, add an m_PtrToIntSameSize() matcher that will only handle
bitwidth preserving cases. For the bitwidth-changing cases, wait
until they have been canonicalized.
The original handling for this was added purely to prevent crashes
in an earlier implementation which failed to account for this
entirely.
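A sketch of the canonical case that is still handled, assuming a 64-bit
pointer (constants made up):

define void @assume_on_ptrtoint(ptr %p) {
  ; The ptrtoint result has the same width as %p, so the assume directly
  ; constrains the known bits of %p. A non-canonical 'ptrtoint ptr %p to i32'
  ; would imply a truncation and is now left alone until canonicalized.
  %p.int = ptrtoint ptr %p to i64
  %small = icmp ult i64 %p.int, 1024
  call void @llvm.assume(i1 %small)
  ret void
}
declare void @llvm.assume(i1 noundef)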
Use a unit test, since I don't see any existing uses that try to make use of
the high bits of a pointer.
This will also assert if the metadata type doesn't match the pointer
width, but I consider that a verifier defect; it shouldn't need to be
handled here.
AMDGPU allocates LDS globals by assigning !absolute_symbol with the
final fixed address. Tracking that the high bits are 0 may help with
addressing mode matching.
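A hypothetical example (the address is made up), assuming 32-bit LDS pointers:

@lds = addrspace(3) global [64 x i32] undef, !absolute_symbol !0
; The range [1024, 1025) pins the address to exactly 1024, so all bits above
; bit 10 are known zero.
!0 = !{i32 1024, i32 1025}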
The DataLayout alloca address space is the address space that should
be used when creating new allocas. However, not all allocas are
required to be in this address space. The isKnownNonZero() check
should work on the actual address space of the alloca, not the
default alloca address space.
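A hypothetical example of the distinction (the datalayout string is reduced
to just the alloca address space):

target datalayout = "A5"

define i1 @alloca_in_non_default_as() {
  ; The datalayout's alloca address space is 5, but this alloca lives in
  ; address space 0, so the known-non-zero query must reason about address
  ; space 0 rather than the default alloca address space.
  %a = alloca i32, addrspace(0)
  %nonnull = icmp ne ptr %a, null
  ret i1 %nonnull
}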
This reverts commit 464dcab8a6c823c9cb462bf4107797b8173de088.
Going to fix the size regression forward instead, since otherwise more dependent patches would need to be reverted.
Define the function @llvm.amdgcn.make.buffer.rsrc, which takes a 64-bit
pointer, the 16-bit stride/swizzling constant that replaces the high 16
bits of an address in a buffer resource, the 32-bit extent/number of
elements, and the 32-bit flags (the latter two being the 3rd and 4th
words of the resource), and combines them into a ptr addrspace(8).
This intrinsic is lowered during the early phases of the backend.
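A sketch of a call to the intrinsic (the constant arguments and the .p0
overload suffix are illustrative assumptions):

define ptr addrspace(8) @make_rsrc(ptr %base, i32 %num.records) {
  ; Arguments: the 64-bit base pointer, the stride/swizzle constant for the
  ; high 16 bits of the address, the extent (word 3), and the flags (word 4).
  %rsrc = call ptr addrspace(8) @llvm.amdgcn.make.buffer.rsrc.p0(ptr %base, i16 0, i32 %num.records, i32 0)
  ret ptr addrspace(8) %rsrc
}
declare ptr addrspace(8) @llvm.amdgcn.make.buffer.rsrc.p0(ptr, i16, i32, i32)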
This intrinsic is needed so that alias analysis can correctly infer
that a certain buffer resource points to the same memory as some
global pointer. Previous methods of constructing buffer resources,
which relied on ptrtoint, would not allow for such an inference.
Depends on D148184
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D148957
This fixes the largest remaining discrepancy between results of
computeKnownBits() and SimplifyDemandedBits(). We only care about
the multi-use case here, because the assume necessarily introduces
an extra use.
These implement essentially the same thing, so normalize
ValueTracking to use SimplifyQuery. In the future we can directly
expose the SimplifyQuery-based APIs.
The ORE argument threaded through ValueTracking is used only in a
single, untested place. It is also essentially never passed: The
only places that do so have been added very recently as part of the
KnownFPClass migration, which is vanishingly unlikely to hit this
code path. Remove this effectively dead argument.
Differential Revision: https://reviews.llvm.org/D151562
Make ValueTracking directly call the KnownBits shift helpers, which
provides more precise results.
Unfortunately, ValueTracking has a special case where sometimes we
determine non-zero shift amounts using isKnownNonZero(). I have my
doubts about the usefulness of that special case (it is only tested
in a single unit test), but I've reproduced it via an extra parameter
to the KnownBits methods.
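A sketch of the kind of case that special case covers (my own example, not
the one from the unit test):

define i8 @shl_by_nonzero_amount(i8 %x, i8 %y) {
  ; The exact shift amount is unknown and its known bits say nothing, but
  ; isKnownNonZero() can prove 'add nuw %y, 1' is non-zero, so bit 0 of the
  ; shift result is known to be zero.
  %amt = add nuw i8 %y, 1
  %r = shl i8 %x, %amt
  ret i8 %r
}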
Differential Revision: https://reviews.llvm.org/D151816
This reverts commit 754f3ae65518331b7175d7a9b4a124523ebe6eac.
Unfortunately the change can cause regressions due to dropping flags
from instructions (like nuw, nsw, inbounds), preventing further optimizations
that depend on those flags.
A simple example is the IR below, where `inbounds` is dropped with the
patch; see also the phase-ordering test added in 7c91d82ab912fae8b.
define i1 @test(ptr %base, i64 noundef %len, ptr %p2) {
bb:
%gep = getelementptr inbounds i32, ptr %base, i64 %len
%c.1 = icmp uge ptr %p2, %base
%c.2 = icmp ult ptr %p2, %gep
%select = select i1 %c.1, i1 %c.2, i1 false
ret i1 %select
}
For more discussion, see D149404.
Implement precise nuw/nsw support in the KnownBits implementation,
replacing the rather crude handling in ValueTracking.
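For illustration, one kind of fact the flag-aware handling can derive (my own
example, with made-up constants):

define i8 @add_nuw_top_bit(i8 %x, i8 %y) {
  ; %x.set has its top bit known set, so it is at least 128. With nuw the sum
  ; cannot wrap past 255, so the result stays in [128, 255] and its top bit is
  ; also known set; without nuw the top bit would be unknown.
  %x.set = or i8 %x, 128
  %sum = add nuw i8 %x.set, %y
  ret i8 %sum
}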
Differential Revision: https://reviews.llvm.org/D151208
I removed the conflict check from computeKnownBitsFromShiftOperator()
in D150648 assuming that this is now handled on the KnownBits side.
However, the nsw handling is still inside ValueTracking, so we
still need to handle conflicts there. Restore the check closer to
where it is relevant.
Fixes https://github.com/llvm/llvm-project/issues/62908.
The KnownBits implementation covers all the cases previously handled
for `uadd.sat`/`usub.sat`, as well as some additional ones. We previously
were not handling the `ssub.sat`/`sadd.sat` cases at all.
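For example (my own sketch, not one of the added tests):

define i8 @uadd_sat_always_saturates(i8 %x, i8 %y) {
  ; Both operands have their top bit known set, so the unsigned sum is at
  ; least 256 and always saturates: every bit of the result is known one.
  %x.h = or i8 %x, 128
  %y.h = or i8 %y, 128
  %r = call i8 @llvm.uadd.sat.i8(i8 %x.h, i8 %y.h)
  ret i8 %r
}
declare i8 @llvm.uadd.sat.i8(i8, i8)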
Reviewed By: RKSimon
Differential Revision: https://reviews.llvm.org/D150103
For always-poison shifts, any KnownBits return value is valid.
Currently we return unknown, but returning zero is generally more
profitable. We had some code in ValueTracking that tried to do this,
but it was actually dead code.
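A minimal example of such a shift:

define i8 @always_poison_shift(i8 %x) {
  ; The shift amount equals the bit width, so the result is always poison and
  ; any KnownBits answer is valid; returning zero is the most useful choice.
  %r = shl i8 %x, 8
  ret i8 %r
}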
Differential Revision: https://reviews.llvm.org/D150648
The only value that can produce -0 is exactly -0; no rounding is involved. If the
denormal mode flushes denormal inputs, however, a negative value could produce -0.
The constrained intrinsics do not track the denormal mode, and this is just
generally broken in the current set of FP predicates. The move to computeKnownFPClass
will address some of these issues.
This patch adds a new API `impliesPoisonIgnoreFlagsOrMetadatas`, which is
the same as `impliesPoison` except that it ignores poison-generating flags
and metadata while implying poison, and records the ignored instructions.
In InstCombineSelect, replace `impliesPoison` with
`impliesPoisonIgnoreFlagsOrMetadatas` to allow more patterns like
`select i1 %a, i1 %b, i1 false` to be optimized to and/or instructions
by dropping the poison-generating flags or metadata.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D149404
For some reason the inliner calls simplifyInstruction with disembodied
instructions. I consider this to be an API defect: either the instruction
should always be inserted prior to simplification, or we should at least
pass in the new function for the context.