154 Commits

Author SHA1 Message Date
paperchalice
8bacfb2538
[AMDGPU] Remove UnsafeFPMath uses (#151079)
Remove `UnsafeFPMath` uses in the AMDGPU part; they block some bugfixes related
to clang, and the ultimate goal is to remove the `resetTargetOptions` method in
`TargetMachine` (see the FIXME in `resetTargetOptions`).
See also
https://discourse.llvm.org/t/rfc-honor-pragmas-with-ffp-contract-fast

https://discourse.llvm.org/t/allowfpopfusion-vs-sdnodeflags-hasallowcontract

---------

Co-authored-by: Matt Arsenault <arsenm2@gmail.com>
2025-07-31 17:36:57 +08:00
Pierre van Houtryve
29e14c3b44
[AMDGPU] Remove widen-16-bit-ops from CGP (#145483)
This was already off by default, so there is no codegen change.
2025-07-16 10:14:19 +02:00
Ramkumar Ramachandra
b40e4ceaa6
[ValueTracking] Make Depth last default arg (NFC) (#142384)
Having a finite Depth (or recursion limit) for computeKnownBits is very
limiting, but is currently a load-bearing necessity, as all KnownBits
are recomputed on each call and there is no caching. As a prerequisite
for an effort to remove the recursion limit altogether, either using a
clever caching technique, or writing an easily-invalidable KnownBits
analysis, make the Depth argument in APIs in ValueTracking uniformly the
last argument with a default value. This would aid in removing the
argument when the time comes, as many callers that currently pass 0
explicitly are now updated to omit the argument altogether.
2025-06-03 17:12:24 +01:00
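A hedged sketch of the reordering (the exact LLVM declarations vary across overloads; illustrative only):

```
// Before: Depth sat in the middle and had to be passed explicitly.
KnownBits computeKnownBits(const Value *V, unsigned Depth,
                           const SimplifyQuery &Q);
// After: Depth is the last argument and defaults to 0, so most callers
// can simply omit it.
KnownBits computeKnownBits(const Value *V, const SimplifyQuery &Q,
                           unsigned Depth = 0);
```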
Simon Pilgrim
111effe05e AMDGPUCodeGenPrepare.cpp - fix MSVC operator precedence warning. NFC. 2025-06-02 09:47:52 +01:00
Matt Arsenault
cc8d253f39
AMDGPU: Handle other fmin flavors in fract combine (#141987)
Since the input is either known not to be a NaN, or we have explicit
code checking whether the input is a NaN, any of the three fmin flavors is
valid to match.
2025-05-29 22:11:01 +02:00
anjenner
a36cb01ea7
[AMDGPU] Handle CreateBinOp not returning BinaryOperator (#137791)
AMDGPUCodeGenPrepareImpl::visitBinaryOperator() calls
Builder.CreateBinOp() and casts the resulting Value as a BinaryOperator
without checking, leading to an assert failure in a case found by
fuzzing. In this case, the operands are constant and CreateBinOp does
constant folding, so it returns a Constant instead of a BinaryOperator
(see the sketch after this entry).
2025-05-29 19:10:35 +02:00
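A minimal sketch of the defensive pattern the fix implies, with illustrative names (`Opc`, `LHS`, `RHS`, `I`):

```
Value *NewVal = Builder.CreateBinOp(Opc, LHS, RHS);
// CreateBinOp may constant-fold and hand back a Constant, so an
// unchecked cast<BinaryOperator> would assert. Probe with dyn_cast
// and bail out when no instruction was created.
if (auto *NewBinOp = dyn_cast<BinaryOperator>(NewVal)) {
  NewBinOp->copyIRFlags(&I);
  // ... continue rewriting with NewBinOp ...
}
```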
Kazu Hirata
1e8e662174
[AMDGPU] Remove unused includes (NFC) (#141376)
These are identified by misc-include-cleaner.  I've filtered out those
that break builds.  Also, I'm staying away from llvm-config.h,
config.h, and Compiler.h, which likely cause platform- or
compiler-specific build failures.
2025-05-24 14:48:46 -07:00
Pierre van Houtryve
aacebaeab5
[AMDGPU] Do not promote uniform i16 operations to i32 in CGP (#140208)
For the majority of cases, this is a neutral or positive change.
There are even testcases that greatly benefit from it, but some regressions are possible.
There is #140040 for GlobalISel that would need to be fixed, but it's only a one-instruction regression and I think it can be fixed later.

Solves #64591
2025-05-16 10:31:03 +02:00
Craig Topper
123758b1f4
[IRBuilder] Add versions of CreateInsertVector/CreateExtractVector that take a uint64_t index. (#138324)
Most callers want a constant index. Instead of making every caller
create a ConstantInt, we can do it in IRBuilder. This is similar to
CreateInsertElement/CreateExtractElement (a short sketch follows this entry).
2025-05-02 16:10:18 -07:00
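A sketch of the convenience this adds (values illustrative):

```
// Before: the caller had to wrap the index in a ConstantInt.
Value *A = Builder.CreateInsertVector(WideVecTy, PoisonVal, SubVec,
                                      Builder.getInt64(0));
// After: the new overload takes the index directly.
Value *B = Builder.CreateInsertVector(WideVecTy, PoisonVal, SubVec,
                                      uint64_t(0));
```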
Jay Foad
9060ca0191
[AMDGPU] Check for nonnull loads feeding addrspacecast (#138184)
Handle nonnull loads just like nonnull arguments when checking for
addrspacecasts that are known never null.
2025-05-02 12:54:22 +01:00
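A minimal sketch of the idea (illustrative; the real helper also covers nonnull arguments and other sources of nonnull):

```
// A load tagged with !nonnull yields a pointer known never to be null,
// so treat it like a nonnull argument when checking the cast.
if (auto *LD = dyn_cast<LoadInst>(V))
  if (LD->hasMetadata(LLVMContext::MD_nonnull))
    return true; // the addrspacecast can skip the null check
```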
Jay Foad
886f1199f0
[AMDGPU] Use variadic isa<>. NFC. (#137016) 2025-04-24 08:19:09 +01:00
Shoreshen
121cd7c6f0
Reapply #130577: narrow math for `and` operand (#133896)
Re-apply https://github.com/llvm/llvm-project/pull/130577

Which is reverted in https://github.com/llvm/llvm-project/pull/133880

The old application failed under AddressSanitizer because
`tryNarrowMathIfNoOverflow` was called after `I.eraseFromParent();` in
`AMDGPUCodeGenPrepareImpl::visitBinaryOperator`, creating a
use-after-free.

To fix this, `tryNarrowMathIfNoOverflow` is now called first, and the
visitor returns immediately if it succeeds (see the sketch after this
entry).
2025-04-17 17:03:32 +08:00
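A sketch of the reordering (structure illustrative):

```
bool AMDGPUCodeGenPrepareImpl::visitBinaryOperator(BinaryOperator &I) {
  // Run the narrowing first; if it rewrote (and erased) I, return
  // immediately so nothing below touches the freed instruction.
  if (tryNarrowMathIfNoOverflow(&I))
    return true;
  // ... remaining combines, which may themselves erase I ...
  return false;
}
```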
Rahul Joshi
a3754ade63
[NFC][LLVM][AMDGPU] Cleanup pass initialization for AMDGPU (#134410)
- Remove calls to pass initialization from pass constructors.
- https://github.com/llvm/llvm-project/issues/111767
2025-04-07 17:27:50 -07:00
Shoreshen
7f14b2a9eb
Revert "[AMDGPU][CodeGenPrepare] Narrow 64 bit math to 32 bit if profitable" (#133880)
Reverts llvm/llvm-project#130577
2025-04-01 17:37:02 +08:00
Shoreshen
145b4a3950
[AMDGPU][CodeGenPrepare] Narrow 64 bit math to 32 bit if profitable (#130577)
For Add, Sub, and Mul with i64 type, if profitable, then (as sketched after this entry):
1. Trunc the operands to i32 type
2. Apply the 32-bit Add/Sub/Mul
3. Zext the result to i64 type
2025-04-01 11:18:17 +08:00
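A minimal IRBuilder sketch of the three steps, assuming profitability and the absence of overflow were already established:

```
IRBuilder<> Builder(&I);
Type *I32Ty = Builder.getInt32Ty();
// 1. Truncate the i64 operands to i32.
Value *LHS = Builder.CreateTrunc(I.getOperand(0), I32Ty);
Value *RHS = Builder.CreateTrunc(I.getOperand(1), I32Ty);
// 2. Apply the 32-bit add/sub/mul.
Value *Narrow = Builder.CreateBinOp(I.getOpcode(), LHS, RHS);
// 3. Zero-extend back to i64 and replace the original instruction.
Value *Wide = Builder.CreateZExt(Narrow, I.getType());
I.replaceAllUsesWith(Wide);
I.eraseFromParent();
```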
Rahul Joshi
74b7abf154
[IRBuilder] Add new overload for CreateIntrinsic (#131942)
Add a new `CreateIntrinsic` overload with no `Types`, useful for
creating calls to non-overloaded intrinsics that don't need additional
mangling.
2025-03-31 08:10:34 -07:00
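A sketch of the before/after, using the non-overloaded `llvm.assume` with an illustrative condition `Cond`:

```
// Before: an empty Types list even though llvm.assume needs no mangling.
Builder.CreateIntrinsic(Intrinsic::assume, /*Types=*/{}, /*Args=*/{Cond});
// After: the new overload drops the Types parameter entirely.
Builder.CreateIntrinsic(Intrinsic::assume, /*Args=*/{Cond});
```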
Tim Gymnich
049f179606
[Analysis][NFC] Extract KnownFPClass (#133457)
- extract KnownFPClass for future use inside of GISelKnownBits

---------

Co-authored-by: Matt Arsenault <arsenm2@gmail.com>
2025-03-28 18:10:02 +01:00
Jay Foad
44607666b3
[AMDGPU] Simplify conditional expressions. NFC. (#129228)
Simplify `cond ? val : false` to `cond && val` and similar.
2025-03-03 10:40:49 +00:00
choikwa
de12836f28
[AMDGPU] Rework getDivNumBits API (#119768)
The rework involves the following:
- Return unsigned value, the number of div/rem bits actually needed.
- Change from AtLeast(SignBits) to MaxDivBits hint.
- Use MaxDivBits hint for unsigned case.
- Remove unnecessary second early exit.

Mostly NFC changes.
2025-01-09 10:08:13 -05:00
choikwa
8d2e611802
[AMDGPU] Calculate getDivNumBits' AtLeast using bitwidth (#121758)
Previously, shrinkDivRem64 used the fixed value 32 for AtLeast, which
meant that <64-bit divisions were rejected from shrinking, since the
logic depended only on the number of sign bits. E.g. 'sdiv i48 %0, %1'
would return 24 for the number of sign bits if %0 and %1 both had 24
division bits, and so was rejected.
2025-01-07 01:31:09 -05:00
Ramkumar Ramachandra
4a0d53a0b0
PatternMatch: migrate to CmpPredicate (#118534)
With the introduction of CmpPredicate in 51a895a (IR: introduce struct
with CmpInst::Predicate and samesign), PatternMatch is one of the first
key pieces of infrastructure that must be updated to match a CmpInst
respecting samesign information. Implement this change to Cmp-matchers.

This is a preparatory step in migrating the codebase over to
CmpPredicate. Since no functional changes are desired at this stage,
we have chosen not to migrate CmpPredicate::operator==(CmpPredicate)
calls to use CmpPredicate::getMatching(), as that would have visible
impact on tests that are not yet written: instead, we call
CmpPredicate::operator==(Predicate), preserving the old behavior, while
also inserting a few FIXME comments for follow-ups.
2024-12-13 14:18:33 +00:00
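A hedged sketch of matcher usage after this change (spelling per PatternMatch; illustrative):

```
CmpPredicate Pred;
Value *X, *Y;
if (match(V, m_ICmp(Pred, m_Value(X), m_Value(Y)))) {
  // operator==(CmpInst::Predicate) ignores samesign, preserving the
  // old behavior; CmpPredicate::getMatching() is the samesign-aware
  // alternative for later migration.
  if (Pred == CmpInst::ICMP_SLT) {
    // ...
  }
}
```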
choikwa
463e93b95f
Reapply [AMDGPU] prevent shrinking udiv/urem if either operand exceeds signed max (#119325)
This reverts commit 254d206ee2a337cb38ba347c896f7c6a14c7f218.

+Added a fix in ExpandDivRem24 to disqualify if DivNumBits exceeds 24.

Original commit & msg:
ce6e955ac374f2b86cbbb73b2f32174dffd85f25.
Handle the signed and unsigned paths differently in getDivNumBits. Using
computeKnownBits, this rejects shrinking unsigned div/rem if operands
exceed the signed max, since we know NumSignBits will always be 0.
2024-12-12 15:24:34 -05:00
Joseph Huber
254d206ee2 Revert "Reapply "[AMDGPU] prevent shrinking udiv/urem if either operand is in… (#118928)"
This reverts commit 509893b58ff444a6f080946bd368e9bde7668f13.

This broke the libc build again https://lab.llvm.org/buildbot/#/builders/73/builds/9787.
2024-12-09 08:10:49 -06:00
choikwa
509893b58f
Reapply "[AMDGPU] prevent shrinking udiv/urem if either operand is in… (#118928)
… (SignedMax,UnsignedMax] (#116733)"

This reverts commit 905e831f8c8341e53e7e3adc57fd20b8e08eb999.

Handle the signed and unsigned paths differently in getDivNumBits. Using
computeKnownBits, this rejects shrinking unsigned div/rem if operands
exceed the signed max, since we know NumSignBits will always be 0.

Rebased and re-attempted after the first one was reverted due to an
unrelated failure in libc (which I'm told should be fixed by now).
2024-12-06 19:14:39 -05:00
Jay Foad
9ad09b2930
[AMDGPU] Refine AMDGPUCodeGenPrepareImpl class. NFC. (#118461)
Use references instead of pointers for most state, initialize it all in
the constructor, and common up some of the initialization between the
legacy and new pass manager paths.
2024-12-03 15:31:25 +00:00
Jay Foad
3923e0451a
[AMDGPU] Preserve all analyses if nothing changed (#117994) 2024-11-28 14:33:05 +00:00
Joseph Huber
905e831f8c Revert "[AMDGPU] prevent shrinking udiv/urem if either operand is in (SignedMax,UnsignedMax] (#116733)"
This reverts commit b8e1d4dbea8905e48d51a70bf75cb8fababa4a60.

Causes failures on the `libc` test suite https://lab.llvm.org/buildbot/#/builders/73/builds/8871
2024-11-20 18:21:10 -06:00
choikwa
b8e1d4dbea
[AMDGPU] prevent shrinking udiv/urem if either operand is in (SignedMax,UnsignedMax] (#116733)
Do this by using ComputeKnownBits and checking for !isNonNegative and
isUnsigned. This rejects shrinking unsigned div/rem if operands exceed
smax_bitwidth, since we know NumSignBits will always be 0 (see the
sketch after this entry).
2024-11-20 11:22:09 -05:00
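A minimal sketch of the guard, assuming the helper sees the IR operand and the DataLayout (illustrative):

```
// If the sign bit may be set, the unsigned value can lie in
// (SignedMax, UnsignedMax], where the commit notes NumSignBits is 0;
// refuse to shrink in that case.
KnownBits Known = computeKnownBits(Op, DL);
if (!Known.isNonNegative())
  return false; // operand may exceed the signed max; keep it wide
```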
Jay Foad
85c17e4092
[LLVM] Make more use of IRBuilder::CreateIntrinsic. NFC. (#112706)
Convert many instances of:
  Fn = Intrinsic::getOrInsertDeclaration(...);
  CreateCall(Fn, ...)
to the equivalent CreateIntrinsic call.
2024-10-17 16:20:43 +01:00
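A sketch of the conversion pattern, with illustrative arguments:

```
// Before: two steps, declaration lookup plus a plain call.
Function *Fn = Intrinsic::getOrInsertDeclaration(
    M, Intrinsic::fabs, {Builder.getFloatTy()});
Builder.CreateCall(Fn, {X});
// After: one CreateIntrinsic call does both.
Builder.CreateIntrinsic(Intrinsic::fabs, {Builder.getFloatTy()}, {X});
```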
Rahul Joshi
fa789dffb1
[NFC] Rename Intrinsic::getDeclaration to getOrInsertDeclaration (#111752)
Rename the function to reflect its correct behavior and to be consistent
with `Module::getOrInsertFunction`. This is also in preparation of
adding a new `Intrinsic::getDeclaration` that will have behavior similar
to `Module::getFunction` (i.e, just lookup, no creation).
2024-10-11 05:26:03 -07:00
Matt Arsenault
79516ddbee
AMDGPU: Fix assert from wrong address space size assumption (#97267)
This was assuming the source address space was at least as large
as the destination of the cast. I'm not sure why this was casting
to begin with; the assumption seems to be that the source
address space of the root addrspacecast matches the underlying
object, so directly check that.

Fixes #97457
2024-07-02 23:18:25 +02:00
Stephen Tozer
d75f9dd1d2 Revert "[IR][NFC] Update IRBuilder to use InsertPosition (#96497)"
Reverts the above commit, as it updates a common header function and
did not update all callsites:

  https://lab.llvm.org/buildbot/#/builders/29/builds/382

This reverts commit 6481dc57612671ebe77fe9c34214fba94e1b3b27.
2024-06-24 18:00:22 +01:00
Stephen Tozer
6481dc5761
[IR][NFC] Update IRBuilder to use InsertPosition (#96497)
Uses the new InsertPosition class (added in #94226) to simplify some of
the IRBuilder interface, and removes the need to pass a BasicBlock
alongside a BasicBlock::iterator, using the fact that we can now get the
parent basic block from the iterator even if it points to the sentinel.
This patch removes the BasicBlock argument from each constructor or call
to setInsertPoint.

This has no functional effect, but later on as we look to remove the
`Instruction *InsertBefore` argument from instruction-creation
(discussed
[here](https://discourse.llvm.org/t/psa-instruction-constructors-changing-to-iterator-only-insertion/77845)),
this will simplify the process by allowing us to deprecate the
InsertPosition constructor directly and catch all the cases where we use
instructions rather than iterators.
2024-06-24 17:27:43 +01:00
Shilei Tian
0a43ca731b
[AMDGPU] Fix missing IsExact flag when expanding vector binary operator (#86712) 2024-03-27 17:40:58 -04:00
Peter Rong
4a026b5092
[AMDGCN] Use ZExt when handling indices in insertelement (#85718)
When i1 true is used as an index, SExt extends it to i32 -1. This would
cause BitVector to overflow.
The language reference specifies that the index shall be treated as an
unsigned number; this patch fixes that (see the sketch after this entry).
(https://llvm.org/docs/LangRef.html#insertelement-instruction)

This patch fixes #85717

---------

Signed-off-by: Peter Rong <PeterRong96@gmail.com>
2024-03-19 21:44:08 -07:00
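A sketch of the core of the fix (names illustrative): widen the index with zext rather than sext, since the LangRef treats it as unsigned.

```
// An i1 true index sign-extended to i32 becomes -1 and overflows the
// tracking BitVector; zero-extension yields the intended index 1.
Value *Idx32 = Builder.CreateZExtOrTrunc(IdxVal, Builder.getInt32Ty());
```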
Orlando Cazalet-Hyams
3ab1481f9a
[RemoveDIs] Use getFirstNonPHIIt to fix crash #85472 (#85618) 2024-03-18 09:57:22 +00:00
Pierre van Houtryve
756166e342
[AMDGPU] Improve detection of non-null addrspacecast operands (#82311)
Use IR analysis to infer when an addrspacecast operand is nonnull, then
lower it to an intrinsic that the DAG can use to skip the null check.

I did this using an intrinsic as it's non-intrusive. An alternative
would have been to allow something like `!nonnull` on `addrspacecast`
then lower that to a custom opcode (or add an operand to the
addrspacecast MIR/DAG opcodes), but it's a lot of boilerplate for just
one target's use case IMO.

I'm hoping that when we switch to GISel we can move all this logic
to the MIR level without losing info, but currently the DAG doesn't see
enough so we need to act in CGP.

Fixes: SWDEV-316445
2024-03-01 14:01:10 +01:00
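A hedged sketch of the CGP rewrite, assuming the intrinsic is the `llvm.amdgcn.addrspacecast.nonnull` this PR describes (exact spelling and mangling illustrative):

```
// Replace an addrspacecast whose operand is known nonnull with the
// intrinsic, so ISel can skip the null check.
Value *Src = ASC->getPointerOperand();
Value *NewCast = Builder.CreateIntrinsic(
    Intrinsic::amdgcn_addrspacecast_nonnull, // name per the PR
    {ASC->getType(), Src->getType()}, {Src});
ASC->replaceAllUsesWith(NewCast);
ASC->eraseFromParent();
```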
choikwa
e5638c5a00
[AMDGPU] Use correct number of bits needed for div/rem shrinking (#80622)
There was an error where a dividend of type i64 whose actual used number
of bits was 32 fell into the path that assumes only 24 bits are used.
Check that the AtLeast field is used correctly when using
computeNumSignBits, and add the necessary extend/trunc for the 32-bit path.

Regolden and update testcases.

@jrbyrnes @bcahoon @arsenm @rampitec
2024-02-06 21:32:28 +05:30
Yingwei Zheng
930996e9e4
[ValueTracking][NFC] Pass SimplifyQuery to computeKnownFPClass family (#80657)
This patch refactors the interface of the `computeKnownFPClass` family
to pass `SimplifyQuery` directly.
The motivation of this patch is to compute known fpclass with
`DomConditionCache`, which was introduced by
https://github.com/llvm/llvm-project/pull/73662. With
`DomConditionCache`, we can do more optimization with context-sensitive
information.

Example (extracted from
[fmt/format.h](e17bc67547/include/fmt/format.h (L3555-L3566))):
```
define float @test(float %x, i1 %cond) {
  %i32 = bitcast float %x to i32
  %cmp = icmp slt i32 %i32, 0
  br i1 %cmp, label %if.then1, label %if.else

if.then1:
  %fneg = fneg float %x
  br label %if.end

if.else:
  br i1 %cond, label %if.then2, label %if.end

if.then2:
  br label %if.end

if.end:
  %value = phi float [ %fneg, %if.then1 ], [ %x, %if.then2 ], [ %x, %if.else ]
  %ret = call float @llvm.fabs.f32(float %value)
  ret float %ret
}
```
We can prove the signbit of `%value` is always zero. Then the fabs can
be eliminated.
2024-02-06 02:30:12 +08:00
Piotr Sobczak
fac093dd08
[AMDGPU] Update IEEE and DX10_CLAMP for GFX12 (#75030)
Co-authored-by: Stanislav Mekhanoshin <Stanislav.Mekhanoshin@amd.com>
2023-12-13 13:52:40 +01:00
Pierre van Houtryve
8a66510fa7
[AMDGPU] Don't create mulhi_24 in CGP (#72983)
Instead, create a mul24 with a 64 bit result and let ISel take care of
it.

This allows patterns to simply match mul24 even for 64-bit muls instead of having to match both mul/mulhi and a buildvector/bitconvert/etc.
2023-11-30 08:26:45 +01:00
Matt Arsenault
231aa0f212 AMDGPU: Avoid creating vector extracts if we aren't going to do anything
Try to avoid expensive-checks failures caused by reporting no changes
when some dead instructions were introduced.
2023-09-13 09:45:34 +03:00
Matt Arsenault
72a7024add AMDGPU: Correctly lower llvm.sqrt.f32
Make codegen emit correctly rounded sqrt by default.

Emit the fast (but only kind of fast) expansion in AMDGPUCodeGenPrepare
based on !fpmath, like the fdiv case. Hack around visitation-ordering
problems caused by AMDGPUCodeGenPrepare using forward iteration instead
of a well-behaved combiner.

https://reviews.llvm.org/D158129
2023-09-12 23:22:54 +03:00
Matt Arsenault
6012fed6f5 AMDGPU: Fix sqrt fast math flags spreading to fdiv fast math flags
This was working around the lack of operator| on FastMathFlags. We
have that now, which revealed the bug.
2023-08-30 11:53:05 -04:00
Matt Arsenault
a738bdf35e AMDGPU: Permit more rsq formation in AMDGPUCodeGenPrepare
We were basing the decision to defer the fast case to codegen on the
fdiv itself, and not looking for a foldable sqrt input.

https://reviews.llvm.org/D158127
2023-08-23 20:06:50 -04:00
pvanhout
b7503ae8a0 [AMDGPU] Clear BreakPhiNodesCache in-between functions
Otherwise stale pointers pollute the cache, and
when a dead PHI's memory is reused for another PHI, we can get a false-positive hit in the cache.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D157711
2023-08-11 15:23:41 +02:00
pvanhout
62ea799e6c [AMDGPU] Break Large PHIs: Take whole PHI chains into account
Previous heuristics had a big flaw: they only looked at a single PHI at a time, and didn't take into account the whole "chain".
The concept of "chain" is important because if we only break a chain partially, we risk forcing regalloc to reserve twice as many registers for that vector.
We also risk adding a lot of copies that shouldn't be there and can inhibit backend optimizations.

The solution I found is to consider the whole "PHI chain" when looking at PHI.
That is, we recursively look at the PHI's incoming value & users for other PHIs, then make a decision about the chain as a whole.

The current threshold requires that at least `ceil(chain size * (2/3))` PHIs have at least one interesting incoming value.
In simple terms, two-thirds (rounded up) of the PHIs should be breakable.

This seems to work well. A lower threshold such as 50% is too aggressive because chains can often have 7 or 9 PHIs, and breaking 3+ or 4+ PHIs in those cases often causes performance issues.

Fixes SWDEV-409648, SWDEV-398393, SWDEV-413487

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D156414
2023-08-03 16:41:11 +02:00
Kazu Hirata
03612b2c1a [AMDGPU] Fix an unused variable warning
This patch fixes:

  llvm/lib/Target/AMDGPU/AMDGPUCodeGenPrepare.cpp:1006:9: error:
  unused variable 'Ty' [-Werror,-Wunused-variable]
2023-07-21 16:14:41 -07:00
Matt Arsenault
6398b687c5 AMDGPU: Fix variables only used in asserts 2023-07-21 18:55:42 -04:00
Matt Arsenault
8406c3568a AMDGPU: Implement new 2ulp fdiv lowering
Extends the new frexp-scaled reciprocal to the general case. The
reciprocal case is just the same thing when frexp of 1 is constant
folded. Could probably clean up the code to rely on that constant
folding.

Improves results for the IEEE path for the default OpenCL division. We
used to only emit the fdiv.fast intrinsic with a 2.5 ulp accuracy
threshold with DAZ, which uses explicit range checks. This gives us a
better fast option with the default IEEE behavior.
2023-07-21 18:55:42 -04:00
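The arithmetic identity behind the frexp scaling, shown with the libm analogues (a sketch only; the pass presumably emits llvm.frexp/llvm.ldexp plus a hardware reciprocal rather than these calls):

```
#include <cmath>

// a = ma * 2^ea and b = mb * 2^eb with ma, mb in [0.5, 1), so
// a / b = (ma / mb) * 2^(ea - eb): dividing the scaled mantissas keeps
// the operation well conditioned, and ldexp restores the exponent.
float frexp_scaled_fdiv(float a, float b) {
  int ea, eb;
  float ma = std::frexp(a, &ea);
  float mb = std::frexp(b, &eb);
  return std::ldexp(ma / mb, ea - eb);
}
```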