227 Commits

Matt Arsenault
5911fbb39d
AMDGPU: Do not fold copy to physreg from operation on frame index (#115977) 2024-11-12 21:35:51 -08:00
Matt Arsenault
4fb43c47dd
AMDGPU: Fold more scalar operations on frame index to VALU (#115059)
Further extend the workaround for the lack of proper regbankselect
for frame indexes.
2024-11-07 19:02:20 -08:00
Matt Arsenault
aa7941289e
AMDGPU: Fold copy of scalar add of frame index (#115058)
This is a pre-optimization to avoid a regression in a future
commit. Currently we almost always emit a frame index with
a v_mov_b32 and use vector adds for the pointer operations. We
need to consider the users of the frame index (or rather, the
transitive users of derived pointer operations) to know whether
the value will be used in a vector or scalar context. This saves
an sgpr->vgpr copy.

This optimization could be more general for any opcode that's
trivially convertible from a scalar to vector form (although this
is really a workaround for the lack of proper regbankselect).
2024-11-06 09:10:58 -08:00
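A minimal C++ sketch of the user scan this commit describes, assuming the question reduces to "are all transitive users SALU"; the helper name and recursion are illustrative, not the actual patch:

```cpp
// Sketch: report whether every transitive user of a virtual register
// derived from a frame index is scalar (SALU), so the value can stay
// in an SGPR and the sgpr->vgpr copy can be avoided.
static bool allUsesAreScalar(Register Reg, const MachineRegisterInfo &MRI) {
  for (const MachineInstr &UseMI : MRI.use_nodbg_instructions(Reg)) {
    if (UseMI.isCopy()) {
      // Follow copies to their own users.
      Register Dst = UseMI.getOperand(0).getReg();
      if (!Dst.isVirtual() || !allUsesAreScalar(Dst, MRI))
        return false;
      continue;
    }
    if (!SIInstrInfo::isSALU(UseMI)) // any vector user disqualifies
      return false;
  }
  return true;
}
```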
Brox Chen
e8644e3b47
[AMDGPU][True16][MC] VOP2 update instructions with fake16 format (#114436)
Some old "t16" VOP2 instructions are actually in fake16 format. Correct
and update test file
2024-11-05 16:12:49 -05:00
Jay Foad
a156362e93
[AMDGPU] Fix machine verification failure after SIFoldOperandsImpl::tryFoldOMod (#113544)
Fixes #54201
2024-10-29 14:59:37 +00:00
Matt Arsenault
ef91cd3f01
AMDGPU: Handle folding frame indexes into add with immediate (#110738) 2024-10-19 12:33:03 -07:00
Akshat Oke
2adc94cd6c
AMDGPU/NewPM: Port SIFoldOperands to new pass manager (#105801) 2024-08-29 11:34:54 +05:30
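For context, a machine-function pass ported to the new pass manager takes roughly this shape (a minimal sketch; the actual class layout in the patch may differ):

```cpp
#include "llvm/CodeGen/MachinePassManager.h"

using namespace llvm;

class SIFoldOperandsPass : public PassInfoMixin<SIFoldOperandsPass> {
public:
  // New-PM entry point; the legacy wrapper pass keeps its own
  // runOnMachineFunction and both share the implementation.
  PreservedAnalyses run(MachineFunction &MF,
                        MachineFunctionAnalysisManager &MFAM);
};
```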
Brox Chen
ae059a1f9f
[AMDGPU][True16][CodeGen] support v_mov_b16 and v_swap_b16 in true16 format (#102198)
Support v_swap_b16 in true16 format.
Update the TableGen pattern and folding for v_mov_b16.

---------

Co-authored-by: guochen2 <guochen2@amd.com>
2024-08-08 16:52:59 -04:00
Mirko Brkušanin
817cd72645
[AMDGPU] Fix folding clamp into pseudo scalar instructions (#100568)
Clamp is canonically a v_max* instruction with a VGPR dst. Folding clamp
into a pseudo scalar instruction can cause issues due to a change in
regbank. We fix this with a copy.
2024-07-25 18:19:26 +02:00
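The "fix this with a copy" idea, as a hedged sketch (SrcReg, the insertion point, and the surrounding context are illustrative):

```cpp
// Keep the folded clamp's result in the vector bank by routing the
// scalar value through an explicit VGPR copy instead of folding
// across the register-bank change.
Register TmpVGPR = MRI->createVirtualRegister(&AMDGPU::VGPR_32RegClass);
BuildMI(*MI->getParent(), *MI, MI->getDebugLoc(), TII->get(AMDGPU::COPY),
        TmpVGPR)
    .addReg(SrcReg);
```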
Changpeng Fang
25f4bd8872
AMDGPU: Clear kill flags after FoldZeroHighBits (#99582)
After folding, all uses of the result register are going to be replaced
by the operand register. The kill flags on the uses of the result and
operand registers are no longer valid after the replacement, and need to
be cleared.
The one exception: if the kill flag is set for the operand
register, we know the last use of the result register becomes the new
last use of the operand register, and thus we are safe to keep the kill
flags.
2024-07-19 00:46:37 -07:00
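A hedged sketch of the rule above (OpWasKilled is an illustrative flag captured before the replacement):

```cpp
// Replace all uses of the result register with the operand register.
// Kill flags computed for the old code are now stale, so clear them
// unless the operand's kill flag already marked its last use.
MRI->replaceRegWith(ResultReg, OpReg);
if (!OpWasKilled)
  MRI->clearKillFlags(OpReg);
```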
Jay Foad
c7309dadbf
[AMDGPU] Use range-based for loops. NFC. (#99047) 2024-07-17 10:18:03 +01:00
Matt Arsenault
d8b63b680d
AMDGPU: Don't fold clamp/omod modifiers without nofpexcept (#95950) 2024-06-18 23:55:49 +02:00
Shilei Tian
0b50d095bc
[AMDGPU] Don't optimize agpr phis if the operand doesn't have subreg use (#91267)
If the operand doesn't have any subreg use, the optimization could
potentially generate `V_ACCVGPR_READ_B32_e64` with the wrong register
class. The following example demonstrates the issue.

Input MIR:

```
bb.0:
  %0:sgpr_32 = S_MOV_B32 0
  %1:sgpr_128 = REG_SEQUENCE %0:sgpr_32, %subreg.sub0, %0:sgpr_32, %subreg.sub1, %0:sgpr_32, %subreg.sub2, %0:sgpr_32, %subreg.sub3
  %2:vreg_128 = COPY %1:sgpr_128
  %3:areg_128 = COPY %2:vreg_128, implicit $exec

bb.1:
  %4:areg_128 = PHI %3:areg_128, %bb.0, %6:areg_128, %bb.1
  %5:areg_128 = PHI %3:areg_128, %bb.0, %7:areg_128, %bb.1
  ...
```

Output of current implementation:

```
bb.0:
  %0:agpr_32 = V_ACCVGPR_WRITE_B32_e64 0, implicit $exec
  %1:agpr_32 = V_ACCVGPR_WRITE_B32_e64 0, implicit $exec
  %2:agpr_32 = V_ACCVGPR_WRITE_B32_e64 0, implicit $exec
  %3:agpr_32 = V_ACCVGPR_WRITE_B32_e64 0, implicit $exec
  %4:areg_128 = REG_SEQUENCE %0:agpr_32, %subreg.sub0, %1:agpr_32, %subreg.sub1, %2:agpr_32, %subreg.sub2, %3:agpr_32, %subreg.sub3
  %5:vreg_128 = V_ACCVGPR_READ_B32_e64 %4:areg_128, implicit $exec
  %6:areg_128 = COPY %5:vreg_128

bb.1:
  %7:areg_128 = PHI %6:areg_128, %bb.0, %9:areg_128, %bb.1
  %8:areg_128 = PHI %6:areg_128, %bb.0, %10:areg_128, %bb.1
  ...
```

The problem is the generated `V_ACCVGPR_READ_B32_e64` instruction.
Apparently the operand `%4:areg_128` is not valid for this.

In this patch, we don't count the non-subreg uses, because
`V_ACCVGPR_READ_B32_e64` can't handle non-32-bit operands.

Fixes: SWDEV-459556
2024-05-07 16:44:00 -04:00
Martin Wehking
248468097f
Add non-null check before accessing pointer (#83459)
Add a check that RC is not null to ensure that the subsequent access is
safe.

A static analyzer flagged this issue since hasVectorRegisters
potentially dereferences RC.
2024-03-07 10:31:34 +05:30
choikwa
04db60d150
[AMDGPU] Prevent hang in SIFoldOperands by caching uses (#82099)
foldOperands() for REG_SEQUENCE recurses in a way that can trigger an infinite loop,
as the method can modify the operand order, which breaks the range-based
for loop. This patch fixes the issue by caching the uses for processing beforehand,
and then iterating over the cache rather than using the instruction iterator.
2024-02-27 09:13:59 -06:00
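The caching pattern in question, roughly (tryToFoldIntoUse is a hypothetical stand-in for the per-use folding step):

```cpp
// Snapshot the use list before folding: folding can reorder or replace
// operands, which would invalidate a live range-based iteration.
SmallVector<MachineOperand *, 8> UsesToProcess;
for (MachineOperand &Use : MRI->use_nodbg_operands(Reg))
  UsesToProcess.push_back(&Use);
for (MachineOperand *Use : UsesToProcess)
  tryToFoldIntoUse(*Use);
```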
Stanislav Mekhanoshin
39cab1a0a0
[AMDGPU] Add v2bf16 for opsel immediate folding (#82435)
This was previously enabled since v2bf16 was represented by v2f16. As of
now it is NFC since we only have dot instructions which could use it,
but currently folding is guarded by hasDOTOpSelHazard().
2024-02-20 14:55:44 -08:00
Ivan Kosarev
7d19dc50de
[AMDGPU][True16] Support VOP3 source DPP operands. (#80892) 2024-02-08 16:23:00 +00:00
Mirko Brkušanin
7fdf608cef
[AMDGPU] Add GFX12 WMMA and SWMMAC instructions (#77795)
Co-authored-by: Petar Avramovic <Petar.Avramovic@amd.com>
Co-authored-by: Piotr Sobczak <piotr.sobczak@amd.com>
2024-01-24 13:43:07 +01:00
Jay Foad
0a3a0ea591
[AMDGPU] Update uses of new VOP2 pseudos for GFX12 (#78155)
New pseudos were added for instructions that were natively VOP3 on
GFX11: V_ADD_F64_pseudo, V_MUL_F64_pseudo, V_MIN_NUM_F64, V_MAX_NUM_F64,
V_LSHLREV_B64_pseudo

---------

Co-authored-by: Mirko Brkusanin <Mirko.Brkusanin@amd.com>
2024-01-18 13:26:13 +00:00
Nicolai Hähnle
49b492048a
AMDGPU: Fix packed 16-bit inline constants (#76522)
Consistently treat packed 16-bit operands as 32-bit values, because
that's really what they are. The attempt to treat them differently was
ultimately incorrect and led to miscompiles, e.g. when using non-splat
constants such as (1, 0) as operands.

Recognize 32-bit float constants for i/u16 instructions. This is a bit
odd conceptually, but it matches HW behavior and SP3.

Remove isFoldableLiteralV216; there was too much magic in the dependency
between it and its use in SIFoldOperands. Instead, we now simply rely on
checking whether a constant is an inline constant, and trying a bunch of
permutations of the low and high halves. This is more obviously correct
and leads to some new cases where inline constants are used as shown by
tests.

Move the logic for switching packed add vs. sub into SIFoldOperands.
This has two benefits: all logic that optimizes for inline constants in
packed math is now in one place; and it applies to both SelectionDAG and
GISel paths.

Disable the use of opsel with v_dot* instructions on gfx11. They are
documented to ignore opsel on src0 and src1. It may be interesting to
re-enable the use of opsel on src2 as a future optimization.

A similar "proper" fix of what inline constants mean could potentially
be applied to unpacked 16-bit ops. However, it's less clear what the
benefit would be, and there are surely places where we'd have to
carefully audit whether values are properly sign- or zero-extended. It
is best to keep such a change separate.

Fixes: Corruption in FSR 2.0 (latent bug exposed by an LLPC change)
2024-01-04 00:10:15 +01:00
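A self-contained illustration of the "packed operands are 32-bit values" view; packV2I16 is purely illustrative:

```cpp
#include <cstdint>

// View a v2i16 operand as the single 32-bit value it really is.
constexpr uint32_t packV2I16(uint16_t Lo, uint16_t Hi) {
  return uint32_t(Lo) | (uint32_t(Hi) << 16);
}

static_assert(packV2I16(1, 0) == 0x00000001u,
              "non-splat (1, 0) is just the inline constant 1");
static_assert(packV2I16(0, 1) == 0x00010000u,
              "(0, 1) is not inlinable as-is, but swapping the halves "
              "via op_sel recovers the inline constant 1");
```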
Stanislav Mekhanoshin
87d884b5c8
[AMDGPU] Fix folding of v2i16/v2f16 splat imms (#72709)
We can use inline constants with packed 16-bit operands, but these
should use op_sel. Currently a splat of inlinable constants is considered
legal, which is not really true if we fail to fold it with op_sel and
drop the high half. It may be legal as a literal but not as an inline
constant, and then the usual literal checks must be performed.

This patch makes these splat literals illegal but adds additional logic
to the operand folding to keep current folds. This logic is somewhat
heavy though.

This has fixed constant bus violation in the fdot2 test.
2023-11-28 09:07:26 -08:00
Stanislav Mekhanoshin
82d22a1bb4
[AMDGPU] Fixed folding of inline imm into dot w/o opsel (#73589)
A splat packed constant can be folded as an inline immediate, but it
shall use opsel. On gfx940 this code path can be skipped due to a HW bug
workaround, and then it may be folded without opsel, which is a bug. Fixed.
2023-11-28 00:50:41 -08:00
Jay Foad
a4196666ac
[AMDGPU] Revert "Preliminary patch for divergence driven instruction selection. Operands Folding 1." (#71710)
This reverts commit 201f892b3b597f24287ab6a712a286e25a45a7d9.
2023-11-13 13:53:10 +00:00
Jay Foad
cc5c2efe4a
[AMDGPU] Fix and use isSISrcInlinableOperand. NFC. (#72101) 2023-11-13 11:29:56 +00:00
Jay Foad
0448a1c0dc
[AMDGPU] Simplify commuted operand handling. NFCI. (#71965)
SIInstrInfo::commuteInstructionImpl should accept indices to commute in
either order. This simplifies SIFoldOperands::tryAddToFoldList where
OtherIdx, CommuteIdx0 and CommuteIdx1 are no longer needed.
2023-11-10 21:51:52 +00:00
Ivan Kosarev
fab28e0e14 Reapply "[AMDGPU] Introduce real and keep fake True16 instructions."
Reverts 6cb3866b1ce9d835402e414049478cea82427cf1.

Analysis of failures on buildbots with expensive checks enabled showed
that the problem was triggered by changes in another commit,
469b3bfad20550968ac428738eb1f8bb8ce3e96d, and was caused by the bug
addressed in #67245.
2023-09-23 22:07:41 +01:00
Ivan Kosarev
6cb3866b1c Revert "[AMDGPU] Introduce real and keep fake True16 instructions."
This reverts commit 0f864c7b8bc9323293ec3d85f4bd5322f8f61b16 due to
failures on expensive checks.
2023-09-22 15:40:26 +01:00
Ivan Kosarev
0f864c7b8b [AMDGPU] Introduce real and keep fake True16 instructions.
The existing fake True16 instructions using 32-bit VGPRs are supposed to
co-exist with real ones until all the necessary True16 functionality is
implemented and relevant tests are updated.

Reviewed By: arsenm, Joe_Nash

Differential Revision: https://reviews.llvm.org/D156101
2023-09-22 10:57:56 +01:00
Mirko Brkušanin
ecfdc23dd2
[AMDGPU] Select gfx1150 SALU Float instructions (#66885) 2023-09-21 12:22:55 +02:00
Matt Arsenault
8f18cf77e7 AMDGPU: Check for implicit defs before constant folding instruction
Can't delete the constant folded instruction if scc is used.

Fixes #63986

https://reviews.llvm.org/D157504
2023-08-11 10:29:53 -04:00
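The guard described above amounts to something like this hedged sketch (the patch's exact helper calls may differ):

```cpp
// Don't delete a constant-folded S_* instruction whose implicit SCC
// def is still read by a later instruction.
if (MI.modifiesRegister(AMDGPU::SCC, TRI) &&
    !MI.registerDefIsDead(AMDGPU::SCC, TRI))
  return false;
```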
pvanhout
361e9eec51 [AMDGPU] Correctly emit AGPR copies in tryFoldPhiAGPR
- Don't create COPY instructions between PHI nodes.
- Don't create V_ACCVGPR_WRITE with operands that aren't AGPR_32

Solves SWDEV-410408

Reviewed By: #amdgpu, arsenm

Differential Revision: https://reviews.llvm.org/D155080
2023-07-13 08:55:22 +02:00
pvanhout
026fc9e9c4 [AMDGPU] Handle Additional Cases in tryFoldPhiAGPR
Sometimes PHIs have different incoming values, such as:
```
%1:vgpr_256 = COPY %0:agpr_256
%2:vgpr_32 = COPY %1:vgpr_256.sub0
```

Those weren't handled, which could lead to massive performance issues if
break-large-PHIs kicked in and AGPRs were used (MFMA).

Fixes SWDEV-407986

Reviewed By: #amdgpu, arsenm

Differential Revision: https://reviews.llvm.org/D153879
2023-06-29 14:49:18 +02:00
Ivan Kosarev
4e312abdfd [AMDGPU][NFC] Add a getRegBitWidth() helper for TargetRegisterClass operands.
Reviewed By: foad

Differential Revision: https://reviews.llvm.org/D152257
2023-06-07 11:41:11 +01:00
pvanhout
6b971325e9 [AMDGPU] Fold more AGPR copies/PHIs in SIFoldOperands
Generalize `tryFoldLCSSAPhi` into `tryFoldPhiAGPR` which works
on any kind of PHI node (not just LCSSA ones) and attempts to
create AGPR Phis more aggressively.

Also adds a GFX908-only "cleanup" function `tryOptimizeAGPRPhis`
which tries to minimize AGPR-to-AGPR copies on GFX908, which doesn't
have an ACCVGPR MOV instruction (so AGPR-AGPR copies become 2 or 3 instructions
as they need a VGPR temp). This is needed because D143731
plus the new `tryFoldPhiAGPR` may create many more PHIs (one 32xfloat PHI becomes
32 float PHIs), and if each PHI hits the same AGPR (like in `test_mfma_loop_agpr_init`)
they will be lowered to 32 copies from the same AGPR, each becoming 2-3 instructions.
Creating a VGPR cache in this case prevents all those copies from being generated
(we get AGPR-VGPR copies instead, which are trivial).

This is a preparation patch intended to prevent regressions in D143731 when
AGPRs are involved.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D144099
2023-03-28 09:33:12 +02:00
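The "VGPR cache" idea in hedged C++ form (all names hypothetical; the real tryOptimizeAGPRPhis is more involved):

```cpp
// GFX908 has no AGPR->AGPR move, so each such copy expands to 2-3
// instructions through a VGPR temp. Reading each source AGPR into one
// cached VGPR keeps the expansion down to a single read per AGPR.
DenseMap<Register, Register> VGPRCache;

Register readAGPRCached(Register Agpr, MachineBasicBlock &MBB,
                        MachineBasicBlock::iterator InsPt) {
  Register &Cached = VGPRCache[Agpr];
  if (!Cached) {
    Cached = MRI->createVirtualRegister(&AMDGPU::VGPR_32RegClass);
    BuildMI(MBB, InsPt, DebugLoc(),
            TII->get(AMDGPU::V_ACCVGPR_READ_B32_e64), Cached)
        .addReg(Agpr);
  }
  return Cached; // PHIs then copy from the VGPR, which is trivial
}
```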
Jay Foad
e2eee902a4 [AMDGPU] Fix an assertion failure when folding into src2 of V_FMAC_F16
D139469 "[AMDGPU] Enable OMod on more VOP3 instructions" caused an
assertion failure when trying to fold into src2 of V_FMAC_F16. It would
temporarily convert the instruction to V_FMA_F16_gfx9 and add an opsel
operand, but if the fold still failed then it would forget to remove the
opsel operand.

Differential Revision: https://reviews.llvm.org/D144558
2023-02-22 14:26:03 +00:00
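A hedged sketch of the bug shape and its fix (the operand juggling is heavily simplified; foldSucceeded is illustrative):

```cpp
// Speculatively promote V_FMAC_F16 to the VOP3 FMA form so src2 can
// accept the fold; the FMA form carries an extra op_sel operand.
MI.setDesc(TII->get(AMDGPU::V_FMA_F16_gfx9_e64));
MI.addOperand(MachineOperand::CreateImm(0)); // op_sel
if (!foldSucceeded) {
  // The fix: undo *both* speculative changes, not just the opcode.
  MI.removeOperand(MI.getNumOperands() - 1);
  MI.setDesc(TII->get(AMDGPU::V_FMAC_F16_e64));
}
```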
Yashwant Singh
2a832d0f09 [AMDGPU] Add missing physical register check in SIFoldOperands::tryFoldLoad
tryFoldLoad() is not meant to work on physical registers; moreover,
use_nodbg_instructions(Reg) leads to buggy compiler behavior when called
with a physical register.

Fix for SWDEV-373493

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D141895
2023-01-24 14:24:41 +05:30
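The missing guard, sketched (tryFoldLoadIntoUse is a hypothetical stand-in for the per-use step):

```cpp
// use_nodbg_instructions() is only meaningful for virtual registers,
// so bail out on physical registers before walking the use list.
Register DefReg = MI.getOperand(0).getReg();
if (DefReg.isPhysical())
  return false;
for (MachineInstr &UseMI : MRI->use_nodbg_instructions(DefReg))
  tryFoldLoadIntoUse(UseMI);
```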
Jay Foad
073401e59c [MC] Define and use MCInstrDesc implicit_uses and implicit_defs. NFC.
The new methods return a range for easier iteration. Use them everywhere
instead of getImplicitUses, getNumImplicitUses, getImplicitDefs and
getNumImplicitDefs. A future patch will remove the old methods.

In some use cases the new methods are less efficient because they always
have to scan the whole uses/defs array to count its length, but that
will be fixed in a future patch by storing the number of implicit
uses/defs explicitly in MCInstrDesc. At that point there will be no need
to 0-terminate the arrays.

Differential Revision: https://reviews.llvm.org/D142215
2023-01-23 14:44:58 +00:00
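Usage before and after, roughly (Clobbered is an illustrative set):

```cpp
const MCInstrDesc &Desc = MI.getDesc();

// New style: range-based iteration over the implicit defs.
for (MCPhysReg ImpDef : Desc.implicit_defs())
  Clobbered.insert(ImpDef);

// Old style being replaced:
// const MCPhysReg *ImpDefs = Desc.getImplicitDefs();
// for (unsigned I = 0, E = Desc.getNumImplicitDefs(); I != E; ++I)
//   Clobbered.insert(ImpDefs[I]);
```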
Jay Foad
768aed1378 [MC] Make more use of MCInstrDesc::operands. NFC.
Change MCInstrDesc::operands to return an ArrayRef so we can easily use
it everywhere instead of the (IMHO ugly) opInfo_begin and opInfo_end.
A future patch will remove opInfo_begin and opInfo_end.

Also use it instead of raw access to the OpInfo pointer. A future patch
will remove this pointer.

Differential Revision: https://reviews.llvm.org/D142213
2023-01-23 11:31:41 +00:00
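Typical call-site shape after the change (the counting loop is illustrative):

```cpp
// Iterate operand descriptions directly via the returned ArrayRef.
unsigned NumImmOperands = 0;
for (const MCOperandInfo &OpInfo : MI.getDesc().operands())
  if (OpInfo.OperandType == MCOI::OPERAND_IMMEDIATE)
    ++NumImmOperands;
```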
Matt Arsenault
4463badf46 AMDGPU: Use DenormalMode type in FP mode tracking
This simplifies a future patch. The MIR handling should be fixed. We're
still printing these in custom MachineFunctionInfo as bools (plus the
inverted meaning is hard to follow).
2022-12-21 20:35:48 -05:00
Jay Foad
6443c0ee02 [AMDGPU] Stop using make_pair and make_tuple. NFC.
C++17 class template argument deduction allows us to call the pair and
tuple constructors directly instead of the helper functions make_pair
and make_tuple.

Differential Revision: https://reviews.llvm.org/D139828
2022-12-14 13:22:26 +00:00
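What this looks like at a call site, via C++17 class template argument deduction:

```cpp
#include <tuple>
#include <utility>

// The constructors now deduce their template arguments themselves.
auto P = std::pair(1u, -1);        // was: std::make_pair(1u, -1)
auto T = std::tuple(1u, -1, 'x');  // was: std::make_tuple(1u, -1, 'x')
```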
Joe Nash
bbfbec94b1 [AMDGPU] Enable OMod on more VOP3 instructions
OMod was disabled if OpSel was enabled, but that restriction is stricter
than necessary. Any VOP3 instruction with float operands can use OMod.

On GFX11, FMAC_F16_e64 can use op_sel.
Previously, SIFoldOperands and convertToThreeAddress were accidentally correct when
they reinterpreted the zero OMod operand on V_FMAC_F16_e64 as the OpSel operand on
V_FMA_F16_gfx9_e64. Now we explicitly add op_sel if required.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D139469
2022-12-07 13:30:33 -05:00
Jay Foad
38302c60ef [AMDGPU] Stop looking for implicit M0 uses on MOV instructions
Before D114230, indirect moves used regular MOV opcodes and were
identified by having an implicit use of M0. Since D114230 they use
dedicated opcodes instead, so remove some old code that checks for
implicit uses of M0. NFCI.

Differential Revision: https://reviews.llvm.org/D138308
2022-11-18 16:57:55 +00:00
Jay Foad
49762162ea [AMDGPU] Remove isLiteralConstant and isLiteralConstantLike
isLiteralConstant and isLiteralConstantLike were similar to
!isInlineConstant with slight differences like handling isReg operands.

To avoid a profusion of similar functions with undocumented differences,
this patch removes all the isLiteralConstant* variants. Callers are responsible
for handling the isReg case.

Differential Revision: https://reviews.llvm.org/D125759
2022-11-17 16:45:48 +00:00
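A hedged sketch of the caller-side replacement pattern (MO and OpInfo are illustrative locals):

```cpp
// Former isLiteralConstantLike(MO) call sites handle the register case
// explicitly, then use the inline-constant query directly.
bool IsLiteral = !MO.isReg() && !TII->isInlineConstant(MO, OpInfo);
```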
Pierre van Houtryve
7425077e31 [AMDGPU] Add & use hasNamedOperand, NFC
In a lot of places, we were just calling `getNamedOperandIdx` to check whether the result was != or == -1.
This is fine in itself, but it's verbose and doesn't make the intention clear, IMHO. I added a `hasNamedOperand` and replaced all the cases I could find, using regexes plus manual review.

Reviewed By: arsenm, foad

Differential Revision: https://reviews.llvm.org/D137540
2022-11-08 07:57:21 +00:00
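The before/after shape (foldClamp is a hypothetical caller):

```cpp
// Before: the intent hides behind an index comparison.
if (AMDGPU::getNamedOperandIdx(Opc, AMDGPU::OpName::clamp) != -1)
  foldClamp(MI);

// After: the predicate reads as the question being asked.
if (AMDGPU::hasNamedOperand(Opc, AMDGPU::OpName::clamp))
  foldClamp(MI);
```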
Pierre van Houtryve
b5f9972345 [SIFoldOperands] Small code cleanups, NFC.
I've been trying to understand the backend better and decided to read the code of this pass.
While doing so, I noticed parts that could be refactored to be a tiny bit clearer.
I tried to keep the changes minimal, a non-exhaustive list of changes is:
- Stylistic changes to better fit LLVM's coding style
- Removing dead/useless functions (e.g. FoldCandidate had getters, but it's a public struct!)
- Saving regs/opcodes in variables if they're going to be used multiple times in the same condition

Reviewed By: arsenm, foad

Differential Revision: https://reviews.llvm.org/D137539
2022-11-08 07:51:48 +00:00
Pierre van Houtryve
70c781f4b6 [SIFoldOperands] Move isFoldableCopy into a separate helper, NFC.
There was quite a bit of logic that was just sitting in the middle of the core loop. I think it makes it easier to follow when it's split off into a separate helper like the others.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D137538
2022-11-08 07:44:34 +00:00
Joe Nash
b982ba2a6e [AMDGPU][GFX11] Use VGPR_32_Lo128 for VOP1,2,C
Due to the encoding changes in GFX11, we had a hack in place that
disables the use of VGPRs above 128. This patch removes the need for
that hack.

We introduce a new register class VGPR_32_Lo128 which is used for 16-bit
operands of VOP1, VOP2, and VOPC instructions. This register class only has the
low 128 VGPRs, but is otherwise identical to VGPR_32. Therefore, 16-bit VOP1,
VOP2, and VOPC instructions are correctly limited to use the first 128
VGPRs, while the other instructions can freely use all 256.

We introduce new pseudo-instructions used on GFX11 which have the suffix
t16 (True 16) to use the VGPR_32_Lo128 register class.

Reviewed By: foad, rampitec, #amdgpu

Differential Revision: https://reviews.llvm.org/D133723
2022-09-20 09:56:28 -04:00
Matt Arsenault
7834194837 TableGen: Introduce generated getSubRegisterClass function
Currently there isn't a generic way to get a smaller register class
that can be produced from a subregister of a larger class. Replaces a
manually implemented version for AMDGPU. This will be used to improve
subregister support in the allocator.
2022-09-12 09:03:37 -04:00
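Typical usage shape, hedged (assuming the generated hook is reachable through TRI under the name in the commit title):

```cpp
// Ask which register class can hold the SubIdx subregisters of
// registers in RC, e.g. the 32-bit class for sub0 of a 128-bit class.
const TargetRegisterClass *SubRC = TRI->getSubRegisterClass(RC, SubIdx);
if (SubRC)
  MRI->setRegClass(NewVReg, SubRC); // illustrative use
```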
Jay Foad
96dfa523c2 [AMDGPU] Refactor SIFoldOperands. NFC.
Refactor static functions into class methods so they have access to TII, MRI
etc.
2022-09-07 11:05:01 +01:00
Kazu Hirata
9861a68a7c [Target] Qualify auto in range-based for loops (NFC) 2022-08-28 10:41:50 -07:00