There's little point in trying to commute an instruction if the
two operands are already the same.
This avoids an assertion in a future patch, but this likely isn't the
correct fix. The worklist management in SIFoldOperands is dodgy, and
we should probably fix it to work like PeepholeOpt (i.e. stop looking
at use lists, and fold from users). This extends the already handled
special case where we avoid folding an instruction that is already
being folded.
There were two parallel fold-checking mechanisms, so consistently use the
fold list. The worklist management here is still not good. Other types
of folds are not using it, and we should probably rewrite the pass to
look more like peephole-opt.
This should be an alternative fix to skipping commute if the operands
are the same (#127562). The new test is not broken as-is, but
demonstrates failures in a future patch.
The previous PR https://github.com/llvm/llvm-project/pull/122950 was
reverted because it hit a buildbot failure. Another patch was merged
while that PR was under review, which left one test out of date. Reopen
this PR with the issue fixed.
- Change InstrInfoEmitter to emit OpName as an enum class
instead of an anonymous enum in the OpName namespace.
- This will help clearly distinguish values that are
OpNames from plain operand indices and should help avoid
bugs caused by confusing the two.
- Rename OpName::OPERAND_LAST to NUM_OPERAND_NAMES.
- Emit declaration of getOperandIdx() along with the OpName
enum so it doesn't have to be repeated in various headers.
- Also update the AMDGPU, RISCV, and WebAssembly backends
to conform to the new definition of OpName (mostly
mechanical changes).
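A minimal sketch of the shape this generates; the operand names are illustrative and the emitter's real output differs in details:
```
#include <cstdint>

namespace llvm::AMDGPU {

enum class OpName : unsigned {
  src0, // illustrative operand names
  src1,
  vdst,
  NUM_OPERAND_NAMES, // was OpName::OPERAND_LAST
};

// Declared next to the enum, so individual target headers no longer
// have to repeat it.
int16_t getOperandIdx(uint16_t Opcode, OpName Name);

} // namespace llvm::AMDGPU
```
Because it is an enum class, an OpName no longer implicitly converts to an integer, so passing one where a plain operand index is expected fails to compile.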
In SIFoldOperands, leave copies for moving between agpr and vgpr
registers. The register coalescer is able to handle the copies
more efficiently than v_accvgpr_mov, v_accvgpr_write, and
v_accvgpr_read. Otherwise, the compiler generates unnecessary
instructions such as v_accvgpr_mov a0, a0.
Support true16 format for v_fma_f16 in MC.
Since we are replacing v_fma_f16 with v_fma_f16_t16/v_fma_f16_fake16
post-GFX11, the CodeGen pattern for v_fma_f16_fake16 has to be updated
to keep the CodeGen tests passing. No pattern is modified or created;
v_fma_f16 is just replaced with the fake16 format.
This is a pre-optimization to avoid a regression in a future
commit. Currently we almost always materialize a frame index with
a v_mov_b32 and use vector adds for the pointer operations. We
need to consider the users of the frame index (or rather, the
transitive users of derived pointer operations) to know whether
the value will be used in a vector or scalar context. This saves
an sgpr->vgpr copy.
This optimization could be generalized to any opcode that is
trivially convertible from a scalar to a vector form (although this
is really a workaround for the lack of a proper regbankselect). A
sketch of the idea follows.
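A minimal sketch, not the pass's actual code; the helper name is illustrative and, unlike the real heuristic, it only walks direct users:
```
#include "SIInstrInfo.h"
#include "llvm/CodeGen/MachineRegisterInfo.h"
using namespace llvm;

// Decide whether the materialized frame index will end up in a VGPR.
// The real heuristic would also walk transitive users of derived
// pointer operations.
static bool isUsedInVectorContext(Register FIReg,
                                  const MachineRegisterInfo &MRI) {
  for (const MachineInstr &UseMI : MRI.use_nodbg_instructions(FIReg))
    if (SIInstrInfo::isVALU(UseMI)) // a VALU user forces a VGPR
      return true;
  return false;
}
```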
Clamp is canonically a v_max* instruction with a VGPR dst. Folding clamp
into a pseudo scalar instruction can cause issues due to a change in
regbank. We fix this with a copy.
After folding, all uses of the result register are going to be replaced
by the operand register. The kill flags on the uses of the result and
operand registers are no longer valid after the replacement, and need to
be cleared.
The only exception: if the kill flag is set on the operand register,
we know the last use of the result register becomes the new last use
of the operand register, and thus it is safe to keep the kill flags.
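A minimal sketch of that rule, assuming all uses of the result register have already been rewritten; names are illustrative, not the pass's exact code:
```
#include "llvm/CodeGen/MachineRegisterInfo.h"
using namespace llvm;

static void updateKillFlags(MachineRegisterInfo &MRI, Register OpReg,
                            bool OpWasKilled) {
  // All uses of the result register now reference OpReg. Unless the
  // folded operand itself carried a kill flag, the inherited kill
  // flags are stale and must be dropped.
  if (!OpWasKilled)
    MRI.clearKillFlags(OpReg);
  // Otherwise the old last use of the result register is the new last
  // use of OpReg, and the flags remain valid.
}
```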
Add a check that RC is not null to ensure that the subsequent access
is safe.
A static analyzer flagged this issue since hasVectorRegisters
potentially dereferences RC.
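A sketch of the guarded access; the surrounding context (TRI, MRI, Reg) and the exact lookup are assumptions based on the commit text:
```
// RC may legitimately be null here, e.g. for a register without an
// assigned class.
const TargetRegisterClass *RC = TRI->getRegClassForReg(*MRI, Reg);
if (RC && TRI->hasVectorRegisters(RC)) {
  // Safe: RC is null-checked before hasVectorRegisters dereferences it.
}
```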
foldOperands() for REG_SEQUENCE recurses in a way that can trigger an
infinite loop, as the method can modify the operand order, which
invalidates the range-based for loop. This patch fixes the issue by
caching the uses to process beforehand and then iterating over the
cache rather than using the instruction iterator.
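A condensed sketch of the fix, assuming the pass's surrounding state (MRI, OpToFold, FoldList, CopiesToReplace); not the exact code:
```
// Snapshot the use operands first, so a fold that reorders operands
// can no longer invalidate the iteration.
SmallVector<MachineOperand *, 4> UsesToProcess;
for (MachineOperand &Use : MRI->use_nodbg_operands(Reg))
  UsesToProcess.push_back(&Use);
for (MachineOperand *Use : UsesToProcess) {
  MachineInstr *UseMI = Use->getParent();
  foldOperand(OpToFold, UseMI, UseMI->getOperandNo(Use), FoldList,
              CopiesToReplace);
}
```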
This was previously enabled since v2bf16 was represented by v2f16. As of
now it is NFC since we only have dot instructions which could use it,
but currently folding is guarded by hasDOTOpSelHazard().
New pseudos were added for instructions that were natively VOP3 on
GFX11: V_ADD_F64_pseudo, V_MUL_F64_pseudo, V_MIN_NUM_F64, V_MAX_NUM_F64,
V_LSHLREV_B64_pseudo
---------
Co-authored-by: Mirko Brkusanin <Mirko.Brkusanin@amd.com>
Consistently treat packed 16-bit operands as 32-bit values, because
that's really what they are. The attempt to treat them differently was
ultimately incorrect and led to miscompiles, e.g. when using non-splat
constants such as (1, 0) as operands.
Recognize 32-bit float constants for i/u16 instructions. This is a bit
odd conceptually, but it matches HW behavior and SP3.
Remove isFoldableLiteralV216; there was too much magic in the dependency
between it and its use in SIFoldOperands. Instead, we now simply rely on
checking whether a constant is an inline constant, and trying a bunch of
permutations of the low and high halves. This is more obviously correct
and leads to some new cases where inline constants are used as shown by
tests.
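A hedged sketch of the permutation check; variable names are illustrative, and the real code also fixes up op_sel for whichever permutation folds:
```
uint32_t Lo = Imm & 0xffff;
uint32_t Hi = Imm >> 16;
const uint32_t Perms[] = {
    Imm,             // unchanged
    (Lo << 16) | Hi, // halves swapped
    (Lo << 16) | Lo, // low half splatted
    (Hi << 16) | Hi, // high half splatted
};
for (uint32_t P : Perms) {
  if (TII->isInlineConstant(APInt(32, P))) {
    // P is encodable inline; fold it and adjust op_sel to compensate.
    break;
  }
}
```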
Move the logic for switching packed add vs. sub into SIFoldOperands.
This has two benefits: all logic that optimizes for inline constants in
packed math is now in one place; and it applies to both SelectionDAG and
GISel paths.
Disable the use of opsel with v_dot* instructions on gfx11. They are
documented to ignore opsel on src0 and src1. It may be interesting to
re-enable the use of opsel on src2 as a future optimization.
A similar "proper" fix of what inline constants mean could potentially
be applied to unpacked 16-bit ops. However, it's less clear what the
benefit would be, and there are surely places where we'd have to
carefully audit whether values are properly sign- or zero-extended. It
is best to keep such a change separate.
Fixes: Corruption in FSR 2.0 (latent bug exposed by an LLPC change)
We can use inline constants with packed 16-bit operands, but these
should use op_sel. Currently a splat of an inlinable constant is
considered legal, which is not really true if we fail to fold it with
op_sel and drop the high half. It may be legal as a literal but not as
an inline constant, and then the usual literal checks must be performed.
This patch makes these splat literals illegal but adds additional logic
to the operand folding to keep the current folds. That logic is somewhat
heavy, though.
This fixed a constant bus violation in the fdot2 test.
A splat packed constant can be folded as an inline immediate but it
shall use opsel. On gfx940 this code path can be skipped due to a HW
bug workaround, and then it may be folded without opsel, which is a
bug. Fixed.
SIInstrInfo::commuteInstructionImpl should accept indices to commute in
either order. This simplifies SIFoldOperands::tryAddToFoldList where
OtherIdx, CommuteIdx0 and CommuteIdx1 are no longer needed.
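A sketch of what callers can now rely on, via the public TargetInstrInfo wrapper around commuteInstructionImpl (the two calls below are alternatives, not a sequence):
```
// Either index order now works; previously a fixed order was required.
TII->commuteInstruction(MI, /*NewMI=*/false, Idx0, Idx1);
TII->commuteInstruction(MI, /*NewMI=*/false, Idx1, Idx0);
```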
Reverts 6cb3866b1ce9d835402e414049478cea82427cf1.
Analysis of failures on buildbots with expensive checks enabled showed
that the problem was triggered by changes in another commit,
469b3bfad20550968ac428738eb1f8bb8ce3e96d, and was caused by the bug
addressed in #67245.
The existing fake True16 instructions using 32-bit VGPRs are supposed to
co-exist with real ones until all the necessary True16 functionality is
implemented and relevant tests are updated.
Reviewed By: arsenm, Joe_Nash
Differential Revision: https://reviews.llvm.org/D156101
Sometimes PHIs have different incoming values, such as:
```
%1:vgpr_256 = COPY %0:agpr_256
%2:vgpr_32 = COPY %1:vgpr_256.sub0
```
Those weren't handled, which could lead to massive performance issues
if break-large-PHIs kicked in and AGPRs were used (MFMA).
Fixes SWDEV-407986
Reviewed By: #amdgpu, arsenm
Differential Revision: https://reviews.llvm.org/D153879
Generalize `tryFoldLCSSAPhi` into `tryFoldPhiAGPR` which works
on any kind of PHI node (not just LCSSA ones) and attempts to
create AGPR Phis more aggressively.
Also adds a GFX908-only "cleanup" function `tryOptimizeAGPRPhis`
which tries to minimize AGPR-to-AGPR copies on GFX908. That target has
no ACCVGPR MOV instruction, so AGPR-AGPR copies become 2 or 3
instructions, as they need a VGPR temp. This is needed because D143731
plus the new `tryFoldPhiAGPR` may create many more PHIs (one 32xfloat
PHI becomes 32 float PHIs), and if each PHI reads the same AGPR (like
in `test_mfma_loop_agpr_init`) they will be lowered to 32 copies from
the same AGPR, each of which becomes 2-3 instructions. Creating a VGPR
cache in this case prevents all those copies from being generated; we
get trivial AGPR-VGPR copies instead.
This is a preparation patch intended to prevent regressions in D143731
when AGPRs are involved.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D144099
D139469 "[AMDGPU] Enable OMod on more VOP3 instructions" caused an
assertion failure when trying to fold into src2 of V_FMAC_F16. It would
temporarily convert the instruction to V_FMA_F16_gfx9 and add an opsel
operand, but if the fold still failed then it would forget to remove the
opsel operand.
Differential Revision: https://reviews.llvm.org/D144558
tryFoldLoad() is not meant to work on physical registers; moreover,
use_nodbg_instructions(Reg) misbehaves when called with a physical
register.
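A sketch of the guard this implies (illustrative, not the exact patch):
```
// Bail out early so use_nodbg_instructions() is never queried with a
// physical register.
if (!Reg.isVirtual())
  return false;
```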
Fix for SWDEV-373493
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D141895
The new methods return a range for easier iteration. Use them everywhere
instead of getImplicitUses, getNumImplicitUses, getImplicitDefs and
getNumImplicitDefs. A future patch will remove the old methods.
In some use cases the new methods are less efficient because they always
have to scan the whole uses/defs array to count its length, but that
will be fixed in a future patch by storing the number of implicit
uses/defs explicitly in MCInstrDesc. At that point there will be no need
to 0-terminate the arrays.
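A sketch of the new range-based iteration, where Desc is an MCInstrDesc and the callbacks are illustrative:
```
for (MCPhysReg Reg : Desc.implicit_uses())
  markUsed(Reg);    // illustrative callback
for (MCPhysReg Reg : Desc.implicit_defs())
  markDefined(Reg); // illustrative callback
```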
Differential Revision: https://reviews.llvm.org/D142215
Change MCInstrDesc::operands to return an ArrayRef so we can easily use
it everywhere instead of the (IMHO ugly) opInfo_begin and opInfo_end.
A future patch will remove opInfo_begin and opInfo_end.
Also use it instead of raw access to the OpInfo pointer. A future patch
will remove this pointer.
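A sketch of the resulting usage, where Desc is an MCInstrDesc and the counter is illustrative:
```
unsigned NumImmOperands = 0;
for (const MCOperandInfo &OpInfo : Desc.operands())
  if (OpInfo.OperandType == MCOI::OPERAND_IMMEDIATE)
    ++NumImmOperands;
```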
Differential Revision: https://reviews.llvm.org/D142213