2088 Commits

Florian Hahn
b6476e5549
[LV] Retain branch in middle block when scalar epilogue is needed (NFC)
splitBlock will create an unconditional branch between the middle block
and the scalar preheader. Instead of creating and replacing the same branch
again when a scalar epilogue is needed, simply add an early exit.

As suggested by @ayalz in
https://github.com/llvm/llvm-project/pull/92651 to clarify the existing
code.
2024-06-17 17:14:56 +01:00
Florian Hahn
9f69e116a5
[VPlan] Use VPTransformState::UF in vectorizeInterleaveGroup (NFCI).
Bring the implementation of vectorizeInterleaveGroup in line with other
recipes' ::execute by using VPTransformState::UF.
2024-06-16 21:03:47 +01:00
Arthur Eubanks
6f538f6a2d Revert "Recommit "[VPlan] First step towards VPlan cost modeling. (#92555)""
This reverts commit 90fd99c0795711e1cf762a02b29b0a702f86a264.
This reverts commit 43e6f46936e177e47de6627a74b047ba27561b44.

Causes crashes, see comments on https://github.com/llvm/llvm-project/pull/92555.
2024-06-14 17:47:08 +00:00
Florian Hahn
43e6f46936
[VPlan] Pre-compute cost for all instrs only feeding exit conditions.
This fixes the following buildbot failures after 90fd99c07957:
    https://lab.llvm.org/buildbot/#/builders/17/builds/47
    https://lab.llvm.org/buildbot/#/builders/168/builds/37
2024-06-14 15:12:25 +01:00
Florian Hahn
90fd99c079
Recommit "[VPlan] First step towards VPlan cost modeling. (#92555)"
This reverts commit 46080abe9b136821eda2a1a27d8a13ceac349f8c.

Extra tests have been added in 52d29eb287.

Original message:
This adds a new interface to compute the cost of recipes, VPBasicBlocks,
VPRegionBlocks and VPlan, initially falling back to the legacy cost model
for all recipes. Follow-up patches will gradually migrate recipes to
compute their own costs step-by-step.

It also adds a getBestPlan function to LVP which computes the cost of all
VPlans and picks the most profitable one together with the most
profitable VF.

The VPlan selected by the VPlan cost model is executed and there is an
assert to catch cases where the VPlan cost model and the legacy cost
model disagree. Even though I checked a number of different build
configurations on AArch64 and X86, there may be some differences
that have been missed.

Additional discussions and context can be found in @arcbbb's
https://github.com/llvm/llvm-project/pull/67647 and
https://github.com/llvm/llvm-project/pull/67934 which is an earlier
version of the current PR.

PR: https://github.com/llvm/llvm-project/pull/92555
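For illustration, a minimal standalone sketch of the selection step described above (compute a cost for every plan/VF candidate and keep the cheapest), using hypothetical types rather than the actual LVP/VPlan classes:

```cpp
#include <cstdint>
#include <limits>
#include <vector>

// Hypothetical stand-in for a (VPlan, VF) candidate and its estimated cost.
struct PlanCandidate {
  int PlanId;
  unsigned VF;
  uint64_t Cost;
};

// Pick the most profitable candidate, i.e. the one with the lowest cost.
static PlanCandidate
getBestCandidate(const std::vector<PlanCandidate> &Candidates) {
  PlanCandidate Best{-1, 1, std::numeric_limits<uint64_t>::max()};
  for (const PlanCandidate &C : Candidates)
    if (C.Cost < Best.Cost)
      Best = C;
  return Best;
}
```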
2024-06-14 12:33:48 +01:00
Arthur Eubanks
46080abe9b Revert "[VPlan] First step towards VPlan cost modeling. (#92555)"
This reverts commit 00798354c553d48d27006a2b06a904bd6013e31b.

Causes crashes, see comments on https://github.com/llvm/llvm-project/pull/92555.
2024-06-13 16:37:21 +00:00
Florian Hahn
00798354c5
[VPlan] First step towards VPlan cost modeling. (#92555)
This adds a new interface to compute the cost of recipes, VPBasicBlocks,
VPRegionBlocks and VPlan, initially falling back to the legacy cost model
for all recipes. Follow-up patches will gradually migrate recipes to 
compute their own costs step-by-step.

It also adds a getBestPlan function to LVP which computes the cost of all
VPlans and picks the most profitable one together with the most
profitable VF.

The VPlan selected by the VPlan cost model is executed and there is an
assert to catch cases where the VPlan cost model and the legacy cost
model disagree. Even though I checked a number of different build
configurations on AArch64 and X86, there may be some differences
that have been missed.

Additional discussions and context can be found in @arcbbb's
https://github.com/llvm/llvm-project/pull/67647 and 
https://github.com/llvm/llvm-project/pull/67934 which is an earlier
version of the current PR.


PR: https://github.com/llvm/llvm-project/pull/92555
2024-06-13 14:26:18 +01:00
Paul Kirth
294f3ce5dd
Reapply "[llvm][IR] Extend BranchWeightMetadata to track provenance o… (#95281)
…f weights" #95136

Reverts #95060, and relands #86609, with the unintended code generation
changes addressed.

This patch implements the changes to LLVM IR discussed in
https://discourse.llvm.org/t/rfc-update-branch-weights-metadata-to-allow-tracking-branch-weight-origins/75032

In this patch, we add an optional field to MD_prof metadata nodes for
branch weights, which can be used to distinguish weights added from
llvm.expect* intrinsics from those added via other methods, e.g. from
profiles or inserted by the compiler.

One of the major motivations is for use with MisExpect diagnostics,
which need to know if branch_weight metadata originates from an
llvm.expect intrinsic. Without that information, we end up checking
branch weights multiple times in the case of ThinLTO + SampleProfiling,
leading to some inaccuracy in how we report MisExpect-related
diagnostics to users.

Since we change the format of MD_prof metadata in a fundamental way, we
need to update code handling branch weights in a number of places.

We also update the lang ref for branch weights to reflect the change.
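For readers unfamiliar with the metadata shape, here is a hedged sketch of building such a node; the extra "expected" string operand follows the linked RFC and is an assumption here, not a quote of the final LangRef wording:

```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
#include "llvm/IR/Type.h"
#include <cstdint>
using namespace llvm;

// Assumed shape (per the RFC): !{!"branch_weights", !"expected", i32 W0, i32 W1}
static MDNode *makeExpectedBranchWeights(LLVMContext &Ctx, uint32_t TrueW,
                                         uint32_t FalseW) {
  Type *Int32Ty = Type::getInt32Ty(Ctx);
  Metadata *Ops[] = {
      MDString::get(Ctx, "branch_weights"),
      MDString::get(Ctx, "expected"), // provenance: weights came from llvm.expect*
      ConstantAsMetadata::get(ConstantInt::get(Int32Ty, TrueW)),
      ConstantAsMetadata::get(ConstantInt::get(Int32Ty, FalseW))};
  return MDNode::get(Ctx, Ops);
}
```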
2024-06-12 12:52:28 -07:00
Florian Hahn
c46a6e6c92
[LV] Remove unnecessary getRuntimeVF call when computing vector TC.
As Step is VF * UF, there is no need to compute it again, which may
require multiple instructions for scalable VFs.
2024-06-12 14:35:37 +01:00
Paul Kirth
607afa0b63
Revert "[llvm][IR] Extend BranchWeightMetadata to track provenance of weights" (#95060)
Reverts llvm/llvm-project#86609

This change causes compile-time regressions for stage2 builds
(https://llvm-compile-time-tracker.com/compare.php?from=3254f31a66263ea9647c9547f1531c3123444fcd&to=c5978f1eb5eeca8610b9dfce1fcbf1f473911cd8&stat=instructions:u).
It also introduced unintended changes to `.text` which should be
addressed before relanding.
2024-06-11 08:06:06 +02:00
Paul Kirth
c5978f1eb5
[llvm][IR] Extend BranchWeightMetadata to track provenance of weights (#86609)
This patch implements the changes to LLVM IR discussed in

https://discourse.llvm.org/t/rfc-update-branch-weights-metadata-to-allow-tracking-branch-weight-origins/75032

In this patch, we add an optional field to MD_prof metadata nodes for
branch weights, which can be used to distinguish weights added from
`llvm.expect*` intrinsics from those added via other methods, e.g.
from profiles or inserted by the compiler.

One of the major motivations is for use with MisExpect diagnostics,
which need to know if branch_weight metadata originates from an
llvm.expect intrinsic. Without that information, we end up checking
branch weights multiple times in the case of ThinLTO + SampleProfiling,
leading to some inaccuracy in how we report MisExpect-related
diagnostics to users.

Since we change the format of MD_prof metadata in a fundamental way, we
need to update code handling branch weights in a number of places.

We also update the lang ref for branch weights to reflect the change.
2024-06-10 11:27:21 -07:00
Florian Hahn
05e1b5340b
[VPlan] Model FOR resume value extraction in VPlan. (#93396)
This patch uses the ExtractFromEnd VPInstruction opcode
to extract the value of a FOR to be used as the resume value for the phi in
the scalar loop.

It adds a new live-out that temporarily wraps the FOR phi in the scalar
loop. fixFixedOrderRecurrence will process live outs for fixed order
recurrence phis by creating a new phi node in the scalar preheader, 
using the generated value for the live-out as incoming value from the
middle block and the original start value as incoming value for the
other edge. Creation of the phi in the preheader, as well as updating
the phi in the scalar loop, will also be moved to VPlan in the future,
eventually retiring fixFixedOrderRecurrence.

Depends on https://github.com/llvm/llvm-project/pull/93395

PR: https://github.com/llvm/llvm-project/pull/93396
2024-06-05 11:18:06 +01:00
Florian Hahn
07b330132c
[VPlan] Model FOR extract of exit value in VPlan. (#93395)
This patch introduces a new ExtractFromEnd VPInstruction opcode to
extract the value of a FOR for users outside the loop (i.e. in the
scalar loop's exits). This moves the first part of fixing first order
recurrences to VPlan, and removes some additional code to patch up
live-outs, which is now handled automatically.

The majority of the test changes is due to changes in the order in which the
extracts are now generated. As we are now using VPTransformState to
generate the extracts, we may be able to re-use existing extracts in the
loop body in some cases. For scalable vectors, in some cases we now have
to compute the runtime VF twice, as each extract is now independent, but
those should be trivial to clean up for later passes (and in line with
other places in the code that also liberally re-compute runtime VFs).

PR: https://github.com/llvm/llvm-project/pull/93395
2024-06-03 20:20:30 +01:00
Florian Hahn
f7e63e8b46
[LV] Operands feeding pointers of interleave member pointers are free.
For interleave groups, we only create a pointer for the start of the
interleave group, not for all original loads/stores. Mark single-use ops
feeding interleave-group memory ops as free when vectorizing.
2024-06-01 13:59:29 +01:00
Florian Hahn
654cd94629
[VPlan] Unconditionally run optimizeForVFAndUF.
Now that the VPlan for the main vector loop gets cloned in the epilogue
vectorization code path, optimizeForVFAndUF can be applied
unconditionally.
2024-05-31 06:32:49 -07:00
Florian Hahn
5785048321
[VPlan] Add VPIRBasicBlock, use to model pre-preheader. (#93398)
This patch adds a new special type of VPBasicBlock that wraps an
existing IR basic block. Recipes of the block get added before the
terminator of the wrapped IR basic block. Making it a subclass of
VPBasicBlock avoids duplicating various APIs to manage recipes in a
block, and makes sure the traversals filtering VPBasicBlocks
automatically apply as well.

Initially, VPIRBasicBlocks are only used for the pre-preheader (wrapping
the original preheader of the scalar loop).

As follow-up, this will be used to move more parts of the skeleton
inside VPlan, starting with the branch and condition in the middle
block.

Separated out of https://github.com/llvm/llvm-project/pull/92651

PR: https://github.com/llvm/llvm-project/pull/93398
2024-05-30 11:23:32 -07:00
Ramkumar Ramachandra
43100766f2
LV: generalize profitability criterion over TC (#93300)
Generalize LoopVectorizationPlanner::isMoreProfitable smoothly across
the fixed-vector and scalable-vector cases, taking the trip-count into
account, and fixing logical pitfalls that arise from a lack of
generality.
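As a rough illustration of the kind of criterion described here (a hypothetical helper, not the actual isMoreProfitable code): compare estimated whole-loop cost when the trip count is known, and cost per lane via cross-multiplication otherwise.

```cpp
#include <cstdint>

struct Choice {
  uint64_t CostPerIteration; // estimated cost of one vector iteration
  uint64_t VF;               // number of lanes covered per iteration
};

static bool isMoreProfitable(const Choice &A, const Choice &B,
                             uint64_t TripCount /* 0 if unknown */) {
  if (TripCount) {
    // Whole-loop estimate: iterations needed times per-iteration cost.
    uint64_t ItersA = (TripCount + A.VF - 1) / A.VF;
    uint64_t ItersB = (TripCount + B.VF - 1) / B.VF;
    return A.CostPerIteration * ItersA < B.CostPerIteration * ItersB;
  }
  // Unknown trip count: compare cost per lane, CostA/VFA < CostB/VFB,
  // written as a cross-multiplication to stay in integer arithmetic.
  return A.CostPerIteration * B.VF < B.CostPerIteration * A.VF;
}
```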
2024-05-30 10:54:32 +01:00
Florian Hahn
8b037862b6
[VPlan] Preserve DT (and SCEV) in VPlan-native path (#93287)
As a follow-up to b2f65e80, use the DTU to also update and preserve
the DT in the native path. This should also allow preserving SCEV in the
native path.

PR: https://github.com/llvm/llvm-project/pull/93287
2024-05-27 17:03:53 -07:00
Florian Hahn
83646590af
[VPlan] Remove unused Range arg from createWidenInductionRecipe (NFC).
The Range argument is not used by createWidenInductionRecipe; induction
classification applies across the whole range of VFs. Remove the
argument.
2024-05-25 09:11:36 +01:00
Shih-Po Hung
0338c55ea5
[LV, VPlan] Check if plan is compatible with the EVL transform (#92092)
The transform updates all users of inductions to work based on EVL, instead
of the VF directly. At the moment, widened inductions cannot be updated, so
bail out if the plan contains any.
This patch introduces a check before applying the EVL transform: if any
recipes in the loop rely on RuntimeVF, the plan is discarded.
2024-05-25 08:22:49 +08:00
Ramkumar Ramachandra
bb0d29a72d
[LV] fix logical error in trunc cost (#91136)
In LoopVectorizationCostModel::getInstructionCost(), when the condition
canTruncateToMinimalBitwidth() is satisfied, for a trunc, the source
type is computed as the smallest of the source and destination vector
types, and the destination type is computed as the largest of the
instruction type and the destination type. This is clearly a logical
error, as the original source vector type could be smaller than the
original destination vector type, and the trunc semantics are broken
because we're attempting to widen.

Fixes #47665.
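A standalone sketch of the invariant behind the fix (hypothetical helper names, not the actual cost-model code): when clamping a trunc's types to a minimal bitwidth for costing, the source must stay at least as wide as the destination, otherwise the trunc would effectively be costed as a widening cast.

```cpp
#include <algorithm>
#include <cassert>

struct CastWidths {
  unsigned SrcBits;
  unsigned DstBits;
};

// Clamp both widths to the minimal bitwidth; since SrcBits >= DstBits holds
// for a trunc, the clamped widths keep that property as well.
static CastWidths clampTruncForCosting(unsigned SrcBits, unsigned DstBits,
                                       unsigned MinBits) {
  assert(SrcBits >= DstBits && "a trunc narrows, it never widens");
  CastWidths Clamped{std::min(SrcBits, MinBits), std::min(DstBits, MinBits)};
  assert(Clamped.SrcBits >= Clamped.DstBits && "clamping must not flip the cast");
  return Clamped;
}
```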
2024-05-24 18:01:58 +01:00
Florian Hahn
b2f65e809e
[VPlan] Use DomTreeUpdater to automatically update DT for vector loop. (#92525)
Use DTU to queue DominatorTree updates directly when connecting basic
blocks during VPlan execution.

This simplifies DT updates and will also automatically allow updating
the DT for the VPlan-native path as an additional benefit in a follow-up.

PR: https://github.com/llvm/llvm-project/pull/92525
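For context, a minimal DomTreeUpdater usage sketch (the basic blocks and the wrapper function are hypothetical; the DTU calls themselves are the standard LLVM API):

```cpp
#include "llvm/Analysis/DomTreeUpdater.h"
#include "llvm/IR/Dominators.h"
using namespace llvm;

static void connectBlocks(DominatorTree &DT, BasicBlock *Pred,
                          BasicBlock *NewSucc, BasicBlock *OldSucc) {
  DomTreeUpdater DTU(DT, DomTreeUpdater::UpdateStrategy::Lazy);
  // Queue the CFG edge changes; the DTU deduplicates and applies them on flush.
  DTU.applyUpdates({{DominatorTree::Insert, Pred, NewSucc},
                    {DominatorTree::Delete, Pred, OldSucc}});
  DTU.flush();
}
```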
2024-05-24 10:10:12 +01:00
Florian Hahn
352dc7d4bb
[LV] Propagate PredicatedBBsAfterVectorization to predecessors.
This fixes some cases where predicated BBs were missed previously,
leading to under-estimating the cost of those blocks.
2024-05-21 10:27:32 +01:00
Florian Hahn
1e7d047c71
[VPlan] Mark LoopInfo preserved in native-path as well (NFC).
LoopInfo is updated during VPlan execution now, so it will also be
updated correctly in the native path.
2024-05-17 12:18:01 +01:00
Florian Hahn
b1e99a699d
[LV] Drop redundant comment from createEdgeMask (NFC).
Follow-up to remove a redundant comment post-commit
https://github.com/llvm/llvm-project/pull/91897
2024-05-14 12:43:47 +01:00
Ramkumar Ramachandra
d7ef34bfe3
[LV] update comment following 63d8058 (NFC) (#91120)
Address a review comment made after landing 63d8058 (LoopVectorize: guard
appending InstsToScalarize; fix bug) by updating a comment.
2024-05-14 10:59:26 +01:00
Florian Hahn
632317e9ab
[VPlan] Add non-poison propagating LogicalAnd VPInstruction opcode. (#91897)
Add a new opcode to model non-poison-propagating logical AND operations
used when generating edge masks. This follows the similar decision to
model Not as a dedicated opcode as well, to improve clarity.

This also helps to simplify the matchers for
https://github.com/llvm/llvm-project/pull/89386.


PR: https://github.com/llvm/llvm-project/pull/91897
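A small sketch of the underlying IR pattern (the wrapper function is hypothetical; CreateLogicalAnd is the existing IRBuilder helper): the poison-safe form is `select i1 %a, i1 %b, i1 false`, so poison in %b does not propagate when %a is false, unlike a plain `and`.

```cpp
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

static Value *createEdgeMask(IRBuilder<> &Builder, Value *SrcMask,
                             Value *EdgeCond) {
  // Emits select(SrcMask, EdgeCond, false), blocking poison from EdgeCond
  // whenever SrcMask is false.
  return Builder.CreateLogicalAnd(SrcMask, EdgeCond, "edge.mask");
}
```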
2024-05-14 09:42:49 +01:00
Florian Hahn
e122380445
[LV] Use VPBuilder to create Select (NFCI). 2024-05-13 20:44:39 +01:00
Florian Hahn
082c81ae4a
[LV] Properly extend versioned constant strides.
We only version unknown strides to 1. If the original type is i1, then
the sign of the extension matters. Properly extend the stride value
before replacing it.

Fixes https://github.com/llvm/llvm-project/issues/91369.
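A small demonstration of why the extension sign matters for an i1 stride of 1: sign-extending the 1-bit value 1 yields -1, while zero-extending yields 1.

```cpp
#include "llvm/ADT/APInt.h"
#include <cstdio>
using namespace llvm;

int main() {
  APInt One(/*numBits=*/1, /*val=*/1);
  std::printf("sext: %lld\n", (long long)One.sext(64).getSExtValue()); // -1
  std::printf("zext: %lld\n", (long long)One.zext(64).getSExtValue()); //  1
  return 0;
}
```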
2024-05-07 21:31:42 +01:00
Alexey Bataev
6517c5b068 [LV][NFC]Address last comments from https://github.com/llvm/llvm-project/pull/88025. 2024-05-03 06:51:01 -07:00
Florian Hahn
bccb7ed8ac
Reapply "[LV] Improve AnyOf reduction codegen. (#78304)"
This reverts the revert commit c6e01627acf859.

This patch includes a fix for any-of reductions and epilogue
vectorization. Extra test coverage for the issue that caused the revert
has been added in bce3bfced5fe0b019 and an assertion has been added in
c7209cbb8be7a3c65813.

--------------------------------
Original commit message:

Update AnyOf reduction code generation to only keep track of the AnyOf
property in a boolean vector in the loop, only selecting either the new
or start value in the middle block.

The patch incorporates feedback from https://reviews.llvm.org/D153697.

This fixes #62565, as now there aren't multiple uses of the
start/new values.

Fixes https://github.com/llvm/llvm-project/issues/62565

PR: https://github.com/llvm/llvm-project/pull/78304
2024-05-03 14:40:49 +01:00
Alexey Bataev
1d43cdc9f5
[LV][EVL] Support reversed loads/stores.
Support for the predicated vector reverse intrinsic was added some time ago.
This adds support for predicated reversed loads/stores in the loop
vectorizer.

Reviewers: fhahn

Reviewed By: fhahn

Pull Request: https://github.com/llvm/llvm-project/pull/88025
2024-05-03 07:28:56 -04:00
Florian Hahn
c7209cbb8b
[LV] Assert that there's a resume phi for epilogue loops (NFC).
This patch adds an assert to createAndCollectMergePhiForReduction to
make sure there is a resume phi when vectorizing the epilogue loop. This
is needed to set the resume value from the main vector loop.

This assertion guards against the issue that caused the revert of
https://github.com/llvm/llvm-project/pull/78304.
2024-05-02 19:20:28 +01:00
Florian Hahn
e846778e52
[VPlan] Make CallInst optional for VPWidenCallRecipe (NFCI).
Replace relying on the underlying CallInst for looking up the called
function and its types by instead adding the called function as an operand,
in line with how called functions are handled in CallInst.

Operand bundles, metadata and fast-math flags are optionally used if
there's an underlying CallInst.

This enables creating VPWidenCallRecipes without requiring an underlying
IR instruction.
2024-05-01 20:48:22 +01:00
Florian Hahn
9c3f5fe88f
[LV] Don't consider the latch block as ScalarPredicatedBB.
The conditional branch from the loop latch will be replaced by a
single branch controlling the loop, so there is no extra overhead from
scalarization. This improves the cost estimates in some cases.
2024-04-29 19:15:46 +01:00
Maciej Gabka
bfc0317153
Move several vector intrinsics out of experimental namespace (#88748)
This patch moves the following intrinsics:
* vector.interleave2/deinterleave2
* vector.reverse
* vector.splice

out of the experimental namespace.

All of these intrinsics have existed in LLVM for more than a year and are
widely used, so they should no longer be considered experimental.
2024-04-29 10:16:45 +01:00
Florian Hahn
b6a8f5486b
[LV] Consider all exit branch conditions uniform.
If we vectorize a loop with multiple exits, all exiting branches should
be considered uniform, as the resulting loop will be controlled by the
canonical IV only. Previously we were overestimating the cost of values
contributing to the other exits.
2024-04-28 13:15:55 +01:00
Florian Hahn
9ee8e38cdc
[VPlan] Also propagate versioned strides to users via sext/zext.
The versioned value may not be used in the loop directly but through a
sext/zext. Add new live-ins in those cases.
2024-04-26 21:29:43 +01:00
David Green
a8105026ff [LV] Fix warning about Mask being set twice. NFC 2024-04-20 16:40:08 +01:00
Florian Hahn
e2a72fa583
[VPlan] Introduce recipes for VP loads and stores. (#87816)
Introduce new subclasses of VPWidenMemoryRecipe for VP
(vector-predicated) loads and stores to address multiple TODOs from
https://github.com/llvm/llvm-project/pull/76172

Note that the introduction of the new recipes also improves code-gen for
VP gather/scatters by removing the redundant header mask. With the new
approach, it is not sufficient to look at users of the widened canonical
IV to find all uses of the header mask.

In some cases, a widened IV is used instead of separately widening the
canonical IV. To handle that, first collect all VPValues representing header
masks (by looking at users of both the canonical IV and widened inductions
that are canonical) and then check all users (recursively) of those header
masks.

Depends on https://github.com/llvm/llvm-project/pull/87411.

PR: https://github.com/llvm/llvm-project/pull/87816
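The collection step described above boils down to a seed-then-worklist walk over users; a generic sketch with a hypothetical node type (not the actual VPlan classes):

```cpp
#include <unordered_set>
#include <vector>

struct Node {
  std::vector<Node *> Users;
};

// Starting from the seed values (e.g. the header masks found via the
// canonical IV and canonical widened inductions), collect every node
// reachable through user edges, visiting each node once.
static std::unordered_set<Node *>
collectTransitiveUsers(const std::vector<Node *> &Seeds) {
  std::unordered_set<Node *> Visited(Seeds.begin(), Seeds.end());
  std::vector<Node *> Worklist(Seeds.begin(), Seeds.end());
  while (!Worklist.empty()) {
    Node *N = Worklist.back();
    Worklist.pop_back();
    for (Node *U : N->Users)
      if (Visited.insert(U).second)
        Worklist.push_back(U);
  }
  return Visited;
}
```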
2024-04-19 09:44:23 +01:00
Ramkumar Ramachandra
73e7f2ff70
LoopVectorize: guard marking iv as scalar; fix bug (#88730)
When collecting loop scalars, LoopVectorize over-eagerly marks the
induction variable and its update as scalars after vectorization, even
if the induction variable update is a first-order recurrence. Guard the
process with this check, fixing a crash.

Fixes #72969.
2024-04-18 14:41:07 +01:00
Ramkumar Ramachandra
63d8058ef5
LoopVectorize: guard appending InstsToScalarize; fix bug (#88720)
In the process of collecting instructions to scalarize, LoopVectorize
uses faulty reasoning whereby it also adds instructions that will be
scalar after vectorization. If an instruction satisfies
isScalarAfterVectorization() for the given VF, it should not be appended
to InstsToScalarize. Add this extra guard, fixing a crash.

Fixes #55096.
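A minimal, self-contained sketch of that guard (all types are stand-ins; only the names isScalarAfterVectorization and InstsToScalarize come from the message above):

```cpp
#include <vector>

struct Instr { bool ScalarAfterVectorization; };

static bool isScalarAfterVectorization(const Instr &I, unsigned /*VF*/) {
  return I.ScalarAfterVectorization;
}

static void appendInstsToScalarize(const std::vector<Instr *> &Candidates,
                                   unsigned VF,
                                   std::vector<Instr *> &InstsToScalarize) {
  for (Instr *I : Candidates) {
    // The fix: never append instructions that are already scalar after
    // vectorization for this VF.
    if (isScalarAfterVectorization(*I, VF))
      continue;
    InstsToScalarize.push_back(I);
  }
}
```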
2024-04-18 10:03:07 +01:00
Florian Hahn
a9bafe91dd
[VPlan] Split VPWidenMemoryInstructionRecipe (NFCI). (#87411)
This patch introduces a new VPWidenMemoryRecipe base class and distinct
sub-classes to model loads and stores.

This is a first step in an effort to simplify and modularize code
generation for widened loads and stores, and to enable adding further,
more specialized memory recipes.

PR: https://github.com/llvm/llvm-project/pull/87411
2024-04-17 11:00:58 +01:00
Mel Chen
cbe148b730
[LV][NFC] Remove the declaration of function fixReduction. (#88491) 2024-04-17 17:59:52 +08:00
Arthur Eubanks
c6e01627ac Revert "Reapply "[LV] Improve AnyOf reduction codegen. (#78304)""
This reverts commit c6e38b928c56f562aea68a8e90f02dbdf0eada85.

Causes miscompiles, see comments on #78304.
2024-04-16 20:40:21 +00:00
Alexey Bataev
e84b2fb48d
[LV][NFCI] Use integer for cost/trip count calculations instead of double, fix possible UB.
Using a floating-point type in the compiler is not the best idea; here it is
used in a comparison for equality with 0 and may cause undefined behavior in
some cases.

Reviewers: fhahn

Reviewed By: fhahn

Pull Request: https://github.com/llvm/llvm-project/pull/87241
2024-04-16 09:48:13 -04:00
Florian Hahn
c836983671
[VPlan] Remove unused first mask op from VPBlendRecipe. (#87770)
VPBlendRecipe does not use the first mask operand. Removing it allows
VPlan-based DCE to remove unused mask computations.

This also fixes #87410, where unused Not VPInstructions are considered
to have only their first lane demanded, while some of their operands
provide a vector value due to other users.

Fixes https://github.com/llvm/llvm-project/issues/87410

PR: https://github.com/llvm/llvm-project/pull/87770
2024-04-09 11:14:05 +01:00
Florian Hahn
9430a4b9d2
[VPlan] Use getEdgeMask when constructing VPBlendRecipe (NFCI).
After 2d0d65b3babe, block-in and edge masks are created up-front. Only
retrieve the cached edge mask here.
2024-04-09 09:32:40 +01:00
Florian Hahn
15d11a4de9
[VPlan] Track IsOrdered in VPReductionRecipe, remove use of ILV (NFCI).
Instead of using ILV.useOrderedReductions during ::execute, store the
information at recipe construction.

Another step towards making recipes' ::execute independent of the legacy ILV.
2024-04-07 20:33:22 +01:00
Florian Hahn
c6e38b928c
Reapply "[LV] Improve AnyOf reduction codegen. (#78304)"
This reverts the revert commit 589c7abb03448.

This patch includes a fix for any-of reductions and epilogue
vectorization. Extra test coverage for the issue that caused the revert
has been added in 399ff08e29d.

--------------------------------
Original commit message:

Update AnyOf reduction code generation to only keep track of the AnyOf
property in a boolean vector in the loop, only selecting either the new
or start value in the middle block.

The patch incorporates feedback from https://reviews.llvm.org/D153697.

This fixes #62565, as now there aren't multiple uses of the
start/new values.

Fixes https://github.com/llvm/llvm-project/issues/62565

PR: https://github.com/llvm/llvm-project/pull/78304
2024-04-05 13:45:13 +01:00