PredicateInfo needs some no-op instruction to which the predicate can be
attached. Currently this is an ssa.copy intrinsic; this PR replaces it
with a no-op bitcast.
Using a bitcast is more efficient because we don't have the overhead of
an overloaded intrinsic. It also makes things slightly simpler overall.
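A minimal sketch of creating such a no-op anchor with the C++ API (the helper name, the ".copy" naming and the insertion point are illustrative, not the actual PredicateInfo code):
```
#include "llvm/IR/Instructions.h"

using namespace llvm;

// A bitcast of a value to its own type is a legal no-op, but it still gives
// us a distinct Instruction to which the predicate can be attached.
static Instruction *insertNoOpCopy(Value *Op, Instruction *InsertBefore) {
  return new BitCastInst(Op, Op->getType(), Op->getName() + ".copy",
                         InsertBefore);
}
```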
This PR removes the old `nocapture` attribute, replacing it with the new
`captures` attribute introduced in #116990. This change is
intended to be essentially NFC, replacing existing uses of `nocapture`
with `captures(none)` without adding any new analysis capabilities.
Making use of non-`none` values is left for a followup.
Some notes:
* `nocapture` will be upgraded to `captures(none)` by the bitcode
reader.
* `nocapture` will also be upgraded by the textual IR reader. This is to
make it easier to use old IR files and somewhat reduce the test churn in
this PR.
* Helper APIs like `doesNotCapture()` will check for `captures(none)`.
* MLIR import will convert `captures(none)` into an `llvm.nocapture`
attribute. The representation in the LLVM IR dialect should be updated
separately.
When traversing the use-def chain of an Argument in a candidate
specialization, also query the SCCPSolver to see if a Value is constant.
This gives a better estimate of the codesize savings of a candidate when
an instruction that uses the argument we are estimating savings for also
uses other arguments that IPSCCP has found to be constant.
Similarly, when estimating the dead basic blocks resulting from branch
and switch instructions that become constant, also query the SCCPSolver
to see whether a predecessor is unreachable.
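A minimal sketch of the kind of queries this involves; the two helpers are illustrative and not the pass's actual code:
```
#include "llvm/Analysis/ValueLattice.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Constants.h"
#include "llvm/Transforms/Utils/SCCPSolver.h"

using namespace llvm;

// Is V either a literal constant or a value IPSCCP has proven constant?
static bool provenConstant(SCCPSolver &Solver, Value *V) {
  return isa<Constant>(V) || Solver.getLatticeValueFor(V).isConstant();
}

// Is this predecessor already known to be unreachable? Such predecessors do
// not keep a successor alive when estimating which blocks a now-constant
// branch or switch leaves dead.
static bool provenUnreachable(SCCPSolver &Solver, BasicBlock *Pred) {
  return !Solver.isBlockExecutable(Pred);
}
```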
When visiting comparison instructions during computation of a
specialization's bonus, make use of information from the lattice value
of the other operand in the case where we have not found this to have a
specific constant value.
Look through ssa_copy intrinsic calls when computing codesize bonus for
a specialization.
Also remove the redundant logic that skipped computing the codesize bonus
for ssa_copy intrinsics, now that these are considered zero-cost by TTI
(as of PR #75294).
Only accumulate the codesize increase of functions that are actually
specialized, rather than for every candidate specialization that we
analyse.
This fixes a subtle bug where prior analysis of candidate
specializations that were deemed unprofitable could prevent subsequent
profitable candidates from being recognised.
Enable specialization on literal constant arguments by default in
Function Specialization.
---------
Co-authored-by: Alexandros Lamprineas <alexandros.lamprineas@arm.com>
Always require functions to be larger than MinFunctionSize when
SpecializeLiteralConstant is enabled, and increase MinFunctionSize to
500, to prevent excessive triggering of specialisations on small
functions.
…ntElimination
The ArgumentPromotion and DeadArgumentElimination passes can change a
function's signature, but the function name remains the same as before
the transformation. This makes tracing with bpf programs hard, since
users tend to refer to the function signature as written in the source.
See discussion [1] for details.
This patch adds a suffix to functions whose signatures are changed. The
suffix lets users know that the function signature has changed and that
they need to inspect the IR or binary to find the modified signature
before tracing those functions.
The suffix for ArgumentPromotion is ".argprom" and the suffixes for
DeadArgumentElimination are ".argelim" and ".retelim". The suffix also
gives users a hint about what kind of transformation has been done.
With this patch, I built a recent linux kernel with full LTO enabled. I
got 4 functions with only argpromotion like
```
set_track_update.argelim.argprom
pmd_trans_huge_lock.argprom
...
```
I got 1058 functions with only deadargelim like
```
process_bit0.argelim
pci_io_ecs_init.argelim
...
```
I got 3 functions with both argpromotion and deadargelim
```
set_track_update.argelim.argprom
zero_pud_populate.argelim.argprom
zero_pmd_populate.argelim.argprom
```
[1] https://github.com/llvm/llvm-project/issues/104678
During inter-procedural SCCP, also infer attributes on arguments, not
just return values. This allows other non-interprocedural passes to make
use of the information later.
This patch canonicalizes constant expression GEPs to use i8 source
element type, aka ptradd. This is the ConstantFolding equivalent of the
InstCombine canonicalization introduced in #68882.
I believe all our optimizations working on constant expression GEPs
(like GlobalOpt etc) have already been switched to work on offsets, so I
don't expect any significant fallout from this change.
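A hedged sketch of the underlying computation (the helper name is illustrative; the actual change lives in ConstantFolding): all-constant GEP indices can be folded to a single byte offset, which the canonical form then applies to an i8 element type:
```
#include <optional>

#include "llvm/ADT/APInt.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Operator.h"

using namespace llvm;

// E.g. getelementptr (i32, ptr @g, i64 2) canonicalizes to
//      getelementptr (i8, ptr @g, i64 8) for a 4-byte i32.
static std::optional<APInt> constantByteOffset(const GEPOperator &GEP,
                                               const DataLayout &DL) {
  APInt Offset(DL.getIndexTypeSizeInBits(GEP.getType()), 0);
  if (GEP.accumulateConstantOffset(DL, Offset))
    return Offset; // rebuild as an i8 ("ptradd") GEP with this offset
  return std::nullopt; // a non-constant index: leave the GEP as-is
}
```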
This is part of:
https://discourse.llvm.org/t/rfc-replacing-getelementptr-with-ptradd/68699
When using the LLVM flang compiler with alias analysis (AA) enabled,
SPEC2017:548.exchange2_r was running significantly slower than without
the AA.
This was caused by the GVN pass replacing many of the loads in the
pre-AA code with phi-nodes that form a long chain of dependencies, which
the function specialization was unable to follow.
This adds a function to discover the transitive set of phi-nodes, with
some limitations to avoid spending ages analysing them.
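A rough sketch of such a discovery routine (the names and the limit are illustrative, not the actual implementation):
```
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Collect the transitive set of phi-nodes reachable through Root's incoming
// values, giving up once the set grows too large to keep compile time bounded.
static bool collectPhiSet(PHINode *Root, SmallPtrSetImpl<PHINode *> &Phis,
                          unsigned MaxPhis = 16) {
  SmallVector<PHINode *, 8> Worklist{Root};
  while (!Worklist.empty()) {
    PHINode *PN = Worklist.pop_back_val();
    if (!Phis.insert(PN).second)
      continue; // already visited
    if (Phis.size() > MaxPhis)
      return false; // too many phi-nodes: bail out
    for (Value *Incoming : PN->incoming_values())
      if (auto *NextPN = dyn_cast<PHINode>(Incoming))
        Worklist.push_back(NextPN);
  }
  return true;
}
```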
The minimum latency savings also had to be lowered: fewer load
instructions mean less saving.
Some more prints were added to help debug the isProfitable decision.
No significant change in compile time or generated code-size.
(A previous attempt to fix this was abandoned: https://github.com/llvm/llvm-project/pull/71442)
---------
Co-authored-by: Alexandros Lamprineas <alexandros.lamprineas@arm.com>
Currently the naming scheme is a bit funky; the specializations are named
after the original function followed by an arbitrary decimal number. This
makes it hard to debug inlined specializations of recursive functions.
With this patch I am adding ".specialized." between the original name and
the suffix, which is now a simple incrementing counter.
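A minimal sketch of the naming scheme (the helper and variable names are illustrative, not the pass's actual code):
```
#include "llvm/ADT/Twine.h"
#include "llvm/IR/Function.h"

using namespace llvm;

// Clones are named "<original>.specialized.<N>" with a single counter.
static void nameSpecialization(Function &Clone, const Function &Original,
                               unsigned &NextSpecId) {
  Clone.setName(Original.getName() + ".specialized." + Twine(++NextSpecId));
}
```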
* Changes the default value of FuncSpecMaxIters from 1 to 10.
This allows specialization of recursive functions.
* Adds an option to control the maximum codesize growth per function.
* Measured ~45% performance uplift for SPEC2017:548.exchange2_r on
AWS Graviton3.
Differential Revision: https://reviews.llvm.org/D145819
Scalable vector GEPs are not constants and trying to create one for
these GEPs causes an assertion failure.
Reviewed By: nikic, paulwalker-arm
Differential Revision: https://reviews.llvm.org/D157590
D150464 updated the cost model for function specialization. Unfortunately, this
also causes a crash when trying to build stage2 LLD with ThinLTO and assertions
enabled. It looks like the issue is caused by a mishandling of the Constant in a
SwitchInst, since the Constant cannot always be assumed to be safely castable to
a ConstantInt. In the case of the crash, the Constant was a ConstantExpr, which
triggered the assertion.
Reviewed By: ChuanqiXu
Differential Revision: https://reviews.llvm.org/D154159
After each iteration of the function specializer, constant stack values
are promoted to constant globals in order to enable recursive function
specialization. This should also be done once before running the
specializer. Enables specialization of _QMbrute_forcePdigits_2 from
SPEC2017:548.exchange2_r.
Differential Revision: https://reviews.llvm.org/D152799
Using AvgLoopIters on any loop is too imprecise making the cost model
favor users inside loop nests regardless of the actual tripcount.
Differential Revision: https://reviews.llvm.org/D150375
As reported on https://reviews.llvm.org/D150375#4367861 and
following, this change causes PDT invalidation issues. Revert
it and dependent commits.
This reverts commit 0524534d5220da5ecb2cd424a46520184d2be366.
This reverts commit ced90d1ff64a89a13479a37a3b17a411a3259f9f.
This reverts commit 9f992cc9350a7f7072a6dbf018ea07142ea7a7ed.
This reverts commit 1b1232047e83b69561fd64b9547cb0a0d374473a.
To do so we have to tweak the cost model such that specialization
does not trigger excessively.
Differential Revision: https://reviews.llvm.org/D150649
Using AvgLoopIters on any loop is too imprecise making the cost model
favor users inside loop nests regardless of the actual tripcount.
Differential Revision: https://reviews.llvm.org/D150375
There are a few inaccuracies in how FuncSpec handles global
variables.
When specialisation on non-const global variables is disabled (the
default) the pass could nevertheless perform some specializations,
e.g. on a constant GEP expression, or on an SSA variable which the
Solver has determined to have the value of a global variable.
When specialisation on non-const global variables is enabled, the pass
would skip non-scalars, e.g. a global array, but this distinction should
be completely inconsequential: a pointer is a pointer.
Reviewed By: SjoerdMeijer
Differential Revision: https://reviews.llvm.org/D149476
Change-Id: Ic73051b2f8602587306760bf2ec552e5860f8d39
To track the return values of specializations, we need to invalidate all
the lattice values across the use-def chain which originates from the
callsites, recompute and propagate.
Differential Revision: https://reviews.llvm.org/D146158
Allow a function to be specialised even if it has its address taken or
it's global. For such functions, consider all of the arguments as
overdefined. Don't delete the functions even if all the apparent calls
were redirected to specialised instances.
Reviewed By: labrinea, ChuanqiXu
Differential Revision: https://reviews.llvm.org/D148345
* Remove redundant variable `NbFunctionsSpecialized` as it is no longer
used by the cost model.
* Rename statistic `NumFuncSpecialized` to `NumSpecsCreated` as a better
description (the old name confusingly implied number of functions we have
created clones for).
* Same for variable `SpecializedFuncs`. Renamed to `Specializations`.
* Move debug message in the destructor (avoids repetition when MaxIters > 1).
Differential Revision: https://reviews.llvm.org/D145375
If the only user of the Alloca argument provided to getPromotableAlloca()
is the same as the Call argument, StoreValue is never set, which results
in an assertion failure (isa<> used on a nullptr) when it is passed into
getCandidateConstant().
This was originally seen when trying to build SPEC 2006 416.gamess using
flang with LTO enabled.
Differential Revision: https://reviews.llvm.org/D143457
This reverts commit 2656572d485127cc30b8fe9752024d2a0f1c50db.
It looks like CINT2017rate/502.gcc_r gets mis-compiled with LTO + PGO on
AArch64 with function specialization.
This patch enables Function Specialization by default at all
optimization levels except Os, Oz.
Compilation Time Overhead:
--------------------------
Measured the Instruction Count increase (Geomean) for CTMark from
the llvm-testsuite as in https://llvm-compile-time-tracker.com.
* {-O3, Non-LTO}: +0.136% Instruction Count
* {-O3, LTO}: +0.346% Instruction Count
Performance Uplift:
-------------------
Measured +9.121% score increase for 505.mcf_r from SPEC Int 2017
(Tested on Neoverse N1 with -O3 + LTO)
Correctness Testing:
--------------------
* Passes bootstrap Clang with ASAN + LTO + FuncSpec aggressive options:
{ MaxClonesThreshold=10,
SmallFunctionThreshold=10,
AvgLoopIterationCount=30,
SpecializeOnAddresses=true,
EnableSpecializationForLiteralConstant=true,
FuncSpecializationMaxIters=10 }
* Builds Chromium and passes its unittests with the above options + ThinLTO.
For more info please refer to
https://discourse.llvm.org/t/rfc-should-we-enable-function-specialization/61518
Differential Revision: https://reviews.llvm.org/D140210
The `FunctionSpecialization` pass chooses specializations among the
opportunities presented by a single function and its calls,
progressively penalizing subsequent specialization attempts by
artificially increasing the cost of a specialization, depending on how
many specializations were applied before. Thus the chosen
specializations are sensitive to the order in which the functions appear in the
module and may be worse than others, had those others been considered
earlier.
This patch makes the `FunctionSpecialization` pass rank the
specializations globally, i.e. choose the "best" specializations
among all the possible specializations in the module, for all
functions.
Since this involved quite a bit of redesign of the pass data
structures, this patch also carries:
* removal of duplicate specializations
* optimization of the call sites update, by collecting per
specialization the list of call sites that can be rewritten
directly, without the prior expensive check of whether the call's
constants and their positions match those of the specialized function.
A bit of a write-up about the FuncSpec data structures and
operation:
Each potential function specialisation is kept in a single vector
(`AllSpecs` in `FunctionSpecializer::run`). This vector is populated
by `FunctionSpecializer::findSpecializations`.
The `findSpecializations` member function has a local `DenseMap` to
eliminate duplicates - with each call to the current function,
`findSpecializations` builds a specialisation signature (`SpecSig`)
and looks it up in the duplicates map. If the signature is present, the
function records the call to rewrite into the existing specialisation
instance. If the signature is absent, it means we have a new
specialisation instance: the function calculates the gain and creates
a new entry in `AllSpecs`. Specialisations with negative gain are ignored
at this point, unless forced.
The potential specialisations for a function form a contiguous range
in `AllSpecs` [1]. This range is recorded in `SpecMap SM`, so we
can quickly find all specialisations for a function.
Once we have all the potential specialisations with their gains, we
need to choose the best ones that fit in the module specialisation
budget. This is done by using a max-heap (`std::make_heap`,
`std::push_heap`, etc.) to find the best `NSpec` specialisations with a
single traversal of the `AllSpecs` vector. The heap itself is
contained in a small vector (`BestSpecs`) of indices into
`AllSpecs`, since elements of `AllSpecs` are a bit too heavy to
shuffle around.
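An illustrative stand-alone sketch of this top-N selection (the struct and names are simplified placeholders, not the pass's actual types):
```
#include <algorithm>
#include <cstddef>
#include <vector>

struct Spec {
  unsigned Gain;
  // cloned function, signature, call sites to rewrite, ...
};

// Return indices into AllSpecs of the (at most) NSpec best candidates,
// visiting each candidate exactly once.
std::vector<std::size_t> pickBestSpecs(const std::vector<Spec> &AllSpecs,
                                       std::size_t NSpec) {
  // Order the heap so the weakest kept candidate sits at the front, where it
  // can be cheaply replaced by a better one.
  auto WeakestAtTop = [&](std::size_t A, std::size_t B) {
    return AllSpecs[A].Gain > AllSpecs[B].Gain;
  };
  std::vector<std::size_t> Best;
  for (std::size_t I = 0, E = AllSpecs.size(); I != E; ++I) {
    if (Best.size() < NSpec) {
      Best.push_back(I);
      std::push_heap(Best.begin(), Best.end(), WeakestAtTop);
    } else if (!Best.empty() &&
               AllSpecs[I].Gain > AllSpecs[Best.front()].Gain) {
      std::pop_heap(Best.begin(), Best.end(), WeakestAtTop);
      Best.back() = I;
      std::push_heap(Best.begin(), Best.end(), WeakestAtTop);
    }
  }
  return Best;
}
```
Keeping indices in the heap rather than whole candidate objects mirrors the `BestSpecs` approach described above.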
Next the chosen specialisations are performed, that is, functions are
cloned, the `SCCPSolver` is primed, and known call sites are updated.
Then we run the `SCCPSolver` to propagate constants in the cloned
functions, after which we walk the calls of the original functions to
update them to call the specialised functions.
---
[1] This range may contain specialisations that were discarded and is
not ordered in any way. One alternative design is to keep a vector of
indices of all specialisations for this function (which would
initially be `i`, `i+1`, `i+2`, etc.) and later sort them by gain,
pushing non-applied ones to the back. This has the potential to speed
up `updateCallSites`.
Reviewed By: ChuanqiXu, labrinea
Differential Revision: https://reviews.llvm.org/D139346
Change-Id: I708851eb38f07c42066637085b833ca91b195998
Reland 877a9f9abec61f06e39f1cd872e37b828139c2d1 since D138654 (parent)
has been fixed with 9ebaf4fef4aac89d4eff08e48185d61bc893f14e and with
8f1e11c5a7d70f96943a72649daa69f152d73e90.
Differential Revision: https://reviews.llvm.org/D126455
This reverts commit 877a9f9abec61f06e39f1cd872e37b828139c2d1.
It depends on the parent revision 42c2dc401742266da3e0251b6c1ca491f4779963
which needs to be reverted as it broke some buildbots, so reverting both.
The aim of this patch is to minimize the compilation time overhead of
running Function Specialization. It is about 40% slower to run as a
standalone pass (IPSCCP + FuncSpec vs IPSCCP with FuncSpec) according
to my measurements. I compiled the llvm testsuite with NewPM-O3 + LTO
and measured single threaded [user + system] time of IPSCCP and FuncSpec
by passing the '-time-passes' option to lld. Then I compared the two
configurations in terms of Instruction Count of the total compilation
(not of the individual passes) as in https://llvm-compile-time-tracker.com.
Geomean for non-LTO builds is -0.25% and LTO is -0.5% approximately.
You can find more info below:
https://discourse.llvm.org/t/rfc-should-we-enable-function-specialization/61518
Differential Revision: https://reviews.llvm.org/D126455
Deleting a fully specialised function left dangling pointers in the
`FunctionAnalysisManager`, which caused an internal compiler error
when the function's storage was reused.
Fixes bug #58759.
Reviewed By: ChuanqiXu
Differential Revision: https://reviews.llvm.org/D138909
Change-Id: Ifed378c748af35e8fe7dcbdddb0f41b8777cbe87
When calculating the specialization bonus for a given function argument,
we recursively traverse the chain of (certain) users, accumulating the
instruction costs. Then we exponentially increase the bonus to account
for loop nests. This is problematic for two reasons: (a) the users might
not themselves be inside the loop nest, and (b) if they are, we are
accounting for it multiple times. Instead we should be adjusting the
bonus before traversing the user chain.
This reduces the instruction count for CTMark (newPM-O3) when Function
Specialization is enabled without actually reducing the amount of
specializations performed (geomean: -0.001% non-LTO, -0.406% LTO).
Differential Revision: https://reviews.llvm.org/D136692
[fixed test to work with reverse iteration]
The `FunctionSpecialization` pass has support for specialising
functions that are called with literal arguments. This functionality
is disabled by default and is enabled with the option
`-function-specialization-for-literal-constant`. There are a few
issues with the implementation, though:
* even with the default, the pass will still specialise based on
floating-point literals
* even when it's enabled, the pass will specialise only for the `i1`
type (or `i2` if all of the possible 4 values occur, or `i3` if all
of the possible 8 values occur, etc)
The reason for this is an incorrect check of the lattice value of the
function's formal parameter. The lattice value is `overdefined` when the
constant range of the possible arguments is the full set, and this is
what triggers the specialisation. However, if the set of the
possible arguments is not the full set, that must not prevent the
specialisation.
This patch changes the pass to NOT consider a formal parameter when
specialising a function if the lattice value for that parameter is:
* unknown or undef
* a constant
* a constant range with a single element
on the basis that specialisation is pointless for those cases (see the
sketch below).
It also changes the criteria for picking an actual argument to
specialise on; the argument must:
* be an LLVM IR constant, or
* have a `constant` lattice value, or
* have a `constantrange` lattice value with a single element.
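A rough sketch of the formal-parameter filter described by the first list (the helper name is illustrative; the real pass logic is more involved):
```
#include "llvm/Analysis/ValueLattice.h"

using namespace llvm;

// A formal parameter is not worth specialising on if its lattice value is
// still unknown/undef or already pins down a single value.
static bool pointlessToSpecialiseOn(const ValueLatticeElement &LV) {
  return LV.isUnknownOrUndef() || LV.isConstant() ||
         (LV.isConstantRange() && LV.getConstantRange().isSingleElement());
}
```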
Reviewed By: ChuanqiXu
Differential Revision: https://reviews.llvm.org/D135893
Change-Id: Iea273423176082ec51339aa66a5fe9fea83557ee