`replaceAllUsesWith` is not safe to use in a dialect conversion and will
be deactivated soon (#154112). This commit fixes some API violations and
makes some general improvements.
Now that #149310 has restricted lifetime intrinsics to only work on
allocas, we can also drop the explicit size argument. Instead, the size
is implied by the alloca.
This removes the ability to mark only a prefix of an alloca alive/dead.
We never used that capability, so removing it means we no longer need to
handle that possibility everywhere (even though many key places,
including stack coloring, did not actually respect it anyway).
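As a rough before/after (a minimal sketch using LLVM's `IRBuilder`; the exact post-change signatures may differ):

```cpp
// Minimal sketch: emitting lifetime markers for an alloca via IRBuilder.
// With this change there is no explicit size; it is implied by the alloca.
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

void emitMarkedAlloca(Function &f) {
  IRBuilder<> b(&f.getEntryBlock(), f.getEntryBlock().begin());
  AllocaInst *a = b.CreateAlloca(b.getInt64Ty(), nullptr, "tmp");
  b.CreateLifetimeStart(a); // previously: CreateLifetimeStart(a, size)
  // ... uses of the alloca ...
  b.CreateLifetimeEnd(a);
}
```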
Flang at -Ofast usually produces executables that consume more stack than
other Fortran compilers.
This is in part because the allocas created from temporary heap
allocations by the StackArrays pass are created at function scope
without lifetime markers, and LLVM cannot merge allocas into the same
stack slot unless it can prove their lifetimes do not overlap.
This patch adds an option to make the StackArrays pass generate LLVM
lifetime markers at the sites of the previous heap allocation/free,
using the LLVM dialect operations for them.
This is similar to other configuration objects used across MLIR.
Rename some fields to better reflect that they are no longer booleans.
Reland 04d261101b4f229189463136a794e3e362a793af / #132253.
`findAllocaLoopInsertionPoint()` hit an assertion when it was unable
to find the `fir.freemem` because of an intervening `fir.convert`.
I think it is better to look for `fir.freemem` in the same way,
with the look-through walk.
This does add a little computational complexity because now every
freemem operation has to be tested for every allocation. This could be
improved with some more memoisation but I think it is easier to read
this way. Let me know if you would prefer me to change this to
pre-compute the normalised addresses each freemem operation is using.
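For illustration, the look-through walk amounts to something like this sketch (helper name hypothetical):

```cpp
// Hypothetical helper: strip fir.convert ops so a fir.freemem operand can
// be matched against the allocation it ultimately frees.
static mlir::Value lookThroughConverts(mlir::Value addr) {
  while (auto convert = addr.getDefiningOp<fir::ConvertOp>())
    addr = convert.getValue();
  return addr;
}
```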
Weirdly, this change resulted in a verifier failure for the `fir.declare`
in the previous test case. Maybe it was previously removed as dead code
and now it isn't. Anyway, I fixed that too.
Note that PointerUnion::{is,get} have been soft deprecated in
PointerUnion.h:
// FIXME: Replace the uses of is(), get() and dyn_cast() with
// isa<T>, cast<T> and the llvm::dyn_cast<T>
I'm not touching PointerUnion::dyn_cast for now because it's a bit
complicated; we could blindly migrate it to dyn_cast_if_present, but
we should probably use dyn_cast when the operand is known to be
non-null.
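A sketch of the migration for the non-deprecated members (the free functions work on `PointerUnion` through its cast specializations):

```cpp
#include "llvm/ADT/PointerUnion.h"
#include "mlir/IR/Attributes.h"
#include "mlir/IR/Value.h"

// `pu` stands in for any PointerUnion value, e.g. inside an OpFoldResult.
void migrate(llvm::PointerUnion<mlir::Attribute, mlir::Value> pu) {
  // Before (soft-deprecated member functions):
  //   if (pu.is<mlir::Attribute>()) { auto a = pu.get<mlir::Attribute>(); }
  // After (free-function casting API):
  if (llvm::isa<mlir::Attribute>(pu)) {
    mlir::Attribute attr = llvm::cast<mlir::Attribute>(pu);
    (void)attr;
  }
}
```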
The greedy rewriter is used in many different flows and it has a lot of
convenience (worklist management, debugging actions, tracing, etc.). But
it combines two kinds of greedy behavior: 1) how ops are matched, and 2)
folding wherever it can.
These are independent forms of greediness, and coupling them leads to
inefficiency, e.g. in cases where one needs to create different phases
in lowering and is required to apply patterns in a specific order split
across different passes. Using the driver, one ends up needlessly
retrying folding/having multiple rounds of folding attempts, where one
final run would have sufficed.
Of course folks can avoid this behavior locally by building their own
driver, but this is also a commonly requested feature that folks keep
working around locally in suboptimal ways.
For downstream users, there should be no behavioral change. Updating
from the deprecated name should just be a find and replace (e.g., of the
`find ./ -type f -exec sed -i
's|applyPatternsAndFoldGreedily|applyPatternsGreedily|g' {} \;` variety),
as the API arguments haven't changed between the two.
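In C++ the update is the same one-token rename, sketched here (current header location assumed):

```cpp
#include "mlir/Transforms/GreedyPatternRewriteDriver.h"

mlir::LogicalResult runCleanup(mlir::Operation *op,
                               const mlir::FrozenRewritePatternSet &patterns) {
  // Previously: return applyPatternsAndFoldGreedily(op, patterns);
  return mlir::applyPatternsGreedily(op, patterns);
}
```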
The concept of a 'program point' in the original data flow framework is
ambiguous. It can refer to either an operation or a block itself. This
representation has different interpretations in forward and backward
data-flow analysis. In forward data-flow analysis, the program point of
an operation represents the state after the operation, while in backward
data flow analysis, it represents the state before the operation. When
using forward or backward data-flow analysis, it is crucial to carefully
handle this distinction to ensure correctness.
This patch refactors the definition of program point, unifying the
interpretation of program points in both forward and backward data-flow
analysis.
How to integrate this patch?
For dense forward data-flow analysis and other analysis (except dense
backward data-flow analysis), the program point corresponding to the
original operation can be obtained by `getProgramPointAfter(op)`, and
the program point corresponding to the original block can be obtained by
`getProgramPointBefore(block)`.
For dense backward data-flow analysis, the program point corresponding
to the original operation can be obtained by
`getProgramPointBefore(op)`, and the program point corresponding to the
original block can be obtained by `getProgramPointAfter(block)`.
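In code, the two conventions look roughly like this (hedged sketch; the `getProgramPoint*` helpers live on the solver/analysis in recent MLIR, and exact types may vary):

```cpp
#include "mlir/Analysis/DataFlowFramework.h"

using namespace mlir;

void queryPoints(DataFlowSolver &solver, Operation *op, Block *block) {
  // Dense forward: state *after* an op; a block's state at its entry.
  ProgramPoint *afterOp = solver.getProgramPointAfter(op);
  ProgramPoint *blockEntry = solver.getProgramPointBefore(block);
  // Dense backward flips the convention.
  ProgramPoint *beforeOp = solver.getProgramPointBefore(op);
  ProgramPoint *blockExit = solver.getProgramPointAfter(block);
  (void)afterOp; (void)blockEntry; (void)beforeOp; (void)blockExit;
}
```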
NOTE: If you need to get the lattice of other data-flow analyses in
dense backward data-flow analysis, you should still use the dense
forward data-flow approach. For example, to get the Executable state of
a block in dense backward data-flow analysis and add the dependency of
the current operation, you should write:
``getOrCreateFor<Executable>(getProgramPointBefore(op),
getProgramPointBefore(block))``
In the case above, we use `getProgramPointBefore(op)` because the
analysis we rely on is dense backward data-flow, and we use
`getProgramPointBefore(block)` because the lattice we query is the
result of an analysis that is not a dense backward data-flow computation.
Related discussion:
https://discourse.llvm.org/t/rfc-unify-the-semantics-of-program-points/80671/8
Corresponding PSA:
https://discourse.llvm.org/t/psa-program-point-semantics-change/81479
Since #108562, StackArrays no longer has to create function declarations
at the module level to use stacksave/stackrestore LLVM intrinsics. This
will allow it to run in parallel on multiple functions at the same time.
The new LLVM stack save/restore intrinsic operations are more convenient
than function calls because they do not add function declarations to the
module and therefore do not block the parallelisation of passes.
Furthermore they could be much more easily marked with memory effects
than function calls if that ever proved useful.
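For illustration, a hedged sketch of emitting the new ops with the LLVM dialect (builder signatures approximated):

```cpp
#include "mlir/Dialect/LLVMIR/LLVMDialect.h"

using namespace mlir;

void wrapInStackSaveRestore(OpBuilder &builder, Location loc) {
  auto ptrTy = LLVM::LLVMPointerType::get(builder.getContext());
  Value sp = builder.create<LLVM::StackSaveOp>(loc, ptrTy);
  // ... emit the stack temporary and its uses here ...
  builder.create<LLVM::StackRestoreOp>(loc, sp);
}
```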
This builds on top of #107879.
Resolves #108016
First patch to fix a BIND(C) ABI issue
(https://github.com/llvm/llvm-project/issues/102113). I need to keep
track of BIND(C) in more locations (fir.dispatch and func.func
operations), and I need to fix a few passes that are dropping the
attribute on the floor. Since I expect more procedure attributes that
cannot be reflected in mlir::FunctionType will be needed for ABI,
optimizations, or debug info, this NFC patch adds a new enum attribute
to keep track of procedure attributes in the IR.
This patch does not update lowering to produce more of these attributes;
that will be done in a separate patch to keep the test changes here small.
Adding the attribute on fir.dispatch and func.func will also be done in
separate patches.
The base `DataFlowAnalysis::visit` returns `LogicalResult`, but the
Sparse/Dense/Forward/Backward wrappers' `visitOperation` does not.
Sometimes it is necessary to abort the solver early if some unrecoverable
condition is detected inside an analysis.
Update `visitOperation` to return `LogicalResult` and propagate it to
`solver.initializeAndRun()`. Only `visitOperation` is updated for now;
it is possible to update other hooks like `visitNonControlFlowArguments`,
but it's not needed immediately, and this keeps the PR small.
Hijacked `UnderlyingValueAnalysis` test analysis to test it.
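A hedged sketch of what an aborting analysis looks like (`MySparseAnalysis`, `MyLattice`, and `isUnsupported` are hypothetical):

```cpp
LogicalResult MySparseAnalysis::visitOperation(
    Operation *op, ArrayRef<const MyLattice *> operands,
    ArrayRef<MyLattice *> results) {
  if (isUnsupported(op)) // some unrecoverable condition
    return failure();    // propagates out of solver.initializeAndRun()
  // ... normal transfer function ...
  return success();
}
```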
We missed collecting the analysis results for regions terminated with
`omp.yield`. This resulted in missing some opportunities for malloc
optimizations inside omp regions.
Extends the stack-arrays pass to support `fir.declare` ops. Before that,
we did not recognize malloc-free pairs for which `fir.declare` is used
to declare the allocated entity. This is because the `free` op was
invoked on the result of the `fir.declare` op and did not directly use
the allocated memory SSA value.
This also extends the pass to collect the analysis results within OpenMP
regions.
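A hedged sketch of the collection change (`collectResultsAt` is hypothetical):

```cpp
#include "mlir/Dialect/Func/IR/FuncOps.h"
#include "mlir/Dialect/OpenMP/OpenMPDialect.h"

void collectResultsAt(mlir::Operation *terminator);

// Also record analysis results at `omp.yield` terminators, not only at
// terminators that can end the function.
void collectInOmpRegions(mlir::func::FuncOp func) {
  func.walk(
      [](mlir::omp::YieldOp yield) { collectResultsAt(yield.getOperation()); });
}
```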
This is a heavy process, and it can trigger a massive explosion in added
block arguments. While potentially reducing the code size, the resulting
merged blocks with arguments hide some of the def-use chains and can
even hinder further analyses/optimizations: a merge block does not have
its own path-sensitive context; instead, the context is merged from all
the predecessors.
Previous behavior can be restored by passing
`{test-convergence region-simplify=aggressive}`
to the canonicalize pass.
…ted. (#89998)" (#90250)
This partially reverts commit 7aedd7dc754c74a49fe84ed2640e269c25414087.
This change removes calls to the deprecated member functions. It does
not mark the functions deprecated yet and does not disable the
deprecation warning in TypeSwitch. This seems to cause problems with
MSVC.
StackArrays needs to keep running only on `func.func` because it needs to
know which block terminators can end the function (rather than just
branching between unstructured control flow). A similar concept does not
exist at the more abstract level of "any top-level MLIR operation".
For example, the pass currently looks for `func::ReturnOp` and
`fir::UnreachableOp` as points where execution can end. If it were
run on `omp.declare_reduction`, it would also need to understand
`omp::YieldOp` (perhaps only when `omp.declare_reduction` is the parent).
There isn't a generic concept in MLIR for this.
This patch fixes:
```
flang/lib/Optimizer/Transforms/StackArrays.cpp:452:7: error:
ignoring return value of function declared with 'nodiscard'
attribute [-Werror,-Wunused-result]
```
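The usual fix (a guess at the in-tree change; `doTransform` stands in for the call at the flagged line) is to consume the `LogicalResult` rather than drop it:

```cpp
if (mlir::failed(doTransform(op)))
  return signalPassFailure(); // or (void)doTransform(op) if ignorable
```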
The stack arrays pass uses data flow analysis to determine whether heap
allocations are freed on all paths out of the function.
`interp_domain_em_part2` in spec2017 wrf generates over 120k operations,
including almost 5k fir.if operations and over 200 fir.do_loop
operations, all in the same function. The MLIR data flow analysis
framework cannot provide reasonable performance for such cases because
there is a combinatorial explosion in the number of control flow paths
through the function, all of which must be checked to determine if the
heap allocations will be freed.
This patch skips the stack arrays pass for ridiculously large functions
(defined as having more than 1000 fir.allocmem operations). This
threshold is configurable at runtime with a command line argument.
With this patch, compiling this file is more than 80% faster.
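A hypothetical sketch of such a threshold option (the actual in-tree flag name and spelling may differ):

```cpp
#include <cstddef>
#include "llvm/Support/CommandLine.h"

static llvm::cl::opt<std::size_t> maxAllocsPerFunction(
    "stack-arrays-max-allocations", // hypothetical flag name
    llvm::cl::desc("Skip the StackArrays pass for functions containing "
                   "more fir.allocmem operations than this"),
    llvm::cl::init(1000));
```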
Previously, only a constant reference was stored in the FirOpBuilder.
However, a lot of code was merged using
`FirOpBuilder builder{rewriter, getKindMapping(mod)};`
This is incorrect because the KindMapping returned goes out of scope
as soon as FirOpBuilder's constructor has run. This led to an infinite
loop when running some tests using HLFIR (because the stack space
containing the kind mapping was re-used and corrupted).
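The bug distills to the classic dangling-reference pattern (types simplified for illustration):

```cpp
struct KindMapping { /* ... */ };
KindMapping getKindMapping() { return {}; }

struct BuilderLike {
  const KindMapping &kindMap; // dangles when bound to a temporary
  explicit BuilderLike(const KindMapping &km) : kindMap(km) {}
};

void bug() {
  BuilderLike builder{getKindMapping()}; // temporary destroyed here
  // Any later use of builder.kindMap reads dead (re-usable) stack memory.
}
```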
One solution would have been simply to fix the incorrect call sites;
however, as a large number of these had already made it past review, I
decided instead to change FirOpBuilder to store its own copy of the
KindMapping. This is not costly because nearly every KindMapping is
constructed exclusively in order to construct a FirOpBuilder. To make
this common pattern simpler, I added a new constructor to FirOpBuilder
which calls getKindMapping().
Differential Revision: https://reviews.llvm.org/D151881
The old `fir.allocmem` operation returned a `!fir.heap<.>` type. The new
`fir.alloca` operation returns a `!fir.ref<.>` type. This patch inserts a
`fir.convert` so that the old type is preserved. This prevents verifier
failures when types returned from `fir.if` operations don't match the
expected type.
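A hedged sketch of the rewrite (builder signatures approximated):

```cpp
void replaceWithStackAllocation(mlir::PatternRewriter &rewriter,
                                fir::AllocMemOp allocmem,
                                mlir::Value allocaResult) {
  // Give users of the old result the !fir.heap<.> type they expect.
  mlir::Value cast = rewriter.create<fir::ConvertOp>(
      allocmem.getLoc(), allocmem.getType(), allocaResult);
  rewriter.replaceOp(allocmem, cast);
}
```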
Differential Revision: https://reviews.llvm.org/D151921
In upstream MLIR, the dialect conversion infrastructure is used for
lowering from one dialect to another (the passes are of the form
XToYPass), whereas transformations within the same dialect tend to use
applyPatternsAndFoldGreedily.
In this case, the full complexity of applyPatternsAndFoldGreedily isn't
needed, so we can get away with the simpler applyOpPatternsAndFold.
This change was suggested by @jeanPerier
The old differential revision for this patch was
https://reviews.llvm.org/D150853
Re-applying here, fixing the issue which led to the patch being reverted.
The issue was erasing uses of the allocation operation while still
iterating over those uses (leading to a use-after-free). I have added a
regression test which catches this bug in -fsanitize=address builds, but
it is hard to reliably cause a crash from the use-after-free in normal
builds.
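The fix pattern, roughly (a sketch, not the exact in-tree code):

```cpp
#include "llvm/ADT/SmallVector.h"

void eraseUsersSafely(mlir::PatternRewriter &rewriter,
                      mlir::Operation *allocmem) {
  // Snapshot the users first; erasing while iterating the use list is the
  // use-after-free this reland fixes.
  llvm::SmallVector<mlir::Operation *> users(allocmem->getUsers().begin(),
                                             allocmem->getUsers().end());
  for (mlir::Operation *user : users)
    rewriter.eraseOp(user);
}
```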
Differential Revision: https://reviews.llvm.org/D151728
In upstream MLIR, the dialect conversion infrastructure is used for
lowering from one dialect to another (the passes are of the form
XToYPass), whereas transformations within the same dialect tend to use
applyPatternsAndFoldGreedily.
In this case, the full complexity of applyPatternsAndFoldGreedily isn't
needed, so we can get away with the simpler applyOpPatternsAndFold.
This change was suggested by @jeanPerier
Differential Revision: https://reviews.llvm.org/D150853
Some functions (e.g. the main function) end with a call to the STOP
statement instead of a func.return. This is lowered as a call to the
stop runtime function followed by a fir.unreachable. fir.unreachable is
a terminator and so this can cause functions to have no func.return.
The stack arrays pass looks to see which heap allocations have always
been freed by the time a function returns. Without any returns, the pass
does not detect any freed allocations. This patch changes this behaviour
so that fir.unreachable is checked as well as func.return.
This allows 15 heap allocations for array temporaries in spec2017
exchange2's main function to be moved to the stack.
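A hedged sketch of the terminator check (`verifyAllFreedAt` is hypothetical):

```cpp
#include "mlir/Dialect/Func/IR/FuncOps.h"

void verifyAllFreedAt(mlir::Operation *exitPoint);

// Treat fir.unreachable like func.return when deciding where execution
// can leave the function.
void checkFunctionExit(mlir::Operation *terminator) {
  if (mlir::isa<mlir::func::ReturnOp, fir::UnreachableOp>(terminator))
    verifyAllFreedAt(terminator);
}
```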
Differential Revision: https://reviews.llvm.org/D143918
This pass implements the `-fstack-arrays` flag. See the RFC in
`flang/docs/fstack-arrays.md` for more information.
Differential revision: https://reviews.llvm.org/D140415