384 Commits

Author SHA1 Message Date
Bjorn Pettersson
a20f7efbc5 Remove several no longer needed includes. NFCI
Mostly removing includes of InitializePasses.h and Pass.h in
passes that no longer have support for the legacy PM.
2023-04-17 13:54:19 +02:00
Mikhail Maltsev
ab8150acc5 [MemCpyOpt] Don't fold memcpy.inline into memmove
The llvm.memcpy.inline intrinsic must be expanded into code that
does not contain any function calls because it is intended for
the implementation of low-level functions like memcpy. Currently
MemCpyOpt might convert llvm.memcpy.inline into llvm.memmove under
certain circumstances. This patch fixes the issue.

Fixes https://github.com/llvm/llvm-project/issues/61791.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D147162
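
As an illustration, the fix boils down to a guard like the following
sketch (helper name is hypothetical; MemCpyInlineInst is the
IntrinsicInst subclass the intrinsic maps to):

    #include "llvm/IR/IntrinsicInst.h"
    using namespace llvm;

    // llvm.memcpy.inline guarantees call-free expansion, while
    // llvm.memmove may be lowered to a libcall, so the memmove fold
    // must bail out for the inline variant.
    static bool mayFoldToMemMove(const MemCpyInst *M) {
      return !isa<MemCpyInlineInst>(M);
    }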
2023-03-30 13:14:59 +01:00
Arthur Eubanks
1a90faacf1 [Passes] Remove some legacy passes
NewGVN
GVNHoist
GVNSink
MemCpyOpt
Float2Int

These were only used in the optimization pipeline, whose legacy version has been removed.
2023-03-14 13:32:59 -07:00
Guillaume Chatelet
135f23d67b Deprecate MemIntrinsicBase::getDestAlignment() and MemTransferBase::getSourceAlignment()
Differential Revision: https://reviews.llvm.org/D141840
2023-01-16 14:22:03 +00:00
Guillaume Chatelet
8fd5558b29 [NFC] Use TypeSize::getFixedValue() instead of TypeSize::getFixedSize()
This change is one of a series to implement the discussion from
https://reviews.llvm.org/D141134.
2023-01-11 16:49:38 +00:00
Nikita Popov
07bf39df80 [MemCpyOpt] Extract processStoreOfLoad() method (NFC) 2023-01-06 16:11:10 +01:00
Nikita Popov
a6a526ec54 [IR] Add AllocaInst::getAllocationSize() (NFC)
When fetching allocation sizes, we almost always want to have the
size in bytes, but we were only providing an InBits API. Also add
the corresponding byte-based counterpart to save some *8 and /8
juggling everywhere.
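
A usage sketch of the byte-based API (helper name is illustrative, and
the current std::optional-based signature is assumed):

    #include "llvm/IR/DataLayout.h"
    #include "llvm/IR/Instructions.h"
    #include <optional>
    using namespace llvm;

    // Fetch an alloca's size in bytes directly, instead of calling
    // getAllocationSizeInBits() and dividing by 8.
    static std::optional<uint64_t> allocaBytes(const AllocaInst &AI,
                                               const DataLayout &DL) {
      if (std::optional<TypeSize> Size = AI.getAllocationSize(DL))
        if (!Size->isScalable())
          return Size->getFixedValue();
      return std::nullopt;
    }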
2023-01-06 15:36:16 +01:00
Owen Anderson
8256ddf78c Resolve a long-standing FIXME in memcpyopt.
Inspecting the downstream use of cpyAlign makes it clear that
`performCallSlotOptzn` expects it to represent the alignment
of the copy destination, not the minimum of the src and dest
alignments. This patch renames the parameter to make this more
obvious.

I believe this change is NFC, because the downstream code has
alignment checks such that it all works out in the end. I have not
been able to construct a test case that actually triggers a change
in output.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D140603
2022-12-23 16:15:24 -07:00
Fangrui Song
d4b6fcb32e [Analysis] llvm::Optional => std::optional 2022-12-14 07:32:24 +00:00
Kazu Hirata
405fc404bf [ADT] Don't include None.h (NFC)
These source files no longer use None, so they do not need to include
None.h.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-06 20:14:51 -08:00
Nikita Popov
b7ede701d8 [MemCpyOpt] Use BatchAA when processing one instruction (NFCI)
While we can't use a single BatchAA instance for the entire
MemCpyOpt run without further justification, we can use BatchAA
while performing the queries related to a single instruction
(these will first perform some AA-based checks, and then modify
the IR only afterwards).
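
A minimal sketch of the pattern (names illustrative):

    #include "llvm/Analysis/AliasAnalysis.h"
    using namespace llvm;

    // Batch the AA queries for one instruction; BatchAAResults caches
    // results, so it must only be used while the IR is unchanged.
    static bool queriesForOneInstruction(AAResults &AA,
                                         const MemoryLocation &Src,
                                         const MemoryLocation &Dest) {
      BatchAAResults BAA(AA);
      if (BAA.alias(Src, Dest) != AliasResult::NoAlias)
        return false;
      // ... further BAA queries here; modify the IR only afterwards ...
      return true;
    }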
2022-12-06 10:16:39 +01:00
Krzysztof Parzyszek
f3b6dbfda8 Instructions: convert Optional to std::optional 2022-12-04 14:25:11 -06:00
Kazu Hirata
343de6856e [Transforms] Use std::nullopt instead of None (NFC)
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated.  The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-02 21:11:37 -08:00
OCHyams
e292f91291 [Assignment Tracking][17/*] Account for assignment tracking in memcpyopt
The Assignment Tracking debug-info feature is outlined in this RFC:

https://discourse.llvm.org/t/rfc-assignment-tracking-a-better-way-of-specifying-variable-locations-in-ir

Maintain and propagate DIAssignID attachments in memcpyopt.

Reviewed By: jmorse

Differential Revision: https://reviews.llvm.org/D133312
2022-11-15 11:51:10 +00:00
Nikita Popov
4d33cf4166 [MemCpyOpt] Avoid moving lifetime marker above def (PR58903)
This is unlikely to happen with opaque pointers, so just bail out
of the transform, rather than trying to move bitcasts/etc as well.

Fixes https://github.com/llvm/llvm-project/issues/58903.
2022-11-11 15:06:34 +01:00
Nikita Popov
9a45e4beed [MemCpyOpt] Move lifetime marker before call to enable call slot optimization
Currently call slot optimization may be prevented because the
lifetime markers for the destination only start after the call.
In this case, rather than aborting the transform, we should move
the lifetime.start before the call to enable the transform.

Differential Revision: https://reviews.llvm.org/D135886
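
A sketch of the transform (hypothetical names; assumes the marker's
memory is an alloca with no other accesses between the marker and the
call):

    #include <cassert>
    #include "llvm/IR/InstrTypes.h"
    #include "llvm/IR/IntrinsicInst.h"
    using namespace llvm;

    // Hoist the destination's lifetime.start above the call so the
    // destination is live across it and call slot optimization can fire.
    static void hoistLifetimeStart(IntrinsicInst *LifetimeStart,
                                   CallBase *Call) {
      assert(LifetimeStart->getIntrinsicID() == Intrinsic::lifetime_start);
      LifetimeStart->moveBefore(Call);
    }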
2022-11-07 15:26:00 +01:00
Nikita Popov
b54b84fde6 [MemCpyOpt] Add additional debug output (NFC) 2022-10-13 17:03:44 +02:00
Nikita Popov
5fa14ee835 [MemCpyOpt] Don't hoist above producer of pointer operand
This was already handled correctly below, but not checked for the
original store pointer operand. Encountered when converting tests
to opaque pointers, where the intermediate bitcast goes away.
2022-10-05 14:52:33 +02:00
Matt Arsenault
c867401407 MemCpyOpt: Pass through AssumptionCache 2022-09-19 19:25:22 -04:00
Matt Arsenault
0d8ffcc532 Analysis: Add AssumptionCache argument to isDereferenceableAndAlignedPointer
This does not try to pass it through from the end users.
2022-09-19 18:57:33 -04:00
Guillaume Chatelet
3c126d5fe4 [Alignment] Replace commonAlignment with std::min
`commonAlignment` is a shortcut to pick the smaller of two `Align`
objects. As-is it doesn't bring much value compared to `std::min`.

Differential Revision: https://reviews.llvm.org/D128345
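
The replacement is a one-liner, as in this sketch:

    #include <algorithm>
    #include "llvm/Support/Alignment.h"
    using namespace llvm;

    // Align is totally ordered, so std::min picks the smaller of the two
    // alignments directly; commonAlignment(A, B) was just a wrapper.
    static Align pickCopyAlign(Align DestAlign, Align SrcAlign) {
      return std::min(DestAlign, SrcAlign);
    }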
2022-06-28 07:15:02 +00:00
Guillaume Chatelet
57ffff6db0 Revert "[NFC] Remove dead code"
This reverts commit 8ba2cbff70f2c49a8926451c59cc260d67b706cf.
2022-06-22 14:55:47 +00:00
Guillaume Chatelet
8ba2cbff70 [NFC] Remove dead code 2022-06-22 13:33:58 +00:00
Guillaume Chatelet
f9bb8c24ac [NFC][Alignment] Convert MemCpyOptimizer.cpp 2022-06-13 10:07:09 +00:00
Fangrui Song
d86a206f06 Remove unneeded cl::ZeroOrMore for cl::opt/cl::list options 2022-06-05 00:31:44 -07:00
Nikita Popov
a5c3b5748c [MemCpyOpt] Work around PR54682
As discussed on https://github.com/llvm/llvm-project/issues/54682,
MemorySSA currently has a bug when computing the clobber of calls
that access loop-varying locations. I think a "proper" fix for this
on the MemorySSA side might be non-trivial, but we can easily work
around this in MemCpyOpt:

Currently, MemCpyOpt uses a location-less getClobberingMemoryAccess()
call to find a clobber on either the src or dest location, and then
refines it for the src and dest clobber. This was intended as an
optimization, as the location-less API is cached, while the
location-affected APIs are not.

However, I don't think this really makes a difference in practice,
because I don't think anything will use the cached clobbers on
those calls later anyway. On CTMark, this patch seems to be very
mildly positive actually.

So I think this is a reasonable way to avoid the problem for now,
though MemorySSA should also get a fix.

Differential Revision: https://reviews.llvm.org/D122911
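
A sketch of the location-affected query the pass now uses (names
illustrative):

    #include "llvm/Analysis/MemorySSA.h"
    using namespace llvm;

    // Ask for the clobber of a specific location instead of the cached,
    // location-less clobber of the call. This query is not cached, but
    // it avoids the loop-variant-location bug.
    static MemoryAccess *clobberFor(MemorySSA &MSSA,
                                    MemoryUseOrDef *CallAccess,
                                    const MemoryLocation &Loc) {
      return MSSA.getWalker()->getClobberingMemoryAccess(CallAccess, Loc);
    }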
2022-04-04 10:19:51 +02:00
Philip Reames
7c51669c21 [memcpyopt] Restructure store(load src, dest) form of callslotopt for compile time
The search for the clobbering call is fairly expensive if uses are not optimized at construction.  Defer the clobber walk to the point in the implementation where we need it; there are a bunch of bailouts before that point.  (e.g. If the source pointer is not an alloca, we can't do callslotopt.)

On a test case which involves a bunch of copies from argument pointers, this switches memcpyopt from > 1/2 second to < 10ms.
2022-04-03 20:16:20 -07:00
Philip Reames
33deaa13b8 [memcpyopt] Common code into performCallSlotOptzn [NFC]
We have the same code repeated in both callers, sink it into callee.

The motivation here isn't just code style; we can also defer the relatively expensive aliasing checks until the cheap structural preconditions have been validated.  (e.g. Don't bother with aliasing checks if src is not an alloca.)  This helps compile time significantly.
2022-03-28 20:10:13 -07:00
serge-sans-paille
59630917d6 Cleanup includes: Transform/Scalar
Estimated impact on preprocessor output lines:
before: 1062981579
after:  1062494547

Discourse thread: https://discourse.llvm.org/t/include-what-you-use-include-cleanup
Differential Revision: https://reviews.llvm.org/D120817
2022-03-03 07:56:34 +01:00
Florian Hahn
7662d1687b [MemCpyOpt] Check all accesses for MemoryUses in writtenBetween.
Currently writtenBetween can miss clobbers of Loc between End and Start,
if End is a MemoryUse.

To guarantee we see all write clobbers of Loc between Start and End
for MemoryUses, restrict to Start and End being in the same block
and check all accesses between them.

This fixes 2 mis-compiles illustrated in
llvm/test/Transforms/MemCpyOpt/memcpy-byval-forwarding-clobbers.ll

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D119929
2022-02-21 16:54:30 +00:00
Nikita Popov
6b69985da4 [MemCpyOpt] Use helper for unwind check
This extends support to byval arguments. It could be further
extended to handle the case of non-captured noalias returns.
2022-01-26 12:43:31 +01:00
Nikita Popov
0d20407d1a Reapply [MemCpyOpt] Look through pointer casts when checking capture
This is a recommit of the patch without changes. The reason for
the revert has been addressed in D117679.

-----

The user scanning loop above looks through pointer casts, so we
also need to strip pointer casts in the capture check. Previously
the source was incorrectly considered not captured if a bitcast
was passed to the call.
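
A sketch of the corrected check (helper is hypothetical):

    #include "llvm/IR/InstrTypes.h"
    using namespace llvm;

    // Compare call arguments against the stripped source pointer, so a
    // bitcast of the source is still recognized as the source itself.
    static bool callUsesSource(const CallBase *CB, const Value *Src) {
      Src = Src->stripPointerCasts();
      for (const Use &U : CB->args())
        if (U->stripPointerCasts() == Src)
          return true;
      return false;
    }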
2022-01-20 09:30:21 +01:00
Nikita Popov
655a7024db Reapply [MemCpyOpt] Make capture check during call slot optimization more precise
This is a recommit of the patch without changes. The reason for
the revert has been addressed in D117679.

-----

Call slot optimization is currently supposed to be prevented if
the call can capture the source pointer. Due to an implementation
bug, this check currently doesn't trigger if a bitcast of the source
pointer is passed instead. I'm somewhat afraid of the fallout of
fixing this bug (due to heavy reliance on call slot optimization
in rust), so I'd like to strengthen the capture reasoning a bit first.

In particular, I believe that the capture is fine as long as a)
the call itself cannot depend on the pointer identity, because
neither dest has been captured before/at nor src before the
call and b) there is no potential use of the captured pointer
before the lifetime of the source alloca ends, either due to
lifetime.end or a return from a function. At that point the
potentially captured pointer becomes dangling.

Differential Revision: https://reviews.llvm.org/D115615
2022-01-20 09:30:20 +01:00
Nikita Popov
d7bff2e9d2 [MemCpyOpt] Fix metadata merging during call slot optimization
Call slot optimization currently merges the metadata between the
call and the load. However, we also need to merge in the metadata
of the store.

Part of the reason why we might have gotten away with this
previously is that usually the load and the store are the same
instruction (a memcpy); the issue can only arise if call slot
optimization occurs on an actual load/store pair.

This addresses the issue reported in
https://reviews.llvm.org/D115615#3251386.

Differential Revision: https://reviews.llvm.org/D117679
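
Illustrative sketch of the idea (the exact merging helper used by the
patch may differ; combineMetadataForCSE is one existing utility for
intersecting metadata):

    #include "llvm/Transforms/Utils/Local.h"
    using namespace llvm;

    // When the call's result replaces a load/store pair, merge metadata
    // from both the load and the store into the surviving instruction.
    static void mergeCallSlotMetadata(Instruction *Call, Instruction *Load,
                                      Instruction *Store) {
      combineMetadataForCSE(Call, Load, /*DoesKMove=*/true);
      combineMetadataForCSE(Call, Store, /*DoesKMove=*/true);
    }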
2022-01-20 09:25:13 +01:00
Nikita Popov
4dc4815f56 [MemCpyOpt] Add some debug output to call slot optimization (NFC) 2022-01-19 15:51:10 +01:00
Hans Wennborg
53a51acc36 Revert "[MemCpyOpt] Make capture check during call slot optimization more precise"
This caused a miscompile due to call slot optimization replacing a call
argument without considering the call's !noalias metadata; see the
discussion on the code review.

> Call slot optimization is currently supposed to be prevented if
> the call can capture the source pointer. Due to an implementation
> bug, this check currently doesn't trigger if a bitcast of the source
> pointer is passed instead. I'm somewhat afraid of the fallout of
> fixing this bug (due to heavy reliance on call slot optimization
> in rust), so I'd like to strengthen the capture reasoning a bit first.
>
> In particular, I believe that the capture is fine as long as a)
> the call itself cannot depend on the pointer identity, because
> neither dest has been captured before/at nor src before the
> call and b) there is no potential use of the captured pointer
> before the lifetime of the source alloca ends, either due to
> lifetime.end or a return from a function. At that point the
> potentially captured pointer becomes dangling.
>
> Differential Revision: https://reviews.llvm.org/D115615

Also reverting the dependent commit:

> [MemCpyOpt] Look through pointer casts when checking capture
>
> The user scanning loop above looks through pointer casts, so we
> also need to strip pointer casts in the capture check. Previously
> the source was incorrectly considered not captured if a bitcast
> was passed to the call.

This reverts commit 487a34ed9d7d24a7b1fb388c8856c784a459b22b
and 00e6869463ae6023d0d48f30de8511d6d748b14f.
2022-01-18 17:41:49 +01:00
Simon Pilgrim
5bbcff6181 [MemCpyOptimizer] hasUndefContents - only look for underlying object if we've found an alloca
Provides an early-out if we fail to find an AllocaInst, and avoids a static analyzer warning about null dereferencing.
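
The early-out amounts to something like this sketch:

    #include "llvm/Analysis/ValueTracking.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // Only proceed when the underlying object is actually an alloca;
    // this also guarantees a non-null pointer for the code downstream.
    static const AllocaInst *underlyingAlloca(const Value *V) {
      return dyn_cast<AllocaInst>(getUnderlyingObject(V));
    }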
2022-01-06 15:15:03 +00:00
Simon Pilgrim
8399fa673b [MemCpyOptimizer] Use auto* for cast<> results (style). NFC. 2022-01-06 15:15:03 +00:00
Nikita Popov
00e6869463 [MemCpyOpt] Look through pointer casts when checking capture
The user scanning loop above looks through pointer casts, so we
also need to strip pointer casts in the capture check. Previously
the source was incorrectly considered not captured if a bitcast
was passed to the call.
2022-01-05 09:50:33 +01:00
Nikita Popov
487a34ed9d [MemCpyOpt] Make capture check during call slot optimization more precise
Call slot optimization is currently supposed to be prevented if
the call can capture the source pointer. Due to an implementation
bug, this check currently doesn't trigger if a bitcast of the source
pointer is passed instead. I'm somewhat afraid of the fallout of
fixing this bug (due to heavy reliance on call slot optimization
in rust), so I'd like to strengthen the capture reasoning a bit first.

In particular, I believe that the capture is fine as long as a)
the call itself cannot depend on the pointer identity, because
neither dest has been captured before/at nor src before the
call and b) there is no potential use of the captured pointer
before the lifetime of the source alloca ends, either due to
lifetime.end or a return from a function. At that point the
potentially captured pointer becomes dangling.

Differential Revision: https://reviews.llvm.org/D115615
2022-01-05 09:39:25 +01:00
Fraser Cormack
7fb66d4035 [MemCpyOpt] Fix a variety of scalable-type crashes
This patch fixes a variety of crashes resulting from the `MemCpyOptPass`
casting `TypeSize` to a constant integer, whether implicitly or
explicitly.

Since `MemsetRanges` requires a constant size to work, all but one
of the fixes in this patch simply involve skipping the various
optimizations for scalable types as cleanly as possible.

The optimization of `byval` parameters, however, has been updated to
work on scalable types in theory. In practice, this optimization is only
valid when the length of the `memcpy` is known to be larger than the
scalable type size, which is currently never the case. This could
perhaps be done in the future using the `vscale_range` attribute.

Some implicit casts have been left as they were, on the understanding
that they only occur on aggregate types. These should never be
scalably-sized.

Reviewed By: nikic, tra

Differential Revision: https://reviews.llvm.org/D109329
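
The recurring guard looks roughly like this (helper name illustrative):

    #include <optional>
    #include "llvm/IR/DataLayout.h"
    using namespace llvm;

    // MemsetRanges needs a compile-time-constant size, so bail out on
    // scalable types instead of implicitly truncating the TypeSize.
    static std::optional<uint64_t> fixedStoreSize(const DataLayout &DL,
                                                  Type *Ty) {
      TypeSize TS = DL.getTypeStoreSize(Ty);
      if (TS.isScalable())
        return std::nullopt;
      return TS.getFixedValue();
    }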
2021-09-08 11:21:36 +01:00
Artem Belevich
30dfd3449e [MemCpyOpt] Allow specifying --enable-memcpyopt-without-libcalls more than once
so we can override it via clang's CLI if necessary.
2021-08-30 13:55:55 -07:00
Nikita Popov
17db125b48 [MemCpyOpt] Optimize MemoryDef insertion
When converting a store into a memset, we currently insert the new
MemoryDef after the store MemoryDef, which requires all uses to be
renamed to the new def using a whole block scan. Instead, we can
insert the new MemoryDef before the store and not rename uses,
because we know that the location is immediately overwritten, so
all uses should still refer to the old MemoryDef. Those uses will
get renamed when the old MemoryDef is actually dropped, which is
efficient.

I expect something similar can be done for some of the other MSSA
updates in MemCpyOpt. This is an alternative to D107513, at least
for this particular case.

Differential Revision: https://reviews.llvm.org/D107702
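
A sketch of the updated insertion (names illustrative; mirrors the
MemorySSAUpdater API):

    #include "llvm/Analysis/MemorySSAUpdater.h"
    using namespace llvm;

    // Insert the memset's MemoryDef *before* the store's def: existing
    // uses keep referring to the old def and are renamed for free when
    // the store's def is eventually removed.
    static void addMemSetAccess(MemorySSAUpdater &MSSAU, Instruction *MemSet,
                                MemoryUseOrDef *StoreDef) {
      MemoryAccess *NewDef = MSSAU.createMemoryAccessBefore(
          MemSet, StoreDef->getDefiningAccess(), StoreDef);
      MSSAU.insertDef(cast<MemoryDef>(NewDef), /*RenameUses=*/false);
    }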
2021-08-10 21:28:29 +02:00
Nikita Popov
88003cea1c [MemCpyOpt] Remove MemDepAnalysis-based implementation
The MemorySSA-based implementation has been enabled for a few months
(since D94376). This patch drops the old MDA-based implementation
entirely.

I've kept this to only the basic cleanup of dropping various
conditions -- the code could be further cleaned up now that there
is only one implementation.

Differential Revision: https://reviews.llvm.org/D102113
2021-08-07 22:35:44 +02:00
Artem Belevich
6a9cf21f5a [CUDA, MemCpyOpt] Add a flag to force-enable memcpyopt and use it for CUDA.
The attempt to enable MemCpyOpt unconditionally in D104801 uncovered the
fact that some users do not expect LLVM to materialize the `memset`
intrinsic.

While other passes can do that, too, MemCpyOpt triggers it more frequently and
breaks sanitizers and some downstream users.

For now, introduce a flag to force-enable the pass and opt in only CUDA
compilation with the NVPTX back-end.

Differential Revision: https://reviews.llvm.org/D106401
2021-08-06 11:13:52 -07:00
Michael Liao
d1cacd5928 [MemCpyOpt] Teach memcpyopt to handle loads from the constant memory.
- Loads from constant memory (either explicit loads or loads that are
  the source of memory transfer intrinsics) won't alias any stores.

Reviewed By: asbirlea, efriedma

Differential Revision: https://reviews.llvm.org/D107605
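
A sketch of the check enabling this (helper hypothetical):

    #include "llvm/Analysis/AliasAnalysis.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    // A load whose location is in constant memory cannot be clobbered by
    // any store, so the scan for intervening writes may skip it.
    static bool loadIsInvariant(AAResults &AA, const LoadInst *LI) {
      return AA.pointsToConstantMemory(MemoryLocation::get(LI));
    }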
2021-08-06 12:43:52 -04:00
Nikita Popov
bb15861e14 [MemCpyOpt] Relax libcall checks
Rather than blocking the whole MemCpyOpt pass if the libcalls are
not available, only disable creation of new memset/memcpy intrinsics
where only load/stores were used previously. This only affects the
store merging and load-store conversion optimization. Other
optimizations are derived from existing intrinsics, which are
well-defined in the absence of libcalls -- not having the libcalls
just means that call simplification won't convert them to intrinsics.

This is a weaker variation of D104801, which dropped these checks
entirely. Ideally we would not couple emission of intrinsics to
libcall availability at all, but as the intrinsics may be legalized
to libcalls we need to be a bit careful right now.

Differential Revision: https://reviews.llvm.org/D106769
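
The gating reduces to a check like this sketch (helper hypothetical):

    #include "llvm/Analysis/TargetLibraryInfo.h"
    using namespace llvm;

    // Only paths that create *new* intrinsics need the libcall, since a
    // new intrinsic may later be legalized to a memset call.
    static bool mayCreateMemSet(const TargetLibraryInfo &TLI) {
      return TLI.has(LibFunc_memset);
    }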
2021-08-04 21:17:51 +02:00
Artem Belevich
1a43ee65d1 Revert "[MemCpyOpt] Enable memcpy optimizations unconditionally."
This reverts commit 2c98298a7559dfe4a264ef1adaad0921526768cc which breaks
sanitizers.
2021-07-19 14:27:41 -07:00
Artem Belevich
2c98298a75 [MemCpyOpt] Enable memcpy optimizations unconditionally.
The patch does not depend on the availability of the library functions for
memcpy/memset as it operates on LLVM intrinsics.  The optimizations are useful
on the targets that have these functions disabled (e.g. NVPTX & AMDGPU).

Differential Revision: https://reviews.llvm.org/D104801
2021-07-19 11:58:02 -07:00
Arthur Eubanks
ab5693aa4a [OpaquePtr] Use byval type more 2021-07-13 09:34:34 -07:00