…210)"
This reverts commit 9a14b1d254a43dc0d4445c3ffa3d393bca007ba3.
Revert "RuntimeLibcalls: Return StringRef for libcall names (#153209)"
This reverts commit cb1228fbd535b8f9fe78505a15292b0ba23b17de.
Revert "TableGen: Emit statically generated hash table for runtime
libcalls (#150192)"
This reverts commit 769a9058c8d04fc920994f6a5bbb03c8a4fbcd05.
Reverted three changes because of a CMake error while building llvm-nm
as reported in the following PR:
https://github.com/llvm/llvm-project/pull/150192#issuecomment-3192223073
Stop emitting these calls by name in PreISelIntrinsicLowering. This
is still kind of a hack. We should be going through the abstract
RTLIB::Libcall, and then checking if the call is really supported in
this module. Do this as a placeholder until RuntimeLibcalls is a
module analysis.
As Constants are already uniquified, we can use a map to keep track of
whether a GlobalVariable was produced for a given Constant or not.
Repeated globals with the same value were one of the codegen differences
noted in #126736. This patch removes that diff, producing cleaner
output.
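A minimal sketch of that caching approach (the helper name and map
placement are illustrative, not the actual patch):

```cpp
#include "llvm/ADT/DenseMap.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// Hypothetical helper: because Constants are uniqued, a pointer-keyed map
// is enough to hand back the same GlobalVariable for a repeated value.
static GlobalVariable *
getOrCreatePatternGlobal(Module &M, Constant *Pattern,
                         DenseMap<Constant *, GlobalVariable *> &Cache) {
  GlobalVariable *&GV = Cache[Pattern];
  if (!GV)
    GV = new GlobalVariable(M, Pattern->getType(), /*isConstant=*/true,
                            GlobalValue::PrivateLinkage, Pattern,
                            "memset.pattern");
  return GV;
}
```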
Instead of reporting ___memmove as an implementation of memcpy,
make it unavailable and let the lowering logic consider memmove as
a fallback path.
This avoids a special-case 1:N mapping for libcall implementations.
This adds basic support for objc_claimAutoreleasedReturnValue, which is
mostly equivalent to objc_retainAutoreleasedReturnValue, with the
difference that it doesn't require the marker nop to be emitted between
it and the call it was attached to.
To achieve that, this also teaches the AArch64 attachedcall bundle
lowering to pick whether the marker should be emitted or not based on
whether the attachedcall target is claimARV or retainARV.
Co-authored-by: Ahmed Bougacha <ahmed@bougacha.org>
This patch cleans up the handling of the count parameter in general,
though it was initially motivated by a compiler crash on a memset.pattern
with a narrow count, caused by mismatched operand types being passed to
CreateMul when converting the count to a number of bytes.
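A rough sketch of the type cleanup involved, assuming the count is first
brought to the index type before the multiply; all names here are
illustrative:

```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Illustrative only: a narrow (or overly wide) element count is first
// zero-extended/truncated to the index type, so CreateMul is given two
// operands of the same type when computing the byte count.
static Value *countToBytes(IRBuilderBase &B, Value *Count, Type *IdxTy,
                           uint64_t ElemSizeInBytes) {
  Value *WideCount = B.CreateZExtOrTrunc(Count, IdxTy);
  return B.CreateMul(WideCount, ConstantInt::get(IdxTy, ElemSizeInBytes));
}
```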
The logic used to name globals means there is some minor renaming churn
in the output of
test/Transforms/PreISelIntrinsicLowering/X86/memset-pattern.ll that is
irrelevant to the newly added tests (which would previously crash).
This is to enable a transition of LoopIdiomRecognize to selecting the
llvm.experimental.memset.pattern intrinsic as requested in #118632 (as
opposed to supporting selection of the libcall or the intrinsic). As
such, although it _is_ a TODO to add cost-based logic for choosing between
lowering to the libcall (when available) and expanding directly, omitting
that logic is helpful at this stage in order to minimise any unexpected
codegen changes in this transition.
Relands 7ff3a9acd84654c9ec2939f45ba27f162ae7fbc3 after regenerating the
test case.
Supersedes the draft PR #94992, taking a different approach following
feedback:
* Lower in PreISelIntrinsicLowering
* Don't require that the number of bytes to set is a compile-time
constant
* Define llvm.memset_pattern rather than llvm.memset_pattern.inline
As discussed in the [RFC
thread](https://discourse.llvm.org/t/rfc-introducing-an-llvm-memset-pattern-inline-intrinsic/79496),
the intent is that the intrinsic will be lowered to loops, a sequence of
stores, or libcalls depending on the expected cost and availability of
libcalls on the target. Right now, there's just a single lowering path
that aims to handle all cases. My intent would be to follow up with
additional PRs that add additional optimisations when possible (e.g.
when libcalls are available, when arguments are known to be constant
etc).
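As a rough illustration of that single lowering path, here is a hedged
sketch that expands a pattern fill into a loop of plain stores; all names
are illustrative, and alignment as well as the zero-count case are glossed
over:

```cpp
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Hedged sketch, not the actual pass code: store `Pattern` `Count` times,
// stepping the pointer by the pattern's size each iteration. Assumes
// Count != 0; the real lowering also guards that case and sets alignment.
static void expandPatternSetAsLoop(Instruction *InsertBefore, Value *Dst,
                                   Value *Pattern, Value *Count) {
  BasicBlock *Pre = InsertBefore->getParent();
  BasicBlock *Post = Pre->splitBasicBlock(InsertBefore, "pattern.done");
  Function *F = Pre->getParent();

  BasicBlock *Loop =
      BasicBlock::Create(F->getContext(), "pattern.loop", F, Post);
  // splitBasicBlock added an unconditional branch Pre -> Post; retarget it.
  Pre->getTerminator()->setSuccessor(0, Loop);

  IRBuilder<> B(Loop);
  PHINode *Idx = B.CreatePHI(Count->getType(), 2, "idx");
  Idx->addIncoming(ConstantInt::get(Count->getType(), 0), Pre);
  Value *Ptr = B.CreateGEP(Pattern->getType(), Dst, Idx);
  B.CreateStore(Pattern, Ptr);
  Value *Next = B.CreateAdd(Idx, ConstantInt::get(Count->getType(), 1));
  Idx->addIncoming(Next, Loop);
  B.CreateCondBr(B.CreateICmpULT(Next, Count), Loop, Post);
}
```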
This reverts commit 7ff3a9acd84654c9ec2939f45ba27f162ae7fbc3.
Recent scheduling changes mean tests need to be re-generated. Reverting
to green while I do that.
Supersedes the draft PR #94992, taking a different approach following
feedback:
* Lower in PreISelIntrinsicLowering
* Don't require that the number of bytes to set is a compile-time
constant
* Define llvm.memset_pattern rather than llvm.memset_pattern.inline
As discussed in the [RFC
thread](https://discourse.llvm.org/t/rfc-introducing-an-llvm-memset-pattern-inline-intrinsic/79496),
the intent is that the intrinsic will be lowered to loops, a sequence of
stores, or libcalls depending on the expected cost and availability of
libcalls on the target. Right now, there's just a single lowering path
that aims to handle all cases. My intent would be to follow up with
additional PRs that add additional optimisations when possible (e.g.
when libcalls are available, when arguments are known to be constant
etc).
This is used by PreISelIntrinsicLowering. With this change,
PreISelIntrinsicLowering does not have to assume that there were changes
just because we encountered a VP intrinsic.
lowerConstantIntrinsics does an RPO traversal, which doesn't reach dead
blocks. Remove the assertion that all intrinsics are lowered, because
some intrinsics might remain.
expandVectorPredication may change code even when the intrinsic itself
remains in the IR, so report a change whenever such an intrinsic is
encountered.
Another follow-up fix for #101652 to fix expensive-checks-only failure.
Currently, the LowerConstantIntrinsics pass does an RPO traversal of
every function... only to find that many functions don't have constant
intrinsics (is.constant, objectsize). In the CodeGen pipeline, there is
already a pre-isel intrinsic lowering pass, which iterates over
intrinsic declarations and lowers all users. Call
lowerConstantIntrinsics from this pass to avoid the extra iteration over
the entire IR and the RPO traversal.
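A hedged sketch of the declaration-driven iteration this relies on; the
per-call lowering helper is hypothetical:

```cpp
#include "llvm/ADT/STLExtras.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"

using namespace llvm;

// Hypothetical per-call lowering helper, standing in for the real
// is.constant / objectsize folding logic.
bool lowerOneConstantIntrinsic(CallInst *CI);

// Walk intrinsic *declarations* and visit only their users, so functions
// that never call is.constant or objectsize are never traversed at all.
static bool lowerConstantIntrinsicUsers(Module &M) {
  bool Changed = false;
  for (Function &F : M) {
    if (!F.isDeclaration())
      continue;
    Intrinsic::ID ID = F.getIntrinsicID();
    if (ID != Intrinsic::is_constant && ID != Intrinsic::objectsize)
      continue;
    for (User *U : make_early_inc_range(F.users()))
      if (auto *CI = dyn_cast<CallInst>(U))
        Changed |= lowerOneConstantIntrinsic(CI);
  }
  return Changed;
}
```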
This reverts commit ac4b6b662630cd4d3bf6929f2b39ea203c0054a1.
A test change was missing for
mlir/test/Target/LLVMIR/llvmir-intrinsics.mlir in the initial commit.
Following on from the discussion in
https://discourse.llvm.org/t/rfc-introducing-an-llvm-memset-pattern-inline-intrinsic/79496
and the equivalent change for llvm.memset.inline (#95397), this removes
the requirement that the length of llvm.memcpy.inline is constant.
PreISelIntrinsicLowering will expand llvm.memcpy.inline with
non-constant lengths, while the codegen path for constant lengths is
left unaltered.
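A hedged sketch of that dispatch, reusing the existing LowerMemIntrinsics
loop-expansion utility; the surrounding helper is illustrative:

```cpp
#include "llvm/Analysis/TargetTransformInfo.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/Transforms/Utils/LowerMemIntrinsics.h"

using namespace llvm;

// Constant lengths keep going through the unchanged codegen path; a
// non-constant length is expanded to an explicit copy loop before ISel.
static bool maybeExpandMemCpyInline(MemCpyInlineInst *MCI,
                                    const TargetTransformInfo &TTI) {
  if (isa<ConstantInt>(MCI->getLength()))
    return false;               // leave constant-length cases to codegen
  expandMemCpyAsLoop(MCI, TTI); // loop expansion from LowerMemIntrinsics
  MCI->eraseFromParent();
  return true;
}
```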
Uses the new InsertPosition class (added in #94226) to simplify some of
the IRBuilder interface, and removes the need to pass a BasicBlock
alongside a BasicBlock::iterator, using the fact that we can now get the
parent basic block from the iterator even if it points to the sentinel.
This patch removes the BasicBlock argument from each constructor or call
to setInsertPoint.
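A hedged before/after of the call-site change (the exact set of IRBuilder
overloads should be checked against IRBuilder.h):

```cpp
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

void positionBuilder(IRBuilderBase &Builder, BasicBlock *BB,
                     BasicBlock::iterator It) {
  // Before: the parent block had to accompany the iterator.
  //   Builder.SetInsertPoint(BB, It);
  // After: the iterator alone suffices, even when it is the sentinel
  // end(), because the parent block is recoverable from the iterator.
  Builder.SetInsertPoint(It);
}
```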
This has no functional effect, but later on as we look to remove the
`Instruction *InsertBefore` argument from instruction-creation
(discussed
[here](https://discourse.llvm.org/t/psa-instruction-constructors-changing-to-iterator-only-insertion/77845)),
this will simplify the process by allowing us to deprecate the
InsertPosition constructor directly and catch all the cases where we use
instructions rather than iterators.
We should query the subtarget of the calling function, not of the
intrinsic.
This probably makes no functional difference (as libcalls are
unlikely to vary across subtargets), but fixes minor compile-time
regressions from unnecessary subtarget instantiations.
Followup to D157567.
Differential Revision: https://reviews.llvm.org/D157848
This is a quick fix for an assert when the source and dest have
different address spaces. The pointer compare needs to have matching
types, but we can't generically introduce addrspacecast and we don't
know if the address spaces alias.
Expand large or unknown size memory intrinsics into loops in the
default lowering pipeline if the target doesn't have the corresponding
libfunc. Previously AMDGPU had a custom pass which existed to call the
expansion utilities.
With a default no-libcall option, we can remove the libfunc checks in
LoopIdiomRecognize for these, which never made any sense. This also
provides a path to lifting the immarg restriction on
llvm.memcpy.inline.
There seems to be a bug where TLI reports functions as available if
you use -march and not -mtriple.
WinEHPrepare marks any function call from EH funclets as unreachable if it
is neither a nounwind intrinsic nor carries a proper funclet bundle
operand. This affects ARC intrinsics on Windows, because they are lowered
to regular function calls in the PreISelIntrinsicLowering pass. It caused
silent binary truncations and crashes during unwinding with the GNUstep
ObjC runtime: https://github.com/gnustep/libobjc2/issues/222
This patch adds a new function
`llvm::IntrinsicInst::mayLowerToFunctionCall()` that aims to collect all
affected intrinsic IDs.
* Clang CodeGen uses it to determine whether or not it must emit a funclet
  bundle operand (see the sketch after this list).
* PreISelIntrinsicLowering asserts that the function returns true for all
  ObjC runtime calls it lowers.
* LLVM uses it to determine whether or not a funclet bundle operand must
  be propagated to inlined call sites.
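A hedged sketch of the kind of check the first bullet describes; the
helper is illustrative, and the exact mayLowerToFunctionCall signature
(instance method vs. static helper taking an Intrinsic::ID) should be
confirmed against IntrinsicInst.h:

```cpp
#include "llvm/IR/InstrTypes.h"
#include "llvm/IR/IntrinsicInst.h"

using namespace llvm;

// Illustrative helper, not Clang's actual code: a call needs a funclet
// bundle operand if it may really become a function call at codegen time.
static bool needsFuncletBundle(const CallBase &CB) {
  if (const auto *II = dyn_cast<IntrinsicInst>(&CB))
    // Intrinsics only need the bundle if they may lower to a real call
    // (assumed static-helper form; see IntrinsicInst.h for the signature).
    return IntrinsicInst::mayLowerToFunctionCall(II->getIntrinsicID());
  return true; // ordinary calls from EH funclets always need it
}
```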
Reviewed By: theraven
Differential Revision: https://reviews.llvm.org/D128190
This reverts commit 7f230feeeac8a67b335f52bd2e900a05c6098f20.
Breaks CodeGenCUDA/link-device-bitcode.cu in check-clang,
and many LLVM tests, see comments on https://reviews.llvm.org/D121169
This matches the actual runtime function more closely.
I considered also renaming both RetainRV/UnsafeClaimRV to end with
"ARV", for AutoreleasedReturnValue, but there's less potential
for confusion there.
operand bundle "clang.arc.attachedcall" with ObjC runtime functions
The existing code only handles the case where the intrinsic being
rewritten is used as the called function pointer of a call/invoke.