Also starts pruning out these calls if the exception model is
forced to none.
I worked backwards from the logic in addPassesToHandleExceptions
and the pass content. There appears to be some tolerance
for mixing and matching exception modes inside of a single module.
As far as I can tell _Unwind_CallPersonality is only relevant for
wasm, so just add it there.
As usual, the arm64ec case makes things difficult and is
missing test coverage. The set of calls needs to be in list form to
use foreach for the duplication, but a dag is more convenient in
every other context. You cannot use foreach over a dag, and I
haven't found a way to flatten a dag into a list.
This removes the last manual setLibcallImpl call in generic code.
There are several libcall choices for MUL_I64 which depend on the
subtarget, but this is the base case. The manual custom ISelLowering
is still overriding the decision until we have a way to control
lowering choices, but we can still get the calling convention
set for now.
…210)"
This reverts commit 9a14b1d254a43dc0d4445c3ffa3d393bca007ba3.
Revert "RuntimeLibcalls: Return StringRef for libcall names (#153209)"
This reverts commit cb1228fbd535b8f9fe78505a15292b0ba23b17de.
Revert "TableGen: Emit statically generated hash table for runtime
libcalls (#150192)"
This reverts commit 769a9058c8d04fc920994f6a5bbb03c8a4fbcd05.
Reverted three changes because of a CMake error while building llvm-nm
as reported in the following PR:
https://github.com/llvm/llvm-project/pull/150192#issuecomment-3192223073
a96121089b9c94e08c6632f91f2dffc73c0ffa28 reverted a change
to use a binary search on the string name table because it
was too slow. This replaces it with a static string hash
table based on the known set of libcall names. Microbenchmarking
shows this is similarly fast to using DenseMap. It's possibly
slightly slower than using StringSet, though these aren't an
exact comparison. This also saves the one-time construction
of the map, so it could be better in practice.
This search isn't a simple set check, since it has to find the
range of possible matches with the same name. There's also
an additional check for whether the current target supports
the name. The runtime constructed set doesn't require this,
since it only adds the symbols live for the target.
The algorithm follows the approach described in this post:
http://0x80.pl/notesen/2023-04-30-lookup-in-strings.html
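Roughly, the generated lookup behaves like the following standalone
sketch; the names, the trivial length-based bucketing, and the table
contents are illustrative stand-ins, not the actual tablegen output:
```
#include <cstddef>
#include <string_view>
#include <vector>

enum class LibcallImpl { sqrtf_impl, memcpy_impl, arm64ec_memcpy };

struct Entry {
  std::string_view Name;
  LibcallImpl Impl;
};

// Hypothetical static table: entries are bucketed (trivially by name length
// here) and entries sharing a name are adjacent, so one probe yields the
// whole range of candidate implementations.
constexpr Entry Buckets[][2] = {
    {{"sqrtf", LibcallImpl::sqrtf_impl}, {"", LibcallImpl::sqrtf_impl}},
    {{"memcpy", LibcallImpl::memcpy_impl},
     {"memcpy", LibcallImpl::arm64ec_memcpy}},
};
constexpr std::size_t MinLen = 5, MaxLen = 6;

// Stub for the extra per-target availability check; the runtime-constructed
// set never needed this since it only inserted symbols live for the target.
static bool isAvailableForCurrentTarget(LibcallImpl) { return true; }

std::vector<LibcallImpl> lookupLibcallImpls(std::string_view Name) {
  std::vector<LibcallImpl> Impls;
  if (Name.size() < MinLen || Name.size() > MaxLen)
    return Impls;
  for (const Entry &E : Buckets[Name.size() - MinLen]) {
    if (E.Name != Name)
      continue; // unused slot or a different name sharing the bucket
    if (isAvailableForCurrentTarget(E.Impl))
      Impls.push_back(E.Impl);
  }
  return Impls;
}
```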
I'm also thinking the two special-case global symbols should
just be added to RuntimeLibcalls. There are also other global
references emitted in the backend that aren't tracked; we should
probably just use this as a centralized database for all
compiler-selected symbols.
I'm assuming this was the set of targets that were relevant
for sjlj handling. Just take the raw exception setting instead,
and assume it makes sense for the target.
These are already the default calls set for these conversions, so
they should not require explicit setting. The non-default cases are
currently overridden in ARMISelLowering. Just delete this until
the list of calls and lowering decisions are separated.
This was added back in 6402ad27c01c9503a12d41d7e40646cf0d1f919f. It
does not appear to be relevant for AArch64, where these calls never
seem to be used, nor for x86, where the default calls seem to end up
being used anyway.
Hack in the default setting so it's consistently generated like
the other cases. Maintain a list of targets where this applies.
The alternative would require new infrastructure to sort the system
library initialization in some way.
I wanted the unhandled target case to be treated as a fatal
error, but it turns out there's a hack in IRSymtab using
RuntimeLibcalls, which will fail out in many tests that
do not have a triple set. Many of the failures are simply
running llvm-as with no triple, which probably should not
depend on knowing an accurate set of calls.
Also replace the current static DenseMap of preserved symbol
names in the Symtab hack with this. That map was broken statefulness
across compiles, so this at least fixes that. However, this is
still broken: llvm-as shouldn't really depend on the triple.
The compiler should not introduce calls to arbitrary strings
that aren't defined in RuntimeLibcalls. Previously OpenBSD was
disabling the default __stack_chk_fail, but there was no record
of the alternative __stack_smash_handler function it emits instead.
This also avoids a random triple check in the pass.
This is getting pretty ugly, but it seems to be the worst of the
cases. Opting out of the base set of calls for the various windows
cases is really awkward, and we need to apply that to the arm cases
as well.
It also may make sense to go back to transposing the architecture
and operating system (i.e. make isOSWindows the SystemLibrary
and then modify based on architecture).
These are identified by misc-include-cleaner. I've filtered out those
that break builds. Also, I'm staying away from llvm-config.h,
config.h, and Compiler.h, which likely cause platform- or
compiler-specific build failures.
This was going out of its way to explicitly mark these as
ARM_AAPCS_VFP. This has been explicitly set since 8b40366b54bd4,
where the commit message states that "sincos" (not sincos_stret)
has a special calling convention. However, that commit also sets
the calling convention for all libcalls to ARM_AAPCS_VFP, and
getEffectiveCallingConv returns the same for CCC anyway in tests
using isWatchABI triples.
The net result of this appears to be a change in behavior when
using -float-abi=soft with isWatchABI, which has no tests, so
I assume this is a theoretical combination.
If I add the following assertion:
```
if (getTargetMachine().getTargetTriple().isWatchABI()) {
  assert(!useSoftFloat());
  assert(getEffectiveCallingConv(CallingConv::C, false) ==
         CallingConv::ARM_AAPCS_VFP);
}
```
Only two tests fail the second condition. They look like copy-paste
accidents: they used v7k triples with linux and only needed a filler
triple. This is a consequence of strangely using the target
architecture in place of the OS ABI check, as was done in
042a6c1fe19caf48af7e287dc8f6fd5fec158093
Hexagon currently has an untested global flag to control fast-math
variants of libcalls. Add the fast variants as explicit libcall
options so this can be a flag-based lowering decision, and implement
it. I have no idea what fast-math flags the hexagon case requires,
so I picked the maximal potentially relevant set of flags, although
this can probably be refined per call. Looking in compiler-rt, I'm
not sure the fast variants are anything more than aliases.
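As a rough sketch of the intended flag-based selection (the names
here are invented and the required flag set is just the guess
described above):
```
// Stand-in for llvm::FastMathFlags; only the bits used below.
struct FastMathFlags {
  bool NoNaNs = false;
  bool NoInfs = false;
  bool ApproxFunc = false;
};

enum class SqrtF32Impl { sqrtf, hexagon_fast2_sqrtf };

// Pick the fast variant only when the lowering flag is enabled and the call
// carries the (maximal, probably over-broad) set of fast-math flags.
SqrtF32Impl selectSqrtF32Impl(const FastMathFlags &FMF,
                              bool EnableFastLibcalls) {
  bool AllowFast = FMF.NoNaNs && FMF.NoInfs && FMF.ApproxFunc;
  return (EnableFastLibcalls && AllowFast) ? SqrtF32Impl::hexagon_fast2_sqrtf
                                           : SqrtF32Impl::sqrtf;
}
```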
This fully consolidates all the calling convention configuration into
RuntimeLibcallInfo. I'm assuming that __aeabi functions have a universal
calling convention and are simply not used on other ABIs. This will
enable splitting of RuntimeLibcallInfo into the ABI and lowering component.
Previously we had a table with an entry for every Libcall giving
the comparison to use against integer 0 if it was a soft
float compare function. This was only relevant for a handful of
opcodes, so it was wasteful. Now that we can distinguish the
abstract libcall for the compare from the concrete implementation,
we can just directly hardcode the comparison against the libcall
impl without this configuration system.
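As a standalone sketch of the idea (the enums are placeholders for
RTLIB::LibcallImpl and ISD::CondCode; the result conventions assumed
here follow the usual libgcc/compiler-rt and AEABI soft-float
comparison semantics):
```
enum class SoftFPCmpImpl { eqdf2, nedf2, ltdf2, ledf2,
                           aeabi_dcmpeq, aeabi_dcmplt };
enum class IntCmpAgainstZero { EQ, NE, LT, LE };

// The comparison of the libcall's integer result against 0 is a fixed
// property of the concrete implementation, so no per-Libcall table is needed.
IntCmpAgainstZero getResultComparison(SoftFPCmpImpl Impl) {
  switch (Impl) {
  case SoftFPCmpImpl::eqdf2:        // __eqdf2 returns 0 when equal
    return IntCmpAgainstZero::EQ;
  case SoftFPCmpImpl::nedf2:        // __nedf2 returns nonzero when unequal
    return IntCmpAgainstZero::NE;
  case SoftFPCmpImpl::ltdf2:        // __ltdf2 returns a value < 0 when less
    return IntCmpAgainstZero::LT;
  case SoftFPCmpImpl::ledf2:        // __ledf2 returns <= 0 when less-or-equal
    return IntCmpAgainstZero::LE;
  case SoftFPCmpImpl::aeabi_dcmpeq: // AEABI comparisons return a 0/1 result,
  case SoftFPCmpImpl::aeabi_dcmplt: // so the test is simply "result != 0"
    return IntCmpAgainstZero::NE;
  }
  return IntCmpAgainstZero::EQ; // unreachable with a fully covered switch
}
```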
As a temporary step configure the calling convention here. This
can't be moved into tablegen until RuntimeLibcallsInfo is split
into a separate lowering component.
Allow associating a non-default CallingConv with a set of library
functions, and applying a default for a SystemLibrary.
I also wanted to be able to apply a default calling conv
to a RuntimeLibcallImpl, but that turned out to be annoying,
so I'm leaving it for later.
Instead of associating the libcall with the RTLIB::Libcall, put it
into a table indexed by the RTLIB::LibcallImpl. The LibcallImpls
should contain all ABI details for a particular implementation, not
the abstract Libcall. In the future the wrappers in terms of the
RTLIB::Libcall should be removed.
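Conceptually the new layout looks something like this sketch (all
values are invented; the real table is generated from the tablegen
definitions):
```
#include <cstdint>

enum class Libcall : std::uint16_t { SQRT_F32, MEMCPY };
enum class LibcallImpl : std::uint16_t {
  Unsupported, sqrtf, memcpy_impl, aeabi_memcpy, NumImpls
};
enum class CallConv : std::uint8_t { C, ARM_AAPCS };

// ABI details are keyed by the concrete implementation, so two impls of the
// same abstract Libcall (memcpy vs. __aeabi_memcpy) can differ.
constexpr CallConv ImplCallingConv[(unsigned)LibcallImpl::NumImpls] = {
    /* Unsupported    */ CallConv::C,
    /* sqrtf          */ CallConv::C,
    /* memcpy         */ CallConv::C,
    /* __aeabi_memcpy */ CallConv::ARM_AAPCS,
};

// Stand-in for the lowering decision that picks an implementation.
LibcallImpl getLibcallImpl(Libcall LC) {
  return LC == Libcall::MEMCPY ? LibcallImpl::aeabi_memcpy
                               : LibcallImpl::sqrtf;
}

// Legacy wrapper in terms of the abstract Libcall: resolve the selected
// implementation first, then consult the per-impl table.
CallConv getLibcallCallingConv(Libcall LC) {
  return ImplCallingConv[(unsigned)getLibcallImpl(LC)];
}
```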
Add a way to define a SystemLibrary for a complete set of
libcalls, subdivided by a predicate based on the triple.
Libraries can be defined using dag set operations, and the
prior default set can be subtracted from and added to (though
I think eventually all targets should move to explicit opt-ins.
We're still doing things like reporting ppcf128 libcalls as
available by default on all targets).
Start migrating some of the easier targets to only use the new
system. Targets that don't define a SystemLibrary are still
manually mutating a table set to the old defaults.
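Conceptually, a defined SystemLibrary boils down to something like
the sketch below (types and names invented for illustration); the
dag set operations only determine which impls end up in each set:
```
#include <functional>
#include <set>
#include <vector>

enum class LibcallImpl { memcpy_impl, aeabi_memcpy, sqrtf_impl };

struct TripleInfo { // stand-in for llvm::Triple
  bool IsARM = false;
  bool IsOSWindows = false;
};

struct SystemLibrary {
  std::function<bool(const TripleInfo &)> Applies; // triple predicate
  std::set<LibcallImpl> Impls;                     // result of the dag set ops
};

// Targets with a SystemLibrary get their available-call set built this way;
// targets without one still mutate a table preloaded with the old defaults.
void initAvailableLibcalls(const TripleInfo &TT,
                           const std::vector<SystemLibrary> &Libs,
                           std::set<LibcallImpl> &Available) {
  for (const SystemLibrary &SL : Libs)
    if (SL.Applies(TT))
      Available.insert(SL.Impls.begin(), SL.Impls.end());
}
```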
These were bypassing the ordinary libcall emission mechanism. Make sure
we have entries in RuntimeLibcalls, which should include all possible
calls the compiler could emit.
Fixes not emitting the # prefix in the arm64ec case.
Work towards separating the ABI existence of libcalls vs. the
lowering selection. Set libcall selection through enums, rather
than through raw string names.
Replace RuntimeLibcalls.def with a tablegenerated version. This
is in preparation for splitting RuntimeLibcalls into two components.
For now match the existing functionality.
This fixes a TODO and avoids a special case. It also required
hacking up a few cases to avoid asserting in codegen; it's not
confidence-inspiring that there is only one codegen test using
a bridgeos triple, and it's specifically for the exp10 libcall
names.
This also changes the behavior, dropping an extra leading _ from the
emitted name so that it matches the other apple outputs. I have no
idea if this is right or not. IMO it's someone from apple's problem
to fix it and add appropriate test coverage, or we can rip all
references to BridgeOS out of upstream.
Instead of reporting ___memmove as an implementation of memcpy,
make it unavailable and let the lowering logic consider memmove as
a fallback path.
This avoids a special-case 1:N mapping for libcall implementations.
We need the full set of ABI options to accurately compute
the full set of libcalls. This partially resolves missing
information required to compute the set of ARM calls.