590 Commits

Author SHA1 Message Date
Noah Goldstein
2da4960f20 [Inliner] Also propagate noundef and align ret attributes during inlining
Both of these can potentially be lost otherwise.
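For illustration, a minimal IR sketch (hypothetical functions `@caller`, `@callee`, `@inner`) of the propagation described above:
```
declare ptr @inner()

define ptr @callee() {
  %r = call ptr @inner()
  ret ptr %r
}

define ptr @caller() {
  ; the callsite being inlined carries noundef/align return attributes
  %p = call noundef align 8 ptr @callee()
  ret ptr %p
}

; after inlining @callee, the attributes are propagated onto the call
; that produces the returned value, so the information is not lost:
;   %r.i = call noundef align 8 ptr @inner()
```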
2023-10-03 16:12:19 -05:00
Noah Goldstein
2d037f5aed [Inliner] Use "best" ret attribute when propagating attributes during inlining
For attributes associated with a value (like `dereferenceable(N)`),
instead of always using the attribute from the to-be-inlined call,
keep the value already present at existing callsites that have the
attribute if that value is higher (it provides more information).
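A minimal IR sketch of the intent (function names and values are illustrative):
```
declare ptr @inner()

define ptr @callee() {
  ; the existing callsite already provides more information
  %r = call dereferenceable(16) ptr @inner()
  ret ptr %r
}

define ptr @caller() {
  %p = call dereferenceable(8) ptr @callee()
  ret ptr %p
}

; after inlining @callee, the inlined call keeps dereferenceable(16)
; rather than having it overwritten with the weaker dereferenceable(8)
```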
2023-10-03 16:12:16 -05:00
Noah Goldstein
2f3b7d33f4 [Inliner] Fix bug when propagating poison generating return attributes
Poison generating return attributes can't be propagated the same as
others, as they can change the behavior of other uses and/or create UB
where it otherwise wouldn't have occurred.

For example:
```
define nonnull ptr @foo() {
    %p = call ptr @bar()
    call void @use(ptr %p)
    ret ptr %p
}
```

If we inline `@foo` and propagate `nonnull` to `@bar`, it could change
the behavior of `@use`: instead of receiving `null`, `@use` will
now be passed `poison`.

This can be even worse in a case like:
```
define nonnull ptr @foo() {
    %p = call noundef ptr @bar()
    ret ptr %p
}
```

Where propagating `nonnull` to `@bar` will cause UB on `null` return
of `@bar` (`noundef` + `poison`) where it previously wouldn't
have occurred.

To fix this, we only propagate poison-generating return attributes if
either 1) the only use of the callsite being propagated to is the
return and that callsite doesn't have `noundef`, or 2) the callsite to
be inlined has `noundef`.

The former case ensures no new UB or `poison` values will be
added. The latter case is already UB if the value is `poison`, so we
can go ahead without worrying about behavior changes.
2023-09-28 17:27:42 -05:00
Jeremy Morse
e54277fa10 [NFC][RemoveDIs] Use iterators over inst-pointers when using IRBuilder
This patch adds a two-argument SetInsertPoint method to IRBuilder that
takes a block/iterator instead of an instruction, and updates many call
sites to use it. The motivating reason for doing this is given here [0]:
we'd like to pass around more information about the position of debug-info
in the iterator object. That necessitates passing iterators around most of
the time.

[0] https://discourse.llvm.org/t/rfc-instruction-api-changes-needed-to-eliminate-debug-intrinsics-from-ir/68939

Differential Revision: https://reviews.llvm.org/D152468
2023-09-11 20:01:19 +01:00
Jeremy Morse
6942c64e81 [NFC][RemoveDIs] Prefer iterator-insertion over instructions
Continuing the patch series to get rid of debug intrinsics [0], instruction
insertion needs to be done with iterators rather than instruction pointers,
so that we can communicate information in the iterator class. This patch
adds an iterator-taking insertBefore method and converts various call sites
to take iterators. These are all sites where such debug-info needs to be
preserved so that a stage2 clang can be built identically; it's likely that
many more will need to be changed in the future.

At this stage, this is just changing the spelling of a few operations,
which will eventually become significant once the debug-info bearing
iterator is used.

[0] https://discourse.llvm.org/t/rfc-instruction-api-changes-needed-to-eliminate-debug-intrinsics-from-ir/68939

Differential Revision: https://reviews.llvm.org/D152537
2023-09-11 11:48:45 +01:00
Anna Thomas
23f08af2be [Inline] Avoid incompatible return attributes on deoptimize
When updating the return type of a deoptimize call during inlining, we
need to drop incompatible return attributes.  This bug was exposed once
we relaxed the constraint on adding the attributes through D156844. With
that change, deoptimize calls (which are not willreturn) start having
return attributes added to them.

Fixes https://github.com/llvm/llvm-project/issues/64804.

Differential Revision: https://reviews.llvm.org/D158286
2023-08-18 12:55:51 -04:00
Sameer Sahasrabuddhe
8dce4c56dd [Inliner] Handle convergence control when inlining a call
When a convergencectrl token is passed to a convergent call, and the called
function in turn calls the entry intrinsic, the intrinsic is now replaced
with the convergencectrl token.
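A minimal IR sketch of that replacement (hypothetical functions, assuming the convergence control intrinsics and the "convergencectrl" bundle tag):
```
declare token @llvm.experimental.convergence.entry() convergent
declare void @work() convergent

define void @callee() convergent {
  %t = call token @llvm.experimental.convergence.entry()
  call void @work() [ "convergencectrl"(token %t) ]
  ret void
}

define void @caller() convergent {
  %outer = call token @llvm.experimental.convergence.entry()
  call void @callee() [ "convergencectrl"(token %outer) ]
  ret void
}

; after inlining @callee, its entry intrinsic is replaced by the token
; passed at the callsite:
;   call void @work() [ "convergencectrl"(token %outer) ]
```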

The spec requires the following check:
  A call from function F to function G can be inlined only if:
  - at least one of F or G does not make any convergent calls, or,
  - both F and G make the same kind of convergent calls: controlled or
    uncontrolled.

But this change does not implement this complete check. A proper implementation
requires a whole new analysis that identifies convergence in every function. For
now, we skip that and just do a cursory check for the entry intrinsic. The
underlying assumption is that in a compiler flow that fully implements
convergence control tokens, there is no mixing of controlled and uncontrolled
convergent operations in the whole program.

This is a reboot of the original change D85606 by
Nicolai Haehnle <nicolai.haehnle@amd.com>.

Reviewed By: arsenm, nhaehnle

Differential Revision: https://reviews.llvm.org/D152431
2023-08-17 09:56:25 +05:30
Noah Goldstein
4d51c6258e [Inliner] Add return attributes to callsites not marked willreturn/nounwind
The actual callsite we are adding to doesn't need to be
`willreturn`/`nounwind`, only ever instructions between the callsite
and the return.
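A minimal IR sketch of a case this enables (hypothetical functions; attributes chosen for illustration):
```
declare ptr @inner()  ; not marked willreturn or nounwind

define ptr @callee() {
  ; nothing between this call and the ret can unwind or diverge, so the
  ; return attributes of the callsite being inlined can still be
  ; propagated here even though @inner itself lacks willreturn/nounwind
  %r = call ptr @inner()
  ret ptr %r
}

define ptr @caller() {
  %p = call noundef nonnull ptr @callee()
  ret ptr %p
}
```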

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D156844
2023-08-16 22:43:04 -05:00
Noah Goldstein
612a7f0b15 [Inliner] Add the callsites called function return attributes to set addable attributes
We can do this by just querying the attribute on the callsite itself. This
is both cleaner code and produces better results.

Differential Revision: https://reviews.llvm.org/D156843
2023-08-16 22:43:04 -05:00
Matt Arsenault
25bc999d1f Intrinsics: Add type overload to stacksave and stackrestore
This allows use with non-0 address space stacks. llvm_ptr_ty should
never be used. This could use some more percolation up through mlir,
but this is enough to fix existing tests.
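A minimal IR sketch of the overloaded form, assuming an address space 5 stack (as on AMDGPU):
```
declare ptr addrspace(5) @llvm.stacksave.p5()
declare void @llvm.stackrestore.p5(ptr addrspace(5))

define void @f() {
  ; save and restore a non-0 address space stack pointer
  %sp = call ptr addrspace(5) @llvm.stacksave.p5()
  call void @llvm.stackrestore.p5(ptr addrspace(5) %sp)
  ret void
}
```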

https://reviews.llvm.org/D156666
2023-08-09 18:33:11 -04:00
Nuno Lopes
3bc74bed64 [Inline] Use poison instead of undef as placeholder [NFC] 2023-07-22 13:23:40 +01:00
Nikita Popov
9cf5254878 [llvm] Remove some uses of isOpaqueOrPointeeTypeEquals() (NFC) 2023-07-18 11:18:31 +02:00
Teresa Johnson
e5479f27f2 [MemProf] Remove stale comment (NFC)
We already do the simplification described in the FIXME comment.
2023-06-08 12:30:23 -07:00
Hongtao Yu
23da210624 [PseudoProbe] Do not force the callsite debug loc to inlined probes from __nodebug__ functions.
For pseudo probes we would like to keep their original dwarf discriminator (either zero or null) until the first FS-discriminator pass. The inliner violates that, given that it assigns inlinee instructions that have no debug info the debug location of the callsite. This behavior is disabled in this patch.

Reviewed By: wenlei

Differential Revision: https://reviews.llvm.org/D151568
2023-05-26 13:00:16 -07:00
Arthur Eubanks
e096a03fdb [Inliner] Remove -update-return-attrs flag
This is on by default and I don't see any reason to turn it off. There's also no testing of it.

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D148956
2023-04-21 14:39:29 -07:00
Dávid Bolvanský
d5fe5604a6 Revert "xxx"
This reverts commit f60592438a7446595cfbfa3944681c689952d859.
2023-04-06 16:54:00 +02:00
Dávid Bolvanský
f60592438a xxx 2023-04-06 16:51:31 +02:00
Arthur Eubanks
fa6ea7a419 [AlwaysInliner] Make legacy pass like the new pass
The legacy pass is only used in AMDGPU codegen, which doesn't care about running it in call graph order (it actually has to work around that fact).

Make the legacy pass a module pass and share code with the new pass.

This allows us to remove the legacy inliner infrastructure.

Reviewed By: mtrofin

Differential Revision: https://reviews.llvm.org/D146446
2023-03-21 11:04:22 -07:00
Yuanfang Chen
e7a2da5298 [Inliner] Assign dummy debug location to the memcpy for byval argument
A similar fix to D133095.

Fixes https://github.com/llvm/llvm-project/issues/58770.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D145607
2023-03-15 10:30:28 -07:00
Max Kazantsev
0cbb8ec030 Revert "[AssumptionCache] caches @llvm.experimental.guard's"
This reverts commit f9599bbc7a3f831e1793a549d8a7a19265f3e504.

For some reason it caused us a huge compile time regression in downstream
workloads. Not sure whether the source of it is in upstream code or not.
Temporarily reverting until investigated.

Differential Revision: https://reviews.llvm.org/D142330
2023-02-20 18:38:07 +07:00
Stefan Gränitz
3b387d1070 Lift EHPersonalities from Analysis to IR (NFC)
Computing EH-related information was only relevant for analysis passes so far. Lifting it to IR will allow the IR Verifier to calculate EH funclet coloring and validate funclet operand bundles in a follow-up step.

Reviewed By: rnk, compnerd

Differential Revision: https://reviews.llvm.org/D138122
2023-01-27 18:05:13 +01:00
Joshua Cao
f9599bbc7a [AssumptionCache] caches @llvm.experimental.guard's
As discussed in https://github.com/llvm/llvm-project/issues/59901

This change is not NFC. There is one SCEV test and one EarlyCSE test with
an improved analysis/optimization result. The rest of the tests still pass.

I've mostly only added cleanup to SCEV since that is where this issue
started. As a follow up, I believe there is more cleanup opportunity in
SCEV and other affected passes.

There could be cases where registerAssumption calls for guards are
missed, but that is not so bad because it cannot cause
miscompilation. AssumptionCacheTracker should take care of deleted
guards.

Differential Revision: https://reviews.llvm.org/D142330
2023-01-24 20:16:46 -08:00
OCHyams
4ece50737d [Assignment Tracking][NFC] Replace LLVM command line option with a module flag
Remove LLVM flag -experimental-assignment-tracking. Assignment tracking is
still enabled from Clang with the command line -Xclang
-fexperimental-assignment-tracking which tells Clang to ask LLVM to run the
pass declare-to-assign. That pass converts conventional debug intrinsics to
assignment tracking metadata. With this patch it now also sets a module flag
debug-info-assignment-tracking with the value `i1 true` (using the flag conflict
rule `Max` since enabling assignment tracking on IR that contains only
conventional debug intrinsics should cause no issues).
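For reference, the module flag as it appears in IR might look like this (behavior `7` is `Max`):
```
!llvm.module.flags = !{!0}
!0 = !{i32 7, !"debug-info-assignment-tracking", i1 true}
```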

Update the docs and tests too.

Reviewed By: CarlosAlbertoEnciso

Differential Revision: https://reviews.llvm.org/D142027
2023-01-20 14:24:15 +00:00
Guillaume Chatelet
b55f83d013 [NFC] Remove Function::getParamAlignment
Differential Revision: https://reviews.llvm.org/D141696
2023-01-13 16:20:58 +00:00
Guillaume Chatelet
26bd6476c6 Deprecate DataLayout::getPrefTypeAlignment 2023-01-13 15:05:24 +00:00
Guillaume Chatelet
8fd5558b29 [NFC] Use TypeSize::getFixedValue() instead of TypeSize::getFixedSize()
This change is one of a series to implement the discussion from
https://reviews.llvm.org/D141134.
2023-01-11 16:49:38 +00:00
James Y Knight
1ae36b1387 Remove special cases for invoke of non-throwing inline-asm.
Non-throwing inline asm infers the nounwind attribute in
instcombine. Thus, it can be handled in the same manner as
non-throwing target functions are generally. Further special casing is
unnecessary complexity.
2023-01-06 13:53:10 -05:00
Teresa Johnson
35c7e457e8 [MemProf] Fix inline propagation of memprof metadata
It isn't correct to always remove memprof metadata MIBs from the
original allocation call after inlining.

Let's say we have the following partial call graph:

C     D
 \   /
  v v
   B   E
   |  /
   v v
    A

where A contains an allocation call. If both contexts including B have
the same allocation behavior, the context in the memprof metadata on the
allocation will be pruned, and we will have 2 MIBs with contexts:
A,B and A,E.

Previously, if we inlined A into B we propagated the matching MIBs onto
the inlined allocation call in B' (A,B in this case), and removed them
from the original out-of-line allocation in A. This is correct if we have
a single round of bottom-up inlining.

However, in the compiler we can have multiple invocations of the inliner
pass (e.g. LTO). We may also inline non-bottom up with an alternative
inliner such as the ModuleInliner. In that case, we could end up first
inlining B into C, without having inlined A into B. The call graph then
looks like:

    D
    |
    v
C'  B   E
 \  |  /
  v v v
    A

If we subsequently (perhaps on a later invocation of bottom up inlining)
inline A into B, the previous handling would propagate the memprof MIB
context A,B up into the inlined allocation in B', and remove it from the
original allocation in A. The propagation into B' is fine, however, by
removing it from A's allocation, we no longer reflect the context coming
from C'.

To fix this, simply prevent the removal of MIB from the original
allocation callsites.

Note that the memprof_inline.ll test has some changes to existing
checking to replace "noncold" with "notcold" in the metadata. The
corresponding CHECK was accidentally commented out in the old version
and thus this mistake was not previously detected.

Differential Revision: https://reviews.llvm.org/D140764
2022-12-30 07:31:47 -08:00
Vasileios Porpodas
dc891846b8 [NFC] Cleanup: Replace Function::getBasicBlockList().splice() with Function::splice()
This is part of a series of patches that aim at making Function::getBasicBlockList() private.

Differential Revision: https://reviews.llvm.org/D139984
2022-12-14 15:34:19 -08:00
Vasileios Porpodas
adfb23c607 [NFC] Cleanup: Remove Function::getBasicBlockList() when not required.
This is part of a series of patches that aim at making Function::getBasicBlockList() private.

Differential Revision: https://reviews.llvm.org/D139910
2022-12-13 11:46:29 -08:00
Kazu Hirata
f7dffc28b3 Don't include None.h (NFC)
I've converted all known uses of None to std::nullopt, so we no longer
need to include None.h.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-10 11:24:26 -08:00
Kazu Hirata
934942c033 [llvm] Don't include Optional.h (NFC)
These source files no longer use Optional<T>, so they do not need to
include Optional.h.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-06 22:34:50 -08:00
Krzysztof Parzyszek
f3b6dbfda8 Instructions: convert Optional to std::optional 2022-12-04 14:25:11 -06:00
Kazu Hirata
343de6856e [Transforms] Use std::nullopt instead of None (NFC)
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated.  The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.

This is part of an effort to migrate from llvm::Optional to
std::optional:

https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
2022-12-02 21:11:37 -08:00
Vasileios Porpodas
bebca2b6d5 [NFC] Cleanup: Replaces BB->getInstList().splice() with BB->splice().
This is part of a series of cleanup patches towards making BasicBlock::getInstList() private.

Differential Revision: https://reviews.llvm.org/D138979
2022-12-01 15:37:51 -08:00
Vasileios Porpodas
af4e856fa7 [NFC] Replaced BB->getInstList().{erase(),pop_front(),pop_back()} with eraseFromParent().
Differential Revision: https://reviews.llvm.org/D138617
2022-11-23 22:47:46 -08:00
OCHyams
e3cd498ff7 [Assignment Tracking][21/*] Account for assignment tracking in inliner
The Assignment Tracking debug-info feature is outlined in this RFC:

https://discourse.llvm.org/t/rfc-assignment-tracking-a-better-way-of-specifying-variable-locations-in-ir

The inliner requires two additions:

fixupAssignments - Update inlined instructions' DIAssignID metadata so that
inlined DIAssignID attachments are unique to the inlined instance.

trackInlinedStores - Treat inlined stores to caller-local variables
(i.e. callee stores to argument pointers that point to the caller's allocas) as
assignments. Track them using trackAssignments, which is the same method as is
used by the AssignmentTrackingPass. This means that we're able to detect stale
memory locations due to DSE after inlining. Because the stores are only tracked
_after_ inlining, any DSE or movement of stores _before_ inlining will not be
accounted for. This is an accepted limitation mentioned in the RFC.

One change is also required:

Update CloneBlock to preserve debug use-before-defs. Otherwise the assignments
will be dropped due to having the intrinsic operands replaced with empty
metadata (see use-before-def.ll in this patch and this related discourse post).

Reviewed By: jmorse

Differential Revision: https://reviews.llvm.org/D133318
2022-11-18 11:55:05 +00:00
Nikita Popov
747f27d97d [AA] Rename getModRefBehavior() to getMemoryEffects() (NFC)
Follow up on D135962, renaming the method name to match the new
type name.
2022-10-19 11:03:54 +02:00
Nikita Popov
1a9d9823c5 [AA] Rename uses of FunctionModRefBehavior (NFC)
Followup to D135962 to rename remaining uses of
FunctionModRefBehavior to MemoryEffects. Does not touch API names
yet, but also updates variables names FMRB/MRB to ME, to match the
new type name.
2022-10-19 10:54:47 +02:00
Teresa Johnson
43417d8159 [MemProf] Update metadata during inlining
Update both memprof and callsite metadata to reflect inlined functions.

For callsite metadata this is simply a concatenation of each cloned
call's call stack with that of the inlined callsite's.

For memprof metadata, each profiled memory info block (MIB) is either
moved to the cloned allocation call or left on the original allocation
call depending on whether its context matches the newly refined call
stack context on the cloned call. We also reapply context trimming
optimizations based on the refined set of contexts on each of the calls
(cloned and original).

Depends on D128142.

Reviewed By: snehasish

Differential Revision: https://reviews.llvm.org/D128143
2022-09-30 19:21:15 -07:00
Teresa Johnson
4d243348fb Revert "[MemProf] Update metadata during inlining" and preceding commit
This reverts commit 0d7f3464ce0ba3a97df73e08ee0acd4e33adbe9b and
commit f9403ca41e5f3dab60cd6e5de26eea65dcab01a4. The latter was
"Profile matching and IR annotation for memprof profiles." and was left
over from a bad rebase of a commit already pushed upstream.
2022-09-30 17:01:30 -07:00
Teresa Johnson
0d7f3464ce [MemProf] Update metadata during inlining
Update both memprof and callsite metadata to reflect inlined functions.

For callsite metadata this is simply a concatenation of each cloned
call's call stack with that of the inlined callsite's.

For memprof metadata, each profiled memory info block (MIB) is either
moved to the cloned allocation call or left on the original allocation
call depending on whether its context matches the newly refined call
stack context on the cloned call. We also reapply context trimming
optimizations based on the refined set of contexts on each of the calls
(cloned and original), via utilities in MemoryProfileInfo.

Depends on D128142.

Differential Revision: https://reviews.llvm.org/D128143
2022-09-30 16:46:17 -07:00
Kazu Hirata
00874c48ea [IPO] Reorder parameters of InlineFunction (NFC)
With the recent addition of new parameter MergeAttributes (D134117),
callers need to specify several default parameters before getting to
specify the new parameter.

This patch reorders the parameters so that callers do not have to
specify as many default parameters.

Differential Revision: https://reviews.llvm.org/D134125
2022-09-20 09:09:38 -07:00
Kazu Hirata
284f0397e2 [Transforms] Merge function attributes within InlineFunction (NFC)
In the past, we've had a bug resulting in a compiler crash after
forgetting to merge function attributes (D105729).

This patch teaches InlineFunction to merge function attributes.  This
way, we minimize the "time" when the IR is valid, but the function
attributes are not.

Differential Revision: https://reviews.llvm.org/D134117
2022-09-17 23:10:23 -07:00
Nikita Popov
b1cd393f9e [AA] Tracking per-location ModRef info in FunctionModRefBehavior (NFCI)
Currently, FunctionModRefBehavior tracks whether the function reads
or writes memory (ModRefInfo) and which locations it can access
(argmem, inaccessiblemem and other). This patch changes it to track
ModRef information per-location instead.

To give two examples of why this is useful:

* D117095 highlights a weakness of ModRef modelling in the presence
  of operand bundles. For a memcpy call with deopt operand bundle,
  we want to say that it can read any memory, but only write argument
  memory. This would allow them to be treated like any other calls.
  However, we currently can't express this and have to say that it
  can read or write any memory.
* D127383 would ideally be modelled as a separate threadid location,
  where threadid Refs outside pre-split coroutines can be ignored
  (like other accesses to constant memory). The current representation
  does not allow modelling this precisely.

The patch as implemented is intended to be NFC, but there are some
obvious opportunities for improvements and simplification. To fully
capitalize on this we would also want to change the way we represent
memory attributes on functions, but that's a larger change, and I
think it makes sense to separate out the FunctionModRefBehavior
refactoring.

Differential Revision: https://reviews.llvm.org/D130896
2022-09-14 16:34:41 +02:00
Sami Tolvanen
cff5bef948 KCFI sanitizer
The KCFI sanitizer, enabled with `-fsanitize=kcfi`, implements a
forward-edge control flow integrity scheme for indirect calls. It
uses a !kcfi_type metadata node to attach a type identifier for each
function and injects verification code before indirect calls.

Unlike the current CFI schemes implemented in LLVM, KCFI does not
require LTO, does not alter function references to point to a jump
table, and never breaks function address equality. KCFI is intended
to be used in low-level code, such as operating system kernels,
where the existing schemes can cause undue complications because
of the aforementioned properties. However, unlike the existing
schemes, KCFI is limited to validating only function pointers and is
not compatible with executable-only memory.

KCFI does not provide runtime support, but always traps when a
type mismatch is encountered. Users of the scheme are expected
to handle the trap. With `-fsanitize=kcfi`, Clang emits a `kcfi`
operand bundle to indirect calls, and LLVM lowers this to a
known architecture-specific sequence of instructions for each
callsite to make runtime patching easier for users who require this
functionality.
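A minimal IR sketch of such a call (the type identifier value is illustrative):
```
define void @call_fp(ptr %fp) {
  ; indirect call carrying a kcfi operand bundle with the expected
  ; 32-bit type identifier; LLVM lowers the check at this callsite
  call void %fp() [ "kcfi"(i32 12345678) ]
  ret void
}
```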

A KCFI type identifier is a 32-bit constant produced by taking the
lower half of xxHash64 from a C++ mangled typename. If a program
contains indirect calls to assembly functions, they must be
manually annotated with the expected type identifiers to prevent
errors. To make this easier, Clang generates a weak SHN_ABS
`__kcfi_typeid_<function>` symbol for each address-taken function
declaration, which can be used to annotate functions in assembly
as long as at least one C translation unit linked into the program
takes the function address. For example on AArch64, we might have
the following code:

```
.c:
  int f(void);
  int (*p)(void) = f;
  p();

.s:
  .4byte __kcfi_typeid_f
  .global f
  f:
    ...
```

Note that X86 uses a different preamble format for compatibility
with Linux kernel tooling. See the comments in
`X86AsmPrinter::emitKCFITypeId` for details.

As users of KCFI may need to locate trap locations for binary
validation and error handling, LLVM can additionally emit the
locations of traps to a `.kcfi_traps` section.

Similarly to other sanitizers, KCFI checking can be disabled for a
function with a `no_sanitize("kcfi")` function attribute.

Relands 67504c95494ff05be2a613129110c9bcf17f6c13 with a fix for
32-bit builds.

Reviewed By: nickdesaulniers, kees, joaomoreira, MaskRay

Differential Revision: https://reviews.llvm.org/D119296
2022-08-24 22:41:38 +00:00
Sami Tolvanen
a79060e275 Revert "KCFI sanitizer"
This reverts commit 67504c95494ff05be2a613129110c9bcf17f6c13 as using
PointerEmbeddedInt to store 32 bits breaks 32-bit arm builds.
2022-08-24 19:30:13 +00:00
Sami Tolvanen
67504c9549 KCFI sanitizer
The KCFI sanitizer, enabled with `-fsanitize=kcfi`, implements a
forward-edge control flow integrity scheme for indirect calls. It
uses a !kcfi_type metadata node to attach a type identifier for each
function and injects verification code before indirect calls.

Unlike the current CFI schemes implemented in LLVM, KCFI does not
require LTO, does not alter function references to point to a jump
table, and never breaks function address equality. KCFI is intended
to be used in low-level code, such as operating system kernels,
where the existing schemes can cause undue complications because
of the aforementioned properties. However, unlike the existing
schemes, KCFI is limited to validating only function pointers and is
not compatible with executable-only memory.

KCFI does not provide runtime support, but always traps when a
type mismatch is encountered. Users of the scheme are expected
to handle the trap. With `-fsanitize=kcfi`, Clang emits a `kcfi`
operand bundle to indirect calls, and LLVM lowers this to a
known architecture-specific sequence of instructions for each
callsite to make runtime patching easier for users who require this
functionality.

A KCFI type identifier is a 32-bit constant produced by taking the
lower half of xxHash64 from a C++ mangled typename. If a program
contains indirect calls to assembly functions, they must be
manually annotated with the expected type identifiers to prevent
errors. To make this easier, Clang generates a weak SHN_ABS
`__kcfi_typeid_<function>` symbol for each address-taken function
declaration, which can be used to annotate functions in assembly
as long as at least one C translation unit linked into the program
takes the function address. For example on AArch64, we might have
the following code:

```
.c:
  int f(void);
  int (*p)(void) = f;
  p();

.s:
  .4byte __kcfi_typeid_f
  .global f
  f:
    ...
```

Note that X86 uses a different preamble format for compatibility
with Linux kernel tooling. See the comments in
`X86AsmPrinter::emitKCFITypeId` for details.

As users of KCFI may need to locate trap locations for binary
validation and error handling, LLVM can additionally emit the
locations of traps to a `.kcfi_traps` section.

Similarly to other sanitizers, KCFI checking can be disabled for a
function with a `no_sanitize("kcfi")` function attribute.

Reviewed By: nickdesaulniers, kees, joaomoreira, MaskRay

Differential Revision: https://reviews.llvm.org/D119296
2022-08-24 18:52:42 +00:00
Stefan Gränitz
1e30820483 [WinEH] Apply funclet operand bundles to nounwind intrinsics that lower to function calls in the course of IR transforms
WinEHPrepare marks any function call from EH funclets as unreachable, if it's
not a nounwind intrinsic or has no proper funclet bundle operand. This affects
ARC intrinsics on Windows, because they are lowered to regular function calls
in the PreISelIntrinsicLowering pass. It caused silent binary truncations and
crashes during unwinding with the GNUstep ObjC runtime:
https://github.com/gnustep/libobjc2/issues/222

This patch adds a new function `llvm::IntrinsicInst::mayLowerToFunctionCall()` that aims to collect all affected intrinsic IDs.
* Clang CodeGen uses it to determine whether or not it must emit a funclet bundle operand.
* PreISelIntrinsicLowering asserts that the function returns true for all ObjC runtime calls it lowers.
* LLVM uses it to determine whether or not a funclet bundle operand must be propagated to inlined call sites.

Reviewed By: theraven

Differential Revision: https://reviews.llvm.org/D128190
2022-07-26 17:52:43 +02:00
Nuno Lopes
9df0b254d2 [NFC] Switch a few uses of undef to poison as placeholders for unreachable code 2022-07-23 21:50:11 +01:00