Move the initialization of ptrauth-* function attributes near the
initialization of branch protection attributes. The semantics of these
groups of attributes partially overlap, so handle both groups in
getDefaultFunctionAttributes() and setTargetAttributes() functions to
prevent getting them out of sync. This fixes C++ TLS wrappers.
The "target-features" function attribute is not currently considered
when adding vscale_range to a function. When +sve/+sme are pushed onto
functions via "#pragma clang attribute push" applying +sve/+sme, the function
potentially misses out on optimizations that rely on vscale_range being
present.
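A minimal sketch of the scenario in question (illustrative; the pragma is the usual `#pragma clang attribute` spelling and the feature string may vary by target):
```
// Functions covered by the pushed target attribute gain +sve in their
// "target-features"; with this change they should also receive vscale_range.
#pragma clang attribute push(__attribute__((target("+sve"))), apply_to = function)
void scaled_copy(float *dst, const float *src, int n) {
  for (int i = 0; i < n; ++i)
    dst[i] = src[i] * 2.0f;
}
#pragma clang attribute pop
```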
Currently BlockAddresses store both the Function and the BasicBlock they
reference, and the BlockAddress is part of the use list of both the
Function and BasicBlock.
This is quite awkward, because this is not really a use of the function
itself (and walks of function uses generally skip block addresses for
that reason). This also has weird implications for function RAUW (as that
will replace the function in block addresses in a way that generally
doesn't make sense), and causes other peculiar issues, like the ability
to have multiple block addresses for one block (with different
functions).
Instead, I believe it makes more sense to specify only the basic block
and let the function be implied by the BB's parent. This does mean that we
may have block addresses without a function (if the BB is not inserted),
but this should only happen during IR construction.
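As a rough illustration of the intended direction (a sketch against the existing LLVM C++ API, not the exact patch):
```
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/Function.h"

// With the basic block as the only key, the function is recovered from the
// block's parent rather than being stored (and use-listed) separately.
llvm::Function *functionOfBlockAddress(llvm::BasicBlock *BB) {
  llvm::BlockAddress *BA = llvm::BlockAddress::get(BB);
  return BA->getBasicBlock()->getParent();
}
```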
This feature is currently not supported in the compiler.
To facilitate this, we emit a stub version of each kernel
function body with a different name-mangling scheme and replace the
respective kernel call sites appropriately.
Fixes https://github.com/llvm/llvm-project/issues/60313
D120566 was an earlier attempt to upstream a solution for this issue.
---------
Co-authored-by: anikelal <anikelal@amd.com>
With -fpatchable-function-entry (or the patchable_function_entry
function attribute), we emit records of patchable entry locations to the
__patchable_function_entries section. Add an additional parameter to the
command line option that allows one to specify a different default
section name for the records, and an identical parameter to the function
attribute that allows one to override the section used.
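A hedged sketch of how the new parameter might be spelled (the section name below is made up for illustration; see the patch for the exact syntax):
```
// Command line (sketch): -fpatchable-function-entry=<N>,<M>,<section>
// Per-function override (sketch): the third argument names the section that
// receives this function's entry record instead of __patchable_function_entries.
// "__patchable_function_entries_traced" is a hypothetical section name.
__attribute__((patchable_function_entry(2, 0, "__patchable_function_entries_traced")))
void traced(void) {}
```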
The main use case for this change is the Linux kernel using prefix NOPs
for ftrace, and thus depending on __patchable_function_entries to locate
traceable functions. Functions that are not traceable currently disable
entry NOPs using the function attribute, but this creates a
compatibility issue with -fsanitize=kcfi, which expects all indirectly
callable functions to have a type hash prefix at the same offset from
the function entry.
Adding a section parameter would allow the kernel to distinguish between
traceable and non-traceable functions by adding entry records to
separate sections while maintaining a stable function prefix layout for
all functions. LKML discussion:
https://lore.kernel.org/lkml/Y1QEzk%2FA41PKLEPe@hirez.programming.kicks-ass.net/
This implements the R2 semantics of P0963.
The R1 semantics, as outlined in the paper, were introduced in Clang 6.
In addition to that, the paper proposes swapping the evaluation order of
condition expressions and the initialization of binding declarations
(i.e. std::tuple-like decompositions).
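A minimal sketch of the kind of code this enables (a structured binding declaration used directly as the condition):
```
struct Result {
  bool ok;
  int value;
  explicit operator bool() const { return ok; }
};

Result lookup();

int use() {
  // P0963: the binding declaration itself is the condition; the decomposed
  // Result object is contextually converted to bool.
  if (auto [ok, value] = lookup())
    return value;
  return -1;
}
```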
If we have +sme but not +sve, we would not set vscale_range on
functions. It should be valid to apply it with the same range with just
+sme, which can help mitigate some performance regressions in cases such
as scalable vector bitcasts (https://godbolt.org/z/exhe4jd8d).
Following the previous patch, which added the "extend lifetimes" flag
with (almost) no functionality, this patch adds the real feature by
allowing Clang to emit fake uses. These are emitted as a new form of cleanup,
set for variable addresses, which just emits a fake use intrinsic when the
variable falls out of scope. The code for achieving this is simple, with most
of the logic centered on determining whether to emit a fake use for a given
address, and on ensuring that fake uses are ignored in a few cases.
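As a rough illustration (a sketch; the exact flag and intrinsic spellings follow the preceding patch):
```
// Sketch: with the extend-lifetimes behaviour enabled, a cleanup is pushed for
// 'local' so that a fake use (an intrinsic call that optimizations otherwise
// ignore) is emitted when it goes out of scope, keeping the variable visible
// to the debugger for the whole block.
void observe(int);

void demo() {
  int local = 42;
  observe(local);
  // end of scope: fake use of 'local' emitted here
}
```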
Co-authored-by: Stephen Tozer <stephen.tozer@sony.com>
- Adds the changes from PRs:
  - #116331
  - #121852
- Fixes test `tools/dxil-dis/debug-info.ll`
- Addresses some missed comments in the previous PR
---------
Co-authored-by: joaosaffran <joao.saffran@microsoft.com>
The `Checked` parameter of `CodeGenFunction::EmitCheck` is of type
`ArrayRef<std::pair<llvm::Value *, SanitizerMask>>`, which is overly
generalized: SanitizerMask can denote that zero or more sanitizers are
enabled, but `EmitCheck` requires that exactly one sanitizer is
specified in the SanitizerMask (e.g.,
`SanitizeTrap.has(Checked[i].second)` enforces that).
This patch replaces SanitizerMask with SanitizerOrdinal in the `Checked`
parameter of `EmitCheck` and code that transitively relies on it. This
should not affect the behavior of UBSan, but it has the advantages that:
- the code is clearer: it avoids ambiguity in EmitCheck about what to do
if multiple bits are set
- specifying the wrong number of sanitizers in `Checked[i].second` will
be detected as a compile-time error, rather than a runtime assertion
failure
Suggested by Vitaly in https://github.com/llvm/llvm-project/pull/122392
as an alternative to adding an explicit runtime assertion that the
SanitizerMask contains exactly one sanitizer.
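Roughly, the shape of the change (a sketch with stand-in types; the real clang declarations differ):
```
#include <cstdint>
#include <utility>
#include <vector>

// Stand-in types for illustration only (not the real clang definitions).
using SanitizerMask = uint64_t;            // encodes zero or more sanitizers
enum class SanitizerOrdinal : uint64_t {}; // names exactly one sanitizer

// Before: a caller could pass a mask with any number of bits set; the
// "exactly one" requirement was only enforced by assertions inside EmitCheck.
void EmitCheckOld(const std::vector<std::pair<void *, SanitizerMask>> &Checked);

// After: the parameter type itself guarantees exactly one sanitizer per entry.
void EmitCheckNew(const std::vector<std::pair<void *, SanitizerOrdinal>> &Checked);
```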
`CounterPair` can hold `<uint32_t, uint32_t>` instead of the current
`unsigned`, so that it can also hold the counter number of SkipPath. For now,
this change provides the skeleton, and only `CounterPair::Executed` is used.
Each counter number can be `None` to suppress emitting the counter
increment. The second element, `Skipped`, is initialized to `None` by default,
since most `Stmt*`s don't have a pair of counters.
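A sketch of the described layout (illustrative; the actual representation in the patch may differ):
```
#include <cstdint>
#include <optional>

// Each Stmt gets a pair of counter slots; either may be absent ("None") to
// suppress emitting the corresponding counter increment.
struct CounterPair {
  std::optional<uint32_t> Executed; // counter for the executed path (used today)
  std::optional<uint32_t> Skipped;  // counter for the skip path; None by default
};
```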
This change also provides stubs for the verifier. I'll provide the
implementation of the verifier for `+Asserts` later.
`markStmtAsUsed(bool, Stmt*)` may be used to indicate that the counter on the
other side may not be emitted.
`markStmtMaybeUsed(S)` may be used for a `Stmt` whose inner statements may be
excluded from emission when it is skipped by constant folding. I have added it
in the places I found so far.
`verifyCounterMap()` will check the coverage map against the counter map
and can be used to report inconsistencies.
These verifier methods will be eliminated in `-Asserts` builds.
https://discourse.llvm.org/t/rfc-integrating-singlebytecoverage-with-branch-coverage/82492
- Adds Flatten and Branch attributes to if statements.
- Adds DXIL control-flow hint metadata generation.
- Modifies the SPIR-V OpSelectMerge to account for the specific attributes.
Closes #70112
---------
Co-authored-by: Joao Saffran <jderezende@microsoft.com>
Co-authored-by: joaosaffran <joao.saffran@microsoft.com>
The flag is placed together with pointer authentication flags since they
serve the same security purpose of protecting against attacks on control
flow. The flag is not ABI-affecting and might be enabled separately if
needed, but it's also intended to be enabled as part of pauth-enabled
environments (e.g. pauthtest).
See also codegen implementation #97666.
Currently we have code with target hooks in CodeGenModule shared between
X86 and AArch64 for sorting MultiVersionResolverOptions. Those are used
when generating IFunc resolvers for FMV. The RISCV target has different
criteria for sorting; therefore, it repeats the sorting after calling
CodeGenFunction::EmitMultiVersionResolver.
I am moving the FMV priority logic into TargetInfo, so that it can be
implemented by the TargetParser which then makes it possible to query it
from llvm. Here is an example why this is handy:
https://github.com/llvm/llvm-project/pull/87939
# What
This PR renames the newly-introduced llvm attribute
`sanitize_realtime_unsafe` to `sanitize_realtime_blocking`. Likewise,
sibling variables such as `SanitizeRealtimeUnsafe` are renamed to
`SanitizeRealtimeBlocking` respectively. There are no other functional
changes.
# Why?
- There are a number of problems that can cause a function to be
real-time "unsafe",
- we wish to communicate what problems rtsan detects and *why* they're
unsafe, and
- a generic "unsafe" attribute is, in our opinion, too broad a net -
which may lead to future implementations that need extra contextual
information passed through them in order to communicate meaningful
reasons to users.
- We want to avoid this situation and make the runtime library boundary
API/ABI as simple as possible, and
- we believe that restricting the scope of attributes to names like
`sanitize_realtime_blocking` is an effective means of doing so.
We also feel that the symmetry between `[[clang::blocking]]` and
`sanitize_realtime_blocking` is easier to follow as a developer.
# Concerns
- I'm aware that the LLVM attribute `sanitize_realtime_unsafe` has been
part of the tree for a few weeks now (introduced here:
https://github.com/llvm/llvm-project/pull/106754). Given that it hasn't
been released in version 20 yet, am I correct in considering this to not
be a breaking change?
Translates the `RWBuffer` and `StructuredBuffer` resource buffer types to the
DirectX target types `dx.TypedBuffer` and `dx.RawBuffer`.
Includes a change of `HLSLAttributedResourceType` from a 'sugar' type to a
full canonical type. This is required for codegen and other clang
infrastructure to work properly on HLSL resource types.
Fixes #95952 (part 2/2)
Rename the function to reflect its correct behavior and to be consistent
with `Module::getOrInsertFunction`. This is also in preparation for
adding a new `Intrinsic::getDeclaration` that will have behavior similar
to `Module::getFunction` (i.e., just lookup, no creation).
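For illustration, a sketch of the renamed interface (the new name is inferred from the consistency goal above, so treat the exact spelling as an assumption):
```
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"

// Mirrors Module::getOrInsertFunction: looks the declaration up and creates it
// if it does not exist yet.
llvm::Function *getFabsDecl(llvm::Module &M, llvm::Type *Ty) {
  return llvm::Intrinsic::getOrInsertDeclaration(&M, llvm::Intrinsic::fabs, {Ty});
}
```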
This patch enables function multiversioning (FMV) and the `target_clones`
attribute for the RISC-V target.
The proposal for the `target_clones` syntax can be found at
https://github.com/riscv-non-isa/riscv-c-api-doc/pull/48 (which has
landed), as modified by the proposed
https://github.com/riscv-non-isa/riscv-c-api-doc/pull/85 (which adds the
priority syntax).
It supports the `target_clones` function attribute and the function
multiversioning feature for the RISC-V target, generating an ifunc
resolver for each function declared with the `target_clones` attribute.
The resolver checks which versions are supported via the runtime object
`__riscv_feature_bits`.
For example:
```
__attribute__((target_clones("default", "arch=+ver1", "arch=+ver2"))) int bar() {
  return 1;
}
```
The corresponding resolver will look like this:
```
bar.resolver() {
  __init_riscv_feature_bits();
  // Check arch=+ver1
  if ((__riscv_feature_bits.features[0] & BITMASK_OF_VERSION1) == BITMASK_OF_VERSION1) {
    return bar.arch=+ver1;
  } else {
    // Check arch=+ver2
    if ((__riscv_feature_bits.features[0] & BITMASK_OF_VERSION2) == BITMASK_OF_VERSION2) {
      return bar.arch=+ver2;
    } else {
      // Default
      return bar.default;
    }
  }
}
```
Introducing `HLSLAttributedResourceType` - a new type that is similar to
`AttributedType` but with additional data specific to HLSL resources.
`AttributedType` currently stores only an attribute kind and no
additional data from the type attribute parameters. This does not really
work for HLSL resources, since their type attributes contain non-boolean
values that need to be retained as well.
For example:
```
template <typename T> class RWBuffer {
__hlsl_resource_t [[hlsl::resource_class(uav)]] [[hlsl::is_rov]] handle;
};
```
The data `HLSLAttributedResourceType` eventually needs to store is:
- resource class (SRV, UAV, CBuffer, Sampler)
- texture dimension (1-3)
- flags: is_rov, is_array, is_feedback, and is_multisample
- contained type
All of these values except the contained type will be stored in the
`HLSLAttributedResourceType::Attributes` struct and accessed individually via
its fields. There is also a `Data` alias that covers all of these values as an
`unsigned`, which is used for hashing and AST type serialization.
During type attribute processing all HLSL type attributes will be
validated and collected by SemaHLSL (by
`SemaHLSL::handleResourceTypeAttr`) and in the end combined into a
single `HLSLAttributedResourceType` instance (in
`SemaHLSL::ProcessResourceTypeAttributes`). `SemaHLSL` will also need to
temporarily store the `TypeLoc` information for the new type, which will be
grabbed by `TypeSpecLocFiller` soon after the type is created.
Part 1/2 of #104861
Previously, functions named "main" got the NoRecurse attribute
consistent with the behavior of C++, which HLSL largely follows.
However, standard recursion is not allowed in HLSL, so all functions
should really have this attribute. This doesn't prevent recursion, but
rather signals that these functions aren't expected to recurse.
Practically, this was done so that entry point functions named "main"
would all have the same attributes as otherwise identical entry points
with other names.
This required small changes to the `this`-assignment tests because they no
longer generate as many attribute sets, since more of them now match.
related to #105244
but done to simplify testing for #89806
Introduce the `-fsanitize=realtime` flag in the clang driver.
Plug in the RealtimeSanitizer PassManager pass in CodeGen, and attribute
a function based on whether it has the `[[clang::nonblocking]]` function
effect.
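A minimal sketch of the user-facing side (assuming the `nonblocking` function-effect spelling from the function-effects work):
```
// With -fsanitize=realtime, a function carrying the nonblocking effect is
// attributed so the RealtimeSanitizer pass instruments it; rtsan then reports
// blocking operations (locks, allocation, ...) reached at run time.
void process_block(float *buffer, int frames) [[clang::nonblocking]] {
  for (int i = 0; i < frames; ++i)
    buffer[i] *= 0.5f;
}
```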
Marks exported functions with the `"hlsl.export"` attribute. This
information will be later used by DXILFinalizeLinkage pass (coming soon)
to determine which functions should have internal linkage in the final
DXIL code.
Related to llvm/llvm-project#92071
Through the new `-foffload-via-llvm` flag, CUDA kernels can now be
lowered to the LLVM/Offload API. On the Clang side, this is simply done
by using the OpenMP offload toolchain and emitting calls to `llvm*`
functions to orchestrate the kernel launch rather than `cuda*`
functions. These `llvm*` functions are implemented on top of the
existing LLVM/Offload API.
As we are about to redefine the Offload API, this will help us in the
design process by serving as a second offload language.
We do not support any CUDA APIs yet; however, we could (see
https://www.osti.gov/servlets/purl/1892137).
For proper host execution we need to resurrect/rebase
https://tianshilei.me/wp-content/uploads/2021/12/llpp-2021.pdf
(which was designed for debugging).
```
❯❯❯ cat test.cu
extern "C" {
void *llvm_omp_target_alloc_shared(size_t Size, int DeviceNum);
void llvm_omp_target_free_shared(void *DevicePtr, int DeviceNum);
}
__global__ void square(int *A) { *A = 42; }
int main(int argc, char **argv) {
  int DevNo = 0;
  int *Ptr = reinterpret_cast<int *>(llvm_omp_target_alloc_shared(4, DevNo));
  *Ptr = 7;
  printf("Ptr %p, *Ptr %i\n", Ptr, *Ptr);
  square<<<1, 1>>>(Ptr);
  printf("Ptr %p, *Ptr %i\n", Ptr, *Ptr);
  llvm_omp_target_free_shared(Ptr, DevNo);
}
❯❯❯ clang++ test.cu -O3 -o test123 -foffload-via-llvm --offload-arch=native
❯❯❯ llvm-objdump --offloading test123
test123: file format elf64-x86-64
OFFLOADING IMAGE [0]:
kind elf
arch gfx90a
triple amdgcn-amd-amdhsa
producer openmp
❯❯❯ LIBOMPTARGET_INFO=16 ./test123
Ptr 0x155448ac8000, *Ptr 7
Ptr 0x155448ac8000, *Ptr 42
```
This provides -fptrauth-auth-traps, which at the frontend level only
controls the addition of the "ptrauth-auth-traps" function attribute.
The attribute in turn controls various aspects of backend codegen, by
providing the guarantee that every "auth" operation generated will trap
on failure.
This can either be delegated to the hardware (if AArch64 FPAC is known
to be available), in which case this attribute doesn't change codegen.
Otherwise, if FPAC isn't available, this asks the backend to emit
additional instructions to check and trap on auth failure.
We already ended up with -fptrauth-returns, the feature macro, the lang
opt, and the actual backend lowering.
The only part left is threading it all through PointerAuthOptions, to
drive the addition of the "ptrauth-returns" attribute to generated
functions.
While there, do minor cleanup on ptrauth-function-attributes.c.
This also adds ptrauth_key_return_address to ptrauth.h.
Enabled in clang using:
-fptrauth-indirect-gotos
and at the IR level using function attribute:
"ptrauth-indirect-gotos"
Signing uses IA and a per-function integer discriminator. The
discriminator isn't ABI-visible, and is currently:
ptrauth_string_discriminator("<function_name> blockaddress")
A sufficiently sophisticated frontend could benefit from per-indirectbr
discrimination, which would need additional machinery, such as allowing
"ptrauth" bundles on indirectbr. For our purposes, the simple scheme
above is sufficient.
This approach doesn't support subtracting label addresses and using
the result as offsets, because each label address is signed.
Pointer arithmetic on signed pointers corrupts the signature bits,
and because label address expressions aren't typed beyond void*,
we can't do anything reliably intelligent on the arithmetic exprs.
Not signing addresses when used to form offsets would allow
easily hijacking control flow by overwriting the offset.
This diagnoses the basic cases (`&&lbl2 - &&lbl1`) in the frontend,
while we evaluate either alternative implementations (e.g., lowering
blockaddress to a bb number, and indirectbr to a checked jump-table),
or better diagnostics (both at the frontend level and on unencodable
IR constants).
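For illustration, a sketch of the supported pattern and the diagnosed one (GNU label-address extension):
```
// Supported: each &&label value is signed, and the indirect goto authenticates
// it before branching.
void dispatch(int i) {
  static void *const targets[] = {&&handle_a, &&handle_b};
  goto *targets[i & 1];
handle_a:
  return;
handle_b:
  return;
}

// Diagnosed in the frontend: label-address subtraction, since arithmetic on a
// signed pointer corrupts the signature bits.
//   long off = &&handle_b - &&handle_a;
```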
When generating the body of the ifunc resolver, clang skips runtime
checks for features that are implied from the command line. We bend this
rule for certain features (memtag, bti, dgh), but this happens quite
arbitrarily in my opinion. The reasoning is that some features are in
the HINT instruction space, meaning they operate as NOPs if the hardware
does not support them. Still the user wants to detect their presence
with runtime checks. See #90928 for details.
I think we should always perform runtime checks regardless of the
feature and then try to statically resolve calls whenever a function is
compiled with a sufficiently high set of architecture features (so
including target/target_version/target_clones attributes, and command
line options). This is what GCC does. We have an open PR in LLVM
GlobalOpt since it was suggested not to perform such codegen
optimizations in clang anyway. See #87939.
Re-signing occurs when function type discrimination is enabled and a
function pointer is converted to another function pointer type that
requires signing using a different discriminator. A function pointer is
re-signed using discriminator zero when it's converted to a pointer to a
non-function type such as `void*`.
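A small sketch of the conversions in question:
```
// With function type discrimination enabled, each conversion below re-signs
// the pointer: first with the discriminator of the destination function type,
// then with discriminator zero for the non-function pointer type.
typedef int (*IntFn)(int);
typedef void (*AnyFn)();

void demo(IntFn f) {
  AnyFn g = reinterpret_cast<AnyFn>(f);  // re-signed for AnyFn's discriminator
  void *p = reinterpret_cast<void *>(f); // re-signed with discriminator zero
  (void)g;
  (void)p;
}
```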
---------
Co-authored-by: Ahmed Bougacha <ahmed@bougacha.org>
Co-authored-by: John McCall <rjmccall@apple.com>
There are two problems with _BitInt prior to this patch:
1. For at least some values of N, we cannot use LLVM's iN for the type
of struct elements, array elements, allocas, global variables, and so
on, because the LLVM layout for that type does not match the high-level
layout of _BitInt(N).
Example: currently, for i128:128 targets, a correct implementation is
possible either for __int128 or for _BitInt(129+) when lowering to iN,
but not both, since we now have a correct implementation of __int128 in
place after a21abc7.
When this happens, opaque [M x i8] types are used, where M =
sizeof(_BitInt(N)).
2. LLVM doesn't guarantee any particular extension behavior for integer
types whose width isn't a multiple of 8. For this reason, all _BitInt types
now have an in-memory representation that is a whole number of bytes; for
example, _BitInt(17) now has the memory layout type i32.
This patch also introduces concept of load/store type and adds an API to
CodeGenTypes that returns the IR type that should be used for load and
store operations. This is particularly useful for the case when a
_BitInt ends up with an array of bytes as its memory layout type. For
_BitInt(N), let M = sizeof(_BitInt(N)) and BITS = M * 8. Loads and
stores of iBITS both (1) produce far better code from the backends
and (2) are far more optimizable by IR passes than loads and stores of
[M x i8].
Fixes https://github.com/llvm/llvm-project/issues/85139
Fixes https://github.com/llvm/llvm-project/issues/83419
---------
Co-authored-by: John McCall <rjmccall@gmail.com>
Virtual function pointer entries in v-tables are signed with address
discrimination in addition to declaration-based discrimination, where the
integer discriminator is the string hash (see
`ptrauth_string_discriminator`) of the mangled name of the overridden
method. This notably provides diversity based on the full signature of
the overridden method, including the method name and parameter types.
This patch introduces ItaniumVTableContext logic to find the original
declaration of the overridden method.
On AArch64, these pointers are signed using the `IA` key (the
process-independent code key.)
V-table pointers can be signed with either no discrimination, or a
similar scheme using address and decl-based discrimination. In this
case, the integer discriminator is the string hash of the mangled
v-table identifier of the class that originally introduced the vtable
pointer.
On AArch64, these pointers are signed using the `DA` key (the
process-independent data key.)
Not using discrimination allows attackers to simply copy valid v-table
pointers from one object to another. However, using a uniform
discriminator of 0 does have positive performance and code-size
implications on AArch64, and diversity for the most important v-table
access pattern (virtual dispatch) is already better assured by the
signing schemas used on the virtual functions. It is also known that
some code in practice copies objects containing v-tables with `memcpy`,
and while this is not permitted formally, it is something that may be
invasive to eliminate.
This is controlled by:
```
-fptrauth-vtable-pointer-type-discrimination
-fptrauth-vtable-pointer-address-discrimination
```
In addition, this provides fine-grained controls in the
ptrauth_vtable_pointer attribute, which allows overriding the default
ptrauth schema for vtable pointers on a given class hierarchy, e.g.:
```
[[clang::ptrauth_vtable_pointer(no_authentication, no_address_discrimination,
no_extra_discrimination)]]
[[clang::ptrauth_vtable_pointer(default_key, default_address_discrimination,
custom_discrimination, 0xf00d)]]
```
The override is then mangled as a parametrized vendor extension:
```
"__vtptrauth" I
<key>
<addressDiscriminated>
<extraDiscriminator>
E
```
To support this attribute, this patch adds a small extension to the
attribute-emitter tablegen backend.
Note that there are known areas where signing is either missing
altogether or can be strengthened. Some will be addressed in later
changes (e.g., member function pointers, some RTTI).
`dynamic_cast` in particular is handled by emitting an artificial
v-table pointer load (in a way that always authenticates it) before the
runtime call itself, as the runtime doesn't have enough information
today to properly authenticate it. Instead, the runtime is currently
expected to strip the v-table pointer.
---------
Co-authored-by: John McCall <rjmccall@apple.com>
Co-authored-by: Ahmed Bougacha <ahmed@bougacha.org>
Uses the new InsertPosition class (added in #94226) to simplify some of
the IRBuilder interface, and removes the need to pass a BasicBlock
alongside a BasicBlock::iterator, using the fact that we can now get the
parent basic block from the iterator even if it points to the sentinel.
This patch removes the BasicBlock argument from each constructor or call
to setInsertPoint.
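Roughly, the simplification being described (a sketch; see the patch for the full set of affected constructors and overloads):
```
#include "llvm/IR/IRBuilder.h"

// The parent block is now recovered from the iterator itself, so the extra
// BasicBlock argument is no longer needed.
void positionAtEnd(llvm::IRBuilder<> &Builder, llvm::BasicBlock *BB) {
  // before: Builder.SetInsertPoint(BB, BB->end());
  Builder.SetInsertPoint(BB->end());
}
```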
This has no functional effect, but later on as we look to remove the
`Instruction *InsertBefore` argument from instruction-creation
(discussed
[here](https://discourse.llvm.org/t/psa-instruction-constructors-changing-to-iterator-only-insertion/77845)),
this will simplify the process by allowing us to deprecate the
InsertPosition constructor directly and catch all the cases where we use
instructions rather than iterators.