When a user class inherits from a libc++ class and is built with
"-flto -fwhole-program-vtables -static-libstdc++
-Wl,-plugin-opt=-whole-program-visibility", the libc++ class's vtable is
available_externally while the user class's vtable is private, and both
have !vcall_visibility == Linkage Unit. In this case an
icall.branch.funnel may be generated. However, the icall.branch.funnel
crashes LowerTypeTests: the GlobalTypeMember of an available_externally
global object is not saved, which eventually leads to a NULL
GlobalTypeMember and the crash. Even if we save the available_externally
global's GlobalTypeMember so that it is not NULL and LowerTypeTests
survives, we still crash in SelectionDAGBuilder or the Verifier, because
the operand linkage-type consistency check for icall.branch.funnel
cannot pass. So the presence of any available_externally vtable now
stops generation of the icall.branch.funnel.
This patch fixes FullLTO mode and split-LTO-unit ThinLTO mode.
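A minimal, hedged sketch of the problematic setup (the class names and
the choice of base class are illustrative; any library class with a
vtable produces the same linkage mix):
```cpp
// Illustrative only. Built with:
//   -flto -fwhole-program-vtables -static-libstdc++ \
//   -Wl,-plugin-opt=-whole-program-visibility
// the vtable for std::stringbuf is emitted available_externally while
// the vtable for CustomBuf is private; both carry linkage-unit
// !vcall_visibility, so WPD may try to build an icall.branch.funnel
// across them.
#include <sstream>

struct CustomBuf : std::stringbuf {
  int overflow(int c) override { return std::stringbuf::overflow(c); }
};

int main() {
  CustomBuf B;
  std::stringbuf *P = &B;
  P->sputc('x'); // virtual dispatch through the mixed-linkage hierarchy
}
```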
We were clearing SummaryTypeCheckedLoadUsers to prevent devirtualized
llvm.type.checked.load calls from being converted to llvm.type.test,
which meant that AddCalls would not see them in the list of callsites
and they would not get imported. Fix that by not clearing
SummaryTypeCheckedLoadUsers, so that the list survives to AddCalls, and
by using AllCallSitesDevirted to control whether to convert them
instead.
Reviewers: teresajohnson
Reviewed By: teresajohnson
Pull Request: https://github.com/llvm/llvm-project/pull/144019
We'd hit an assertion checking proper alignment for an i8 when building
Chromium, because we used the preferred alignment (which is 4 bytes)
instead of the ABI alignment (which is 1 byte). The ABI alignment should
be used because that's the actual alignment needed to load a constant
from the vtable.
This also updates the two `virtual-const-prop-small-alignment-*` tests
to explicitly give ABI alignments for i64s.
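A minimal sketch of the distinction using the DataLayout query APIs (the
helper itself is illustrative):
```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Type.h"
using namespace llvm;

// For i8 the ABI alignment is 1 byte while the preferred alignment may
// be larger (e.g. 4). Loads from the vtable must assume only the ABI
// alignment, since that is all the stored constant is guaranteed to
// have.
Align loadAlignForI8(const DataLayout &DL, LLVMContext &Ctx) {
  Type *I8 = Type::getInt8Ty(Ctx);
  Align Pref = DL.getPrefTypeAlign(I8); // possibly 4: not safe to assume
  (void)Pref;
  return DL.getABITypeAlign(I8);        // 1: the correct choice
}
```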
After pointer element types were removed, this function can only return
a GlobalVariable, so reflect that in the type and comments, and clean up
callers.
Reviewers: nikic
Reviewed By: nikic
Pull Request: https://github.com/llvm/llvm-project/pull/141323
It's possible for virtual constant propagation in whole program
devirtualization to create unaligned loads. We originally saw this with
4-byte aligned relative vtables where we could store 8-byte values
before/after the vtable. But since the vtable is 4-byte aligned and we
unconditionally do an 8-byte load, we can't guarantee that the stored
constant will always be aligned to 8 bytes. We can also see this with
normal vtables whenever a 1-byte char is stored in the vtable because
the offset calculation for the GEP doesn't take into account the
original vtable alignment.
This patch introduces two changes to virtual constant propagation:
1. Do not propagate constants whose preferred alignment is larger than
the vtable alignment. This is required because if the constants are
stored in the vtable, we can only guarantee the constant will be stored
at an address at most aligned to the vtable's alignment.
2. Round up the offset used in the GEP before the load to ensure it's at
an address suitably aligned such that we can load from it.
This patch updates tests to reflect this alignment change and adds some
cases for relative vtables.
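A minimal sketch of change (2), rounding the offset with llvm::alignTo
(the helper and its names are illustrative, not the pass's actual code):
```cpp
#include "llvm/Support/Alignment.h"
#include <cstdint>
using namespace llvm;

// Round a byte offset into the vtable up to the load's alignment, so the
// load address is suitably aligned; the constant is then stored at the
// rounded offset rather than the raw one.
uint64_t roundedLoadOffset(uint64_t RawOffset, Align LoadAlign) {
  return alignTo(RawOffset, LoadAlign); // e.g. alignTo(5, Align(8)) == 8
}
```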
See https://discourse.llvm.org/t/rfc-keep-globalvalue-guids-stable/84801
for context.
This is a non-functional change which just changes the interface of
GlobalValue, in preparation for future functional changes. This part
touches a fair few users, so is split out for ease of review. Future
changes to the GlobalValue implementation can then be focused purely on
that class.
This does the following:
* Rename GlobalValue::getGUID(StringRef) to
getGUIDAssumingExternalLinkage. This is simply making explicit at the
callsite what is currently implicit.
* Where possible, migrate users to directly calling getGUID on a
GlobalValue instance.
* Otherwise, where possible, have them call the newly renamed
getGUIDAssumingExternalLinkage, to make the assumption explicit.
There are a few cases where neither of the above are possible, as the
caller saves and reconstructs the necessary information to compute the
GUID themselves. We want to migrate these callers eventually, but for
this first step we leave them be.
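A minimal sketch of the two migration patterns (the lookup by name and
the overall helper are illustrative):
```cpp
#include "llvm/IR/GlobalValue.h"
#include "llvm/IR/Module.h"
using namespace llvm;

GlobalValue::GUID guidFor(const Module &M, StringRef Name) {
  // Preferred: the GlobalValue instance knows its actual linkage.
  if (const GlobalValue *GV = M.getNamedValue(Name))
    return GV->getGUID();
  // Otherwise the external-linkage assumption is now explicit at the
  // callsite.
  return GlobalValue::getGUIDAssumingExternalLinkage(Name);
}
```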
The semantics of `llvm.type.checked.load.relative` seem to be a little
different from those of `llvm.load.relative`. The semantics of
`llvm.type.checked.load.relative` appear to be `ptr + offset + *(ptr +
offset)`, whereas the semantics of `llvm.load.relative` are `ptr + *(ptr
+ offset)`. That is, the former adds the loaded value to the entry
address (`ptr + offset`), whereas the latter adds it to the original
pointer. It really feels like the checked intrinsic was meant to match
the semantics of the non-checked one, but in every case where the
checked intrinsic is used (Swift being the only user I know of), the
calculation happens to be the same because Swift always uses an offset
of zero. Likewise, all LLVM tests for this intrinsic happen to use an
offset of zero.
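A minimal sketch of the two computations as plain C++ stand-ins
(assuming 32-bit signed relative entries):
```cpp
#include <cstdint>

// llvm.type.checked.load.relative, as currently specified: the loaded
// 32-bit value is added to the entry's address.
uintptr_t checkedLoadRelative(uintptr_t Ptr, int64_t Offset) {
  uintptr_t Entry = Ptr + Offset;
  int32_t Rel = *reinterpret_cast<const int32_t *>(Entry);
  return Entry + Rel; // ptr + offset + *(ptr + offset)
}

// llvm.load.relative (and the checked intrinsic after this change): the
// loaded value is added to the original pointer.
uintptr_t loadRelative(uintptr_t Ptr, int64_t Offset) {
  int32_t Rel = *reinterpret_cast<const int32_t *>(Ptr + Offset);
  return Ptr + Rel; // ptr + *(ptr + offset)
}
```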
Relative vtables in Clang happen to be the first user of this intrinsic
with non-zero offsets. This updates the semantics of the checked
intrinsic to match the non-checked one. Effectively, this shouldn't
change codegen for any current users, since they all seem to use a zero
offset. This PR also updates some tests to use non-zero offsets.
Checking mode aims to help diagnose and confirm undefined behavior. In
most cases, source code doesn't cast pointers between unrelated types
for virtual calls, so we expect the direct call in the frequent branch
and a debug trap in the unlikely branch. This way, the overhead of
checking mode is no higher than that of indirect call promotion for a
hot callsite, as long as the callsite doesn't take the debug-trap
branch, as in the sketch below.
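A minimal sketch of the branch shape checking mode emits, as plain C++
(all names are illustrative stand-ins, not the pass's actual output):
```cpp
using Fn = int (*)(void *);

int expected_impl(void *) { return 0; } // the single devirtualized target

int callChecked(Fn Loaded, void *Obj) {
  if (Loaded == &expected_impl)   // frequent branch: direct call
    return expected_impl(Obj);
  __builtin_trap();               // unlikely branch: confirm the UB loudly
}
```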
https://reviews.llvm.org/D115492 skips unreachable functions and
potentially allows more static devirtualizations. The motivation is to
ignore the virtual deleting destructor of an abstract class (e.g.,
`Base::~Base()` in https://gcc.godbolt.org/z/dWMsdT9Kz).
* Note WPD already handles most pure virtual functions (like `Base::x()`
in the godbolt example above), which become a `__cxa_pure_virtual` in
the vtable slot.
This PR proposes to undo that change, because it turns out there are
other unreachable functions that a program may legitimately invoke in
order to fail intentionally, with `LOG(FATAL)` or `CHECK` [1] for
example. While real-world applications are encouraged to check-fail
sparingly, they are allowed to do so on critical errors (e.g., when a
misconfiguration or bug is detected during server startup).
* Implementation-wise, this PR keeps the one-bit 'unreachable' state in
bitcode and updates the WPD analysis.
https://gcc.godbolt.org/z/T1aMhczYr is a minimal reproducible example
extracted from a unit test. `Base::func` is a one-liner, `LOG(FATAL) <<
"message"`, and is lowered to one basic block ending with `unreachable`.
A real-world program is _allowed_ to invoke `Base::func` to terminate
the program as a way to report errors (in a server initialization stage,
for example), even if errors on the serving path should be handled more
gracefully. A sketch of this shape follows the footnote below.
[1] https://abseil.io/docs/cpp/guides/logging#CHECK and
https://abseil.io/docs/cpp/guides/logging#configuration-and-flags
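A minimal sketch of the shape being described (an illustrative stand-in
for the godbolt example; `check_fail` plays the role of `LOG(FATAL)`):
```cpp
#include <cstdio>
#include <cstdlib>

[[noreturn]] void check_fail(const char *Msg) {
  std::fprintf(stderr, "%s\n", Msg);
  std::abort();
}

struct Base {
  // The body ends in a noreturn call, so the IR for Base::func is a
  // single basic block terminated by `unreachable`. It is still a
  // legitimate call target, e.g. during server startup.
  virtual void func() { check_fail("fatal: bad configuration"); }
};
```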
This option applies to _import_ WPD (i.e., when the `DevirtModule` pass
devirtualizes according to an imported summary, in the ThinLTO backend
pipeline). It's meant for debugging (e.g., bisection).
Rename the function to reflect its correct behavior and to be consistent
with `Module::getOrInsertFunction`. This is also in preparation for
adding a new `Intrinsic::getDeclaration` that will behave like
`Module::getFunction` (i.e., just lookup, no creation).
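A hedged usage sketch, assuming the renamed entry point is
`Intrinsic::getOrInsertDeclaration` (the name matching
`Module::getOrInsertFunction`):
```cpp
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"
using namespace llvm;

Function *typeTestDecl(Module &M) {
  // Looks up the declaration and creates it if missing, mirroring
  // Module::getOrInsertFunction; llvm.type.test is not overloaded, so
  // no type arguments are needed.
  return Intrinsic::getOrInsertDeclaration(&M, Intrinsic::type_test);
}
```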
It is almost always simpler to use {} instead of std::nullopt to
initialize an empty ArrayRef. This patch changes all occurrences I could
find in LLVM itself. In the future, the ArrayRef(std::nullopt_t)
constructor could be deprecated or removed.
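The shape of the mechanical change (the callee is an illustrative
stand-in):
```cpp
#include "llvm/ADT/ArrayRef.h"
#include <optional>

int sum(llvm::ArrayRef<int> Xs) {
  int S = 0;
  for (int X : Xs)
    S += X;
  return S;
}

int demo() {
  int Old = sum(std::nullopt); // old: relies on ArrayRef(std::nullopt_t)
  int New = sum({});           // new: a plain empty ArrayRef
  return Old + New;
}
```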
Since `raw_string_ostream` doesn't own the string buffer, it is
desirable (in terms of memory safety) for users to directly reference
the string buffer rather than use `raw_string_ostream::str()`.
This works towards the TODO comment to remove
`raw_string_ostream::str()`.
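A minimal sketch of the preferred pattern (the function is
illustrative):
```cpp
#include "llvm/Support/raw_ostream.h"
#include <string>

std::string render(int N) {
  std::string Buf;
  llvm::raw_string_ostream OS(Buf);
  OS << "value: " << N;
  // raw_string_ostream writes through to Buf, so return the owned
  // buffer directly instead of calling OS.str().
  return Buf;
}
```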
We have a lot of repeated code with random constants. The particular
values are not important; one just needs to be bigger than the other.
UR_NONTAKEN_WEIGHT is selected as it's the most common one.
As part of the RemoveDIs project we need LLVM to insert instructions using
iterators wherever possible, so that the iterators can carry a bit of
debug-info. This commit implements some of that by updating the contents of
llvm/lib/Transforms/Utils to always use iterator-versions of instruction
constructors.
There are two general flavours of update:
* Almost all call-sites just call getIterator on an instruction
* Several make use of an existing iterator (scenarios where the code is
actually significant for debug-info)
The underlying logic is that any call to getFirstInsertionPt or similar
APIs that identify the start of a block needs to have that iterator
passed directly to the insertion function, without being converted to a
bare Instruction pointer along the way, as in the sketch below.
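A minimal sketch of the pattern (the helper is illustrative):
```cpp
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

void positionAtBlockStart(IRBuilder<> &Builder, BasicBlock &BB) {
  // Old style stripped the position to a bare Instruction*:
  //   Builder.SetInsertPoint(&*BB.getFirstInsertionPt());
  // Keep the iterator instead, so it can carry its debug-info bit:
  Builder.SetInsertPoint(BB.getFirstInsertionPt());
}
```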
Noteworthy changes:
* FindInsertedValue now takes an optional iterator rather than an
instruction pointer, as we need to always insert with iterators,
* I've added a few iterator-taking versions of some value-tracking and
DomTree methods -- they just unwrap the iterator. These are purely
convenience methods to avoid extra syntax in some passes.
* A few calls to getNextNode become std::next instead (to keep in the
theme of using iterators for positions),
* SeparateConstOffsetFromGEP has its insertion-position field changed.
Noteworthy because it's not a purely localised spelling change.
All this should be NFC.
This adds support for a HasTailCall flag on function call edges in the
ThinLTO summary. It is intended to aid discovery of frames missing from
profiled call stacks due to tail calls, for MemProf profiles of binaries
that did not disable tail call elimination. A follow-on change will add
the use of this new flag during MemProf context disambiguation.
The new flag is encoded in the bitcode along with either the hotness
flag from the profile, or the relative block frequency under the
-write-relbf-to-summary flag when there is no profile data.
Because we now will always have some additional call edge information, I
have removed the non-profile function summary record format, and we
simply encode the tail call flag along with a hotness type of none when
there is no profile information or relative block frequency. The change
of record format and name caused most of the test case changes.
I have added explicit testing of generation of the new tail call flag
into the bitcode and IR assembly format as part of the changes to
llvm/test/Bitcode/thinlto-function-summary-refgraph.ll. I have also
added round trip testing through assembly and bitcode to
llvm/test/Assembler/thinlto-summary.ll.
Discussion about this approach: https://discourse.llvm.org/t/rfc-safer-whole-program-class-hierarchy-analysis/65144/18
When enabling WPD in an environment where native binaries are present, types we want to optimize can also be derived from inside those native files, and devirtualizing them can lead to correctness issues. RTTI can be used to determine all such types in native files and exclude them from WPD, providing a safe, checked way to enable WPD.
The approach is:
1. In the linker, identify whether RTTI is available for all native types. If not, then under `--lto-validate-all-vtables-have-type-infos`, `--lto-whole-program-visibility` is automatically disabled. This is done by examining all .symtab symbols in object files and .dynsym symbols in DSOs for vtable (_ZTV) and typeinfo (_ZTI) symbols, ensuring there's always a match for every vtable symbol (see the sketch after this list).
2. During the thin link, if `--lto-validate-all-vtables-have-type-infos` is set and RTTI is available for all native types, identify all typename (_ZTS) symbols via their corresponding typeinfo (_ZTI) symbols that are used natively or outside of our summary, and exclude them from WPD.
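A hedged sketch of the validation idea in step 1, as a plain C++
stand-in (the linker's actual implementation differs):
```cpp
#include <set>
#include <string>

// Every vtable symbol _ZTV<name> must have a matching typeinfo symbol
// _ZTI<name>; if any native vtable lacks RTTI, validation fails and
// whole-program visibility is not asserted.
bool allVTablesHaveTypeInfos(const std::set<std::string> &Symbols) {
  for (const std::string &S : Symbols)
    if (S.rfind("_ZTV", 0) == 0 && !Symbols.count("_ZTI" + S.substr(4)))
      return false;
  return true;
}
```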
Testing:
ninja check-all
A large Meta service that uses boost, glog, and libstdc++.so runs successfully with WPD via --lto-whole-program-visibility. Previously, native types in boost caused incorrect devirtualization that led to crashes.
Reviewed By: MaskRay, tejohnson
Differential Revision: https://reviews.llvm.org/D155659
Since we no longer support typed LLVM IR pointer types, the code can be
simplified, for example by using PointerType::get directly instead of
Type::getInt8PtrTy, Type::getInt32PtrTy, etc.
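A minimal sketch of the replacement (the helper is illustrative; with
opaque pointers there is a single pointer type per address space):
```cpp
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"
using namespace llvm;

Type *ptrTy(LLVMContext &Ctx) {
  // Replaces Type::getInt8PtrTy(Ctx), Type::getInt32PtrTy(Ctx), etc.;
  // the pointee type is gone, only the address space matters.
  return PointerType::get(Ctx, /*AddressSpace=*/0);
}
```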
Differential Revision: https://reviews.llvm.org/D156733
This adds a type_checked_load_relative intrinsic whose semantics are to
load a relative function pointer.
A relative function pointer is a pointer to a 32-bit value that, when
added to its address, yields the address of the function.
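A minimal sketch of the decoding as plain C++ (an illustrative stand-in
for the intrinsic's semantics):
```cpp
#include <cstdint>

// The 32-bit entry, added to its own address, yields the function's
// address.
uintptr_t decodeRelativeFnPtr(const int32_t *Slot) {
  return reinterpret_cast<uintptr_t>(Slot) + *Slot;
}
```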
Differential Revision: https://reviews.llvm.org/D143204
We can't have a call with a constant target with a ptrauth bundle, so
remove the ptrauth bundle operand in such a case.
rdar://105696396
Differential Revision: https://reviews.llvm.org/D144581
Partial progress towards removing in-tree uses of `Type::getPointerTo`,
before we can deprecate the API.
If the API is used solely to support an unnecessary bitcast, get rid of
the bitcast as well.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D153933
It's possible to segfault in `DevirtModule::applyICallBranchFunnel` when
attempting to call `getCaller` on a call base that was erased in a prior
iteration. This can occur when attempting to find devirtualizable calls
via `findDevirtualizableCallsForTypeTest` if the vtable passed to
llvm.type.test is a global and not a local. The function works by taking
the first argument of the llvm.type.test call (which is a vtable),
iterating through all of its uses, and adding to a vector any uses that
are calls associated with that intrinsic call. For most cases, where the
vtable is actually a *local*, this wouldn't be an issue.
Take for example:
```
define i32 @fn(ptr %obj) #0 {
%vtable = load ptr, ptr %obj
%p = call i1 @llvm.type.test(ptr %vtable, metadata !"typeid2")
call void @llvm.assume(i1 %p)
%fptr = load ptr, ptr %vtable
%result = call i32 %fptr(ptr %obj, i32 1)
ret i32 %result
}
```
`findDevirtualizableCallsForTypeTest` will check the call base `%result
= call i32 %fptr(ptr %obj, i32 1)`, find that it is associated with a
devirtualizable call from `%vtable`, find all loads of `%vtable`, and
add to a vector any instances where those load results are called. Now
consider the case where `%vtable` is instead the global itself rather
than a local:
```
define i32 @fn(ptr %obj) #0 {
%p = call i1 @llvm.type.test(ptr @vtable, metadata !"typeid2")
call void @llvm.assume(i1 %p)
%fptr = load ptr, ptr @vtable
%result = call i32 %fptr(ptr %obj, i32 1)
ret i32 %result
}
```
`findDevirtualizableCallsForTypeTest` should work normally and add one
unique call instance to a vector. However, if there are multiple
instances where this same global is used for llvm.type.test, like with:
```
define i32 @fn(ptr %obj) #0 {
%p = call i1 @llvm.type.test(ptr @vtable, metadata !"typeid2")
call void @llvm.assume(i1 %p)
%fptr = load ptr, ptr @vtable
%result = call i32 %fptr(ptr %obj, i32 1)
ret i32 %result
}
define i32 @fn2(ptr %obj) #0 {
%p = call i1 @llvm.type.test(ptr @vtable, metadata !"typeid2")
call void @llvm.assume(i1 %p)
%fptr = load ptr, ptr @vtable
%result = call i32 %fptr(ptr %obj, i32 1)
ret i32 %result
}
```
Then each call base `%result = call i32 %fptr(ptr %obj, i32 1)` will be
added to the vector twice. This is because, for either call base, we
determine it is associated with a devirtualizable call from `@vtable`,
and then iterate through all the uses of `@vtable`, which is used across
multiple functions. So when scanning the first `%result = call i32
%fptr(ptr %obj, i32 1)`, both call bases are added to the vector, and
when scanning the second one, both are added again, resulting in
duplicate call bases in the CSInfo.CallSites vector.
Note this is actually accounted for in every other place where WPD
iterates over CallSites: everything else adds the call base to the
`OptimizedCalls` set and checks whether it's already in the set. We
can't reuse that particular set here, since it serves a different
purpose, marking which calls were devirtualized, which
`applyICallBranchFunnel` explicitly says it doesn't do. For this fix, we
can just account for duplicates with a map and do the actual
replacements afterwards by iterating over the map, as sketched below.
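A hedged sketch of the dedup idea with plain C++ stand-ins (not the
pass's actual types or replacement logic):
```cpp
#include <cstdio>
#include <map>
#include <vector>

struct Call { int Id; };          // stand-in for a call base

void replaceWithFunnel(Call *C) { // stand-in for the actual replacement
  std::printf("replacing call %d\n", C->Id);
}

void replaceOnce(const std::vector<Call *> &CallSites) {
  std::map<Call *, bool> Pending;
  for (Call *C : CallSites)
    Pending[C] = true;            // duplicate entries collapse to one
  for (auto &Entry : Pending)
    replaceWithFunnel(Entry.first); // each call base replaced exactly once
}
```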
Differential Revision: https://reviews.llvm.org/D146267
Prior to this patch, WPD was not acting on relative-vtables in C++. This
involves teaching WPD about these things:
- llvm.load.relative which is how relative-vtables are indexed (instead of GEP)
- dso_local_equivalent which is used in the vtable itself when taking the
offset between a virtual function and vtable
- Update llvm/test/ThinLTO/X86/devirt.ll to use opaque pointers and add
equivalent tests for RV
Differential Revision: https://reviews.llvm.org/D134320
Follow-on to D144209, supporting single implementation devirtualization
for Regular LTO when the vtable holds a function alias.
For now I have prevented other optimizations performed in regular LTO
that need to analyze the contents of the function target when the vtable
holds an alias, as I'm not sure they are always correct to perform in
that case.
Differential Revision: https://reviews.llvm.org/D144270
We were not summarizing a function alias in the vtable, leading to
incorrect WPD in some cases, and missing WPD in others.
Specifically, we would end up ignoring function aliases as they aren't
summarized, so we could incorrectly devirtualize if there was a single
other non-alias function in a compatible vtable. And if there was only
one implementation, but it was an alias, we would not be able to
identify and perform the single implementation devirtualization.
Handling the alias summary correctly also required fixing the handling
in mustBeUnreachableFunction, so that it is not incorrectly ignored.
Regular LTO is conservatively correct because it will skip
devirtualizing when any pointer within a vtable is not a function.
However, it needs additional work to be able to take advantage of a
function alias within the vtable that is in fact the only
implementation. For that reason, the Regular LTO testing in the second
test case is currently disabled, and will be enabled along with a follow
on enhancement fix for Regular LTO WPD.
Differential Revision: https://reviews.llvm.org/D144209
This patch adds several missing GlobalList modifier functions, such as
removeGlobalVariable(), eraseGlobalVariable(), and
insertGlobalVariable(). There is no longer a need to access the list
directly, so this patch also makes getGlobalList() private.
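A hedged usage sketch of one of the new modifiers (assuming it takes a
GlobalVariable pointer, per the patch description):
```cpp
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/Module.h"
using namespace llvm;

void dropGlobal(Module &M, GlobalVariable *GV) {
  // Instead of reaching into M.getGlobalList() (now private), use the
  // dedicated modifier, which unlinks GV from the module and deletes it.
  M.eraseGlobalVariable(GV);
}
```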
Differential Revision: https://reviews.llvm.org/D144027
This patch drops the ZeroBehavior parameter from bit counting
functions like countLeadingZeros. ZeroBehavior specifies the behavior
when the input to count{Leading,Trailing}Zeros is zero and when the
input to count{Leading,Trailing}Ones is all ones.
ZeroBehavior was first introduced on May 24, 2013 in commit
eb91eac9fb866ab1243366d2e238b9961895612d. While that patch did not
state the intention, I would guess ZeroBehavior was for performance
reasons. The x86 machines around that time required a conditional
branch to implement countLeadingZero<uint32_t> that returns the 32 on
zero:
test edi, edi
je .LBB0_2
bsr eax, edi
xor eax, 31
.LBB1_2:
mov eax, 32
That is, we can remove the conditional branch if we don't care about
the behavior on zero.
IIUC, Intel's Haswell architecture, launched on June 4, 2013,
introduced several bit manipulation instructions, including lzcnt and
tzcnt, which eliminated the need for the conditional branch.
I think it's time to retire ZeroBehavior as its utility is very
limited. If you care about compilation speed, you should build LLVM
with an appropriate -march= to take advantage of lzcnt and tzcnt.
Even if not, modern host compilers should be able to optimize away
quite a few conditional branches because the input is often known to
be nonzero from dominating conditional branches.
Differential Revision: https://reviews.llvm.org/D141798
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
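The shape of the mechanical replacement (the function is illustrative):
```cpp
#include <optional>

std::optional<int> parsePositive(int N) {
  if (N <= 0)
    return std::nullopt; // was: return None;
  return N;
}
```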