When all section contents are updated in-place, we can skip creation of
new segment(s), save disk space, and free up low memory addresses.
Currently, this feature only works with --use-gnu-stack.
Refactor the code for NewTextSegmentAddress to correctly point at the
true start of the segment when the PHDR table is placed at the beginning.
We used to offset NewTextSegmentAddress by the PHDR table size plus
cache-line alignment.
NFC for proper binaries. Some YAML binaries from our tests will diverge
due to bad segment address/offset alignment.
This is the culmination of a series of changes described in [1].
Although somewhat large by line count, it is almost entirely mechanical,
creating a new library in DebugInfo/DWARF/LowLevel. This new library has
very minimal dependencies, allowing it to be used from more places than
the normal DebugInfo/DWARF library, in particular from MC.
I am happy to put it in another location, or to structure it differently
if that makes sense. Some have suggested BinaryFormat, but it is not
a great fit there; still, if the reviewers prefer that, I can do it.
Another possibility would be to use pass-through headers so that clients
who don't care about the split can keep depending only on
DebugInfo/DWARF. This would be a much less invasive change, and perhaps
easier for clients, but it would also hide the layering details.
Either way, I'm open.
[1] https://discourse.llvm.org/t/rfc-debuginfo-dwarf-refactor-into-to-lower-and-higher-level-libraries/86665/2
Implement the detection of tail calls performed with an untrusted link
register, which violates the assumption made on entry to every function.
Unlike other pauth gadgets, detection of this one involves some amount
of guessing which branch instructions should be checked as tail calls.
In gs-pacret-autiasp.s, the undefined call `bl g` causes inconsistent
basic block splitting: on some platforms BOLT emits two blocks, on
others only one.
Defining a dummy `g` symbol forces a single basic block everywhere.
Currently NFC tests only trigger when the llvm-bolt binary itself
changes.
This patch adds `--check-bolt-sources`, which scans git output for any
modifications under bolt/, excluding:
- bolt/docs
- bolt/utils/docker
- bolt/utils/dot2html
If any matching files change between versions, a `.llvm-bolt.changes`
marker is created. Buildbots can then use this marker to trigger in-tree
tests.
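A minimal sketch of the kind of check this option performs (the revision
variables and output location are illustrative, not taken from the actual
script):
```bash
# Illustrative only: OLD_REV/NEW_REV and the marker location are assumptions.
if git diff --name-only "$OLD_REV" "$NEW_REV" -- bolt/ \
    | grep -v -e '^bolt/docs/' -e '^bolt/utils/docker/' -e '^bolt/utils/dot2html/' \
    | grep -q .; then
  touch .llvm-bolt.changes
fi
```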
Address an issue that stems from how profile density is computed.
Binary *function* density is the ratio of a function's total number of
dynamically executed bytes to its static size in bytes. It measures the
amount of dynamic profile information relative to the function's static
size.
Binary *profile* density is the minimum *function* density among
*well-profiled* functions, taken as the functions covering p99 of
samples, or, in other words, excluding functions in the tail 1% of
samples. p99 is an arbitrary cutoff. Profile density thus reflects the
*minimum amount of profile information per function* needed to optimize
the program well. The threshold for profile density is set empirically.
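Restating the two definitions in rough notation (illustrative only):
```
function_density(f) = dynamically_executed_bytes(f) / static_size_in_bytes(f)
profile_density     = min(function_density(f)) over functions covering p99 of samples
```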
The dynamically executed bytes are taken directly from LBR fall-throughs
and for LBRs recorded in trampoline functions, such as
```
000000001a941ec0 <Sleef_expf8_u10>:
1a941ec0: jmpq *0x37b911fa(%rip) # <pnt_expf8_u10>
1a941ec6: nopw %cs:(%rax,%rax)
```
the fall-through has zero length:
```
# Branch Target NextBranch Count
T 1b171cf6 1a941ec0 1a941ec0 568562
```
But it's not correct to say this function has zero executed bytes; it's
just that the size of the next branch is not included in the fall-through.
If such functions have a non-trivial sample count, they will fall within
the p99 samples and drive the profile density to zero.
To solve this, we can either:
1. Include the size of the fall-through end jump in the executed bytes:
   logically sound but technically challenging: the size needs to come
   from disassembly (expensive), and the threshold needs to be
   reevaluated with the updated definition of binary function density.
2. Exclude pass-through functions from density computation:
   this follows the intent of profile density, which is to set the amount
   of profile information needed to optimize the function well.
   Single-instruction pass-through functions don't need sample counts
   many times their size to be optimized well.
Go with option 2 as a reasonable compromise.
Test Plan: added bolt/test/X86/zero-density.s
After a label in a function without CFG information, use a reasonably
pessimistic estimation of register state (assume that any register that
can be clobbered in this function was actually clobbered) instead of the
most pessimistic "all registers are unsafe". This is the same estimation
as used by the dataflow variant of the analysis when the preceding
instruction is not known for sure.
Without this, leaf functions without CFG information are likely to have
false positive reports about non-protected return instructions, as
1) LR is unlikely to be signed and authenticated in a leaf function and
2) LR is likely to be used by a return instruction near the end of the
function and
3) the register state is likely to be reset at least once during the
linear scan through the function
Instead of refusing to analyze an instruction completely when it is
unreachable according to the CFG reconstructed by BOLT, use a pessimistic
assumption about register state when possible. Nevertheless, unreachable
basic blocks found in optimized code likely mean imprecise CFG
reconstruction, so report a warning once per function.
Rename these relocation specifier constants, aligning with the naming
convention used by other targets (`S_` instead of `VK_`).
* ELF/COFF: AArch64MCExpr::VK_ => AArch64::S_ (VK_ABS/VK_PAGE_ABS are
also used by Mach-O as a hack)
* Mach-O: AArch64MCExpr::M_ => AArch64::S_MACHO_
* shared: AArch64MCExpr::None => AArch64::S_None
Apologies for the churn following the recent rename in #132595. This
change ensures consistency after introducing MCSpecifierExpr to replace
MCTargetSpecifier subclasses.
Pull Request: https://github.com/llvm/llvm-project/pull/144633
Replace AArch64MCExpr, which encodes expressions with relocation
specifiers, with the new generic MCSpecifierExpr interface, aligning
with other targets by phasing out target-specific XXXMCExpr classes.
Temporarily convert AArch64MCExpr to a namespace to avoid renaming
`AArch64MCExpr::VK_` constants in this PR. A follow-up patch will rename
these to `AArch64::S_` to match the convention used by other targets.
Move helper functions to AArch64MCAsmInfo.h, with the goal of eventually
removing AArch64MCExpr.h.
Pull Request: https://github.com/llvm/llvm-project/pull/144632
Remove the SymbolToFileName mapping from every local symbol to its
containing FILE symbol name, and reuse FileSymbols to disambiguate
local symbols instead.
Also remove the check for the `ld-temp.o` file symbol, which was added to
prevent the LTO build mode from affecting the disambiguated name. This may
cause incompatibility when using a profile collected on a binary built
in a different mode than the input binary.
Addresses #90661.
Speeds up file object discovery by 5-10% for large binaries:
- binary with ~1.2M symbols: 12.6422s -> 12.0297s
- binary with ~4.5M symbols: 48.8851s -> 43.7315s
This change speeds up fragment matching for large BOLTed binaries where
all fragments of global parent functions are put under the
`bolt-pseudo.o` file symbol:
- before: iterating over symbols under `bolt-pseudo.o` only to fail
to find a parent,
- after: bail out immediately and use a global parent by name.
Test Plan: NFC, updated register-fragments-bolt-symbols.s
`BoltAddressTranslation::getFallthroughsInTrace` iterates over address
translation map entries and therefore has direct access to both original
and translated offsets. Return the translated offsets in fall-throughs
list to avoid duplicate address translation inside `doTrace`.
Test Plan: NFC
Address NFC mismatches caused by running perf2bolt from under the
wrapper script:
https://lab.llvm.org/buildbot/#/builders/92/builds/20938
> <stdin>:2:64: note: possible intended match here
> /home/worker/bolt-worker2/bolt-x86_64-ubuntu-nfc/build/bin/llvm-bolt.old: -spe is available only on AArch64.
Test Plan:
ninja check-bolt
Intel's Architectural LBR supports capturing branch type information
as part of LBR stack (SDM Vol 3B, part 2, October 2024):
```
20.1.3.2 Branch Types
The IA32_LBR_x_INFO.BR_TYPE and IA32_LER_INFO.BR_TYPE fields encode
the branch types as shown in Table 20-3.
Table 20-3. IA32_LBR_x_INFO and IA32_LER_INFO Branch Type Encodings
Encoding | Branch Type
0000B | COND
0001B | NEAR_IND_JMP
0010B | NEAR_REL_JMP
0011B | NEAR_IND_CALL
0100B | NEAR_REL_CALL
0101B | NEAR_RET
011xB | Reserved
1xxxB | OTHER_BRANCH
For a list of branch operations that fall into the categories above,
see Table 20-2.
Table 20-2. Branch Type Filtering Details
Branch Type | Operations Recorded
COND | Jcc, J*CXZ, and LOOP*
NEAR_IND_JMP | JMP r/m*
NEAR_REL_JMP | JMP rel*
NEAR_IND_CALL | CALL r/m*
NEAR_REL_CALL | CALL rel* (excluding CALLs to the next sequential IP)
NEAR_RET | RET (0C3H)
OTHER_BRANCH | JMP/CALL ptr*, JMP/CALL m*, RET (0C8H), SYS*,
interrupts, exceptions (other than debug exceptions), IRET, INT3,
INTn, INTO, TSX Abort, EENTER, ERESUME, EEXIT, AEX, INIT, SIPI, RSM
```
The Linux kernel can preserve the branch type when `save_type` is
enabled, even if the CPU does not support Architectural LBR:
f09079bd04/tools/perf/Documentation/perf-record.txt (L457-L460)
> - save_type: save branch type during sampling in case binary is not
available later.
On platforms with Intel Arch LBR support (12th-gen+ client or
4th-gen Xeon+ server), saving the branch type is unconditionally enabled
when taken-branch stack sampling is enabled.
Kernel-reported branch type values:
8c6bc74c7f/include/uapi/linux/perf_event.h (L251-L269)
This information is needed to disambiguate external returns (from
DSO/JIT) to an entry point or a landing pad, when BOLT can't
disassemble the branch source.
This patch adds new pre-aggregated types:
- return trace (R),
- external return fall-through (r).
For such types, the checks for fall-through start (not an entry or
a landing pad) are relaxed.
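For illustration, assuming a return trace uses the same field layout as
the `T` traces shown above (all values made up):
```
# Branch Target NextBranch Count
R 400510 400700 400720 100
```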
Depends on #143295.
Test Plan: updated callcont-fallthru.s
With Linux 6.14, perf gained the ability to report SPE branch events
using the `brstack` format, which matches the layout of LBR/BRBE.
This patch reuses the existing LBR parsing logic to support SPE.
Example SPE brstack format:
```bash
perf script -i perf.data -F pid,brstack --itrace=bl
```
```
PID FROM / TO / PREDICTED
16984 0x72e342e5f4/0x72e36192d0/M/-/-/11/RET/-
16984 0x72e7b8b3b4/0x72e7b8b3b8/PN/-/-/11/COND/-
16984 0x72e7b92b48/0x72e7b92b4c/PN/-/-/8/COND/-
16984 0x72eacc6b7c/0x760cc94b00/P/-/-/9/RET/-
16984 0x72e3f210fc/0x72e3f21068/P/-/-/4//-
16984 0x72e39b8c5c/0x72e3627b24/P/-/-/4//-
16984 0x72e7b89d20/0x72e7b92bbc/P/-/-/4/RET/-
```
SPE brstack flags can be two characters long: `PN` or `MN`:
- `P` = predicted branch
- `M` = mispredicted branch
- `N` = optionally appears when the branch is NOT-TAKEN
- the `N` flag is relevant only to conditional branches
Example of usage with BOLT:
1. Capture SPE branch events:
```bash
perf record -e 'arm_spe_0/branch_filter=1/u' -- binary
```
2. Convert profile for BOLT:
```bash
perf2bolt -p perf.data -o perf.fdata --spe binary
```
3. Run BOLT Optimization:
```bash
llvm-bolt binary -o binary.bolted --data perf.fdata ...
```
A unit test verifies the parsing of the 'SPE brstack format'.
---------
Co-authored-by: Paschalis Mpeis <paschalis.mpeis@arm.com>
Some instruction-printing code used under LLVM_DEBUG does not handle CFI
instructions well. While CFI instructions seem to be harmless for the
correctness of the analysis results, they do not convey any useful
information to the analysis either, so skip them early.
Implement the detection of authentication instructions whose results can
be inspected by an attacker to learn whether authentication succeeded.
Since this analysis inspects the properties of the output registers of
authentication instructions, add a second set of analysis-related classes
that iterate over the instructions in reverse order.
Call continuation logic relies on assumptions about fall-through origin:
- the branch is external to the function,
- fall-through start is at the beginning of the block,
- the block is not an entry point or a landing pad.
Leverage trace information to explicitly check whether the origin is a
return instruction, and fall back to the checks above only in the case of
a DSO-external branch source.
This covers both regular and BAT cases, addressing call continuation
fall-through undercounting in the latter mode, which improves BAT
profile quality metrics. For example, for one large binary:
- CFG discontinuity 21.83% -> 0.00%,
- CFG flow imbalance 10.77%/100.00% -> 3.40%/13.82% (weighted/worst)
- CG flow imbalance 8.49% -> 8.49%.
Depends on #143289.
Test Plan: updated callcont-fallthru.s
Consistently apply traces as defined in #127125 for branch profile
aggregation. This combines branches and fall-through records into one.
With large input binaries/profiles, the speed-up in aggregation time
(`-time-aggr`, wall time) is:
- perf.data, pre-BOLT input: 154.5528s -> 144.0767s
- pre-aggregated data, pre-BOLT input: 15.1026s -> 9.0711s
- pre-aggregated data, BOLTed input: 15.4871s -> 10.0077s
Test Plan: NFC
The CMake flag LLVM_APPEND_VC_REV can be passed when building BOLT to
prevent including a VC revision. This patch enables this functionality.
Usage: `-DLLVM_APPEND_VC_REV=OFF` when running CMake.
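For example, a configure step might look like this (the other options
shown are illustrative):
```bash
cmake -G Ninja -S llvm -B build \
  -DLLVM_ENABLE_PROJECTS="bolt" \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLVM_APPEND_VC_REV=OFF
```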
* Move getPCRelHiFixup closer to the only caller RISCVAsmBackend::evaluateTargetFixup.
* Declare getSpecifierForName in RISCVMCAsmInfo, in line with other
targets that have migrated to the new relocation specifier representation.
Introduce the `parse-mem-profile` option to limit the overhead of
processing tracing data (Intel PT or ARM ETM). By default, it is enabled
for perf data (the existing behavior), unless `itrace` is passed to parse
tracing data, where it is extremely expensive. In that case, the flag
needs to be set explicitly if needed.
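For example, to force memory-profile parsing when it would otherwise be
skipped (the exact flag spelling is assumed from the option name above):
```bash
# Illustrative invocation; the boolean flag spelling is an assumption.
perf2bolt -p perf.data -o out.fdata --parse-mem-profile ./binary
```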
On some AArch64 machines the splitting was inconsistent.
This caused the cold fragment of `foo` to have a `mov` instruction before
the `adrp`.
```
<foo.cold.0>:
mov x0, #0x0 // =0
adrp x1, 0x600000 <_start>
add x1, x1, #0x14
ret
```
This patch removes the `mov` instruction right above .L2, making
splitting deterministic.
Record the number of function invocations from external code, i.e. code
outside the binary, which may include JIT code and DSOs. Accounting for
external entry counts improves the fidelity of call graph flow
conservation analysis.
Test Plan: updated shrinkwrapping.test
Parsed branches and fall-throughs are validated in `doBranch` and
`doTrace` respectively. Simplify parseLBRSample by omitting the
validation. This also speeds up perf data processing as checks are only
done once for aggregated branches/fall-throughs and not individual LBR
entries.
Since invalid/external addresses are no longer sanitized during parsing,
sanitize them in `doBranch`.
Test Plan: updated X86/pre-aggregated-perf.test
Aggregated branch data has two containers: `Data` for local branches,
and `EntryData` for external branches. Fix the omission and sort
`EntryData` to ensure stable output fdata profiles.
Test Plan: updated pre-aggregated-perf.test
#140196 introduced UB by using an uninitialized misprediction count for
pre-aggregated traces. Fix this by zero-initializing both counters.
Test Plan: updated entry-point-fallthru.s
Do not recommend the strict mode to the user when ADR relaxation fails
on a non-simple function, i.e. a function with unknown CFG.
We cannot rely on relocations to reconstruct compiler-generated jump
tables for AArch64, hence strict mode does not work as intended.
When we call setIgnored() on functions that already have a CFG built,
these functions are not going to get emitted and we risk that external
function references are not updated.
To mitigate the potential issues, run scanExternalRefs() on such
functions to create patches/relocations.
Since scanExternalRefs() relies on function relocations, we have to
preserve relocations until the function is emitted. As a result, the
memory overhead without debug info update could reach up to 2%.
Define a pre-aggregated basic sample format:
```
E <event name>
S <location> <count>
```
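For illustration, a profile in this format might look as follows (the
event name, location spelling, and counts are made up):
```
E cycles
S 1 main 10 1050
S 1 foo 20 330
```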
`-nl` flag is required to use parsed basic samples.
Test Plan: update pre-aggregated-perf.test
The mispred and execnt values were originally recorded in reverse order
and then consumed in the opposite order to compensate.
This patch records and uses them in the same order (FDATA) for clarity.
That order is:
```
<is symbol?> <closest ELF symbol or DSO name> <relative FROM address>
<is symbol?> <closest ELF symbol or DSO name> <relative TO address>
<number of mispredictions> <number of branches>
```
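For illustration, a single record in this layout (symbol names, offsets,
and counts are made up) would be:
```
1 main 10 1 foo 0 0 225
```
i.e. a branch from `main` at offset 10 to `foo` at offset 0, with 0
mispredictions and 225 branches.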
Add a capability to produce multiple heatmaps with given bucket sizes.
The default heatmap block size (64B) could be too fine-grained for
large binaries. Extend the option `block-size` to accept a list of
bucket sizes for additional heatmaps with coarser granularity. The
heatmap is simply rescaled, so the provided sizes should be multiples of
each other. Human-readable suffixes can be used, e.g. 4K, 16kb, 1MiB.
New defaults: 64B (base bucket size), 4KB (default page size),
256KB (for large binaries).
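A hypothetical invocation requesting these sizes explicitly (the list
separator and exact option spelling are assumptions):
```bash
llvm-bolt-heatmap -p perf.data -block-size=64,4KB,256KB ./binary
```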
Test Plan: updated heatmap-preagg.test
The linker may omit data markers for long absolute veneers, causing BOLT
to treat data as code. Detect such veneers and introduce data markers
artificially before BOLT's disassembler kicks in.