In review of bbde6b, I had originally proposed that we support the
legacy text format. As review evolved, it became clear this had been a
bad idea (too much complexity), but in order to let that patch finally
move forward, I approved the change with the variant. This change undoes
the variant, and updates all the tests to just use the array form.
The insertion point of COPY isn't always optimal and could eventually
lead to a worse block layout; see the regression test in the first
commit.
This change affects many architectures, but the total number of
instructions in the test cases seems to be slightly lower.
This changes the final stage of InstrRef, i.e. the TransferTracker
(which combines the value locations with the variable values), so that
it treats a DEBUG_VALUE of an EntryValue just like a DEBUG_VALUE of a
constant: a location that is never clobbered and can be propagated to
subsequent BBs as long as no other DEBUG_VALUE intrinsic updates the
variable.
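In rough C++ terms, the intent looks like the sketch below. This is a minimal sketch, not the actual TransferTracker code: the helper names are hypothetical, and only DIExpression::isEntryValue() is a real API.

```
// On a clobber, keep variables whose location is an entry value: a
// DW_OP_LLVM_entry_value expression describes the value on function entry,
// so no later definition or clobber can invalidate it.
for (auto &ActiveVar : activeVariablesAt(ClobberedLoc)) { // hypothetical
  if (ActiveVar.Expr->isEntryValue())
    continue; // treated like a constant location: never terminated
  terminateLocation(ActiveVar);                           // hypothetical
}
```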
We add two tests here:
1. `entry_value_clobbered_stack_copy`, which saves a register on the
stack, uses this register as an entry value DBG_VALUE location, and then
clobbers it. Prior to this patch, this test would crash because we would
try to describe a new location for the variable in terms of what was
saved on the stack, and use an invalid expression to do so. This is not
needed as an EntryValue can never be clobbered.
2. `entry_value_gets_propagated`, which tests that an EntryValue
DBG_VALUE is propagated in a diamond-shaped CFG.
This patch is trying to reland
https://github.com/llvm/llvm-project/pull/77938 but also fixes the bug
with InstrRef based LiveDebugValues, where entry values were not being
propagated in a diamond-shaped CFG.
Register assembly printer passes in the pass registry.
This makes it possible to use `llc -start-before=<target>-asm-printer ...` in tests.
Adds a `char &ID` parameter to the AsmPrinter constructor to allow
targets to use the `INITIALIZE_PASS` macros and register the pass in the
pass registry. This currently has a default parameter so it won't break
any targets that have not been updated.
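As a hedged sketch for a hypothetical Foo target (only `INITIALIZE_PASS` and the new `char &ID` parameter come from the change itself; everything Foo-flavoured is made up):

```
#include "llvm/CodeGen/AsmPrinter.h"
#include "llvm/MC/MCStreamer.h"
#include <memory>
using namespace llvm;

// The target now owns a pass ID and hands it to the AsmPrinter base class,
// which lets the standard registration macro see the pass.
class FooAsmPrinter : public AsmPrinter {
public:
  static char ID;
  FooAsmPrinter(TargetMachine &TM, std::unique_ptr<MCStreamer> Streamer)
      : AsmPrinter(TM, std::move(Streamer), ID) {}
};
char FooAsmPrinter::ID = 0;

INITIALIZE_PASS(FooAsmPrinter, "foo-asm-printer", "Foo Assembly Printer",
                /*CFGOnly=*/false, /*IsAnalysis=*/false)
```

Once registered this way, `llc -start-before=foo-asm-printer` can name the pass.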
Windows x64 Unwind V2 adds epilog information to unwind data:
specifically, the length of the epilog and the offset of each epilog.
The first step is to add markers to the beginning and end of
each epilog when generating Windows x64 code. I've modelled this after
how LLVM was marking ARM and AArch64 epilogs in Windows (and unified the
code between the three).
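Conceptually the marking looks like the fragment below. This is an illustrative sketch only: the pseudo-opcode names and the `findEpilogStart` helper are assumptions, not taken from the patch.

```
// Bracket the epilog with begin/end pseudo-markers during frame lowering so
// the Windows unwind-info emitter can later compute each epilog's offset
// and length for Unwind V2.
MachineBasicBlock::iterator EpilogBegin = findEpilogStart(MBB); // hypothetical
BuildMI(MBB, EpilogBegin, DebugLoc(), TII->get(X86::SEH_BeginEpilogue));
BuildMI(MBB, MBB.getFirstTerminator(), DebugLoc(),
        TII->get(X86::SEH_EndEpilogue));
```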
We use variable locations such as DBG_VALUE $xmm0 as shorthand to refer
to "the low lane of $xmm0", and this is reflected in how DWARF is
interpreted too. However, InstrRefBasedLDV tries to be smart and
interprets such a DBG_VALUE as a 128-bit reference. We then issue a
DW_OP_deref_size of 128 bits to the stack, which isn't permitted by
DWARF (it's larger than a pointer).
Solve this for now by not using DW_OP_deref_size if it would be illegal.
Instead we'll use DW_OP_deref, and the consumer will load the variable
type from the stack, which should be correct.
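A minimal sketch of the rule, assuming the byte sizes were computed earlier; `Ops` is just a buffer of DIExpression operands, not the variable names used in the actual code:

```
// DW_OP_deref_size may read at most an address-sized value, so fall back to
// plain DW_OP_deref when the spilled value is wider than a pointer.
SmallVector<uint64_t, 2> Ops;
if (ValueSizeInBytes > PointerSizeInBytes) {
  Ops.push_back(dwarf::DW_OP_deref); // consumer loads the variable's type
} else {
  Ops.push_back(dwarf::DW_OP_deref_size);
  Ops.push_back(ValueSizeInBytes);   // operand is a byte count
}
```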
There's still a risk of imprecision when LLVM decides to use smaller or
larger value types than the source-variable type, which manifests as
too-little or too-much memory being read from the stack. However we
can't solve that without putting more type information in debug-info.
Fixes #64093
Similar to 806761a7629df268c8aed49657aeccffa6bca449
-mtriple= specifies the full target triple, while -march= merely sets the
architecture part, leaving the rest of the triple (e.g. whether the OS is
Windows or macOS) to the host default. Therefore, -march= is error-prone
and not recommended for tests without a target triple. The issue has been
benign, as these MIR tests do not rely on object-file-format-specific
details, but it's good to align these tests with neighboring files that
use -mtriple=x86_64.
The optimiser will produce empty blocks that are unconditionally
executed according to the CFG -- while it may not be meaningful code,
and won't get a prologue_end position, we must not crash on this
input.
The fault comes from assuming that there's always a next block with some
instructions in it, that will eventually produce some meaningful control
flow to stop at -- in the given reproducer in issue #117206 this isn't
true, because the function terminates with `unreachable`. Thus, I've
refactored the "get next instruction logic" into a helper that'll step
through all blocks and terminate if there aren't any more.
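The helper's rough shape (the name is hypothetical):

```
#include "llvm/CodeGen/MachineBasicBlock.h"
#include "llvm/CodeGen/MachineInstr.h"
using namespace llvm;

// Walk forward across possibly-empty blocks; report "no more instructions"
// rather than assuming a non-empty successor block exists.
static MachineInstr *firstInstrInOrAfter(MachineBasicBlock *MBB) {
  for (; MBB; MBB = MBB->getNextNode())
    if (!MBB->empty())
      return &*MBB->begin();
  return nullptr; // e.g. the function ends in `unreachable`
}
```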
Reproducer from aeubanks
In 39b2979a4, Pavel kindly refined the implementation of a test in such
a way that it doesn't trip over this patch -- the test wishes to
stimulate LLDB's presentation of line-0 locations, rather than wanting to
always step on line zero on entry to artificial_location.c. As that's what
was tripping up this change, reapply.
Original commit message follows.
[DWARF] Emit a worst-case prologue_end flag for pathological inputs (#107849)
prologue_end usually indicates where the end of the function-initialization
lies, and is where debuggers usually choose to put the initial breakpoint
for a function. Our current algorithm piggy-backs it on the first available
source-location: which doesn't necessarily have anything to do with the
start of the function.
To avoid this in heavily-optimised code that lacks many useful source
locations, pick a worst-case "if all else fails" prologue_end location, of
the first instruction that appears to do meaningful computation. It'll be
given the function-scope line number, which should run on from the start of
the function anyway. This means if your code is completely inverted by the
optimiser, you can at least put a breakpoint at the _start_ like you
expect, even if it's difficult to then step through.
This patch also attempts to preserve some good behaviour we have without
optimisations -- at O0, if the prologue immediately falls into a loop body
without any computation happening, then prologue_end lands at the start of
that loop. This is desirable; but does mean we need to do more work to
detect and support those situations.
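A simplified sketch of the fallback; the emission helper is hypothetical, and the "meaningful computation" test is condensed:

```
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/IR/DebugInfoMetadata.h"
using namespace llvm;

// If no suitable source location was found, flag the first instruction that
// appears to do meaningful computation, using the function-scope line.
// (Assumes the function has a DISubprogram attached.)
static void emitWorstCasePrologueEnd(const MachineFunction &MF) {
  unsigned FnLine = MF.getFunction().getSubprogram()->getLine();
  for (const MachineBasicBlock &MBB : MF)
    for (const MachineInstr &MI : MBB) {
      if (MI.isMetaInstruction())
        continue; // debug pseudos etc. are not meaningful computation
      emitPrologueEndAt(MI, FnLine); // hypothetical emission helper
      return;
    }
}
```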
This reverts commit bf483ddb42065405e345393e022dc72357ec5a3a.
See the PR: there's a test checking for this behaviour (possibly
adaptable), and a duplicate line entry too.
prologue_end usually indicates where the end of the function-initialization
lies, and is where debuggers usually choose to put the initial breakpoint
for a function. Our current algorithm piggy-backs it on the first available
source-location: which doesn't necessarily have anything to do with the
start of the function.
To avoid this in heavily-optimised code that lacks many useful source
locations, pick a worst-case "if all else fails" prologue_end location, of
the first instruction that appears to do meaningful computation. It'll be
given the function-scope line number, which should run on from the start of
the function anyway. This means if your code is completely inverted by the
optimiser, you can at least put a breakpoint at the _start_ like you
expect, even if it's difficult to then step through.
This patch also attempts to preserve some good behaviour we have without
optimisations -- at O0, if the prologue immediately falls into a loop body
without any computation happening, then prologue_end lands at the start of
that loop. This is desirable; but does mean we need to do more work to
detect and support those situations.
As defined in LangRef, branching on `undef` is undefined behavior.
This PR aims to remove undefined behavior from tests, as UB tests break
Alive2 and may be the root cause of breakage for future optimizations.
Here's an Alive2 proof for one of the examples:
https://alive2.llvm.org/ce/z/TncxhP
Both are based on MachineLICMBase, and the functionality there is
"switched" based on a PreRegAlloc flag. This commit is simply about
trusting the original value of that flag, defined by the `MachineLICM`
and `EarlyMachineLICM` classes.
The `PreRegAlloc` flag used to be overwritten based on MRI.isSSA(),
which is unreliable due to how it is inferred by the MIRParser. I see
that we can now define isSSA in MIR (thanks @gargaroff), meaning the
fix isn’t really needed anymore, but redefining that flag still feels
wrong.
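For illustration, the two variants boil down to something like this (abbreviated from the in-tree structure; treat the details as approximate):

```
// The flag is fixed at construction by each pass variant, so there is no
// need to re-derive it from MRI.isSSA() when the pass runs.
struct EarlyMachineLICM : MachineLICMBase {
  static char ID;
  EarlyMachineLICM() : MachineLICMBase(ID, /*PreRegAlloc=*/true) {}
};
struct MachineLICM : MachineLICMBase {
  static char ID;
  MachineLICM() : MachineLICMBase(ID, /*PreRegAlloc=*/false) {}
};
```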
Note that I'm looking into upstreaming more changes to MachineLICM, see
[the discourse
thread](https://discourse.llvm.org/t/extending-post-regalloc-machinelicm/82725).
LDV could reorder reinserted fragment and non-fragment debug values for
the same variable (compared to the input order), potentially resulting
in stale values being presented.
For example, before:
DBG_VALUE 1001, $noreg, !13, !DIExpression(DW_OP_LLVM_fragment, 0, 16)
DBG_VALUE 1002, $noreg, !13, !DIExpression(DW_OP_LLVM_fragment, 16, 16)
DBG_VALUE %0, $noreg, !13, !DIExpression()
After (without this patch):
DBG_VALUE %stack.0, 0, !13, !DIExpression()
DBG_VALUE 1002, $noreg, !13, !DIExpression(DW_OP_LLVM_fragment, 16, 16)
DBG_VALUE 1001, $noreg, !13, !DIExpression(DW_OP_LLVM_fragment, 0, 16)
It would also reorder DBG_VALUEs for different variables. Although that
does not matter for the debug information output, it resulted in some
noise in before/after pass diffs.
This should hopefully align things so that the instruction-referencing
and DBG_VALUE modes emit debug instructions in the same order (see the
sdag-salvage-add.ll change).
Reverted due to large .debug_line size regressions for some
configurations; work is underway in PR #108251 to improve the output of
this behaviour.
This patch also modifies two tests that were created or modified after
the original commit landed and are affected by the revert:
llvm/test/CodeGen/X86/pseudo_cmov_lower2.ll
llvm/test/DebugInfo/X86/empty-line-info.ll
This reverts commit 5fef40c2c477e92187bd4e5c18091eca6b8465cc.
Fixes the previous buildbot error by adding an explicit triple to the test,
ensuring that llc can produce a valid object file.
This reverts commit 926f0979af4f6172d4ed3dea5603aa97c800bef1.
Reverted (along with the NFC followup fix) due to buildbot failure:
https://lab.llvm.org/buildbot/#/builders/160/builds/4142
This reverts commit 3ef37e2f8f672393ee409fde8309198df0981735, and commit
616f7d3d4f6d9bea6f776e357c938847e522a681.
Fixes: https://github.com/llvm/llvm-project/issues/104695
This patch adds the is_stmt flag to line table entries for the first
instruction with a non-zero line location in each basic block, to ensure
that it will be used for stepping even if the last instruction in the
previous basic block had the same line number; this is important for
cases where the new BB is reachable from BBs other than the preceding
block.
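In sketch form (row emission elided; this is a condensed illustration, not the patch's exact code):

```
// Within each block, force is_stmt on the first instruction carrying a
// non-zero line: the block may be entered from a non-preceding predecessor,
// so "same line as the previous row" reasoning does not apply here.
bool SeenLineInBlock = false;
for (const MachineInstr &MI : MBB) {
  const DebugLoc &DL = MI.getDebugLoc();
  if (!DL || DL.getLine() == 0)
    continue;
  unsigned Flags = 0;
  if (!SeenLineInBlock)
    Flags |= DWARF2_FLAG_IS_STMT; // guarantee a step target here
  SeenLineInBlock = true;
  // ... emit the line-table row for MI with Flags ...
}
```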
Now revised to actually make the unit test compile, which I'd been
ignoring. No actual functional change; it's a type difference.
Original commit message follows.
[DebugInfo][InstrRef] Index DebugVariables and some DILocations (#99318)
A lot of time in LiveDebugValues is spent computing DenseMap keys for
DebugVariables, and they're made up of three pointers, so are large.
This patch installs an index for them: for the SSA and value-to-location
mapping parts of InstrRefBasedLDV we don't need to access things like
the variable declaration or the inlining site, so just use a uint32_t
identifier for each variable fragment that's tracked. The compile-time
performance improvements are substantial (almost 0.4% on the tracker).
About 80% of this patch is just replacing DebugVariable references with
DebugVariableIDs instead; however, there are some larger consequences. We
spend lots of time fetching DILocations when emitting DBG_VALUE
instructions, so index those with the DebugVariables: this means all
DILocations on all new DBG_VALUE instructions will normalise to the
first-seen DILocation for the variable (which should be fine).
We also used to keep an ordering of when each variable was seen first in
a DBG_* instruction, in the AllVarsNumbering collection, so that we can
emit new DBG_* instructions in a stable order. We can hang this off the
DebugVariable index instead, so AllVarsNumbering is deleted.
Finally, rather than ordering by AllVarsNumbering just before DBG_*
instructions are linked into the output MIR, store instructions along
with their DebugVariableID, so that they can be sorted by that instead.
A lot of time in LiveDebugValues is spent computing DenseMap keys for
DebugVariables, and they're made up of three pointers, so are large.
This patch installs an index for them: for the SSA and value-to-location
mapping parts of InstrRefBasedLDV we don't need to access things like
the variable declaration or the inlining site, so just use a uint32_t
identifier for each variable fragment that's tracked. The compile-time
performance improvements are substantial (almost 0.4% on the tracker).
About 80% of this patch is just replacing DebugVariable references with
DebugVariableIDs instead; however, there are some larger consequences. We
spend lots of time fetching DILocations when emitting DBG_VALUE
instructions, so index those with the DebugVariables: this means all
DILocations on all new DBG_VALUE instructions will normalise to the
first-seen DILocation for the variable (which should be fine).
We also used to keep an ordering of when each variable was seen first in
a DBG_* instruction, in the AllVarsNumbering collection, so that we can
emit new DBG_* instructions in a stable order. We can hang this off the
DebugVariable index instead, so AllVarsNumbering is deleted.
Finally, rather than ordering by AllVarsNumbering just before DBG_*
instructions are linked into the output MIR, store instructions along
with their DebugVariableID, so that they can be sorted by that instead.
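The interning scheme amounts to something like the following sketch (the class and method names are hypothetical):

```
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/DebugInfoMetadata.h"
using namespace llvm;

// Hand out dense uint32_t IDs so hot maps key on an integer rather than a
// three-pointer DebugVariable; remember the first-seen DILocation so new
// DBG_VALUEs can reuse it, and let insertion order double as the stable
// emission order that AllVarsNumbering used to provide.
class DebugVariableTable {
  DenseMap<DebugVariable, uint32_t> VarToID;
  SmallVector<std::pair<DebugVariable, const DILocation *>, 8> IDToVar;

public:
  uint32_t getOrInsertID(const DebugVariable &Var, const DILocation *Loc) {
    auto Res = VarToID.try_emplace(Var, IDToVar.size());
    if (Res.second)
      IDToVar.push_back({Var, Loc}); // first sighting wins
    return Res.first->second;
  }
  const DILocation *getFirstLoc(uint32_t ID) const {
    return IDToVar[ID].second;
  }
};
```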
- Use computeMaxCallFrameSize() in PEI::calculateCallFrameInfo() instead of duplicating the code.
- Set AdjustsStack in FinalizeISel instead of in computeMaxCallFrameSize().
Imagine a loop of the form:
```
preheader:
%r = def
header:
bcc latch, inner
inner1:
..
inner2:
b latch
latch:
%r = subs %r
bcc header
```
Code can spend a decent amount of time in the header<->latch loop
without going into the inner part of the loop much.
The greedy register allocator can prefer to spill _around_ %r though,
adding spills around the subs in the loop, which can be very detrimental
for performance. (The case I am looking at is actually a very deeply
nested set of loops that repeat the header<->latch pattern at multiple
different levels).
The greedy RA will prefer to spill the IV, as it is live through the
header block. This patch adds a heuristic to prevent that for variables
that look like IVs, similar in spirit to the extra spill weight that
already gets added to IV-like variables that are expensive to spill.
That means spills are more likely to be pushed into the inner blocks,
where they are less likely to be executed and are not as expensive as
spills around the IV.
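Purely as illustration, the shape of such a heuristic is below; the helper names and the multiplier are hypothetical, not the patch's actual code:

```
// Bias the spill weight of IV-like intervals that are live through the
// header<->latch path, so the allocator prefers spilling in the colder
// inner blocks instead of around the IV.
if (looksLikeInductionVariable(LI) && isLiveThroughLoopHeader(LI, Loops))
  Weight *= 1.3f; // hypothetical bias factor
```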
This gives an 8% speedup in the exchange benchmark from spec2017 when
compiled with flang-new, whilst, importantly, stabilising the scores so
they are less chaotic under other changes. Running ctmark showed no
difference in compile time. I've run a range of performance benchmarks,
most of which were relatively flat and showed no large differences. One
matrix-multiply case improved 21.3% due to removing a cascading chain of
spills, plus some other knock-on effects that usually cause small
differences in the scores.
This is a preparatory patch for extending DWARFDebugLine to properly
parse line number programs with maximum_operations_per_instruction > 1
for VLIW targets.
Add some scaffolding for handling op-index in line number programs, and
add printouts for that in the table. As this affects a lot of tests,
this is done in a separate commit to get a cleaner review for the actual
op-index implementation.
Verbose printouts are not present in many tests, and adding op-index to
those will require a few more code changes, so that is done in the
actual implementation patch.
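For reference, the DWARF v5 operation-advance rule that this op-index scaffolding prepares for can be written as a small self-contained sketch (member names are illustrative; the formula follows the spec):

```
#include <cstdint>

// DWARF v5 line-number state machine, address advance only: when
// maximum_operations_per_instruction > 1 (VLIW), an operation advance
// moves both the address and the op-index.
struct LineState {
  uint64_t Address = 0;
  uint64_t OpIndex = 0;
  uint64_t MinInstLength = 1;
  uint64_t MaxOpsPerInst = 1; // > 1 only for VLIW targets

  void advanceOperation(uint64_t OperationAdvance) {
    uint64_t Ops = OpIndex + OperationAdvance;
    Address += MinInstLength * (Ops / MaxOpsPerInst);
    OpIndex = Ops % MaxOpsPerInst;
  }
};
```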
Reviewed By: StephenTozer
Differential Revision: https://reviews.llvm.org/D152535
X86's CMOV conversion transforms CMOV instructions into control flow between
blocks, meaning the value is computed by a PHI rather than a "real" machine
instruction. In instruction-referencing mode, we need to transfer the
instruction label between the old CMOV and the new PHI instruction to mark
where the variable value is computed.
There's an extra complication in that memory operands can be unfolded from the
CMOV and sunk into the new blocks -- the test checks both scenarios where the
instruction number has to hop between instructions.
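The hand-off reduces to something like this sketch (simplified; the surrounding pass logic is omitted and the variable names are assumed):

```
// If the CMOV being erased carried an instruction number, redirect that
// number to operand 0 (the def) of the replacement PHI through the
// function's debug-value substitution table.
if (unsigned OldInstrNum = MI.peekDebugInstrNum()) {
  unsigned NewInstrNum = PHIInstr->getDebugInstrNum();
  MF.makeDebugValueSubstitution({OldInstrNum, 0}, {NewInstrNum, 0});
}
```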
This omission was exposed by Dexter testing.
Reviewed By: Orlando
Differential Revision: https://reviews.llvm.org/D145565
X86's CMOV conversion transforms CMOV instructions into control flow between
blocks, meaning the value is computed by a PHI rather than a "real" machine
instruction. In instruction-referencing mode, we need to transfer the
instruction label between the old CMOV and the new PHI instruction to mark
where the variable value is computed.
There's an extra complication in that memory operands can be unfolded from the
CMOV and sunk into the new blocks -- the test checks both scenarios where the
instruction number has to hop between instructions.
This omission was exposed by Dexter testing.
Reviewed By: Orlando
Differential Revision: https://reviews.llvm.org/D145565
D63932 added a module flag to indicate that we are executing the regular
LTO post merge pipeline, so that GlobalDCE could perform more aggressive
optimization for Dead Virtual Function Elimination. This caused issues
trying to reuse bitcode that had already been through the LTO pipeline
(see context in D139816).
Instead support this by passing down a parameter flag to the
GlobalDCEPass constructor, which is the more usual way for indicating
this information.
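A minimal sketch of the new wiring (treat the exact parameter name as approximate):

```
#include "llvm/Transforms/IPO/GlobalDCE.h"
using namespace llvm;

// The regular-LTO post-link pipeline opts in explicitly, instead of
// GlobalDCE sniffing a module flag left behind by a previous LTO run.
ModulePassManager MPM;
MPM.addPass(GlobalDCEPass(/*InLTOPostLink=*/true));
```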
Most test changes are to remove incidental uses of this flag. Of the two
real uses, llvm/test/LTO/ARM/lto-linking-metadata.ll is now obsolete and
removed in this patch, and the virtual-functions-visibility-post-lto.ll
test is updated to use the regular LTO default pipeline where this
parameter is set to true.
Differential Revision: https://reviews.llvm.org/D153655