Use std::make_shared to create a StopInfoSP; StopInfo inherits from
std::enable_shared_from_this. make_shared is both the most efficient and the
safest way to create these objects:
- With make_shared, the object and the control block are allocated
together, which is more efficient.
- With make_shared, the enable_shared_from_this base class is linked to
the control block when the shared_ptr takes ownership, so
shared_from_this() is safe to use once construction is complete (it is
still not usable during construction itself).
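A minimal sketch of the pattern, using a hypothetical StopInfo-like class (illustrative names, not LLDB's actual declarations):
```
#include <cstdint>
#include <memory>

// Hypothetical stand-in for lldb_private::StopInfo.
class StopInfo : public std::enable_shared_from_this<StopInfo> {
public:
  explicit StopInfo(uint64_t value) : m_value(value) {}

  // Safe once the object is owned by a shared_ptr (i.e. after construction).
  std::shared_ptr<StopInfo> GetSP() { return shared_from_this(); }

private:
  uint64_t m_value;
};

using StopInfoSP = std::shared_ptr<StopInfo>;

StopInfoSP CreateStopInfo(uint64_t value) {
  // Object and control block are allocated together, and the
  // enable_shared_from_this base is linked to that control block.
  return std::make_shared<StopInfo>(value);
}
```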
LLDB breakpoint conditions take an expression that's evaluated using the
language of the code where the breakpoint is located. Users have asked
for an option to tell it to evaluate the expression in a specific
language.
This feature is especially helpful for Swift, for example for a
condition based on the value in memory at an offset from a register.
Such a condition is pretty difficult to write in Swift, but easy in C.
This PR adds a new argument (-Y) to specify the language of the
condition expression. We can't reuse the current -L option, since you
might want to break on only Swift symbols, but run a C expression there
as per the example above.
rdar://146119507
This is necessary so that LLDB does not select (or show the stop reason
for) a thread which stopped at an internal breakpoint.
Other than manual testing/inspection, which I've done, this does not
seem to lend itself to API testing, as we cannot set internal
breakpoints through the SBAPI.
This NFC patch simplifies the main loop in HandleProcessStateChangedEvent
by moving duplicated code into the StopInfo class, also allowing
StopInfo subclasses to override behavior.
More specifically, two functions are created (sketched below):
* ShouldShow: should a Thread with such a StopInfo be printed when the
debugger stops? Currently, no StopInfo subclasses override this, but a
subsequent patch will fix a bug by making StopInfoBreakpoint check
whether the breakpoint is internal.
* ShouldSelect: should a Thread with such a StopInfo be selected? This
is currently overridden by StopInfoUnixSignal but will, in the future,
be overridden by StopInfoBreakpoint.
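A rough sketch of what the two hooks could look like on the StopInfo base class (simplified and illustrative; the actual signatures in the patch may differ):
```
// Simplified sketch; not the exact LLDB declarations.
class StopInfo {
public:
  virtual ~StopInfo() = default;

  // Should a thread with this stop info be printed when the debugger stops?
  // Default: yes. StopInfoBreakpoint could later return false for internal
  // breakpoints.
  virtual bool ShouldShow() const { return true; }

  // Should a thread with this stop info be selected? Default: yes.
  // StopInfoUnixSignal (and later StopInfoBreakpoint) override this.
  virtual bool ShouldSelect() const { return true; }
};
```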
This reverts commit 87b7f63a11, reapplying 7e66cf74fb with a small (and
probably temporary) change to generate more debug info to help with
diagnosing buildbot issues.
Make StreamAsynchronousIO a unique_ptr instead of a shared_ptr. I tried
passing the class by value, but the llvm::raw_ostream forwarder stored
in the Stream parent class isn't movable and I don't think it's worth
changing that. Additionally, there are a few places that expect a
StreamSP, which is easily created from a StreamUP.
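For reference, adopting a unique_ptr into a shared_ptr at those call sites is a one-liner (illustrative stand-in types, not the exact LLDB aliases):
```
#include <memory>

struct Stream {};  // stand-in for lldb_private::Stream

using StreamUP = std::unique_ptr<Stream>;
using StreamSP = std::shared_ptr<Stream>;

StreamSP TakeOwnership(StreamUP stream_up) {
  // shared_ptr adopts the unique_ptr's ownership; no copy of Stream needed.
  return StreamSP(std::move(stream_up));
}
```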
lldb today has two rules: When a thread stops at a BreakpointSite, we
set the thread's StopReason to be "breakpoint hit" (regardless if we've
actually hit the breakpoint, or if we've merely stopped *at* the
breakpoint instruction/point and haven't tripped it yet). And second,
when resuming a process, any thread sitting at a BreakpointSite is
silently stepped over the BreakpointSite -- because we've already
flagged the breakpoint hit when we stopped there originally.
In this patch, I change lldb to only set a thread's stop reason to
breakpoint-hit when we've actually executed the instruction/triggered
the breakpoint. When we resume, we only silently step past a
BreakpointSite that we've registered as hit. We preserve this state
across inferior function calls that the user may do while stopped, etc.
Also, when a user adds a new breakpoint at $pc while stopped, or changes
$pc to be the address of a BreakpointSite, we will silently step past
that breakpoint when the process resumes. This is purely a UX call, I
don't think there's any person who wants to set a breakpoint at $pc and
then hit it immediately on resuming.
One non-intuitive UX change results from this, but it is necessary: if
you're stopped at a BreakpointSite that has not yet executed and you
`stepi`, you will hit the breakpoint and the pc will not yet advance. This
thread has
not completed its stepi, and the ThreadPlanStepInstruction is still on
the stack. If you then `continue` the thread, lldb will now stop and
say, "instruction step completed", one instruction past the
BreakpointSite. You can continue a second time to resume execution.
The bugs driving this change are all from lldb dropping the real stop
reason for a thread and setting it to breakpoint-hit when that was not
the case. Jim hit one where we have an aarch64 watchpoint that triggers
one instruction before a BreakpointSite. On this arch we are notified of
the watchpoint hit after the instruction has been unrolled -- we disable
the watchpoint, instruction step, re-enable the watchpoint and collect
the new value. But now we're on a BreakpointSite so the watchpoint-hit
stop reason is lost.
Another was reported by ZequanWu in
https://discourse.llvm.org/t/lldb-unable-to-break-at-start/78282 we
attach to/launch a process with the pc at a BreakpointSite and
misbehave. Caroline Tice mentioned it is also a problem they've had with
putting a breakpoint on _dl_debug_state.
The change to each Process plugin that does execution control is as
follows (a rough sketch comes after the list):
1. If we've stopped at a BreakpointSite that has not been executed yet,
we will call Thread::SetThreadStoppedAtUnexecutedBP(pc) to record that.
When the thread resumes, if the pc is still at the same site, we will
continue, hit the breakpoint, and stop again.
2. When we've actually hit a breakpoint (enabled for this thread or
not), the Process plugin should call
Thread::SetThreadHitBreakpointSite(). When we go to resume the thread,
we will push a step-over-breakpoint ThreadPlan before resuming.
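A heavily simplified sketch of the plugin-side bookkeeping, using the two Thread calls named above; the `stopped_at_site` flag stands in for whatever BreakpointSite lookup a real plugin does:
```
#include "lldb/Target/Thread.h"
#include "lldb/lldb-types.h"

using namespace lldb_private;

// Illustrative only; real plugins have much more context than this.
void RecordBreakpointState(Thread &thread, lldb::addr_t pc,
                           bool stopped_at_site, bool breakpoint_executed) {
  if (breakpoint_executed) {
    // The breakpoint actually triggered: record the hit so that the resume
    // logic pushes a step-over-breakpoint ThreadPlan for this thread.
    thread.SetThreadHitBreakpointSite();
  } else if (stopped_at_site) {
    // Stopped *at* a site without executing it: remember the pc so that,
    // if it is unchanged when we resume, we continue and hit it then.
    thread.SetThreadStoppedAtUnexecutedBP(pc);
  }
}
```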
The biggest set of changes is to StopInfoMachException where we
translate a Mach Exception into a stop reason. The Mach exception codes
differ in a few places depending on the target (unambiguously), and I
didn't want to duplicate the new code for each target so I've tested
what mach exceptions we get for each action on each target, and
reorganized StopInfoMachException::CreateStopReasonWithMachException to
document these possible values, and handle them without specializing
based on the target arch.
I first landed this patch in July 2024 via
https://github.com/llvm/llvm-project/pull/96260
but the CI bots and wider testing found a number of test case failures
that needed to be updated, so I reverted it. I've fixed all of those issues
in separate PRs, and this change should run cleanly on all the CI bots
now.
rdar://123942164
This reverts commit a774de807e56c1147d4630bfec3110c11d41776e.
These are the same changes as last time, plus:
* We load the binary into the target object so that on Windows, we can
resolve the locations of the functions.
* We now assert that each required breakpoint has at least 1 location,
to prevent an issue like that in the future.
* We are less strict about the unsupported error message, because it
prints "error: windows" on Windows instead of "error: gdb-remote".
Reverts llvm/llvm-project#123945
Has failed on the Windows on Arm buildbot:
https://lab.llvm.org/buildbot/#/builders/141/builds/5865
```
********************
Unresolved Tests (2):
lldb-api :: functionalities/reverse-execution/TestReverseContinueBreakpoints.py
lldb-api :: functionalities/reverse-execution/TestReverseContinueWatchpoints.py
********************
Failed Tests (1):
lldb-api :: functionalities/reverse-execution/TestReverseContinueNotSupported.py
```
Reverting while I reproduce locally.
This reverts commit 22561cfb443267905d4190f0e2a738e6b412457f and fixes
b7b9ccf44988edf49886743ae5c3cf4184db211f (#112079).
The problem is that x86_64 and Arm 32-bit have memory regions above the
stack that are readable but not writeable. First Arm:
```
(lldb) memory region --all
<...>
[0x00000000fffcf000-0x00000000ffff0000) rw- [stack]
[0x00000000ffff0000-0x00000000ffff1000) r-x [vectors]
[0x00000000ffff1000-0xffffffffffffffff) ---
```
Then x86_64:
```
$ cat /proc/self/maps
<...>
7ffdcd148000-7ffdcd16a000 rw-p 00000000 00:00 0 [stack]
7ffdcd193000-7ffdcd196000 r--p 00000000 00:00 0 [vvar]
7ffdcd196000-7ffdcd197000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 --xp 00000000 00:00 0 [vsyscall]
```
Compare this to AArch64 where the test did pass:
```
$ cat /proc/self/maps
<...>
ffffb87dc000-ffffb87dd000 r--p 00000000 00:00 0 [vvar]
ffffb87dd000-ffffb87de000 r-xp 00000000 00:00 0 [vdso]
ffffb87de000-ffffb87e0000 r--p 0002a000 00:3c 76927217 /usr/lib/aarch64-linux-gnu/ld-linux-aarch64.so.1
ffffb87e0000-ffffb87e2000 rw-p 0002c000 00:3c 76927217 /usr/lib/aarch64-linux-gnu/ld-linux-aarch64.so.1
fffff4216000-fffff4237000 rw-p 00000000 00:00 0 [stack]
```
To solve this, look up the memory region of the stack pointer (using
https://lldb.llvm.org/resources/lldbgdbremote.html#qmemoryregioninfo-addr)
and constrain the read to within that region, since we know the stack is
all readable and writeable.
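Conceptually, the clamping looks something like the sketch below, expressed against LLDB's C++ Process::GetMemoryRegionInfo API for illustration (the actual change lives in the Python test proxy, and error handling is omitted):
```
#include <algorithm>

#include "lldb/Target/MemoryRegionInfo.h"
#include "lldb/Target/Process.h"

using namespace lldb_private;

// Clamp [addr, addr + size) to the memory region containing the stack
// pointer, so we never read above the top of the writable stack mapping.
size_t ClampReadToStackRegion(Process &process, lldb::addr_t sp,
                              lldb::addr_t addr, size_t size) {
  MemoryRegionInfo region;
  if (process.GetMemoryRegionInfo(sp, region).Fail())
    return size; // Couldn't query the region; fall back to the full size.
  lldb::addr_t end = region.GetRange().GetRangeEnd();
  if (addr >= end)
    return 0;
  return std::min<lldb::addr_t>(size, end - addr);
}
```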
I have also added skipIfRemote to the tests, since getting them working
in that context is too complex to be worth it.
Memory write failures now display the range they tried to write, and
register write errors will show the name of the register where possible.
The patch also includes a workaround for an issue where the test code
could mistake an `x` response that happens to begin with an `O` for an
output packet (stdout). This workaround will not be necessary once we
start using the [new
implementation](https://discourse.llvm.org/t/rfc-fixing-incompatibilties-of-the-x-packet-w-r-t-gdb/84288)
of the `x` packet.
---------
Co-authored-by: Pavel Labath <pavel@labath.sk>
This commit adds support for a
`SBProcess::ContinueInDirection()` API. A user-accessible command for
this will follow in a later commit.
This feature depends on a gdbserver implementation (e.g. `rr`) providing
support for the `bc` and `bs` packets. `lldb-server` does not support
those packets, and there is no plan to change that. For testing
purposes, this commit adds a Python implementation of *very limited*
record-and-reverse-execute functionality, implemented as a proxy between
lldb and lldb-server in `lldbreverse.py`. This should not (and in
practice cannot) be used for anything except testing.
The tests here are quite minimal but we test that simple breakpoints and
watchpoints work as expected during reverse execution, and that
conditional breakpoints and watchpoints work when the condition calls a
function that must be executed in the forward direction.
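For illustration, driving the new API from the SB interface might look roughly like the sketch below; the return type and the direction enumerator (`eRunReverse`) are assumptions based on this description, not verified against the headers:
```
#include "lldb/API/SBError.h"
#include "lldb/API/SBProcess.h"

// Ask a reverse-execution-capable stub (e.g. rr, or the lldbreverse.py test
// proxy) to run the process backwards. The enumerator name is an assumption;
// check lldb-enumerations.h for the real one.
lldb::SBError RunBackwards(lldb::SBProcess &process) {
  return process.ContinueInDirection(lldb::eRunReverse);
}
```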
Prior to this patch, the function returned an exit status, sometimes a
ValueObject with an error and a Status object. This patch removes the
Status object and ensures the error is consistently returned as the
error of the ValueObject.
This fixes the two test suite failures that I missed in the PR:
https://github.com/llvm/llvm-project/pull/112939
One was a poorly written test case - it assumed that on connect to a
gdb-remote with a running process, lldb MUST have fetched all the frame
0 registers. In fact, there's no need for it to do so (as the CallSite
patch showed...) and if we don't need to we shouldn't. So I fixed the
test to only expect a `g` packet AFTER calling read_registers.
The other was a place where some code had used 0 when it meant
LLDB_INVALID_LINE_NUMBER, which I had fixed but missed one place where
it was still compared to 0.
…ne stepping (#112939)"
This was breaking some gdb-remote packet counting tests on the bots. I
can't see how this patch could cause that breakage, but I'm reverting to
figure that out.
This reverts commit f14743794587db102c6d1b20f9c87a1ac20decfd.
Previously lldb didn't support setting breakpoints on call site
locations. This patch adds that ability.
It would be very slow if we did this by searching all the debug
information for every inlined subroutine record looking for a call-site
match, so I added one restriction to the call-site support. This change
will find all call sites for functions that also supply at least one
line to the regular line table. That way we can use the fact that the
line table search will move the location to that subsequent line (but
only within the same function). When we find an actually moved source
line match, we can search in the function that contained that line table
entry for the call-site, and set the breakpoint location back to that.
When I started writing tests for this new ability, it quickly became
obvious that our support for virtual inline stepping was pretty buggy.
We didn't print the right file & line number for the breakpoint, and we
didn't set the position in the "virtual inlined stack" correctly when we
hit the breakpoint. We also didn't step through the inlined frames
correctly. There was code to try to detect the right inlined stack
position, but it had been refactored a while back with the comment that
it was super confusing and the refactor was supposed to make it clearer,
but the refactor didn't work either.
That code was made much clearer by abstracting the job of "handling the
stack readjustment" to the various StopInfo's. Previously, there was a
big (and buggy) switch over stop info's. Moving the responsibility to
the stop info made this code much easier to reason about.
We also had no tests for virtual inlined stepping (our inlined stepping
test was actually written specifically to avoid the formation of a
virtual inlined stack...). So I also added tests for that, along with the
tests for setting the call-site breakpoints.
ValueObject is part of lldbCore for historical reasons, but conceptually
it deserves to be its own library. This does introduce a (link-time) circular
dependency between lldbCore and lldbValueObject, which is unfortunate
but probably unavoidable because so many things in LLDB rely on
ValueObject. We already have cycles and these libraries are never built
as dylibs so while this doesn't improve the situation, it also doesn't
make things worse.
The header includes were updated with the following command:
```
find . -type f -exec sed -i.bak "s%include \"lldb/Core/ValueObject%include \"lldb/ValueObject/ValueObject%" '{}' \;
```
Reverting this again; I added a commit which added @skipIfDarwin
markers to the TestReverseContinueBreakpoints.py and
TestReverseContinueNotSupported.py API tests, which use lldb-server
in gdbserver mode which does not work on Darwin. But the aarch64 ubuntu
bot reported a failure on TestReverseContinueBreakpoints.py,
https://lab.llvm.org/buildbot/#/builders/59/builds/6397
File "/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/llvm-project/lldb/test/API/functionalities/reverse-execution/TestReverseContinueBreakpoints.py", line 63, in test_reverse_continue_skip_breakpoint
self.reverse_continue_skip_breakpoint_internal(async_mode=False)
File "/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/llvm-project/lldb/test/API/functionalities/reverse-execution/TestReverseContinueBreakpoints.py", line 81, in reverse_continue_skip_breakpoint_internal
self.expect(
File "/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/llvm-project/lldb/packages/Python/lldbsuite/test/lldbtest.py", line 2372, in expect
self.runCmd(
File "/home/tcwg-buildbot/worker/lldb-aarch64-ubuntu/llvm-project/lldb/packages/Python/lldbsuite/test/lldbtest.py", line 1002, in runCmd
self.assertTrue(self.res.Succeeded(), msg + output)
AssertionError: False is not true : Process should be stopped due to history boundary
Error output:
error: Process must be launched.
This reverts commit 4f297566b3150097de26c6a23a987d2bd5fc19c5.
This commit only adds support for the
`SBProcess::ReverseContinue()` API. A user-accessible command for this
will follow in a later commit.
This feature depends on a gdbserver implementation (e.g. `rr`) providing
support for the `bc` and `bs` packets. `lldb-server` does not support
those packets, and there is no plan to change that. So, for testing
purposes, `lldbreverse.py` wraps `lldb-server` with a Python
implementation of *very limited* record-and-replay functionality for use
by *tests only*.
The majority of this PR is test infrastructure (about 700 of the 950
lines added).
This commit only adds support for the
`SBProcess::ReverseContinue()` API. A user-accessible command for this
will follow in a later commit.
This feature depends on a gdbserver implementation (e.g. `rr`) providing
support for the `bc` and `bs` packets. `lldb-server` does not support
those packets, and there is no plan to change that. So, for testing
purposes, `lldbreverse.py` wraps `lldb-server` with a Python
implementation of *very limited* record-and-replay functionality for use
by *tests only*.
The majority of this PR is test infrastructure (about 700 of the 950
lines added).
This PR introduces a new `ThreadPlanSingleThreadTimeout` that will be
used to address potential deadlock during single-thread stepping.
While debugging a target with a non-trivial number of threads (around
5000 threads in one example target), we noticed that a simple step over
can take as long as 10 seconds. Enabling single-thread stepping mode
significantly reduces the stepping time to around 3 seconds. However,
this can introduce deadlock if we try to step over a method that depends
on other threads to release a lock.
To address this issue, we introduce a new
`ThreadPlanSingleThreadTimeout` that can be controlled by the
`target.process.thread.single-thread-plan-timeout` setting during
single-thread stepping mode. The concept involves counting the elapsed
time since the last internal stop to detect overall stepping progress.
Once a timeout occurs, we assume the target is not making progress due
to a potential deadlock, as mentioned above. We then send a new async
interrupt, resume all threads, and `ThreadPlanSingleThreadTimeout`
completes its task.
To support this design, the major changes made in this PR are:
1. `ThreadPlanSingleThreadTimeout` is popped during every internal stop
and reset (re-pushed) to the top of the stack (as a leaf node) during
resume. This is achieved by always returning `true` from
`ThreadPlanSingleThreadTimeout::DoPlanExplainsStop()` and
`ThreadPlanSingleThreadTimeout::MischiefManaged()` (see the sketch after
this list).
2. A new thread-specific async interrupt stop is introduced, which can
be detected/consumed by `ThreadPlanSingleThreadTimeout`.
3. The clearing of branch breakpoints in the range thread plan has been
moved from `DoPlanExplainsStop()` to `ShouldStop()`, as it is not
guaranteed that it will be called.
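A self-contained toy illustrating the pattern in item 1 (these are stand-in types, not LLDB's real ThreadPlan classes):
```
// Toy stand-ins, not LLDB's real classes: just enough to show why returning
// true from both hooks makes the plan pop at every internal stop.
struct Event {};

struct ThreadPlan {
  virtual ~ThreadPlan() = default;
  virtual bool DoPlanExplainsStop(Event *event_ptr) = 0;
  virtual bool MischiefManaged() = 0;
};

struct ThreadPlanSingleThreadTimeout : ThreadPlan {
  // Claim every internal stop...
  bool DoPlanExplainsStop(Event *) override { return true; }
  // ...and always report completion, so the plan is popped at each internal
  // stop and then re-pushed as a leaf plan on the next resume.
  bool MischiefManaged() override { return true; }
};
```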
The detailed design is discussed in the RFC below:
[https://discourse.llvm.org/t/improve-single-thread-stepping/74599](https://discourse.llvm.org/t/improve-single-thread-stepping/74599)
---------
Co-authored-by: jeffreytan81 <jeffreytan@fb.com>
This reverts commit 05f0e86cc895181b3d2210458c78938f83353002.
The debuginfo dexter tests are failing, probably because of the way
stepping over breakpoints has changed with my patches. And there
are two API test failures on the ubuntu-arm (32-bit) bot. I'll need
to investigate both of these; neither has an obvious failure reason.
lldb today has two rules: When a thread stops at a BreakpointSite, we
set the thread's StopReason to be "breakpoint hit" (regardless if we've
actually hit the breakpoint, or if we've merely stopped *at* the
breakpoint instruction/point and haven't tripped it yet). And second,
when resuming a process, any thread sitting at a BreakpointSite is
silently stepped over the BreakpointSite -- because we've already
flagged the breakpoint hit when we stopped there originally.
In this patch, I change lldb to only set a thread's stop reason to
breakpoint-hit when we've actually executed the instruction/triggered
the breakpoint. When we resume, we only silently step past a
BreakpointSite that we've registered as hit. We preserve this state
across inferior function calls that the user may do while stopped, etc.
Also, when a user adds a new breakpoint at $pc while stopped, or changes
$pc to be the address of a BreakpointSite, we will silently step past
that breakpoint when the process resumes. This is purely a UX call, I
don't think there's any person who wants to set a breakpoint at $pc and
then hit it immediately on resuming.
One non-intuitive UX change results from this, but I'm convinced it is
necessary: if you're stopped at a BreakpointSite that has not yet
executed and you `stepi`, you will hit the breakpoint and the pc will not
yet advance. This thread has not completed its stepi, and the thread
plan is still on the stack. If you then `continue` the thread, lldb will
now stop and say, "instruction step completed", one instruction past the
BreakpointSite. You can continue a second time to resume execution. I
discussed this with Jim, and trying to paper over this behavior will
lead to more complicated scenarios behaving non-intuitively. And mostly
it's the testsuite that was trying to instruction step past a breakpoint
and getting thrown off -- and I changed those tests to expect the new
behavior.
The bugs driving this change are all from lldb dropping the real stop
reason for a thread and setting it to breakpoint-hit when that was not
the case. Jim hit one where we have an aarch64 watchpoint that triggers
one instruction before a BreakpointSite. On this arch we are notified of
the watchpoint hit after the instruction has been unrolled -- we disable
the watchpoint, instruction step, re-enable the watchpoint and collect
the new value. But now we're on a BreakpointSite so the watchpoint-hit
stop reason is lost.
Another was reported by ZequanWu in
https://discourse.llvm.org/t/lldb-unable-to-break-at-start/78282 we
attach to/launch a process with the pc at a BreakpointSite and
misbehave. Caroline Tice mentioned it is also a problem they've had with
putting a breakpoint on _dl_debug_state.
The change to each Process plugin that does execution control is that
1. If we've stopped at a BreakpointSite that has not been executed yet,
we will call Thread::SetThreadStoppedAtUnexecutedBP(pc) to record
that. When the thread resumes, if the pc is still at the same site, we
will continue, hit the breakpoint, and stop again.
2. When we've actually hit a breakpoint (enabled for this thread or not),
the Process plugin should call Thread::SetThreadHitBreakpointSite().
When we go to resume the thread, we will push a step-over-breakpoint
ThreadPlan before resuming.
The biggest set of changes is to StopInfoMachException where we
translate a Mach Exception into a stop reason. The Mach exception codes
differ in a few places depending on the target (unambiguously), and I
didn't want to duplicate the new code for each target so I've tested
what mach exceptions we get for each action on each target, and
reorganized StopInfoMachException::CreateStopReasonWithMachException to
document these possible values, and handle them without specializing
based on the target arch.
rdar://123942164
This patch is the next piece of work in my Large Watchpoint proposal,
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
This patch breaks a user's watchpoint into one or more
WatchpointResources which reflect what the hardware registers can cover.
This means we can watch objects larger than 8 bytes, and we can watch
unaligned address ranges. On a typical 64-bit target with 4 watchpoint
registers you can watch 32 bytes of memory if the start address is
doubleword aligned.
Additionally, if the remote stub implements AArch64 MASK style
watchpoints (e.g. debugserver on Darwin), we can watch any power-of-2
size region of memory up to 2GB, aligned to that same size.
I updated the Watchpoint constructor and CommandObjectWatchpoint to
create a CompilerType of Array<UInt8> when the size of the watched
region is greater than pointer-size and we don't have a variable type to
use. For pointer-size and smaller, we can display the watched granule as
an integer value; for larger-than-pointer-size we will display as an
array of bytes.
I have `watchpoint list` now print the WatchpointResources used to
implement the watchpoint.
I added a WatchpointAlgorithm class which has a top-level static method
that takes an enum flag mask WatchpointHardwareFeature and a user
address and size, and returns a vector of WatchpointResources covering
the request. It does not take into account the number of watchpoint
registers the target has, or the number still available for use. Right
now there is only one algorithm, which monitors power-of-2 regions of
memory. For up to pointer-size, this is what Intel hardware supports.
AArch64 Byte Address Select watchpoints can watch any number of
contiguous bytes in a pointer-size memory granule; that is not currently
supported, so if you ask to watch bytes 3-5, the algorithm will watch the
entire doubleword (8 bytes). The newly default "modify" style means we
will silently ignore modifications to bytes outside the watched range.
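As a rough illustration of the idea (not the actual WatchpointAlgorithms::PowerOf2Watchpoints implementation), here is a simplified splitter that covers a request with aligned 8-byte regions, which is how the 22-byte unaligned example mentioned below ends up needing four resources:
```
#include <cstdint>
#include <utility>
#include <vector>

// Simplified sketch, not LLDB's actual algorithm: cover [addr, addr + size)
// with aligned 8-byte regions, the way Byte Address Select hardware would be
// programmed one doubleword at a time.
std::vector<std::pair<uint64_t, uint64_t>> // (start address, size) pairs
CoverWithDoublewords(uint64_t addr, uint64_t size) {
  constexpr uint64_t kGranule = 8;
  std::vector<std::pair<uint64_t, uint64_t>> resources;
  uint64_t start = addr & ~(kGranule - 1);                       // round down
  uint64_t end = (addr + size + kGranule - 1) & ~(kGranule - 1); // round up
  for (uint64_t region = start; region < end; region += kGranule)
    resources.emplace_back(region, kGranule);
  return resources;
}
// e.g. a 22-byte watch starting 5 bytes into a doubleword spans four 8-byte
// regions: 5 + 22 = 27 bytes rounds out to 32.
```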
I've temporarily skipped TestLargeWatchpoint.py for all targets. It was
only run on Darwin when using the in-tree debugserver, which was a proxy
for "debugserver supports MASK watchpoints". I'll be adding the
aforementioned feature flag from the stub and enabling full mask
watchpoints when a debugserver with that feature is enabled, and
re-enable this test.
I added a new TestUnalignedLargeWatchpoint.py which only has one test
but it's a great one, watching a 22-byte range that is unaligned and
requires four 8-byte watchpoints to cover.
I also added a unit test, WatchpointAlgorithmsTests, which has a number
of simple tests against WatchpointAlgorithms::PowerOf2Watchpoints. I
think there's interesting possible different approaches to how we cover
these; I note in the unit test that a user requesting a watch on address
0x12e0 of 120 bytes will be covered by two watchpoints today, a
128-byte watchpoint at 0x1280 and another at 0x1300. But it could be done
with a 16-byte watchpoint at 0x12e0 and a 128-byte one at 0x1300, which
would have fewer
false positives/private stops. As we try refining this one, it's helpful
to have a collection of tests to make sure things don't regress.
I tested this on arm64 macOS, (genuine) x86_64 macOS, and AArch64
Ubuntu. I have not modified the Windows process plugins yet; I might try
that as a standalone patch. I'd be making the change blind, but the
necessary changes (see ProcessGDBRemote::EnableWatchpoint) are pretty
small, so it might be obvious enough that I can change it and see what
the Windows CI thinks.
There isn't yet a packet (or a qSupported feature query) for the gdb
remote serial protocol stub to communicate its watchpoint capabilities
to lldb. I'll be doing that in a patch right after this is landed,
having debugserver advertise its capability of AArch64 MASK watchpoints,
and have ProcessGDBRemote add eWatchpointHardwareArmMASK to
WatchpointAlgorithms so we can watch larger than 32-byte requests on
Darwin.
I haven't yet tackled WatchpointResource *sharing* by multiple
Watchpoints. This is all part of the goal, especially when we may be
watching a larger memory range than the user requested: if they then add
another watchpoint next to their first request, it may be covered by the
same WatchpointResource (hardware watchpoint register). Also one "read"
watchpoint and one "write" watchpoint on the same memory granule need to
be handled, making the WatchpointResource cover all requests.
As WatchpointResources aren't shared among multiple Watchpoints yet,
there's no handling of running the conditions/commands/etc on multiple
Watchpoints when their shared WatchpointResource is hit. The goal beyond
"large watchpoint" is to unify (much more) the Watchpoint and Breakpoint
behavior and commands. I have a feeling I may be slowly chipping away at
this for a while.
Re-landing this patch after fixing two undefined behaviors in
WatchpointAlgorithms found by UBSan and by failures on different
CI bots.
rdar://108234227
This patch is the next piece of work in my Large Watchpoint proposal,
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
This patch breaks a user's watchpoint into one or more
WatchpointResources which reflect what the hardware registers can cover.
This means we can watch objects larger than 8 bytes, and we can watch
unaligned address ranges. On a typical 64-bit target with 4 watchpoint
registers you can watch 32 bytes of memory if the start address is
doubleword aligned.
Additionally, if the remote stub implements AArch64 MASK style
watchpoints (e.g. debugserver on Darwin), we can watch any power-of-2
size region of memory up to 2GB, aligned to that same size.
I updated the Watchpoint constructor and CommandObjectWatchpoint to
create a CompilerType of Array<UInt8> when the size of the watched
region is greater than pointer-size and we don't have a variable type to
use. For pointer-size and smaller, we can display the watched granule as
an integer value; for larger-than-pointer-size we will display as an
array of bytes.
I have `watchpoint list` now print the WatchpointResources used to
implement the watchpoint.
I added a WatchpointAlgorithm class which has a top-level static method
that takes an enum flag mask WatchpointHardwareFeature and a user
address and size, and returns a vector of WatchpointResources covering
the request. It does not take into account the number of watchpoint
registers the target has, or the number still available for use. Right
now there is only one algorithm, which monitors power-of-2 regions of
memory. For up to pointer-size, this is what Intel hardware supports.
AArch64 Byte Address Select watchpoints can watch any number of
contiguous bytes in a pointer-size memory granule; that is not currently
supported, so if you ask to watch bytes 3-5, the algorithm will watch the
entire doubleword (8 bytes). The newly default "modify" style means we
will silently ignore modifications to bytes outside the watched range.
I've temporarily skipped TestLargeWatchpoint.py for all targets. It was
only run on Darwin when using the in-tree debugserver, which was a proxy
for "debugserver supports MASK watchpoints". I'll be adding the
aforementioned feature flag from the stub and enabling full mask
watchpoints when a debugserver with that feature is enabled, and
re-enable this test.
I added a new TestUnalignedLargeWatchpoint.py which only has one test
but it's a great one, watching a 22-byte range that is unaligned and
requires four 8-byte watchpoints to cover.
I also added a unit test, WatchpointAlgorithmsTests, which has a number
of simple tests against WatchpointAlgorithms::PowerOf2Watchpoints. I
think there's interesting possible different approaches to how we cover
these; I note in the unit test that a user requesting a watch on address
0x12e0 of 120 bytes will be covered by two watchpoints today, a
128-byte watchpoint at 0x1280 and another at 0x1300. But it could be done
with a 16-byte watchpoint at 0x12e0 and a 128-byte one at 0x1300, which
would have fewer
false positives/private stops. As we try refining this one, it's helpful
to have a collection of tests to make sure things don't regress.
I tested this on arm64 macOS, (genuine) x86_64 macOS, and AArch64
Ubuntu. I have not modified the Windows process plugins yet; I might try
that as a standalone patch. I'd be making the change blind, but the
necessary changes (see ProcessGDBRemote::EnableWatchpoint) are pretty
small, so it might be obvious enough that I can change it and see what
the Windows CI thinks.
There isn't yet a packet (or a qSupported feature query) for the gdb
remote serial protocol stub to communicate its watchpoint capabilities
to lldb. I'll be doing that in a patch right after this is landed,
having debugserver advertise its capability of AArch64 MASK watchpoints,
and have ProcessGDBRemote add eWatchpointHardwareArmMASK to
WatchpointAlgorithms so we can watch larger than 32-byte requests on
Darwin.
I haven't yet tackled WatchpointResource *sharing* by multiple
Watchpoints. This is all part of the goal, especially when we may be
watching a larger memory range than the user requested: if they then add
another watchpoint next to their first request, it may be covered by the
same WatchpointResource (hardware watchpoint register). Also one "read"
watchpoint and one "write" watchpoint on the same memory granule need to
be handled, making the WatchpointResource cover all requests.
As WatchpointResources aren't shared among multiple Watchpoints yet,
there's no handling of running the conditions/commands/etc on multiple
Watchpoints when their shared WatchpointResource is hit. The goal beyond
"large watchpoint" is to unify (much more) the Watchpoint and Breakpoint
behavior and commands. I have a feeling I may be slowly chipping away at
this for a while.
rdar://108234227
This patch is rearranging code a bit to add WatchpointResources to
Process. A WatchpointResource is meant to represent a hardware
watchpoint register in the inferior process. It has an address, a size,
a type, and a list of Watchpoints that are using this
WatchpointResource.
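Conceptually, that's something along these lines (a simplified sketch; field names and types are illustrative, not the class the patch actually adds):
```
#include <cstdint>
#include <vector>

// Simplified sketch of the concept; names and types are illustrative.
class WatchpointResource {
public:
  WatchpointResource(uint64_t addr, uint32_t size, bool read, bool write)
      : m_addr(addr), m_size(size), m_watch_read(read), m_watch_write(write) {}

  // The Watchpoints (identified here just by ID) that are implemented, in
  // whole or in part, by this hardware register.
  void AddConstituent(uint32_t watchpoint_id) {
    m_constituents.push_back(watchpoint_id);
  }

private:
  uint64_t m_addr;                      // start of the watched range
  uint32_t m_size;                      // size the hardware register covers
  bool m_watch_read, m_watch_write;     // watchpoint type
  std::vector<uint32_t> m_constituents; // Watchpoints using this resource
};
```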
This current patch doesn't add any of the features of
WatchpointResources that make them interesting -- a user asking to watch
a 24 byte object could watch this with three 8 byte WatchpointResources.
Or a Watchpoint on 1 byte at 0x1002 and a second watchpoint on 1 byte at
0x1003: these must both be served by a single WatchpointResource on the
doubleword at 0x1000 on a 64-bit target; if two hardware watchpoint
registers were used to track them separately, one of them might not be
hit. Or if you have one Watchpoint on a variable with a condition set,
and another Watchpoint on that same variable with a command defined or
different condition, or ignorecount, both of those Watchpoints need to
evaluate their criteria/commands when their WatchpointResource has been
hit.
There's a bit of code movement to rearrange things in the direction I'll
need for implementing this feature, so I want to start with reviewing &
landing this mostly NFC patch and we can focus on the algorithmic
choices about how WatchpointResources are shared and handled as they're
triggered, separately.
This patch also stops printing "Watchpoint <n> hit: old value: <x>, new
vlaue: <y>" for Read watchpoints. I could make an argument for print
"Watchpoint <n> hit: current value <x>" but the current output doesn't
make any sense, and the user can print the value if they are
particularly interested. Read watchpoints are used primarily to
understand what code is reading a variable.
This patch adds more fallbacks for how to print the objects being
watched if we have types, instead of assuming they are all integral
values, so a struct will print its elements. As large watchpoints are
added, we'll be doing a lot more of those.
To track the WatchpointSP in the WatchpointResources, I changed the
internal API which took a WatchpointSP and devolved it to a Watchpoint*,
which meant touching several different Process files. I removed the
watchpoint code in ProcessKDP which only reported that watchpoints
aren't supported, the base class does that already.
I haven't yet changed how we receive a watchpoint to identify the
WatchpointResource responsible for the trigger, and identify all
Watchpoints that are using this Resource to evaluate their conditions
etc. This is the same work that a BreakpointSite needs to do when it has
been triggered, where multiple Breakpoints may be at the same address.
There is not yet any printing of the Resources that a Watchpoint is
implemented in terms of ("watchpoint list", or
SBWatchpoint::GetDescription).
"watchpoint set var" and "watchpoint set expression" take a size
argument which was previously 1, 2, 4, or 8 (an enum). I've changed this
to an unsigned int. Most hardware implementations can only watch 1, 2,
4, 8 byte ranges, but with Resources we'll allow a user to ask for
differently sized watchpoints and set them in hardware-expressible terms
soon.
I've annotated areas where I know there is work still needed with
LWP_TODO that I'll be working on once this is landed.
I've tested this on aarch64 macOS, aarch64 Linux, and Intel macOS.
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
(cherry picked from commit fc6b72523f3d73b921690a713e97a433c96066c6)
...and follow ups.
As it has caused test failures on Linux Arm and AArch64:
https://lab.llvm.org/buildbot/#/builders/96/builds/49126
https://lab.llvm.org/buildbot/#/builders/17/builds/45824
```
lldb-shell :: Subprocess/clone-follow-child-wp.test
lldb-shell :: Subprocess/fork-follow-child-wp.test
lldb-shell :: Subprocess/vfork-follow-child-wp.test
```
This reverts commit a6c62bf1a4717accc852463b664cd1012237d334,
commit a0a1ff3ab40e347589b4e27d8fd350c600526735 and commit
fc6b72523f3d73b921690a713e97a433c96066c6.
This patch is rearranging code a bit to add WatchpointResources to
Process. A WatchpointResource is meant to represent a hardware
watchpoint register in the inferior process. It has an address, a size,
a type, and a list of Watchpoints that are using this
WatchpointResource.
This current patch doesn't add any of the features of
WatchpointResources that make them interesting -- a user asking to watch
a 24 byte object could watch this with three 8 byte WatchpointResources.
Or a Watchpoint on 1 byte at 0x1002 and a second watchpoint on 1 byte at
0x1003: these must both be served by a single WatchpointResource on the
doubleword at 0x1000 on a 64-bit target; if two hardware watchpoint
registers were used to track them separately, one of them might not be
hit. Or if you have one Watchpoint on a variable with a condition set,
and another Watchpoint on that same variable with a command defined or
different condition, or ignorecount, both of those Watchpoints need to
evaluate their criteria/commands when their WatchpointResource has been
hit.
There's a bit of code movement to rearrange things in the direction I'll
need for implementing this feature, so I want to start with reviewing &
landing this mostly NFC patch and we can focus on the algorithmic
choices about how WatchpointResources are shared and handled as they're
triggered, separately.
This patch also stops printing "Watchpoint <n> hit: old value: <x>, new
vlaue: <y>" for Read watchpoints. I could make an argument for print
"Watchpoint <n> hit: current value <x>" but the current output doesn't
make any sense, and the user can print the value if they are
particularly interested. Read watchpoints are used primarily to
understand what code is reading a variable.
This patch adds more fallbacks for how to print the objects being
watched if we have types, instead of assuming they are all integral
values, so a struct will print its elements. As large watchpoints are
added, we'll be doing a lot more of those.
To track the WatchpointSP in the WatchpointResources, I changed the
internal API which took a WatchpointSP and devolved it to a Watchpoint*,
which meant touching several different Process files. I removed the
watchpoint code in ProcessKDP which only reported that watchpoints
aren't supported, the base class does that already.
I haven't yet changed how we receive a watchpoint to identify the
WatchpointResource responsible for the trigger, and identify all
Watchpoints that are using this Resource to evaluate their conditions
etc. This is the same work that a BreakpointSite needs to do when it has
been triggered, where multiple Breakpoints may be at the same address.
There is not yet any printing of the Resources that a Watchpoint is
implemented in terms of ("watchpoint list", or
SBWatchpoint::GetDescription).
"watchpoint set var" and "watchpoint set expression" take a size
argument which was previously 1, 2, 4, or 8 (an enum). I've changed this
to an unsigned int. Most hardware implementations can only watch 1, 2,
4, 8 byte ranges, but with Resources we'll allow a user to ask for
differently sized watchpoints and set them in hardware-expressible terms
soon.
I've annotated areas where I know there is work still needed with
LWP_TODO that I'll be working on once this is landed.
I've tested this on aarch64 macOS, aarch64 Linux, and Intel macOS.
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
The Watchpoint and Breakpoint objects try to track the hardware index
that was used for them, if they are hardware wp/bp's. The majority of
our debugging goes over the gdb remote serial protocol, and when we set
the watchpoint/breakpoint, there is no (standard) way for the remote
stub to communicate to lldb which hardware index was used. We have an
lldb-extension packet to query the total number of watchpoint registers.
When a watchpoint is hit, there is an lldb extension to the stop reply
packet (documented in lldb-gdb-remote.txt) to describe the watchpoint
including its actual hardware index,
<addr within wp range> <wp hw index> <actual accessed address>
(the third field is specifically needed for MIPS). At this point, if the
stub reported these three fields (the stub is only required to provide
the first), we can know the actual hardware index for this watchpoint.
Breakpoints are worse; there's never any way for us to be notified about
which hardware index was used. Breakpoints got this as a side effect of
inheriting from StoppointSite along with Watchpoints.
We expose the watchpoint hardware index through "watchpoint list -v" and
through SBWatchpoint::GetHardwareIndex.
With my large watchpoint support, there is no *single* hardware index
that may be used for a watchpoint; it may need multiple resources. Also
I don't see what a user is supposed to do with this information, or an
IDE. Knowing the total number of watchpoint registers on the target, and
knowing how many Watchpoint Resources are currently in use, is helpful.
Knowing how many Watchpoint Resources a single user-specified watchpoint
needed in order to be implemented is useful.
But knowing which registers were used is an implementation detail and
not available until we hit the watchpoint when using gdb remote serial
protocol.
So given all that, I'm removing watchpoint hardware index numbers. I'm
changing the SB API to always return -1.
This reverts commit a7b78cac9a77e3ef6bbbd8ab1a559891dc693401.
With updates to the tests.
TestWatchTaggedAddress.py: Updated the expected watchpoint types,
though I'm not sure there should be a different default for the two
ways of setting them; that needs to be confirmed.
TestStepOverWatchpoint.py: Skipped this everywhere because I think
what used to happen is you couldn't put 2 watchpoints on the same
address (after alignment). I guess that this is now allowed because
modify watchpoints aren't accounted for, but likely should be.
Needs investigating.
This reverts commit 933ad5c897ee366759a54869b35b2d7285a92137.
This caused 1 test failure and an unexpected pass on AArch64 Linux:
https://lab.llvm.org/buildbot/#/builders/96/builds/45765
Wasn't reported because the bot was already red at the time.
Watchpoints in lldb can be either 'read', 'write', or 'read/write'. This
is exposing the actual behavior of hardware watchpoints. gdb has a
different behavior: a "write" type watchpoint only stops when the
watched memory region *changes*.
A user is using a watchpoint for one of three reasons:
1. Want to find what is changing/corrupting this memory.
2. Want to find what is writing to this memory.
3. Want to find what is reading from this memory.
I believe (1) is the most common use case for watchpoints, and it
currently can't be done in lldb -- the user has to manually continue every
time the same value is written to the watched memory. I think gdb's
behavior is the correct one. There are some use cases where a developer
wants to find every function that writes/reads to/from a memory region
regardless of value; I want to still allow that functionality.
This is also a bit of groundwork for my large watchpoint support
proposal
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
where I will be adding support for AArch64 MASK watchpoints which watch
power-of-2 memory regions. A user might ask to watch 24 bytes, and a
MASK watchpoint stub can do this with a 32-byte MASK watchpoint if it is
properly aligned. And we need to ignore writes to the final 8 bytes of
that watched region, and not show those hits to the user.
This patch adds a new 'modify' watchpoint type and it is the default.
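Conceptually, the 'modify' type boils down to a value comparison at hit time, roughly like this sketch (simplified and illustrative; not the actual implementation in Watchpoint):
```
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified sketch of the 'modify' decision: only report the stop to the
// user when the bytes in the watched region actually changed.
bool ShouldReportModifyWatchpointHit(std::vector<uint8_t> &old_bytes,
                                     const std::vector<uint8_t> &new_bytes) {
  if (old_bytes.size() == new_bytes.size() &&
      std::memcmp(old_bytes.data(), new_bytes.data(), old_bytes.size()) == 0)
    return false;        // Same value written: silently continue.
  old_bytes = new_bytes; // Remember the new value for the next hit.
  return true;
}
```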
Re-landing this patch after addressing testsuite failures found in CI on
Linux, Intel machines, and windows.
rdar://108234227
Watchpoints in lldb can be either 'read', 'write', or 'read/write'. This
is exposing the actual behavior of hardware watchpoints. gdb has a
different behavior: a "write" type watchpoint only stops when the
watched memory region *changes*.
A user is using a watchpoint for one of three reasons:
1. Want to find what is changing/corrupting this memory.
2. Want to find what is writing to this memory.
3. Want to find what is reading from this memory.
I believe (1) is the most common use case for watchpoints, and it
currently can't be done in lldb -- the user has to manually continue every
time the same value is written to the watched memory. I think gdb's
behavior is the correct one. There are some use cases where a developer
wants to find every function that writes/reads to/from a memory region
regardless of value; I want to still allow that functionality.
This is also a bit of groundwork for my large watchpoint support
proposal
https://discourse.llvm.org/t/rfc-large-watchpoint-support-in-lldb/72116
where I will be adding support for AArch64 MASK watchpoints which watch
power-of-2 memory regions. A user might ask to watch 24 bytes, and a
MASK watchpoint stub can do this with a 32-byte MASK watchpoint if it is
properly aligned. And we need to ignore writes to the final 8 bytes of
that watched region, and not show those hits to the user.
This patch adds a new 'modify' watchpoint type and it is the default.
rdar://108234227
Reading through the Watchpoint class, I found this method
that wasn't being used properly, and a couple of ivars that
weren't used at all. Cleanup.
Differential Revision: https://reviews.llvm.org/D155768
This is a user-facing action; it is meant to focus the user's attention on
something other than the 0th frame when you stop somewhere where that's
helpful. For instance, stopping in pthread_kill after an assert will select
the assert frame.
This is not something you want to have happen internally in lldb, both
because internally you really don't want the selected frame changing out
from under you, and because the recognizers can do arbitrary work, and that
can cause deadlocks or other unexpected behavior.
However, it's not something that the current code does
explicitly after a stop has been delivered; it's expected to happen implicitly
as part of stopping. I tried changing this to call SMRF explicitly after a user
stop, but that got pretty ugly quickly.
So I added a bool to control whether to run this and audited all the current
uses to determine whether we're returning to the user or not.
Differential Revision: https://reviews.llvm.org/D148863
Watchpoint hits from lldb-server are sent in the stop info packet
as `reason:watchpoint` and `description:asciihex` keys; the
latter's asciihex holds one to three integer values. This patch
documents the purpose of those three numbers, and clarifies
that on MIPS a third number outside the range of any watched
memory region means to silently skip the watchpoint.
lldb was previously using this silently-skip-watchpoint behavior
for AArch64 as well, but in the case of AArch64 we see a watchpoint
address outside of a watched memory range when the write BEGINS
before the watched memory range but extends into it. We don't
want to silently skip these.
Differential Revision: https://reviews.llvm.org/D147816
rdar://83996471
We were checking "WasTheLastResumeForUserExpression" but that returns true even
if that expression was completed, provided we haven't run again. This uses a
better check.
This is actually fairly hard to trigger. It happens the first time you hit an
objc_exception_throw breakpoint and invoke the frame recognizer for it. But
I couldn't trigger it using a Python based frame recognizer. So I wrote a test
for the objc_exception_throw_breakpoint recognizer which should have been there
anyway... It fails (the target auto-continues) w/o this patch and succeeds with
it.
Differential Revision: https://reviews.llvm.org/D147587
Previously we only looked at the si_signo field, so you got:
```
(lldb) bt
* thread #1, name = 'a.out.mte', stop reason = signal SIGSEGV
* frame #0: 0x00000000004007f4
```
This patch adds si_code so we can show:
```
(lldb) bt
* thread #1, name = 'a.out.mte', stop reason = signal SIGSEGV: sync tag check fault
* frame #0: 0x00000000004007f4
```
The order of errno and code was incorrect in ElfLinuxSigInfo::Parse.
It was the order that a "swapped" siginfo arch would use, which for Linux,
is only MIPS. We removed MIPS Linux support some time ago.
See:
fe15c26ee2/include/uapi/asm-generic/siginfo.h (L121)
A test is added using memory tagging faults, which were the original
motivation for the changes.
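For reference, the generic (non-MIPS) Linux layout orders the leading fields signo, errno, code; a simplified sketch of that prefix (not the actual kernel header):
```
// Simplified sketch of the generic Linux siginfo field order (signo, errno,
// code); the "swapped" MIPS layout puts si_code before si_errno.
struct GenericSigInfoPrefix {
  int si_signo;
  int si_errno;
  int si_code;
};
```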
Reviewed By: JDevlieghere
Differential Revision: https://reviews.llvm.org/D146045
This reverts commit 8d024a79ea783ed3fbb5691aeaf186ad3f0a4ae9.
I accidentally included some "in progress" work that wasn't supposed to
go with this commit.
We need to step the watchpoint instruction in these cases, but when
we queued the ThreadPlanStepOverWatchpoint to do this, we didn't
make it a Controlling plan. So if you are stepping, this plan returns as
though it were a utility plan, and the stepping plan keeps going.
This only partially fixes the problem on Darwin; there's another bug
with reporting a watchpoint when we're instruction single stepping over
an instruction that triggers a watchpoint. The kernel reports the
"single step completed" but not the watchpoint hit. So this commit
also refactors the test into a part that works (at least on Darwin) and
a part that still fails.
We may have to adjust the test result expectations for other systems after
this fix.
Differential Revision: https://reviews.llvm.org/D146337
lldb was originally designed to get the watchpoint exception behavior
from the gdb remote serial protocol stub -- exceptions are either
received before the instruction executes, or after the instruction
has executed. This behavior was reported via two lldb extensions
to gdb RSP, so generic remote stubs like gdbserver or a JTAG stub,
would not tell lldb which behavior was correct, and it would default
to "exceptions are received after the instruction has executed".
Two architectures hard coded their correct "exceptions before
instruction" behavior, to work around this issue.
Most architectures have a fixed behavior of watchpoint exceptions,
and we can center that information in lldb. We can allow a remote
stub to override the default behavior via our packet extensions
if it's needed on a specific target.
This patch also separates the fetching of the number of watchpoints
from whether exceptions are before/after the insn. Currently if
lldb couldn't fetch the number of watchpoints (not really needed), it
also wouldn't get when exceptions are received, and watchpoint
handling would fail. lldb doesn't actually use the number of
watchpoints for anything beyond printing it to the user.
Differential Revision: https://reviews.llvm.org/D143215
rdar://101426626
Currently, in some cases lldb reports the stop reason as "step out" or "step over" (from thread plan completion) instead of "breakpoint", if the user breakpoint happens to be set on the same address.
Part of f08f5c9926 seems to have overwritten the internal breakpoint detection logic, so that only the last breakpoint for the current stop address is considered.
Together with step-out plans not clearing their breakpoints until they are destroyed, this creates a situation where there is a user breakpoint set for an address, but an internal breakpoint makes lldb report a plan-completion stop reason instead of the breakpoint.
This patch reverts that internal breakpoint detection logic to consider all breakpoints.
Reviewed By: jingham
Differential Revision: https://reviews.llvm.org/D140368
The synchronous callbacks are not intended to start the target running
during the callback, and doing so is flaky. This patch converts them
to being regular async callbacks, and adds some testing for sequential
reports that have caused problems in the field.
Differential Revision: https://reviews.llvm.org/D134927
This commit combines the initial commit (7c240de609af), a fix for x86_64 Linux
(3a0581501e76), and a fix for a thinko in a last-minute rewrite that I really
should have run the testsuite on.
Also, make sure that all the "I need to step over watchpoint" plans execute
before we call a public stop. Otherwise, e.g. if you have N watchpoints and
a Signal, the signal stop info will get us to stop with the watchpoints in a
half-done state.
Differential Revision: https://reviews.llvm.org/D130674