The architectures provided to skipIf / expectedFail are regular
expressions (see _match_decorator_property() in decorators.py), so on
Darwin systems "arm64" would match the skips for "arm" (32-bit Linux).
Update these to "arm$" to prevent this, and also update three tests
(TestBuiltinFormats.py, TestCrossDSOTailCalls.py,
TestCrossObjectTailCalls.py) that were being skipped for arm64 via this
behavior and need to remain skipped or they will fail.
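For illustration, a minimal sketch of the regex behavior (whether
decorators.py uses match or search, the "$" anchor fixes both cases):

    import re

    # Unanchored, "arm" matches any string starting with it, so arm64
    # wrongly picked up the arm (32-bit Linux) skips.
    assert re.match("arm", "arm64") is not None
    # Anchored with "$", arm64 no longer matches...
    assert re.match("arm$", "arm64") is None
    # ...while 32-bit arm still does.
    assert re.match("arm$", "arm") is not None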
This was motivated by the new TestDynamicValue.py test, which has an
expected-fail for arm; the test was passing on arm64 Darwin, resulting
in failures on the CIs.
`SBDebugger.Create()` returns a debugger with only the host platform
in its platform list. If the test suite is running for a remote
platform, that platform should be explicitly added and selected in the
new debugger created within the test; otherwise the test will fail
because the host platform may not be able to launch the built binary.
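A minimal sketch of what that looks like, assuming a hypothetical
remote platform name and connection URL:

    import lldb

    dbg = lldb.SBDebugger.Create()  # platform list holds only the host platform
    # Placeholders: the platform name and URL depend on the test configuration.
    platform = lldb.SBPlatform("remote-linux")
    platform.ConnectRemote(lldb.SBPlatformConnectOptions("connect://192.168.0.2:5432"))
    dbg.SetSelectedPlatform(platform)  # adds it to the debugger and selects it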
In async mode, the test terminates sooner than it should
(`run_to_source_breakpoint` does not work in this mode), and then the
test crashes due to #76057. Most of the time the test does not show up
as a failure because it is already XFAILed, but the crash still
registers as a failure.
These are expected to fail but sometimes crash during the test, leaving
them as unresolved.
Same failure message and likely same cause as the other test in this file.
TestGlobalModuleCache.py, a recently added test, tries to update a
source file in the build directory, but it assumes the file is writable.
In our distributed build and test system, this is not always true, so
the test often fails with a write permissions error.
This change fixes that by marking the file writable before attempting
to write to it.
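A minimal sketch of the fix, with source_path as a placeholder for the
copied source file in the build directory:

    import os
    import stat

    source_path = "main.cpp"  # placeholder
    # The copy may have arrived read-only, so add the owner-write bit first.
    os.chmod(source_path, os.stat(source_path).st_mode | stat.S_IWUSR)
    with open(source_path, "w") as f:
        f.write("// updated source\n")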
This reverts commit 01c4ecb7ae21a61312ff0c0176c0ab9f8656c159,
d14d52158bc444e2d036067305cf54aeea7c9edb and
a756dc4724a279d76898bacd054a04832b02caa8.
This removes the logging and workaround I added earlier,
and puts back the skip for Arm/AArch64 Linux.
I've not seen it fail on AArch64 since, but let's not create
more noise if it does.
I've written up the issue as https://github.com/llvm/llvm-project/issues/76057.
It's something to do with trying to destroy a process while a thread is
doing a single step, so my workaround wouldn't have worked in any case.
It needs a more involved fix.
And remove the workaround I was trying, as this logging may prove what
the actual issue is. I think it is that the thread plan map in Process
is cleared before the threads are destroyed. So Thread::ShouldStop
could be getting the current plan, then the plan map is cleared, then
Thread::ShouldStop decides based on that plan to pop a plan from the
now-empty stack.
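A toy Python model of that suspected ordering, purely illustrative and
not lldb code:

    plan_stack = ["base_plan", "step_plan"]

    current_plan = plan_stack[-1]    # Thread::ShouldStop fetches the current plan
    plan_stack.clear()               # Process teardown clears the thread plan map
    if current_plan == "step_plan":  # the decision is based on the stale plan...
        plan_stack.pop()             # ...popping from a now-empty stack: IndexError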
This reverts commit 481bb62e50317cf20df9493aad842790162ac3e7 and
71b4d7498ffecca5957fa0a63b1abf141d7b8441, along with the logging
and assert I had added to the test previously.
Now that I've caught it failing on Arm:
https://lab.llvm.org/buildbot/#/builders/17/builds/46598
Now I have enough to investigate, so skip the test on the affected
platforms while I do that.
This reverts commit 35dacf2f51af251a74ac98ed29e7c454a619fcf1.
And relands the original change with two additions so I can debug the failure on Arm/AArch64:
* Enable lldb step logging in the tests.
* Assert that the current plan is not the base plan at the spot I believe is calling PopPlan.
These will be removed and replaced with a proper fix once I see some
failures on the bots; I couldn't reproduce it locally.
(also, no sign of it on the x86_64 bot)
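A minimal sketch of the first addition, assuming a standard API test
built on lldbsuite's TestBase; the log file name is a placeholder:

    import lldb
    from lldbsuite.test.lldbtest import TestBase

    class StepLoggingExample(TestBase):
        def setUp(self):
            TestBase.setUp(self)
            # Route the "lldb step" log channel to a file in the build
            # directory so thread-plan activity is captured on the bots.
            self.runCmd("log enable -f %s lldb step"
                        % self.getBuildArtifact("step.log"))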
When you debug a binary, then change and rebuild it and rerun in lldb
without quitting lldb, the Modules in the Global Module Cache for the
old binary (and its .o files, if used) become "unreachable". Nothing in
lldb is holding them alive, and they've already been unlinked. lldb will
properly discard them if no other Target references them.
However, this only works in simple cases at present. If you have several
Targets that reference the same modules, it's pretty easy to end up
stranding Modules that are no longer reachable, and if you use a
sequence of SBDebuggers, unreachable modules can also get stranded. If
you run a long-lived lldb process and are iteratively developing on a
large code base, lldb's memory gets filled with useless Modules.
This patch adds a test for the mode that currently works:
(lldb) target create foo
(lldb) run
<rebuild foo outside lldb>
(lldb) run
In that case, we do delete the unreachable Modules.
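The test can watch for stranded Modules through the SB API's global
module counter; a minimal sketch (the surrounding target setup is
elided):

    import lldb

    # Real static API: counts every Module currently alive in the global
    # module cache, reachable or not.
    count = lldb.SBModule.GetNumberAllocatedModules()
    print("Modules alive in the global module cache:", count)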
The next step will be to add tests for the cases where we fail to do
this, then see how to safely/efficiently evict unreachable modules in
those cases as well.