Before changing the compiler version in
https://github.com/llvm/llvm-project/pull/108761, we first need to
upgrade the HEAD version to `Clang-20` in the Docker files and push
new builder images to the CI.
When bringing up a new cross compiler from scratch, we build
libunwind/libcxx in a setup where the toolchain is incomplete and unable
to perform the normal linker checks; this requires a few special cases
in the CMake files.
We simulate that scenario by removing the libc++ headers, libunwind and
libc++ libraries from the installed toolchain.
We need to set CMAKE_CXX_COMPILER_WORKS since CMake fails to probe the
compiler. We need to set CMAKE_CXX_COMPILER_TARGET, since LLVM's
heuristics fail when CMake hasn't been able to probe the environment
properly. (This is normal; one has to set those options when setting up
such a toolchain from scratch.)
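For illustration, the configure step boils down to something along these
lines (a rough sketch only; the target triple and directory layout are
placeholders, not the CI's exact values):
```
# Minimal sketch of configuring the runtimes with an incomplete toolchain;
# the triple and directory names are placeholders.
cmake -G Ninja -S runtimes -B build \
  -DLLVM_ENABLE_RUNTIMES="libunwind;libcxxabi;libcxx" \
  -DCMAKE_C_COMPILER=clang \
  -DCMAKE_CXX_COMPILER=clang++ \
  -DCMAKE_C_COMPILER_TARGET=armv7m-none-eabi \
  -DCMAKE_CXX_COMPILER_TARGET=armv7m-none-eabi \
  -DCMAKE_C_COMPILER_WORKS=ON \
  -DCMAKE_CXX_COMPILER_WORKS=ON
```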
This adds CI coverage for these build scenarios, which are otherwise
seldom exercised by any regular build flow (but are essential when
setting up a cross compiler from scratch).
We always strive to test libc++ as close as possible to the way we are
actually shipping it. This was approximated reasonably well by setting
up the minimal driver flags when running the test suite; however, we
were still running the test suite against the library located in the
build directory.
This patch improves the situation by installing the library (the
headers, the built library, modules, etc) into a fake location and then
running the test suite against that fake "installation root".
This should open the door to getting rid of the temporary copy of the
headers we make during the build process, however this is left for a
future improvement.
Note that this adds quite a bit of verbosity whenever running the test
suite, because we install the headers beforehand every time. We should
be able to silence this, but CMake doesn't currently give us a way to do
so; see https://gitlab.kitware.com/cmake/cmake/-/issues/26085.
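As a rough sketch of the idea (the prefix and build targets below are
illustrative, not the patch's exact mechanism), this amounts to staging
an install tree and running the tests against it:
```
# Stage an installation into a throwaway prefix, then run the tests so
# that headers and libraries resolve from that fake installation root.
cmake -G Ninja -S runtimes -B build \
  -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi" \
  -DCMAKE_INSTALL_PREFIX="$PWD/build/fake-install-root"
ninja -C build install-cxx install-cxxabi
ninja -C build check-cxx
```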
This patch decouples macOS CI testing from BuildKite, which makes the
maintenance of macOS CI easier and more accessible to all contributors.
Right now, the macOS CI is running entirely on machines owned by the
LLVM Foundation with only a small set of contributors having direct
access to them. In particular, updating these machines is currently
a very time-consuming manual process that requires taking the machines
offline, and using GitHub-provided instances makes that an order of
magnitude easier.
The story for performing back-deployment testing still needs to be
figured out, so for now we are retaining some jobs under BuildKite.
The base of android-buildkite-builder is buildkite-builder, not
android-build-base. android-build-base is only used for its /opt/android
directory, so move the Docker installation step into
android-buildkite-builder.
Install bzip2 for extracting ndk_platform.tar.bz2.
Add "set -e" to RUN heredocs to catch failing commands.
The Android Emulator has started printing this message, so pass the
`-no-metrics` option:
```
##############################################################################
## WARNING - ACTION REQUIRED ##
## Consider using the '-metrics-collection' flag to help improve the ##
## emulator by sending anonymized usage data. Or use the '-no-metrics' ##
## flag to bypass this warning and turn off the metrics collection. ##
## In a future release this warning will turn into a one-time blocking ##
## prompt to ask for explicit user input regarding metrics collection. ##
## ##
## Please see '-help-metrics-collection' for more details. You can use ##
## '-metrics-to-file' or '-metrics-to-console' flags to see what type of ##
## data is being collected by emulator as part of usage statistics. ##
##############################################################################
```
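A hedged example of the corresponding invocation (only `-no-metrics` is
the flag added here; the AVD name and the other options are
placeholders):
```
emulator -avd libcxx-ci-avd -no-window -no-audio -no-metrics
```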
In order to test libc++ under the "Apple System Library" configuration,
we need to run the tests using DYLD_LIBRARY_PATH. This is required
because libc++ gets an install_name of /usr/lib when built as a system
library, which means that we must override the copy of libc++ used by
the whole process. This effectively reverts 2cf2f1b, which was the wrong
solution for the problem I was having.
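Concretely, running a test executable under this scheme looks roughly
like the following (paths are placeholders):
```
# The just-built libc++.dylib shadows the system copy (install_name
# /usr/lib) for the entire test process.
DYLD_LIBRARY_PATH="$BUILD_DIR/lib" ./test.exe
```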
Of course, this assumes that the just-built libc++ is sufficient to
replace the system library, which is not actually the case
out-of-the-box. Indeed, the system library contains a few symbols that
are not provided by the upstream library, leading to undefined symbols
when replacing the system library by the just-built one.
To solve this problem, we separately build shims that provide those
missing symbols and we manually link against them when we build
executables in the tests. While this is somewhat brittle, it provides a
localized and unintrusive way to allow testing the Apple system
configuration in an upstream environment, which has been a frequent
request.
We were not making any distinction between e.g. the "Apple-flavored"
libc++ built from trunk and the system-provided standard library on
Apple platforms. For example, any test that would be XFAILed on a
back-deployment target would unexpectedly pass when run on that
deployment target against the tip of trunk Apple-flavored libc++. In
reality, that test would be expected to pass because we're running
against the latest libc++, even if it is Apple-flavored.
To solve this issue, we introduce a new feature that describes whether
the Standard Library in use is the one provided by the system by
default, and that notion is different from the underlying standard
library flavor. We also refactor the existing Lit features to make a
distinction between availability markup and the library we're running
against at runtime, since conflating the two limits the flexibility of what we can
express in the test suite. Finally, we refactor some of the
back-deployment versions that were incorrect (such as thinking that LLVM
10 was introduced in macOS 11, when in reality macOS 11 was synced with
LLVM 11).
Fixes #82107
This patch makes a few adjustments to the way we run the tests in the
Apple configuration on macOS:
First, we stop using DYLD_LIBRARY_PATH. Using that environment variable
leads to libc++.dylib being replaced by the just-built one for the whole
process, and that assumes compatibility between the system-provided
dylib and the just-built one. Unfortunately, that is not the case
anymore due to typed allocation, which is only available in the system
one. Instead, we want to layer the just-built libc++ on top of the
system-provided one, which seems to be what happens when we set an
rpath instead.
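The rpath-based approach looks roughly like this (flags and paths are
illustrative only):
```
# Embed an rpath so that this executable prefers the just-built library,
# without replacing libc++ for the whole process.
clang++ test.cpp -o test.exe \
  -L "$BUILD_DIR/lib" -lc++ \
  -Wl,-rpath,"$BUILD_DIR/lib"
./test.exe
```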
Second, add a missing XFAIL for a std::print test that didn't work as
expected when building with availability annotations enabled. When we
enable these annotations, std::print falls back to a non-unicode and
non-terminal output, which breaks the test.
Since switching the Windows CI environment over to GitHub Actions
runners, the mingw tests run in a setup where the default compiler
binary is a mingw-targeting compiler, so we don't need to specify a
custom executable name.
For the i686 tests, we still specify a custom compiler name, in order to
target i686 instead of x86_64.
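For illustration only (the compiler binary name and builder name below
are assumptions, not the exact CI configuration):
```
# Most mingw jobs can rely on the default, mingw-targeting clang...
clang++ --version
# ...while the i686 job still names a specific compiler to target i686.
CC=i686-w64-mingw32-clang CXX=i686-w64-mingw32-clang++ \
  libcxx/utils/ci/run-buildbot mingw-dll-i686
```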
This is needed as a workaround to make sure the link later succeeds. I
don't know the reason for it, but it is definitely needed.
https://github.com/llvm/llvm-project/pull/89234 intends to correct the
triple normalisation for -none-, which means that Clang prior to 19 and
Clang 19 and above will give different answers and therefore different
library paths.
I don't want to bootstrap a clang just for libcxx CI, or require that
anyone building for Arm do the same, so ask the compiler what the triple
should be.
This approach is compatible with both Clang 17 and Clang 19, so it will
keep working when we do update to the newer version.
I'm assuming $CC is what anyone locally would set to override the
compiler, and `cc` is the binary name in our CI containers. It's not
perfect but it should cover most use cases.
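In shell terms, the query is roughly:
```
# Ask whichever compiler is in use for its default target triple;
# $CC is the usual local override, `cc` is the binary in our CI containers.
TARGET_TRIPLE="$("${CC:-cc}" -print-target-triple)"
echo "$TARGET_TRIPLE"
```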
If LLVM is configured with -DLLVM_DEFAULT_TARGET_TRIPLE, or compiler-rt
is configured with -DCOMPILER_RT_DEFAULT_TARGET_TRIPLE, and the argument
is not a normalized triple (such as a Debian-style vendor-less triple),
clang will try to find libclang_rt in lib/<normalized_triple>, while
libclang_rt is actually placed into lib/<triple_arg>.
Let's also place libclang_rt into lib/<normalized_triple>.
`libcxx/utils/ci/run-buildbot` is also updated to use
`armv7m-none-unknown-eabi` as the normalized triple instead of the
current `armv7m-none-eabi`.
ChangeLog of this PR:
This patch has been applied and reverted twice:
1. The first try was https://github.com/llvm/llvm-project/pull/88407,
which was then found to cause some CI failures:
https://lab.llvm.org/buildbot/#/builders/98/builds/36366
It was then resolved by another commit (1693009679), which left
https://lab.llvm.org/buildbot/#/builders/77/builds/36372 failing:
`COMPILER_RT_DEFAULT_TARGET_TRIPLE` was overwritten without taking
`CACHE` into account.
2. The second try, https://github.com/llvm/llvm-project/pull/88835,
resolved https://lab.llvm.org/buildbot/#/builders/77/builds/36372,
and in fact only one `execute_process` is needed.
We then found some other CI failures:
https://github.com/mstorsjo/llvm-mingw/actions/runs/8730621159
https://buildkite.com/llvm-project/libcxx-ci/builds/34897#018eec06-612c-47f1-9931-d3bd88bf7ced
These were due to a misunderstanding of `-print-effective-triple`, which
outputs `thumbv7-w64-windows-gnu` for `armv7-w64-windows-gnu` and other
thumb-enabled Arm triples. In fact we should use `-print-target-triple`.
For armv7m-picolibc, `armv7m-none-eabi` is hardcoded in
libcxx/utils/ci/run-buildbot, while in fact `armv7m-none-unknown-eabi`
is the real normalized triple.
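To illustrate the difference described above (the printed values are the
ones reported there; the -none- normalization additionally depends on
having a Clang that includes https://github.com/llvm/llvm-project/pull/89234):
```
clang --target=armv7-w64-windows-gnu -print-effective-triple   # thumbv7-w64-windows-gnu
clang --target=armv7-w64-windows-gnu -print-target-triple      # armv7-w64-windows-gnu
clang --target=armv7m-none-eabi      -print-target-triple      # armv7m-none-unknown-eabi
```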
This enables testing of the LLDB libc++-specific data formatters.
This is enabled in the bootstrap build, since building LLDB requires
Clang and this is quite expensive. Adding this test changes the build
time from 31 to 34 minutes.
In order to test the LLDB data formatters, make is required and SWIG
needs to be updated to version 4.
As a drive-by, this patch sorts the entries and removes some duplicates.
The OSS-Fuzz build has been broken for a while because of changes to the
CMake configuration. This patch fixes the issue by defaulting to
libunwind as the unwinding runtime.
The documentation mentions manually pushing Docker images to the CI. The
preferred way is to use the proper GitHub action. This updates the
documentation.
---------
Co-authored-by: Will Hawkins <whh8b@obs.cr>
This reverts commit 4109b18ee5de1346c2b89a5c89b86bae5c8631d3.
It looks like the automatic detection has false positives. This broke
the following build https://github.com/llvm/llvm-project/pull/85262
Yesterday, one of the issues the libc++ builders encountered was that
they were using a client that was too old; too old to even update
automatically.
To get things working, I had to push a testing image with this change.
The testing image has been working for 12 hours now, so it's time to
commit to it :-)
Revert "Revert #76246 and #76083"
This reverts commit 5c150e7eeba9db13cc65b329b3c3537b613ae61d.
Adds a small fix that should properly disable the tests on Windows.
Unfortunately the original poster has not provided feedback and the
original patch did not fail in the LLVM CI infrastructure.
Modules are known to fail on Windows due to non-compliance of the
C library. Currently, not having this patch prevents testing on other
platforms.
These cause test build failures on Windows.
This reverts the following commits:
57ca74843586c9a93c425036c5538aae0a2cfa60
d06ae33ec32122bb526fb35025c1f0cf979f1090
This removes the entire modules testing infrastructure.
The current infrastructure uses CMake to generate the std and std.compat
module. This requires quite a bit of plumbing and uses CMake. Since
CMake introduced module support in CMake 3.26, modules have a higher
CMake requirement than the rest of the LLVM project. (The LLVM project
requires 3.20.) The main motivation for this approach was how libc++
generated its modules. Every header had its own module partition. This
was changed to improve performance and now only two modules remain. The
code to build these can be manually crafted.
A followup patch will reenable testing modules, using a different
approach.
The updated picolibc version has an `isblank` function with external
linkage. This is required for C++ modules support.
This should solve all the problems reported in #76980, but
we'll wait to validate this with the modules build without
closing that issue.
I recently came across LIBCXXABI_USE_LLVM_UNWINDER and was surprised to
notice it was disabled by default. Since we build libunwind by default
and ship it in the LLVM toolchain, it would seem to make sense that
libc++ and libc++abi rely on libunwind for unwinding instead of using
the system-provided unwinding library (if any).
Most importantly, using the system unwinder implies that libc++abi is
ABI compatible with that system unwinder, which is not necessarily the
case. Hence, it makes a lot more sense to instead default to using the
known-to-be-compatible LLVM unwinder, and let vendors manually select a
different unwinder if desired.
As a follow-up change, we should probably apply the same default to
compiler-rt.
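As a sketch, vendors who want the previous behaviour can still select a
different unwinder explicitly when configuring the runtimes (other
options elided):
```
cmake -G Ninja -S runtimes -B build \
  -DLLVM_ENABLE_RUNTIMES="libunwind;libcxxabi;libcxx" \
  -DLIBCXXABI_USE_LLVM_UNWINDER=OFF    # the default is now ON
```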
Differential Revision: https://reviews.llvm.org/D150897
Fixes #77662
rdar://120801778
This patch adds a configuration of the libc++ test suite that enables
optimizations when building the tests. It also adds a new CI
configuration to exercise this on a regular basis. This is added in the
context of [1], which requires building with optimizations in order to
hit the bug.
[1]: https://github.com/llvm/llvm-project/issues/68552
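As a hedged sketch, a local run of such a configuration could look like
the following, assuming the test suite exposes an `optimization`
parameter for this purpose (the parameter name is an assumption here):
```
# Run the libc++ tests with optimizations enabled for the test binaries.
./build/bin/llvm-lit -sv libcxx/test --param optimization=speed
```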