Summary:
We need `malloc` to return a larger size now that it is properly
aligned and we use many threads. The `match_any` test was also
incorrect because it assumed a 32-bit lanemask.
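As a rough sketch of the lanemask issue (the names below are
illustrative, not the actual test code), a width-agnostic check should
operate on a 64-bit mask so it stays correct on 64-wide AMDGPU waves:
```
#include <stdint.h>

// Hypothetical sketch: test lane membership against a 64-bit lanemask so
// the logic holds for both 32-wide (NVPTX / wave32) and 64-wide (wave64)
// execution.
static bool lane_is_set(uint64_t lane_mask, uint32_t lane_id) {
  return (lane_mask >> lane_id) & 1ULL; // 1ULL avoids a 32-bit shift overflow
}
```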
Summary:
Not all platforms support this function or header, but it was being
included by every test. Move it inside the `ifdef` for its only user,
aarch64.
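A minimal sketch of the pattern, assuming a guard macro in the style of
`LIBC_TARGET_ARCH_IS_AARCH64` and an illustrative header name:
```
// Only the aarch64 code path needs this header, so guard the include
// rather than pulling it into every test unconditionally.
#ifdef LIBC_TARGET_ARCH_IS_AARCH64
#include <sys/auxv.h> // illustrative: the platform-specific header in question
#endif
```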
Reverts llvm/llvm-project#95419 and relands #95358.
This PR is full of temporary fixes. After a discussion with @lntue, it
is better to avoid further changes to the CMake infrastructure for now,
as a rework of the CMake utilities will land in the future.
- In `/libc/src/__support/OSUtil`, changed `quick_exit` to just `exit`
  and put it in the namespace `LIBC_NAMESPACE::internal`.
- In `/libc/src/stdlib`, added `quick_exit`.
- Added test files for `quick_exit`.
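A rough sketch of the resulting layout (signatures are assumptions,
`LIBC_NAMESPACE` stands in for the actual versioned namespace macro, and
the handling of `at_quick_exit` callbacks is omitted):
```
namespace LIBC_NAMESPACE::internal {
// The low-level OSUtil primitive, now named exit and namespaced.
[[noreturn]] void exit(int status);
} // namespace LIBC_NAMESPACE::internal

namespace LIBC_NAMESPACE {
// The public stdlib entrypoint forwards to the internal primitive.
[[noreturn]] void quick_exit(int status) { internal::exit(status); }
} // namespace LIBC_NAMESPACE
```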
Summary:
This is a massive patch because it reworks the entire build and
everything that depends on it. This is not split up because various bots
would fail otherwise. I will attempt to describe the necessary changes
here.
This patch completely reworks how the GPU build is built and targeted.
Previously, we used a standard runtimes build and handled both NVPTX and
AMDGPU in a single build via multi-targeting. This added a lot of
divergence in the build system and prevented us from doing various
things like building for the CPU / GPU at the same time, or exporting
the startup libraries or running tests without a full rebuild.
The new approach is to handle the GPU builds as strict cross-compiling
runtimes. The first step required
https://github.com/llvm/llvm-project/pull/81557 to allow the `LIBC`
target to build for the GPU without touching the other targets. This
means that the GPU uses all the same handling as the other builds in
`libc`.
The new expected way to build the GPU libc is with
`LLVM_LIBC_RUNTIME_TARGETS=amdgcn-amd-amdhsa;nvptx64-nvidia-cuda`.
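For example, a minimal configure step might look like the following
(every flag other than the runtime targets is illustrative):
```
$ cmake ../llvm -G Ninja \
    -DLLVM_ENABLE_PROJECTS='clang;lld' \
    -DLLVM_ENABLE_RUNTIMES='libc' \
    -DLLVM_LIBC_RUNTIME_TARGETS='amdgcn-amd-amdhsa;nvptx64-nvidia-cuda'
```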
The second step was reworking how we generated the embedded GPU library
by moving it into the library install step. Where we previously had one
`libcgpu.a` we now have `libcgpu-amdgpu.a` and `libcgpu-nvptx.a`. This
patch includes the necessary clang / OpenMP changes to keep the bots
from breaking when this lands.
We unfortunately still require an `internal` target for NVPTX tests.
This is because the NVPTX target needs to do LTO for the provided
version (the offloading toolchain can handle it) but cannot use LTO
with the native toolchain that is used for building the tests.
This approach is vastly superior in every way, allowing us to treat the
GPU as a standard cross-compiling target. We can now install the GPU
utilities to do things like run the offload tests and other fun things.
Certain utilities also need to be built with
`--target=${LLVM_HOST_TRIPLE}`. I think this is a fine workaround, as
we will always assume that the GPU `libc` is a cross-build with a
functioning host.
Depends on https://github.com/llvm/llvm-project/pull/81557
Having libc_errno outside of the namespace causes versioning issues when
trying to link the tests against LLVM-libc. Most of this patch is just
moving libc_errno inside the namespace in tests. This isn't necessary in
the function implementations since those are already inside the
namespace.
This patch provides specific test macros to deal with `errno`.
This will help abstract away the differences between unit test and integration/hermetic tests in #79319.
In one case we use `libc_errno`, which is a struct; in the other case we deal directly with `errno`.
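A hedged sketch of what such a macro could look like (the guard macro
`LIBC_TEST_USE_LIBC_ERRNO` and the macro names are invented for
illustration; `ASSERT_EQ` is the test framework's assertion):
```
#include <errno.h>

// In unit tests the value lives in the libc_errno struct inside the
// namespace; in integration/hermetic tests we read the plain errno.
#ifdef LIBC_TEST_USE_LIBC_ERRNO
#define LIBC_TEST_ERRNO() (LIBC_NAMESPACE::libc_errno)
#else
#define LIBC_TEST_ERRNO() (errno)
#endif

#define ASSERT_ERRNO_EQ(VAL) ASSERT_EQ((VAL), LIBC_TEST_ERRNO())
#define ASSERT_ERRNO_SUCCESS() ASSERT_EQ(0, LIBC_TEST_ERRNO())
```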
Summary:
There is an ongoing effort to change the default AMDGPU code object
version (https://github.com/llvm/llvm-project/pull/65410). However,
this unfortunately causes problems in the LLVM LibC test suite that
lead to a hang while executing. This is most likely a bug related to
indirect call optimization, as it can be avoided without optimizations
or by manually preventing inlining in the AMDGPU startup code.
This patch explicitly sets the AMDGPU code object version to 4 for the
LibC test suite. This should unblock the efforts to move the default to
5 without breaking the test suite. This isn't a great solution, but
there is currently some time pressure to get COV5 landed and this seems
to be the easiest option.
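Concretely, pinning the version amounts to passing the code object
version flag when building the test suite's objects; roughly along
these lines (the architecture and other flags are illustrative):
```
$ clang++ --target=amdgcn-amd-amdhsa -mcpu=gfx1030 \
    -mcode-object-version=4 startup.cpp -c -o startup.o
```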
We use a simple bump-pointer allocator in the `libc` tests. If we run
out of memory we can currently return other static memory, leading to
strange failure cases. We should fail more explicitly here by returning
a null pointer instead.
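A minimal sketch of the intended behavior (buffer size and alignment
are illustrative, not the test allocator's actual values):
```
#include <stddef.h>
#include <stdint.h>

static uint8_t buffer[16384]; // fixed arena backing the test allocator
static size_t offset = 0;

void *bump_alloc(size_t size) {
  size = (size + 15) & ~size_t(15); // round up to keep 16-byte alignment
  if (offset + size > sizeof(buffer))
    return nullptr; // exhausted: fail explicitly instead of handing back
                    // unrelated static memory
  void *ptr = buffer + offset;
  offset += size;
  return ptr;
}
```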
Reviewed By: sivachandra
Differential Revision: https://reviews.llvm.org/D150529
This patch adds the necessary hacks to support global constructors and
destructors. This is an incredibly hacky process, caused primarily by
the fact that Nvidia provides no binary tools and very little linker
support. We first had to emit references to these functions and
their priority in D149451. Then we dig them out of the module once it's
loaded to manually create the list that the linker should have made for
us. This patch also contains a few Nvidia specific hacks, but it passes
the test, albeit with a stack size warning from `ptxas` for the
callback. But this should be fine given the resource usage of a common
test.
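A loose sketch of the loader-side idea (the types and names here are
invented for illustration; the real entries are dug out of the loaded
module):
```
#include <algorithm>
#include <cstdint>
#include <vector>

struct InitEntry {
  uint32_t priority; // emitted alongside the function reference in D149451
  void (*func)();
};

// Build the ordering the linker would normally have produced in
// .init_array: sort by priority, then invoke each constructor in turn.
void run_global_ctors(std::vector<InitEntry> &ctors) {
  std::sort(ctors.begin(), ctors.end(),
            [](const InitEntry &lhs, const InitEntry &rhs) {
              return lhs.priority < rhs.priority;
            });
  for (const InitEntry &ctor : ctors)
    ctor.func();
}
```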
This also adds a dependency on LLVM to the NVPTX loader, which hopefully doesn't
cause problems with our CUDA buildbot.
Depends on D149451
Reviewed By: tra
Differential Revision: https://reviews.llvm.org/D149527
The packaged version of the `libc` library does not depend on the CUDA
installation because it only uses `clang` and emits LLVM-IR. However,
for testing we directly need the CUDA toolkit to emit and execute the
files. This patch explicitly passes `--cuda-path` to the relevant
compilations for NVPTX testing.
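For example, the test compilation might look roughly like this (the
paths and flags besides `--cuda-path` are illustrative):
```
$ clang++ --target=nvptx64-nvidia-cuda -march=sm_70 \
    --cuda-path=/usr/local/cuda crt1.o test.o -o image
```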
Reviewed By: tra
Differential Revision: https://reviews.llvm.org/D147653
This patch enables integration tests running on the GPU. This uses the
RPC interface implemented in D145913 to compile the necessary
dependencies for the integration test object. We can then use this to
compile the objects for the GPU directly and execute them using the AMD
HSA loader combined with its RPC server. For example, the following
steps compile and execute the integration tests.
```
$ clang++ --target=amdgcn-amd-amdhsa -mcpu=gfx1030 -nostdlib -flto -ffreestanding \
crt1.o io.o quick_exit.o test.o rpc_client.o args_test.o -o image
$ ./amdhsa_loader image 1 2 5
args_test.cpp:24: Expected 'my_streq(argv[3], "3")' to be true, but is false
```
This currently only works with a single threaded client implementation
running on AMDGPU. Further work will implement multiple clients for AMD
and the ability to run on NVPTX as well.
Depends on D145913
Reviewed By: sivachandra, JonChesterfield
Differential Revision: https://reviews.llvm.org/D146256
The integration tests require the C memory functions as the compiler may
emit calls to them directly. The tests normally use the `__internal__`
variant that is built for testing, but these memory functions were
linked directly to preserve the entrypoint. Instead, we forward declare
the internal versions and map the entrypoints to them manually inside
the integration test. This allows us to use the internal versions of
these files like the rest of the test objects.
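An illustrative sketch of the mapping (the declarations below are
assumptions about the patch's shape, not its exact contents, and
`LIBC_NAMESPACE` stands in for the actual versioned namespace macro):
```
#include <stddef.h>

namespace LIBC_NAMESPACE {
// Forward declarations of the internal test-built variants.
void *memcpy(void *, const void *, size_t);
void *memset(void *, int, size_t);
} // namespace LIBC_NAMESPACE

// Map the C entrypoints the compiler may call to the internal versions
// so the integration test links the same objects as the other tests.
extern "C" void *memcpy(void *dst, const void *src, size_t size) {
  return LIBC_NAMESPACE::memcpy(dst, src, size);
}
extern "C" void *memset(void *dst, int value, size_t count) {
  return LIBC_NAMESPACE::memset(dst, value, count);
}
```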
Reviewed By: sivachandra
Differential Revision: https://reviews.llvm.org/D146177
Fixes #61355. The __dso_handle declaration was incorrectly introduced into the
startup
objects during the integration test cleanup which moved the integration tests
away from using an artificial sysroot to using -nostdlib. Having it in the
startup creates the duplicate symbol error when one does not use -nostdlib.
Since this is an integration-test-only problem, it makes sense to keep
it in the integration test itself.
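Sketched, the idea is simply to define the symbol alongside the
integration test rather than in the startup objects (this is an
assumption about the shape of the fix, not its literal code):
```
// Defined in the integration test harness rather than in crt1.o, so
// builds that do not use -nostdlib pick up the C runtime's own
// __dso_handle without a duplicate symbol error.
extern "C" void *__dso_handle = nullptr;
```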
Differential Revision: https://reviews.llvm.org/D145898
This is part of the effort to move all test-related pieces into the
`test` directory. This makes excluding test-related pieces
straightforward when LLVM_INCLUDE_TESTS is OFF. Future patches will
also move the MPFR wrapper and testutils into the `test` directory.