Summary:
This function can easily be implemented by forwarding it to the host
process. It shows up in a few places where we might want to test the
GPU, so it should be provided. Also, I find the idea of the GPU
offloading work to the CPU via `system` very funny.
Summary:
The `scanf` function has a "system file" configuration, which is pretty
much what the GPU implementation does at this point. So we should be
able to use it in much the same way.
Since new headergen is now the default for building LLVM-libc, the docs
need to be updated to reflect that. While I was editing those docs, I
took a quick pass at updating other out-of-date pages.
Summary:
We can enable the sscanf function on the GPU now. This required adding
the configs to the scanf list so that the GPU build does not do float
conversions.
Summary:
I forgot that the OpenMP tests still look for this, so I am reverting
for now until I can make a fix.
This reverts commit c1c6ed83e9ac13c511961e5f5791034a63168e7e.
Summary:
Previously, the GPU built the `libc` in a fat binary version that was
used to pass this to the link job in offloading languages like CUDA or
OpenMP. This was mostly required because NVIDIA couldn't consume the
standard static library version. Recent patches have now created the
`clang-nvlink-wrapper` which lets us do that. Now, the C library is just
included implicitly by the toolchain (or passed with -Xoffload-linker
-lc).
This code can now be fully removed, which heavily simplifies the build
(and removes some bugs and garbage files I've encountered).
Summary:
This patch implements `clock_gettime` using the monotonic clock. This
allows users to get time elapsed at nanosecond resolution. This is
primarily to facilitate compiling the `chrono` library from `libc++`.
For this reason we provide both `CLOCK_MONOTONIC`, which we can
implement with the GPU's global fixed-frequency clock, and
`CLOCK_REALTIME`, which we cannot. The latter is provided just to make
people who use this header happy; it will always return failure.
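The shape of the implementation is roughly the following sketch;
`gpu_read_clock` and `gpu_clock_freq` are hypothetical stand-ins for the
target's timer intrinsic and tick rate, not libc's real internal names:

```c++
#include <stdint.h>
#include <time.h>

// Hypothetical stand-ins for the GPU's fixed-frequency timer and its rate;
// gpu_clock_gettime sketches the shape of the real entrypoint.
extern "C" uint64_t gpu_read_clock(); // wraps the target's clock intrinsic
extern "C" uint64_t gpu_clock_freq;   // ticks per second on this target

int gpu_clock_gettime(clockid_t clockid, struct timespec *ts) {
  // Only the monotonic clock is backed by real hardware; CLOCK_REALTIME
  // always fails, as described above.
  if (clockid != CLOCK_MONOTONIC || ts == nullptr)
    return -1;
  uint64_t ticks = gpu_read_clock();
  ts->tv_sec = static_cast<time_t>(ticks / gpu_clock_freq);
  ts->tv_nsec = static_cast<long>((ticks % gpu_clock_freq) * 1000000000ULL /
                                  gpu_clock_freq);
  return 0;
}
```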
Summary:
The `errno` variable is expected to be `thread_local` by the standard.
However, the GPU targets do not support `thread_local` and implementing
that would be a large endeavor. Because of that, we previously didn't
provide the `errno` symbol at all. However, to build some programs we at
least need to be able to link against `errno`. Many things that would
normally set `errno` currently ignore it completely (e.g. stdio), but
correct C programs still need to be able to link against the symbol.
For this purpose this patch exports the `errno` symbol as a simple
global. Internally, it will be updated atomically so it's at least not
racy; externally, synchronization is on the user. I've updated the
documentation to say as much. This is required to get `libc++` to
build.
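A minimal sketch of the shape this takes (`errno_global` and `set_errno`
are illustrative names, not the actual internals):

```c++
// A plain global rather than the `thread_local` the standard expects.
extern "C" {
int errno_global = 0; // exported under the name `errno`; name illustrative
}

namespace internal {
// Internal libc code stores through an atomic builtin so concurrent device
// threads at least don't race with one another; any ordering beyond that is
// the user's responsibility, as the documentation now states.
inline void set_errno(int value) {
  __atomic_store_n(&errno_global, value, __ATOMIC_RELAXED);
}
} // namespace internal
```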
Summary:
Straightforward RPC implementation of the `remove` function for the GPU.
Copies over the string and calls `remove` on it, passing the result
back. This is required for building some `libc++` functionality.
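The pattern, simulated in-process here so the sketch runs on a host (the
real version ships the buffer through an RPC port to the server):

```c++
#include <cstdio>
#include <cstring>

// The "server" side: the host just runs the real remove() and reports back.
static int server_handle_remove(const char *path) { return std::remove(path); }

// The "client" side: copy the string into the transport buffer, invoke the
// handler, and hand the host's result back to the caller.
static int gpu_remove(const char *path) {
  char packet[512]; // stand-in for the RPC packet buffer
  std::strncpy(packet, path, sizeof(packet) - 1);
  packet[sizeof(packet) - 1] = '\0';
  return server_handle_remove(packet);
}

int main() {
  if (std::FILE *f = std::fopen("scratch.txt", "w"))
    std::fclose(f);
  return gpu_remove("scratch.txt"); // 0 on success, just like remove()
}
```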
Summary:
I initially didn't report these as supported because they didn't
provide the expected behavior and were very wasteful. A recent patch
moved them to a lock-free atomic implementation, so they can now
actually be used.
Summary:
After the overhaul of the GPU build, the documentation pages were a
little stale. This updates them with more in-depth information on
building the GPU runtimes and using them. The usage page specifically
goes through the differences between the offloading and direct
compilation modes.
Summary:
This directory is left over from when we handled both AMDGPU and NVPTX
in the same build and merged them into a pseudo triple. Now the only
thing it contains is the RPC server header. This gets rid of the
directory, but since the header now lives in the base install directory
we should make it clear that it's an LLVM libc header.
Summary:
This is a massive patch because it reworks the entire build and
everything that depends on it. This is not split up because various bots
would fail otherwise. I will attempt to describe the necessary changes
here.
This patch completely reworks how the GPU build is built and targeted.
Previously, we used a standard runtimes build and handled both NVPTX and
AMDGPU in a single build via multi-targeting. This added a lot of
divergence in the build system and prevented us from doing various
things like building for the CPU / GPU at the same time, or exporting
the startup libraries or running tests without a full rebuild.
The new approach is to handle the GPU builds as strict cross-compiling
runtimes. The first step required
https://github.com/llvm/llvm-project/pull/81557 to allow the `LIBC`
target to build for the GPU without touching the other targets. This
means that the GPU uses all the same handling as the other builds in
`libc`.
The new expected way to build the GPU libc is with
`LLVM_LIBC_RUNTIME_TARGETS=amdgcn-amd-amdhsa;nvptx64-nvidia-cuda`.
The second step was reworking how we generated the embedded GPU library
by moving it into the library install step. Where we previously had one
`libcgpu.a` we now have `libcgpu-amdgpu.a` and `libcgpu-nvptx.a`. This
patch includes the necessary clang / OpenMP changes to make that not
break the bots when this lands.
We unfortunately still require that the NVPTX target has an `internal`
target for tests. This is because the NVPTX target needs to do LTO for
the provided version (the offloading toolchain can handle it) but cannot
use it for the native toolchain which is used for making tests.
This approach is vastly superior in every way, allowing us to treat the
GPU as a standard cross-compiling target. We can now install the GPU
utilities and do things like run the offload tests, among other fun
things.
Certain utilities also need to be built with
`--target=${LLVM_HOST_TRIPLE}`. I think this is a fine workaround, as we
will always assume that the GPU `libc` is a cross-build with a
functioning host.
Depends on https://github.com/llvm/llvm-project/pull/81557
Summary:
We previously had to disable these string functions because they were
not compatible with the definitions coming from the GNU / host
environment. The GPU, when exporting its declarations, has the very
difficult requirement of being compatible with the host environment,
since both sides of the compilation need to agree on definitions and on
what's present.
This patch more or less gives up and just copies the definitions as
expected by `glibc` when they are provided that way; otherwise we fall
back to the accepted way. This is the alternative to an existing PR
which instead disables GCC's handling.
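The wrapper-header pattern looks roughly like this, using `strchr` as an
illustration (the exact attribute spelling glibc uses differs):

```c++
// In C++ mode glibc declares const-correct overloads of several string
// functions. The wrapper header now mirrors whatever the host environment
// declared instead of fighting it, falling back to the standard C signature.
#if defined(__GLIBC__) && defined(__cplusplus)
extern "C++" {
const char *strchr(const char *__s, int __c) __asm__("strchr");
char *strchr(char *__s, int __c) __asm__("strchr");
}
#else
extern "C" char *strchr(const char *__s, int __c);
#endif
```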
Summary:
This function closely follows the pattern of all the other
functions. That is, we make a new opcode and forward the call to the
host. However, this also required modifying the test somewhat. It seems
that not all `libc` implementations follow the same error rules as are
tested here, and it is not explicit in the standard, so we simply
disable these EOF checks when targeting the GPU.
Summary:
These wrapper headers need to work around things in the standard
headers. The existing workarounds didn't correctly handle the macros for
`isascii` and `toascii`. Additionally, `memrchr` can't be used because
it has a different declaration for C++ mode. Fix this so it can be
compiled.
Summary:
This patch adds the necessary entrypoints to handle the `fseek`,
`fflush`, and `ftell` functions. These are all very straightforward; we
simply make RPC calls to the associated function on the other end.
Implementing it this way allows us to more or less borrow the state of
the stream from the server as we intentionally maintain no internal
state on the GPU device. However, this does not implement `errno`
functionality, so it must be ignored.
Summary:
This patch implements the `fgets`, `getc`, `fgetc`, and `getchar`
functions on the GPU. Their implementations are straightforward enough.
One thing worth noting is that the implementation of `fgets` will be
extremely slow due to the high latency of reading a single char. A faster
solution would be to make a new RPC call to call `fgets` (due to the
special rule that newline or null breaks the stream). But this is left
out because performance isn't the primary concern here.
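A host-runnable sketch of the shape the implementation takes; on the
GPU, each `getc` below is a full RPC round trip, which is where the
slowness comes from:

```c++
#include <cstdio>

char *fgets_sketch(char *buf, int size, std::FILE *stream) {
  if (size < 1)
    return nullptr;
  int i = 0;
  while (i < size - 1) {
    int c = std::getc(stream); // one RPC round trip per character on the GPU
    if (c == EOF)
      break;
    buf[i++] = static_cast<char>(c);
    if (c == '\n')
      break; // the special rule: a newline ends the read early
  }
  if (i == 0)
    return nullptr; // EOF before anything was read
  buf[i] = '\0';
  return buf;
}
```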
Summary:
This patch simply adds the necessary config to enable qsort and bsearch
on the GPU. It is *highly* unlikely that anyone will use these, as they
are single threaded, but we may as well support all entrypoints that we
can.
Summary:
This patch implements fwrite, putc, putchar, and fputc on the GPU.
These are very straightforward; the main difference for the GPU
implementation is that we currently ignore `errno`. This patch also introduces a
minimal smoke test for `putc` that is an exact copy of the `puts` test
except we print the string char by char. This also modifies the `fopen`
test to use `fwrite` to mirror its use of `fread` so that it is tested
as well.
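The smoke test is essentially the following idea, a `puts`-style check
emitted one character at a time (a sketch, not the test verbatim):

```c++
#include <cstdio>

int main() {
  const char *str = "Hello, GPU!\n";
  for (const char *p = str; *p != '\0'; ++p)
    if (std::putc(*p, stdout) == EOF)
      return 1; // any failed write fails the test
  return 0;
}
```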
This patch adds the necessary support to provide `assert` functionality
through the GPU `libc` implementation. This implementation creates a
special-case GPU implementation rather than relying on the common
version. This is because the GPU has special considerations for
printing. The assertion message is printed out in chunks with
`write_to_stderr`; combined with the GPU execution model, this means
32+ threads all execute in lock-step, so we would get a horribly
fragmented message. Furthermore, potentially thousands of threads could
hit the assertion at once and try to print even if we had it all in one
`printf`.
This is solved by having a one-time lock that each thread group / wave /
warp will attempt to claim. We only let one thread group pass through
while the others simply stop executing. Then only the first thread in
that group does the printing before we finally abort execution.
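In rough outline (the helper names below are illustrative stand-ins, not
libc's internal API):

```c++
#include <cstdint>

// Illustrative stand-ins for the GPU utilities the implementation relies on.
extern "C" {
bool try_claim_lock_once(uint32_t *lock); // true for exactly one claimant
bool is_first_lane();                     // first thread in the wave/warp
void write_to_stderr(const char *msg);
[[noreturn]] void abort_execution();
}

static uint32_t assert_lock = 0;

[[noreturn]] void report_assertion_failure(const char *msg) {
  // Only one thread group / wave / warp wins the one-time lock; the losers
  // spin here forever so the message isn't printed thousands of times.
  while (!try_claim_lock_once(&assert_lock))
    ;
  // Within the winning group, only the first lane prints, avoiding the
  // fragmented output that lock-step execution of 32+ threads would cause.
  if (is_first_lane())
    write_to_stderr(msg);
  abort_execution();
}
```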
Reviewed By: sivachandra
Differential Revision: https://reviews.llvm.org/D159296
This function implements the `abort` function on the GPU. The
implementation here closely mirrors the `exit` call: we first
synchronize with the RPC server to make sure it's listening, and then we
exit on the GPU.
I was unsure if this should be a simple `__builtin_trap` on the GPU. I
elected to go with an RPC approach to make this a more "true" `abort`
call. That is, it should invoke some signal handlers and exit with the
proper code according to the implemented C library on the server.
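A sketch of the resulting control flow; `rpc_abort_and_sync` stands in
for the real send/receive sequence:

```c++
// Ask the server to run its own abort(), which raises SIGABRT and honors
// whatever handlers the host C library has installed; returns only once the
// server has acknowledged the request.
extern "C" void rpc_abort_and_sync(); // illustrative stand-in

[[noreturn]] void abort() {
  rpc_abort_and_sync(); // make sure the server has actually seen the abort
  __builtin_trap();     // then stop execution on the device
}
```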
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D159210
The GPU has the ability to sleep for very short periods of time. We can
map this onto the existing `nanosleep` utility. This patch implements
`nanosleep` in terms of those hardware instructions as best as
possible.
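A sketch of the kind of mapping involved; the hardware only offers
coarse, best-effort sleeps, so the real implementation must loop and
approximate:

```c++
#include <cstdint>

// Sleep for roughly the requested time using the target's primitive.
inline void hardware_sleep(uint64_t nanoseconds) {
  (void)nanoseconds; // only consumed on targets with a direct mapping
#if defined(__NVPTX__)
  // PTX has a direct nanosecond sleep on sm_70 and newer.
  asm volatile("nanosleep.u32 %0;" ::"r"(static_cast<uint32_t>(nanoseconds)));
#elif defined(__AMDGCN__)
  // s_sleep waits in units of roughly 64 clock cycles, so the caller loops
  // on this to cover longer intervals.
  __builtin_amdgcn_s_sleep(2);
#endif
}
```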
Depends on D159118
Reviewed By: JonChesterfield, sivachandra
Differential Revision: https://reviews.llvm.org/D159225
This patch implements the `clock()` function on the GPU. This function
is supposed to return a timestamp that can be converted into seconds
using the `CLOCKS_PER_SEC` macro. The GPU has a fixed frequency timer
that can be used for this purpose. However, there are some
considerations.
First is that AMDGPU does not have a statically known fixed frequency. I
know internally that the gfx10xx and gfx11xx series use a 100 MHz clock
which will probably remain for the future. Gfx9xx typically uses a 25
MHz clock except for the Vega 10 GPU. The only way to know for sure is
to look it up from the runtime. For this purpose, I elected to default
it to some known values and assign these to an externally visible symbol
that can be initialized if needed. If we do not have a good guess we
just return zero.
Second is that the `CLOCKS_PER_SEC` macro only gives about a
microsecond of resolution. POSIX demands that it be 1,000,000, so it's
best that we keep with this tradition, as almost all targets seem to
respect it. This is important because on the GPU we will almost
assuredly be copying the host's macro value (see the wrapper header), so
we should go with the POSIX value that's most likely to be set. (We
could probably add a warning if the included header doesn't match the
expected value.)
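Putting the two considerations together, the function is roughly the
following (symbol names are illustrative):

```c++
#include <stdint.h>
#include <time.h>

// Externally visible frequency symbol: defaulted to a per-target guess and
// overwritable by a runtime that knows the real value.
extern "C" {
uint64_t __gpu_clock_freq = 100000000; // e.g. the 100 MHz gfx10xx/gfx11xx rate
uint64_t gpu_read_clock();             // wraps the fixed-frequency timer
}

// Sketches the shape of the real clock() entrypoint.
clock_t gpu_clock() {
  if (__gpu_clock_freq == 0)
    return 0; // no good frequency guess for this target
  // Scale the tick rate down to CLOCKS_PER_SEC (1,000,000 per POSIX).
  return static_cast<clock_t>(gpu_read_clock() /
                              (__gpu_clock_freq / CLOCKS_PER_SEC));
}
```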
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D159118
This patch implements the `fopen`, `fclose`, and `fread` functions on
the GPU. These are pretty much re-implemented from what existed but
using the new interface. Having this subset allows us to test the
interface a bit more strenuously since we can write and read to a file.
Reviewed By: sivachandra
Differential Revision: https://reviews.llvm.org/D157622
The GPU has much tighter requirements for handling IO functions.
Previously we attempted to define the GPU as one of the platform files.
Using a common interface allowed us to easily define these functions
without much extra work. However, it became more clear that this was a
poor fit for the GPU. The file interface uses function pointers, which
prevented inlining and caused bad performance and resource usage on the
GPU. Further, using an actual `FILE` type, rather than referring to it
as a host stub, prevented us from using files coming from the host on
the GPU device.
After talking with @sivachandra, the approach now is to simply define
GPU specific versions of the functions we intend to support. Also, we
are ignoring `errno` for the time being as it is unlikely we will ever
care about supporting it fully.
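The upshot is roughly this shape, with `FILE *` reduced to an opaque
token naming a host stream (`rpc_fwrite` is an illustrative stand-in for
the real transaction):

```c++
#include <stddef.h>
#include <stdint.h>

// On the device a FILE is opaque: it's really the host's FILE pointer,
// carried around as a token and only ever interpreted by the server.
struct FILE;

// Illustrative stand-in for the RPC call; returns the host's fwrite result.
extern "C" uint64_t rpc_fwrite(const void *buf, uint64_t len, FILE *file);

size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream) {
  if (size == 0 || nmemb == 0)
    return 0;
  uint64_t written = rpc_fwrite(ptr, size * nmemb, stream);
  return static_cast<size_t>(written) / size; // complete objects written
}
```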
Reviewed By: sivachandra
Differential Revision: https://reviews.llvm.org/D157427
This patch does the noisy work of moving the test opcodes from the
exported interface to an interface that is only visible in `libc`. The
benefit of this is that we both test the exported RPC registration more
directly, and we do not need to give this interface to users.
I have decided that any opcode that is not a "core" libc feature will
have its MSB set. We can think of these as non-libc "extensions".
Reviewed By: JonChesterfield
Differential Revision: https://reviews.llvm.org/D154848
These functions have definitions differing between C and C++. GNU
respects the C++ definitions while the LLVM libc does not. This causes
many bugs, and the current hack creates other issues. Rather than hack
around this, I'd rather temporarily disable these functions than regress
the integration into other offloading languages. We lose test support for
them but we should be able to re-enable these once the `libc` headers
provide these correctly.
Reviewed By: JonChesterfield
Differential Revision: https://reviews.llvm.org/D154850
This patch adds the necessary support for the fopen and fclose functions
to work on the GPU via RPC. I added a new test that enables testing this
with the minimal features we have on the GPU. I will update it once we
have `fread` and `fwrite` to actually check the outputted strings. For
now I just relied on checking manually via the output temp file.
Reviewed By: JonChesterfield, sivachandra
Differential Revision: https://reviews.llvm.org/D154519
Another piece of low-hanging fruit we can put on the GPU: this ports the
tests over to the hermetic framework so we can run them there.
Reviewed By: sivachandra
Differential Revision: https://reviews.llvm.org/D154540
This patch simply enables the `div`, `ldiv`, and `lldiv` functions on
the GPU. This should be straightforward enough.
Reviewed By: jdoerfert
Differential Revision: https://reviews.llvm.org/D154143
The RPC calls all have delays associated with them. Currently, the
`exit` function does an async send and immediately exits the GPU. This
can have the effect that the RPC server never sees the exit call at all.
This patch changes `exit` to first sync with the server before
continuing to perform its exit. There is still a hazard here, where the
kernel can complete before the RPC call reads back its response, but
that is just a standard multi-threading hazard. This change ensures that
the server *will* always exit some time after the GPU exits.
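The new ordering is roughly the following (helper names are
illustrative):

```c++
// Illustrative stand-ins for the real RPC port operations.
extern "C" {
void rpc_send_exit(int status); // post the exit request to the server
void rpc_recv_ack();            // block until the server has seen it
[[noreturn]] void gpu_exit();   // terminate execution on the device
}

[[noreturn]] void exit(int status) {
  // Previously this was a lone async send, which the server could miss if
  // the kernel tore down first. Now we wait for the acknowledgement.
  rpc_send_exit(status);
  rpc_recv_ack();
  gpu_exit();
}
```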
Reviewed By: JonChesterfield
Differential Revision: https://reviews.llvm.org/D154112