glibc 2.42 made its termios ioctl constants strictly internal.
Therefore, we remove all usage of those removed constants.
This should only apply to Linux.
Fixes #149103
Reference:
3d3572f590
@fweimer-rh @tstellar
(cherry picked from commit 0a17483c481d82eace3b36aee9cacb619eb027af)
When `ptrauth_calls` is present but `ptrauth_init_fini` is not, compiler
emits raw unsigned pointers in `.init_array`/`.fini_array` sections.
Previously, the `__do_init`/`__do_fini` pointers, which are explicitly
added to the sections, were implicitly signed (due to the presence of
`ptrauth_calls`), while all the other pointers in the sections were
emitted implicitly by the compiler and thus unsigned. As a result, the
sections contained a mix of unsigned function pointers and function
pointers signed with the default signing schema.
This patch introduces use of inline assembly for this particular case,
so we can manually specify that we do not want to sign the pointers.
Note that we cannot use `__builtin_ptrauth_strip` for this purpose since
its result is not a constant expression.
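A minimal sketch of the idea, assuming an AArch64 ELF target (the exact
upstream code may differ): writing the entry via top-level inline
assembly stores the raw address of `__do_init`, bypassing the implicit
signing that a C-level `&__do_init` would receive under `ptrauth_calls`.

```cpp
// Emit an unsigned function pointer into .init_array; the assembler
// stores the plain address, with no ptrauth signature applied.
__asm__(".pushsection .init_array,\"aw\",%init_array\n\t"
        ".xword __do_init\n\t"
        ".popsection");
```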
(cherry picked from commit 19ba224fb8a925d095d84836bc9896bf564dfd99)
The size of `struct stat` on MIPS64 differs between musl and glibc.
See https://godbolt.org/z/qf9bcq8Y8 for a demonstration. Prior to this
change, compilation for MIPS64 musl would fail.
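A minimal repro in the spirit of the godbolt link: print the size of
`struct stat`, then build for MIPS64 once against glibc and once against
musl to observe the mismatch.

```cpp
#include <cstdio>
#include <sys/stat.h>

int main() {
  // Differs between glibc and musl on MIPS64.
  std::printf("sizeof(struct stat) = %zu\n", sizeof(struct stat));
  return 0;
}
```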
Signed-off-by: Jens Reidel <adrian@travitia.xyz>
(cherry picked from commit a5d6fa68e399dee9eb56f2671670085b26c06b4a)
This reverts commit 0c0aa56cdcf1fe3970a5f3875db412530512fc07.
This time with the following fixes for buildbot failures:
- Add underscore prefixes to symbol names on Apple platforms.
- Modify the test so that it skips the crash tests on platforms where
they are not expected to pass:
- Platforms that implement FEAT_PAuth but not FEAT_FPAC (e.g.
Apple M1, Cortex-A78C)
- Platforms where DA key is disabled (e.g. older Linux kernels,
Linux kernels with PAC disabled, likely Windows)
Original commit message follows:
The emulated PAC runtime functions emulate the ARMv8.3a pointer
authentication instructions and are intended for use in heterogeneous
testing environments. For more information, see the associated RFC:
https://discourse.llvm.org/t/rfc-emulated-pac/85557
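For intuition only, a toy sketch of what "emulating" sign/auth can look
like; this is not the runtime's actual algorithm (the RFC describes the
real design).

```cpp
#include <cstdint>

// Fold a keyed hash of the pointer and discriminator into the unused
// upper bits ("sign"); recompute and compare to strip it ("auth").
static uint64_t EmuPacHash(uint64_t Ptr, uint64_t Disc, uint64_t Key) {
  uint64_t H = Ptr ^ Disc ^ Key;
  H *= 0x9E3779B97F4A7C15ull;  // cheap mixer standing in for QARMA
  return (H >> 48) & 0xFFFF;   // 16-bit PAC for bits 48..63
}

static uint64_t EmuSign(uint64_t Ptr, uint64_t Disc, uint64_t Key) {
  uint64_t Raw = Ptr & 0x0000FFFFFFFFFFFFull;
  return Raw | (EmuPacHash(Raw, Disc, Key) << 48);
}

static uint64_t EmuAuth(uint64_t Signed, uint64_t Disc, uint64_t Key) {
  uint64_t Raw = Signed & 0x0000FFFFFFFFFFFFull;
  if ((Signed >> 48) == EmuPacHash(Raw, Disc, Key))
    return Raw;               // success: strip the PAC bits
  // Mirror FEAT_PAuth-without-FEAT_FPAC behavior: poison the pointer so
  // a later dereference faults, rather than faulting in auth itself.
  return Raw | (1ull << 63);
}
```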
Reviewers: mstorsjo, pawosm-arm, atrosinenko
Reviewed By: atrosinenko
Pull Request: https://github.com/llvm/llvm-project/pull/148094
The previous test simply tried to double free the pointer inside the
EXPECT_DEATH macro. Unfortunately, the gtest infrastructure can allocate
a pointer that happens to be the previously freed pointer, so the double
free does not fail, since the spawned process does not attempt to free
all of the pointers allocated in the original test.
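A hedged sketch of a more robust pattern (the real test differs in its
details): perform both the allocation and the double free inside
EXPECT_DEATH, so allocations made by the gtest machinery in the parent
process cannot recycle the pointer before the second free runs in the
child.

```cpp
#include <cstdlib>
#include "gtest/gtest.h"

TEST(ScudoWrappersDeathTest, DoubleFree) {
  EXPECT_DEATH(
      {
        void *P = malloc(16);
        free(P);
        free(P);  // the child process should die here
      },
      "");
}
```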
NOTE: Scudo should be checked to make sure that the TSD is not always
returning pointers in the same order they are freed, although this
appears to be a problem only for programs that perform a small number of
allocations.
This reverts commits 5b1db59fb87b4146f827d17396f54ef30ae0dc40 and
f1c4df5b7bb79efb3e9be7fa5f8176506499d0a6, as well as the follow-up
"builtins: Speculative MSVC fix."
Needs fixes for failing tests, which will take time to implement.
Despite being defined in the system headers, these commands are not in
fact part of the FreeBSD system call interface. They exist solely for
the Linuxulator, i.e. running Linux binaries on FreeBSD, and any attempt
to use them from a FreeBSD binary will return EINVAL. The fact that we
needed to define _KERNEL (which, as the name implies, means we are
compiling the kernel) just to get the definition of shminfo should have
been a strong indicator that IPC_INFO, at least, was not a userspace
interface.
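A hedged illustration of the behavior described above: in a native
FreeBSD binary, the Linuxulator-only command is rejected with EINVAL.

```cpp
#include <cerrno>
#include <cstdio>
#include <sys/ipc.h>
#include <sys/shm.h>

int main() {
  // IPC_INFO is visible in the headers, but the native syscall path
  // rejects it for non-Linuxulator processes.
  if (shmctl(0, IPC_INFO, nullptr) == -1 && errno == EINVAL)
    std::printf("IPC_INFO is not a native FreeBSD userspace interface\n");
  return 0;
}
```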
To prepare for other platforms, such as 64-bit AIX, that have a non-zero
mmap beginning address.
---------
Co-authored-by: David Justo <david.justo.1996@gmail.com>
Adds support to RTSan for `free_sized` and `free_aligned_sized` from
C23.
Other sanitizers will be handled with their own separate PRs.
For https://github.com/llvm/llvm-project/issues/144435
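A hedged sketch of the shape such interceptors usually take (the macros
come from the sanitizer interception layer; the notification helper name
is illustrative rather than verbatim upstream code):

```cpp
#include "interception/interception.h"  // provides INTERCEPTOR and REAL

// Forward the C23 sized deallocations to the real functions after
// notifying the runtime that a deallocation happened under RTSan.
INTERCEPTOR(void, free_sized, void *ptr, SIZE_T size) {
  __rtsan_notify_intercepted_call("free_sized");  // illustrative helper
  return REAL(free_sized)(ptr, size);
}

INTERCEPTOR(void, free_aligned_sized, void *ptr, SIZE_T alignment,
            SIZE_T size) {
  __rtsan_notify_intercepted_call("free_aligned_sized");
  return REAL(free_aligned_sized)(ptr, alignment, size);
}
```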
Signed-off-by: Justin King <jcking@google.com>
This allows us to change the number of blocks stored according to the
size of BatchClass.
Also rename `TransferBatch` to `Batch`, given that it is never the unit
of transferring blocks.
* Fix splitting of arguments such as `LSAN_OPTIONS=suppressions=lsan.supp`
* Prevent environment variables set in parent process being overwritten
* Replace hard-coded `env` with `%env` to allow overriding depending on target
* Replace deprecated `pipes` usage with `shlex`
* Run formatter over `iossim_env.py`
**Related to:** https://github.com/llvm/llvm-project/issues/117925
**Follow up to:** https://github.com/llvm/llvm-project/pull/117929
**Context:**
As noted in the linked issue, some ASan configuration flags are not
honored on Windows when set through the `__asan_default_options` user
function. The reason for this is that `__asan_default_options` is not
available by the time `AsanInitInternal` executes, which is responsible
for applying the ASan flags.
To fix this properly, we'll probably need a deep re-design of ASan
initialization so that it is consistent across OSes.
In the meantime, this PR offers a practical workaround.
**This PR:** refactors part of `AsanInitInternal` so that **idempotent**
flag-applying steps are extracted into a new function `ApplyOptions`.
This function is **also** invoked in the "weak function callback" on
Windows (which gets called when `__asan_default_options` is available)
so that, if any flags were set through the user-function, they are
safely applied _then_.
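A hedged sketch of the refactoring's shape (identifiers beyond
`ApplyOptions` and `ApplyAllocatorOptions` are simplified; this is not
the verbatim upstream code):

```cpp
// Idempotent flag application, extracted from AsanInitInternal so the
// Windows weak-callback path can re-run it once __asan_default_options
// becomes available. Each step only re-reads flag values, so running
// it twice is safe.
static void ApplyOptions() {
  SetAllocatorMayReturnNull(common_flags()->allocator_may_return_null);
  ApplyAllocatorOptions();  // allocator flags, without a full re-init
}
```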
Today, `ApplyOptions` covers only a subset of flags. My hope is that,
over time and through incremental refactorings of `AsanInitInternal`,
`ApplyOptions` will grow until **all** flags are eventually honored.
Other minor changes:
* The introduction of an `ApplyAllocatorOptions` helper method, needed to
implement `ApplyOptions` for allocator options without re-initializing
the entire allocator. Reinitializing the entire allocator is expensive,
as it may do a whole pass over all the marked memory. To my knowledge,
this isn't needed for the options captured in `ApplyAllocatorOptions`.
* Rename `ProcessFlags` to `ValidateFlags`, which seems like a more
accurate name for what that function does, and prevents confusion when
compared to the new `ApplyOptions` function.
#144648 was reverted because it failed the new sanitizer test
`munmap_clear_shadow.c` in iOS CI.
The issue is fixed by disabling the test on the platforms where it is
incompatible.
Specifically, we disable the test on FreeBSD, Apple platforms, NetBSD,
Solaris, and Haiku, where `ReleaseMemoryPagesToOS` calls `madvise` with
`MADV_FREE`, which merely tags the relevant pages as free and does not
release them immediately.
Adds support to TSan for `free_sized` and `free_aligned_sized` from C23.
Other sanitizers will be handled with their own separate PRs.
For https://github.com/llvm/llvm-project/issues/144435
Signed-off-by: Justin King <jcking@google.com>
Reapply "[NFC][DebugInfo][DWARF] Create new low-level dwarf library (#…
(#145959)
This reapplies cbf781f0bdf2f680abbe784faedeefd6f84c246e, with fixes for
the shared-library build and the unconventional sanitizer-runtime build.
Original Description:
This is the culmination of a series of changes described in [1].
Although somewhat large by line count, it is almost entirely mechanical,
creating a new library in DebugInfo/DWARF/LowLevel. This new library has
very minimal dependencies, allowing it to be used from more places than
the normal DebugInfo/DWARF library--in particular from MC.
1.
https://discourse.llvm.org/t/rfc-debuginfo-dwarf-refactor-into-to-lower-and-higher-level-libraries/86665/2
Most pacbti instructions are NOPs when the target does not have pacbti,
and are thus safe to execute, but bxaut is an undefined instruction. When we
don't have pacbti (e.g. if we're compiling compiler-rt with
-mbranch-protection=standard in order to be forward-compatible with
pacbti while still working on targets without it) then we need to use
separate aut and bx instructions.
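A hedged sketch in GNU assembly syntax, as it might appear in a function
epilogue (operand forms follow the Armv8.1-M PACBTI encoding; the real
builtins code differs):

```cpp
// Without pacbti at run time, "aut" executes as a NOP, so the aut+bx
// pair below is forward-compatible; a single "bxaut" would instead
// trap as an undefined instruction on such cores.
#if defined(HAS_PACBTI)  // illustrative configuration guard
  __asm__("bxaut r12, lr, sp");
#else
  __asm__("aut r12, lr, sp\n\t"
          "bx  lr");
#endif
```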
In TSan, every `k` bytes of application memory (where `k = 8`) maps to a
single shadow/meta cell. This design leads to two distinct outcomes when
calculating the end of a shadow range using `MemToShadow(addr_end)`,
depending on the alignment of `addr_end`:
- **Exclusive End:** If `addr_end` is aligned (`addr_end % k == 0`),
`MemToShadow(addr_end)` points to the first shadow cell *past* the
intended range. This address is an exclusive boundary marker, not a cell
to be operated on.
- **Inclusive End:** If `addr_end` is not aligned (`addr_end % k != 0`),
`MemToShadow(addr_end)` points to the last shadow cell that *is* part of
the range (i.e., the same cell as `MemToShadow(addr_end - 1)`).
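A small worked illustration of these two cases, with `k = 8` and the
mapping simplified to a plain division so the boundary effect is
visible:

```cpp
#include <cstdint>
#include <cstdio>

constexpr uint64_t k = 8;  // stands in for kShadowCell
uint64_t MemToShadowCell(uint64_t a) { return a / k; }  // simplified

int main() {
  // The range [0x1000, 0x1010) occupies shadow cells 512 and 513.
  std::printf("aligned end   0x1010 -> cell %llu (exclusive bound)\n",
              (unsigned long long)MemToShadowCell(0x1010));  // 514
  std::printf("unaligned end 0x1013 -> cell %llu (last cell in range)\n",
              (unsigned long long)MemToShadowCell(0x1013));  // also 514
  return 0;
}
```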
Different TSan functions have different expectations for whether the
shadow end should be inclusive or exclusive. However, these expectations
are not always explicitly enforced, which can lead to subtle bugs or
reliance on unstated invariants.
The core of this patch is to ensure that functions that require an
**exclusive shadow end** behave correctly.
1. Enforcing Existing Invariants:
For functions like `MetaMap::MoveMemory` and `MapShadow`, the assumption
is that the end address is always `k`-aligned. While this holds true in
the current codebase (e.g., due to some external implicit conditions),
this invariant is not guaranteed by the function's internal context. We
add explicit assertions to make this requirement clear and to catch any
future changes that might violate this assumption.
2. Fixing Latent Bugs:
In other cases, unaligned end addresses are possible, representing a
latent bug. This was the case in `UnmapShadow`. The `size` of a memory
region being unmapped is not always a multiple of `k`. When this
happens, `UnmapShadow` would fail to clear the final (tail) portion of
the shadow memory.
This patch fixes `UnmapShadow` by rounding up the `size` to the next
multiple of `k` before clearing the shadow memory. This is safe because
the underlying OS `unmap` operation is page-granular, and the page size
is guaranteed to be a multiple of `k`.
Notably, this fix makes `UnmapShadow` consistent with its inverse
operation, `MemoryRangeImitateWriteOrResetRange`, which already performs
a similar size round-up.
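A hedged sketch of the `UnmapShadow` change (simplified from the real
function; `RoundUp` and `DontNeedShadowFor` mirror the TSan helpers):

```cpp
void UnmapShadow(ThreadState *thr, uptr addr, uptr size) {
  if (size == 0)
    return;
  // The added rounding step: safe because the OS unmap is page-granular
  // and the page size is a multiple of kShadowCell, so no live shadow
  // beyond the unmapped region can be cleared by accident.
  size = RoundUp(size, kShadowCell);
  DontNeedShadowFor(addr, size);
  // ... meta map reset etc. elided ...
}
```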
In summary, this PR:
- **Adds assertions** to `MetaMap::MoveMemory` and `MapShadow` to
enforce their implicit requirement for k-aligned end addresses.
- **Fixes a latent bug** in `UnmapShadow` by rounding up the size to
ensure the entire shadow range is cleared. Two new test cases have been
added to cover this scenario.
- Removes a redundant assertion in `__tsan_java_move`.
- Fixes an incorrect shadow end calculation introduced in commit
4052de6. The previous logic, while fixing an overestimation issue, did
not properly account for `kShadowCell` alignment and could lead to
underestimation.
These tests were originally part of PR #144648; following the
reviewer's advice, they have been split into this separate PR.
`unmap` works at page granularity, but supports an arbitrary non-zero
size as an argument, which results in possible shadow undercleaning in
the existing TSan implementation when `size % kShadowCell != 0`.
This change introduces two test cases to verify the shadow cleaning
effect in `unmap`.
- java_heap_init2.cpp: imitates java_heap_init.cpp and verifies the
incomplete cleaning of meta memory
- munmap_clear_shadow.c: verifies the incomplete cleaning of shadow
memory
This reverts commit 5eb5f0d8760c6b757c1da22682b5cf722efee489, i.e.,
relands 1b71ea411a9d36705663b1724ececbdfec7cc98c.
The test case was failing on aarch64 because the long double type is
implemented differently on x86 and aarch64. This reland restricts the
test to x86.
----
Original CL description:
A commonly used aid for debugging MSan reports is
`__msan_print_shadow()`, which requires manual app code annotations
(typically of the variable in the UUM report or nearby). This is in
contrast to ASan, which automatically prints out the shadow map when a
check fails.
This patch changes MSan to print the shadow that failed an outlined
check (checks are outlined per function after the
`-msan-instrumentation-with-call-threshold` is exceeded) if verbosity >=
1. Note that we do not print out the shadow map of "neighboring"
variables because this is technically infeasible; see "Caveat" below.
This patch can be easier to use than `__msan_print_shadow()` because
this does not require manual app code annotations. Additionally, due to
optimizations, `__msan_print_shadow()` calls can sometimes spuriously
affect whether a variable is initialized.
As a side effect, this patch also enables outlined checks for
arbitrary-sized shadows (vs. the current hardcoded handlers for
{1,2,4,8}-byte shadows).
Caveat: the shadow does not necessarily correspond to an individual user
variable, because MSan instrumentation may combine and/or truncate
multiple shadows prior to emitting a check that the mangled shadow is
zero (e.g., `convertShadowToScalar()`,
`handleSSEVectorConvertIntrinsic()`, `materializeInstructionChecks()`).
OTOH it is arguably a strength that this feature emits the shadow that
directly matters for the MSan check, but which cannot be obtained using
the MSan API.
Correct the interval description of ReleaseMemoryPagesToOS from
[beg, end] to [beg, end), matching what it actually does.
The previous incorrect description of [beg, end] might lead to an
incorrect invocation such as `ReleaseMemoryPagesToOS(0, kPageSize - 1);`.
Adds support to MSan for `free_sized` and `free_aligned_sized` from C23.
Other sanitizers will be handled with their own separate PRs.
For https://github.com/llvm/llvm-project/issues/144435
Signed-off-by: Justin King <jcking@google.com>
In the current gcov implementation, all lines within a basic block are
attributed to the source file of the block's containing function. This
is inaccurate when a block contains lines from other files (e.g., via
#include "foo.inc").
Commit
[406e81b](406e81b79d)
attempted to address this by filtering lines based on debug info types,
but this approach has two limitations:
* **Over-filtering**: Some valid lines belonging to the function are
incorrectly excluded.
* **Under-counting**: Lines not belonging to the function are filtered
out and omitted from coverage statistics.
**GCC Reference Behavior**
GCC's gcov implementation handles this case correctly. This change aligns
the LLVM behavior with GCC.
**Proposed Solution**
1. **GCNO Generation**:
* **Current**: Each block stores a single GCOVLines record (filename +
lines).
* **New**: Dynamically create new GCOVLines records whenever consecutive
lines in a block originate from different source files. Group subsequent
lines from the same file under one record.
2. **GCNO Parsing**:
* **Current**: Lines are directly attributed to the function's source
file.
* **New**: Introduce a GCOVLocation type to track filename/line mappings
within blocks. Statistics will reflect the actual source file for each
line.
This commit changes the interval shadow/meta address check from
inclusive-inclusive ( $[\mathrm{start}, \mathrm{end}]$ ) to
inclusive-exclusive ( $[\mathrm{start}, \mathrm{end})$ ), to resolve the
ambiguity of the end point address. This also aligns the logic with the
check for `isAppMem` (i.e., inclusive-exclusive), ensuring consistent
behavior across all memory classifications.
1. The `isShadowMem` and `isMetaMem` checks previously used an
inclusive-inclusive interval, i.e., $[\mathrm{start}, \mathrm{end}]$,
which could lead to a boundary address being incorrectly classified as
both Shadow and Meta memory, e.g., 0x3000_0000_0000 in
`Mapping48AddressSpace`.
- What's more, even when Shadow doesn't border Meta, `ShadowMem::end`
cannot be considered a legal shadow address, as TSan protects the gap,
i.e., `ProtectRange(ShadowEnd(), MetaShadowBeg());`
2. `ShadowMem`/`MetaMem` addresses are derived from `AppMem` using an
affine-like transformation (`* factor + bias`). This transformation
includes two extra modifications: high- and low-order bits are masked
out, and for Shadow Memory, an optional XOR operation may be applied to
prevent conflicts with certain AppMem regions.
- Given that all AppMem regions are defined as inclusive-exclusive
intervals, $[\mathrm{start}, \mathrm{end})$, the resulting Shadow/Meta
regions should logically also be inclusive-exclusive.
Note: This change is purely for improving code consistency and should
have no functional impact. In practice, the exact endpoint addresses of
the Shadow/Meta regions are generally not reached.
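A minimal sketch of the resulting checks (signatures simplified from the
TSan mapping helpers):

```cpp
bool IsShadowMem(uptr mem) {
  return mem >= ShadowBeg() && mem < ShadowEnd();  // was: mem <= ShadowEnd()
}

bool IsMetaMem(uptr mem) {
  return mem >= MetaShadowBeg() && mem < MetaShadowEnd();
}
```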