
In TSan, every `k` bytes of application memory (where `k = 8`) map to a single shadow/meta cell. This design leads to two distinct outcomes when the end of a shadow range is computed with `MemToShadow(addr_end)`, depending on the alignment of `addr_end`:

- **Exclusive end:** if `addr_end` is aligned (`addr_end % k == 0`), `MemToShadow(addr_end)` points to the first shadow cell *past* the intended range. This address is an exclusive boundary marker, not a cell to be operated on.
- **Inclusive end:** if `addr_end` is not aligned (`addr_end % k != 0`), `MemToShadow(addr_end)` points to the last shadow cell that *is* part of the range (i.e., the same cell as `MemToShadow(addr_end - 1)`).

Different TSan functions expect one convention or the other, but the expectation is not always explicitly enforced, which can lead to subtle bugs or reliance on unstated invariants. The core of this patch is to ensure that functions requiring an **exclusive shadow end** behave correctly. It does so in two ways (sketches of the two cases, the asserted invariant, and the round-up follow the summary below):

1. **Enforcing existing invariants.** For functions like `MetaMap::MoveMemory` and `MapShadow`, the assumption is that the end address is always `k`-aligned. While this holds in the current codebase (due to external, implicit conditions), the invariant is not guaranteed by the functions' own context. We add explicit assertions to make the requirement clear and to catch any future changes that might violate it.
2. **Fixing latent bugs.** In other cases an unaligned end address is possible, representing a latent bug. This was the case in `UnmapShadow`: the `size` of a memory region being unmapped is not always a multiple of `k`, and when it is not, `UnmapShadow` failed to clear the final (tail) portion of the shadow memory. This patch fixes `UnmapShadow` by rounding `size` up to the next multiple of `k` before clearing the shadow. This is safe because the underlying OS `unmap` operation is page-granular and the page size is guaranteed to be a multiple of `k`. Notably, the fix makes `UnmapShadow` consistent with its inverse operation, `MemoryRangeImitateWriteOrResetRange`, which already performs the same round-up.

In summary, this PR:

- **Adds assertions** to `MetaMap::MoveMemory` and `MapShadow` to enforce their implicit requirement for `k`-aligned end addresses.
- **Fixes a latent bug** in `UnmapShadow` by rounding up the size so that the entire shadow range is cleared. Two new test cases cover this scenario.
- **Removes a redundant assertion** in `__tsan_java_move`.
- **Fixes an incorrect shadow-end calculation** introduced in commit 4052de6. The previous logic fixed an overestimation issue but did not properly account for `kShadowCell` alignment and could lead to underestimation.
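To make the two outcomes concrete, here is a minimal standalone model of the granule-to-cell arithmetic. `MemToShadowIndex`, `k`, and the addresses are illustrative stand-ins, not TSan's actual `MemToShadow` or layout:

```cpp
#include <cassert>
#include <cstdint>

// Minimal model: each k-byte granule of application memory owns one
// shadow cell index. Hypothetical names, for illustration only.
constexpr uintptr_t k = 8;

// Index of the shadow cell covering address `addr`.
constexpr uintptr_t MemToShadowIndex(uintptr_t addr) { return addr / k; }

int main() {
  const uintptr_t begin = 0x1000;

  // Aligned end: the mapped value is one past the range's last cell, a
  // true exclusive bound that is safe to use as a loop limit.
  const uintptr_t end_aligned = begin + 16;  // 16 % k == 0
  assert(MemToShadowIndex(end_aligned) ==
         MemToShadowIndex(end_aligned - 1) + 1);

  // Unaligned end: the mapped value names the range's own last cell, so
  // treating it as an exclusive bound skips that cell entirely.
  const uintptr_t end_unaligned = begin + 13;  // 13 % k != 0
  assert(MemToShadowIndex(end_unaligned) ==
         MemToShadowIndex(end_unaligned - 1));
  return 0;
}
```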
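The invariant asserted in step 1 amounts to the precondition below. This sketch uses a plain `assert` and a hypothetical helper name; the patch itself places the checks inside `MetaMap::MoveMemory` and `MapShadow` (presumably via the sanitizer `CHECK`/`DCHECK` macros):

```cpp
#include <cassert>
#include <cstdint>

constexpr uintptr_t kShadowCell = 8;  // TSan's shadow granularity (k)

// Hypothetical helper spelling out the asserted precondition: an exclusive
// shadow end is only meaningful when the end of the application range is
// cell-aligned; otherwise MemToShadow(end) names the range's own last cell.
void RequireAlignedEnd(uintptr_t addr, uintptr_t size) {
  assert((addr + size) % kShadowCell == 0 &&
         "end must be kShadowCell-aligned for an exclusive shadow end");
}
```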
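And a sketch of the `UnmapShadow` fix from step 2, assuming a round-up helper equivalent to sanitizer_common's `RoundUpTo`, reproduced here so the example is self-contained:

```cpp
#include <cassert>
#include <cstdint>

constexpr uintptr_t kShadowCell = 8;

// Round `size` up to the next multiple of `boundary` (a power of two);
// mirrors sanitizer_common's RoundUpTo.
constexpr uintptr_t RoundUpTo(uintptr_t size, uintptr_t boundary) {
  return (size + boundary - 1) & ~(boundary - 1);
}

int main() {
  // An unmap of (page - 1) bytes, as in the test below, has an unaligned
  // tail; the exclusive end MemToShadow(addr + size) would skip its cell.
  const uintptr_t kPageSize = 4096;
  const uintptr_t size = kPageSize - 1;

  // Rounding up covers the tail cell. This is safe because munmap is
  // page-granular and the page size is a multiple of kShadowCell, so the
  // rounded range never extends past the memory actually being unmapped.
  assert(RoundUpTo(size, kShadowCell) == kPageSize);
  return 0;
}
```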
One of the new test cases (C++):

```cpp
// RUN: %clangxx_tsan -O1 %s -o %t && %run %t 2>&1 | FileCheck %s
#include "java.h"
#include <errno.h>
#include <sys/mman.h>

int main() {
  // Test a non-regular kHeapSize
  // Previously __tsan_java_init failed because it encountered non-zero meta
  // shadow for the destination.
  size_t const kPageSize = sysconf(_SC_PAGESIZE);
  int const kSize = kPageSize - 1;
  jptr jheap2 = (jptr)mmap(0, kSize, PROT_READ | PROT_WRITE,
                           MAP_ANON | MAP_PRIVATE, -1, 0);
  if (jheap2 == (jptr)MAP_FAILED)
    return printf("mmap failed with %d\n", errno);
  __atomic_store_n((int *)(jheap2 + kSize - 3), 1, __ATOMIC_RELEASE);
  // Due to the previous incorrect meta-end calculation, the following munmap
  // did not clear the tail meta shadow.
  munmap((void *)jheap2, kSize);
  int const kHeapSize2 = kSize + 1;
  jheap2 = (jptr)mmap((void *)jheap2, kHeapSize2, PROT_READ | PROT_WRITE,
                      MAP_ANON | MAP_PRIVATE, -1, 0);
  if (jheap2 == (jptr)MAP_FAILED)
    return printf("second mmap failed with %d\n", errno);
  __tsan_java_init(jheap2, kHeapSize2);
  __tsan_java_move(jheap2, jheap2 + kHeapSize2 - 8, 8);
  fprintf(stderr, "DONE\n");
  return __tsan_java_fini();
}

// CHECK-NOT: WARNING: ThreadSanitizer: data race
// CHECK: DONE
```