4 Commits

Author SHA1 Message Date
Kunqiu Chen
0aafeb8ba1
Reland [TSan] Clarify and enforce shadow end alignment (#146676)
#144648 was reverted because the newly added sanitizer test
`munmap_clear_shadow.c` failed in the iOS CI. The issue is fixed by
disabling the test on the platforms where it is incompatible.

Specifically, the test is disabled on FreeBSD, Apple platforms, NetBSD,
Solaris, and Haiku, where `ReleaseMemoryPagesToOS` calls
`madvise(beg, end, MADV_FREE)`, which only marks the relevant pages as 'FREE'
and does not release them immediately.
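
As a rough illustration of that behavior (a standalone sketch, not taken from the test or the runtime), reading a page after `madvise(MADV_FREE)` may still return its old contents until the kernel reclaims it, which is why a check expecting the released range to read back as zero is unreliable on these platforms:

```cpp
#include <sys/mman.h>
#include <cassert>
#include <cstdio>
#include <cstring>

int main() {
  const size_t kLen = 1 << 20;
  char *p = static_cast<char *>(mmap(nullptr, kLen, PROT_READ | PROT_WRITE,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
  assert(p != MAP_FAILED);
  memset(p, 0xAB, kLen);
#ifdef MADV_FREE
  madvise(p, kLen, MADV_FREE);  // lazy release: contents may linger
#endif
  // On MADV_FREE platforms this may still print 0xab rather than 0x00.
  printf("first byte after release: 0x%02x\n",
         static_cast<unsigned char>(p[0]));
  munmap(p, kLen);
  return 0;
}
```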
2025-07-02 20:28:30 +08:00
Kunqiu Chen
9eac5f72f6
Revert "[TSan] Clarify and enforce shadow end alignment" (#146674)
Reverts llvm/llvm-project#144648 due to a failure of the newly added test
case `munmap_clear_shadow.c` in the iOS CI.
2025-07-02 20:11:11 +08:00
Kunqiu Chen
bc90166a50
[TSan] Clarify and enforce shadow end alignment (#144648)
In TSan, every `k` bytes of application memory (where `k = 8`) maps to a
single shadow/meta cell. This design leads to two distinct outcomes when
calculating the end of a shadow range using `MemToShadow(addr_end)`,
depending on the alignment of `addr_end`:

- **Exclusive End:** If `addr_end` is aligned (`addr_end % k == 0`),
`MemToShadow(addr_end)` points to the first shadow cell *past* the
intended range. This address is an exclusive boundary marker, not a cell
to be operated on.
- **Inclusive End:** If `addr_end` is not aligned (`addr_end % k != 0`),
`MemToShadow(addr_end)` points to the last shadow cell that *is* part of
the range (i.e., the same cell as `MemToShadow(addr_end - 1)`).

Different TSan functions have different expectations for whether the
shadow end should be inclusive or exclusive. However, these expectations
are not always explicitly enforced, which can lead to subtle bugs or
reliance on unstated invariants.
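
As a minimal sketch of the arithmetic (the real `MemToShadow` applies a platform-specific address transformation; only the `kShadowCell` granularity is modeled here):

```cpp
#include <cstdint>
#include <cstdio>

// Simplified model: one shadow cell index per kShadowCell (8) application
// bytes. This models only the granularity, not TSan's actual mapping.
constexpr uint64_t kShadowCell = 8;
uint64_t CellIndex(uint64_t addr) { return addr / kShadowCell; }

int main() {
  // Aligned end (0x1010 % 8 == 0): CellIndex(end) is one past the last cell
  // of the range, i.e. an exclusive bound.
  printf("aligned end:   %llu vs last in-range cell %llu\n",
         (unsigned long long)CellIndex(0x1010),
         (unsigned long long)CellIndex(0x1010 - 1));  // 514 vs 513

  // Unaligned end (0x1013 % 8 != 0): CellIndex(end) equals CellIndex(end - 1),
  // i.e. it still lies inside the range (an inclusive bound).
  printf("unaligned end: %llu vs last in-range cell %llu\n",
         (unsigned long long)CellIndex(0x1013),
         (unsigned long long)CellIndex(0x1013 - 1));  // 514 vs 514
  return 0;
}
```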


The core of this patch is to ensure that functions requiring an
**exclusive shadow end** (and only those) behave correctly.

1.  Enforcing Existing Invariants:
For functions like `MetaMap::MoveMemory` and `MapShadow`, the assumption
is that the end address is always `k`-aligned. While this holds in the
current codebase (callers happen to satisfy it implicitly), the invariant
is not guaranteed by the functions themselves. We add explicit assertions
to make this requirement clear and to catch any future change that might
violate it.
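
A minimal sketch of the kind of check this adds; the function name and assertion macro below are illustrative assumptions, not the exact compiler-rt code:

```cpp
#include <cassert>
#include <cstdint>

using uptr = uintptr_t;
constexpr uptr kShadowCell = 8;  // assumed value of k

// Illustrative only: the end handed to functions such as MapShadow or
// MetaMap::MoveMemory must be kShadowCell-aligned, so that MemToShadow(end)
// is a valid *exclusive* bound on the shadow range.
void CheckExclusiveShadowEnd(uptr begin, uptr end) {
  assert(begin % kShadowCell == 0 && "range begin must be cell-aligned");
  assert(end % kShadowCell == 0 && "range end must be cell-aligned");
  (void)begin;
  (void)end;
}
```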

2.  Fixing Latent Bugs:
In other cases, unaligned end addresses are possible, representing a
latent bug. This was the case in `UnmapShadow`. The `size` of a memory
region being unmapped is not always a multiple of `k`. When this
happens, `UnmapShadow` would fail to clear the final (tail) portion of
the shadow memory.

This patch fixes `UnmapShadow` by rounding up the `size` to the next
multiple of `k` before clearing the shadow memory. This is safe because
the underlying OS `unmap` operation is page-granular, and the page size
is guaranteed to be a multiple of `k`.
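
The rounding itself is the usual power-of-two round-up; a sketch of the arithmetic follows (the runtime uses its own helpers, so the name below is illustrative):

```cpp
#include <cstdint>

using uptr = uintptr_t;
constexpr uptr kShadowCell = 8;

// Round size up to the next multiple of kShadowCell (a power of two). Safe in
// UnmapShadow's context because the OS unmap is page-granular and the page
// size is a multiple of kShadowCell, so the rounded range never exceeds the
// pages actually being unmapped.
constexpr uptr RoundUpToCell(uptr size) {
  return (size + kShadowCell - 1) & ~(kShadowCell - 1);
}

// e.g. an unmap of 4097 bytes now clears shadow for 4104 bytes' worth of
// cells, covering the tail cell that was previously left stale.
static_assert(RoundUpToCell(4097) == 4104, "tail cell is included");
```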

Notably, this fix makes `UnmapShadow` consistent with its inverse
operation, `MemoryRangeImitateWriteOrResetRange`, which already performs
a similar size round-up.

In summary, this PR:

- **Adds assertions** to `MetaMap::MoveMemory` and `MapShadow` to
enforce their implicit requirement for k-aligned end addresses.
- **Fixes a latent bug** in `UnmapShadow` by rounding up the size to
ensure the entire shadow range is cleared. Two new test cases have been
added to cover this scenario.
  - Removes a redundant assertion in `__tsan_java_move`.
- **Fixes an incorrect shadow end calculation** introduced in commit
4052de6. The previous logic, while fixing an overestimation issue, did
not properly account for `kShadowCell` alignment and could lead to
underestimation.
2025-06-27 14:43:34 +08:00
Kunqiu Chen
956bab0381
[TSan] Add 2 test cases related to incomplete shadow cleanup in unmap (#145472)
Originally part of PR #144648; following the reviewer's advice, the test
cases have been split out into this separate PR.

`unmap` works at page granularity but accepts an arbitrary non-zero size
as an argument. In the existing TSan implementation this can leave part
of the shadow uncleared when `size % kShadowCell != 0`.

This change introduces two test cases that check the shadow cleanup
performed on `unmap`:

- java_heap_init2.cpp: modeled on java_heap_init.cpp, checks for
incomplete cleanup of the meta shadow
- munmap_clear_shadow.c: checks for incomplete cleanup of the shadow
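
A sketch of the scenario both tests exercise (illustrative only, not the actual test sources): the unmap size is deliberately not a multiple of `kShadowCell`, yet the kernel still unmaps whole pages, so TSan must clear shadow and meta for the full page-rounded range.

```cpp
#include <sys/mman.h>
#include <unistd.h>
#include <cassert>
#include <cstring>

int main() {
  const size_t kPage = static_cast<size_t>(sysconf(_SC_PAGESIZE));
  char *p = static_cast<char *>(mmap(nullptr, kPage, PROT_READ | PROT_WRITE,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
  assert(p != MAP_FAILED);
  memset(p, 1, kPage);   // touch the region so shadow/meta are populated
  munmap(p, kPage - 3);  // size % kShadowCell != 0, whole page still unmapped
  // The real tests then check (via TSan internals) that no stale shadow/meta
  // remains for the tail cells of the unmapped page.
  return 0;
}
```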
2025-06-25 15:22:54 +08:00