562 Commits

Author SHA1 Message Date
Rahul Joshi
74b7abf154
[IRBuilder] Add new overload for CreateIntrinsic (#131942)
Add a new `CreateIntrinsic` overload with no `Types`, useful for
creating calls to non-overloaded intrinsics that don't need additional
mangling.
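A rough sketch of how the new overload might be used (llvm.assume chosen here as an arbitrary non-overloaded intrinsic; treat the exact call shape as an assumption, not the patch's code):

```cpp
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Intrinsics.h"
using namespace llvm;

// Sketch: llvm.assume is non-overloaded, so no mangling types are needed.
static void emitAssume(IRBuilder<> &IRB, Value *Cond) {
  // Before: IRB.CreateIntrinsic(Intrinsic::assume, /*Types=*/{}, {Cond});
  IRB.CreateIntrinsic(Intrinsic::assume, {Cond}); // new Types-free overload
}
```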
2025-03-31 08:10:34 -07:00
Thurston Dang
8726e97345
[msan] Handle SSE2 cvt(t?)ps2dq/cvt(t?)pd2dq and cvtpd2ps using handleSSEVectorConvertIntrinsicByProp (#132815)
cvt(t?)ps2dq/cvt(t?)pd2dq and cvtpd2ps are currently handled strictly.
This patch handles them using handleSSEVectorConvertIntrinsicByProp
(from https://github.com/llvm/llvm-project/pull/130705), generalized to
handle SSE intrinsics that do not have a rounding mode parameter.
2025-03-28 17:59:59 -07:00
Thurston Dang
5946696d67
[msan] Handle NEON vector load (#130457)
This adds an explicit handler for:
- llvm.aarch64.neon.ld1x2, llvm.aarch64.neon.ld1x3,
llvm.aarch64.neon.ld1x4
- llvm.aarch64.neon.ld2, llvm.aarch64.neon.ld3, llvm.aarch64.neon.ld4
- llvm.aarch64.neon.ld2lane, llvm.aarch64.neon.ld3lane,
llvm.aarch64.neon.ld4lane
- llvm.aarch64.neon.ld2r, llvm.aarch64.neon.ld3r, llvm.aarch64.neon.ld4r
instead of relying on the default strict handler.

Updates the tests from https://github.com/llvm/llvm-project/pull/125267
2025-03-19 20:46:14 -07:00
Thurston Dang
c30ff922ca
[msan] Handle llvm.x86.vcvtps2ph.128/256 explicitly (#130705)
Check whether each lane is fully initialized, and propagate the shadow
per lane instead of using the strict handling of visitInstruction.

Updates the tests from https://github.com/llvm/llvm-project/pull/129807
2025-03-13 16:15:37 -04:00
Thurston Dang
667bbd2ecc
[msan] Apply handleVectorReduceIntrinsic to max/min vector instructions (#129819)
Changes the handling of:
- llvm.aarch64.neon.smaxv
- llvm.aarch64.neon.sminv
- llvm.aarch64.neon.umaxv
- llvm.aarch64.neon.uminv
- llvm.vector.reduce.smax
- llvm.vector.reduce.smin
- llvm.vector.reduce.umax
- llvm.vector.reduce.umin
- llvm.vector.reduce.fmax
- llvm.vector.reduce.fmin
from the default strict handling (visitInstruction) to
handleVectorReduceIntrinsic.

Also adds a parameter to handleVectorReduceIntrinsic to specify whether
the return type must match the element type of the vector.
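A rough sketch of the shape of this handler (helper names are hypothetical; MSan's getShadow/setShadow plumbing is elided):

```cpp
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Sketch: the scalar result of a min/max reduction is uninitialized iff
// any lane of the input is uninitialized, so OR-reduce the lane shadows.
static Value *sketchReduceShadow(IRBuilder<> &IRB, Value *VecShadow,
                                 Type *RetShadowTy) {
  Value *S = IRB.CreateOrReduce(VecShadow); // llvm.vector.reduce.or
  // When the return type does not match the element type (the new
  // parameter mentioned above), resize the collapsed shadow to fit.
  return IRB.CreateZExtOrTrunc(S, RetShadowTy);
}
```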

Updates the tests from https://github.com/llvm/llvm-project/pull/129741,
https://github.com/llvm/llvm-project/pull/129810,
https://github.com/llvm/llvm-project/pull/129768
2025-03-08 19:31:48 -08:00
Thurston Dang
3a0c33afd1
[msan] Handle Arm NEON pairwise min/max instructions (#129824)
Change the handling of:
- llvm.aarch64.neon.fmaxp
- llvm.aarch64.neon.fminp
- llvm.aarch64.neon.fmaxnmp
- llvm.aarch64.neon.fminnmp
- llvm.aarch64.neon.smaxp
- llvm.aarch64.neon.sminp
- llvm.aarch64.neon.umaxp
- llvm.aarch64.neon.uminp
from the incorrect heuristic handler (maybeHandleSimpleNomemIntrinsic)
to handlePairwiseShadowOrIntrinsic.

Updates the tests from https://github.com/llvm/llvm-project/pull/129760

Adds a note that maybeHandleSimpleNomemIntrinsic may incorrectly match
horizontal/pairwise intrinsics.
2025-03-08 19:08:08 -08:00
Nikita Popov
979c275097
[IR] Store Triple in Module (NFC) (#129868)
The module currently stores the target triple as a string. This means
that any code that wants to actually use the triple first has to
instantiate a Triple, which is somewhat expensive. The change in #121652
caused a moderate compile-time regression due to this. While it would be
easy enough to work around, I think that architecturally, it makes more
sense to store the parsed Triple in the module, so that it can always be
directly queried.

For this change, I've opted not to add any magic conversions between
std::string and Triple for backwards-compatibility purposes, and instead
write out needed Triple()s or str()s explicitly. This is because I think
a decent number of them should be changed to work on Triple as well, to
avoid unnecessary conversions back and forth.

The only interesting part in this patch is that the default triple is
Triple("") instead of Triple() to preserve existing behavior. The former
defaults to using the ELF object format instead of the unknown object
format. We should fix that as well.
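A hedged before/after sketch of what call-sites look like under this change (assuming Module::getTargetTriple() now returns the parsed Triple):

```cpp
#include "llvm/IR/Module.h"
#include "llvm/TargetParser/Triple.h"
using namespace llvm;

static bool sketchIsAArch64(const Module &M) {
  // Before: Triple T(M.getTargetTriple()); // re-parse from std::string
  // After: the Module already holds a parsed Triple.
  const Triple &T = M.getTargetTriple();
  return T.isAArch64();
}
```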
2025-03-06 10:27:47 +01:00
Thurston Dang
5d404d75cf
[msan] Generalize handlePairwiseShadowOrIntrinsic, and handle x86 pairwise add/sub (#127567)
x86 pairwise add and sub are currently handled by applying the pairwise add intrinsic to the shadow (https://github.com/llvm/llvm-project/pull/124835), due to the lack of an x86 pairwise OR intrinsic. handlePairwiseShadowOrIntrinsic was added (https://github.com/llvm/llvm-project/pull/126008) to handle Arm
pairwise add, but assumes that the intrinsic operates on each pair of elements as defined by the LLVM type. In contrast, x86 pairwise add/sub may sometimes have e.g., <1 x i64> as a parameter but actually be operating on <2 x i32>.

This patch generalizes handlePairwiseShadowOrIntrinsic, to allow reinterpreting the parameters to be a vector of specified element size, and then uses this function to handle x86 pairwise add/sub.
2025-02-26 21:57:02 -08:00
Thurston Dang
51d8255203
[msan] Handle Arm NEON saturating extract and narrow (#125742)
This handles NEON saturating extract and narrow (Intrinsic::aarch64_neon_{sqxtn, sqxtun, uqxtn}) by (ab)using handleShadowOr() to perform the shadow cast. Previously, these were unknown intrinsics handled suboptimally by visitInstruction.

Updates the tests from https://github.com/llvm/llvm-project/pull/125288 and https://github.com/llvm/llvm-project/pull/125140
2025-02-12 16:22:49 -08:00
Thurston Dang
0d95631a3a
[msan] Handle llvm.[us]cmp (starship operator) (#125804)
Apply handleShadowOr to llvm.[us]cmp. Previously, llvm.[us]cmp was handled correctly by the heuristic when each parameter type is the same as the return type (e.g., `call i8 @llvm.ucmp.i8.i8(i8 %x, i8 %y)`), but handled incorrectly by visitInstruction when the return type differs (e.g., `call i8 @llvm.ucmp.i8.i62(i62 %x, i62 %y)`, `call <4 x i8> @llvm.ucmp.v4i8.v4i32(<4 x i32> %x, <4 x i32> %y)`).

Updates the tests from https://github.com/llvm/llvm-project/pull/125790
2025-02-12 13:38:45 -08:00
Thurston Dang
e9e6ba6a5e
[msan] Handle single-parameter Arm NEON vector convert intrinsics (#126136)
This handles the following llvm.aarch64.neon intrinsics, which were suboptimally handled by visitInstruction:
- fcvtas, fcvtau
- fcvtms, fcvtmu
- fcvtns, fcvtnu
- fcvtps, fcvtpu
- fcvtzs, fcvtzu

The old instrumentation checked that the shadow of every element of the input vector was fully initialized, and aborted otherwise. The new instrumentation propagates the shadow: for each element of the output, the shadow is initialized iff the corresponding element of the input is *fully* initialized (since these are floating-point to integer conversions).
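A rough sketch of the per-element propagation described above (shadow accessors elided; names hypothetical):

```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Sketch: each output lane becomes fully uninitialized (all-ones shadow)
// if any bit of the corresponding input lane's shadow is set, and fully
// initialized (zero shadow) otherwise.
static Value *sketchConvertShadow(IRBuilder<> &IRB, Value *InShadow,
                                  Type *OutShadowTy) {
  Value *Zero = Constant::getNullValue(InShadow->getType());
  Value *LanePoisoned = IRB.CreateICmpNE(InShadow, Zero); // <N x i1>
  return IRB.CreateSExt(LanePoisoned, OutShadowTy);
}
```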

Updates the tests from https://github.com/llvm/llvm-project/pull/126095
2025-02-12 13:20:22 -08:00
Jie Fu
a0fbc19ad6 [MemorySanitizer] Silence an unused-variable warning (NFC)
/llvm-project/llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp:2622:22:
 error: unused variable 'ReturnType' [-Werror,-Wunused-variable]
    FixedVectorType *ReturnType = cast<FixedVectorType>(I.getType());
                     ^
1 error generated.
2025-02-12 11:32:51 +08:00
Thurston Dang
bfbe5319a8
[msan] Add handlePairwiseShadowOrIntrinsic and use it to handle Arm NEON pairwise add (#126008)
This patch adds a function, handlePairwiseShadowOrIntrinsic that ORs
pairs of adjacent shadow values; this is suitable for propagating shadow
for 1- or 2-vector intrinsics that combine adjacent fields. It then
applies handlePairwiseShadowOrIntrinsic to Arm NEON pairwise add:
llvm.aarch64.neon.{addhn, raddhn} (currently incorrectly handled) and
llvm.aarch64.neon.{saddlp, uaddlp} (currently suboptimally handled).
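A rough sketch of the pairwise shadow OR (helper name hypothetical, not the patch's code):

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Sketch: lane i of the result combines source lanes 2i and 2i+1, so its
// shadow is the OR of those two lane shadows, gathered by strided shuffles.
static Value *sketchPairwiseShadowOr(IRBuilder<> &IRB, Value *Shadow,
                                     unsigned NumPairs) {
  SmallVector<int, 8> Even, Odd;
  for (unsigned I = 0; I != NumPairs; ++I) {
    Even.push_back(2 * I);
    Odd.push_back(2 * I + 1);
  }
  Value *Lo = IRB.CreateShuffleVector(Shadow, Even); // lanes 0, 2, 4, ...
  Value *Hi = IRB.CreateShuffleVector(Shadow, Odd);  // lanes 1, 3, 5, ...
  return IRB.CreateOr(Lo, Hi);
}
```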

Updates the tests from https://github.com/llvm/llvm-project/pull/125820.
2025-02-11 19:13:18 -08:00
Thurston Dang
73a1c7b8d6
[msan] Handle Arm NEON sum long across vector (#125784)
Apply handleVectorReduceIntrinsic() to llvm.aarch64.neon.[su]addlv.
Previously, these were unknown intrinsics handled suboptimally by
visitInstruction.

Updates the tests from https://github.com/llvm/llvm-project/pull/125761
2025-02-06 08:55:26 -08:00
Thurston Dang
c9446ff8a3
[msan] Handle Arm NEON floating-point min/max (vector) (#125778)
Apply handleVectorReduceIntrinsic() to Intrinsic::aarch64_neon_f{min,max}(nm)?v. Previously, these intrinsics were handled correctly (by maybeHandleSimpleNomemIntrinsic) if each parameter's type was the same as the return type; otherwise, they were handled suboptimally by visitInstruction().

Updates the tests from https://github.com/llvm/llvm-project/pull/125729.
2025-02-05 19:54:45 -08:00
Thurston Dang
c09e51ae97
[msan][NFCI] Add arg_size() assertions (#125907)
This prevents the handlers from being called with blatantly inappropriate intrinsics.

Currently, if the handlers are called with an intrinsic that doesn't have enough arguments, it may abort; that is bad, but visible. The more insidious risk is that a handler is called with an intrinsic that has more arguments than expected; that will not visibly fail.
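A rough sketch of the kind of guard being added (the arity here is arbitrary; not the patch's code):

```cpp
#include "llvm/IR/IntrinsicInst.h"
#include <cassert>
using namespace llvm;

// Sketch: fail loudly if a handler is reached with an intrinsic whose
// arity does not match what the handler assumes.
static void sketchArityCheck(const IntrinsicInst &I) {
  assert(I.arg_size() == 2 && "handler expects a two-argument intrinsic");
}
```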
2025-02-05 16:24:36 -08:00
Thurston Dang
3e436a8d18
[msan] Handle Intrinsic::vector_reduce_f{add,mul} (#125615)
This adds handleVectorReduceWithStarterIntrinsic() (similar to
handleVectorReduceIntrinsic but for intrinsics with an additional
starting parameter) and uses it to handle
Intrinsic::vector_reduce_f{add,mul}.
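A rough sketch of the starter variant (helper name hypothetical):

```cpp
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Sketch: llvm.vector.reduce.fadd(%start, %vec) is uninitialized if the
// starter or any vector lane is uninitialized, so OR the starter's shadow
// with the OR-reduction of the lane shadows.
static Value *sketchReduceWithStarter(IRBuilder<> &IRB, Value *StartShadow,
                                      Value *VecShadow) {
  return IRB.CreateOr(StartShadow, IRB.CreateOrReduce(VecShadow));
}
```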

Updates the tests from https://github.com/llvm/llvm-project/pull/125597
2025-02-04 10:28:36 -08:00
Thurston Dang
3513886c96
[msan] Generalize handleVectorReduceIntrinsic to support Arm NEON add reduction to scalar (#125288)
This generalizes handleVectorReduceIntrinsic to allow intrinsics where
the return type is not the same as the vector's element type. This patch then applies
the generalized handleVectorReduceIntrinsic to support the following Arm
NEON add reduction to scalar intrinsics: llvm.aarch64.neon.{faddv,
saddv, uaddv}.

Updates the tests from https://github.com/llvm/llvm-project/pull/125271
2025-02-04 10:27:30 -08:00
Thurston Dang
f10979f607
[msan] Handle llvm.bitreverse by applying intrinsic to shadow (#125606)
llvm.bitreverse was incorrectly handled by the heuristic handler,
because it did not reverse the bits of the shadow.

This updates the instrumentation to use the handler from
https://github.com/llvm/llvm-project/pull/114490 and updates the test
from https://github.com/llvm/llvm-project/pull/125592
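A rough sketch of the apply-to-shadow handling (helper name hypothetical):

```cpp
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Intrinsics.h"
using namespace llvm;

// Sketch: llvm.bitreverse permutes bits without mixing them, so the
// result's shadow is the operand's shadow run through the same intrinsic.
static Value *sketchBitreverseShadow(IRBuilder<> &IRB, Value *Shadow) {
  return IRB.CreateIntrinsic(Intrinsic::bitreverse, {Shadow->getType()},
                             {Shadow});
}
```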
2025-02-03 17:26:13 -08:00
Fangrui Song
4c7aa6f983 [msan] Fix -Wunused-variable in non-assertion builds after #124421 2025-01-28 20:20:25 -08:00
Thurston Dang
fdadef9be3
[msan] Handle x86_avx512_(min|max)_p[sd]_512 intrinsics (#124421)
The AVX/SSE variants are already handled heuristically (maybeHandleSimpleNomemIntrinsic via handleUnknownIntrinsic), but the AVX512 variants contain an additional parameter (the rounding method) which fails to match heuristically. This patch generalizes maybeHandleSimpleNomemIntrinsic to allow additional flags (ignored by MSan) and explicitly calls it to handle AVX512 min/max ps/pd intrinsics.

It also updates the test added in https://github.com/llvm/llvm-project/pull/123980
2025-01-28 19:12:44 -08:00
Thurston Dang
4a426079d6
[msan] Use horizontal add to compute shadow for horizontal sub (#124835)
This improves the horizontal sub handling (from
https://github.com/llvm/llvm-project/pull/124159), by always using
horizontal add for the shadow, as recommended by Vitaly.

Fixes https://github.com/llvm/llvm-project/issues/124662
2025-01-28 14:56:05 -08:00
Thurston Dang
7bd9c780e3
[msan][NFCI] Generalize handleIntrinsicByApplyingToShadow to allow alternative intrinsic for shadows (#124831)
https://github.com/llvm/llvm-project/pull/124159 uses
handleIntrinsicByApplyingToShadow for horizontal add/sub, but Vitaly
recommends always using the add version to avoid false negatives for
fully uninitialized data
(https://github.com/llvm/llvm-project/issues/124662).

This patch lays the groundwork by generalizing
handleIntrinsicByApplyingToShadow to allow using a different intrinsic
(of the same type as the original intrinsic) for the shadow. Planned
work will apply it to horizontal sub.
2025-01-28 12:35:07 -08:00
Thurston Dang
063db51cd4 Reapply "[msan] Add handlers for AVX masked load/store intrinsics (#123857)"
This reverts commit b9d301cc7e4fe4c442ec15169686fa4a18f5cdfc, i.e., it
relands db79fb2a91df31a07f312f8e061936927ac5c506.

I had mistakenly thought this caused a buildbot breakage (the actual
culprit was my other patch,
https://github.com/llvm/llvm-project/pull/123980, which landed at the
same time) and thus had reverted it even though AFAIK it is not broken.
2025-01-28 18:11:44 +00:00
Jeremy Morse
e14962a39c
[NFC][DebugInfo] Use iterators for instruction insertion in more places (#124291)
As part of the "RemoveDIs" work to eliminate debug intrinsics, we're
replacing methods that use Instruction*'s as positions with iterators.
This patch changes some more complex call-sites, those crossing file
boundaries and where I've had to perform some minor rewrites.
2025-01-27 15:25:17 +00:00
Thurston Dang
b9d301cc7e Revert "[msan] Add handlers for AVX masked load/store intrinsics (#123857)"
This reverts commit db79fb2a91df31a07f312f8e061936927ac5c506.

Reason: buildbot breakage
(https://lab.llvm.org/buildbot/#/builders/144/builds/16636/steps/6/logs/FAIL__LLVM__avx512-intrinsics-upgrade_ll)
2025-01-27 01:10:35 +00:00
Thurston Dang
db79fb2a91
[msan] Add handlers for AVX masked load/store intrinsics (#123857)
This patch adds explicit support for AVX masked load/store intrinsics,
largely by applying the intrinsics to the shadows (but subtly different
to handleIntrinsicByApplyingToShadow()).

We do not reuse the handleMaskedLoad/Store functions. The key challenge
is that the LLVM masked intrinsics require a vector of booleans, while
AVX masked intrinsics use the MSBs of a vector of integers.
X86InstCombineIntrinsic.cpp::simplifyX86MaskedLoad mentions that the x86
backend does not know how to efficiently convert from a vector of
booleans back into the AVX mask format; therefore, they (and we) do not
reduce AVX masked intrinsics into LLVM masked intrinsics.
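As a hedged illustration of the format mismatch described above (not the patch's code): extracting LLVM-style booleans from an AVX integer mask is a cheap signed compare, and it is the reverse direction that the backend handles poorly.

```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"
using namespace llvm;

// Sketch: AVX masks carry the per-lane flag in the MSB of each integer
// lane; <N x i1> falls out of a signed less-than-zero compare.
static Value *sketchAvxMaskToBool(IRBuilder<> &IRB, Value *IntMask) {
  Value *Zero = Constant::getNullValue(IntMask->getType());
  return IRB.CreateICmpSLT(IntMask, Zero); // MSB set => lane enabled
}
```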
2025-01-26 15:40:55 -08:00
Jeremy Morse
6292a808b3
[NFC][DebugInfo] Use iterator-flavour getFirstNonPHI at many call-sites (#123737)
As part of the "RemoveDIs" project, BasicBlock::iterator now carries a
debug-info bit that's needed when getFirstNonPHI and similar feed into
instruction insertion positions. Call-sites where that's necessary were
updated a year ago; but to ensure some type safety however, we'd like to
have all calls to getFirstNonPHI use the iterator-returning version.

This patch changes a bunch of call-sites calling getFirstNonPHI to use
getFirstNonPHIIt, which returns an iterator. All these call sites are
where it's obviously safe to fetch the iterator then dereference it. A
follow-up patch will contain less-obviously-safe changes.

We'll eventually deprecate and remove the instruction-pointer
getFirstNonPHI, but not before adding concise documentation of what
considerations are needed (very few).
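A rough sketch of the call-site migration this performs:

```cpp
#include "llvm/IR/BasicBlock.h"
using namespace llvm;

// Sketch: take the iterator (which carries the debug-info bit) instead of
// a raw Instruction pointer, and pass it to insertion APIs directly.
static void sketchMigration(BasicBlock &BB) {
  // Before: Instruction *IP = BB.getFirstNonPHI();
  BasicBlock::iterator IP = BB.getFirstNonPHIIt();
  (void)IP; // ... hand IP to the insertion position ...
}
```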

---------

Co-authored-by: Stephen Tozer <Melamoto@gmail.com>
2025-01-24 13:27:56 +00:00
Thurston Dang
8ef171ee83
[msan] Handle horizontal add/subtract intrinsic by applying to shadow (#124159)
Horizontal add (hadd) and subtract (hsub) are currently heuristically
handled by `maybeHandleSimpleNomemIntrinsic()` (via
`handleUnknownIntrinsic()`), which computes the shadow by bitwise OR'ing
the two operands. This has false positives for hadd/hsub shadows. For
example, suppose the shadows for the two operands are 00000000 and
11111111 respectively. The expected shadow for the result is 00001111,
but `maybeHandleSimpleNomemIntrinsic` would compute it as 11111111.

This patch handles horizontal add using
`handleIntrinsicByApplyingToShadow` (from
https://github.com/llvm/llvm-project/pull/114490), which has no false
positives for hadd/hsub: if each pair of adjacent shadow values is zero
(fully initialized), the result will be zero (fully initialized). More
generally, it is precise for hadd/hsub if at least one of the two
adjacent shadow values in each pair is zero.

It does have some false negatives for hadd/hsub: if we add/subtract two
adjacent non-zero shadow values, some bits of the result may incorrectly
be zero. We consider this an acceptable tradeoff for performance. To
make shadow propagation precise, we want the equivalent of "horizontal
OR", but this is not available. Reducing horizontal OR to (permutation
plus bitwise OR) is left as an exercise for the reader.
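A rough sketch of the apply-to-shadow idea for these (non-overloaded) x86 intrinsics, with hypothetical naming:

```cpp
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/IntrinsicInst.h"
using namespace llvm;

// Sketch: compute the hadd/hsub shadow by running the same horizontal
// intrinsic over the operand shadows; adjacent clean pairs stay clean.
static Value *sketchHaddShadow(IRBuilder<> &IRB, IntrinsicInst &I,
                               Value *ShadowA, Value *ShadowB) {
  return IRB.CreateIntrinsic(I.getIntrinsicID(), /*Types=*/{},
                             {ShadowA, ShadowB});
}
```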
2025-01-23 22:53:56 -08:00
Thurston Dang
969eb4ec4c [msan][NFC] Correct and clarify comment for getShadowPtrOffset()
The stated return type was incorrect; this patch corrects it. More generally, it explains how the Offset and its components fit into the overall shadow mapping calculation.
2025-01-24 00:36:40 +00:00
Thurston Dang
9cefa3e6fc
[msan] Generalize handleIntrinsicByApplyingToShadow by adding bitcasting (#123474)
`handleIntrinsicByApplyingToShadow` (introduced in
https://github.com/llvm/llvm-project/pull/114490) requires that the
intrinsic supports integer-ish operands; this is not the case for all
intrinsics. This patch generalizes the function to bitcast the shadow
arguments to be the same type as the original intrinsic, thus
guaranteeing that the intrinsic exists. Additionally, it casts the
computed shadow to be an appropriate shadow type.

This function assumes that the intrinsic will handle arbitrary
bit-patterns (for example, if the intrinsic accepts floats for var1, we
assume that it works normally even if inputs are NaNs etc.).
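A rough sketch of the bitcast generalization for the unary, non-overloaded case (helper name hypothetical):

```cpp
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/IntrinsicInst.h"
using namespace llvm;

// Sketch: shadows are integer vectors, so to reuse e.g. a float-typed
// intrinsic on a shadow we bitcast the shadow to the original argument
// type, apply the intrinsic, and cast the result back to a shadow type.
static Value *sketchApplyToShadowWithBitcast(IRBuilder<> &IRB,
                                             IntrinsicInst &I, Value *Shadow,
                                             Type *RetShadowTy) {
  Value *Arg = IRB.CreateBitCast(Shadow, I.getArgOperand(0)->getType());
  Value *Res = IRB.CreateIntrinsic(I.getIntrinsicID(), /*Types=*/{}, {Arg});
  return IRB.CreateBitCast(Res, RetShadowTy);
}
```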
2025-01-22 18:17:14 -08:00
Mats Jun Larsen
416f1c465d
[IR] Replace of PointerType::get(Type) with opaque version (NFC) (#123617)
In accordance with https://github.com/llvm/llvm-project/issues/123569

In order to keep the patch at reasonable size, this PR only covers for
the llvm subproject, unittests excluded.
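A rough before/after sketch of the replacement:

```cpp
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/LLVMContext.h"
using namespace llvm;

// Sketch: with opaque pointers the pointee type no longer matters, so the
// Type-based factory becomes a context-based one.
static PointerType *sketchOpaquePtr(LLVMContext &Ctx) {
  // Before: PointerType::get(SomeTy, /*AddressSpace=*/0);
  return PointerType::get(Ctx, /*AddressSpace=*/0);
}
```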
2025-01-21 00:32:56 +09:00
Thurston Dang
58a70dffcc
[msan] Add debugging for handleUnknownIntrinsic (#123381)
This adds an experimental flag, msan-dump-strict-intrinsics (modeled
after msan-dump-strict-instructions), which prints out any intrinsics
that are heuristically handled. Additionally, MSan will print out
heuristically handled intrinsics when -debug is passed as a flag in
debug builds.

MSan's intrinsic handling can be broken down into:

1) special cases (usually highly accurate)
2) heuristic handling (sometimes erroneous)
3) not handled

This patch's -msan-dump-strict-intrinsics is intended to help debug Case
2. Case 3 (intrinsics that are handled by neither special cases nor
heuristics) can be debugged using the existing
-msan-dump-strict-instructions.
2025-01-17 11:27:39 -08:00
Sergey Kachkov
04b002bbb8
[IRBuilder] Add Align argument for CreateMaskedExpandLoad and CreateMaskedCompressStore (#122878)
This patch adds the possibility to specify the alignment for the
llvm.masked.expandload/llvm.masked.compressstore intrinsics in IRBuilder
(this is mostly NFC for now, since it is only used in MemorySanitizer, but
there is an intention to generate these intrinsics in compiler passes,
e.g., in LoopVectorizer).
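A hedged sketch of the resulting builder call; the exact parameter order is an assumption based on this description:

```cpp
#include "llvm/IR/IRBuilder.h"
#include "llvm/Support/Alignment.h"
using namespace llvm;

// Sketch: the alignment (assumed to follow the pointer argument) now
// rides along to the llvm.masked.expandload call.
static Value *sketchExpandLoad(IRBuilder<> &IRB, Type *Ty, Value *Ptr,
                               Value *Mask, Value *PassThru) {
  return IRB.CreateMaskedExpandLoad(Ty, Ptr, Align(1), Mask, PassThru);
}
```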
2025-01-15 12:19:23 +03:00
Alexander Shaposhnikov
3791323343
[msan] Add support for avx_round_pd_256/avx_round_ps_256 (#119334)
Add support for avx_round_pd_256/avx_round_ps_256.
This is a follow-up to https://github.com/llvm/llvm-project/pull/118441

Test plan:
ninja check-all
2024-12-09 23:27:34 -08:00
Thurston Dang
3b74abdf04
[msan] Support NEON vector multiplication instructions (#117944)
Approximates the shadow propagation by OR'ing the shadows of the operands.

Updates the neon_vmul.ll test introduced in
https://github.com/llvm/llvm-project/pull/117935
2024-12-09 11:39:29 -08:00
Kazu Hirata
1b95e76d8f [Instrumentation] Fix a warning
This patch fixes:

  llvm/lib/Transforms/Instrumentation/MemorySanitizer.cpp:3840:14:
  error: unused variable 'NumArgOperands' [-Werror,-Wunused-variable]
2024-12-04 08:31:40 -08:00
Alexander Shaposhnikov
95e44d3670
[msan] Add handling for sse41_round_pd/sse41_round_ps (#118441)
Add handling for sse41_round_pd/sse41_round_ps similarly to
maybeHandleSimpleNomemIntrinsic.

Test plan: ninja check-all
2024-12-04 08:27:08 -08:00
k-kashapov
f2fa9ac616
[nfc][MSan] Change for-loop to ArgNo instead of drop_begin (#117553)
As discussed in
https://github.com/llvm/llvm-project/pull/109284#discussion_r1838830571
Changed for loop to use `ArgNo` instead of `drop_begin` to keep loop
code consistent with other helpers.

Co-authored-by: Kamil Kashapov <kashapov@ispras.ru>
2024-12-03 14:32:54 -08:00
k-kashapov
d9e2fb70d0
[msan] Add 32-bit platforms support (#109284)
References https://github.com/llvm/llvm-project/issues/103057

Added `VAArgHelper` functions for platforms: ARM32, i386, RISC-V,
PowerPC32, MIPS32.

ARM32, RISC-V and MIPS32 share similar conventions for variadic arguments,
so `VAArgGenericHelper` was introduced to avoid code duplication.

---------

Co-authored-by: Kamil Kashapov <kashapov@ispras.ru>
Co-authored-by: Vitaly Buka <vitalybuka@google.com>
2024-11-14 01:41:13 -08:00
Vitaly Buka
debfd7b0b4
[msan] Remove unnecessary zero increment (#116185) 2024-11-14 00:59:01 -08:00
Kamil Kashapov
ad26835b2c [nfc][msan] Move VarArgGenericHelper
Part of #109284
2024-11-12 00:36:44 -08:00
Kamil Kashapov
469ac11841 [nfc][msan] Remove 64 from VarArg*Helper names
Part of #109284
2024-11-12 00:26:35 -08:00
Kamil Kashapov
b94a24e5dd [nfc][msan] Reorder ifs in CreateVarArgHelper
Part of #109284
2024-11-12 00:26:35 -08:00
Vitaly Buka
adb476b012
[nfc][msan] Clang-format MemorySanitizer.cpp (#115828)
Extracted from #109284

Co-authored-by: Kamil Kashapov <kashapov@ispras.ru>
2024-11-11 23:17:05 -08:00
Thurston Dang
e549ec529c
[msan] Add handleIntrinsicByApplyingToShadow; support NEON tbl/tbx (#114490)
This adds a general function that handles intrinsics by applying the
intrinsic to the shadows, and applies it to the specific case of Arm
NEON TBL/TBX intrinsics.

This also updates the tests from
https://github.com/llvm/llvm-project/pull/114462
2024-11-01 14:58:45 -07:00
Vitaly Buka
cf8d24531e
[msan] Reduces overhead of #113200, by 10% (#113201)
CTMark #113200 size overhead was 5.3%, now it's 4.7%.

The patch affects only signed integers.

https://alive2.llvm.org/ce/z/Lv5hyi

* The patch replaces code which extracted the sign bit,
maximized/minimized it, then packed it back, with a
simple sign-bit flip. Another way to think about the
transformation is as a subtraction of MIN_SINT from
A/B: it maps MIN_SINT to 0, 0 to -MIN_SINT, and
MAX_SINT to MAX_UINT.

* Then, to maximize/minimize A/B, we don't need
to extract the sign bit; we can apply the shadow the
same way as to the other bits.

* After the sign-bit flip, we had to switch to the
unsigned versions of the predicates.

* After the change above, getHighestPossibleValue/getLowestPossibleValue
became very similar, so we can combine them into a single function.

* Because the function does the sign-bit flip and
requires unsigned predicates on the returned values,
there is no point in keeping it as a class member;
to hide it, we switch to a function-local lambda.
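A self-contained illustration of the sign-bit-flip trick on 8-bit values (not the patch's code):

```cpp
#include <cassert>
#include <cstdint>

// XOR-ing the sign bit maps signed order onto unsigned order
// (MIN_SINT -> 0, MAX_SINT -> MAX_UINT), so afterwards unsigned
// predicates and plain bitwise shadow reasoning suffice.
int main() {
  auto Flip = [](int8_t X) { return uint8_t(X) ^ 0x80u; };
  assert(Flip(INT8_MIN) == 0x00); // most negative -> smallest unsigned
  assert(Flip(0) == 0x80);        // zero -> middle of the unsigned range
  assert(Flip(INT8_MAX) == 0xFF); // most positive -> largest unsigned
  // Signed comparison agrees with unsigned comparison after the flip:
  int8_t A = -5, B = 3;
  assert((A < B) == (Flip(A) < Flip(B)));
  return 0;
}
```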
2024-10-24 20:46:49 -07:00
Vitaly Buka
c77d8edf80
Revert "Revert "[msan] Switch to -msan-handle-icmp-exact my default"" (#113379)
Reverts llvm/llvm-project#113376

Fixed with #113378
2024-10-22 14:05:35 -07:00
Vitaly Buka
71792dc570
[NFC][msan] Workaround arg evaluation order diff GCC vs Clang (#113378) 2024-10-22 13:31:46 -07:00
Vitaly Buka
c3aa8b7dd6
Revert "[msan] Switch to -msan-handle-icmp-exact my default" (#113376)
Reverts llvm/llvm-project#113200

Breaks bots, see llvm/llvm-project#113200
2024-10-22 13:05:59 -07:00