…nd malloc parameters bound
Using a naive expression walker, it is possible to compute valuable
information for allocation functions, GEPs and allocas, even in the
presence of some dynamic information.
We deliberately don't rely on computeConstantRange, to avoid taking
advantage of undefined behavior, which would be counterproductive with
respect to the usual llvm.objectsize usage.
llvm.objectsize plays an important role in _FORTIFY_SOURCE definitions,
so improving its diagnostics in turn improves the security of compiled
applications.
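For illustration, a made-up example (values invented, not taken from the
test suite) of the kind of pattern the static evaluation is now expected
to bound:

```cpp
#include <cstddef>
#include <cstdlib>

// Made-up sketch: an allocation whose size is dynamic but bounded is the
// kind of code the improved static evaluation of __builtin_object_size
// (the frontend counterpart of llvm.objectsize) should now handle.
std::size_t example(int cond) {
  // The size is dynamic, yet never exceeds 32, so 32 is a valid upper bound
  // (mode 0 asks for a maximum; (size_t)-1 means "unknown").
  char *p = static_cast<char *>(std::malloc(cond ? 16 : 32));
  std::size_t bound = __builtin_object_size(p, 0);
  std::free(p);
  return bound;
}
```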
As a side note, as a result of recent optimization improvements, clang
no longer passes
https://github.com/serge-sans-paille/builtin_object_size-test-suite.
This commit restores the situation and greatly improves the scope of
code handled by the static version of __builtin_object_size.
This is a recommit of https://github.com/llvm/llvm-project/pull/115522
with fix applied.
The internal structure used to carry intermediate computations holds
signed values. If an object size happens to overflow the signed range,
we can get an invalid result, so make sure this situation never
happens. This is not very limiting, as static allocations of such
large objects should scarcely happen.
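A minimal sketch of the resulting guard, with an illustrative helper
name (this is not the actual implementation):

```cpp
#include "llvm/ADT/APInt.h"
#include <optional>

using namespace llvm;

// Illustrative sketch, not the actual implementation: because intermediate
// size/offset computations are carried as signed values, a computed size
// whose sign bit is set cannot be represented and must be reported as
// unknown rather than silently wrapping to a negative value.
static std::optional<APInt> checkedSize(const APInt &Size) {
  if (Size.isNegative())   // i.e. it overflowed the signed range
    return std::nullopt;   // treat as "unknown"
  return Size;
}
```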
Using LazyValueInfo, it is possible to compute valuable information for
allocation functions, GEP and alloca, even in the presence of dynamic
information.
llvm.objectsize plays an important role in _FORTIFY_SOURCE definitions,
so improving its diagnostics in turn improves the security of compiled
applications.
As a side note, as a result of recent optimization improvements, clang
no longer passes
https://github.com/serge-sans-paille/builtin_object_size-test-suite.
This commit restores the situation and greatly improves the scope of
code handled by the static version of __builtin_object_size.
…and Select/Phi
When picking a SizeOffsetAPInt through combineSizeOffset, the behavior
differs depending on whether the constant offset we are about to apply
is positive or negative: if it's positive, we need to compare the
remaining bytes (i.e. Size - Offset), but if it's negative, we need to
compare the preceding bytes (i.e. Offset).
Fixes #111709
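A simplified sketch of that distinction (illustrative only, not the
actual combineSizeOffset code; the "pick the smaller" policy models just
the minimum query mode):

```cpp
#include <cstdint>

// Simplified sketch: with a positive constant offset still to be applied,
// the quantity that matters is the remaining bytes (Size - Offset); with a
// negative one, it is the preceding bytes (Offset).
struct SizeOffset { uint64_t Size, Offset; };

static SizeOffset pickForMin(SizeOffset A, SizeOffset B, int64_t ConstOffset) {
  if (ConstOffset >= 0) {
    // Positive offset: compare what is left past the current offset.
    return (A.Size - A.Offset) < (B.Size - B.Offset) ? A : B;
  }
  // Negative offset: compare what lies before the current offset.
  return A.Offset < B.Offset ? A : B;
}
```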
This fixes all the places that hit the new assertion added in
https://github.com/llvm/llvm-project/pull/106524 in tests. That is,
cases where the value passed to the APInt constructor is not an N-bit
signed/unsigned integer, where N is the bit width and signedness is
determined by the isSigned flag.
The fixes either set the correct value for isSigned, set the
implicitTrunc flag, or perform more calculations inside APInt.
Note that the assertion is currently still disabled by default, so this
patch is mostly NFC.
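A few illustrative cases of those three kinds of fixes (the position and
default of the implicitTrunc flag follow the patch series around #106524
and may differ):

```cpp
#include "llvm/ADT/APInt.h"

using namespace llvm;

// Illustrative only; the implicitTrunc parameter follows the patch series
// around llvm/llvm-project#106524 and may differ in your tree.
void examples() {
  // (1) Set isSigned correctly: -1 is not a valid 8-bit unsigned value,
  //     but it is a valid 8-bit signed one.
  APInt A(/*numBits=*/8, /*val=*/static_cast<uint64_t>(-1), /*isSigned=*/true);

  // (2) Opt into truncation explicitly when dropping high bits is intended.
  APInt B(/*numBits=*/8, /*val=*/0x1ff, /*isSigned=*/false,
          /*implicitTrunc=*/true);

  // (3) Do the arithmetic inside APInt so an out-of-range uint64_t is never
  //     passed to the constructor in the first place (wraps modulo 2^16
  //     instead of tripping the assertion).
  APInt C = APInt(/*numBits=*/16, 300) * APInt(/*numBits=*/16, 400);
  (void)A; (void)B; (void)C;
}
```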
We should handle allocator attributes not only on function
declarations, but also on the call-site. That way we can e.g.
also optimize cases where the allocator function is a virtual
function call.
This was already supported in some of the MemoryBuiltins helpers,
but not all of them. This adds support for allocsize, alloc-family
and allockind("free").
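A simplified sketch of the lookup order this implies (not the actual
MemoryBuiltins code):

```cpp
#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/InstrTypes.h"

using namespace llvm;

// Simplified sketch: prefer call-site attributes, then fall back to the
// callee declaration (which may be absent for indirect/virtual calls).
static Attribute getAllocSizeAttr(const CallBase &CB) {
  if (CB.getAttributes().hasFnAttr(Attribute::AllocSize))
    return CB.getAttributes().getFnAttr(Attribute::AllocSize);
  if (const Function *Callee = CB.getCalledFunction())
    if (Callee->hasFnAttribute(Attribute::AllocSize))
      return Callee->getFnAttribute(Attribute::AllocSize);
  return Attribute(); // no allocsize information available
}
```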
This is a helper to avoid writing `getModule()->getDataLayout()`. I
regularly try to use this method only to remember it doesn't exist...
`getModule()->getDataLayout()` is also a common (the most common?)
reason why code has to include the Module.h header.
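For example (sketch):

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Module.h"

using namespace llvm;

void sketch(const Instruction &I) {
  // Before: reach through the parent module (and include Module.h for it).
  const DataLayout &Old = I.getModule()->getDataLayout();
  // After: the convenience accessor introduced by this change.
  const DataLayout &New = I.getDataLayout();
  (void)Old; (void)New;
}
```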
Uses the new InsertPosition class (added in #94226) to simplify some of
the IRBuilder interface, and removes the need to pass a BasicBlock
alongside a BasicBlock::iterator, using the fact that we can now get the
parent basic block from the iterator even if it points to the sentinel.
This patch removes the BasicBlock argument from each constructor or call
to setInsertPoint.
This has no functional effect, but later on as we look to remove the
`Instruction *InsertBefore` argument from instruction-creation
(discussed
[here](https://discourse.llvm.org/t/psa-instruction-constructors-changing-to-iterator-only-insertion/77845)),
this will simplify the process by allowing us to deprecate the
InsertPosition constructor directly and catch all the cases where we use
instructions rather than iterators.
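A small before/after sketch of the affected call pattern (assuming the
iterator-only SetInsertPoint overload introduced by this series):

```cpp
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instruction.h"

using namespace llvm;

void sketch(Instruction *I) {
  IRBuilder<> B(I->getContext());
  // Before: the parent block had to be passed alongside the iterator:
  //   B.SetInsertPoint(I->getParent(), I->getIterator());
  // After: the iterator alone suffices; InsertPosition recovers the parent
  // block from it, even when the iterator is the sentinel end() position.
  B.SetInsertPoint(I->getIterator());
}
```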
The use of std::pair makes the values it holds opaque. Using classes
improves this while keeping the POD aspect of a std::pair. As a nice
addition, the "known" functions held inappropriately in the Visitor
classes can now properly reside in the value classes. :-)
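Roughly the shape of the change (illustrative; the real value classes
carry more helpers, and the zero-bit-width "unknown" encoding here is an
assumption):

```cpp
#include "llvm/ADT/APInt.h"

using namespace llvm;

// Illustrative shape only: named members instead of std::pair::first/second,
// with the "known" queries living on the value class itself.
struct SizeOffsetAPIntSketch {
  APInt Size;    // previously pair.first
  APInt Offset;  // previously pair.second

  bool knownSize() const { return Size.getBitWidth() != 0; }
  bool knownOffset() const { return Offset.getBitWidth() != 0; }
  bool bothKnown() const { return knownSize() && knownOffset(); }
};
```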
We're running into stack overflows for huge functions with lots of phis.
Even without the stack overflows, this recurses to a depth of more than
7000 in some auto-generated code.
This fixes the stack overflow and brings down the compile time to
something reasonable.
visit will skip visiting instructions it already has visited
to avoid issues with cycles in the data graph. However,
the result of this skipping behavior is that if we
encounter the same instruction twice, and that instruction
has a well defined result and isn't part of a cycle, we
will introduce unknowns into the analysis even though we
knew the size and offset of the instruction's result.
Instead of skipping such instructions, keep a cache of
the result of visiting them. This result is initialized
to unknown() before visiting, so if we happen to visit
it again recursively (perhaps as the result of a cycle
or a phi), we will get unknown as the cached result and
exit out.
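A self-contained sketch of the caching scheme, with stand-in types
instead of the real visitor:

```cpp
#include <unordered_map>

// Stand-in types: Node plays the role of an IR instruction, Result the role
// of the size/offset answer. Seeding the cache with unknown() before the
// recursive work makes cycles terminate, while acyclic revisits reuse the
// previously computed result instead of degrading to unknown.
struct Node {};

struct Result {
  long Size = -1;
  static Result unknown() { return {}; }
};

struct Visitor {
  std::unordered_map<const Node *, Result> Cache;

  Result visitImpl(const Node *) { return {}; } // real per-node work goes here

  Result visit(const Node *N) {
    auto [It, Inserted] = Cache.try_emplace(N, Result::unknown());
    if (!Inserted)
      return It->second; // already computed, or a cycle still in progress
    Result R = visitImpl(N);
    Cache[N] = R;        // replace the provisional unknown() with the result
    return R;
  }
};
```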
The inserted instructions can usually be simplified. Make sure this
happens in the same InstCombine iteration by adding them to the
worklist.
We happen to get some better optimization in two cases, but this is
just a lucky accident. https://github.com/llvm/llvm-project/issues/63472
tracks implementing a fold for that case.
This doesn't track all inserted instructions yet, for that we would
also have to include those created by ObjectSizeOffsetEvaluator.
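Roughly (hypothetical names; NewInsts stands for whatever instructions
the lowering produced):

```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/IR/Instruction.h"
#include "llvm/Transforms/Utils/InstructionWorklist.h"

using namespace llvm;

// Hypothetical sketch: queue the freshly created instructions so InstCombine
// revisits (and usually folds) them within the same iteration.
static void queueForSimplification(InstructionWorklist &Worklist,
                                   ArrayRef<Instruction *> NewInsts) {
  for (Instruction *NewI : NewInsts)
    Worklist.add(NewI);
}
```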
Conservatively return unknown in this degenerate case. This is
hard to hit in practice, because such phis are usually optimized
away before they reach a getObjectSize() call.
Fixes https://github.com/llvm/llvm-project/issues/63013.
Switch to the just-updated versions of the API in tcmalloc that change
the name of the hot/cold parameter to a reserved identifier,
__hot_cold_t.
This was based on feedback from Richard Smith, as I also need to add
some follow-on handling to clang so they are annotated properly.
Differential Revision: https://reviews.llvm.org/D149475
Many uses of getIntPtrType() were using that type to calculate the
needed type for GEP offset arguments. However, some time ago,
DataLayout was extended to support pointers where the size of the
pointer is not equal to the size of the values used to index it.
Much code was already migrated to, for example, use getIndexSizeInBits
instead of getPointerSizeInBits, but some rewrites still used
getIntPtrType() to get the type for GEP offsets.
This commit changes uses of getIntPtrType() to getIndexType() where
they are involved in a GEP-related calculation.
In at least one case (bounds check insertion) this resolves a compiler
crash that the new test added here would previously trigger.
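A minimal sketch of the pattern being changed (illustrative helper, not
code from the patch):

```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"

using namespace llvm;

// Size GEP offsets from the pointer's index type, not from the
// integer-pointer type, so targets whose index width differs from the
// pointer's storage width are handled correctly.
static Value *emitConstantOffset(const DataLayout &DL, Value *Ptr,
                                 uint64_t Offset) {
  // Before: Type *OffsetTy = DL.getIntPtrType(Ptr->getType());
  Type *OffsetTy = DL.getIndexType(Ptr->getType());
  return ConstantInt::get(OffsetTy, Offset);
}
```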
This commit does not impact
- C library-related rewriting (memcpy()), which operates under
the assumption that intptr_t == size_t. While all the mechanisms for
breaking this assumption now exist, doing so is outside the scope of
this commit.
- Code generation and below. Note that the use of getIntPtrType() in
CodeGenPrepare will be changed in a future commit.
- Usage of getIntPtrType() in any backend
Depends on D143435
Reviewed By: arichardson
Differential Revision: https://reviews.llvm.org/D143437
value() has undesired exception checking semantics and calls
__throw_bad_optional_access in libc++. Moreover, the API is unavailable without
_LIBCPP_NO_EXCEPTIONS on older Mach-O platforms (see
_LIBCPP_AVAILABILITY_BAD_OPTIONAL_ACCESS).
This commit fixes LLVMAnalysis and its dependencies.
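The replacement pattern, sketched:

```cpp
#include <cassert>
#include <optional>

// operator* / operator-> do not involve the exception machinery, but the
// caller must guarantee the optional is engaged (here with an assert).
static int getKnownSize(std::optional<int> MaybeSize) {
  // Instead of: return MaybeSize.value(); // may call __throw_bad_optional_access
  assert(MaybeSize && "expected a computed size");
  return *MaybeSize;
}
```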
All functions of this kind already use allocator attributes. This
also highlights that isMallocOrCallocLikeFn() doesn't really do
what the name says (notably, it does not check for allocator
attributes). The places it is used in are also very dubious, so
we'll want to remove it.
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
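For example:

```cpp
#include <optional>

// Before (llvm::Optional):
//   Optional<unsigned> Width = None;
// After (std::optional):
std::optional<unsigned> Width = std::nullopt;
```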
Update both memprof and callsite metadata to reflect inlined functions.
For callsite metadata this is simply a concatenation of each cloned
call's call stack with that of the inlined callsite.
For memprof metadata, each profiled memory info block (MIB) is either
moved to the cloned allocation call or left on the original allocation
call depending on whether its context matches the newly refined call
stack context on the cloned call. We also reapply context trimming
optimizations based on the refined set of contexts on each of the calls
(cloned and original), via utilities in MemoryProfileInfo.
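A hedged sketch of the callsite-metadata part only (hypothetical helper;
the memprof MIB matching is more involved):

```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/SmallVector.h"

using namespace llvm;

// Hypothetical helper: the cloned call's existing call-stack ids are
// concatenated with those of the inlined call site.
static SmallVector<uint64_t, 8>
concatCallStack(ArrayRef<uint64_t> ClonedCallStack,
                ArrayRef<uint64_t> InlinedCallsiteStack) {
  SmallVector<uint64_t, 8> NewStack(ClonedCallStack.begin(),
                                    ClonedCallStack.end());
  NewStack.append(InlinedCallsiteStack.begin(), InlinedCallsiteStack.end());
  return NewStack;
}
```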
Depends on D128142.
Differential Revision: https://reviews.llvm.org/D128143