value() has undesired exception checking semantics and calls
__throw_bad_optional_access in libc++. Moreover, the API is unavailable without
_LIBCPP_NO_EXCEPTIONS on older Mach-O platforms (see
_LIBCPP_AVAILABILITY_BAD_OPTIONAL_ACCESS).
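For illustration, the usual pattern on call sites is to assert presence and dereference rather than call value(); a minimal sketch with a hypothetical helper:

```cpp
#include <cassert>
#include <optional>

int getOrDie(const std::optional<int> &MaybeVal) {
  // Instead of: return MaybeVal.value();  // may call __throw_bad_optional_access
  assert(MaybeVal && "expected a value");
  return *MaybeVal; // no exception-based checking involved
}
```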
This fixes check-llvm.
Adds the ability to load InlineAdvisors as plugins. This allows inlining heuristics to be developed and distributed out of tree.
The PluginInlineAdvisorAnalysis class serves as the entry point for dynamic advisors. Plugins must register instances of this class to provide their own InlineAdvisor.
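As a rough sketch, a plugin might register its advisor from the pass-plugin entry point along these lines (the factory signature below is illustrative and its body is omitted; consult InlineAdvisor.h for the exact PluginInlineAdvisorAnalysis interface):

```cpp
#include "llvm/Analysis/InlineAdvisor.h"
#include "llvm/Passes/PassBuilder.h"
#include "llvm/Passes/PassPlugin.h"
using namespace llvm;

// Illustrative factory; the authoritative signature is defined by
// PluginInlineAdvisorAnalysis in InlineAdvisor.h.
static InlineAdvisor *buildMyAdvisor(Module &M, FunctionAnalysisManager &FAM,
                                     InlineParams Params, InlineContext IC);

extern "C" LLVM_ATTRIBUTE_WEAK PassPluginLibraryInfo llvmGetPassPluginInfo() {
  return {LLVM_PLUGIN_API_VERSION, "my-inline-advisor-plugin", "0.1",
          [](PassBuilder &PB) {
            PB.registerAnalysisRegistrationCallback(
                [](ModuleAnalysisManager &MAM) {
                  // Register the analysis that hands out our advisor factory.
                  MAM.registerPass(
                      [] { return PluginInlineAdvisorAnalysis(buildMyAdvisor); });
                });
          }};
}
```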
Reviewed By: mtrofin
Differential Revision: https://reviews.llvm.org/D139644
All current analyses ignore the context. We make the argument mandatory
for analyses, but optional for the query interface.
Reviewed By: nikic
Differential Revision: https://reviews.llvm.org/D136512
This patch replaces uses of LLVM_HAVE_TF_API with LLVM_HAVE_TFLITE in
a couple of CMakeLists.txt files.
Now that 842b0d0fe2dd142305a9461e50cdce9aff7f86bc has landed,
we have:
LLVM_HAVE_TF_API is defined if and only if LLVM_HAVE_TFLITE
evaluates to true
in the CMake variable world (assuming that you do not set
LLVM_HAVE_TF_API on the CMake invocation).
FWIW, the story is a little different in the C++ macro world, where:
LLVM_HAVE_TF_API is defined if and only if LLVM_HAVE_TFLITE is
defined
This is why edc83a15b45e6b91fce3f35622a6b0a6d34e5211 consisted only of
mechanical replacements.
Differential Revision: https://reviews.llvm.org/D140061
D138014 restricted the AliasSetTracker (AST) to work on immutable IR. This
means it is also safe to use a single BatchAA instance for the entire AST
lifetime, instead of only batching parts of individual queries.
The primary motivation for this is not compile-time, but rather
having a central place to control cross-iteration AA, which will
be used by D137958.
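A minimal usage sketch, assuming the AliasSetTracker constructor now takes a BatchAAResults that outlives the tracker:

```cpp
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/Analysis/AliasSetTracker.h"
#include "llvm/IR/InstIterator.h"
using namespace llvm;

void buildAliasSets(Function &F, AAResults &AA) {
  // One BatchAA instance shared by every query made over the AST's lifetime.
  BatchAAResults BatchAA(AA);
  AliasSetTracker Tracker(BatchAA);
  for (Instruction &I : instructions(F))
    Tracker.add(&I);
}
```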
Differential Revision: https://reviews.llvm.org/D137955
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
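For instance (hypothetical function, just to show the shape of the rewrite):

```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/ADT/Optional.h"
#include <optional>
using namespace llvm;

Optional<unsigned> findIndex(ArrayRef<int> Vals, int Target) {
  for (unsigned I = 0, E = Vals.size(); I != E; ++I)
    if (Vals[I] == Target)
      return I;
  // return None;       // before this patch
  return std::nullopt;  // after the mechanical replacement
}
```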
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
It's an artifact very specific to using TFAgents during training, so it
belongs with ModelUnderTrainingRunner.
Differential Revision: https://reviews.llvm.org/D139031
This restores commit 98ed423361de2f9dc0113a31be2aa04524489ca9 and
follow-on fix 00c22351ba697dbddb4b5bf0ad94e4bcea4b316b, which were
reverted in 5d938eb6f79b16f55266dd23d5df831f552ea082 due to an
MSVC bot failure. I've included a fix for that failure.
Differential Revision: https://reviews.llvm.org/D135714
This reverts commit 00c22351ba697dbddb4b5bf0ad94e4bcea4b316b.
This reverts commit 98ed423361de2f9dc0113a31be2aa04524489ca9.
MSVC appears to have a linking issue with this patch:
https://lab.llvm.org/buildbot/#/builders/123/builds/14137
I'll post more detail on D135714 momentarily.
This restores 47459455009db4790ffc3765a2ec0f8b4934c2a4, which was
reverted in commit 452a14efc84edf808d1e2953dad2c694972b312f, along with
fixes for a couple of bot failures.
Implements ThinLTO summary support for memprof-related metadata.
This includes support for the assembly format, and for building the
summary from IR during ModuleSummaryAnalysis.
To reduce space in both the bitcode format and the in-memory index,
we do two things:
1. We keep a single vector of all unique stack id hashes, and record the
index into this vector in the callsite and allocation memprof
summaries (see the sketch after this list).
2. When building the combined index during the LTO link, the callsite
and allocation memprof summaries are only kept on the FunctionSummary
of the prevailing copy.
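A self-contained sketch of the indexing idea from point 1 (illustrative types, not the actual summary classes):

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// One table of unique stack id hashes; per-callsite / per-allocation records
// store small indices into it instead of repeating the 64-bit hashes.
struct StackIdTable {
  std::vector<uint64_t> Ids;                    // all unique stack id hashes
  std::unordered_map<uint64_t, unsigned> Index; // hash -> index in Ids

  unsigned getOrAdd(uint64_t Hash) {
    auto [It, Inserted] = Index.try_emplace(Hash, Ids.size());
    if (Inserted)
      Ids.push_back(Hash);
    return It->second;
  }
};

struct CallsiteSummary {
  std::vector<unsigned> StackIdIndices; // indices into StackIdTable::Ids
};
```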
Differential Revision: https://reviews.llvm.org/D135714
This was done as a test for D137302, and it makes sense to push these changes.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D137493
For min and max reduction idioms, the identity (i.e., neutral) element
should be the data type's highest and lowest possible values, respectively.
The current implementation in IVDescriptors incorrectly returns -Inf for an
FMin reduction and +Inf for an FMax reduction. This patch fixes this bug,
which was causing incorrect reduction results in loops vectorized by LV.
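To see why the identity matters, here is a scalar sketch with hypothetical values: the identity must be neutral under the reduction operation.

```cpp
#include <algorithm>

double fminReduce(const double *Vals, int N, double Identity) {
  double Acc = Identity; // extra/empty lanes start from the identity value
  for (int I = 0; I != N; ++I)
    Acc = std::min(Acc, Vals[I]);
  return Acc;
}
// With Identity = +inf (a neutral element for fmin), the result is the true
// minimum of Vals. With the buggy Identity = -inf, the result is always -inf.
```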
Differential Revision: https://reviews.llvm.org/D137220
Use the DL-aware ConstantFoldCompareInstOperands() API instead of the
ConstantExpr API. The practical effect of this is that SCCP can
now fold comparisons that require DL.
Relative to the previous attempt, this also updates the
ValueLattice unit tests.
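The substitution is roughly the following (a sketch, not the actual SCCP code):

```cpp
#include "llvm/Analysis/ConstantFolding.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/InstrTypes.h"
using namespace llvm;

Constant *foldCmp(CmpInst::Predicate Pred, Constant *LHS, Constant *RHS,
                  const DataLayout &DL) {
  // Before: Constant *C = ConstantExpr::getCompare(Pred, LHS, RHS);
  // After: DataLayout-aware folding, which can also handle comparisons that
  // depend on pointer sizes or other DL-derived facts.
  return ConstantFoldCompareInstOperands(Pred, LHS, RHS, DL);
}
```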
-----
Resolve the TODO about incorrect getCompare() behavior. This can
be made more precise (e.g. by materializing the undef value and
performing constant folding on it), but for now just return an
unknown result to fix the correctness issue.
This should be NFC in terms of user-visible behavior, because the
only user of this method (SCCP) was already guarding against
UndefValue results.
This patch fixes:
llvm/utils/unittest/googletest/include/gtest/gtest.h:1526:11: error:
comparison of integers of different signs: 'const unsigned long' and
'const int' [-Werror,-Wsign-compare]
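The usual fix for this class of warning, shown on a hypothetical test (not the one touched here), is to make both operands of the comparison the same signedness:

```cpp
#include "gtest/gtest.h"
#include <vector>

TEST(SignCompareExample, MatchingSignedness) {
  std::vector<int> V{1, 2, 3};
  // EXPECT_EQ(3, V.size());  // warns: 'const unsigned long' vs. 'const int'
  EXPECT_EQ(3u, V.size());    // OK: both operands are unsigned
}
```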
We have similar code to translate a demanded elements mask for a shuffle's operands in multiple places. This patch adds a helper function to VectorUtils and updates a number of locations to use it directly.
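A self-contained sketch of the logic such a helper centralizes (simplified types; the real helper in VectorUtils operates on LLVM's APInt/ArrayRef):

```cpp
#include <cstddef>
#include <vector>

// Given a shuffle mask and the demanded elements of the result, compute which
// elements of each source operand are demanded.
void getShuffleDemandedElts(size_t SrcWidth, const std::vector<int> &Mask,
                            const std::vector<bool> &DemandedElts,
                            std::vector<bool> &DemandedLHS,
                            std::vector<bool> &DemandedRHS) {
  DemandedLHS.assign(SrcWidth, false);
  DemandedRHS.assign(SrcWidth, false);
  for (size_t I = 0, E = Mask.size(); I != E; ++I) {
    if (!DemandedElts[I] || Mask[I] < 0) // undemanded or undef lane
      continue;
    if (static_cast<size_t>(Mask[I]) < SrcWidth)
      DemandedLHS[Mask[I]] = true;            // lane comes from operand 0
    else
      DemandedRHS[Mask[I] - SrcWidth] = true; // lane comes from operand 1
  }
}
```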
Differential Revision: https://reviews.llvm.org/D136832
Followup to D135962 to rename remaining uses of
FunctionModRefBehavior to MemoryEffects. Does not touch API names
yet, but also updates the variable names FMRB/MRB to ME, to match the
new type name.
Currently, AAResultBase (from which alias analysis providers inherit)
stores a reference back to the AAResults aggregation it is part of,
so it can perform recursive alias analysis queries via
getBestAAResults().
This patch removes the back-reference from AAResultBase to AAResults,
and instead passes the used aggregation through the AAQueryInfo.
This can be used to perform recursive AA queries using the full
aggregation.
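A hedged sketch of what a recursive query looks like with this change, assuming the aggregation is exposed from AAQueryInfo as a member (written AAR here):

```cpp
#include "llvm/Analysis/AliasAnalysis.h"
using namespace llvm;

static AliasResult recursiveAliasQuery(const MemoryLocation &LocA,
                                       const MemoryLocation &LocB,
                                       AAQueryInfo &AAQI) {
  // Previously a provider would recurse via its stored getBestAAResults()
  // back-reference; now the aggregation travels with the query itself.
  return AAQI.AAR.alias(LocA, LocB, AAQI);
}
```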
Differential Revision: https://reviews.llvm.org/D94363
Spurious ref edges are ref edges that still exist in the call graph even
though the corresponding IR reference no longer exists. This can cause
issues when deleting a dead function which has a spurious ref edge
pointed at it because currently we expect the dead function's RefSCC to
be trivial.
In the case that the dead function's RefSCC is not trivial, remove all
ref edges from other nodes in the RefSCC to it.
Removing a ref edge can result in splitting RefSCCs. There's actually no
reason to revisit those RefSCCs because currently we only run passes on
SCCs, and we've already added all SCCs in the RefSCC to the worklist
(as opposed to removing the ref edge in
updateCGAndAnalysisManagerForPass(), which can modify the call graph of
SCCs we have not visited yet). We also don't expect that RefSCC
refinement will allow us to glean any more information for optimization
use. Also, doing so would drastically increase the complexity of
LazyCallGraph::removeDeadFunction(), requiring us to return a list of
invalidated RefSCCs and new RefSCCs to add to the worklist.
Fixes #56503
Reviewed By: asbirlea
Differential Revision: https://reviews.llvm.org/D133907
Currently, FunctionModRefBehavior tracks whether the function reads
or writes memory (ModRefInfo) and which locations it can access
(argmem, inaccessiblemem and other). This patch changes it to track
ModRef information per-location instead.
To give two examples of why this is useful:
* D117095 highlights a weakness of ModRef modelling in the presence
of operand bundles. For a memcpy call with a deopt operand bundle,
we want to say that it can read any memory, but only write argument
memory. This would allow such calls to be treated like any others.
However, we currently can't express this and have to say that it
can read or write any memory.
* D127383 would ideally be modelled as a separate threadid location,
where threadid Refs outside pre-split coroutines can be ignored
(like other accesses to constant memory). The current representation
does not allow modelling this precisely.
The patch as implemented is intended to be NFC, but there are some
obvious opportunities for improvements and simplification. To fully
capitalize on this we would also want to change the way we represent
memory attributes on functions, but that's a larger change, and I
think it makes sense to separate out the FunctionModRefBehavior
refactoring.
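A self-contained sketch of the per-location representation (illustrative; it mirrors the idea rather than the exact LLVM implementation):

```cpp
#include <cstdint>

enum class ModRefInfo : uint8_t { NoModRef = 0, Ref = 1, Mod = 2, ModRef = 3 };

enum Location { ArgMem = 0, InaccessibleMem = 1, Other = 2 };

class PerLocationModRef {
  uint32_t Data = 0; // two bits of ModRefInfo per location kind

public:
  ModRefInfo get(Location Loc) const {
    return static_cast<ModRefInfo>((Data >> (2 * Loc)) & 3);
  }
  PerLocationModRef &set(Location Loc, ModRefInfo MR) {
    Data &= ~(uint32_t(3) << (2 * Loc));
    Data |= uint32_t(MR) << (2 * Loc);
    return *this;
  }
};

// E.g. a memcpy with a deopt operand bundle could then be described as
// "may read anything, but writes only argument memory":
//   PerLocationModRef ME;
//   ME.set(ArgMem, ModRefInfo::ModRef)
//     .set(InaccessibleMem, ModRefInfo::Ref)
//     .set(Other, ModRefInfo::Ref);
```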
Differential Revision: https://reviews.llvm.org/D130896
In the TensorFlow C lib utilities, an error gets thrown if some features
haven't been passed to the model (due to differences in ordering, which
no longer exist with the transition to TFLite). However, this is
not currently the case when using TFLiteUtils. This patch makes some
minor changes to throw an error when not all inputs of the model have
been passed, which, when not handled, will result in a segfault within
TFLite.
Reviewed By: mtrofin
Differential Revision: https://reviews.llvm.org/D133451
This addresses a long-standing FIXME, with a non-FMF test that exposes
the bug shown in issue #57357.
It's possible that there's still a way to miscompile by
mis-identifying/mis-folding FP min/max patterns, but
this patch only exposes a couple of seemingly minor
regressions while preventing the broken transform.
TFLite is a lightweight, statically linkable [1] model evaluator, supporting a
subset of what the full TensorFlow library does, sufficient for the
types of scenarios we envision having. It is also faster.
We still use saved models as "source of truth" - 'release' mode's AOT
starts from a saved model; and the ML training side operates in terms of
saved models.
Using TFLite solves the following problems compared to using the full TF
C API:
- a compiler-friendly implementation for runtime-loadable (as opposed
to AOT-embedded) models: it's statically linked; it can be built via
cmake;
- solves an issue we had when building the compiler with both AOT and
full TF C API support, whereby, due to a packaging issue on the TF
side, we needed to have the pip package and the TF C API library at
the same version. We have no such constraints now.
The main liability is that it supports only a subset of what the full TF
framework does. We do not expect that to cause an issue, but should that
be the case, we can always revert to using the full framework
(after also figuring out a way to address the problems that motivated
the move to TFLite).
Details:
This change switches the development mode to TFLite. Models are still
expected to be placed in a directory - i.e. the parameters to clang
don't change; what changes is the directory content: we still need
an `output_spec.json` file; but instead of the saved_model protobuf and
the `variables` directory, we now just have one file, `model.tflite`.
The change includes a utility showing how to take a saved model and
convert it to TFLite, which it uses for testing.
The full TF implementation can still be built (though not side by side with TFLite). We
intend to remove it shortly, after patching downstream dependencies. The
build behavior, however, prioritizes TFLite - i.e. trying to enable both
full TF C API and TFLite will just pick TFLite.
[1] thanks to @petrhosek's changes to TFLite's CMake support and its deps!
This just shuffles implementations and declarations around. Now the
logger and the TF C API-based model evaluator are separate.
Differential Revision: https://reviews.llvm.org/D131116
getModRefInfo() queries currently track whether the result is a
MustAlias on a best-effort basis. The only user of this functionality
is the optimized memory access type in MemorySSA -- which in turn
has no users. Given that this functionality has not found a user
since it was introduced five years ago (in D38862), I think we
should drop it again.
The context is that I'm working to separate FunctionModRefBehavior
to track mod/ref for different location kinds (like argmem or
inaccessiblemem) separately, and the fact that ModRefInfo also has
an unrelated Must flag makes this quite awkward, especially as this
means that NoModRef is not a zero value. If we want to retain the
functionality, I would probably split getModRefInfo() results into
a part that just contains the ModRef information, and a separate
part containing a (best-effort) AliasResult.
Differential Revision: https://reviews.llvm.org/D130713
Adds a number of utilities that are used to help create and update
memprof-related metadata. These will be used during profile matching
and annotation, as well as by the inliner when updating the metadata.
Also adds unit tests for the utilities.
See also related RFCs:
RFC: Sanitizer-based Heap Profiler [1]
RFC: A binary serialization format for MemProf [2]
RFC: IR metadata format for MemProf [3]
(Note that the IR metadata format has changed from the RFC during
implementation, as described in the preceding patch adding the basic
metadata and verification support.)
Depends on D128141.
Differential Revision: https://reviews.llvm.org/D128854
This aims to solve the readnone problems in coroutines. See
https://discourse.llvm.org/t/address-thread-identification-problems-with-coroutine/62015
for details.
According to the discussion, we decide to fix the problem by inserting
isPresplitCoroutine() checks in different passes instead of
wrapping/unwrapping readnone attributes in CoroEarly/CoroCleanup passes.
In this direction, we might not be able to cover every case at first.
Let's take a "find and fix" strategy.
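An example of the kind of check being inserted (a sketch, not any specific pass's code):

```cpp
#include "llvm/IR/Function.h"
using namespace llvm;

static bool mayUseThreadIdBasedReasoning(const Function &F) {
  // Before the coroutine is split, later parts of the function body may end
  // up executing on a different thread, so don't treat thread-id-dependent
  // calls as constant (readnone-like) here.
  return !F.isPresplitCoroutine();
}
```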
Reviewed By: nikic, nhaehnle, jyknight
Differential Revision: https://reviews.llvm.org/D127383