Similar to the existing SelectionDAG::SplitVector helper, this helper creates the EXTRACT_ELEMENT nodes for the LO/HI halves of the scalar source.
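A minimal sketch of the idea (the helper name and signature here are illustrative, not necessarily those of the patch):

  std::pair<SDValue, SDValue> splitScalarHalves(SelectionDAG &DAG,
                                                const SDLoc &DL, SDValue N,
                                                EVT LoVT, EVT HiVT) {
    // EXTRACT_ELEMENT with index 0/1 yields the low/high half of a scalar.
    SDValue Lo = DAG.getNode(ISD::EXTRACT_ELEMENT, DL, LoVT, N,
                             DAG.getIntPtrConstant(0, DL));
    SDValue Hi = DAG.getNode(ISD::EXTRACT_ELEMENT, DL, HiVT, N,
                             DAG.getIntPtrConstant(1, DL));
    return {Lo, Hi};
  }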
Differential Revision: https://reviews.llvm.org/D147264
The patch custom lowers vector-type ISD::STRICT_FP_ROUND to RISCVISD::STRICT_FP_ROUND.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D147113
Extend the existing store(load()) removal code to account for intermediate truncates that some targets won't remove with canCombineTruncStore - we only care about the load/store MemoryVT.
Fixes regression from D146121
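A hedged sketch of the relaxed check (the helper name is illustrative): look through truncates on the stored value and compare the memory types instead of requiring the stored value to be exactly the loaded value.

  static bool isNoopStoreOfLoadSketch(StoreSDNode *ST) {
    SDValue Val = ST->getValue();
    // Look through intermediate truncates on the stored value.
    while (Val.getOpcode() == ISD::TRUNCATE)
      Val = Val.getOperand(0);
    auto *LD = dyn_cast<LoadSDNode>(Val);
    // The store is a no-op if it writes back exactly the bytes the load
    // read: same address, same MemoryVT, chained to the load's output chain.
    return LD && ST->getBasePtr() == LD->getBasePtr() &&
           ST->getMemoryVT() == LD->getMemoryVT() &&
           ST->getChain() == SDValue(LD, 1);
  }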
Switch DAGISel over to UniformityAnalysis, which was one of the last remaining users of the DivergenceAnalysis.
No explosions seen during internal testing so this looks like a smooth transition.
Reviewed By: sameerds
Differential Revision: https://reviews.llvm.org/D145918
The patch mainly does two things. The first is allowing scalable vector
ISD::STRICT_FP_EXTEND. The second is making RISC-V custom lower
strict_fpextend to riscv_strict_fpextend_vl, the strict version of
riscv_fpextend_vl.
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D145548
It turns out that there are relatively trivial, albeit rare, cases that
require a MaxDepth of more than 16 (see added test). However, we want to
avoid having to rely on a large fixed MaxDepth.
Since these cases are relatively rare, apply the following strategy:
1. Start with a low MaxDepth of 16 - if the entry node was not
reached, we can return (the common case).
2. If the entry node was reached, exponentially increase MaxDepth up
to some large limit that should cover all cases and guard against
stack exhaustion.
This retains the better performance with a low MaxDepth in the common
case, and in complex cases backs off and retries. On the whole, this is
preferable to starting with a large MaxDepth, which would unnecessarily
penalize the common case where a low MaxDepth is sufficient.
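An illustrative sketch of the retry loop (the helper and the limit here are hypothetical, not the exact code):

  void prePopulateVisitedSketch(const SDNode *Entry, const SDNode *From) {
    unsigned MaxDepth = 16;
    const unsigned DepthLimit = 16 << 8; // large cap to guard the stack
    while (true) {
      // boundedDFSReachesEntry() stands in for the depth-limited search.
      if (!boundedDFSReachesEntry(Entry, From, MaxDepth))
        return;                  // common case: a depth of 16 was enough
      assert(MaxDepth < DepthLimit && "subgraph deeper than expected");
      MaxDepth *= 2;             // rare case: retry with double the depth
    }
  }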
Reviewed By: dvyukov
Differential Revision: https://reviews.llvm.org/D145386
Move MaxDepth into the lambda, since it is not needed outside. This
fixes builds with compilers that complain about the missing capture:
error C3493: 'MaxDepth' cannot be implicitly captured because no
default capture mode has been specified
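An illustrative reduction of the problem (names are hypothetical; Visited is assumed to be a set declared in the surrounding code):

  // With the constant declared outside:
  //   const unsigned MaxDepth = 16;
  //   auto Visit = [&Visited](const SDNode *N, unsigned Depth) {
  //     return Depth <= MaxDepth && Visited.insert(N).second;
  //   };
  // MSVC emits C3493 because the lambda has an explicit capture list and no
  // default capture mode, so MaxDepth cannot be implicitly captured. Moving
  // the constant inside the lambda removes the need for any capture:
  auto Visit = [&Visited](const SDNode *N, unsigned Depth) {
    constexpr unsigned MaxDepth = 16;
    return Depth <= MaxDepth && Visited.insert(N).second;
  };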
Fixes: f693932fbea7 ("[SelectionDAG] Transitively copy NodeExtraInfo on RAUW")
During legalization of the SelectionDAG, some nodes are replaced with
arch-specific nodes. These may be complex nodes, where the root node no
longer corresponds to the node that should carry the extra info.
Fix the issue by copying extra info to the new node and all its new
transitive operands during RAUW. See code comments for more details.
This fixes the remaining pcsections-atomics.ll tests on X86.
v2: Optimize copyExtraInfo() deep copy. For now we assume that only
NodeExtraInfo that have PCSections set require a deep copy. Furthermore,
limit the depth of graph search while pre-populating the visited set,
assuming the to-be-replaced subgraph 'From' has limited complexity. An
assertion catches if the maximum depth needs to be increased.
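A conceptual sketch of the propagation step during RAUW (a hypothetical helper; the in-tree logic is more involved, see the code comments referenced above):

  static void copyExtraInfoToNewNodes(
      SelectionDAG &DAG, SDNode *From, SDNode *To,
      const SmallPtrSetImpl<const SDNode *> &OldNodes) {
    SmallVector<SDNode *, 16> Worklist = {To};
    SmallPtrSet<const SDNode *, 16> Visited;
    while (!Worklist.empty()) {
      SDNode *N = Worklist.pop_back_val();
      // Only walk nodes created as part of the replacement; pre-existing
      // nodes keep whatever extra info they already had.
      if (OldNodes.contains(N) || !Visited.insert(N).second)
        continue;
      DAG.copyExtraInfo(From, N); // carry !pcsections etc. over to N
      for (const SDValue &Op : N->op_values())
        Worklist.push_back(Op.getNode());
    }
  }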
Reviewed By: dvyukov
Differential Revision: https://reviews.llvm.org/D144677
One of the cleanups necessary for D136529 - another being how we're going to handle moving freeze through multiple result nodes (like uaddo and subcarry)
The patch tries to avoid duplicating combine work for VP SDNodes. The idea is to
introduce a MatchContext that verifies specific patterns and generates specific node
information. There are two MatchContexts in DAGCombiner: EmptyMatcher is for
normal nodes and VPMatcher is for VP nodes.
The idea of this patch comes from Simon Moll's proposal [0]. I only fixed some
minor issues and added a few new features in this patch.
[0]: c38a14484a
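A hedged sketch of the shape of the idea (class and member names here are illustrative): a combine written against a context compiles once and works for both plain and VP nodes, because the context decides how nodes are matched and rebuilt.

  struct EmptyContextSketch {
    SelectionDAG &DAG;
    // Plain nodes: match directly on the opcode.
    bool match(SDValue N, unsigned Opc) const { return N->getOpcode() == Opc; }
    // Plain nodes: build the node as usual.
    SDValue getNode(unsigned Opc, const SDLoc &DL, EVT VT, SDValue A,
                    SDValue B) {
      return DAG.getNode(Opc, DL, VT, A, B);
    }
  };
  // A VP context would instead match the corresponding ISD::VP_* opcode,
  // check that the mask/EVL operands are compatible, and thread them through
  // any node it creates.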
Reviewed By: craig.topper
Differential Revision: https://reviews.llvm.org/D141891
Currently, in TargetLowering, if the target does not support fminnum, we lower
to fminimum if neither operand could be a NaN. But this isn't quite correct,
because fminnum and fminimum treat +/-0 differently; so we also need to prove
that one of the operands isn't a zero, or that we don't have signed zeros.
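A hedged sketch of the stricter condition (the function name and surrounding structure are illustrative, not the exact code in the patch):

  SDValue expandFMinNumAsFMinimum(SDNode *N, SelectionDAG &DAG) {
    SDLoc DL(N);
    SDValue Op0 = N->getOperand(0), Op1 = N->getOperand(1);
    SDNodeFlags Flags = N->getFlags();
    // fminimum propagates NaNs while fminnum returns the non-NaN operand,
    // so neither operand may be a NaN.
    bool NoNaNs = DAG.isKnownNeverNaN(Op0) && DAG.isKnownNeverNaN(Op1);
    // fminimum orders -0.0 before +0.0 while fminnum may return either zero,
    // so the rewrite also needs signed zeros to be irrelevant or one operand
    // to be provably non-zero.
    bool ZeroSafe = Flags.hasNoSignedZeros() ||
                    DAG.isKnownNeverZeroFloat(Op0) ||
                    DAG.isKnownNeverZeroFloat(Op1);
    if (NoNaNs && ZeroSafe)
      return DAG.getNode(ISD::FMINIMUM, DL, N->getValueType(0), Op0, Op1,
                         Flags);
    return SDValue(); // fall back to another expansion
  }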
Differential Revision: https://reviews.llvm.org/D143256
Revert "[SelectionDAG] Add missing setValue calls in visitIntrinsicCall"
This reverts commit 0c64e1b68f36640ffe82fc90e6279c50617ad1cc.
This reverts commit 1142e6c7c795de7f80774325a07ed49bc95a48c9.
It spuriously added !pcsections where they shouldn't be. See added test
case in test/CodeGen/X86/pcsections.ll as an example. The reason is that
the SelectionDAG chains operations in a basic block as "operands"
pointing to preceding instructions. This resulted in setting the
metadata on _all_ instructions preceding the one that should have the
metadata.
Reverting for now because the semantics of !pcsections are currently
broken.
When adding pcsections to SDNodes, recursively add them to all values of
the node as well.
Reviewed By: melver
Differential Revision: https://reviews.llvm.org/D141048
These are essentially add/sub 1 with a clamping value.
AMDGPU has instructions for these. CUDA/HIP expose these as
atomicInc/atomicDec. Currently we use target intrinsics for these,
but those do not carry the ordering and syncscope. Add these to
atomicrmw so we can carry them and benefit from the regular
legalization processes.
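For reference, the clamping semantics of the CUDA builtins, shown as a non-atomic sketch of the read-modify-write each operation performs:

  unsigned atomicIncStep(unsigned Old, unsigned Limit) {
    // atomicInc: wrap to 0 once the value reaches the limit.
    return Old >= Limit ? 0 : Old + 1;
  }
  unsigned atomicDecStep(unsigned Old, unsigned Limit) {
    // atomicDec: wrap to the limit when the value is 0 or above the limit.
    return (Old == 0 || Old > Limit) ? Limit : Old - 1;
  }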
The original logic resulted in inserting an integer vector into
a floating point one and vice versa. The patch also adds the missing
assert that would have caught the issue.
Differential Revision: https://reviews.llvm.org/D142303
Now that D139525 fixes the Hexagon infinite loop, the stopgap can be
removed to provide more information about known bits in SPLAT_VECTOR
whose operands are smaller than the bit width (which is most of the
time).
Reviewed By: reames
Differential Revision: https://reviews.llvm.org/D141075
Sometimes we end up with shuffles in the DAG that would be
better represented as an `ISD::ZERO_EXTEND_VECTOR_INREG`,
and a failure to do so causes suboptimal codegen in a number of cases,
especially when we then cast the vector to a scalar.
I acknowledge the test changes here are rather underwhelming,
but as with all of codegen, it's always yak shaving,
and this is the most stripped-down version of the patch
that shows *some* effect without an insurmountable amount
of fallout to deal with. The next change resolves this regression.
The transformation will be extended in follow-ups.
The combiner for BUILD_VECTOR that merges consecutive
loads into a wide load had two issues:
- It didn't check that the input loads all have the
same input chain
- It didn't update nodes that are chained to the original
loads to be chained to the new load
This caused issues with bootstrap when
3c4d2a03968ccf5889bacffe02d6fa2443b0260f was committed.
This patch fixes the issue so that commit can be unblocked.
Differential Revision: https://reviews.llvm.org/D140046
The Assignment Tracking debug-info feature is outlined in this RFC:
https://discourse.llvm.org/t/rfc-assignment-tracking-a-better-way-of-specifying-variable-locations-in-ir
Add initial revision of assignment tracking analysis pass
---------------------------------------------------------
This patch squashes five individually reviewed patches into one:
#1 https://reviews.llvm.org/D136320
#2 https://reviews.llvm.org/D136321
#3 https://reviews.llvm.org/D136325
#4 https://reviews.llvm.org/D136331
#5 https://reviews.llvm.org/D136335
Patch #1 introduces 2 new files: AssignmentTrackingAnalysis.h and .cpp. The
two subsequent patches modify those files only. Patch #4 plumbs the analysis
into SelectionDAG, and patch #5 is a collection of tests for the analysis as
a whole.
The analysis was broken up into smaller chunks for review purposes but for the
most part the tests were written using the whole analysis. It would be possible
to break up the tests for patches #1 through #3 for the purpose of landing the
patches separately. However, most of them would require an update for each
patch. In addition, patch #4 - which connects the analysis to SelectionDAG - is
required by all of the tests.
If there is build-bot trouble, we might try a different landing sequence.
Analysis problem and goal
-------------------------
Variable values can be stored in memory, or be available as SSA values, or both.
Using the Assignment Tracking metadata, it's not possible to determine a
variable location just by looking at a debug intrinsic in
isolation. Instructions without any metadata can change the location of a
variable. The meaning of dbg.assign intrinsics changes depending on whether
there are linked instructions, and where they are relative to those
instructions. So we need to analyse the IR and convert the embedded information
into a form that SelectionDAG can consume to produce debug variable locations
in MIR.
The solution is a dataflow analysis which, aiming to maximise the memory
location coverage for variables, outputs a mapping of instruction positions to
variable location definitions.
API usage
---------
The analysis is named `AssignmentTrackingAnalysis`. It is added as a required
pass for SelectionDAGISel when assignment tracking is enabled.
The results of the analysis are exposed via `getResults` using the returned
`const FunctionVarLocs *`'s const methods:
const VarLocInfo *single_locs_begin() const;
const VarLocInfo *single_locs_end() const;
const VarLocInfo *locs_begin(const Instruction *Before) const;
const VarLocInfo *locs_end(const Instruction *Before) const;
void print(raw_ostream &OS, const Function &Fn) const;
Debug intrinsics can be ignored after running the analysis. Instead, variable
location definitions that occur between an instruction `Inst` and its
predecessor (or block start) can be found by looping over the range:
locs_begin(Inst), locs_end(Inst)
Similarly, variables with a memory location that is valid for their lifetime
can be iterated over using the range:
single_locs_begin(), single_locs_end()
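For illustration, a hedged sketch of a consumer loop (assuming `F` is the function and `FnVarLocs` is the `const FunctionVarLocs *` returned by `getResults`):

  for (const BasicBlock &BB : F) {
    for (const Instruction &Inst : BB) {
      // Location definitions that take effect immediately before Inst.
      for (auto *It = FnVarLocs->locs_begin(&Inst),
                *End = FnVarLocs->locs_end(&Inst);
           It != End; ++It) {
        const VarLocInfo &Loc = *It;
        // ... emit a variable location definition for Loc ...
        (void)Loc;
      }
    }
  }
  // Variables whose memory location is valid for their whole lifetime.
  for (auto *It = FnVarLocs->single_locs_begin(),
            *End = FnVarLocs->single_locs_end();
       It != End; ++It) {
    // ... handle single-location variables once, up front ...
  }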
Further detail
--------------
For an explanation of the dataflow implementation and the integration with
SelectionDAG, please see the reviews linked at the top of this commit message.
Reviewed By: jmorse
This teaches the DemandedElts version of isConstOrConstSplat about
SPLAT_VECTOR, in the same way as the non-DemandedElts version, by
calling the DemandedElts version from the non-DemandedElts version.
Differential Revision: https://reviews.llvm.org/D128919
The cases where the result type doesn't match the range type
are inadequately tested, but I'm not sure how to write such a
test. During the pre-legalize combine, any obviously optimizable
code gets handled so it's harder to test legalized extloads.
This was previously reverted due to a hang on a Hexagon bot. This turned out to be a bug in the Hexagon backend around how splat_vectors are legalized (which they're using for fixed length vectors!). I adjusted this patch to remove the implicit truncate support. This hides the Hexagon bug for now, and unblocks the rest of the change.
Original commit message:
This is the SelectionDAG equivalent of D136470, and is thus an alternate patch to D128159.
The basic idea here is that we track a single lane for scalable vectors which corresponds to an unknown number of lanes at runtime. This is enough for us to perform lane wise reasoning on many arithmetic operations.
This patch also includes an implementation for SPLAT_VECTOR as without it, the lane wise reasoning has no base case. The original patch which inspired this (D128159), also included STEP_VECTOR. I plan to do that as a separate patch.
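A hedged sketch of the SPLAT_VECTOR base case in computeKnownBits (the code in the patch differs in detail, e.g. it no longer tries to handle implicit truncation):

  case ISD::SPLAT_VECTOR: {
    // Every runtime lane is a copy of the scalar operand, so the single
    // tracked lane's known bits are just the operand's known bits resized
    // to the element width.
    Known = computeKnownBits(Op.getOperand(0), Depth + 1)
                .anyextOrTrunc(BitWidth);
    break;
  }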
Differential Revision: https://reviews.llvm.org/D137140
This patch mechanically replaces None with std::nullopt where the
compiler would warn if None were deprecated. The intent is to reduce
the amount of manual work required in migrating from Optional to
std::optional.
This is part of an effort to migrate from llvm::Optional to
std::optional:
https://discourse.llvm.org/t/deprecating-llvm-optional-x-hasvalue-getvalue-getvalueor/63716
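A representative instance of the mechanical substitution (the function below is purely illustrative):

  llvm::Optional<unsigned> findLaneSketch(bool Valid, unsigned Lane) {
    if (!Valid)
      return std::nullopt; // previously: return None;
    return Lane;
  }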
I had reverted this before the holiday week because a problem was reported with a related change (D137140 - scalable vector known bits in DAG). I had initially confused the two patches, and then decided to leave this reverted out of an abundance of caution. Now that we're through the holiday week, reapplying.
I also rolled in fixes for several post-commit review comments that hadn't landed with the original change.
Original commit message:
This is a continuation of the series of patches adding lane wise support for scalable vectors in various knownbit-esq routines.
The basic idea here is that we track a single lane for scalable vectors which corresponds to an unknown number of lanes at runtime. This is enough for us to perform lane wise reasoning on many arithmetic operations.
Differential Revision: https://reviews.llvm.org/D137141