Add 3 new iterator ranges to VPPhiAccessors
* incoming_values(): returns a range over the incoming
values of a phi
* incoming_blocks(): returns a range over the incoming
blocks of a phi
* incoming_values_and_blocks(): returns a range over pairs of
incoming values and blocks.
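For illustration, usage looks roughly like this (a sketch; the exact
element and block types are assumptions):

  // Sketch only; element and block types are assumptions.
  for (const VPValue *Inc : Phi->incoming_values())
    Inc->dump();
  for (const VPBlockBase *Pred : Phi->incoming_blocks())
    dbgs() << Pred->getName() << "\n";
  for (const auto &[Inc, Pred] : Phi->incoming_values_and_blocks()) {
    dbgs() << Pred->getName() << ": ";
    Inc->dump();
  }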
Depends on https://github.com/llvm/llvm-project/pull/124838.
PR: https://github.com/llvm/llvm-project/pull/138472
PredicateInfo needs some no-op to which the predicate can be attached.
Currently this is an ssa.copy intrinsic. This PR replaces it with a
no-op bitcast.
Using a bitcast is more efficient because we don't have the overhead of
an overloaded intrinsic. It also makes things slightly simpler overall.
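For illustration (a sketch, not the patch's exact code; the value and
variable names are made up):

  // Before (sketch): an overloaded intrinsic carried the predicate:
  //   %x.0 = call i32 @llvm.ssa.copy.i32(i32 %x)
  // After: a bitcast of a value to its own type, a legal no-op in LLVM
  // IR. Built directly, since IRBuilder would fold a same-type cast
  // back to the operand:
  auto *Copy = new BitCastInst(Op, Op->getType(), Op->getName() + ".0",
                               InsertPt);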
Split up the unclearly named prepareForVectorization transform. The
first part, buildVPlan0, adds the vector preheader, middle and scalar
preheader blocks, as well as the canonical induction recipes, and sets
the trip count. The new transform is run directly after initially
building the plain-CFG VPlan.
The remaining code handling early exits and adding the branch in the
middle block is renamed to handleEarlyExitsAndAddMiddleCheck and still
runs at the original position.
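Roughly, the resulting ordering looks like this (a sketch; the call
sites and signatures are assumptions):

  // Sketch of the ordering described above; signatures are assumptions.
  VPlanPtr Plan = buildPlainCFG(TheLoop); // initial plain-CFG VPlan
  buildVPlan0(*Plan);  // skeleton blocks, canonical IV, trip count
  // ... other construction steps ...
  handleEarlyExitsAndAddMiddleCheck(*Plan); // at the original position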
With the code movement, we only have to add the skeleton once to the
initial VPlan, and cloning will take care of the rest. It will also
enable moving other construction steps to work directly on VPlan0, like
adding resume phis.
PR: https://github.com/llvm/llvm-project/pull/150848
The CMake flag has been on by default for a month without any issues.
This makes the feature support in LLVM unconditional (but does not
enable the feature by default).
The initial VPlan closely reflects the original scalar loop, so using
VPWidenPHIRecipe here is premature. Widened phi recipes should only be
introduced together with other widened recipes.
PR: https://github.com/llvm/llvm-project/pull/150847
Noticed this when checking the invariant that all phis in the header
block must be header phis. I think there's a missing set of parentheses
here, since otherwise the cast<VPInstruction> only runs when RecipeI
isn't a VPInstruction.
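Illustratively (the actual condition checked is an assumption):

  // Without grouping, this binds as (!A) && B, so the cast only runs
  // when RecipeI is *not* a VPInstruction -- exactly the unsafe case:
  bool Buggy = !isa<VPInstruction>(RecipeI) &&
               cast<VPInstruction>(RecipeI)->getOpcode() == Instruction::PHI;
  // With parentheses, the cast is guarded by the isa<> check:
  bool Fixed = !(isa<VPInstruction>(RecipeI) &&
                 cast<VPInstruction>(RecipeI)->getOpcode() == Instruction::PHI);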
At this stage I'm just opportunistically deleting any code using
debug-intrinsic types, largely adjacent to calls to findDbgUsers. I'll
get to deleting that in probably one or two more commits.
Patch 1/4 adding bitcode support.
Store whether a function uses Key Instructions in its DISubprogram, so
that we don't need to rely on the -mllvm flag -dwarf-use-key-instructions
to decide whether to interpret Key Instructions metadata for is_stmt
placement at DWARF emission time. This makes bitcode support simple and
enables well-defined mixing of key-instructions and non-key-instructions
functions in an LTO context.
This patch adds the bit (using DISubprogram::SubclassData1).
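A sketch of the shape of the change (the accessor name is an
assumption):

  // Sketch; the accessor name is an assumption. The flag reuses the
  // spare DISubprogram::SubclassData1 bit.
  bool usesKeyInstructions() const { return SubclassData1; }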
PR 144104 and 144103 use it during DWARF emission.
PR 44102 adds bitcode support.
See pull request for overview of alternative attempts.
The `SeedCollector` class gets two new arguments: `CollectStores` and
`CollectLoads`. These replace the `sbvec-collect-seeds` cl::opt flag.
This is done to help with reusing the SeedCollector class in a future
pass. The cl::opt flag is moved to the seed collection pass:
Passes/SeedCollection.cpp
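The constructor shape is roughly as follows (a sketch; the exact
parameter list is an assumption):

  // Sketch; the exact parameter list is an assumption.
  SeedCollector(BasicBlock *BB, ScalarEvolution &SE, bool CollectStores,
                bool CollectLoads);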
Add a new getNumOperandsForOpcode helper to determine the number of
operands from the opcode. For now, it is used to verify the number of
operands at VPInstruction construction.
It returns -1 for a few opcodes where the number of operands cannot be
determined (GEP, Switch, PHI, Call).
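The shape is roughly as follows (a sketch; the cases shown are
illustrative assumptions, not the actual table):

  // Sketch; the cases shown are illustrative, not exhaustive.
  static unsigned getNumOperandsForOpcode(unsigned Opcode) {
    switch (Opcode) {
    case Instruction::Select:
      return 3;
    case Instruction::ICmp:
      return 2;
    // Variable number of operands, not determinable from the opcode:
    case Instruction::GetElementPtr:
    case Instruction::Switch:
    case Instruction::PHI:
    case Instruction::Call:
      return -1u;
    default:
      return 1; // remaining (unary) cases elided in this sketch
    }
  }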
This can also be used in a follow-up to determine if a VPInstruction is
masked based on the number of arguments.
PR: https://github.com/llvm/llvm-project/pull/142284
We currently generate code like this on x86 for a jump table with 5 elements,
assuming the call target is in rbx:
lea global_addr(%rip), %rax # initialize temporary rax with base address
mov %rbx, %rcx # initialize another temporary rcx for index (rbx will be used for the call, so it is still live)
sub %rax, %rcx # compute `address - base`
ror $0x3, %rcx # compute `(address - base) ror 3` i.e. index
cmp $0x4, %rcx # check index <= 4
ja .Ltrap
[...]
.Ltrap:
ud1
A more efficient instruction sequence that only needs one temporary
register and one fewer instruction is possible by subtracting the
address we are testing from the fixed address instead of vice versa:
lea (global_addr + 4*8)(%rip), %rax # initialize temporary rax with address of last element
sub %rbx, %rax # compute `last element - address`
ror $0x3, %rax # compute `(last element - address) ror 3` i.e. 4 - index
cmp $0x4, %rax # check 4 - index <= 4 (same as above)
ja .Ltrap
[...]
.Ltrap:
ud1
Change LowerTypeTests to generate that sequence. As a consequence, the
order of bits in the bitsets is reversed. Because it doesn't matter how we
do the subtraction on other architectures (to the best of my knowledge),
do so unconditionally.
Reviewers: fmayer, vitalybuka
Reviewed By: fmayer
Pull Request: https://github.com/llvm/llvm-project/pull/142887
Most of the recent development on the MemProfiler has been on the Use
part. The instrumentation has been quite stable for a while. As the
complexity of the use grows (with undrifting, diagnostics, etc.) I
figured it would be good to separate these two implementations.
This patch implements a simple pass that tries to de-duplicate packs. If
there are two packing patterns inserting the exact same values in the
exact same order, then we will keep the top-most one of them. Even
though such patterns may be optimized away by subsequent passes it is
still useful to do this within the vectorizer because otherwise the cost
estimation may be off, making the vectorizer overly conservative.
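In sketch form (the helper and class names here are made up):

  // Sketch only; helper and class names are made up.
  for (PackPattern &Pack : collectPacks(Region)) {
    if (PackPattern *Earlier = findEquivalentPackAbove(Pack)) {
      // Same values inserted in the same order: reuse the earlier
      // (top-most) pack and drop the duplicate.
      Pack.bottom()->replaceAllUsesWith(Earlier->bottom());
      Pack.erase();
    }
  }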
Update initial construction to connect the Plan's entry to the scalar
preheader during initial construction. This moves a small part of the
skeleton creation out of ILV and will also enable replacing
VPInstruction::ResumePhi with regular VPPhi recipes.
Resume phis need 2 incoming values to start with, the second being the
bypass value from the scalar preheader (and used to replicate the
incoming value for other bypass blocks). Adding the extra edge ensures
the incoming values for resume phis match the incoming blocks.
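For illustration, the initial shape of a resume phi is roughly (all
names here are made up):

  resume.phi = phi [ vec.resume, middle.block ], [ init, scalar.ph ]

where the second (bypass) incoming value is the one replicated for any
additional bypass blocks.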
PR: https://github.com/llvm/llvm-project/pull/140132
This patch moves the logic to manage IR flags to a separate VPIRFlags
class. For now, VPRecipeWithIRFlags is the only class that inherits
VPIRFlags. The new class allows for simpler passing of flags when
constructing recipes, simplifying the constructors for various recipes
(VPInstruction in particular, which now has just 2 constructors, one
taking an extra VPIRFlags argument).
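The resulting pair of constructors looks roughly like this (a sketch;
the exact signatures are assumptions):

  // Sketch; exact signatures are assumptions.
  VPInstruction(unsigned Opcode, ArrayRef<VPValue *> Operands, DebugLoc DL);
  VPInstruction(unsigned Opcode, ArrayRef<VPValue *> Operands,
                const VPIRFlags &Flags, DebugLoc DL);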
This mirrors the approach taken for VPIRMetadata and makes it easier to
extend in the future. The patch also adds a unified flagsValidForOpcode
to check if the flags in a VPIRFlags match the provided opcode.
PR: https://github.com/llvm/llvm-project/pull/140621
This reverts commit 793bb6b257fa4d9f4af169a4366cab3da01f2e1f.
The recommitted version contains a fix to make sure only the original
phis are processed in convertPhisToBlends, by collecting them in a
vector first. This fixes a crash when no mask is needed, because there
is only a single incoming value.
Original message:
This patch moves the logic to predicate and linearize a VPlan to a
dedicated VPlan transform. It mostly ports the existing logic directly.
There are a number of follow-ups planned in the near future to
further improve on the implementation:
* Edge and block masks are cached in VPPredicator, but the block masks
are still made available to VPRecipeBuilder, so they can be accessed
during recipe construction. As a follow-up, this should be replaced by
adding mask operands to all VPInstructions that need them and use that
during recipe construction.
* The mask caching in a map also means that this map needs updating each
time a new recipe replaces a VPInstruction; this would also be handled
by adding mask operands.
PR: https://github.com/llvm/llvm-project/pull/128420
Of the 128 bits of a buffer descriptor, only 48 are address bits, so
following the discussion on https://discourse.llvm.org/t/clarifiying-the-semantics-of-ptrtoint/83987/54,
the logical conclusion is to set the index width to 48 bits instead of
the current value of 128.
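Concretely, this is a change to the index-width (last) field of the
pointer component in the datalayout string, along the lines of (the
address-space number here is an assumption):

  p8:128:128:128:48   (size:abi:pref:index, index width now 48 bits)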
Most of the test changes are mechanical datalayout updates, but there
is one actual change: the ptrmask test now uses .i48 instead of .i128
and I had to update SelectionDAGBuilder to correctly extend the mask.
Reviewed By: krzysz00
Pull Request: https://github.com/llvm/llvm-project/pull/139419
Support cloning VPlans as they are created by the initial buildVPlan,
i.e. with the scalar header not yet connected and no trip count set.
This is not used yet but will be used in follow-up changes.
Also add a unit test for cloning & printing.
It's possible for virtual constant propagation in whole program
devirtualization to create unaligned loads. We originally saw this with
4-byte aligned relative vtables where we could store 8-byte values
before/after the vtable. But since the vtable is 4-byte aligned and we
unconditionally do an 8-byte load, we can't guarantee that the stored
constant will always be aligned to 8 bytes. We can also see this with
normal vtables whenever a 1-byte char is stored in the vtable because
the offset calculation for the GEP doesn't take into account the
original vtable alignment.
This patch introduces two changes to virtual constant propagation:
1. Do not propagate constants whose preferred alignment is larger than
the vtable alignment. This is required because if the constants are
stored in the vtable, we can only guarantee the constant will be stored
at an address at most aligned to the vtable's alignment.
2. Round up the offset used in the GEP before the load to ensure it's at
an address suitably aligned such that we can load from it.
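Change 2 amounts to something like the following (a sketch using
llvm::alignTo; the variable names are assumptions):

  // Round the byte offset up so that a LoadSize-byte load from it is
  // suitably aligned (variable names are assumptions).
  uint64_t AlignedOffset = alignTo(ByteOffset, Align(LoadSize));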
This patch updates tests to reflect this alignment change and adds some
cases for relative vtables.
Thread-local globals live, by default, in the default globals address
space, which may not be 0, so we need to overload @llvm.thread.pointer
to support other address spaces, and use the default globals address
space in Clang.
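A sketch of emitting the call with a non-zero globals address space
(assumes the intrinsic is overloaded on its pointer return type):

  // Sketch; assumes overloading on the pointer return type.
  unsigned GlobalsAS = DL.getDefaultGlobalsAddressSpace();
  PointerType *PtrTy = PointerType::get(Ctx, GlobalsAS);
  CallInst *ThreadPtr =
      Builder.CreateIntrinsic(PtrTy, Intrinsic::thread_pointer, {});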
783a846 changed VPScalarIVStepsRecipe to take 3 arguments (adding
VF explicitly) instead of 2, but didn't change the corresponding
pattern matcher.
This matcher was only used in vputils::isHeaderMask, and no test
ever reached that function with a ScalarIVSteps recipe for the
value being matched -- it was always a WideCanonicalIV. So the
matcher bailed out immediately before checking arguments and
asserting that the number of arguments in the recipe was the
same provided by the matcher.
Since the constructors for ScalarIVSteps take 3 values, we should
be safe to update the matcher and guard it with a dedicated gtest.
m_CanonicalIV() on the other hand is removed; as a phi recipe it
may not have a consistent number of arguments to match, only
requiring one (the start value) when being constructed with the
assumption that a second incoming value is added for the backedge
later. In order to keep the matcher we would need to add multiple
matchers with different numbers of arguments for it depending on
what phase of vplan construction we were in, and ensure that we
never reorder matcher usage vs. vplan transformation. Since the
main IR PatternMatch.h doesn't contain any matchers for PHI nodes,
I think we can just remove it and match via m_Specific() using the
VPValue we get from Plan.getCanonicalIV().
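The updated matcher usage is roughly (a sketch; the exact match
expression is an assumption):

  // Sketch; the exact match expression is an assumption.
  bool IsIVSteps =
      match(V, m_ScalarIVSteps(m_Specific(Plan.getCanonicalIV()),
                               m_SpecificInt(1), m_Specific(&Plan.getVF())));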
Move early-exit handling up front to original VPlan construction, before
forming regions.
This builds on https://github.com/llvm/llvm-project/pull/137709, which
adds exiting edges to the original VPlan, instead of adding exit blocks
later.
This retains the exit conditions early, and means we can handle early
exits before forming regions, without the reliance on VPRecipeBuilder.
Once we retain all exits initially, handling early exits before region
construction ensures the regions are valid; otherwise we would leave
edges exiting the region from blocks other than the latch.
Removing the reliance on VPRecipeBuilder removes the dependence on
mapping IR BBs to VPBBs and unblocks predication as VPlan transform:
https://github.com/llvm/llvm-project/pull/128420.
Depends on https://github.com/llvm/llvm-project/pull/137709 (included in
PR).
PR: https://github.com/llvm/llvm-project/pull/138393
This commit introduces a new unit test that covers a block address
cloning bug. Specifically, this test covers the bug tracked in
http://github.com/llvm/llvm-project/issues/47769 which has been resolved
in the meantime.
(NFC modulo debug output changes)
Add generic helper to print phi operands (incoming values) together with
their incoming blocks.
As more and more transforms are added, keeping track of the incoming
blocks of phis becomes more important. Print incoming blocks via
VPPhiAccessors to make debugging easier.
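The printed form now looks something like this (the exact format shown
is an assumption):

  EMIT vp<%p> = phi [ ir<%start>, preheader ], [ vp<%inc>, loop.latch ]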
Update initial VPlan construction to include exit conditions and edges.
The loop region is now first constructed without entry/exiting. Those
are set after inserting the region in the CFG, to preserve the original
predecessor/successor order of blocks.
For now, all early exits are disconnected before forming the regions,
but a follow-up will update uncountable exit handling to also happen
here. This is required to enable VPlan predication and remove the
dependence on any IR BBs
(https://github.com/llvm/llvm-project/pull/128420).
PR: https://github.com/llvm/llvm-project/pull/137709
Migrate their usage to the `AnyMem*Inst` family, and add an isAtomic()
query on the base class of that hierarchy. This matches the idiom we
use for, e.g., isAtomic on load and store instructions and the existing
isVolatile idiom on mem* routines, and allows us to more easily share
code between atomic and non-atomic variants.
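Typical use after the change (a sketch; it assumes only what is
described above):

  // Sketch: handle plain and element-wise-atomic transfers uniformly.
  if (auto *MTI = dyn_cast<AnyMemTransferInst>(&I))
    dbgs() << (MTI->isAtomic() ? "atomic" : "plain")
           << " memtransfer of length " << *MTI->getLength() << "\n";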
As with #138568, the goal here is to simplify the class hierarchy and
make it easier to reason about. I'm moving from easiest to hardest, and
will stop at some point when I hit "good enough". Longer term, I'd sorta
like to merge or reverse the naming on the plain Mem*Inst and the
AnyMem*Inst, but that's a much larger and more risky change. Not sure
I'm going to actually do that.
Add:
  mapAtomInstance - map the atom group number to a new group.
  RemapSourceAtom - apply the mapped atom group number to this instruction.
Modify:
  CloneBasicBlock - call mapAtomInstance on cloned instructions' DebugLocs
    if MapAtoms is true (the default). Setting it to false could lead to
    a degraded debugging experience; see the code comment.
Optimisations like loop unroll that duplicate instructions need to remap source
atom groups so that each duplicated source construct instance is considered
distinct when determining is_stmt locations.
This commit adds the remapping functionality and a unittest.
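Usage is along these lines (a sketch; the exact signature is an
assumption):

  // Sketch; the exact signature is an assumption.
  // Allocate fresh atom groups for instructions cloned into NewBB, so
  // the duplicated source construct instance is treated as distinct.
  for (Instruction &I : *NewBB)
    if (DebugLoc DL = I.getDebugLoc())
      mapAtomInstance(DL, VMap);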
RFC:
https://discourse.llvm.org/t/rfc-improving-is-stmt-placement-for-better-interactive-debugging/82668