This should be generally good for a lot of LSV cases, but the attached
test demonstrates a specific compile-time issue that appears when the
`CaptureTracking` default max-uses limit is raised.
Without batching alias analysis, this test takes 6 seconds to compile
in a release build; with it, less than a second. This is because the
mechanism that proves `NoAlias` in this case is very expensive
(`CaptureTracking.cpp`), and caching the result reduces the number of
calls to that mechanism from ~300,000 to 2 (run with -stats to see the
difference).
This test only demonstrates the compile-time issue if
`capture-tracking-max-uses-to-explore` is set to at least 1024, because
with the default value of 100, the `CaptureTracking` analysis is not
run, `NoAlias` is not proven, and the vectorizer gives up early.
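A minimal sketch of the batching idea (not the exact code in the patch;
the helper name is illustrative): route repeated queries through a
`BatchAAResults` wrapper so results are cached across queries over the
same, unmodified IR.

```cpp
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/Analysis/MemoryLocation.h"
#include "llvm/IR/Instructions.h"

using namespace llvm;

// Hypothetical helper: with BatchAAResults, proving NoAlias through the
// expensive capture-tracking path happens once per pointer pair instead of
// once per query, because the wrapper caches results.
static bool provedNoAlias(AAResults &AA, LoadInst *A, LoadInst *B) {
  BatchAAResults BatchAA(AA); // in the pass, one instance would span a scan
  return BatchAA.alias(MemoryLocation::get(A), MemoryLocation::get(B)) ==
         AliasResult::NoAlias;
}
```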
The tests in this directory all depend on the AMDGPU target being
present so we can let opt infer the data layout.
Reviewed By: arsenm
Pull Request: https://github.com/llvm/llvm-project/pull/137924
Strengthen out-of-bounds guarantees for buffer accesses by disallowing
buffer accesses with alignment lower than natural alignment.
This is needed specifically to address the edge case where an access
starts out-of-bounds and then enters in-bounds, as the hardware would
treat the entire access as out-of-bounds. This is normally not
needed for most users, but at least one graphics device extension
(VK_EXT_robustness2) has very strict requirements: in-bounds accesses
must return the correct value, and out-of-bounds accesses must return zero.
The direct consequence of this patch is that a buffer access at a
negative address is no longer merged by the load-store vectorizer with
one at a positive address, which fixes a CTS test.
Targets that do not care about the new behavior are advised to use the
new target feature relaxed-buffer-oob-mode, which preserves the behavior
from before this patch.
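As a rough illustration of the new rule (a sketch under the assumption
that "natural alignment" can be approximated by the ABI type alignment;
the helper name is made up, not the actual target hook):

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/Type.h"
#include "llvm/Support/Alignment.h"

using namespace llvm;

// Hypothetical check: a buffer access is only considered in-bounds-safe if
// it is at least naturally aligned, unless the relaxed-buffer-oob-mode
// feature opts back into the pre-patch behavior.
static bool isBufferAccessAllowed(Type *AccessTy, Align AccessAlign,
                                  const DataLayout &DL, bool RelaxedOOBMode) {
  if (RelaxedOOBMode) // target feature keeps the old, relaxed behavior
    return true;
  Align Natural = DL.getABITypeAlign(AccessTy); // stand-in for natural align
  return AccessAlign.value() >= Natural.value();
}
```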
[AMDGPU][NFC] Replace gfx940 and gfx941 with gfx942 in llvm/test
gfx940 and gfx941 are no longer supported. This is one of a series of PRs to remove them from the code base.
This PR uses gfx942 instead of gfx940 and gfx941 in the test RUN-lines (unless there is already a RUN-line for gfx942).
The only notable difference in the test output is that gfx942 does not force the use of sc0 and sc1 on stores while gfx940 and gfx941 do (cf. https://reviews.llvm.org/D149986).
For SWDEV-512631
This patch introduces a new method:
void Vectorizer::mergeEquivalenceClasses(EquivalenceClassMap &EQClasses)
const;
The method is called at the end of
Vectorizer::collectEquivalenceClasses() and is needed to merge
equivalence classes that differ only by their underlying objects (UO1
and UO2), where UO1 is the one-level-indirection underlying base for UO2.
This situation arises because of the limited lookup depth used when
searching for underlying bases with llvm::getUnderlyingObject(ptr).
Using any fixed lookup depth can result in the creation of multiple
equivalence classes that differ only by one level of indirection in
their bases.
The new approach merges equivalence classes if they have adjacent bases
(one level of indirection apart). If a series of equivalence classes
forms a ladder of such one-level indirections, they are all merged into
a single equivalence class. This provides more opportunities for the
load-store vectorizer to generate better vectors.
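A simplified sketch of the merging idea (the real EquivalenceClassMap key
also includes the address space, element size, and load/store kind; the
container and helper below are illustrative):

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Instruction.h"
#include <map>

using namespace llvm;

using SimpleEQClasses = std::map<const Value *, SmallVector<Instruction *, 8>>;

// If peeling one more level of indirection off a class's base yields the
// base of another class, fold the former into the latter; iterating to a
// fixed point collapses a whole ladder of one-level indirections.
static void mergeAdjacentClasses(SimpleEQClasses &Classes) {
  bool Changed = true;
  while (Changed) {
    Changed = false;
    for (auto It = Classes.begin(); It != Classes.end();) {
      const Value *DeeperBase = getUnderlyingObject(It->first, /*MaxLookup=*/1);
      auto Target = Classes.find(DeeperBase);
      if (DeeperBase != It->first && Target != Classes.end()) {
        Target->second.append(It->second.begin(), It->second.end());
        It = Classes.erase(It);
        Changed = true;
      } else {
        ++It;
      }
    }
  }
}
```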
---------
Signed-off-by: Klochkov, Vyacheslav N <vyacheslav.n.klochkov@intel.com>
LoadStoreVectorizer hangs on certain examples when its reorder function
goes into a cycle. Detect this cycle and explicitly forbid it with an
assert, and document the resulting crash in a test case under AArch64.
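Illustrative only (not the pass's actual reorder code): the shape of such
a cycle guard is a visited set plus an assert, along these lines:

```cpp
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instruction.h"
#include <cassert>

using namespace llvm;

// Hypothetical worklist-style reorder loop: assert if the same instruction
// is reached twice, which is the cycle the new test documents.
static void reorderWithCycleCheck(SmallVectorImpl<Instruction *> &Worklist) {
  SmallPtrSet<Instruction *, 16> Visited;
  while (!Worklist.empty()) {
    Instruction *I = Worklist.pop_back_val();
    bool Inserted = Visited.insert(I).second;
    assert(Inserted && "reorder revisited an instruction (cycle)");
    (void)Inserted;
    // ... move I and enqueue the operands that still need reordering ...
  }
}
```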
Port CodeGenPrepare to new pass manager and dependency
BasicBlockSectionsProfileReader
Fixes: #75380
Co-authored-by: Krishna-13-cyber <84722531+Krishna-13-cyber@users.noreply.github.com>
Revert e0c554ad87d18dcbfcb9b6485d0da800ae1338d1 "Port CodeGenPrepare to new pass manager (and BasicBlockSectionsProfil… (#75380)"
Revert #75380 and #77054 as they were breaking EXPENSIVE_CHECKS buildbots: https://lab.llvm.org/buildbot/#/builders/104
Port CodeGenPrepare to new pass manager and dependency
BasicBlockSectionsProfileReader
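For reference, the general shape of a function pass under the new pass
manager (a generic skeleton, not the literal port) looks like this:

```cpp
#include "llvm/IR/Function.h"
#include "llvm/IR/PassManager.h"

using namespace llvm;

// New-PM passes expose run(Function &, FunctionAnalysisManager &) and request
// analyses from the manager on demand, instead of implementing the legacy
// getAnalysisUsage()/runOnFunction() pair.
class ExamplePreparePass : public PassInfoMixin<ExamplePreparePass> {
public:
  PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM) {
    bool Changed = false;
    // e.g. auto &TTI = FAM.getResult<TargetIRAnalysis>(F);
    // ... transformation logic ...
    return Changed ? PreservedAnalyses::none() : PreservedAnalyses::all();
  }
};
```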
Fixes: #64560
Co-authored-by: Krishna-13-cyber <84722531+Krishna-13-cyber@users.noreply.github.com>
These tests rely on SCEV recognizing an "or" with no common bits set
as an "add". Add the disjoint flag to the relevant or instructions
in preparation for switching SCEV to use the flag instead of the
ValueTracking query. The IR with the disjoint flag matches what
InstCombine would produce.
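The equivalence being relied on, in plain terms: if two operands have no
common set bits, OR cannot produce carries, so it behaves exactly like
ADD. A tiny standalone check:

```cpp
#include <cassert>
#include <cstdint>

int main() {
  uint32_t A = 0xA0, B = 0x05;  // no bit is set in both values
  assert((A & B) == 0);         // "disjoint" in IR terms
  assert((A | B) == (A + B));   // hence or == add; no carries are possible
  return 0;
}
```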
In commit 2be0abb7fe72ed453 (D149893) the load-store vectorizer was
reimplemented. One thing that can happen with the new LSV is that
it can increase the alignment of alloca and global objects. However,
the code comments indicate that the intention was only to increase
the alignment of allocas.
Now we use stripPointerCasts to analyse whether the load/store really
is accessing an alloca (the same approach getOrEnforceKnownAlignment
uses), and we only try to change the alignment if we find an alloca
instruction. This way the code matches the code comments better, and
we won't change the alignment of non-stack variables to the
"StackAdjustedAlignment".
Differential Revision: https://reviews.llvm.org/D152386
Previously, getConstantOffset could return an APInt with a different
bitwidth than the input pointers. For example, we might be loading an
opaque 64-bit pointer, but stripAndAccumulateInBoundsConstantOffsets
might give a 32-bit offset.
This was OK in most cases because in gatherChains, we cast the APInt
back to the original ASPtrBits.
But it was not OK when considering selects. We'd call getConstantOffset
twice and compare the resulting APInts, which might not have the same
bitwidth.
This fixes that. Now getConstantOffset always returns offsets with the
correct width, so we don't need the hack of casting it in gatherChains,
and it works correctly when we're handling selects.
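Conceptually the fix amounts to normalizing every returned offset to the
pointer's index width before it is stored or compared, along these lines
(illustrative helper, not the actual code):

```cpp
#include "llvm/ADT/APInt.h"

using namespace llvm;

// Force an accumulated constant offset back to the index width of the
// pointer's address space, so two offsets computed for the same pointer
// pair always have matching bitwidths.
static APInt normalizeOffsetWidth(const APInt &Offset, unsigned IndexWidth) {
  return Offset.sextOrTrunc(IndexWidth);
}
```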
Differential Revision: https://reviews.llvm.org/D151640
Previously we used the later of GEPA or GEPB. This is hacky because
really we should be using the later of the two load/store instructions
being considered. But also it's flat-out incorrect, because GEPA and
GEPB might be in different BBs, in which case we cannot ask which one
comes last (assertion failure,
https://reviews.llvm.org/D149893#4378332).
Fixed; now we use the correct context instruction.
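A sketch of the corrected choice (illustrative): the two memory
instructions are known to be in the same block, so picking the later of
them as the context is well defined.

```cpp
#include "llvm/IR/Instruction.h"
#include <cassert>

using namespace llvm;

// Unlike GEPA/GEPB, the two loads/stores under consideration share a basic
// block, so comesBefore() can safely order them.
static Instruction *chooseContextInst(Instruction *MemA, Instruction *MemB) {
  assert(MemA->getParent() == MemB->getParent() && "must be in the same BB");
  return MemA->comesBefore(MemB) ? MemB : MemA;
}
```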
Differential Revision: https://reviews.llvm.org/D151630
The motivation for this change is a workload generated by the XLA compiler
targeting NVIDIA GPUs.
This kernel has a few hundred i8 loads and stores. Merging is critical for
performance.
The current LSV doesn't merge these well because it only considers instructions
within a block of 64 loads+stores. This limit is necessary to contain the
O(n^2) behavior of the pass. I'm hesitant to increase the limit, because this
pass is already one of the slowest parts of compiling an XLA program.
So we rewrite basically the whole thing to use a new algorithm. Before, we
compared every load/store to every other to see if they're consecutive. The
insight (from tra@) is that this is redundant. If we know the offset from PtrA
to PtrB, then we don't need to compare PtrC to both of them in order to tell
whether C may be adjacent to A or B.
So that's what we do. When scanning a basic block, we maintain a list of
chains, where we know the offset from every element in the chain to the first
element in the chain. Each instruction gets compared only to the leaders of
all the chains.
In the worst case, this is still O(n^2), because all chains might be of length
1. To prevent compile time blowup, we only consider the 64 most recently used
chains. Thus we do no more comparisons than before, but we have the potential
to make much longer chains.
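A toy version of the chain idea (the offset computation is a stand-in for
the pass's real logic and is assumed, not an existing API):

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instruction.h"
#include <cstdint>
#include <optional>

using namespace llvm;

// Each chain records every member's offset from the chain's first element
// (the leader), so a new access is compared against one leader per chain
// rather than against every other access seen so far. The real pass also
// caps how many chains it scans (e.g. the 64 most recently used).
struct ChainElem {
  Instruction *Inst;
  int64_t OffsetFromLeader;
};
using Chain = SmallVector<ChainElem, 8>;

static void addToChains(
    Instruction *I, SmallVectorImpl<Chain> &Chains,
    std::optional<int64_t> (*offsetFrom)(Instruction *Leader, Instruction *)) {
  for (Chain &C : Chains)
    if (auto Off = offsetFrom(C.front().Inst, I)) {
      C.push_back({I, *Off});
      return;
    }
  Chains.push_back({{I, /*OffsetFromLeader=*/0}}); // I becomes a new leader
}
```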
This rewrite affects many tests. The changes to tests fall into two
categories.
1. The old code had what appears to be a bug when deciding whether a misaligned
vectorized load is fast. Suppose TTI reports that load <i32 x 4> align 4
has relative speed 1, and suppose that load i32 align 4 has relative speed
32.
The intent of the code seems to be that we prefer the scalar load, because
it's faster. But the old code would choose the vectorized load.
accessIsMisaligned would set RelativeSpeed to 0 for the scalar load (and not
even call into TTI to get the relative speed), because the scalar load is
aligned.
After this patch, we will prefer the scalar load if it's faster.
2. This patch changes the logic for how we vectorize. Usually this results in
vectorizing more.
Explanation of changes to tests:
- AMDGPU/adjust-alloca-alignment.ll: #1
- AMDGPU/flat_atomic.ll: #2, we vectorize more.
- AMDGPU/int_sideeffect.ll: #2, there are two possible locations for the call to @foo, and the pass is brittle to this. Before, we'd vectorize in case 1 and not case 2. Now we vectorize in case 2 and not case 1. So we just move the call.
- AMDGPU/adjust-alloca-alignment.ll: #2, we vectorize more
- AMDGPU/insertion-point.ll: #2 we vectorize more
- AMDGPU/merge-stores-private.ll: #1 (undoes changes from git rev 86f9117d476, which appear to have hit the bug from #1)
- AMDGPU/multiple_tails.ll: #1
- AMDGPU/vect-ptr-ptr-size-mismatch.ll: Fix alignment (I think related to #1 above).
- AMDGPU CodeGen: I have difficulty commenting on these changes, but many of them look like #2, we vectorize more.
- NVPTX/4x2xhalf.ll: Fix alignment (I think related to #1 above).
- NVPTX/vectorize_i8.ll: We don't generate <3 x i8> vectors on NVPTX because they're not legal (and eventually get split)
- X86/correct-order.ll: #2, we vectorize more, probably because of changes to the chain-splitting logic.
- X86/subchain-interleaved.ll: #2, we vectorize more
- X86/vector-scalar.ll: #2, we can now vectorize scalar float + <1 x float>
- X86/vectorize-i8-nested-add-inseltpoison.ll: Deleted the nuw test because it was nonsensical. It was doing `add nuw %v0, -1`, but this is equivalent to `add nuw %v0, 0xffff'ffff`, which is equivalent to asserting that %v0 == 0.
- X86/vectorize-i8-nested-add.ll: Same as nested-add-inseltpoison.ll
Differential Revision: https://reviews.llvm.org/D149893
This is a follow-up to b71edfaa4ec3c998aadb35255ce2f60bba2940b0
since I forgot the lit.local.cfg files in that one.
Reformatting is done with `black`.
If you end up having problems merging this commit because you
have made changes to a Python file, the best way to handle that
is to run `git checkout --ours <yourfile>` and then reformat it
with `black`.
If you run into any problems, post to discourse about it and
we will try to help.
RFC Thread below:
https://discourse.llvm.org/t/rfc-document-and-standardize-python-code-style
Reviewed By: barannikov88, kwk
Differential Revision: https://reviews.llvm.org/D150762
Re-land D145441 with data layout upgrade code fixed to not break OpenMP.
This reverts commit 3f2fbe92d0f40bcb46db7636db9ec3f7e7899b27.
Differential Revision: https://reviews.llvm.org/D149776
Per discussion at
https://discourse.llvm.org/t/representing-buffer-descriptors-in-the-amdgpu-target-call-for-suggestions/68798,
we define two new address spaces for AMDGCN targets.
The first is address space 7, a non-integral address space (which was
already in the data layout) that has 160-bit pointers (which are
256-bit aligned) and uses a 32-bit offset. These pointers combine a
128-bit buffer descriptor and a 32-bit offset, and will be usable with
normal LLVM operations (load, store, GEP). However, they will be
rewritten out of existence before code generation.
The second of these is address space 8, the address space for "buffer
resources". These will be used to represent the resource arguments to
buffer instructions, and new buffer intrinsics will be defined that
take them instead of <4 x i32> as resource arguments. These pointers
(ptr addrspace(8)) are 128 bits long (with the same alignment). They
must not be used as the arguments to getelementptr or
otherwise used in address computations, since they can have
arbitrarily complex inherent addressing semantics that can't be
represented in LLVM. Even though, like their address space 7 cousins,
these pointers have deterministic ptrtoint/inttoptr semantics, they
are defined to be non-integral in order to prevent optimizations that
rely on pointers being a value in [0, addr_max] from applying to them.
Future work includes:
- Defining new buffer intrinsics that take ptr addrspace(8) resources.
- A late rewrite to turn address space 7 operations into buffer
intrinsics and offset computations.
This commit also updates the "fallback address space" for buffer
intrinsics to the buffer resource, and updates the alias analysis
table.
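To make the pointer parameters concrete, here is a small standalone
illustration querying a trimmed-down layout string containing just the
two new pointer specifications (not the full AMDGCN data layout):

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  // p7: 160-bit pointers, 256-bit aligned, 32-bit index (offset) width.
  // p8: 128-bit buffer-resource pointers.
  DataLayout DL("p7:160:256:256:32-p8:128:128");
  outs() << DL.getPointerSizeInBits(7) << "\n"; // 160
  outs() << DL.getIndexSizeInBits(7) << "\n";   // 32: what GEP arithmetic uses
  outs() << DL.getPointerSizeInBits(8) << "\n"; // 128
  return 0;
}
```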
Depends on D143437
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D145441
Currently we happen to split a chain of 12xi8 accesses into 6xi8 + 6xi8, which
produces rather suboptimal code.
This change attempts to split off non-multiples of 4 bytes at the end and,
if that does not work, splits at the smaller power-of-2 boundary.
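A toy model of that splitting heuristic, with sizes in bytes (assumed
behavior for illustration, not the pass's exact code):

```cpp
#include <bit>
#include <cassert>
#include <cstdint>

// First try to peel off a tail that is not a multiple of 4 bytes; otherwise
// split at the largest power of two below the chain size, so a 12-byte chain
// becomes 8 + 4 instead of 6 + 6.
static uint64_t chooseSplitPoint(uint64_t ChainBytes) {
  assert(ChainBytes > 1 && "nothing to split");
  if (uint64_t Tail = ChainBytes % 4)
    return ChainBytes - Tail;                    // e.g. 13 -> 12 + 1
  uint64_t Pow2 = std::bit_floor(ChainBytes);
  return Pow2 == ChainBytes ? Pow2 / 2 : Pow2;   // e.g. 12 -> 8 + 4
}
```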
Differential Revision: https://reviews.llvm.org/D147976
Many uses of getIntPtrType() were using that type to calculate the
needed type for GEP offset arguments. However, some time ago,
DataLayout was extended to support pointers where the size of the
pointer is not equal to the size of the values used to index it.
Much code was already migrated to, for example, use getIndexSizeInBits
instead of getPtrSizeInBits, but some rewrites still used
getIntPtrType() to get the type for GEP offsets.
This commit changes uses of getIntPtrType() to getIndexType() where
they are involved in a GEP-related calculation.
In at least one case (bounds check insertion) this resolves a compiler
crash that the new test added here would previously trigger.
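The pattern the commit moves toward, sketched (the helper is
illustrative):

```cpp
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Derive the GEP offset type from the pointer's index width via
// getIndexType, rather than from getIntPtrType, which tracks the (possibly
// wider) pointer size.
static Value *emitByteGEP(IRBuilder<> &B, const DataLayout &DL, Value *Ptr,
                          uint64_t ByteOffset) {
  Type *IdxTy = DL.getIndexType(Ptr->getType());
  return B.CreateGEP(B.getInt8Ty(), Ptr, ConstantInt::get(IdxTy, ByteOffset));
}
```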
This commit does not impact
- C library-related rewriting (memcpy()), which operates under
the assumption that intptr_t == size_t. While all the mechanisms for
breaking this assumption now exist, doing so is outside the scope of
this commit.
- Code generation and below. Note that the use of getIntPtrType() in
CodeGenPrepare will be changed in a future commit.
- Usage of getIntPtrType() in any backend
Depends on D143435
Reviewed By: arichardson
Differential Revision: https://reviews.llvm.org/D143437
Based on experimentation on gfx906, gfx908, gfx90a, and gfx1030, wider global loads / stores are more performant than multiple narrower ones independent of alignment -- this is especially true when combining 8-bit loads / stores, in which case the speedup was usually 2x across all alignments.
Differential Revision: https://reviews.llvm.org/D145170
Change-Id: I6ee6c76e6ace7fc373cc1b2aac3818fc1425a0c1
Over the past day or so, I've taken a large swing at our tests
and reduced the number of tests that were still using the old syntax
from ~1800 to just 200.
Left to handle (as seen in this patch):
* Transforms/LSR
* Transforms/CGP
* Transforms/TypePromotion
* Transforms/HardwareLoops
* Analysis/*
* some misc.
I think this is the right point to start actively refusing
to honor the old syntax, except for the old tests,
to prevent the old syntax from creeping back in.
Thus, let's add a temporary default-off flag, and if it is not
passed, refuse to accept the old syntax.
The tests that still need porting are annotated with this flag.
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D139647
Use shufflevector to do the subvector extracts. This allows a lot more
load merging on AMDGPU and also on NVPTX when <2 x half> is involved.
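The kind of lowering this enables, sketched with IRBuilder (illustrative,
not the pass's exact code):

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Extract elements [Start, Start + Len) of a wide vector with a single
// shufflevector instead of a chain of extractelement/insertelement pairs.
static Value *extractSubvector(IRBuilder<> &B, Value *WideVec, unsigned Start,
                               unsigned Len) {
  SmallVector<int, 8> Mask;
  for (unsigned I = 0; I != Len; ++I)
    Mask.push_back(Start + I);
  return B.CreateShuffleVector(WideVec, Mask);
}
```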
Differential Revision: https://reviews.llvm.org/D117219
Rather than checking for nounwind in particular, make sure the
instruction is guaranteed to transfer execution, which will also
handle non-willreturn calls correctly.
Fixes https://github.com/llvm/llvm-project/issues/52950.
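The stronger check, roughly (the wrapper name is illustrative;
isGuaranteedToTransferExecutionToSuccessor is the real ValueTracking
helper):

```cpp
#include "llvm/Analysis/ValueTracking.h"
#include "llvm/IR/Instruction.h"

using namespace llvm;

// Instead of testing calls for nounwind specifically, ask whether the
// instruction is guaranteed to transfer execution to its successor; this
// also rejects non-willreturn (possibly non-terminating) calls.
static bool canScanPast(const Instruction &I) {
  return isGuaranteedToTransferExecutionToSuccessor(&I);
}
```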