This is a follow-up to c0a264e, but note that there is a functional
difference here: the root changes for the memcpy.inline case. This
difference appears to have been accidental, but I have kept it separate
to facilitate review in case there's something I'm missing here.
This flag instructs the scheduler to stop scheduling after N
instructions, but in the debug output it appears to schedule N+1
instructions, e.g.
$ llc -misched-cutoff=10 -debug-only=machine-scheduler
example.ll 2>&1 | grep "^Scheduling SU" | wc -l
11
as it calls pickNode before calling checkSchedLimit.
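A minimal sketch of why the log over-counts, assuming the loop structure
of ScheduleDAGMI::schedule in llvm/lib/CodeGen/MachineScheduler.cpp
(details elided; this is an illustration, not the actual code):
```
while (SUnit *SU = SchedImpl->pickNode(IsTopNode)) {
  // pickNode() has already printed "Scheduling SU(...)" by this point,
  // so the log shows one extra line before the cutoff takes effect.
  if (!checkSchedLimit())
    break;
  scheduleMI(SU, IsTopNode);
}
```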
Handles bitreverse for vector types that were previously falling back
to SelectionDAG. This includes vectors of 8-bit elements wider than 128
bits or narrower than 64 bits (<32 x i8>, <4 x i8>), as well as odd
vector types (<9 x i8>).
This provides the `disable-schedmodel-in-sched-mi` flag. Using this, we
can disable the SchedModel / Itineraries during scheduling, which has
the effect of not using any latency / hardware resource information for
scheduling decisions.
The existing `schedmodel` flag disables the `SchedModel` for all passes;
the new flag disables it only for scheduling, preserving the behavior of
other passes (e.g. MachineLICM). This is conceptually similar to other
flags like `enable-aa-sched-mi`.
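Presumably the flag follows the usual cl::opt pattern of its
`enable-aa-sched-mi` sibling; a hedged sketch (the exact declaration,
location, and default may differ from the patch):
```
// Requires llvm/Support/CommandLine.h; usable via llc like any cl::opt.
static cl::opt<bool> DisableSchedModel(
    "disable-schedmodel-in-sched-mi", cl::Hidden, cl::init(false),
    cl::desc("Disable use of the SchedModel/Itineraries during machine "
             "instruction scheduling"));
```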
This is a reland of #138434 except that:
- the bits for llvm/lib/CodeGen/RenameIndependentSubregs.cpp
have been dropped because they caused a test failure under asan, and
- the bits for llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp have
been improved with structured bindings.
The current implementation always creates a 1-bit constant for the
result of the `G_ICMP`, which causes issues if the destination register
size is larger than that. With asserts enabled, it crashes in
`buildConstant`:
```
llvm/lib/CodeGen/GlobalISel/MachineIRBuilder.cpp:322: virtual MachineInstrBuilder llvm::MachineIRBuilder::buildConstant(const DstOp &, const ConstantInt &): Assertion `EltTy.getScalarSizeInBits() == Val.getBitWidth() && "creating constant with the wrong size"' failed.
```
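A hedged sketch of the natural fix, sizing the constant to the
destination register's type instead of hard-coding s1 (variable names
are illustrative, not the actual patch):
```
LLT DstTy = MRI.getType(DstReg);  // may be s32, not s1
auto Cst = MIRBuilder.buildConstant(DstTy, KnownVal ? 1 : 0);
```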
This reverts commit a9699a334bc9666570418a3bed9520bcdc21518b.
Breaks CodeGen/AMDGPU/collapse-endcf.ll in several configs
(sanitizer builds; macOS; possibly more), see comments on
https://github.com/llvm/llvm-project/pull/138434
Call with the individual subvector type, source vector and index operands instead of the original EXTRACT_SUBVECTOR node.
Some of these folds still assumed that EXTRACT_SUBVECTOR/INSERT_SUBVECTOR nodes could have variable indices, despite our move to all-constant indices some time ago - all of that code has now been simplified.
I've moved the narrowExtractedVectorBinOp call higher up, but this won't affect fold order - it didn't rely on the peekThroughBitcasts call, and it works on BinOps, not BUILD_VECTOR/INSERT_SUBVECTOR nodes.
Prep work to make it easier for more of these folds to work through BITCAST nodes.
In some cases of chained `sext` operations, the debug metadata can be
lost. This patch propagates the proper metadata to the resulting node.
A particular instance of the issue is NVPTX codegen for a function with
a bool local variable:
```
void test(int i) {
  bool xyz = i == 0;
  foo(i);
}
```
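A hedged sketch of the propagation using standard SelectionDAG
facilities (illustrative, not the literal patch):
```
// When folding chained sext nodes into one, carry the debug values of
// the replaced value over to the new node so the variable's location
// (here, xyz) survives.
SDValue NewSExt = DAG.getNode(ISD::SIGN_EXTEND, SDLoc(N), VT, Src);
DAG.transferDbgValues(SDValue(N, 0), NewSExt);
```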
---------
Signed-off-by: Alexander Peskov <apeskov@nvidia.com>
In InitializeSlots, if an interval with a non-zero StackID (A) is
encountered we set All/UsedColors to a size of A + 1. If after this we
process another interval with a non-zero StackID (B), where B < A, then
we resize All/UsedColors to size < A + 1, and lose the BitVector
associated with A.
AFAIK this is a latent bug upstream, but when adding a new TargetStackID
locally I hit a `NextColors[StackID] != -1 && "No more spill slots?"`
assertion due to this issue.
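The fix amounts to a grow-only resize; a sketch (vector names follow
InitializeSlots, exact types elided):
```
// Never shrink: a smaller StackID must not drop the BitVector already
// recorded for a larger StackID seen earlier.
if (StackID + 1 > AllColors.size()) {
  AllColors.resize(StackID + 1);
  UsedColors.resize(StackID + 1);
}
```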
Currently BlockAddresses store both the Function and the BasicBlock they
reference, and the BlockAddress is part of the use list of both the
Function and BasicBlock.
This is quite awkward, because this is not really a use of the function
itself (and walks of function uses generally skip block addresses for
that reason). This also has weird implications on function RAUW (as that
will replace the function in block addresses in a way that generally
doesn't make sense), and causes other peculiar issues, like the ability
to have multiple block addresses for one block (with different
functions).
Instead, I believe it makes more sense to specify only the basic block
and let the function be implied by the BB parent. This does mean that we
may have block addresses without a function (if the BB is not inserted),
but this should only happen during IR construction.
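As a sketch of the intended usage, the BB-only form derives the function
from the block's parent on demand:
```
// No Function argument needed; the parent may still be null while the
// IR is under construction.
BlockAddress *BA = BlockAddress::get(BB);
Function *F = BA->getBasicBlock()->getParent();
```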
- Move `MIPrinter` class to anonymous namespace, and remove it as a
friend of `MachineBasicBlock`.
- Move `canPredictBranchProbabilities` to `MachineBasicBlock` and change
it to use the new `BranchProbability::normalizeProbabilities` function
that accepts a range, and also to use `llvm::equal()` to check equality
of the two vectors.
- Use `ListSeparator` to print comma-separated lists instead of manual
code to do that.
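For reference, `llvm::ListSeparator` (from llvm/ADT/StringExtras.h)
prints nothing before the first element and the separator before every
subsequent one:
```
// Prints "a, b, c" with no leading or trailing separator; OS is any
// raw_ostream.
ListSeparator LS;  // separator defaults to ", "
for (StringRef S : {"a", "b", "c"})
  OS << LS << S;
```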
This experimental option was introduced in 2015 via commit 1192294, and
the target hook was added in 2020 via commit 99e865b6. There does not
appear to have ever been a use of this target hook in tree.
This code complicates one of the most intricate and hard-to-understand
parts of our code base, and was an experiment introduced nearly 10 years
ago. Let's get rid of it.
Note that the idea described in the original patch is not necessarily a
bad one, and we might return to it someday.
- Change MachineInstr operand accessors to use `ArrayRef` internally to
slice the operand array into sub-arrays (see the sketch after this
list).
- Minor: remove unnecessary {} on `MachineInstrBuilder::add`.
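A hedged sketch of the `ArrayRef` slicing pattern (member names here are
illustrative, not the actual fields):
```
// Wrap the flat operand array once, then carve out sub-ranges.
ArrayRef<MachineOperand> Ops(Operands, NumOperands);
ArrayRef<MachineOperand> Defs = Ops.take_front(NumDefs);
ArrayRef<MachineOperand> Uses = Ops.drop_front(NumDefs);
```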
Implement proper splitting functions for PARTIAL_REDUCE_MLA ISD nodes.
This makes the udot_8to64 and sdot_8to64 tests generate dot product
instructions when the new ISD nodes are used.
---------
Co-authored-by: James Chesterman <james.chesterman@arm.com>
If the SDNode is used, it can pick up the wrong result number, for
example looking at the known bits of the first result where it should be
looking at the second. The SDValue is already present as the
SelectCodeCommon checks move from parent to child; pass the SDValue
through to CheckNodePredicate as Op so that it can use it if necessary.
SDNode *N is still generated, keeping most PatFrags the same.
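For context, the result number matters because per-result queries
differ; e.g. for a node like UADDO, result 0 is the value and result 1
is the overflow bit (illustrative):
```
// Known bits are tracked per result, not per node; passing only the
// SDNode would implicitly query result 0.
KnownBits ValueBits = DAG.computeKnownBits(SDValue(N, 0));
KnownBits OverflowBits = DAG.computeKnownBits(SDValue(N, 1));
```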
Fixes #137274
This patch adds support for LLVM IR atomicrmw `fmaximum` and `fminimum`
instructions.
These mirror the `llvm.maximum.*` and `llvm.minimum.*` instructions, but
are atomic and use IEEE754 2019 handling for NaNs, which is different to
`fmax` and `fmin`. See:
https://llvm.org/docs/LangRef.html#llvm-minimum-intrinsic
for more details.
Future changes will allow this LLVM IR to be lowered to specialised
assembler instructions on suitable targets, such as AArch64.
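Once lowered from IR, these map onto new `AtomicRMWInst` bin-ops; a
minimal sketch of emitting one through `IRBuilder`, assuming the
`FMaximum` enumerator this patch series introduces:
```
// Atomically compute the IEEE 754-2019 maximum of *Ptr and Val and
// store it back, returning the previous value.
Value *Old = Builder.CreateAtomicRMW(
    AtomicRMWInst::FMaximum, Ptr, Val, MaybeAlign(),
    AtomicOrdering::SequentiallyConsistent);
```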
Fix issue 131298 where an undefined $scc register causes verifier errors
when using SI_KILL_F32_COND_IMM_TERMINATOR instructions. The problem
occurs because the $scc register defined in a comparison before the kill
terminator is used in successor blocks, but was not properly marked as live-in.
This patch:
- Adds code to check if SCC is used in the successor block
- Adds SCC as a live-in to successor blocks
- Handles both explicit and implicit uses of SCC
With this patch, the machine verifier no longer reports undefined $scc
errors in the instructions following the kill terminator.
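A hedged sketch of that successor fix-up (the MachineBasicBlock APIs are
real; the surrounding logic is illustrative, not the literal patch):
```
// Mark SCC live-in on each successor that may read it, so the verifier
// sees a defined $scc on entry to those blocks.
for (MachineBasicBlock *Succ : MBB.successors())
  if (!Succ->isLiveIn(AMDGPU::SCC) &&
      Succ->computeRegisterLiveness(TRI, AMDGPU::SCC, Succ->begin()) !=
          MachineBasicBlock::LQR_Dead)
    Succ->addLiveIn(AMDGPU::SCC);
```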
Fixes #131298
---------
Co-authored-by: Matt Arsenault <arsenm2@gmail.com>
Reapplied after fixing the configuration issue that caused failures
following the previous merge.
This reverts commit fdbf073a86573c9ac4d595fac8e06d252ce1469f.
- Add new pass manager version of `MachineUniformityAnalysis`.
- Query `TargetTransformInfo` in new pass manager version.
- Use `printAsOperand` when printing machine function name
The code below the removed check looks generic enough to support
arbitrary integer widths. This change helps 32-bit targets avoid
expensive expansion/libcalls in the case of zero input.
Pull Request: https://github.com/llvm/llvm-project/pull/137197
LegalizerHelper::reduceLoadStoreWidth does not work for non-byte-sized
types, because this would require (un)packing of bits across byte
boundaries.
Precommit tests: #134904
The current implementation tries to fold the operand before
rematerialization because folding can save one register. But if a
physical register is available, we can still rematerialize it without
causing high register pressure.
This patch adds this check to find the better choice. Then we can produce
xorps %xmm1, %xmm1
ucomiss %xmm1, %xmm0
instead of
ucomiss LCPI0_1(%rip), %xmm0
This patch adds support for LLVM IR atomicrmw `fmaximum` and `fminimum`
instructions.
These mirror the `llvm.maximum.*` and `llvm.minimum.*` instructions, but
are atomic and use IEEE754 2019 handling for NaNs, which is different to
`fmax` and `fmin`. See:
https://llvm.org/docs/LangRef.html#llvm-minimum-intrinsic
for more details.
Future changes will allow this LLVM IR to be lowered to specialised
assembler instructions on suitable targets, such as AArch64.
When floating-point operations are legalized to operations of a higher
precision (e.g. f16 fadd being legalized to f32 fadd), we end up with
narrowing-then-widening conversions between consecutive operations. With
the appropriate fast math flags (nnan ninf contract) we can eliminate
these casts.
See https://discourse.llvm.org/t/rfc-keep-globalvalue-guids-stable/84801
for context.
This is a non-functional change which only alters the interface of
GlobalValue, in preparation for future functional changes. This part
touches a fair few users, so is split out for ease of review. Future
changes to the GlobalValue implementation can then be focused purely on
that class.
This does the following:
* Rename GlobalValue::getGUID(StringRef) to
getGUIDAssumingExternalLinkage. This is simply making explicit at the
callsite what is currently implicit.
* Where possible, migrate users to directly calling getGUID on a
GlobalValue instance.
* Otherwise, where possible, have them call the newly renamed
getGUIDAssumingExternalLinkage, to make the assumption explicit.
There are a few cases where neither of the above are possible, as the
caller saves and reconstructs the necessary information to compute the
GUID themselves. We want to migrate these callers eventually, but for
this first step we leave them be.
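In code, the migration looks roughly like this (sketch; `GV` is a
`GlobalValue` and `Name` a `StringRef`):
```
// Before: the external-linkage assumption is invisible at the callsite.
GlobalValue::GUID G = GlobalValue::getGUID(Name);

// After: prefer the instance method, which accounts for actual linkage...
GlobalValue::GUID G1 = GV.getGUID();
// ...or make the assumption explicit when only a name is available.
GlobalValue::GUID G2 = GlobalValue::getGUIDAssumingExternalLinkage(Name);
```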
Replace "concept based polymorphism" with simpler PImpl idiom.
This pursues two goals:
* Enforce static type checking. Previously, target implementations hid
base class methods and type checking was impossible. Now that they
override the methods, the compiler will complain on mismatched
signatures (see the sketch after this list).
* Make the code easier to navigate. Previously, if you asked your
favorite LSP server to show a method (e.g. `getInstructionCost()`), it
would show you methods from `TTI`, `TTI::Concept`, `TTI::Model`,
`TTIImplBase`, and target overrides. Now it shows two fewer :)
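A heavily elided sketch of the resulting shape (the method shown is just
one example; this is an illustration, not the actual headers):
```
struct TargetTransformInfoImplBase {  // replaces TTI::Concept + TTI::Model
  virtual ~TargetTransformInfoImplBase() = default;
  virtual unsigned getMaxInterleaveFactor(ElementCount VF) const;
};

struct X86TTIImpl final : TargetTransformInfoImplBase {
  // `override` turns a mismatched signature into a compile error, where
  // plain member-function hiding used to accept it silently.
  unsigned getMaxInterleaveFactor(ElementCount VF) const override;
};
```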
There are three commits to hopefully simplify the review.
The first commit removes `TTI::Model`. This is done by deriving
`TargetTransformInfoImplBase` from `TTI::Concept`. This is possible
because they implement the same set of interfaces with identical
signatures.
The first commit also makes `TargetTransformInfoImplBase` polymorphic,
which means all derived classes should `override` its methods; that
overriding is done in the second commit to keep the first one smaller.
It appeared infeasible to extract this into a separate PR because
landing the first commit separately would result in tons of
`-Woverloaded-virtual` warnings (and break `-Werror` builds).
The third commit eliminates `TTI::Concept` by merging it with the only
derived class, `TargetTransformInfoImplBase`. This commit could be
extracted into a separate PR, but it touches the same lines in
`TargetTransformInfoImpl.h` (removes the `override`s added by the second
commit and adds `virtual`), so I thought it may make sense to land these
two commits together.
Pull Request: https://github.com/llvm/llvm-project/pull/136674