20958 Commits

Eric Astor
b901b6ab17 Revert "[ms] [llvm-ml] Add support for .radix directive, and accept all radix specifiers"
This reverts commit 5dd1b6d612655c9006ba97a8b6487ded80719b48.
2020-09-23 13:59:34 -04:00
Craig Topper
f21f835ee8 [X86] Improve demanded bits for X86ISD::BEXTR.
If the control is constant, we can figure out exactly which bits
of the input are demanded.

Differential Revision: https://reviews.llvm.org/D88072
2020-09-23 10:51:02 -07:00
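
For illustration, assuming the usual BEXTR control encoding (start index in control bits 7:0, extract length in bits 15:8), the demanded input bits for a constant control form one contiguous mask. A minimal sketch, not the LLVM implementation:

  // Sketch: compute which input bits a constant-control BEXTR actually reads.
  #include <algorithm>
  #include <cstdint>

  uint64_t demandedInputBits(uint64_t Control, unsigned BitWidth) {
    unsigned Start = Control & 0xFF;          // control bits 7:0
    unsigned Length = (Control >> 8) & 0xFF;  // control bits 15:8
    if (Start >= BitWidth || Length == 0)
      return 0;                               // nothing is extracted
    Length = std::min(Length, BitWidth - Start);
    uint64_t Mask = (Length >= 64) ? ~0ULL : ((1ULL << Length) - 1);
    return Mask << Start;                     // only these input bits matter
  }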
Eric Astor
5dd1b6d612 [ms] [llvm-ml] Add support for .radix directive, and accept all radix specifiers
Add support for the .radix directive, and for the radix specifiers [yY] (binary), [oOqQ] (octal), and [tT] (decimal).

Also, when lexing MASM integers, require a radix specifier; MASM requires that all literals without a radix specifier be treated as being in the default radix (e.g., 0100 = 100).

Reviewed By: thakis

Differential Revision: https://reviews.llvm.org/D87400
2020-09-23 13:45:58 -04:00
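
A rough sketch of the suffix handling described above (a hypothetical helper, not the llvm-ml code); literals without a specifier fall back to the current default radix, which is 10 unless changed by .radix:

  #include <cctype>
  #include <optional>

  // Map a MASM radix-specifier suffix to its base, if any.
  std::optional<unsigned> radixFromSuffix(char Suffix) {
    switch (std::tolower(static_cast<unsigned char>(Suffix))) {
    case 'y': return 2;             // [yY] binary
    case 'o': case 'q': return 8;   // [oOqQ] octal
    case 't': return 10;            // [tT] decimal
    default:  return std::nullopt;  // no specifier: use the default radix
    }
  }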
Bing1 Yu
ec24e50553 [CostModel][X86] add CostModel for SK_Select(v8f64, v8i64, v16f32, v16i32, v32i16, v64i8)
add CostModel for SK_Select(v8f64, v8i64, v16f32, v16i32, v32i16, v64i8)

Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D87884
2020-09-23 10:29:10 +08:00
Simon Pilgrim
0793b45660 [X86] Add missing namespace closure comments. NFCI.
Fixes some clang-tidy llvm-namespace-comment warnings.
2020-09-22 15:06:59 +01:00
Simon Pilgrim
af71298648 [X86] Cleanup/add namespace closure comments. NFCI.
Fixes some clang-tidy llvm-namespace-comment warnings.
2020-09-22 15:06:58 +01:00
Meera Nakrani
a3d0dce260 [ARM][TTI] Prevents constants in a min(max) or max(min) pattern from being hoisted when in a loop
Changes the TTI function getIntImmCostInst to take an additional Instruction parameter,
which lets us check whether the constant is part of a min(max())/max(min()) pattern that will match SSAT.
We can then mark the constant as free to prevent it from being hoisted, so SSAT can still be generated.
Required minor changes in some non-ARM backends to accommodate the optional parameter.

Differential Revision: https://reviews.llvm.org/D87457
2020-09-22 11:54:10 +00:00
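
For context, the kind of clamp the pattern match targets looks like this scalar sketch (constants are illustrative; ARM can lower such a min(max()) clamp to a single SSAT, so the constants should not be hoisted):

  #include <algorithm>
  #include <cstdint>

  int32_t clamp_to_s16(int32_t x) {
    // min(max(x, -32768), 32767) behaves like SSAT #16.
    return std::min(std::max(x, -32768), 32767);
  }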
Craig Topper
a74b1faba2 [X86] Make reduceMaskedLoadToScalarLoad/reduceMaskedStoreToScalarStore work for avx512 after type legalization.
The scalar elements of the vXi1 build_vector will have been type legalized to i8 by padding with 0s. So we can't check for all ones. Instead we should just look at bit 0 of the constant.

Differential Revision: https://reviews.llvm.org/D87863
2020-09-20 13:54:20 -07:00
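
A minimal sketch of the check described above (illustrative only): after type legalization the i1 mask constants are padded out to i8, so a set element need not be all-ones and only bit 0 is meaningful.

  #include <cstdint>

  // True if the original i1 mask element was set.
  bool maskElementIsSet(uint8_t LegalizedElt) {
    return (LegalizedElt & 1) != 0;
  }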
Craig Topper
4e8c028158 [X86] Stop reduceMaskedLoadToScalarLoad/reduceMaskedStoreToScalarStore from creating scalar i64 load/stores in 32-bit mode
If we emit a scalar i64 load/store it will get type legalized to two i32 load/stores.

Differential Revision: https://reviews.llvm.org/D87862
2020-09-20 13:46:59 -07:00
Simon Pilgrim
0bfeede669 [X86][SSE] Fold EXTEND_VECTOR_INREG(EXTRACT_SUBVECTOR(EXTEND(X),0)) -> EXTEND_VECTOR_INREG(X) 2020-09-20 18:39:12 +01:00
Simon Pilgrim
bb0078e591 [X86][SSE] Fold SIGN_EXTEND(SIGN_EXTEND_VECTOR_INREG(X)) -> SIGN_EXTEND_VECTOR_INREG(X)
It should be possible to make this generic, but we're not great at checking legality of *_EXTEND_VECTOR_INREG ops so I'm conservatively putting this inside X86ISelLowering.cpp
2020-09-20 18:39:12 +01:00
Simon Pilgrim
15c8306056 [X86][SSE] Fold EXTEND_VECTOR_INREG(EXTEND_VECTOR_INREG(X)) -> EXTEND_VECTOR_INREG(X)
It should be possible to make this generic, but we're not great at checking legality of *_EXTEND_VECTOR_INREG ops so I'm conservatively putting this inside X86ISelLowering.cpp
2020-09-20 16:33:02 +01:00
Simon Pilgrim
a0c8793ce6 [X86][SSE] Enable ZERO_EXTEND_VECTOR_INREG shuffle combining on SSE41 targets.
Allows ZERO_EXTEND_VECTOR_INREG to be shuffle combined on all targets where it is legal.
2020-09-20 16:05:10 +01:00
Simon Pilgrim
2b634a9d0e [X86] Rename getExtendInVec to getEXTEND_VECTOR_INREG. NFCI.
Make it easier to find the method by naming it after the ops it actually handles. We already do this for lowering/combining.
2020-09-20 15:19:39 +01:00
Simon Pilgrim
91720ee561 [X86] combineX86ShufflesRecursively - fix use after move warning. NFCI.
After the move, WidenedMask is in an undefined state, so reduce the scope of the variable so it's reinitialized every iteration - we should still retain any memory allocation savings.
2020-09-20 14:06:50 +01:00
Simon Pilgrim
e17686ae60 [X86] Rename combineExtInVec to combineEXTEND_VECTOR_INREG. NFCI.
Make it easier to find the method by naming it after the ops it actually handles. We already do this for lowering.
2020-09-20 12:16:00 +01:00
Craig Topper
721d57f952 [X86] Return from SimplifyDemandedBitsForTargetNode after calculating known bits for VSHLI/VSRAI/VSRLI.
We were breaking out of the switch, which falls through to the default
implementation of SimplifyDemandedBitsForTargetNode, a wrapper around
computeKnownBits. So we ended up redoing the recursion and known-bits
calculation all over again. Instead we should return with the known bits
we calculated in the switch.
2020-09-18 23:57:01 -07:00
Craig Topper
58ecbbcdcd [X86] Fix copy paste mistake in @ccnp flag.
We were treating @ccp and @ccnp the same.
2020-09-18 21:28:01 -07:00
Philip Reames
06f136f61e [instcombine][x86] Converted pdep/pext with shifted mask to simple arithmetic
If the mask of a pdep or pext instruction is a shifted mask (i.e. one contiguous block of ones), we need at most one AND and one shift to represent the operation without the intrinsic. On all platforms I know of, this is faster than the pdep/pext.

The cost modelling for multiple contiguous blocks might be worth exploring in a follow-up, but it's not relevant for my current use case. It would almost certainly be a win on AMD, where these are really slow, though.

Differential Revision: https://reviews.llvm.org/D87861
2020-09-18 14:54:24 -07:00
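
The equivalence the commit relies on, as a hedged scalar sketch (assumes the mask is a single contiguous run of ones; __builtin_ctzll is undefined for a zero mask):

  #include <cstdint>

  // pext with a contiguous mask: AND then shift down.
  uint64_t pext_contiguous(uint64_t x, uint64_t mask) {
    unsigned shift = __builtin_ctzll(mask); // start of the block of ones
    return (x & mask) >> shift;
  }

  // pdep with a contiguous mask: shift up then AND.
  uint64_t pdep_contiguous(uint64_t x, uint64_t mask) {
    unsigned shift = __builtin_ctzll(mask);
    return (x << shift) & mask;
  }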
Simon Pilgrim
4ebd30722a [X86][AVX] lowerBuildVectorAsBroadcast - improve BROADCASTM lowering on non-VLX targets
Broadcast to a ZMM type then extract the low subvector.
2020-09-18 19:52:02 +01:00
Simon Pilgrim
ceadd98c2f [X86][AVX] lowerBuildVectorAsBroadcast - improve i64 BROADCASTM lowering on 32-bit targets
We already handle the cases where we have a 'zero extended splat' build vector (a, 0, 0, 0, a, 0, 0, 0, ...) but were missing the case where the 'a' scalar was zero-extended as well - such as i64 -> vXi64 splat cases on 32-bit targets.
2020-09-18 16:59:57 +01:00
Craig Topper
3783d3bc7b [X86] Don't match x87 register inline asm constraints unless the VT is floating point or it's a clobber
The register class picked will be the RFP80 register class, which has an f80 VT. The code in SelectionDAGBuilder that generates copies around inline assembly doesn't know how to handle an integer and floating point type of different bit widths.

The test case is derived from https://godbolt.org/z/sEa659, which gcc accepts but clang crashes on. This patch just gives a more graceful error. I'm not sure if the single element struct case is special in gcc; adding another field to the struct makes gcc reject it. If we want to support this correctly, I think we need a change in the frontend to give us the true element type. Right now the frontend just realizes the constraint can take a memory argument, so it creates an integer type of the same size and bitcasts.

Differential Revision: https://reviews.llvm.org/D87485
2020-09-17 11:26:50 -07:00
Simon Pilgrim
f026812110 InstCombiner.h - remove unnecessary KnownBits.h include. NFCI.
Move the include down to cpp files with an implicit dependency.
2020-09-17 14:28:42 +01:00
Rainer Orth
a9cbe5cf30 [X86] Fix stack alignment on 32-bit Solaris/x86
On Solaris/x86, several hundred 32-bit tests `FAIL`, all in the same way:

  env ASAN_OPTIONS=halt_on_error=false ./halt_on_error_suppress_equal_pcs.cpp.tmp
  Segmentation Fault (core dumped)

They segfault during startup:

  Thread 2 received signal SIGSEGV, Segmentation fault.
  [Switching to Thread 1 (LWP 1)]
  0x080f21f0 in __sanitizer::internal_mmap(void*, unsigned long, int, int, int, unsigned long long) () at /vol/llvm/src/llvm-project/dist/compiler-rt/lib/sanitizer_common/sanitizer_solaris.cpp:65
  65	                             int prot, int flags, int fd, OFF_T offset) {
  1: x/i $pc
  => 0x80f21f0 <_ZN11__sanitizer13internal_mmapEPvmiiiy+16>:	movaps 0x30(%esp),%xmm0
  (gdb) p/x $esp
  $3 = 0xfeffd488

The problem is that `movaps` expects 16-byte alignment, while 32-bit Solaris/x86
only guarantees 4-byte alignment following the i386 psABI.

This patch updates `X86Subtarget::initSubtargetFeatures` accordingly,
handles Solaris/x86 in the corresponding testcase, and allows for some
variation in address alignment in
`compiler-rt/test/ubsan/TestCases/TypeCheck/vptr.cpp`.

Tested on `amd64-pc-solaris2.11` and `x86_64-pc-linux-gnu`.

Differential Revision: https://reviews.llvm.org/D87615
2020-09-17 11:17:11 +02:00
Simon Pilgrim
b2c931eff3 [X86] EmitInstrWithCustomInserter - remove redundant getDebugLoc() calls. NFCI.
Use the same DebugLoc that is called at the top of the method.

Fixes some Wshadow static analyzer warnings.
2020-09-16 16:29:56 +01:00
Simon Pilgrim
cd46151202 [X86] Assert that we've found a terminator instruction. NFCI.
Fixes a clang static analyzer null dereference warning.
2020-09-16 16:17:49 +01:00
Simon Pilgrim
aa4b0b755a [X86][SSE] Move VZEXT_MOVL(INSERT_SUBVECTOR(UNDEF,X,0)) handling into combineTargetShuffle.
Now that we're getting better at combining shuffles of different vector widths, this can be performed as part of the standard target shuffle combines and isn't required for cleanup.

Exposed a minor issue in combineX86ShufflesRecursively where we failed to check if a shuffle's src ops were simple types.
2020-09-16 16:08:31 +01:00
Craig Topper
41f4cd60d5 [X86] Don't scalarize gather/scatters with non-power of 2 element counts. Widen instead.
We can pad the mask with zeros in order to widen. We already do
this for power-of-2 types that are smaller than a legal type.
2020-09-15 23:22:53 -07:00
Craig Topper
2ce1a697f0 [X86] Always use 16-bit displacement in 16-bit mode when there is no base or index register.
Previously we only did this if the immediate fit in 16 bits, but
the GNU assembler seems to just truncate.

Fixes PR46952
2020-09-15 19:31:48 -07:00
Craig Topper
05134877e6 [X86] Use Align in reduceMaskedLoadToScalarLoad/reduceMaskedStoreToScalarStore. Correct pointer info.
If we offset the pointer, we also need to offset the pointer info.

Differential Revision: https://reviews.llvm.org/D87593
2020-09-15 11:22:02 -07:00
Simon Pilgrim
a43e68b58b [X86][AVX] lowerShuffleWithSHUFPS - handle missed canonicalization cases.
PR47534 exposes a case where calling lowerShuffleWithSHUFPS directly from a derived repeated mask (found by is128BitLaneRepeatedShuffleMask) results in us using a non-canonicalized mask.

The missed canonicalization in this case is trivial - just commute the mask so we have more (swapped) LHS than RHS references so lowerShuffleWithSHUFPS can handle it.
2020-09-15 17:31:08 +01:00
Simon Pilgrim
fc446935d7 [X86] detectAVGPattern - accept non-pow2 vectors by padding.
Drop the pow2 vector limitation for AVG generation by padding the vector to the next pow2, creating the PAVG nodes and then extracting the final subvector.

Fixes some poor codegen that has been annoying me for years.....
2020-09-15 10:07:03 +01:00
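
For reference, the scalar form of the pattern detectAVGPattern recognizes (illustrative): an unsigned rounding average computed in a wider type, which maps to PAVGB/PAVGW per element.

  #include <cstdint>

  uint8_t avg_round_u8(uint8_t a, uint8_t b) {
    // (a + b + 1) >> 1 without overflow, as PAVGB computes per lane.
    return static_cast<uint8_t>((static_cast<uint16_t>(a) + b + 1) >> 1);
  }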
Craig Topper
46673763fe [X86] Place new constant node in topological order in X86DAGToDAGISel::matchBitExtract
Fixes PR47482
2020-09-14 16:59:04 -07:00
Craig Topper
da1aaa0b70 Revert "[X86] Place new constant node in topological order in X86DAGToDAGISel::matchBitExtract."
I got the bug number wrong.

This reverts commit 32515938901685bcbc438d5f5bb03cb8a9f4c637.
2020-09-14 16:58:57 -07:00
Craig Topper
3251593890 [X86] Place new constant node in topological order in X86DAGToDAGISel::matchBitExtract.
Fixes PR47525
2020-09-14 16:28:37 -07:00
Craig Topper
c193a689b4 [SelectionDAG] Use Align/MaybeAlign in calls to getLoad/getStore/getExtLoad/getTruncStore.
The versions that take 'unsigned' will be removed in the future.

I tried to use getOriginalAlign instead of getAlign in some
places. getAlign factors in the minimum alignment implied by
the offset in the pointer info. Since we're also passing the
pointer info we can use the original alignment.

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D87592
2020-09-14 13:54:50 -07:00
Eric Astor
23a2b03221 [ms] [llvm-ml] Add basic support for SEH, including PROC FRAME
Add basic support for SEH, including PROC FRAME

Reviewed By: thakis

Differential Revision: https://reviews.llvm.org/D86948
2020-09-14 14:32:55 -04:00
Eric Astor
20201dc76a [ms] [llvm-ml] Add support for size queries in MASM
Add support for size inference, sizeof, typeof, and lengthof.

Reviewed By: thakis

Differential Revision: https://reviews.llvm.org/D86947
2020-09-14 14:27:06 -04:00
Craig Topper
758732a34e [X86] Use ISD::PARITY directly instead of emitting CTPOP and AND from combineHorizontalPredicateResult.
We have a PARITY ISD node now, so we might as well use it. It will
get re-expanded later.
2020-09-12 20:01:17 -07:00
Craig Topper
ad3d6f993d [SelectionDAG][X86][ARM][AArch64] Add ISD opcode for __builtin_parity. Expand it to shifts and xors.
Clang emits (and (ctpop X), 1) for __builtin_parity. If ctpop
isn't natively supported by the target, this leads to poor codegen
due to the expansion of ctpop being more complex than what is needed
for parity.

This adds a DAG combine to convert the pattern to ISD::PARITY
before operation legalization. Type legalization is updated
to handle Expanding and Promoting this operation. If after type
legalization, CTPOP is supported for this type, LegalizeDAG will
turn it back into CTPOP+AND. Otherwise LegalizeDAG will emit a
series of shifts and xors followed by an AND with 1.

I've avoided vectors in this patch to avoid more legalization
complexity for this patch.

X86 previously had a custom DAG combiner for this. This is now
moved to Custom lowering for the new opcode. There is a minor
regression in vector-reduce-xor-bool.ll, but a follow up patch
can easily fix that.

Fixes PR47433

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D87209
2020-09-12 11:42:18 -07:00
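
A sketch of the shift-and-xor expansion mentioned above, for 32 bits (what the generic expansion produces when CTPOP is not legal; illustrative, not the LegalizeDAG code):

  #include <cstdint>

  uint32_t parity32(uint32_t x) {
    x ^= x >> 16;  // fold the parity of the upper half into the lower half
    x ^= x >> 8;
    x ^= x >> 4;
    x ^= x >> 2;
    x ^= x >> 1;
    return x & 1;  // final AND with 1
  }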
Simon Pilgrim
3170d54842 [InstCombine][X86] Convert masked load/stores with (sign extended) bool vector masks to generic intrinsics.
As detailed on PR11210, if the mask is known to come from a (sign extended) bool vector (e.g. comparisons) then we can represent it with a generic masked load/store without losing anything.

We already do something similar for BLENDV -> SELECT conversion.
2020-09-12 15:09:28 +01:00
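
The key observation behind the transform, as a hedged scalar sketch: a sign-extended i1 lane is either 0 or -1, so its sign bit (what the x86 masked load/store intrinsics test per element) carries the original boolean, and nothing is lost by switching to the generic masked intrinsics.

  #include <cstdint>

  int32_t signExtendBool(bool B) {
    return -static_cast<int32_t>(B); // false -> 0x00000000, true -> 0xFFFFFFFF
  }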
Simon Pilgrim
50ee0b99ec [InstCombine][X86] getNegativeIsTrueBoolVec - use ConstantExpr evaluators. NFCI.
Don't do this manually; we can just use the ConstantExpr evaluators to do it more tidily for us.
2020-09-12 13:58:58 +01:00
Simon Pilgrim
35dc91aee2 [X86][SSE] lowerShuffleAsDecomposedShuffleBlend - support decomposed unpacks for some vXi8/vXi16 cases
Follow up to D86429 to handle the remaining regressions.

This patch generalizes lowerShuffleAsDecomposedShuffleBlend to lowerShuffleAsDecomposedShuffleMerge, and attempts to use an UNPCKL shuffle mask instead of a blend for the cases where the inputs are coming from alternating vXi8/vXi16 sources. Technically they don't have to be alternating (just as long as they can fit into a lower lane half for the unpack) but I didn't find as many general cases and it needed a lot more of the function to be altered.

For vXi32/vXi64 cases this could still be beneficial but in most cases the existing permute+blend approach was better.

Differential Revision: https://reviews.llvm.org/D87405
2020-09-12 13:39:33 +01:00
Simon Pilgrim
70a05ee288 [X86] Keep variables from getDataLayout/getDebugLoc calls as const reference. NFCI.
These are only ever used as references in the called functions, so just pass the original reference instead of copying it.
2020-09-11 10:44:42 +01:00
Anna Thomas
46329f6079 [ImplicitNullCheck] Handle instructions that preserve zero value
This is the first in a series of patches to make implicit null checks
more general. This patch identifies instructions that preserve the zero
value of a register and considers them valid instructions to hoist
along with the faulting load. See added testcases.

Reviewed-By: reames, dantrushin

Differential Revision: https://reviews.llvm.org/D87108
2020-09-10 13:39:50 -04:00
Simon Pilgrim
b585fdae24 [X86] Use Register instead of unsigned. NFCI.
Fixes llvm-prefer-register-over-unsigned clang-tidy warnings.
2020-09-10 16:05:33 +01:00
Simon Pilgrim
de25ebaac6 [CostModel][X86] Add vXi32 division by uniform constant costs (PR47476)
Other types can be handled in future patches but their uniform / non-uniform costs are more similar and don't appear to cause many vectorization issues.
2020-09-10 12:17:54 +01:00
Simon Pilgrim
fc49abee56 [X86][SSE] lowerShuffleAsSplitOrBlend always returns a shuffle.
lowerShuffleAsSplitOrBlend always returns a target shuffle result (and is the default operation for lowering some shuffle types), so we don't need to check for null.
2020-09-10 11:45:08 +01:00
Simon Pilgrim
e80605e242 [X86] Remove WaitInsert::TTI member. NFCI.
This is only ever set/used inside WaitInsert::runOnMachineFunction so don't bother storing it in the class.
2020-09-10 11:45:08 +01:00
Amara Emerson
e5784ef8f6 [GlobalISel] Enable usage of BranchProbabilityInfo in IRTranslator.
We weren't using this before, so none of the MachineFunction CFG edges had the
branch probability information added. As a result, block placement later in the
pipeline was flying blind.

Like SelectionDAG, this is enabled only when optimizations are enabled.

Differential Revision: https://reviews.llvm.org/D86824
2020-09-09 14:31:12 -07:00