17606 Commits

Author SHA1 Message Date
Craig Topper
8c75adf95b [InstCombine] Refinement of r299915. Only consider a ConstantVector for Neg if all the elements are Undef or ConstantInt.
llvm-svn: 299917
2017-04-11 06:32:48 +00:00
Craig Topper
18f9e424e7 [InstCombine] Support weird size element types in dyn_castNegVal.
llvm-svn: 299915
2017-04-11 05:42:47 +00:00
Hal Finkel
b63ed91549 [LICM] Hoist fp division out of loops and replace it with multiplication by the reciprocal
When allowed, we can hoist a division out of a loop in favor of a
multiplication by the reciprocal. Fixes PR32157.
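For illustration, a C-level sketch of the rewrite (not taken from the patch): it only fires when the relevant fast-math flags allow it, because multiplying by the reciprocal can round differently than the original division.

```
#include <cstddef>

// Before: divides by the loop-invariant 'd' on every iteration.
void scale_div(float *a, std::size_t n, float d) {
  for (std::size_t i = 0; i < n; ++i)
    a[i] = a[i] / d;
}

// After (roughly what the hoisted form computes): the reciprocal is computed
// once outside the loop and the body multiplies instead of dividing.
void scale_recip(float *a, std::size_t n, float d) {
  const float inv = 1.0f / d;  // hoisted out of the loop
  for (std::size_t i = 0; i < n; ++i)
    a[i] = a[i] * inv;
}
```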

Patch by vit9696!

Differential Revision: https://reviews.llvm.org/D30819

llvm-svn: 299911
2017-04-11 02:22:54 +00:00
Daniel Berlin
bf80cfe6b6 Revert "NewGVN: Don't propagate over phi backedges where undef causes us to have >1 value."
It's not ready yet; this was an accidental commit :(

This reverts r299903

llvm-svn: 299904
2017-04-11 00:07:26 +00:00
Daniel Berlin
3938111fe7 NewGVN: Don't propagate over phi backedges where undef causes us to have >1 value.
Fixes PR32607.

llvm-svn: 299903
2017-04-11 00:02:38 +00:00
Reid Kleckner
eb9dd5b87f Reland "[IR] Make AttributeSetNode public, avoid temporary AttributeList copies"
This re-lands r299875.

I introduced a bug in the Clang code responsible for replacing K&R no-prototype
declarations with a real function definition that has a prototype.
The bug was here:

       // Collect any return attributes from the call.
  -    if (oldAttrs.hasAttributes(llvm::AttributeList::ReturnIndex))
  -      newAttrs.push_back(llvm::AttributeList::get(newFn->getContext(),
  -                                                  oldAttrs.getRetAttributes()));
  +    newAttrs.push_back(oldAttrs.getRetAttributes());

Previously getRetAttributes() carried AttributeList::ReturnIndex in its
AttributeList. Now that we return the AttributeSetNode* directly, it no
longer carries that index, and we call this overload with a single node:
  AttributeList::get(LLVMContext&, ArrayRef<AttributeSetNode*>)

That aborted with an assertion on x86_32 targets. I added an explicit
triple to the test and added CHECKs to help catch issues like this
sooner in the future.

llvm-svn: 299899
2017-04-10 23:31:05 +00:00
Davide Italiano
f58a30236b [NewGVN] Surround with parens to clarify allegedly ambiguous precedence.
This placates GCC 7 with -Werror. Also, clang-format the assertions
while I'm here.

llvm-svn: 299895
2017-04-10 23:08:35 +00:00
Davide Italiano
fa6a0a819d [MemorySSA] We don't need to compute dominator levels anymore.
Differential Revision: https://reviews.llvm.org/D31818

llvm-svn: 299893
2017-04-10 22:44:46 +00:00
Matt Arsenault
3c1fc768ed Allow DataLayout to specify addrspace for allocas.
LLVM makes several assumptions about address space 0. However,
alloca is presently constrained to always return this address space.
There's no real way to avoid using alloca, so without this
there is no way to opt out of these assumptions.

The problematic assumptions include:
- That the pointer size used for the stack is the same size as
  the code size pointer, which is also the maximum sized pointer.

- That 0 is an invalid, non-dereferenceable pointer value.

These are problems for AMDGPU because alloca is used to
implement the private address space, which uses a 32-bit
index as the pointer value. Other pointers are 64-bit
and behave more like LLVM's notion of generic address
space. By changing the address space used for allocas,
we can change our generic pointer type to be LLVM's generic
pointer type which does have similar properties.
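A small hedged sketch of how a target might opt in (the datalayout string and address-space number below are illustrative, loosely modeled on a 32-bit private space):

```
#include "llvm/IR/DataLayout.h"
#include "llvm/Support/raw_ostream.h"

int main() {
  // "A5" requests that allocas be created in address space 5 rather than 0;
  // "p5:32:32" gives that space 32-bit pointers.
  llvm::DataLayout DL("e-p:64:64-p5:32:32-A5");
  llvm::outs() << "alloca address space: " << DL.getAllocaAddrSpace() << "\n";
  return 0;
}
```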

llvm-svn: 299888
2017-04-10 22:27:50 +00:00
Dehao Chen
d4a3397861 Emit fewer compiler optimization remarks in samplepgo to reduce calls to findCalleeFunctionSamples, which is going to be refactored.
Summary: Now that SamplePGO support is more stable, we do not need to emit so many verbose optimization remarks.

Reviewers: dnovillo, davidxl

Reviewed By: davidxl

Subscribers: fhahn, llvm-commits

Differential Revision: https://reviews.llvm.org/D31826

llvm-svn: 299883
2017-04-10 20:49:16 +00:00
Geoff Berry
635e505675 [GVNHoist] Call isGuaranteedToTransferExecutionToSuccessor on each instruction
w.r.t. https://bugs.llvm.org/show_bug.cgi?id=32153
The consensus seems to be that isGuaranteedToTransferExecutionToSuccessor should be called for each instruction.

Patch by Aditya Kumar

Differential Revision: https://reviews.llvm.org/D31035

llvm-svn: 299882
2017-04-10 20:45:17 +00:00
Evgeniy Stepanov
ed7fce7c84 Revert "[asan] Put ctor/dtor in comdat."
This reverts commit r299696, which is causing mysterious test failures.

llvm-svn: 299880
2017-04-10 20:36:36 +00:00
Evgeniy Stepanov
ba7c2e9661 Revert "[asan] Fix dead stripping of globals on Linux."
This reverts commit r299697, which caused a big increase in object file size.

llvm-svn: 299879
2017-04-10 20:36:30 +00:00
Reid Kleckner
211b1f324f Revert "[IR] Make AttributeSetNode public, avoid temporary AttributeList copies"
This reverts r299875. A Linux bot came back with a test failure:
http://bb.pgr.jp/builders/test-clang-i686-linux-RA/builds/741/steps/test_clang/logs/Clang%20%3A%3A%20CodeGen__2006-05-19-SingleEltReturn.c

llvm-svn: 299878
2017-04-10 20:34:19 +00:00
Reid Kleckner
324c99dee5 [IR] Make AttributeSetNode public, avoid temporary AttributeList copies
Summary:
AttributeList::get(Fn|Ret|Param)Attributes no longer creates a temporary
AttributeList just to hide the AttributeSetNode type.

I've also added a factory method to create AttributeLists from a
parallel array of AttributeSetNodes. I think this simplifies
construction of AttributeLists when rewriting function prototypes.
Previously we would test if a particular index had attributes, and
conditionally add a temporary attribute list to a vector. Now the
attribute set vector is parallel to the argument vector that these
passes already construct.

My long term vision is to wrap AttributeSetNode* inside an AttributeSet
type that holds the enum attributes, but that will come in a follow up
change.

I haven't done any performance measurements for this change because
profiling hasn't shown that any of the affected code is hot.

Reviewers: pete, chandlerc, sanjoy, hfinkel

Reviewed By: pete

Subscribers: jfb, llvm-commits

Differential Revision: https://reviews.llvm.org/D31198

llvm-svn: 299875
2017-04-10 20:18:10 +00:00
Sanjay Patel
e4159d2238 [InstCombine] improve variable names; NFCI
llvm-svn: 299871
2017-04-10 19:38:36 +00:00
Matt Arsenault
daa08875b3 [MemCpyOpt] Only replace memcpy with bitcast if address spaces match
Patch by James Price

llvm-svn: 299866
2017-04-10 19:00:25 +00:00
Daniel Berlin
74603a68ef MemorySSA: Make lifetime starts defs for mustaliased pointers
Summary:
While we don't want them aliasing with other pointers, there seems to
be no point in not having them clobber must-aliased pointers.

If some day we split the aliasing and ordering chains, we'd make this
not aliasing but an ordering barrier (i.e., it doesn't affect its
memory, but we can't hoist it above it).

Reviewers: hfinkel, george.burgess.iv

Subscribers: Prazek, llvm-commits

Differential Revision: https://reviews.llvm.org/D31865

llvm-svn: 299865
2017-04-10 18:46:00 +00:00
Craig Topper
0d830ff7bf [InstCombine] Use commutable matchers and m_OneUse in visitSub to shorten code. Add missing test cases.
In one case I removed commute handling for a multiply with a constant since we'll eventually get the constant on the right hand side.

llvm-svn: 299863
2017-04-10 18:09:25 +00:00
Craig Topper
98851adc2a [InstCombine] Use m_c_Add to shorten some code. Add testcases for this fold since they were missing. NFC
llvm-svn: 299853
2017-04-10 16:59:40 +00:00
Sanjay Patel
570e35c157 [InstCombine] fix matching of or-of-icmps constants (PR32524)
Also, make the same change in and-of-icmps and remove a hack for detecting that case.

Finally, add some FIXME comments because the code duplication here is awful.

This should fix the remaining IR problem noted in:
https://bugs.llvm.org/show_bug.cgi?id=32524

llvm-svn: 299851
2017-04-10 16:55:57 +00:00
Craig Topper
3eec73e20b [InstCombine] Support folding of add instructions with vector constants into select operations
We currently only fold a scalar add of constants into selects. This patch extends the fold to support vector constants too.
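A rough sketch of the shape of the fold (helper name and structure are mine, not the actual InstCombine code); ConstantExpr::getAdd folds scalar and vector constants alike, which is what makes the vector case fall out:

```
#include "llvm/IR/Constants.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"

// add (select Cond, C1, C2), C3  ->  select Cond, C1+C3, C2+C3
// where C1, C2, C3 may now be vector constants as well as scalars.
static llvm::Value *foldAddOfSelect(llvm::SelectInst *Sel, llvm::Constant *C3,
                                    llvm::IRBuilder<> &B) {
  auto *C1 = llvm::dyn_cast<llvm::Constant>(Sel->getTrueValue());
  auto *C2 = llvm::dyn_cast<llvm::Constant>(Sel->getFalseValue());
  if (!C1 || !C2)
    return nullptr;
  return B.CreateSelect(Sel->getCondition(),
                        llvm::ConstantExpr::getAdd(C1, C3),
                        llvm::ConstantExpr::getAdd(C2, C3));
}
```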

Differential Revision: https://reviews.llvm.org/D31683

llvm-svn: 299847
2017-04-10 16:40:00 +00:00
Craig Topper
31cc143b51 [InstCombine] Use commutable and/or/xor matchers to simplify some code
Summary:
This is my first time using the commutable matchers, so I wanted to make sure I was doing it right.

Are there any other matcher tricks to further shrink this? Can we commute the whole match so we don't have to handle LHS and RHS separately?
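For reference, a small assumed example of what the commutable matchers buy (names are mine): m_c_And retries the match with the operands swapped, so a single match call covers both commuted forms.

```
#include "llvm/IR/PatternMatch.h"
#include "llvm/IR/Value.h"

using namespace llvm;
using namespace llvm::PatternMatch;

// Matches both (A & B) and (B & A) for an already-known value B, binding the
// other operand to A; without m_c_And this would take two match calls.
static bool isAndWith(Value *V, Value *B, Value *&A) {
  return match(V, m_c_And(m_Value(A), m_Specific(B)));
}
```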

Reviewers: davide, spatel

Reviewed By: davide

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31680

llvm-svn: 299840
2017-04-10 07:13:40 +00:00
Craig Topper
838d13e7ee [InstCombine] Make sure we preserve fast math flags when folding fp instructions into phi nodes
Summary: I noticed that the select folding code copied fast math flags, but the similar handling for phi nodes did not. This patch fixes the phi case to do the same thing as the select case.
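A minimal sketch of the idea (helper name is hypothetical, not the InstCombine code): whatever instruction replaces the folded FP operation should inherit the original's fast-math flags.

```
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instructions.h"

// Rebuild an FP binop with new operands and carry over nnan/ninf/fast etc.
static llvm::Value *rebuildWithFMF(llvm::BinaryOperator &Orig,
                                   llvm::Value *LHS, llvm::Value *RHS,
                                   llvm::IRBuilder<> &B) {
  llvm::Value *New = B.CreateBinOp(Orig.getOpcode(), LHS, RHS);
  if (auto *NewI = llvm::dyn_cast<llvm::Instruction>(New))
    NewI->copyFastMathFlags(&Orig);  // previously done for selects only
  return New;
}
```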

Reviewers: spatel, davide, majnemer, hfinkel

Reviewed By: davide

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31690

llvm-svn: 299838
2017-04-10 07:00:10 +00:00
Craig Topper
d8840d7b10 [InstCombine] use m_c_And and m_c_Xor to handle commuted versions of a transform.
llvm-svn: 299837
2017-04-10 06:53:28 +00:00
Craig Topper
7639460367 [InstCombine] Remove unnecessary dyn_cast to BinaryOperator around some matcher checks in visitXor.
The matchers themselves should be enough.

llvm-svn: 299835
2017-04-10 06:53:23 +00:00
Craig Topper
4738321f0c [InstCombine] Make the (A|B)^B -> A & ~B transform code consistent with the very similar (A&B)^B -> ~A & B code. This should be NFC except for the addition of a hasOneUse check.
I think this code is still overly complicated and should use matchers, but first I wanted to make it consistent.

llvm-svn: 299834
2017-04-10 06:53:21 +00:00
Craig Topper
4f16d82d6b [InstCombine] Use m_OneUse to shorten some code. NFC
llvm-svn: 299833
2017-04-10 06:53:19 +00:00
Xin Tong
34888c08bc [SCCP] Resolve indirect branch target when possible.
Summary:
Resolve indirect branch target when possible.
This potentially eliminates more basic blocks and results in better evaluation of phis and other instructions.

Reviewers: davide, efriedma, sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30322

llvm-svn: 299830
2017-04-10 00:33:25 +00:00
Sanjay Patel
16a054d5c7 [InstCombine] remove dead cases from icmp pair switches; NFCI
"PredicatesFoldable" returns false for signed/unsigned mismatched pairs,
so these cases should never exist. We'll default to 'unreachable' on those 
predicate combos instead.

Most of what's left in these switches belongs in InstSimplify (and may 
already be there), so there's probably more that can be done to reduce
this code.

llvm-svn: 299829
2017-04-09 21:51:34 +00:00
Davide Italiano
612d5a9c5c [Mem2Reg] Remove AliasSetTracker updating logic from the pass.
No caller has been passing it for a long time.

llvm-svn: 299827
2017-04-09 20:47:14 +00:00
Hal Finkel
a9d67cf601 [MemorySSA] Fix use of pointsToConstantMemory in isUseTriviallyOptimizableToLiveOnEntry
In isUseTriviallyOptimizableToLiveOnEntry, pointsToConstantMemory needs to be
called on the load's pointer operand, not on the result of the load (which
might not even be a pointer).
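Roughly, the corrected query looks like this (helper name is mine):

```
#include "llvm/Analysis/AliasAnalysis.h"
#include "llvm/IR/Instructions.h"

// The question is about the memory the load reads, so AA must be asked about
// the pointer operand, not about the loaded value (which may not be a pointer).
static bool loadsFromConstantMemory(llvm::AAResults &AA, llvm::LoadInst &LI) {
  return AA.pointsToConstantMemory(LI.getPointerOperand());
}
```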

llvm-svn: 299823
2017-04-09 12:57:50 +00:00
Craig Topper
afa07c5ef6 [InstCombine] Extend some OR combines to support vectors.
This adds vector support for these combines:
(X^C)|Y -> (X|Y)^C iff Y&C == 0
Y|(X^C) -> (X|Y)^C iff Y&C == 0
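A hedged sketch of the pattern (not the actual InstCombine code; the known-bits check is left to the caller): using m_Constant rather than m_ConstantInt is what lets C be a vector constant.

```
#include "llvm/ADT/STLExtras.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/PatternMatch.h"

using namespace llvm;
using namespace llvm::PatternMatch;

// (X ^ C) | Y  ->  (X | Y) ^ C   when Y & C == 0 (caller verifies via known bits).
static Value *foldOrOfXor(BinaryOperator &Or, IRBuilder<> &B,
                          function_ref<bool(Value *, Constant *)> noCommonBits) {
  Value *X, *Y;
  Constant *C;
  if (!match(&Or, m_c_Or(m_Xor(m_Value(X), m_Constant(C)), m_Value(Y))))
    return nullptr;
  if (!noCommonBits(Y, C))
    return nullptr;
  return B.CreateXor(B.CreateOr(X, Y), C);
}
```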

llvm-svn: 299822
2017-04-09 06:12:41 +00:00
Craig Topper
e63c21b1ba [InstCombine] Extend a canonicalization check to apply to vector constants too.
llvm-svn: 299821
2017-04-09 06:12:39 +00:00
Craig Topper
437c97622b [InstCombine] Use the SubOne helper function to shorten some code. NFC
llvm-svn: 299819
2017-04-09 06:12:34 +00:00
Craig Topper
9d1821b262 [InstCombine] rename variable for easier reading; NFC
We usually give constants a 'C' somewhere in the name...

llvm-svn: 299818
2017-04-09 06:12:31 +00:00
Gor Nishanov
bfb2a9db31 [coroutines] Make CoroSplit pass deterministic
The coro-split-after-phi.ll test was flaky due to non-determinism in
the coroutine frame construction that was sorting the spill
vector using a pointer to a def as a part of the key.

The sorting was intended to make sure that spills for the same def
are kept together; however, we populate the vector by processing
defs in order, so the spill entries will end up together anyway.

This change removes spill sorting and restores the determinism
in the test.

llvm-svn: 299809
2017-04-08 00:49:46 +00:00
Evgeniy Stepanov
349adbacca [cfi] Take over existing __cfi_check in CrossDSOCFI.
https://reviews.llvm.org/D31796 will emit a dummy __cfi_check in the
frontend.

llvm-svn: 299805
2017-04-07 23:00:20 +00:00
Daniel Berlin
a823656ce7 NewGVN: Make CongruenceClass a real class in preparation for splitting
NewGVN into analysis and eliminator.

llvm-svn: 299792
2017-04-07 18:38:09 +00:00
Gor Nishanov
138ad6c9c0 [coroutines] Insert spills of PHI instructions correctly
Summary:
Fix a bug where we were inserting a spill in between the PHIs at the beginning of the block.
Consider this fragment:

```
begin:
  %phi1 = phi i32 [ 0, %entry ], [ 2, %alt ]
  %phi2 = phi i32 [ 1, %entry ], [ 3, %alt ]
  %sp1 = call i8 @llvm.coro.suspend(token none, i1 false)
  switch i8 %sp1, label %suspend [i8 0, label %resume
                                  i8 1, label %cleanup]
resume:
  call i32 @print(i32 %phi1)
```
Unless we were spilling the argument or result of an invoke, we always inserted the spill immediately following the instruction.
The fix adds a check: if the spilled instruction is a PHI node, we select an appropriate insertion point with `getFirstInsertionPt()`, which
skips all the PHI nodes and EH pads.
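A hedged sketch of that insertion-point choice (names are mine, not the CoroSplit code):

```
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Instructions.h"

// A spill store cannot be placed between PHIs, so if the spilled value is
// itself a PHI, step past all PHI nodes and EH pads in its block first.
static llvm::Instruction *spillInsertPointFor(llvm::Instruction *Def) {
  if (llvm::isa<llvm::PHINode>(Def))
    return &*Def->getParent()->getFirstInsertionPt();
  return Def->getNextNode();  // otherwise, spill right after the definition
}
```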

Reviewers: majnemer, rnk

Reviewed By: rnk

Subscribers: qcolombet, EricWF, llvm-commits

Differential Revision: https://reviews.llvm.org/D31799

llvm-svn: 299771
2017-04-07 14:16:49 +00:00
Matthew Simpson
11fe2e9f2b Reapply r298620: [LV] Vectorize GEPs
This patch reapplies r298620. The original patch was reverted because of two
issues. First, the patch exposed a bug in InstCombine that caused the Chromium
builds to fail (PR32414). This issue was fixed in r299017. Second, the patch
introduced a bug in the vectorizer's scalars analysis that caused test suite
builds to fail on SystemZ. The scalars analysis was too aggressive and marked a
memory instruction scalar, even though it was going to be vectorized. This
issue has been fixed in the current patch and several new test cases for the
scalars analysis have been added.

llvm-svn: 299770
2017-04-07 14:15:34 +00:00
Craig Topper
33e0dbcc58 [InstCombine] Handle more commuted cases of ((A & B) | ~A) -> (~A | B)
llvm-svn: 299747
2017-04-07 07:32:00 +00:00
Daniel Berlin
d952ceae2f AliasAnalysis: Be less conservative about volatile than atomic.
Summary:
getModRefInfo is meant to answer the question "what impact does this
instruction have on a given memory location" (not even another
instruction).

A long debate about this on IRC came to the conclusion that the answer should be "nothing special".

That is, a noalias volatile store does not affect a memory location
just by being volatile. Note: DSE, GVN, and memdep currently
believe this, because memdep just goes behind AA's back after it says
"modref" right now.

See line 635 of memdep. Prior to this patch we would get modref there, then check aliasing,
and if it said noalias, we would continue.

getModRefInfo *already* has this same AA check; it just wasn't being used because volatile was
lumped in with ordering.

(I am separately testing whether this code in memdep is now dead except for the invariant load case)

Reviewers: jyknight, chandlerc

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D31726

llvm-svn: 299741
2017-04-07 01:28:36 +00:00
Craig Topper
72a622cac7 [InstCombine] Add more commuted patterns to support folding ((~A & B) | A) -> (A | B).
llvm-svn: 299737
2017-04-07 00:29:47 +00:00
Craig Topper
a521c30dc6 [InstCombine] Remove testing assert I accidentally left in r299710.
llvm-svn: 299715
2017-04-06 21:29:43 +00:00
Craig Topper
b4da6840d8 [InstCombine] When checking to see if we can turn subtracts of 2^n - 1 into xor, we only need to call computeKnownBits on the RHS, not the whole subtract. While there, use isMask instead of isPowerOf2(C+1)
Calling computeKnownBits on the RHS should allow us to recurse one step further. isMask is equivalent to isPowerOf2(C+1) except in the case where C is all ones. But that case was already handled earlier by creating a not, which is an xor with all ones. So this should be fine.
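For intuition, a tiny standalone check of the underlying identity (plain C++, not LLVM code): when C is a low-bit mask and every set bit of X lies inside C, the subtraction has no borrows, so C - X equals C ^ X.

```
#include <cassert>
#include <cstdint>

// Roughly what APInt::isMask tests: a non-empty run of ones starting at bit 0.
// isPowerOf2(C + 1) says the same thing except that it misses the all-ones
// value, where C + 1 wraps around to 0.
static bool isLowBitMask(uint64_t C) { return C != 0 && ((C + 1) & C) == 0; }

int main() {
  const uint64_t C = 0xFF;            // 2^8 - 1
  assert(isLowBitMask(C) && isLowBitMask(~0ULL));
  for (uint64_t X = 0; X <= C; ++X)   // every set bit of X lies inside C
    assert(C - X == (C ^ X));         // so the subtract can become an xor
  return 0;
}
```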

llvm-svn: 299710
2017-04-06 21:06:03 +00:00
Rong Xu
2bf4c59025 [PGO] Preserve GlobalsAA in pgo-memop-opt pass.
Preserve the GlobalsAA analysis in the memory intrinsic call optimization based on
profiled size.

llvm-svn: 299707
2017-04-06 20:56:00 +00:00
Craig Topper
7226d796aa [InstCombine] Remove redundant combine from visitAnd
This combine is fully handled by SimplifyDemandedInstructionBits as of r299658, where I fixed that code to ensure the Add/Sub had only a single user; otherwise it would fire and create additional instructions. That fix resulted in an improvement to the code generated for tsan, which is why I committed it before deleting this combine.

Differential Revision: https://reviews.llvm.org/D31543

llvm-svn: 299704
2017-04-06 20:41:48 +00:00
Mehdi Amini
db11fdfda5 Revert "Turn some C-style vararg into variadic templates"
This reverts commit r299699; the examples need to be updated.

llvm-svn: 299702
2017-04-06 20:23:57 +00:00
Mehdi Amini
579540a8f7 Turn some C-style vararg into variadic templates
Module::getOrInsertFunction uses C-style varargs instead of
variadic templates.

From a user perspective, it forces the use of an annoying nullptr
to mark the end of the varargs, and there's no type checking on the
arguments. A variadic template is an obvious solution to both
issues.
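A hypothetical before/after of a call site (the function name and argument types here are made up for illustration; around this revision getOrInsertFunction returned a Constant *):

```
#include "llvm/IR/Constant.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"

// Old C-style vararg interface: the call needed a nullptr terminator and did
// no type checking on the arguments, e.g.
//   M.getOrInsertFunction("callee", RetTy, I8PtrTy, nullptr);
// With the variadic-template interface the sentinel goes away and passing a
// non-Type* argument becomes a compile-time error.
static llvm::Constant *declareCallee(llvm::Module &M) {
  llvm::LLVMContext &Ctx = M.getContext();
  llvm::Type *RetTy = llvm::Type::getInt32Ty(Ctx);
  llvm::Type *I8PtrTy = llvm::Type::getInt8PtrTy(Ctx);
  return M.getOrInsertFunction("callee", RetTy, I8PtrTy);
}
```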

Patch by: Serge Guelton <serge.guelton@telecom-bretagne.eu>

Differential Revision: https://reviews.llvm.org/D31070

llvm-svn: 299699
2017-04-06 20:09:31 +00:00