92 Commits

Author SHA1 Message Date
Simon Pilgrim
c8312bdd16
[Headers][X86] Enable constexpr handling for pmulhw/pmulhuw intrinsics (#152540)
This patch updates the pmulhw/pmulhuw builtins to support constant
expression handling - extending the VectorExprEvaluator::VisitCallExpr
code that handles elementwise integer binop builtins.

Hopefully this can be used as a reference patch to show how to add future
target-specific constexpr handling with minimal code impact.

I've also enabled pmullw constexpr handling (which is tagged on
#152490) as they all use very similar tests.

I've also had to tweak the MMX -> SSE2 wrapper, as undefs are not
permitted in constexpr shuffle masks.

Fixes #152524
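For reference, a scalar model of what each pmulhw lane computes (pmulhuw is the unsigned analogue); this is illustrative, not code from the patch:
```
#include <stdint.h>
// One pmulhw lane: the high 16 bits of the full 32-bit signed product.
static int16_t mulhw_lane(int16_t a, int16_t b) {
    return (int16_t)(((int32_t)a * (int32_t)b) >> 16);
}
```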
2025-08-08 17:02:50 +01:00
James Y Knight
0431d6dab4
Clang: convert __m64 intrinsics to unconditionally use SSE2 instead of MMX. (#96540)
The MMX instruction set is legacy, and the SSE2 variants are in every
way superior, when they are available -- and they have been available
since the Pentium 4 was released, 20 years ago.

Therefore, we are switching the "MMX" intrinsics to depend on SSE2,
unconditionally. This change entirely drops the ability to generate
vectorized code using compiler intrinsics for chips with MMX but without
SSE2: the Intel Pentium MMX, Pentium II, and Pentium III (released
1997-1999), as well as AMD K6 and K7 series chips of around the same
timeframe. Targeting these older CPUs remains supported -- simply
without the ability to use MMX compiler intrinsics.

Migrating away from the use of MMX registers also fixes a rather
non-obvious requirement. The long-standing programming model for these
MMX intrinsics requires that the programmer be aware of the x87/MMX
mode-switching semantics, and manually call `_mm_empty()` between using
any MMX instruction and any x87 FPU instruction. If you neglect to, then
every future x87 operation will return a NaN result. This requirement is
not at all obvious to users of these intrinsic functions, and it causes
bugs that are very difficult to detect.

Worse, even if the user did write code that correctly calls
`_mm_empty()` in the right places, LLVM may sometimes reorder x87 and
MMX operations around each other, unaware of this mode-switching issue.

Eliminating the use of MMX registers eliminates this problem.
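For illustration, the old programming model that this change retires looked roughly like this (a hedged sketch; `old_model` is a hypothetical function, not code from the patch):
```
#include <mmintrin.h>
// Pre-change model: MMX ops alias the x87 register stack, so _mm_empty()
// had to separate MMX work from x87 FP math (on targets using x87 FP).
double old_model(__m64 a, __m64 b, double x, double y) {
    __m64 v = _mm_add_pi16(a, b); // MMX operation
    (void)v;
    _mm_empty();                  // mandatory before the x87 FP op below
    return x * y;                 // without _mm_empty(), this could be NaN
}
```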

This change also deletes the now-unnecessary MMX `__builtin_ia32_*`
functions from Clang. Only 3 MMX-related builtins remain in use --
`__builtin_ia32_emms`, used by `_mm_empty`, and
`__builtin_ia32_vec_{ext,set}_v4hi`, used by `_mm_extract_pi16` and
`_mm_insert_pi16`. Note particularly that the latter two lower to
generic, non-MMX, IR. Support for the LLVM intrinsics underlying these
removed builtins still remains, for the moment.

The file `clang/www/builtins.py` has been updated with mappings from the
newly-removed `__builtin_ia32` functions to the still-supported
equivalents in `mmintrin.h`.

(Originally uploaded at https://reviews.llvm.org/D86855 and
https://reviews.llvm.org/D94252)

Fixes issue #41665
Works towards #98272
2024-07-24 17:00:12 -04:00
James Y Knight
f0eb5587ce
Remove support for 3DNow!, both intrinsics and builtins. (#96246)
This set of instructions was supported only by AMD chips, starting with
the K6-2 (introduced 1998) and ending before the "Bulldozer" family
(2011). They were never much used, as they were effectively superseded
by the more widely implemented SSE (first implemented on the AMD side
in the Athlon XP in 2001).

This is being done as a precursor to the general removal of MMX
register usage. Since there is almost no usage of the 3DNow!
intrinsics, and no modern hardware even implements them, simple
removal seems like the best option.

(Clang half originally uploaded in https://reviews.llvm.org/D94213)

Works towards issue #41665 and issue #98272.
2024-07-16 12:08:48 -04:00
Aaron Ballman
7d644e1215 [C11/C2x] Change the behavior of the implicit function declaration warning
C89 had a questionable feature where the compiler would implicitly
declare a function that the user called but that was never previously
declared. The resulting function would be globally declared as
extern int func(); -- a function without a prototype that accepts zero
or more arguments.

C99 removed support for this questionable feature due to severe
security concerns. However, there was no deprecation period; C89 had
the feature, C99 didn't. So Clang (and GCC) both supported the
functionality as an extension in C99 and later modes.

C2x no longer supports that function signature as it now requires all
functions to have a prototype, and given the known security issues with
the feature, continuing to support it as an extension is not tenable.

This patch changes the diagnostic behavior for the
-Wimplicit-function-declaration warning group depending on the language
mode in effect. We continue to warn by default in C89 mode (due to the
feature being dangerous to use). However, because this feature will not
be supported in C2x mode, because we have diagnosed it as invalid for so
long, because of the security concerns, and because the workaround for
users is trivial (declare the function), we now default the extension
warning to an error in C99-C17 modes. This still gives users an easy
workaround if they are extensively using the extension in those modes
(they can disable the warning or use -Wno-error to downgrade the
error), but the new diagnostic makes it clearer that this feature is
not supported and should be avoided. In C2x mode, we no longer allow an
implicit function to be defined and treat the situation the same as any
other lookup failure.
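For illustration, a minimal (hypothetical) case that triggers the diagnostic:
```
/* No declaration of 'work' is visible at this point. */
int caller(void) {
    return work(42); /* C89: implicit 'extern int work()', warns by default;
                        C99-C17: now an error by default;
                        C2x: treated as an ordinary lookup failure */
}
```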

Differential Revision: https://reviews.llvm.org/D122983
2022-04-20 11:30:12 -04:00
Aaron Ballman
ed509fe296 Use functions with prototypes when appropriate; NFC
A significant number of our tests in C accidentally use functions
without prototypes. This patch converts the function signatures to have
a prototype for the situations where the test is not specific to K&R C
declarations. e.g.,

  void func();

becomes

  void func(void);

This is the tenth batch of tests being updated (there are a
significant number of other tests left to be updated).
2022-02-15 09:28:02 -05:00
Simon Pilgrim
09857a4bd1 [X86] Remove __builtin_ia32_padd/psub saturated intrinsics and use generic __builtin_elementwise_add/sub_sat
D117898 added the generic `__builtin_elementwise_add_sat` and `__builtin_elementwise_sub_sat` intrinsics with the same integer behaviour as the SSE/AVX instructions.

This patch removes the `__builtin_ia32_padd/psub` saturated intrinsics and just uses the generics - the existing tests see no changes:

```
__m256i test_mm256_adds_epi8(__m256i a, __m256i b) {
  // CHECK-LABEL: test_mm256_adds_epi8
  // CHECK: call <32 x i8> @llvm.sadd.sat.v32i8(<32 x i8> %{{.*}}, <32 x i8> %{{.*}})
  return _mm256_adds_epi8(a, b);
}
```
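The header change itself reduces to a pattern like this (a sketch of the assumed shape, not the exact header text):
```
static __inline__ __m256i _mm256_adds_epi8(__m256i __a, __m256i __b) {
  // saturated elementwise add via the generic builtin
  return (__m256i)__builtin_elementwise_add_sat((__v32qs)__a, (__v32qs)__b);
}
```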
2022-02-08 15:00:10 +00:00
Simon Pilgrim
3e50593b18 [X86] Remove __builtin_ia32_pmax/min intrinsics and use generic __builtin_elementwise_max/min
D111985 added the generic `__builtin_elementwise_max` and `__builtin_elementwise_min` intrinsics with the same integer behaviour as the SSE/AVX instructions

This patch removes the `__builtin_ia32_pmax/min` intrinsics and just uses `__builtin_elementwise_max/min` - the existing tests see no changes:
```
__m256i test_mm256_max_epu32(__m256i a, __m256i b) {
  // CHECK-LABEL: test_mm256_max_epu32
  // CHECK: call <8 x i32> @llvm.umax.v8i32(<8 x i32> %{{.*}}, <8 x i32> %{{.*}})
  return _mm256_max_epu32(a, b);
}
```
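In the header, the new definition reduces to something like this (assumed shape, not the exact header text):
```
static __inline__ __m256i _mm256_max_epu32(__m256i __a, __m256i __b) {
  // unsigned elementwise max via the generic builtin
  return (__m256i)__builtin_elementwise_max((__v8su)__a, (__v8su)__b);
}
```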
This requires us to add a `__v64qs` explicitly signed char vector type (we already have `__v16qs` and `__v32qs`).

Sibling patch to D117791

Differential Revision: https://reviews.llvm.org/D117798
2022-01-24 11:40:29 +00:00
Simon Pilgrim
e5147f82e1 [X86] Remove __builtin_ia32_pabs intrinsics and use generic __builtin_elementwise_abs
D111986 added the generic `__builtin_elementwise_abs()` intrinsic with the same integer absolute behaviour as the SSE/AVX instructions (abs(INT_MIN) == INT_MIN)

This patch removes the `__builtin_ia32_pabs*` intrinsics and just uses `__builtin_elementwise_abs` - the existing tests see no changes:
```
__m256i test_mm256_abs_epi8(__m256i a) {
  // CHECK-LABEL: test_mm256_abs_epi8
  // CHECK: [[ABS:%.*]] = call <32 x i8> @llvm.abs.v32i8(<32 x i8> %{{.*}}, i1 false)
  return _mm256_abs_epi8(a);
}
```
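In the header, the new definition reduces to something like this (assumed shape, not the exact header text):
```
static __inline__ __m256i _mm256_abs_epi8(__m256i __a) {
  // elementwise integer absolute value via the generic builtin
  return (__m256i)__builtin_elementwise_abs((__v32qs)__a);
}
```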
This requires us to add a `__v64qs` explicitly signed char vector type (we already have `__v16qs` and `__v32qs`).

Differential Revision: https://reviews.llvm.org/D117791
2022-01-24 11:25:21 +00:00
Simon Pilgrim
0abaf64580 Revert rG4727d29d908f9dd608dd97a58c0af1ad579fd3ca "[X86] Remove __builtin_ia32_pabs intrinsics and use generic __builtin_elementwise_abs"
Some build bots are referencing the `__builtin_ia32_pabs` intrinsics via alternative headers
2022-01-21 12:35:36 +00:00
Simon Pilgrim
3ef88b3184 Revert rG8ee135dcf8ff060656ad481c3e980fe8763576f5 "[X86] Remove __builtin_ia32_pmax/min intrinsics and use generic __builtin_elementwise_max/min"
Some build bots are referencing the `__builtin_ia32_pmax/min` intrinsics via alternative headers
2022-01-21 12:34:19 +00:00
Simon Pilgrim
8ee135dcf8 [X86] Remove __builtin_ia32_pmax/min intrinsics and use generic __builtin_elementwise_max/min
D111985 added the generic `__builtin_elementwise_max` and `__builtin_elementwise_min` intrinsics with the same integer behaviour as the SSE/AVX instructions

This patch removes the `__builtin_ia32_pmax/min` intrinsics and just uses `__builtin_elementwise_max/min` - the existing tests see no changes:
```
__m256i test_mm256_max_epu32(__m256i a, __m256i b) {
  // CHECK-LABEL: test_mm256_max_epu32
  // CHECK: call <8 x i32> @llvm.umax.v8i32(<8 x i32> %{{.*}}, <8 x i32> %{{.*}})
  return _mm256_max_epu32(a, b);
}
```
This requires us to add a `__v64qs` explicitly signed char vector type (we already have `__v16qs` and `__v32qs`).

Sibling patch to D117791

Differential Revision: https://reviews.llvm.org/D117798
2022-01-21 12:24:58 +00:00
Simon Pilgrim
4727d29d90 [X86] Remove __builtin_ia32_pabs intrinsics and use generic __builtin_elementwise_abs
D111986 added the generic `__builtin_elementwise_abs()` intrinsic with the same integer absolute behaviour as the SSE/AVX instructions (abs(INT_MIN) == INT_MIN)

This patch removes the `__builtin_ia32_pabs*` intrinsics and just uses `__builtin_elementwise_abs` - the existing tests see no changes:
```
__m256i test_mm256_abs_epi8(__m256i a) {
  // CHECK-LABEL: test_mm256_abs_epi8
  // CHECK: [[ABS:%.*]] = call <32 x i8> @llvm.abs.v32i8(<32 x i8> %{{.*}}, i1 false)
  return _mm256_abs_epi8(a);
}
```
This requires us to add a `__v64qs` explicitly signed char vector type (we already have `__v16qs` and `__v32qs`).

Differential Revision: https://reviews.llvm.org/D117791
2022-01-21 11:59:08 +00:00
Craig Topper
caf6b71ab2 [X86] Change the IR sequence for _mm_storeh_pi and _mm_storel_pi to perform the store as a <2 x float> instead of i64.
This is similar to what we do for loadl_pi and loadh_pi.

llvm-svn: 365669
2019-07-10 17:11:29 +00:00
Russell Gallop
4bcba163b1 [X86][test] Add test cases using immediates to builtins-x86.c
These builtins should work with immediate or variable shift operand for
gcc compatibility.

Differential Revision: https://reviews.llvm.org/D62850

llvm-svn: 362786
2019-06-07 09:51:44 +00:00
Andrew Savonichev
fa8cd7691a [OpenCL] Use long instead of long long in x86 builtins
Summary: According to the C99 standard, long long is at least 64 bits in
size. However, OpenCL C defines long long as a 128-bit signed
integer. This prevents one from using x86 builtins when compiling
OpenCL C code for x86 targets. The patch changes long long to long for
OpenCL only.

Patch by: Alexander Batashev <alexander.batashev@intel.com>

Reviewers: craig.topper, Ka-Ka, eandrews, erichkeane, Anastasia

Reviewed By: Ka-Ka, erichkeane, Anastasia

Subscribers: a.elovikov, yaxunl, Anastasia, cfe-commits, ivankara, etyurin, asavonic

Tags: #clang

Differential Revision: https://reviews.llvm.org/D62580

llvm-svn: 362391
2019-06-03 12:34:59 +00:00
Craig Topper
931779761e Recommit r351160 "[X86] Make _xgetbv/_xsetbv on non-windows platforms"
V8 has been fixed now.

llvm-svn: 351391
2019-01-16 22:56:25 +00:00
Benjamin Kramer
9c53890833 Revert "[X86] Make _xgetbv/_xsetbv on non-windows platforms"
This reverts commit r351160. Breaks building v8.

llvm-svn: 351210
2019-01-15 17:23:36 +00:00
Craig Topper
69aed7c364 [X86] Make _xgetbv/_xsetbv on non-windows platforms
Summary:
This patch attempts to redo what was tried in r278783, but was reverted.

These intrinsics should be available on non-windows platforms with an "xsave" feature check. But on Windows platforms they shouldn't have a feature check since that's how MSVC behaves.

To accomplish this I've added an MS builtin with no feature check, and a normal gcc builtin with a feature check. When _MSC_VER is not defined, _xgetbv/_xsetbv will be macros pointing to the gcc builtin name.

I've moved the forward declarations from intrin.h to immintrin.h to match the MSDN documentation and used that as the header file for the MS builtin.

I'm not super happy with this implementation, and I'm open to suggestions for better ways to do it.
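Concretely, the non-MSVC arrangement described above looks something like this (a sketch inferred from the description, not the exact header text):
```
/* immintrin.h, when _MSC_VER is not defined: map the MS names onto the
   gcc-style builtins, which carry the "xsave" feature check. */
#ifndef _MSC_VER
#define _xgetbv(A) __builtin_ia32_xgetbv((long long)(A))
#define _xsetbv(A, B) __builtin_ia32_xsetbv((unsigned int)(A), (unsigned long long)(B))
#endif
```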

Reviewers: rnk, RKSimon, spatel

Reviewed By: rnk

Subscribers: cfe-commits

Differential Revision: https://reviews.llvm.org/D56686

llvm-svn: 351160
2019-01-15 05:03:18 +00:00
Craig Topper
6fb26f93ef [X86] Replace __builtin_ia32_vbroadcastf128_pd256 and __builtin_ia32_vbroadcastf128_ps256 with an unaligned load intrinsic and a __builtin_shufflevector call.
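The replacement pattern is roughly (a sketch of the assumed shape):
```
static __inline__ __m256 _mm256_broadcast_ps(__m128 const *__a) {
  // unaligned 128-bit load, then duplicate it into both 128-bit halves
  __m128 __b = _mm_loadu_ps((const float *)__a);
  return (__m256)__builtin_shufflevector((__v4sf)__b, (__v4sf)__b,
                                         0, 1, 2, 3, 0, 1, 2, 3);
}
```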
llvm-svn: 333853
2018-06-03 19:42:59 +00:00
Craig Topper
9efb77e25f [X86] Remove a builtin that should have been removed in r332882.
llvm-svn: 332909
2018-05-21 22:10:02 +00:00
Craig Topper
842171de36 [X86] Use __builtin_convertvector to implement some of the packed integer to packed float conversion intrinsics.
I believe this is safe assuming the default FP environment. The conversion might be inexact, but it can never overflow the FP type, so this shouldn't be undefined behavior for the uitofp/sitofp instructions.

We already do something similar for scalar conversions.
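For example, _mm256_cvtepi32_ps can be expressed as (assumed shape):
```
static __inline__ __m256 _mm256_cvtepi32_ps(__m256i __a) {
  // sitofp on each lane: may be inexact, but can never overflow float
  return (__m256)__builtin_convertvector((__v8si)__a, __v8sf);
}
```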

Differential Revision: https://reviews.llvm.org/D46863

llvm-svn: 332882
2018-05-21 20:19:17 +00:00
Alexander Ivchenko
0fb8c877c4 This patch aims to match the changes introduced
in gcc by https://gcc.gnu.org/ml/gcc-cvs/2018-04/msg00534.html.
The -mibt feature flag is being removed, and the -fcf-protection
option now also defines a CET macro and causes errors when used
on non-X86 targets, while X86 targets no longer check for -mibt
and -mshstk to determine if -fcf-protection is supported. -mshstk
is now used only to determine availability of shadow stack intrinsics.

Comes with an LLVM patch (D46882).

Patch by mike.dvoretsky

Differential Revision: https://reviews.llvm.org/D46881

llvm-svn: 332704
2018-05-18 11:56:21 +00:00
Gabor Buella
b220dd2b6c [X86] Introduce cldemote intrinsic
Reviewers: craig.topper, zvi

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D45257

llvm-svn: 329993
2018-04-13 07:37:24 +00:00
Gabor Buella
a052016ef2 [x86] wbnoinvd intrinsic
The WBNOINVD instruction writes back all modified
cache lines in the processor’s internal cache to main memory
but does not invalidate (flush) the internal caches.

Reviewers: craig.topper, zvi, ashlykov

Reviewed By: craig.topper

Differential Revision: https://reviews.llvm.org/D43817

llvm-svn: 329848
2018-04-11 20:09:09 +00:00
Oren Ben Simhon
fec21ec0c6 Control-Flow Enforcement Technology - Shadow Stack and Indirect Branch Tracking support (Clang side)
The shadow stack solution introduces a new stack for return addresses only.
The stack has a Shadow Stack Pointer (SSP) that points to the last address to which we expect to return.
If we return to a different address, an exception is triggered.
This patch includes shadow stack intrinsics as well as the corresponding CET header.
It includes CET clang flags for shadow stack and Indirect Branch Tracking.

For more information, please see the following:
https://software.intel.com/sites/default/files/managed/4d/2a/control-flow-enforcement-technology-preview.pdf

Differential Revision: https://reviews.llvm.org/D40224

Change-Id: I79ad0925a028bbc94c8ecad75f6daa2f214171f1
llvm-svn: 318995
2017-11-26 12:34:54 +00:00
Yael Tsafrir
23e7733230 [X86] Lower _mm[256|512]_[mask[z]]_avg_epu[8|16] intrinsics to native llvm IR
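Per lane, the operation being lowered is (a scalar reference, for illustration only):
```
#include <stdint.h>
// PAVGB semantics for one lane: round-half-up average, computed widened
// so the intermediate sum cannot overflow.
static uint8_t avg_epu8_lane(uint8_t a, uint8_t b) {
    return (uint8_t)(((uint16_t)a + (uint16_t)b + 1) >> 1);
}
```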
Differential Revision: https://reviews.llvm.org/D37562

llvm-svn: 313011
2017-09-12 07:46:32 +00:00
Craig Topper
4574226c3f [X86] Clzero flag addition and inclusion under znver1
1. Adds the command line flag for clzero.
2. Includes the clzero flag under znver1.
3. Defines the macro for clzero.
4. Adds a new file which has the intrinsic definition for clzero instruction.

Patch by Ganesh Gopalasubramanian with some additional tests from me.

Differential revision: https://reviews.llvm.org/D29386

llvm-svn: 294559
2017-02-09 06:10:14 +00:00
Albert Gutowski
727ab8a803 Add some MS aliases for existing intrinsics
Reviewers: thakis, compnerd, majnemer, rsmith, rnk

Subscribers: alexshap, cfe-commits

Differential Revision: https://reviews.llvm.org/D24330

llvm-svn: 281540
2016-09-14 21:19:43 +00:00
Albert Gutowski
9918cb6573 Reverse commit 281375 (breaks building Chromium)
llvm-svn: 281399
2016-09-13 21:24:51 +00:00
Albert Gutowski
ae3fb3113f Add some MS aliases for existing intrinsics
Reviewers: thakis, compnerd, majnemer, rsmith, rnk

Subscribers: cfe-commits

Differential Revision: https://reviews.llvm.org/D24330

llvm-svn: 281375
2016-09-13 19:26:42 +00:00
Reid Kleckner
66e7717b46 Revert "[X86] Add xgetbv/xsetbv intrinsics to non-windows platforms"
This reverts commit r278783.  It breaks usage of _xgetbv on Windows.

llvm-svn: 278814
2016-08-16 16:04:14 +00:00
Marina Yatsina
197b65f833 [X86] Add xgetbv/xsetbv intrinsics to non-windows platforms
commit on behalf of guyblank

Differential Revision: https://reviews.llvm.org/D21959

llvm-svn: 278783
2016-08-16 08:13:36 +00:00
Simon Pilgrim
e3b9ee0645 [X86][SSE] Reimplement SSE fp2si conversion intrinsics instead of using generic IR
D20859 and D20860 attempted to replace the SSE (V)CVTTPS2DQ and VCVTTPD2DQ truncating conversions with generic IR.

It turns out that the behaviour of these intrinsics is different enough from generic IR that this will cause problems: INF/NaN/out-of-range values are guaranteed to result in a 0x80000000 value, which plays havoc with constant folding (which converts them to either zero or UNDEF). This is also an issue with the scalar implementations (which were already generic IR and what I was trying to match).

This patch changes both scalar and packed versions back to using x86-specific builtins.

It also deals with the other scalar conversion cases that are runtime rounding mode dependent and can have similar issues with constant folding.
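For reference, the hardware behaviour that generic IR does not model (a scalar sketch, illustrative only):
```
#include <math.h>
#include <stdint.h>
// CVTTSS2SI-style semantics: NaN/INF/out-of-range inputs yield the
// "integer indefinite" value 0x80000000 instead of undefined behavior.
static int32_t cvttss2si_model(float x) {
    if (isnan(x) || x >= 2147483648.0f || x < -2147483648.0f)
        return INT32_MIN;   // 0x80000000
    return (int32_t)x;      // in range: truncate toward zero
}
```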

Differential Revision: https://reviews.llvm.org/D22105

llvm-svn: 276102
2016-07-20 10:18:01 +00:00
Craig Topper
a1bee4398c [X86] Remove dead builtins that don't exist in the backend intrinsic file and don't have custom handling in CGBuiltins.cpp either.
llvm-svn: 274825
2016-07-08 05:11:47 +00:00
Simon Pilgrim
beca5f295c [Clang][X86] Convert non-temporal store builtins to generic __builtin_nontemporal_store in headers
We can now use __builtin_nontemporal_store instead of target-specific builtins for naturally aligned nontemporal stores, which avoids the need for handling in CGBuiltin.cpp.

The scalar integer nontemporal (unaligned) store builtins will have to wait, as __builtin_nontemporal_store currently assumes natural alignment and doesn't accept the 'packed struct' trick that we use for normal unaligned loads/stores.

The nontemporal loads require further backend support before we can safely convert them to __builtin_nontemporal_load.
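The header pattern for the converted stores is roughly (assumed shape):
```
static __inline__ void _mm_stream_ps(float *__p, __m128 __a) {
  // naturally aligned nontemporal store via the generic builtin
  __builtin_nontemporal_store((__v4sf)__a, (__v4sf *)__p);
}
```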

Differential Revision: http://reviews.llvm.org/D21272

llvm-svn: 272540
2016-06-13 09:57:52 +00:00
Simon Pilgrim
00880511b1 [X86][SSE] Replace (V)CVTTPS2DQ and VCVTTPD2DQ truncating (round to zero) f32/f64 to i32 with generic IR (clang)
The 'cvtt' truncation (round to zero) conversions can be safely represented as generic __builtin_convertvector (fptosi) calls instead of x86 intrinsics. We already do this (implicitly) for the scalar equivalents.

Note: I looked at updating _mm_cvttpd_epi32 as well but this still requires a lot more backend work to correctly lower (both for debug and optimized builds).

Differential Revision: http://reviews.llvm.org/D20859

llvm-svn: 271436
2016-06-01 21:46:51 +00:00
Craig Topper
09175dab31 [X86] Replace unaligned store builtins in SSE/AVX intrinsic files with code that will compile to a native unaligned store. Remove the builtins since they are no longer used.
Intrinsics will be removed from llvm in a future commit.
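The usual clang-headers pattern for a native unaligned store looks like this (a sketch):
```
static __inline__ void _mm_storeu_ps(float *__p, __m128 __a) {
  // the packed, may_alias struct makes this compile to an align-1 store
  struct __storeu_ps {
    __m128 __v;
  } __attribute__((__packed__, __may_alias__));
  ((struct __storeu_ps *)__p)->__v = __a;
}
```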

llvm-svn: 271214
2016-05-30 17:10:30 +00:00
Simon Pilgrim
91b77ceaed [X86][SSE] Replace VPMOVSX and (V)PMOVZX integer extension intrinsics with generic IR (clang)
The VPMOVSX and (V)PMOVZX sign/zero extension intrinsics can be safely represented as generic __builtin_convertvector calls instead of x86 intrinsics.

This patch removes the clang builtins and their use in the sse2/avx headers - a companion patch will remove/auto-upgrade the llvm intrinsics.

Note: We already did this for SSE41 PMOVSX sometime ago.
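For example, the zero-extension case becomes (assumed shape):
```
static __inline__ __m256i _mm256_cvtepu8_epi16(__m128i __V) {
  // the unsigned source element type makes the conversion a zero-extension
  return (__m256i)__builtin_convertvector((__v16qu)__V, __v16hi);
}
```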

Differential Revision: http://reviews.llvm.org/D20684

llvm-svn: 271106
2016-05-28 08:12:45 +00:00
Simon Pilgrim
90770c7c76 [X86][SSE] Replace lossless i32/f32 to f64 conversion intrinsics with generic IR
Both the (V)CVTDQ2PD(Y) (i32 to f64) and (V)CVTPS2PD(Y) (f32 to f64) conversion instructions are lossless and can be safely represented as generic __builtin_convertvector calls instead of x86 intrinsics without affecting final codegen.

This patch removes the clang builtins and their use in the sse2/avx headers - a future patch will deal with removing the llvm intrinsics, but that will require a bit more work.
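For example, _mm_cvtepi32_pd becomes (assumed shape):
```
static __inline__ __m128d _mm_cvtepi32_pd(__m128i __a) {
  // take the low two i32 lanes, then convert losslessly to f64
  return (__m128d)__builtin_convertvector(
      __builtin_shufflevector((__v4si)__a, (__v4si)__a, 0, 1), __v2df);
}
```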

Differential Revision: http://reviews.llvm.org/D20528

llvm-svn: 270499
2016-05-23 22:13:02 +00:00
Ashutosh Nema
51c9dd0081 Add new intrinsic support for MONITORX and MWAITX instructions
Summary:
MONITORX/MWAITX instructions provide a similar capability to the MONITOR/MWAIT
pair while adding a timer function, such that a termination of the MWAITX
instruction also occurs when the timer expires. The presence of the MONITORX and
MWAITX instructions is indicated by CPUID 8000_0001, ECX, bit 29.

The MONITORX and MWAITX instructions are intercepted by the same bits that
intercept MONITOR and MWAIT. The MONITORX instruction establishes a range to be
monitored. The MWAITX instruction causes the processor to stop instruction
execution and enter an implementation-dependent optimized state until
occurrence of a class of events.

The opcode of the MONITORX instruction is "0F 01 FA"; the opcode of the MWAITX
instruction is "0F 01 FB". This opcode information is used in adding tests for
the disassembler.

These instructions are enabled for AMD's bdver4 architecture.
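A hedged usage sketch (intrinsic names and parameter order as I understand mwaitxintrin.h; compile with -mmwaitx; `wait_for_flag` is hypothetical):
```
#include <x86intrin.h>
// Wait until the monitored address is written (or another wake event).
void wait_for_flag(volatile int *flag) {
    _mm_monitorx((void *)flag, 0, 0); // establish the monitored address range
    _mm_mwaitx(0, 0, 0);              // (extensions, hints, clock) - assumed order
}
```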

Patch by Ganesh Gopalasubramanian!

Reviewers: echristo, craig.topper

Subscribers: RKSimon, joker.eph, llvm-commits, cfe-commits

Differential Revision: http://reviews.llvm.org/D19796

llvm-svn: 269907
2016-05-18 11:56:23 +00:00
Andrea Di Biagio
8bb12d0a77 [x86] Fix maskload/store intrinsic definitions in avxintrin.h
According to the Intel documentation, the mask operand of the maskload and
maskstore intrinsics is always a vector of packed integer/long integer values.
This patch introduces the following two changes (see the prototype sketch below):
 1. It fixes the avx maskload/store intrinsic definitions in avxintrin.h.
 2. It changes BuiltinsX86.def to match the correct gcc definitions for avx
    maskload/store (see D13861 for more details).
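After the fix, the avx maskload prototypes take an integer-vector mask, e.g. (a sketch):
```
// mask is a packed integer vector (__m256i), not a float/double vector
__m256d _mm256_maskload_pd(double const *__p, __m256i __m);
__m256  _mm256_maskload_ps(float const *__p, __m256i __m);
```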

Differential Revision: http://reviews.llvm.org/D13861

llvm-svn: 250816
2015-10-20 11:19:54 +00:00
Craig Topper
e33f51fa91 [X86] Add fxsr feature name for fxsave/fxrestore builtins.
llvm-svn: 250498
2015-10-16 06:22:36 +00:00
Eric Christopher
4fb4fbc5d6 Add the minimum target features that these tests depend upon.
llvm-svn: 250448
2015-10-15 20:04:40 +00:00
Amjad Aboud
2b9b8a5921 [X86] Add XSAVE intrinsic family
Add intrinsics for the
  XSAVE instructions (XSAVE/XSAVE64/XRSTOR/XRSTOR64)
  XSAVEOPT instructions (XSAVEOPT/XSAVEOPT64)
  XSAVEC instructions (XSAVEC/XSAVEC64)
  XSAVES instructions (XSAVES/XSAVES64/XRSTORS/XRSTORS64)
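A hedged usage sketch (names per xsaveintrin.h; compile with -mxsave; the mask value and buffer size here are illustrative):
```
#include <immintrin.h>
// XSAVE area must be 64-byte aligned; mask selects x87+SSE state (XCR0 bits 0-1).
static char xsave_area[4096] __attribute__((aligned(64)));
void save_restore(void) {
    _xsave(xsave_area, 3);
    _xrstor(xsave_area, 3);
}
```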

Differential Revision: http://reviews.llvm.org/D13014

llvm-svn: 250158
2015-10-13 12:29:35 +00:00
Simon Pilgrim
12919f7e49 [X86][SSE] Replace 128-bit SSE41 PMOVSX intrinsics with native IR
128-bit vector integer sign extensions correctly lower to the pmovsx instructions even for debug builds.

This patch removes the builtins and reimplements the _mm_cvtepi*_epi* intrinsics using __builtin_shufflevector (to extract the bottommost subvector) and __builtin_convertvector (to actually perform the sign extension).
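The reimplementation pattern is roughly (assumed shape):
```
static __inline__ __m128i _mm_cvtepi8_epi16(__m128i __V) {
  // extract the low 8 lanes, then sign-extend each to 16 bits
  return (__m128i)__builtin_convertvector(
      __builtin_shufflevector((__v16qs)__V, (__v16qs)__V,
                              0, 1, 2, 3, 4, 5, 6, 7),
      __v8hi);
}
```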

Differential Revision: http://reviews.llvm.org/D12835

llvm-svn: 248092
2015-09-19 15:12:38 +00:00
Simon Pilgrim
a79dfb3223 [X86] Add __builtin_ia32_undef* intrinsics to test
Minor tweak to rL246083

llvm-svn: 246200
2015-08-27 20:29:13 +00:00
Michael Kuperstein
a3c7b74208 [X86] Add FXSR intrinsics
Add intrinsics for the FXSR instructions (FXSAVE/FXSAVE64/FXRSTOR/FXRSTOR64)

These were previously declared in Intrin.h for MSVC compatibility, but now
that we have them implemented, these declarations can be removed.
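A hedged usage sketch (names per fxsrintrin.h; compile with -mfxsr; `checkpoint_fpu` is hypothetical):
```
#include <immintrin.h>
// The FXSAVE area is 512 bytes and must be 16-byte aligned.
static char fx_area[512] __attribute__((aligned(16)));
void checkpoint_fpu(void) {
    _fxsave(fx_area);
    _fxrstor(fx_area);
}
```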

llvm-svn: 241053
2015-06-30 09:45:38 +00:00
Sanjay Patel
0c351aba25 [X86, AVX] replace vextractf128 intrinsics with generic shuffles
This is very much like D8088 (checked in at r231792).

Now that we've replaced the vinsertf128 intrinsics,
do the same for their extract twins.
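The generic-shuffle form of the extract is roughly this, for the high-half case (`extractf128_hi` is a hypothetical name):
```
static __inline__ __m128 extractf128_hi(__m256 __a) {
  // lanes 4-7 of the 256-bit vector become the 128-bit result
  return __builtin_shufflevector(__a, __a, 4, 5, 6, 7);
}
```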

Differential Revision: http://reviews.llvm.org/D8275

llvm-svn: 232052
2015-03-12 15:50:36 +00:00
Sanjay Patel
7f6aa52e93 [X86, AVX] Replace vinsertf128 intrinsics with generic shuffles.
We want to replace as much custom x86 shuffling via intrinsics
as possible because pushing the code down the generic shuffle
optimization path allows for better codegen and less complexity
in LLVM.
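For example, inserting into the high half works in two generic shuffles, roughly (a sketch; `insertf128_hi` is a hypothetical name):
```
static __inline__ __m256 insertf128_hi(__m256 __a, __m128 __b) {
  // widen __b to 8 lanes (top four undefined) ...
  __m256 __w = __builtin_shufflevector(__b, __b, 0, 1, 2, 3, -1, -1, -1, -1);
  // ... then keep __a's low half and take __b's lanes for the high half
  return __builtin_shufflevector(__a, __w, 0, 1, 2, 3, 8, 9, 10, 11);
}
```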

This is the sibling patch for the LLVM half of this change:
http://reviews.llvm.org/D8086

Differential Revision: http://reviews.llvm.org/D8088

llvm-svn: 231792
2015-03-10 15:19:26 +00:00
Craig Topper
b1bc5cf4bc [X86] Remove pblendw and pblendd builtins that aren't being used by the intrinsic headers.
llvm-svn: 230738
2015-02-27 06:54:25 +00:00