GCC 12 (https://gcc.gnu.org/PR101696) allows `arch=x86-64`,
`arch=x86-64-v2`, `arch=x86-64-v3`, and `arch=x86-64-v4` in the
target_clones function attribute. This patch ports the feature to Clang.
* Set KeyFeature to `x86-64{,-v2,-v3,-v4}` in `Processors[]`, to be used
by X86TargetInfo::multiVersionSortPriority
* builtins: change `__cpu_features2` to an array like libgcc. Define
`FEATURE_X86_64_{BASELINE,V2,V3,V4}` and dependent ISA feature bits.
* CGBuiltin.cpp: update EmitX86CpuSupports to handle `arch=x86-64*`.
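As a usage sketch (the function and its body are illustrative, not from the
patch):

// One clone is emitted per listed level; the resolver picks the best
// supported level at load time, falling back to "default".
__attribute__((target_clones("arch=x86-64-v3", "arch=x86-64-v2", "default")))
int dot(const int *a, const int *b, int n) {
  int s = 0;
  for (int i = 0; i < n; ++i)
    s += a[i] * b[i];
  return s;
}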
Closes https://github.com/llvm/llvm-project/issues/55830
Reviewed By: pengfei
Differential Revision: https://reviews.llvm.org/D158329
The static analyzer complains about large function call parameters that are passed by value in CGBuiltin.cpp.
1. In CodeGenFunction::EmitSMELdrStr(clang::SVETypeFlags, llvm::SmallVectorImpl<llvm::Value *> &, unsigned int): We are passing parameter TypeFlags of type clang::SVETypeFlags by value.
2. In CodeGenFunction::EmitSMEZero(clang::SVETypeFlags, llvm::SmallVectorImpl<llvm::Value *> &, unsigned int): We are passing parameter TypeFlags of type clang::SVETypeFlags by value.
3. In CodeGenFunction::EmitSMEReadWrite(clang::SVETypeFlags, llvm::SmallVectorImpl<llvm::Value *> &, unsigned int): We are passing parameter TypeFlags of type clang::SVETypeFlags by value.
4. In CodeGenFunction::EmitSMELd1St1(clang::SVETypeFlags, llvm::SmallVectorImpl<llvm::Value *> &, unsigned int): We are passing parameter TypeFlags of type clang::SVETypeFlags by value.
In many other places in CGBuiltin.cpp, the TypeFlags parameter of type
clang::SVETypeFlags is passed by reference.
clang::SVETypeFlags inherits from several other types.
This patch passes the TypeFlags parameter by reference instead of by value in
these functions.
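A sketch of the change for one of the functions (parameter lists abbreviated;
this is illustrative, not the exact diff):

// Before: the flags object is copied on every call.
llvm::Value *EmitSMELd1St1(SVETypeFlags TypeFlags,
                           llvm::SmallVectorImpl<llvm::Value *> &Ops,
                           unsigned IntID);
// After: passed by const reference, avoiding the copy.
llvm::Value *EmitSMELd1St1(const SVETypeFlags &TypeFlags,
                           llvm::SmallVectorImpl<llvm::Value *> &Ops,
                           unsigned IntID);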
Reviewed By: tahonermann, sdesmalen
Differential Revision: https://reviews.llvm.org/D158522
Currently both Clang and GCC support the following set of flags that control
code gen of signed overflow:
* -fwrapv: overflow is defined as in two's complement
* -ftrapv: overflow traps
* -fsanitize=signed-integer-overflow: if overflow is undefined (no -fwrapv),
its behaviour is controlled by the UBSan runtime; this overrides -ftrapv
However, clang ignores these flags for __builtin_abs(int) and its higher-width
versions, so passing the minimum integer value always produces poison.
The same holds for *abs(), which are not handled in the frontend at all but are
folded to llvm.abs.* intrinsics during InstCombinePass. The intrinsics are not
instrumented by UBSan, so these functions need special handling as well.
This patch does a few things:
* Handle *abs() in CGBuiltin the same way as __builtin_*abs()
* -fsanitize=signed-integer-overflow now properly instruments abs() with UBSan
* -fwrapv and -ftrapv handling for abs() is made consistent with GCC
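A minimal example of the edge case this makes consistent:

#include <limits.h>
// With -fwrapv this returns INT_MIN (wraps); with -ftrapv it traps; with
// -fsanitize=signed-integer-overflow UBSan reports the overflow instead of
// silently yielding poison.
int min_abs(void) { return __builtin_abs(INT_MIN); }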
Fixes #45129 and #45794
Reviewed By: efriedma, MaskRay
Differential Revision: https://reviews.llvm.org/D156821
This reverts commit 1783185790de29b24d3850d33d9a9d586e6bbd39,
which broke the buildbots, starting with its first build at
https://lab.llvm.org/buildbot/#/builders/85/builds/18390
(N.B. I think the patch is uncovering real bugs; the revert is simply to keep
the tree green and the buildbots useful, because I'm not confident in how to
fix forward all of the bugs it found.)
Currently both Clang and GCC support the following set of flags that
control code gen of signed overflow:
* -fwrapv: overflow is defined as in two's complement
* -ftrapv: overflow traps
* -fsanitize=signed-integer-overflow: if overflow is undefined (no
-fwrapv), its behaviour is controlled by the UBSan runtime; this overrides -ftrapv.
However, clang ignores these flags for __builtin_abs(int) and its
higher-width versions, so passing the minimum integer value always
produces poison.
The same holds for *abs(), which are not handled in the frontend at all
but are folded to llvm.abs.* intrinsics during InstCombinePass. The
intrinsics are not instrumented by UBSan, so these functions need
special handling as well.
This patch does a few things:
* Handle *abs() in CGBuiltin the same way as __builtin_*abs()
* -fsanitize=signed-integer-overflow now properly instruments abs() with
UBSan
* -fwrapv and -ftrapv handling for abs() is made consistent with GCC
Fixes https://github.com/llvm/llvm-project/issues/45129
Fixes https://github.com/llvm/llvm-project/issues/45794
Differential Revision: https://reviews.llvm.org/D156821
This allows use with stacks in a non-zero address space. llvm_ptr_ty should
never be used. This could use some more percolation up through MLIR,
but this is enough to fix the existing tests.
https://reviews.llvm.org/D156666
Fixed the type modifier (L->W), removed redundant feature-checking code
since the feature has already been checked in `EmitBuiltinExpr`, and
cleaned up unused diagnostic information.
Reviewed By: SixWeining
Differential Revision: https://reviews.llvm.org/D156866
Since we no longer support typed LLVM IR pointer types, the code can
be simplified, for example by using PointerType::get directly instead
of Type::getInt8PtrTy, Type::getInt32PtrTy, etc.
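A sketch of the simplification:

// With opaque pointers, all typed helpers collapse to an address-space query.
llvm::PointerType *PtrTy = llvm::PointerType::get(Ctx, /*AddressSpace=*/0);
// ...replacing Type::getInt8PtrTy(Ctx), Type::getInt32PtrTy(Ctx), and friends.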
Differential Revision: https://reviews.llvm.org/D156733
`alloca` instructions always return pointers to the `alloca` address space. This composes poorly with most HLLs, which are address-space agnostic and thus have all pointers point to the generic/default address space. Static `alloca`s were already handled on the AST level; however, dynamic `alloca`s were not, which would lead to subtly incorrect IR. This patch addresses that by inserting an address space cast iff the `alloca` address space is different from the default / expected one.
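A sketch of the inserted cast (variable names are hypothetical):

// If the alloca address space differs from the expected/default one, cast
// the dynamic alloca's result before handing it back to the AST level.
llvm::Value *V = Builder.CreateAlloca(Ty, Size, "dyn.alloca");
if (AllocaAS != DefaultAS)
  V = Builder.CreateAddrSpaceCast(V, Builder.getPtrTy(DefaultAS));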
Reviewed By: rjmccall, arsenm
Differential Revision: https://reviews.llvm.org/D156539
Add codegen for llvm bitreverse elementwise builtin
The bitreverse elementwise builtin is necessary for HLSL codegen.
Tests were added to make sure that the expected errors are encountered when these functions are given inputs of incompatible types or too many inputs.
The new builtin is restricted to integer types only.
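Usage sketch (the vector typedef and function are illustrative):

typedef unsigned v4u __attribute__((ext_vector_type(4)));
// Reverses the bits of each lane; plain integer scalars are accepted too.
v4u reverse_lanes(v4u x) { return __builtin_elementwise_bitreverse(x); }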
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D156357
Add codegen for llvm pow elementwise builtin
The pow elementwise builtin is necessary for HLSL codegen.
Tests were added to make sure that the expected errors are encountered when these functions are given inputs of incompatible types or too many inputs.
The new builtin is restricted to floating point types only.
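Usage sketch (the vector typedef and function are illustrative):

typedef float v4f __attribute__((ext_vector_type(4)));
// Computes base**exponent lane-wise; scalar floating types are accepted too.
v4f pow_lanes(v4f base, v4f exponent) {
  return __builtin_elementwise_pow(base, exponent);
}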
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D153310
The specification for LDR/STR says that:
The ZA array vector is selected by the sum of the vector select register
and immediate offset, modulo the number of bytes in a Streaming SVE
vector. [..] This instruction does not require the PE to be in Streaming
SVE mode
When the instruction is used outside of streaming mode, 'vscale' will result
in the wrong value being used for the offset, because LLVM's code generator
will emit the non-streaming 'RDVL/ADDVL' instructions instead of the
'RDSVL/ADDSVL' instructions that are used to get the streaming SVE vector
length.
Reviewed By: bryanpkc
Differential Revision: https://reviews.llvm.org/D156121
This patch adds support for the following SME ACLE intrinsics (as defined
in https://arm-software.github.io/acle/main/acle.html):
- svaddha_za32[_u32]_m // also for s32
- svaddva_za32[_u32]_m // also for s32
- svaddha_za64[_u64]_m // also for s64
- svaddva_za64[_u64]_m // also for s64
The _za64 versions are available only when the sme-i16i64 feature is enabled.
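A hedged usage sketch; the header name and the streaming/ZA attribute
spellings follow the current ACLE and may differ from what this patch
required at the time:

#include <arm_sme.h>
// Accumulate zn horizontally into the rows of 32-bit ZA tile 0.
void acc(svbool_t pn, svbool_t pm, svuint32_t zn)
    __arm_streaming __arm_inout("za") {
  svaddha_za32_u32_m(0, pn, pm, zn);
}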
Co-authored-by: Sagar Kulkarni <sagar.kulkarni1@huawei.com>
Reviewed By: sdesmalen
Differential Revision: https://reviews.llvm.org/D134680
This patch adds support for the following SME ACLE intrinsics (as defined
in https://arm-software.github.io/acle/main/acle.html):
- svread_hor_za8[_s8]_m // also for u8
- svread_hor_za16[_s16]_m // also for u16, f16, bf16
- svread_hor_za32[_s32]_m // also for u32, f32
- svread_hor_za64[_s64]_m // also for u64, f64
- svread_hor_za128[_s8]_m // also for s16, s32, s64, u8, u16, u32, u64, bf16, f16, f32, f64
- svread_ver_za8[_s8]_m // also for u8
- svread_ver_za16[_s16]_m // also for u16, f16, bf16
- svread_ver_za32[_s32]_m // also for u32, f32
- svread_ver_za64[_s64]_m // also for u64, f64
- svread_ver_za128[_s8]_m // also for s16, s32, s64, u8, u16, u32, u64, bf16, f16, f32, f64
- svwrite_hor_za8[_s8]_m // also for u8
- svwrite_hor_za16[_s16]_m // also for u16, f16, bf16
- svwrite_hor_za32[_s32]_m // also for u32, f32
- svwrite_hor_za64[_s64]_m // also for u64, f64
- svwrite_hor_za128[_s8]_m // also for s16, s32, s64, u8, u16, u32, u64, bf16, f16, f32, f64
- svwrite_ver_za8[_s8]_m // also for u8
- svwrite_ver_za16[_s16]_m // also for u16, f16, bf16
- svwrite_ver_za32[_s32]_m // also for u32, f32
- svwrite_ver_za64[_s64]_m // also for u64, f64
- svwrite_ver_za128[_s8]_m // also for s16, s32, s64, u8, u16, u32, u64, bf16, f16, f32, f64
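A hedged usage sketch along the same lines (spellings per the current ACLE):

#include <arm_sme.h>
// Read one horizontal slice of 32-bit ZA tile 0 into a vector; inactive
// lanes take their value from 'inactive'.
svint32_t read_row(svint32_t inactive, svbool_t pg, uint32_t slice)
    __arm_streaming __arm_in("za") {
  return svread_hor_za32_s32_m(inactive, pg, 0, slice);
}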
Co-authored-by: Sagar Kulkarni <sagar.kulkarni1@huawei.com>
Reviewed By: sdesmalen, kmclaughlin
Differential Revision: https://reviews.llvm.org/D128648
Previously we returned i32 on RV32 and i64 on RV64. The instructions
only consume 32 bits and only produce 32 bits. For RV64, the result
is sign-extended to 64 bits, as with *W instructions.
This patch removes this detail from the interface to improve
portability and consistency. This matches the proposal for scalar
intrinsics here https://github.com/riscv-non-isa/riscv-c-api-doc/pull/44
I've included IR autoupgrade support as well.
I'll be doing this for other builtins/intrinsics that currently use
'long' in other patches.
Reviewed By: VincentWu
Differential Revision: https://reviews.llvm.org/D154647
This removes another use of 'long' to mean xlen from builtins.
I've also converted the types to unsigned as proposed in D154616.
clmul_32 is available on RV64, as its emulation is clmul+sext.w.
clmulh_32 and clmulr_32 are not available on RV64, as their emulation
is currently 6 instructions in the worst case.
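Usage sketch (the __builtin_riscv_ prefix is assumed; requires Zbc or Zbkc):

#include <stdint.h>
// Low half of the 32-bit carry-less product; legal on RV32 and, via the
// clmul+sext.w emulation, on RV64 as well.
uint32_t clmul32(uint32_t a, uint32_t b) {
  return __builtin_riscv_clmul_32(a, b);
}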
OpenCL and HIP have -cl-fp32-correctly-rounded-divide-sqrt and
-fno-hip-correctly-rounded-divide-sqrt. The corresponding fpmath metadata
was only set on fdiv, and not on sqrt. The backend currently underutilizes
its sqrt lowering options; the responsibility is split between the libraries
and the backend, and this metadata is needed.
CUDA/NVCC has -prec-div and -prec-sqrt, but clang doesn't appear to be
aiming for compatibility with those. It is unclear whether OpenMP has a
similar control.
D152023 made ubsan consider __builtin_clz of 0 undefined regardless of
the target. This ensures portability and matches gcc.
This causes the ACLE intrinsics to also be considered undefined for 0,
since they used the generic builtins as their implementation.
This patch adds ARM-specific builtins that ubsan doesn't know about, making
the behavior defined for 0. Alternatively, I could have added a zero check
to the intrinsics, but the dedicated builtins give better -O0 codegen.
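For example, via arm_acle.h the zero case is now well defined:

#include <arm_acle.h>
#include <stdint.h>
// __clz now lowers through an ARM-specific builtin with defined behavior
// at 0 (the CLZ instruction returns the operand width), so ubsan no
// longer flags it.
uint32_t leading_zeros(uint32_t x) { return __clz(x); }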
Fixes #63113.
Reviewed By: tmatheson
Differential Revision: https://reviews.llvm.org/D154915
Builtin floating-point number classification functions:
- __builtin_isnan,
- __builtin_isinf,
- __builtin_finite, and
- __builtin_isnormal
are now implemented using `llvm.is.fpclass`.
This change makes the target callback `TargetCodeGenInfo::testFPKind`
unneeded. It is preserved in this change and should be removed later.
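For example (the mask comment is illustrative):

// __builtin_isnan(x) now lowers to a single llvm.is.fpclass call with the
// NaN mask (signaling | quiet = 0x3) rather than a target-specific sequence.
bool is_nan(double x) { return __builtin_isnan(x); }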
Differential Revision: https://reviews.llvm.org/D112932
This patch renames the `OpenMPIRBuilderConfig` flags to reduce confusion over
their meaning. `IsTargetCodegen` becomes `IsGPU`, whereas `IsEmbedded` becomes
`IsTargetDevice`. The `-fopenmp-is-device` compiler option is also renamed to
`-fopenmp-is-target-device` and the `omp.is_device` MLIR attribute is renamed
to `omp.is_target_device`. Getters and setters of all these renamed properties
are also updated accordingly. Many unit tests have been updated to use the new
names, but an alias for the `-fopenmp-is-device` option is created so that
external programs do not stop working after the name change.
`IsGPU` is set when the target triple is AMDGCN or NVIDIA PTX, and it is only
valid if `IsTargetDevice` is specified as well. `IsTargetDevice` is set by the
`-fopenmp-is-target-device` compiler frontend option, which is only added to
the OpenMP device invocation for offloading-enabled programs.
Differential Revision: https://reviews.llvm.org/D154591
This adds new intrinsics to support the LDAP1 and STL1 Advanced SIMD
(Neon) instructions introduced as part of FEAT_LRCPC3.
The new intrinsics `vldap1(q)_lane`/`vstl1(q)_lane` generate IR code
similar to the existing `vld1(q)_lane`/`vst1(q)_lane` ones, but capture
the difference in the atomic release/acquire memory model.
The LLVM code generation changes to ensure that this instruction pair
is lowered to the correct LDAP1/STL1 instructions will be covered in a
separate commit.
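A hedged usage sketch of the new intrinsics (signatures per the ACLE
proposal; requires FEAT_LRCPC3):

#include <arm_neon.h>
// Load lane 1 with acquire semantics, then store it back with release
// semantics.
uint64x2_t roundtrip(uint64_t *p, uint64x2_t v) {
  v = vldap1q_lane_u64(p, v, 1);
  vstl1q_lane_u64(p, v, 1);
  return v;
}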
Based on a patch by Sam Elliott.
Reviewed By: tmatheson
Differential Revision: https://reviews.llvm.org/D153128
This is the way most targets do it for a simple mapping.
We can't do this for all builtins due to type overloading of the IR intrinsics.
Reviewed By: asb
Differential Revision: https://reviews.llvm.org/D154567
These builtins were recently changed to return 'int' like the
similar __builtin_clz/__builtin_ctz builtins, but the IR generation
was not updated to use a truncate.
`CGBuilderTy::CreateElementBitCast()` no longer does what its name suggests.
Remove remaining in-tree uses by one of the following methods.
* drop the call entirely
* fold it to an `Address` construction
* replace it with `Address::withElementType()`
This is an NFC cleanup effort.
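The typical mechanical replacement looks like:

// Before: Address Tmp = Builder.CreateElementBitCast(Addr, NewTy);
// After: the same Address, with only the element type swapped.
Address Tmp = Addr.withElementType(NewTy);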
Reviewed By: barannikov88, nikic, jrtc27
Differential Revision: https://reviews.llvm.org/D154285
This caused asserts in some Android and Windows builds:
SelectionDAGNodes.h:1138: llvm::SDValue::SDValue(SDNode *, unsigned int):
Assertion `(!Node || !ResNo || ResNo < Node->getNumValues()) && "Invalid result number for the given node!"' failed.
See comment on 85bdea023f
Also revert "HIP: Use frexp builtins in math headers"
which seems to depend on this change.
This reverts commit 85bdea023f5116f789095b606554739403042a21.
This reverts commit bf8e92c0e792cbe3c9cc50607a1e33c6912ffd0e.
These are basically the same thing and only differ for strictfp,
so add both for future-proofing. Note that all the elementwise functions are
currently broken for strictfp and use non-constrained ops. Add a test
that demonstrates this but does not attempt to fix it.
A new builtin function __builtin_isfpclass is added. It is called as:
__builtin_isfpclass(<floating point value>, <test>)
and returns an integer value, which is non-zero if the floating-point
argument falls into one of the classes specified by the second argument,
and zero otherwise. The set of classes is an integer value in which each
data class is represented by a bit. There are ten data classes, as
defined by the IEEE-754 standard; they are represented by the following bits:
0x0001 (__FPCLASS_SNAN) - Signaling NaN
0x0002 (__FPCLASS_QNAN) - Quiet NaN
0x0004 (__FPCLASS_NEGINF) - Negative infinity
0x0008 (__FPCLASS_NEGNORMAL) - Negative normal
0x0010 (__FPCLASS_NEGSUBNORMAL) - Negative subnormal
0x0020 (__FPCLASS_NEGZERO) - Negative zero
0x0040 (__FPCLASS_POSZERO) - Positive zero
0x0080 (__FPCLASS_POSSUBNORMAL) - Positive subnormal
0x0100 (__FPCLASS_POSNORMAL) - Positive normal
0x0200 (__FPCLASS_POSINF) - Positive infinity
They have corresponding builtin macros to facilitate using the builtin
function:
if (__builtin_isfpclass(x, __FPCLASS_NEGZERO | __FPCLASS_POSZERO)) {
// x is any zero.
}
The data class encoding is identical to that used by the llvm.is.fpclass
intrinsic.
Differential Revision: https://reviews.llvm.org/D152351
* Add `Address::withElementType()` as a replacement for
`CGBuilderTy::CreateElementBitCast`.
* Partial progress towards replacing `CreateElementBitCast`, as it no
longer does what its name suggests. Either replace its uses with
`Address::withElementType()`, or remove them if no longer needed.
* Remove unused parameter 'Name' of `CreateElementBitCast`
Reviewed By: barannikov88, nikic
Differential Revision: https://reviews.llvm.org/D153196
This makes the scope and ordering arguments actually do something.
Also add some new OpenCL tests since the existing HIP tests didn't
cover address spaces.
Partial progress towards replacing in-tree uses of `Type::getPointerTo()`.
This needs to be done before deprecating the API.
Reviewed By: nikic, barannikov88
Differential Revision: https://reviews.llvm.org/D152321
Provide direct access to v_exp_f32 and v_exp_f16, so we can start
correctly lowering the generic exp intrinsics.
Unfortunately, we have to break from the usual naming convention of
matching the instruction name and stripping the v_ prefix: exp is
already taken by the export intrinsic. On the clang builtin side, we
have a choice of maintaining the convention to the instruction name,
or following the intrinsic name.
This will map directly to the hardware instruction, which does not
handle denormals for f32. This will allow moving the generic intrinsic
to be lowered correctly. It also handles selecting the f16 version, but
there's no reason to use it over the generic intrinsic.
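Usage sketch (the builtin spelling is assumed from the naming discussion
above):

// Raw v_exp_f32: base-2 exponential, no f32 denormal handling.
float fast_exp2(float x) { return __builtin_amdgcn_exp2f(x); }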
This commit implements support for WebAssembly table types and the
respective builtins. Tables are WebAssembly objects that store
reference types. They carry a large number of semantic restrictions,
including, but not limited to: they may only be declared at the
top level as static arrays of zero length; they cannot be arguments
or results of functions; they cannot be stored to memory; etc.
This commit introduces the __attribute__((wasm_table)) attribute to
attach to arrays of WebAssembly reference types, and the following
builtins to manage tables:
* ref __builtin_wasm_table_get(table, idx)
* void __builtin_wasm_table_set(table, idx, ref)
* uint __builtin_wasm_table_size(table)
* uint __builtin_wasm_table_grow(table, ref, uint)
* void __builtin_wasm_table_fill(table, idx, ref, uint)
* void __builtin_wasm_table_copy(table, table, uint, uint, uint)
This commit also enables the reference-types feature at bleeding-edge.
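A hedged usage sketch (attribute and builtin spellings as described above):

// A zero-length static array of externref declared as a WebAssembly table.
static __externref_t table[0] __attribute__((wasm_table));
__externref_t get(int idx) { return __builtin_wasm_table_get(table, idx); }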
This is joint work with Alex Bradbury (@asb).
Reviewed By: aaron.ballman
Differential Revision: https://reviews.llvm.org/D139010