Support the following BCD format conversion builtins for PowerPC.
- `__builtin_bcdcopysign` – Conversion that returns the decimal value of
the first parameter combined with the sign code of the second parameter.
- `__builtin_bcdsetsign` – Conversion that sets the sign code of the
input parameter in packed decimal format.
> Note: This built-in function is valid only when all of the following
> conditions are met:
> - `-qarch` is set to utilize POWER9 technology.
> - The `bcd.h` file is included.
## Prototypes
```c
vector unsigned char __builtin_bcdcopysign(vector unsigned char, vector unsigned char);
vector unsigned char __builtin_bcdsetsign(vector unsigned char, unsigned char);
```
## Usage Details
`__builtin_bcdsetsign`: Returns the packed decimal value of the first
parameter combined with the sign code.
The sign code is set according to the following rules:
- If the packed decimal value of the first parameter is positive, the
following rules apply:
- If the second parameter is 0, the sign code is set to 0xC.
- If the second parameter is 1, the sign code is set to 0xF.
- If the packed decimal value of the first parameter is negative, the
sign code is set to 0xD.
> Notes:
> The second parameter can only be 0 or 1.
> You can determine whether a packed decimal value is positive or
negative as follows:
> - Packed decimal values with sign codes **0xA, 0xC, 0xE, or 0xF** are
interpreted as positive.
> - Packed decimal values with sign codes **0xB or 0xD** are interpreted
as negative.
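A minimal usage sketch (not part of the patch), assuming a POWER9-or-later target compiled with `-maltivec` and vectors that already hold valid packed decimal data:
```c
#include <altivec.h>

// Digits come from `value`, the sign code comes from `sign_source`.
vector unsigned char copy_sign(vector unsigned char value,
                               vector unsigned char sign_source) {
  return __builtin_bcdcopysign(value, sign_source);
}

// Positive inputs get sign code 0xF (second argument is 1);
// negative inputs always get 0xD.
vector unsigned char set_preferred_sign(vector unsigned char value) {
  return __builtin_bcdsetsign(value, 1);
}
```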
---------
Co-authored-by: Aditi-Medhane <aditi.medhane@ibm.com>
This option is confusingly named. What it actually controls is whether,
under the default of `-ffloat16-excess-precision=standard`, it is
beneficial for performance to perform calculations on float (without
intermediate rounding) or not. For `-ffloat16-excess-precision=none` the
LLVM `half` type will always be used, and all backends are expected to
legalize it correctly.
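A small illustration of the user-visible difference (not from the patch; the function and its name are made up for this sketch):
```c
// Under the default -ffloat16-excess-precision=standard, the product and sum
// may be evaluated in float and rounded to _Float16 once, at the return.
// With -ffloat16-excess-precision=none, every operation is performed and
// rounded in the LLVM `half` type.
_Float16 mul_add(_Float16 a, _Float16 b, _Float16 c) {
  return a * b + c;
}
```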
Currently, `__attribute__((target("lasx")))` does not automatically
enable LSX support, causing Clang to fail with `-mno-lsx`. Since
LASX depends on LSX, enabling LASX should implicitly enable LSX to
avoid the Clang error.
Fixes #149512.
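A hypothetical reproducer (names chosen for illustration, not taken from the issue): compiled with `-mno-lsx`, code like this used to be rejected because `target("lasx")` did not implicitly enable LSX.
```c
// With this change, LASX implies LSX, so the function compiles even when the
// translation unit is built with -mno-lsx.
typedef int v8i32 __attribute__((vector_size(32)));

__attribute__((target("lasx")))
v8i32 add_v8i32(v8i32 a, v8i32 b) {
  return a + b;
}
```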
Depends on #153541
There should not be both `n8` and `n16:8`. This is a single list of
legal integers. Additionally, this should use the standard order in
increasing size `n8:16`.
2f497ec3a0056f15727ee6008211aeb2c4a8f455 updated the backend's rules for
when lock-free atomics are available, but we never made a corresponding
change to the frontend. Fix it to be consistent. This only affects
targets older than v7.
Depending on the particular version of the AArch32 architecture,
load/store exclusive operations might be available for various subsets of
8, 16, 32, and 64-bit quantities. Sema knew nothing about this and was
accepting all four sizes, leading to a compiler crash at isel time if
you used a size not available on the target architecture.
Now the Sema checking stage emits a more sensible diagnostic, pointing
at the location in the code.
In order to allow Sema to query the set of supported sizes, I've moved
the enum of LDREX_x sizes out of its Arm-specific header into
`TargetInfo.h`.
Also, in order to allow the diagnostic to specify the correct list of
supported sizes, I've filled it with `%select{}`. (The alternative was
to make separate error messages for each different list of sizes.)
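For illustration (not from the patch), the kind of code affected:
```c
// 32-bit exclusives are widely available.
unsigned load_exclusive_32(unsigned *p) {
  return __builtin_arm_ldrex(p);
}

// 64-bit exclusives need LDREXD; on targets without it, this call now gets a
// Sema diagnostic listing the supported sizes instead of crashing at isel.
unsigned long long load_exclusive_64(unsigned long long *p) {
  return __builtin_arm_ldrex(p);
}
```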
Adds a builtin that tests whether the runtime type of a function pointer
matches its static type. If this returns false, calling the function
pointer will trap.
Uses `@llvm.wasm.ref.test.func` added in #147486.
Also adds a "gc" wasm feature to gate the use of the ref.test
instruction.
FMV priority is the return value of a polymorphic function. On RISC-V
and X86 targets a 32-bit value is enough. On AArch64 we currently need
64 bits and we will soon exceed that. APInt seems to be a suitable
replacement for uint64_t, presumably with minimal compile time overhead.
It allows bit manipulation, comparison and variable bit width.
The Armv8-M architecture doesn't have the LDREXD and STREXD
instructions, for exclusive load/store of a 64-bit quantity split across
two registers. But the `__ARM_FEATURE_LDREX` macro was set to a value
that claims it does, because the case for Armv8 was missing a check for
M profile.
The Armv7 case got it right, so I've just made the two cases the same.
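A sketch of how user code typically consumes the macro; the bit values come from the ACLE (1 = byte, 2 = halfword, 4 = word, 8 = doubleword), and the macro name in the example guard is made up:
```c
// On Armv8-M the doubleword bit must now be clear, since LDREXD/STREXD do
// not exist there.
#if defined(__ARM_FEATURE_LDREX) && (__ARM_FEATURE_LDREX & 8)
#define HAVE_64BIT_EXCLUSIVES 1
#else
#define HAVE_64BIT_EXCLUSIVES 0
#endif
```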
Mips was the only architecture having PtrDiffType = SignedInt and
IntPtrType = SignedLong.
This fixes a problem on the mipsel-windows-gnu triple, where uintptr_t was
wrongly defined as unsigned long instead of unsigned int, leading to
problems in compiler-rt:
```
compiler-rt/lib/interception/interception_type_test.cpp:24:17: error: static assertion failed due to requirement '__sanitizer::is_same<unsigned long, unsigned int>::value':
   24 | COMPILER_CHECK((__sanitizer::is_same<__sanitizer::uptr, ::uintptr_t>::value));
      |                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compiler-rt/lib/interception/../sanitizer_common/sanitizer_internal_defs.h:369:44: note: expanded from macro 'COMPILER_CHECK'
  369 | #define COMPILER_CHECK(pred) static_assert(pred, "")
      |                                            ^~~~
compiler-rt/lib/interception/interception_type_test.cpp:25:17: error: static assertion failed due to requirement '__sanitizer::is_same<long, int>::value':
   25 | COMPILER_CHECK((__sanitizer::is_same<__sanitizer::sptr, ::intptr_t>::value));
      |                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compiler-rt/lib/interception/../sanitizer_common/sanitizer_internal_defs.h:369:44: note: expanded from macro 'COMPILER_CHECK'
  369 | #define COMPILER_CHECK(pred) static_assert(pred, "")
      |                                            ^~~~
compiler-rt/lib/interception/interception_type_test.cpp:27:17: error: static assertion failed due to requirement '__sanitizer::is_same<long, int>::value':
   27 | COMPILER_CHECK((__sanitizer::is_same<::PTRDIFF_T, ::ptrdiff_t>::value));
      |                ^~~~~~~~~~~~~~~~~~~~~
compiler-rt/lib/interception/../sanitizer_common/sanitizer_internal_defs.h:369:44: note: expanded from macro 'COMPILER_CHECK'
  369 | #define COMPILER_CHECK(pred) static_assert(pred, "")
```
Set MaxAtomicInlineWidth the same way as SPIR-V targets in 3cfd0c0d3697.
This PR fixes a build warning in the scoped atomic built-ins from #146814:
```
warning: large atomic operation may incur significant performance penalty; ; the access size (2 bytes) exceeds the max lock-free size (0 bytes) [-Watomic-alignment]
```
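For context, a sketch of the kind of call the warning fires on, using Clang's `__scoped_atomic_*` builtin family; the exact code and target in #146814 may differ, so treat this as an assumption:
```c
// A 2-byte scoped atomic access; with MaxAtomicInlineWidth set as on the
// SPIR-V targets, it is lock-free and -Watomic-alignment no longer fires.
short fetch_add_device_scope(short *p) {
  return __scoped_atomic_fetch_add(p, 1, __ATOMIC_RELAXED,
                                   __MEMORY_SCOPE_DEVICE);
}
```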
This patch fixes an issue where Microsoft-specific layout attributes,
such as `__declspec(empty_bases)`, were ignored during CUDA/HIP device
compilation on a Windows host. This caused a critical memory layout
mismatch between host and device objects, breaking libraries that rely
on these attributes for ABI compatibility.
The fix introduces a centralized `hasMicrosoftRecordLayout()` check within
the `TargetInfo` class. This check is aware of the auxiliary (host) target
and is set during `TargetInfo::adjust` if the host uses a Microsoft ABI.
The `empty_bases`, `layout_version`, and `msvc::no_unique_address` attributes
now use this centralized flag, ensuring device code respects them and
maintains layout consistency with the host.
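A hypothetical illustration (types invented for this sketch) of the layout the host and device compilations now agree on when the host uses a Microsoft ABI:
```cpp
struct Empty1 {};
struct Empty2 {};

// With empty_bases honoured on both sides, the empty bases take no storage.
// Before the fix, device compilation ignored the attribute and could compute
// a different size for Packed than the host did.
struct __declspec(empty_bases) Packed : Empty1, Empty2 {
  int payload;
};

static_assert(sizeof(Packed) == sizeof(int), "host/device layout mismatch");
```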
Fixes: https://github.com/llvm/llvm-project/issues/146047
https://reviews.llvm.org/D104808 set the alignment of long double to 64
bits. This is also the alignment specified in the LLVM data layout.
However, the alignment of __float128 was left at 128 bits.
I assume that this was just an oversight, rather than an intentional
divergence. The C ABI document currently does not make any statement
about `__float128`:
https://github.com/WebAssembly/tool-conventions/blob/main/BasicCABI.md
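A quick sanity check of the intended result (illustrative only, assuming this change aligns `__float128` with `long double` on wasm):
```c
// Both types should now report 8-byte (64-bit) alignment, matching the
// LLVM data layout.
_Static_assert(_Alignof(long double) == 8, "long double: 64-bit alignment");
_Static_assert(_Alignof(__float128) == 8, "__float128: now matches");
```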
This is similar to -msve-vector-bits, but for streaming mode: it
constrains the legal values of "vscale", allowing optimizations based on
that constraint.
This also fixes conversions between SVE vectors and fixed-width vectors
in streaming functions with -msve-vector-bits and
-msve-streaming-vector-bits.
This rejects any use of arm_sve_vector_bits types in streaming
functions; if it becomes relevant, we could add
arm_sve_streaming_vector_bits types in the future.
This doesn't touch the __ARM_FEATURE_SVE_BITS define.
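A sketch of where the new constraint applies (hypothetical, assuming an SME-enabled compile with `-msve-streaming-vector-bits=512`; the intrinsic is standard ACLE):
```c
#include <arm_sve.h>

// Inside a streaming function, the compiler may now assume the streaming
// vector length (and hence vscale) corresponds to exactly 512 bits.
svfloat32_t scale_streaming(svfloat32_t v, float s) __arm_streaming {
  return svmul_n_f32_x(svptrue_b32(), v, s);
}
```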
When compiling with `-march=armv9-a+nosve` we found that Clang still
defines the `__ARM_FEATURE_SVE2` macro, which is explicitly set in
`setArchFeatures` when compiling for armv9-a.
After some experimenting, I found that the list of features passed
into `AArch64TargetInfo::handleTargetFeatures` has already been expanded:
it accounts for `+no[feature]` and has features such as `armv9-a`
already expanded.
From that I conclude that `setArchFeatures` is no longer required.
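A reproducer sketch of the described behaviour (the guarded block is illustrative):
```c
// With -march=armv9-a+nosve this block was still compiled in, because
// __ARM_FEATURE_SVE2 remained defined even though SVE had been disabled.
#if defined(__ARM_FEATURE_SVE2)
#include <arm_sve.h>
/* SVE2 code paths that cannot work without SVE */
#endif
```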
As the data layout a few lines further up specifies, the int, long and
pointer alignment should be 16 instead of the default of 32.
The long long alignment is also incorrect, but that would require a
change to the data layout as well.
Comparison with GCC, which consistently uses 2 byte alignment:
https://gcc.godbolt.org/z/K3x6a7dEf At least based on some spot checks,
the changes to bit field layout also make us match GCC now.
This was found by https://github.com/llvm/llvm-project/pull/144720.
This patch makes determining the alignment and width of BitInt target-ABI
specific and makes it consistent with the [Procedure Call Standard for
the LoongArch™ Architecture](https://github.com/loongson/la-abi-specs/blob/release/lapcs.adoc)
for the LoongArch target.
There are four possible formats for referring to a register in inline assembly:
1. Numeric name without dollar sign ("f0")
2. Numeric name with dollar sign ("$f0")
3. ABI name without dollar sign ("fa0")
4. ABI name with dollar sign ("$fa0")
LoongArch GCC accepts 1 and 2 for FPRs before r15-8284[1] and all of these
formats after the change, but Clang supports only 2 and 4 for FPRs. The
inconsistency has caused compatibility issues, such as QEMU's case[2].
This patch follows 0bbf3ddf5fea ("[Clang][LoongArch] Add GPR alias
handling without `$` prefix") and accepts FPRs without dollar sign
prefixes as well to keep aligned with GCC, avoiding future compatibility
problems.
[1] https://gcc.gnu.org/cgit/gcc/commit/?id=d0110185eb78f14a8e485f410bee237c9c71548d
[2] https://lore.kernel.org/qemu-devel/20250314033150.53268-3-ziyao@disroot.org/
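An illustrative snippet (not from the patch): all four ways of naming an FPR are now accepted, here shown in a clobber list, matching current GCC.
```c
void clobber_fprs(void) {
  // Numeric and ABI names, with and without the dollar-sign prefix.
  __asm__ volatile("" ::: "$f0", "f1", "$fa2", "fa3");
}
```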
There is a lot of redundant code that needs to be modified when new
Hexagon versions are added. Reduce the amount of this redundancy.
- compute ELF flags and attributes based on version feature names;
- simplify EnableHVX option handling by using arch features instead of
arch version enums;
- simplify completeHVXFeatures() by using features;
- delete several unused or redundant functions and constants:
isCPUValid, getCpu, getHexagonCPUSuffix;
- do not set HexagonArchVersion in initializeSubtargetDependencies; it
is set in ParseSubtargetFeatures.
Signed-off-by: Alexey Karyakin <akaryaki@quicinc.com>
Support the following packed BCD builtins for PowerPC.
```
__builtin_national2packed - Conversion of National format to Packed decimal format.
__builtin_packed2national - Conversion of Packed decimal format to National format.
__builtin_packed2zoned - Conversion of Packed decimal format to Zoned decimal format.
__builtin_zoned2packed - Conversion of Zoned decimal format to Packed decimal format.
```
### Prototypes
```c
vector unsigned char __builtin_national2packed(vector unsigned char a, unsigned char b);
vector unsigned char __builtin_packed2zoned(vector unsigned char, unsigned char);
vector unsigned char __builtin_zoned2packed(vector unsigned char, unsigned char);
vector unsigned char __builtin_packed2national(vector unsigned char);
```
The condition on the 2nd parameter is the same for the three prototypes that
take one (0 or 1 only); `__builtin_packed2national` has no 2nd parameter.
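A hedged usage sketch (assumes `-maltivec` and vectors holding valid data in the respective formats; the meaning of the 0/1 flag is not restated here):
```c
#include <altivec.h>

vector unsigned char zoned_to_packed(vector unsigned char zoned) {
  return __builtin_zoned2packed(zoned, 0);  /* second argument: 0 or 1 only */
}

vector unsigned char packed_to_national(vector unsigned char packed) {
  return __builtin_packed2national(packed);
}
```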
Co-authored-by: himadhith <himadhith.v@ibm.com>
Co-authored-by: Tony Varghese <tonypalampalliyil@gmail.com>
ArrayRef has a constructor that accepts std::nullopt. This
constructor dates back to the days when we still had llvm::Optional.
Since the use of std::nullopt outside the context of std::optional is
something of an abuse and not intuitive to newcomers, I would like to move
away from this constructor and eventually remove it.
This patch takes care of the clang side of the migration.
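A simplified illustration of the migration pattern (the callee name is made up):
```cpp
#include "llvm/ADT/ArrayRef.h"
#include <optional>

void takesValues(llvm::ArrayRef<int> values);

void caller() {
  takesValues(std::nullopt);           // old spelling, relies on the constructor
  takesValues({});                     // preferred replacement
  takesValues(llvm::ArrayRef<int>());  // equivalent explicit form
}
```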
1. The PR introduces a backend target hook to allow front ends to determine
what target features are available in a compilation based on the CPU name.
2. Fix a backend target feature bug where HTM was marked as supported for
Power8/9/10/11; according to the ISA, HTM is only supported on Power8/9.
3. All target features that are hardcoded in PPC.cpp can be retrieved
from the backend target feature. I have double-checked that the
hardcoded logic for inferring target features from the CPU in the
frontend (PPC.cpp) is the same as in PPC.td.
The reland patch addresses the comment at
https://github.com/llvm/llvm-project/pull/137670#discussion_r2143541120
1. Added the missing XMEGA2 definition. The avr64 devices use xmega2, which has SPM(X) defined.
2. The avr16/avr32 devices do have the SPM and SPMX features, but the current xmega3 definition does not.
Xmega3 is also used for the modern attiny series, which does not have SPM(X), so that is correct.
Leave the avr16/avr32 devices unchanged (using xmega3 to stay in sync with the gcc definitions).
Fixes https://github.com/llvm/llvm-project/issues/116116
Add a new `__ARM_FEATURE_CSSC` macro that can be used during the
preprocessing stage.
`__ARM_FEATURE_CSSC` is defined to 1 if there is hardware support for
CSSC.
Implements the ACLE change:
https://github.com/ARM-software/acle/pull/394
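Typical feature-test usage of the new macro (illustrative):
```c
#if defined(__ARM_FEATURE_CSSC) && __ARM_FEATURE_CSSC == 1
/* hardware CSSC instructions are available */
#endif
```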
This reverts commit 9208b343e962b9f1140ee345c0050a3920bdcbf2.
TargetParser shouldn't re-run the PPC subtarget tablegen target, it
should define its own `-gen-ppc-target-def` rule like all the other
targets do in llvm/include/llvm/TargetParser/CMakeLists.txt.
One user reported that there are incorrect CMake dependencies after this
change, so I will roll this back in the meantime.
1. The PR introduces a backend target hook to allow front ends to determine
what target features are available in a compilation based on the CPU name.
2. Fix a backend target feature bug where HTM was marked as supported for
Power8/9/10/11; according to the ISA, HTM is only supported on Power8/9.
3. All target features that are hardcoded in PPC.cpp can be retrieved
from the backend target feature. I have double-checked that the
hardcoded logic for inferring target features from the CPU in the
frontend (PPC.cpp) is the same as in PPC.td.
The previous mapping set the hlsl_groupshared AS to 0, which
translated to either Generic or Function.
Change this to 3, which translates to Workgroup.
Related to #142804
Handling of va_list in the Cygwin environment must match the normal
Windows environment.
The existing test `test/CodeGen/ms_abi.c` seems relevant, but it
contains `__attribute__((sysv_abi))`, which is not supported on Cygwin.
The new test is based on the `__attribute__((ms_abi))` portion of that
test.
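A sketch modelled on the ms_abi portion of that test (simplified, not copied from it):
```c
#include <stdarg.h>

// On Cygwin, va_list handling for ms_abi functions must follow the normal
// Windows lowering.
__attribute__((ms_abi)) int sum_ints(int count, ...) {
  va_list ap;
  int total = 0;
  va_start(ap, count);
  for (int i = 0; i < count; ++i)
    total += va_arg(ap, int);
  va_end(ap);
  return total;
}
```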
---------
Co-authored-by: jeremyd2019 <github@jdrake.com>
The LoongArch psABI recently added `__bf16` type support. Now we can
enable this new type in Clang.
Currently, bf16 operations are automatically supported by promoting to
float. This patch adds bf16 support by ensuring that load extension /
truncate store operations are properly expanded.
This commit also implements support for bf16 truncate/extend on hard FP
targets. The extend operation is implemented by a shift just as in the
standard legalization. This requires custom lowering of the truncate
libcall on hard float ABIs (the normal libcall code path is used on soft
ABIs).
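A minimal sketch of the newly enabled type (assumes a LoongArch target with this change applied; the function is illustrative):
```c
// The multiplication is promoted to float and the result truncated back to
// __bf16, using the legalization described above.
__bf16 scale_bf16(__bf16 x) {
  return x * (__bf16)2.0f;
}
```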
We have multiple different attributes in clang representing device
kernels for specific targets/languages. Refactor them into one attribute
with different spellings to make it more easily scalable for new
languages/targets.
---------
Signed-off-by: Sarnie, Nick <nick.sarnie@intel.com>